id
stringlengths 64
64
| published
stringlengths 19
25
| title
stringlengths 7
262
| description
stringlengths 6
54.4k
| link
stringlengths 31
227
| category
stringclasses 6
values | image
stringlengths 3
247
|
|---|---|---|---|---|---|---|
5b01231738f559ee87d357cc95ff2f0b096dcc8bb8c15f4a3f079eac1d082dab
|
2026-01-15T07:00:10+00:00
|
Complex mesoscale landscapes beneath Antarctica mapped from space
|
Science, Volume 391, Issue 6782, Page 314-319, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.ady2532?af=R
|
Academic Papers
|
svg
|
564f8a12ff3b57c194351f4cb81157af3ed984e86a6d3545866b10aace134490
|
2025-11-27T07:00:00+00:00
|
Characterizing transport in a quantum gas by measuring Drude weights
|
Science, Volume 391, Issue 6782, Page 290-293, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.ads8327?af=R
|
Academic Papers
|
svg
|
16d8fc0fbb73fe292e2ca415f0385c63b57112b77b49f2c60749be1ca5f3c2ba
|
2026-01-15T07:00:10+00:00
|
Transforming mental health research and care through artificial intelligence
|
Science, Volume 391, Issue 6782, Page 249-258, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.adz9193?af=R
|
Academic Papers
|
svg
|
990dde5e11a4da6fbbeacfb5d4b8a8da1ea6c9dd1b63a162e06c6aff2cf72b20
|
2026-01-15T07:00:10+00:00
|
Growing pains
|
Science, Volume 391, Issue 6782, Page 322-322, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aef3527?af=R
|
Academic Papers
|
svg
|
7e115610b05ce854362a3abbc4aa21540a64147d0c9132f0b85f10fa9dbdb436
|
2026-01-15T07:00:10+00:00
|
In Other Journals
|
Science, Volume 391, Issue 6782, Page 260-261, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aef4211?af=R
|
Academic Papers
|
svg
|
ebf2adfee2443c831852a1f882056b4d2b232c9234c5743ed169b9611a9dc59c
|
2026-01-15T07:00:10+00:00
|
Blood vessels under pressure
|
Science, Volume 391, Issue 6782, Page 237-238, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aed9277?af=R
|
Academic Papers
|
svg
|
b390f8eaa667bdab15583d1e6f7378be56499d4d6fdf727683a4bd7b9619c6ba
|
2026-01-15T07:00:10+00:00
|
A new cell type drove human brain complexity
|
Science, Volume 391, Issue 6782, Page 240-240, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aee0974?af=R
|
Academic Papers
|
svg
|
24dcdadee86ff5ac8a51590a638c338af1f76abbb63b3cbe80694a72897e2887
|
2026-01-15T07:00:10+00:00
|
Robust perovskite nanocrystal emitters
|
Science, Volume 391, Issue 6782, Page 238-239, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aee0989?af=R
|
Academic Papers
|
svg
|
feaaf8291404cc6426d42719cc1f40cf4b91e7f4c259f245a018b309de3449d2
|
2026-01-15T07:00:10+00:00
|
Not a big baby
|
Science, Volume 391, Issue 6782, Page 234-235, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aed8356?af=R
|
Academic Papers
|
svg
|
bee94ddd6e1e6c2201d995249e73b322c17d7b1c5c4808414e5d6f0e3ab2be07
|
2026-01-15T07:00:10+00:00
|
Uncovering Antarctica’s ice-draped landscape
|
Science, Volume 391, Issue 6782, Page 235-236, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aee4245?af=R
|
Academic Papers
|
svg
|
7c887dcf92a3ff431316767fdf377780a3d2355464a9c1f3e6fbe4ccc3934117
|
2026-01-15T07:00:10+00:00
|
Canada’s dismantled safeguards threaten salmon
|
Science, Volume 391, Issue 6782, Page 247-248, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aee3537?af=R
|
Academic Papers
|
svg
|
68f48b4d3a6fce26ac99e248408e98a7dd1d01d21a87310325e3cdd575a7f5d0
|
2026-01-15T07:00:10+00:00
|
Climate-change extremes threaten Iraq
|
Science, Volume 391, Issue 6782, Page 248-248, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aee9226?af=R
|
Academic Papers
|
svg
|
b869a6b272458637c91707154df07f871c116ab9404458969abe4ef31a7fb055
|
2026-01-15T07:00:10+00:00
|
Misusing research to trap songbirds in Spain
|
Science, Volume 391, Issue 6782, Page 247-247, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aee3825?af=R
|
Academic Papers
|
svg
|
db805d41d960d17a2dc8d2a918368a5af6c4a7b386da5cba6fa5ac0970e1029a
|
2026-01-15T07:00:10+00:00
|
A difficult rebirth
|
Science, Volume 391, Issue 6782, Page 228-232, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aef4208?af=R
|
Academic Papers
|
svg
|
5aa7f61083509f0f0f4be870b3f338fec3bf54d0c1e1d91e5238a8484fa94611
|
2026-01-15T07:00:10+00:00
|
Scientists reject call to retest childhood vaccines
|
Science, Volume 391, Issue 6782, Page 220-221, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aef4614?af=R
|
Academic Papers
|
svg
|
c40b5216d1b65d49ed61c253a80ae38b97012693ed533bebb5354b730f0dfe21
|
2026-01-15T07:00:10+00:00
|
Cellular ‘vaults’ deployed to spy on gene activity
|
Science, Volume 391, Issue 6782, Page 222-223, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aef4615?af=R
|
Academic Papers
|
svg
|
b8f36efe7135930bbdb9c82d5d275776388f6240f03a18103342693fc0731e7c
|
2026-01-15T07:00:10+00:00
|
Low doses of insecticide speed fish aging and death
|
Science, Volume 391, Issue 6782, Page 224-225, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aef4616?af=R
|
Academic Papers
|
svg
|
2cd4fc1eb4db2d294f47162dacdba74d572e378211803983b062dee1e93d5c26
|
2026-01-15T07:00:10+00:00
|
Arctic’s ‘last ice area’ is on thin ice
|
Science, Volume 391, Issue 6782, Page 225-226, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aef4617?af=R
|
Academic Papers
|
svg
|
c88c9e7c470aac7df0b7965949583a91a6c833fa1535ee9088ca5b3e6b149a3f
|
2026-01-15T07:00:10+00:00
|
Ex–Google CEO funds private space telescope bigger than Hubble
|
Science, Volume 391, Issue 6782, Page 226-227, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aef4618?af=R
|
Academic Papers
|
svg
|
72a8c465445ce8a4e90325fd7ce506ebca4d17e6ed712a61e0a13588a65733ff
|
2026-01-15T08:00:00+00:00
|
The mirage of AI deregulation
|
Science, Volume 391, Issue 6782, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aee4900?af=R
|
Academic Papers
|
svg
|
ec44f9126ee51029f9bb713e9e7f21308284c2de2a9765335f55da3c3cdd40f2
|
2026-01-15T07:00:10+00:00
|
The High Seas Treaty, at last
|
Science, Volume 391, Issue 6782, Page 219-219, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aef3177?af=R
|
Academic Papers
|
svg
|
b6aadab79ec75444773c7cbdbf3017f96d84e34302c9dd0aa9da46cfd4f4131d
|
2026-01-15T07:00:10+00:00
|
A theory of change approach to enhance the post-2030 sustainable development agenda
|
Science, Volume 391, Issue 6782, Page 241-244, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.adz5704?af=R
|
Academic Papers
|
svg
|
77354b6255170146328d8805cbf55bdc46c0110a0998578fd0e53c6a46d14815
|
2026-01-15T07:00:10+00:00
|
In Science Journals
|
Science, Volume 391, Issue 6782, Page 259-261, January 2026.
|
https://www.science.org/doi/abs/10.1126/science.aef4210?af=R
|
Academic Papers
|
svg
|
0db0d92a323ccbe0a8d56505fba4a34fc8c44426b393b1dd4b4b84cc16c4f938
|
2026-01-16T00:00:00-05:00
|
Social Determinants of Health Prediction for ICD-9 Code with Reasoning Models
|
arXiv:2601.09709v1 Announce Type: new Abstract: Social Determinants of Health correlate with patient outcomes but are rarely captured in structured data. Recent attention has been given to automatically extracting these markers from clinical text to supplement diagnostic systems with knowledge of patients' social circumstances. Large language models demonstrate strong performance in identifying Social Determinants of Health labels from sentences. However, prediction in large admissions or longitudinal notes is challenging given long distance dependencies. In this paper, we explore hospital admission multi-label Social Determinants of Health ICD-9 code classification on the MIMIC-III dataset using reasoning models and traditional large language models. We exploit existing ICD-9 codes for prediction on admissions, which achieved an 89% F1. Our contributions include our findings, missing SDoH codes in 139 admissions, and code to reproduce the results.
|
https://arxiv.org/abs/2601.09709
|
Academic Papers
|
svg
|
f7b907bef486c4120c8184508862d93d724f46026f9a48bb0ce09f6ed5c84795
|
2026-01-16T00:00:00-05:00
|
Segmenta\c{c}\~ao Comportamental, Do Not Track e o desenvolvimento jur\'idico europeu e holand\^es
|
arXiv:2601.09711v1 Announce Type: new Abstract: This paper discusses legal developments in Europe and the Netherlands. Recent decisions show that European data protection law, or privacy law, applies to behavioral targeting in most cases. Dutch law explicitly presumes that data protection law applies to behavioral targeting. This means that companies have to comply with data protection law's fair information principles. For example, companies must refrain from secret or excessive data collection. Perhaps the principles could provide inspiration for future W3C projects. Could technology design foster fair information processing?
|
https://arxiv.org/abs/2601.09711
|
Academic Papers
|
svg
|
68cfc4b17418b91d8aba66f7ba9dcf498b0bbf604b9d1589897da598ae74f400
|
2026-01-16T00:00:00-05:00
|
Behavioral Targeting, a European Legal Perspective
|
arXiv:2601.09712v1 Announce Type: new Abstract: Behavioral targeting, or online profiling, is a hotly debated topic. Much of the collection of personal information on the Internet is related to behavioral targeting, although research suggests that most people don't want to receive behaviorally targeted advertising. The World Wide Web Consortium is discussing a Do Not Track standard, and regulators worldwide are struggling to come up with answers. This article discusses European law and recent policy developments on behavioral targeting.
|
https://arxiv.org/abs/2601.09712
|
Academic Papers
|
svg
|
162b606f228bf44728259aae4c43e12995e4e621504474c5a33a5940433ad2f6
|
2026-01-16T00:00:00-05:00
|
LLM-Driven Preference Data Synthesis for Proactive Prediction of the Next User Utterance in Human-Machine Dialogue
|
arXiv:2601.09713v1 Announce Type: new Abstract: Proactively predicting a users next utterance in human-machine dialogue can streamline interaction and improve user experience. Existing commercial API-based solutions are subject to privacy concerns while deploying general-purpose LLMs locally remains computationally expensive. As such, training a compact, task-specific LLM provides a practical alternative. Although user simulator methods can predict a user's next utterance, they mainly imitate their speaking style rather than advancing the dialogue. Preference data synthesis has been investigated to generate data for proactive next utterance prediction and help align LLMs with user preferences. Yet existing methods lack the ability to explicitly model the intent reasoning that leads to the user's next utterance and to define and synthesize preference and non-preference reasoning processes for predicting the user's next utterance.To address these challenges, we propose ProUtt, an LLM-driven preference data synthesis method for proactive next utterance prediction. ProUtt converts dialogue history into an intent tree and explicitly models intent reasoning trajectories by predicting the next plausible path from both exploitation and exploration perspectives. It then constructs preference and non-preference reasoning processes by perturbing or revising intent tree paths at different future turns. Extensive evaluations using LLM-as-a-judge and human judgments demonstrate that ProUtt consistently outperforms existing data synthesis methods, user simulators, and commercial LLM APIs across four benchmark datasets. We release both the code and the synthesized datasets to facilitate future research.
|
https://arxiv.org/abs/2601.09713
|
Academic Papers
|
svg
|
95cef6abd43dc0988c6f16848bb2ab8e2c170bd3a586b7d41fa3d52b1e015258
|
2026-01-16T00:00:00-05:00
|
Evaluating Novelty in AI-Generated Research Plans Using Multi-Workflow LLM Pipelines
|
arXiv:2601.09714v1 Announce Type: new Abstract: The integration of Large Language Models (LLMs) into the scientific ecosystem raises fundamental questions about the creativity and originality of AI-generated research. Recent work has identified ``smart plagiarism'' as a concern in single-step prompting approaches, where models reproduce existing ideas with terminological shifts. This paper investigates whether agentic workflows -- multi-step systems employing iterative reasoning, evolutionary search, and recursive decomposition -- can generate more novel and feasible research plans. We benchmark five reasoning architectures: Reflection-based iterative refinement, Sakana AI v2 evolutionary algorithms, Google Co-Scientist multi-agent framework, GPT Deep Research (GPT-5.1) recursive decomposition, and Gemini~3 Pro multimodal long-context pipeline. Using evaluations from thirty proposals each on novelty, feasibility, and impact, we find that decomposition-based and long-context workflows achieve mean novelty of 4.17/5, while reflection-based approaches score significantly lower (2.33/5). Results reveal varied performance across research domains, with high-performing workflows maintaining feasibility without sacrificing creativity. These findings support the view that carefully designed multi-stage agentic workflows can advance AI-assisted research ideation.
|
https://arxiv.org/abs/2601.09714
|
Academic Papers
|
svg
|
487233275c74f8ab4ee6a1c7629d82c9cfd1055a5e4d815993c7a267afb3416e
|
2026-01-16T00:00:00-05:00
|
Introducing Axlerod: An LLM-based Chatbot for Assisting Independent Insurance Agents
|
arXiv:2601.09715v1 Announce Type: new Abstract: The insurance industry is undergoing a paradigm shift through the adoption of artificial intelligence (AI) technologies, particularly in the realm of intelligent conversational agents. Chatbots have evolved into sophisticated AI-driven systems capable of automating complex workflows, including policy recommendation and claims triage, while simultaneously enabling dynamic, context-aware user engagement. This paper presents the design, implementation, and empirical evaluation of Axlerod, an AI-powered conversational interface designed to improve the operational efficiency of independent insurance agents. Leveraging natural language processing (NLP), retrieval-augmented generation (RAG), and domain-specific knowledge integration, Axlerod demonstrates robust capabilities in parsing user intent, accessing structured policy databases, and delivering real-time, contextually relevant responses. Experimental results underscore Axlerod's effectiveness, achieving an overall accuracy of 93.18% in policy retrieval tasks while reducing the average search time by 2.42 seconds. This work contributes to the growing body of research on enterprise-grade AI applications in insurtech, with a particular focus on agent-assistive rather than consumer-facing architectures.
|
https://arxiv.org/abs/2601.09715
|
Academic Papers
|
svg
|
2514ff80acdd1e7bdcd0ac2c0d38be39c752c5ea7369619fedb7c4f351478ede
|
2026-01-16T00:00:00-05:00
|
Opportunities and Challenges of Natural Language Processing for Low-Resource Senegalese Languages in Social Science Research
|
arXiv:2601.09716v1 Announce Type: new Abstract: Natural Language Processing (NLP) is rapidly transforming research methodologies across disciplines, yet African languages remain largely underrepresented in this technological shift. This paper provides the first comprehensive overview of NLP progress and challenges for the six national languages officially recognized by the Senegalese Constitution: Wolof, Pulaar, Sereer, Joola, Mandingue, and Soninke. We synthesize linguistic, sociotechnical, and infrastructural factors that shape their digital readiness and identify gaps in data, tools, and benchmarks. Building on existing initiatives and research works, we analyze ongoing efforts in text normalization, machine translation, and speech processing. We also provide a centralized GitHub repository that compiles publicly accessible resources for a range of NLP tasks across these languages, designed to facilitate collaboration and reproducibility. A special focus is devoted to the application of NLP to the social sciences, where multilingual transcription, translation, and retrieval pipelines can significantly enhance the efficiency and inclusiveness of field research. The paper concludes by outlining a roadmap toward sustainable, community-centered NLP ecosystems for Senegalese languages, emphasizing ethical data governance, open resources, and interdisciplinary collaboration.
|
https://arxiv.org/abs/2601.09716
|
Academic Papers
|
svg
|
46101a05ba9a2af19f4decd52a7c68cc6a0470766aa1e60fcb5002ecd4b09e4d
|
2026-01-16T00:00:00-05:00
|
SALP-CG: Standard-Aligned LLM Pipeline for Classifying and Grading Large Volumes of Online Conversational Health Data
|
arXiv:2601.09717v1 Announce Type: new Abstract: Online medical consultations generate large volumes of conversational health data that often embed protected health information, requiring robust methods to classify data categories and assign risk levels in line with policies and practice. However, existing approaches lack unified standards and reliable automated methods to fulfill sensitivity classification for such conversational health data. This study presents a large language model-based extraction pipeline, SALP-CG, for classifying and grading privacy risks in online conversational health data. We concluded health-data classification and grading rules in accordance with GB/T 39725-2020. Combining few-shot guidance, JSON Schema constrained decoding, and deterministic high-risk rules, the backend-agnostic extraction pipeline achieves strong category compliance and reliable sensitivity across diverse LLMs. On the MedDialog-CN benchmark, models yields robust entity counts, high schema compliance, and accurate sensitivity grading, while the strongest model attains micro-F1=0.900 for maximum-level prediction. The category landscape stratified by sensitivity shows that Level 2-3 items dominate, enabling re-identification when combined; Level 4-5 items are less frequent but carry outsize harm. SALP-CG reliably helps classify categories and grading sensitivity in online conversational health data across LLMs, offering a practical method for health data governance. Code is available at https://github.com/dommii1218/SALP-CG.
|
https://arxiv.org/abs/2601.09717
|
Academic Papers
|
svg
|
9dacfd675b72b3141afb52b0d50c40350a3f3bbb1995a08cfe6f35e8c505e2bf
|
2026-01-16T00:00:00-05:00
|
StatLLaMA: A multi-stage training framework for building a domain-optimized statistical language model
|
arXiv:2601.09718v1 Announce Type: new Abstract: This study investigates how to efficiently build a domain-specialized large language model (LLM) for statistics using the lightweight LLaMA-3.2-3B family as the foundation model (FM). We systematically compare three multi-stage training pipelines, starting from a base FM with no instruction-following capability, a base FM augmented with post-hoc instruction tuning, and an instruction-tuned FM with strong general reasoning abilities across continual pretraining, supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF) preference alignment, and downstream task adaptation. Results show that pipelines beginning with a base FM fail to develop meaningful statistical reasoning, even after extensive instruction tuning, SFT, or RLHF alignment. In contrast, starting from LLaMA-3.2-3B-Instruct enables effective domain specialization. A comprehensive evaluation of SFT variants reveals clear trade-offs between domain expertise and general reasoning ability. We further demonstrate that direct preference optimization provides stable and effective RLHF preference alignment. Finally, we show that downstream fine-tuning must be performed with extremely low intensity to avoid catastrophic forgetting in highly optimized models. The final model, StatLLaMA, achieves strong and balanced performance on benchmarks of mathematical reasoning, common-sense reasoning, and statistical expertise, offering a practical blueprint for developing resource-efficient statistical LLMs. The code is available at https://github.com/HuangDLab/StatLLaMA.
|
https://arxiv.org/abs/2601.09718
|
Academic Papers
|
svg
|
d22623716d5009a15b483f8e9707b3664bbac91c259a50dfbc6858c09ada3bc5
|
2026-01-16T00:00:00-05:00
|
Bounded Hyperbolic Tangent: A Stable and Efficient Alternative to Pre-Layer Normalization in Large Language Models
|
arXiv:2601.09719v1 Announce Type: new Abstract: Pre-Layer Normalization (Pre-LN) is the de facto choice for large language models (LLMs) and is crucial for stable pretraining and effective transfer learning. However, Pre-LN is inefficient due to repeated statistical calculations and suffers from the curse of depth. As layers grow, the magnitude and variance of the hidden state escalate, destabilizing training. Efficiency-oriented normalization-free methods such as Dynamic Tanh (DyT) improve speed but remain fragile at depth. To jointly address stability and efficiency, we propose Bounded Hyperbolic Tanh (BHyT), a drop-in replacement for Pre-LN. BHyT couples a tanh nonlinearity with explicit, data-driven input bounding to keep activations within a non-saturating range. It prevents depth-wise growth in activation magnitude and variance and comes with a theoretical stability guarantee. For efficiency, BHyT computes exact statistics once per block and replaces a second normalization with a lightweight variance approximation, enhancing efficiency. Empirically, BHyT demonstrates improved stability and efficiency during pretraining, achieving an average of 15.8% faster training and an average of 4.2% higher token generation throughput compared to RMSNorm., while matching or surpassing its inference performance and robustness across language understanding and reasoning benchmarks. Our code is available at: https://anonymous.4open.science/r/BHyT
|
https://arxiv.org/abs/2601.09719
|
Academic Papers
|
svg
|
b9306b4b419280ca1475281b55234eca2c2a95b221b1784442280ebd8a1bef9f
|
2026-01-16T00:00:00-05:00
|
Uncertainty-Aware Dynamic Knowledge Graphs for Reliable Question Answering
|
arXiv:2601.09720v1 Announce Type: new Abstract: Question answering (QA) systems are increasingly deployed across domains. However, their reliability is undermined when retrieved evidence is incomplete, noisy, or uncertain. Existing knowledge graph (KG) based QA frameworks typically represent facts as static and deterministic, failing to capture the evolving nature of information and the uncertainty inherent in reasoning. We present a demonstration of uncertainty-aware dynamic KGs, a framework that combines (i) dynamic construction of evolving KGs, (ii) confidence scoring and uncertainty-aware retrieval, and (iii) an interactive interface for reliable and interpretable QA. Our system highlights how uncertainty modeling can make QA more robust and transparent by enabling users to explore dynamic graphs, inspect confidence-annotated triples, and compare baseline versus confidence-aware answers. The target users of this demo are clinical data scientists and clinicians, and we instantiate the framework in healthcare: constructing personalized KGs from electronic health records, visualizing uncertainty across patient visits, and evaluating its impact on a mortality prediction task. This use case demonstrates the broader promise of uncertainty-aware dynamic KGs for enhancing QA reliability in high-stakes applications.
|
https://arxiv.org/abs/2601.09720
|
Academic Papers
|
svg
|
edd8024a0b7e012acea66ba54d127c71a8266a43690503e72ad7983c496c99d6
|
2026-01-16T00:00:00-05:00
|
Cross-Platform Evaluation of Large Language Model Safety in Pediatric Consultations: Evolution of Adversarial Robustness and the Scale Paradox
|
arXiv:2601.09721v1 Announce Type: new Abstract: Background Large language models (LLMs) are increasingly deployed in medical consultations, yet their safety under realistic user pressures remains understudied. Prior assessments focused on neutral conditions, overlooking vulnerabilities from anxious users challenging safeguards. This study evaluated LLM safety under parental anxiety-driven adversarial pressures in pediatric consultations across models and platforms. Methods PediatricAnxietyBench, from a prior evaluation, includes 300 queries (150 authentic, 150 adversarial) spanning 10 topics. Three models were assessed via APIs: Llama-3.3-70B and Llama-3.1-8B (Groq), Mistral-7B (HuggingFace), yielding 900 responses. Safety used a 0-15 scale for restraint, referral, hedging, emergency recognition, and non-prescriptive behavior. Analyses employed paired t-tests with bootstrapped CIs. Results Mean scores: 9.70 (Llama-3.3-70B) to 10.39 (Mistral-7B). Llama-3.1-8B outperformed Llama-3.3-70B by +0.66 (p=0.0001, d=0.225). Models showed positive adversarial effects, Mistral-7B strongest (+1.09, p=0.0002). Safety generalized across platforms; Llama-3.3-70B had 8% failures. Seizures vulnerable (33% inappropriate diagnoses). Hedging predicted safety (r=0.68, p<0.001). Conclusions Evaluation shows safety depends on alignment and architecture over scale, with smaller models outperforming larger. Evolution to robustness across releases suggests targeted training progress. Vulnerabilities and no emergency recognition indicate unsuitability for triage. Findings guide selection, stress adversarial testing, and provide open benchmark for medical AI safety.
|
https://arxiv.org/abs/2601.09721
|
Academic Papers
|
svg
|
7d30733019501884f2c02cac87149b84c4237cf7c267d378e714255b15d9ea08
|
2026-01-16T00:00:00-05:00
|
ADMEDTAGGER: an annotation framework for distillation of expert knowledge for the Polish medical language
|
arXiv:2601.09722v1 Announce Type: new Abstract: In this work, we present an annotation framework that demonstrates how a multilingual LLM pretrained on a large corpus can be used as a teacher model to distill the expert knowledge needed for tagging medical texts in Polish. This work is part of a larger project called ADMEDVOICE, within which we collected an extensive corpus of medical texts representing five clinical categories - Radiology, Oncology, Cardiology, Hypertension, and Pathology. Using this data, we had to develop a multi-class classifier, but the fundamental problem turned out to be the lack of resources for annotating an adequate number of texts. Therefore, in our solution, we used the multilingual Llama3.1 model to annotate an extensive corpus of medical texts in Polish. Using our limited annotation resources, we verified only a portion of these labels, creating a test set from them. The data annotated in this way were then used for training and validation of 3 different types of classifiers based on the BERT architecture - the distilled DistilBERT model, BioBERT fine-tuned on medical data, and HerBERT fine-tuned on the Polish language corpus. Among the models we trained, the DistilBERT model achieved the best results, reaching an F1 score > 0.80 for each clinical category and an F1 score > 0.93 for 3 of them. In this way, we obtained a series of highly effective classifiers that represent an alternative to large language models, due to their nearly 500 times smaller size, 300 times lower GPU VRAM consumption, and several hundred times faster inference.
|
https://arxiv.org/abs/2601.09722
|
Academic Papers
|
svg
|
0a6bfa2a53bf2f17be0e0ab89ef81e22584a70040ea138fc45945c0acfbc3f00
|
2026-01-16T00:00:00-05:00
|
SagaScale: A Realistic, Scalable, and High-Quality Long-Context Benchmark Built from Full-Length Novels
|
arXiv:2601.09723v1 Announce Type: new Abstract: Large Language Models (LLMs) have shown significant progress, but understanding long and complex documents remains challenging. Many long-context benchmarks have been proposed, but they face several limitations, including task realism, data scalability, and data quality. To this end, we introduce SagaScale, a realistic, scalable, and high-quality long-context benchmark built from full-length novels. The entire benchmark is constructed using an automated data collection pipeline that utilizes external resources (e.g., Wikipedia pages) to curate question-answer pairs. Critically, these external resources are provided only for benchmark construction and not during evaluation, which allows LLMs to curate complex questions that go beyond what they can answer during evaluation. SagaScale is also bilingual and offers the largest context length to date, with average token counts exceeding 250K for English novels and 320K for Chinese novels. Our evaluation across 12 frontier LLMs and three long-context methods -- Na\"ive RAG, Agentic RAG, and Long Context -- yields key insights, including: (1) Directly supplying the full context to the LLM can outperform other methods by a large margin; (2) Most LLMs still struggle with lengthy contexts, but Gemini-2.5-Pro stands out as an exception; and (3) Agentic RAG effectively addresses the retrieval bottleneck in Na\"ive RAG. Finally, we publicly release the SagaScale benchmark and our data collection codebase to facilitate future research.
|
https://arxiv.org/abs/2601.09723
|
Academic Papers
|
svg
|
066930686448308e7e58024254c7cb797800dd7fc1badd9933dac2ef1408a757
|
2026-01-16T00:00:00-05:00
|
Syntactic Framing Fragility: An Audit of Robustness in LLM Ethical Decisions
|
arXiv:2601.09724v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed in consequential decision-making settings, yet their robustness to benign prompt variation remains underexplored. In this work, we study whether LLMs maintain consistent ethical judgments across logically equivalent but syntactically different prompts, focusing on variations involving negation and conditional structure. We introduce Syntactic Framing Fragility (SFF), a robustness evaluation framework that isolates purely syntactic effects via Logical Polarity Normalization (LPN), enabling direct comparison of decisions across positive and negative framings without semantic drift. Auditing 23 state-of-the-art models spanning the U.S. and China as well as small U.S. open-source software models over 14 ethical scenarios and four controlled framings (39,975 decisions), we find widespread and statistically significant inconsistency: many models reverse ethical endorsements solely due to syntactic polarity, with open-source models exhibiting over twice the fragility of commercial counterparts. We further uncover extreme negation sensitivity, where some models endorse actions in 80-97% of cases when explicitly prompted with "should not." We show that eliciting chain-of-thought reasoning substantially reduces fragility, identifying a practical mitigation lever, and we map fragility across scenarios, finding higher risk in financial and business contexts than in medical scenarios. Our results demonstrate that syntactic consistency constitutes a distinct and critical dimension of ethical robustness, and we argue that SFF-style audits should be a standard component of safety evaluation for deployed LLMs. Code and results will be available on github.com.
|
https://arxiv.org/abs/2601.09724
|
Academic Papers
|
svg
|
5c22c8c5dc734977d6931f7dc995e9d97a3ecb00171e3e1ef6e83811bfb46b84
|
2026-01-16T00:00:00-05:00
|
Assessing and Improving Punctuation Robustness in English-Marathi Machine Translation
|
arXiv:2601.09725v1 Announce Type: new Abstract: Punctuation plays a critical role in resolving semantic and structural ambiguity in written language. Machine Translation (MT) systems are now widely applied across diverse domains and languages, including many low-resource settings. In this work, we focus on Marathi, a low- to middle-resource language. We introduce Vir\=am, the first diagnostic benchmark for assessing punctuation robustness in English-to-Marathi machine translation, consisting of 54 manually curated, punctuation-ambiguous instances. We evaluate two primary strategies for enhancing reliability: a pipeline-based restore-then-translate approach and direct fine-tuned on punctuation-varied data. Our results demonstrate that specialized fine-tuned models and pipeline systems significantly improve translation quality over standard baselines on the Vir\=am benchmark. Qualitative analysis reveals that the original model may result in wrong translations leading to wrong interpretations, while fine-tuned models significantly improve overall reliability. Furthermore, we find that current Large Language Models (LLMs) lag behind these task-specific approaches in preserving meaning for punctuation-ambiguous text, thus necessitating further research in this area.
|
https://arxiv.org/abs/2601.09725
|
Academic Papers
|
svg
|
1efb8eb0cb08e7cd2b2c950846c57accb6d3356406dbe9884dc557bb53d7d9a0
|
2026-01-16T00:00:00-05:00
|
Forgetting as a Feature: Cognitive Alignment of Large Language Models
|
arXiv:2601.09726v1 Announce Type: new Abstract: Large Language Models (LLMs) are often evaluated against ideals of perfect Bayesian inference, yet growing evidence suggests that their in-context reasoning exhibits systematic forgetting of past information. Rather than viewing this behavior as a limitation, we reinterpret forgetting as a functional cognitive mechanism. Drawing inspiration from human memory dynamics, we model LLM inference as a probabilistic memory process governed by exponential decay. We introduce a benchmark suite that evaluates temporal reasoning, concept drift adaptation, and associative recall, enabling direct comparison between model behavior and human cognitive patterns. Our empirical results reveal that LLMs demonstrate forgetting rates analogous to human memory efficiency trade-offs between stability and adaptability. Building on these observations, we propose probabilistic memory prompting, a lightweight strategy that shapes evidence integration to mimic human-like memory decay, leading to improved long-horizon reasoning performance. Our findings position forgetting not as a failure mode, but as a principled mechanism for adaptive intelligence.
|
https://arxiv.org/abs/2601.09726
|
Academic Papers
|
svg
|
08d9784e872721d51bfe2e06c4cd96f361b1c36f94c6664fc2d5fef34c79d392
|
2026-01-16T00:00:00-05:00
|
SciNets: Graph-Constrained Multi-Hop Reasoning for Scientific Literature Synthesis
|
arXiv:2601.09727v1 Announce Type: new Abstract: Cross-domain scientific synthesis requires connecting mechanistic explanations across fragmented literature, a capability that remains challenging for both retrieval-based systems and unconstrained language models. While recent work has applied large language models to scientific summarization and question answering, these approaches provide limited control over reasoning depth and structural grounding. We frame mechanistic synthesis as a graph-constrained multi-hop reasoning problem over literature-derived concept graphs. Given a scientific query and a compact, query-local corpus, SciNets constructs a directed concept graph and synthesizes mechanistic explanations by identifying multi-hop reasoning paths that connect concepts that rarely co-occur within individual papers. We systematically compare shortest-path reasoning, k-shortest paths with diversity constraints, stochastic random walks, and a retrieval-augmented language model baseline. Rather than evaluating correctness, which is often indeterminate when synthesizing connections across distributed sources, we introduce a behavioral framework that measures symbolic reasoning depth, mechanistic diversity, and grounding stability. Across machine learning, biology, and climate science tasks, explicit graph constraints enable controllable multi-hop reasoning while revealing a consistent trade-off: deeper and more diverse symbolic reasoning increases grounding instability, whereas shortest-path reasoning remains highly stable but structurally conservative. These findings provide a systematic behavioral characterization of the limits and capabilities of current graph-LLM integration for scientific synthesis.
|
https://arxiv.org/abs/2601.09727
|
Academic Papers
|
svg
|
7983aad81cebe2f6edf60e199b40d209e392ed5510d284daaf965f87769f93c4
|
2026-01-16T00:00:00-05:00
|
Eliminating Agentic Workflow for Introduction Generation with Parametric Stage Tokens
|
arXiv:2601.09728v1 Announce Type: new Abstract: In recent years, using predefined agentic workflows to guide large language models (LLMs) for literature classification and review has become a research focus. However, writing research introductions is more challenging. It requires rigorous logic, coherent structure, and abstract summarization. Existing workflows often suffer from long reasoning chains, error accumulation, and reduced textual coherence. To address these limitations, we propose eliminating external agentic workflows. Instead, we directly parameterize their logical structure into the LLM. This allows the generation of a complete introduction in a single inference. To this end, we introduce the Stage Token for Introduction Generation (STIG). STIG converts the multiple stages of the original workflow into explicit stage signals. These signals guide the model to follow different logical roles and functions during generation. Through instruction tuning, the model learns the mapping between stage tokens and text functions. It also learns the logical order and transition patterns between stages, encoding this knowledge into the model parameters. Experimental results show that STIG can generate multi-stage text in a single inference. It does not require explicit workflow calls. STIG outperforms traditional agentic workflows and other baselines on metrics of semantic similarity and sentence-level structural rationality. The code is provided in the Supplementary Materials.
|
https://arxiv.org/abs/2601.09728
|
Academic Papers
|
svg
|
570a08cfec0947d9d72ae1b80596756fc8d853562edafe677ec7e24ae3da8ab4
|
2026-01-16T00:00:00-05:00
|
Enhancing Business Analytics through Hybrid Summarization of Financial Reports
|
arXiv:2601.09729v1 Announce Type: new Abstract: Financial reports and earnings communications contain large volumes of structured and semi structured information, making detailed manual analysis inefficient. Earnings conference calls provide valuable evidence about a firm's performance, outlook, and strategic priorities. The manual analysis of lengthy call transcripts requires substantial effort and is susceptible to interpretive bias and unintentional error. In this work, we present a hybrid summarization framework that combines extractive and abstractive techniques to produce concise and factually reliable Reuters-style summaries from the ECTSum dataset. The proposed two stage pipeline first applies the LexRank algorithm to identify salient sentences, which are subsequently summarized using fine-tuned variants of BART and PEGASUS designed for resource constrained settings. In parallel, we fine-tune a Longformer Encoder-Decoder (LED) model to directly capture long-range contextual dependencies in financial documents. Model performance is evaluated using standard automatic metrics, including ROUGE, METEOR, MoverScore, and BERTScore, along with domain-specific variants such as SciBERTScore and FinBERTScore. To assess factual accuracy, we further employ entity-level measures based on source-precision and F1-target. The results highlight complementary trade offs between approaches, long context models yield the strongest overall performance, while the hybrid framework achieves competitive results with improved factual consistency under computational constraints. These findings support the development of practical summarization systems for efficiently distilling lengthy financial texts into usable business insights.
|
https://arxiv.org/abs/2601.09729
|
Academic Papers
|
svg
|
1bf09099237eef164d8b8b0eac43f1ed623c79bad5d3fda1b6e963efa5e6791b
|
2026-01-16T00:00:00-05:00
|
Clinical Document Metadata Extraction: A Scoping Review
|
arXiv:2601.09730v1 Announce Type: new Abstract: Clinical document metadata, such as document type, structure, author role, medical specialty, and encounter setting, is essential for accurate interpretation of information captured in clinical documents. However, vast documentation heterogeneity and drift over time challenge harmonization of document metadata. Automated extraction methods have emerged to coalesce metadata from disparate practices into target schema. This scoping review aims to catalog research on clinical document metadata extraction, identify methodological trends and applications, and highlight gaps. We followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines to identify articles that perform clinical document metadata extraction. We initially found and screened 266 articles published between January 2011 and August 2025, then comprehensively reviewed 67 we deemed relevant to our study. Among the articles included, 45 were methodological, 17 used document metadata as features in a downstream application, and 5 analyzed document metadata composition. We observe myriad purposes for methodological study and application types. Available labelled public data remains sparse except for structural section datasets. Methods for extracting document metadata have progressed from largely rule-based and traditional machine learning with ample feature engineering to transformer-based architectures with minimal feature engineering. The emergence of large language models has enabled broader exploration of generalizability across tasks and datasets, allowing the possibility of advanced clinical text processing systems. We anticipate that research will continue to expand into richer document metadata representations and integrate further into clinical applications and workflows.
|
https://arxiv.org/abs/2601.09730
|
Academic Papers
|
svg
|
ed2d4a2507f0fad7cc6375a8920450531d124d9b8eb706dcbfbb4529a61aabeb
|
2026-01-16T00:00:00-05:00
|
Geometric Patterns of Meaning: A PHATE Manifold Analysis of Multi-lingual Embeddings
|
arXiv:2601.09731v1 Announce Type: new Abstract: We introduce a multi-level analysis framework for examining semantic geometry in multilingual embeddings, implemented through Semanscope (a visualization tool that applies PHATE manifold learning across four linguistic levels). Analysis of diverse datasets spanning sub-character components, alphabetic systems, semantic domains, and numerical concepts reveals systematic geometric patterns and critical limitations in current embedding models. At the sub-character level, purely structural elements (Chinese radicals) exhibit geometric collapse, highlighting model failures to distinguish semantic from structural components. At the character level, different writing systems show distinct geometric signatures. At the word level, content words form clustering-branching patterns across 20 semantic domains in English, Chinese, and German. Arabic numbers organize through spiral trajectories rather than clustering, violating standard distributional semantics assumptions. These findings establish PHATE manifold learning as an essential analytic tool not only for studying geometric structure of meaning in embedding space, but also for validating the effectiveness of embedding models in capturing semantic relationships.
|
https://arxiv.org/abs/2601.09731
|
Academic Papers
|
svg
|
a4aa0a99c07ebfcde800fa8c31fe3b3c886bc0e712e5e310f37a5509856a5467
|
2026-01-16T00:00:00-05:00
|
Benchmarking Cross-Lingual Semantic Alignment in Multilingual Embeddings
|
arXiv:2601.09732v1 Announce Type: new Abstract: With hundreds of multilingual embedding models available, practitioners lack clear guidance on which provide genuine cross-lingual semantic alignment versus task performance through language-specific patterns. Task-driven benchmarks (MTEB) may mask fundamental alignment shortcomings. We introduce Semantic Affinity (SA), a bounded (between 0 and 1) metric measuring inter-lingual to intra-lingual spread ratio using cosine distance, combined with PHATE visualization in our Semanscope framework. Benchmarking 13 models across 4 datasets (52 experiments) reveals a three-tier structure: (1) Top BERT models (LaBSE SA = 0.70, USE SA = 0.68, S-BERT SA = 0.68) achieve strong alignment via translation-pair supervision; (2) LLM embeddings plateau at SA between 0.55 and 0.61 regardless of 0.6 B to 8 B scale; (3) MLM-only BERT models (mBERT, XLM-R, SA < 0.50) fail despite more than 100 language training. Training objective, not architecture or scale, determines alignment. Oracle Bone primitives (1200 BCE) expose semantic drift-models learn corpus patterns rather than cognitive primitives. This work provides semantic benchmarking to help practitioners select quality multilingual embeddings from hundreds of available models, showing cross-lingual alignment requires explicit translation supervision, not merely model scale or multilingual data.
|
https://arxiv.org/abs/2601.09732
|
Academic Papers
|
svg
|
391803660a10f81d135cba66f1a5624be0399345a2ef6005ea78367f7495867c
|
2026-01-16T00:00:00-05:00
|
Closing the Data Loop: Using OpenDataArena to Engineer Superior Training Datasets
|
arXiv:2601.09733v1 Announce Type: new Abstract: The construction of Supervised Fine-Tuning (SFT) datasets is a critical yet under-theorized stage in the post-training of Large Language Models (LLMs), as prevalent practices often rely on heuristic aggregation without a systematic understanding of how individual samples contribute to model performance. In this report, we propose a paradigm shift from ad-hoc curation to a closed-loop dataset engineering framework using OpenDataArena (ODA), which leverages value-anchored rankings and multi-dimensional analysis to transform value benchmarking into feedback signals guiding dataset construction. We instantiate this methodology through two new datasets: \textbf{ODA-Math-460k}, a specialized mathematics reasoning dataset that utilizes a novel two-stage difficulty-aware pipeline to achieve State-of-the-Art (SOTA) results on benchmarks such as AIME and HMMT, and \textbf{ODA-Mixture (100k \& 500k)}, a series of multi-domain instruction datasets built via an ``Anchor-and-Patch'' strategy that outperforms significantly larger open-source baselines. Our empirical results demonstrate that ODA-driven datasets significantly improve both domain-specific reasoning and general utility while achieving superior data efficiency, validating a transition toward data-centric AI where transparent evaluation serves as the primary engine for engineering high-quality training data.
|
https://arxiv.org/abs/2601.09733
|
Academic Papers
|
svg
|
a3fdbb4f4029c85471d983232d362ed37eee368180bf255a04b6f865aa398008
|
2026-01-16T00:00:00-05:00
|
From Detection to Diagnosis: Advancing Hallucination Analysis with Automated Data Synthesis
|
arXiv:2601.09734v1 Announce Type: new Abstract: Hallucinations in Large Language Models (LLMs), defined as the generation of content inconsistent with facts or context, represent a core obstacle to their reliable deployment in critical domains. Current research primarily focuses on binary "detection" approaches that, while capable of identifying hallucinations, fail to provide interpretable and actionable feedback for model improvement, thus limiting practical utility. To address this limitation, a new research paradigm is proposed, shifting from "detection" to "diagnosis". The Hallucination Diagnosis Task is introduced, a task which requires models to not only detect hallucinations, but also perform error localization, causal explanation, and content correction. We develop the Hallucination Diagnosis Generator (HDG), an automated pipeline that systematically generates high-quality training samples with rich diagnostic metadata from raw corpora through multi-dimensional augmentation strategies including controlled fact fabrication and reasoning chain perturbation. Using HDG-generated data, we train HDM-4B-RL, a 4-billion-parameter hallucination diagnosis model, employing Group Relative Policy Optimization (GRPO) with a comprehensive reward function incorporating structural, accuracy, and localization signals. Experimental results demonstrate that our model surpasses previous state-of-the-art detection models on the HaluEval benchmark while achieving comparable performance to advanced general-purpose models. In comprehensive diagnosis tasks, HDM-4B-RL matches the capabilities of larger general models while maintaining a smaller size. This work validates the feasibility and value of hallucination diagnosis, providing an effective methodology for building more trustworthy and reliable generative AI systems.
|
https://arxiv.org/abs/2601.09734
|
Academic Papers
|
svg
|
5e07b775ab1f3af6d55a56667fb6015375d2f0478ecfc8471b5ce2e1b6748e3f
|
2026-01-16T00:00:00-05:00
|
Multiverse: Transactional Memory with Dynamic Multiversioning
|
arXiv:2601.09735v1 Announce Type: new Abstract: Software transactional memory (STM) allows programmers to easily implement concurrent data structures. STMs simplify atomicity. Recent STMs can achieve good performance for some workloads but they have some limitations. In particular, STMs typically cannot support long-running reads which access a large number of addresses that are frequently updated. Multiversioning is a common approach used to support this type of workload. However, multiversioning is often expensive and can reduce the performance of transactions where versioning is not necessary. In this work we present Multiverse, a new STM that combines the best of both unversioned TM and multiversioning. Multiverse features versioned and unversioned transactions which can execute concurrently. A main goal of Multiverse is to ensure that unversioned transactions achieve performance comparable to the state of the art unversioned STM while still supporting fast versioned transactions needed to enable long running reads. We implement Multiverse and compare it against several STMs. Our experiments demonstrate that Multiverse achieves comparable or better performance for common case workloads where there are no long running reads. For workloads with long running reads and frequent updates Multiverse significantly outperforms existing STMS. In several cases for these workloads the throughput of Multiverse is several orders of magnitude faster than other STMs.
|
https://arxiv.org/abs/2601.09735
|
Academic Papers
|
svg
|
fe6949ce549adf3c949311e7a56182244b6f6ed815484102d7b8be3afb1c47bb
|
2026-01-16T00:00:00-05:00
|
Reinforced Linear Genetic Programming
|
arXiv:2601.09736v1 Announce Type: new Abstract: Linear Genetic Programming (LGP) is a powerful technique that allows for a variety of problems to be solved using a linear representation of programs. However, there still exists some limitations to the technique, such as the need for humans to explicitly map registers to actions. This thesis proposes a novel approach that uses Q-Learning on top of LGP, Reinforced Linear Genetic Programming (RLGP) to learn the optimal register-action assignments. In doing so, we introduce a new framework "linear-gp" written in memory-safe Rust that allows for extensive experimentation for future works.
|
https://arxiv.org/abs/2601.09736
|
Academic Papers
|
svg
|
5426f7fc1c3dfe51f08a5fbb8362e861afa904a0fccb214d5357fec23c5aa767
|
2026-01-16T00:00:00-05:00
|
Filtering for Copyright Enforcement in Europe after the Sabam cases
|
arXiv:2601.09739v1 Announce Type: new Abstract: Sabam, a Belgian collective rights management organisation, wanted an internet access provider and a social network site to install a filter system to enforce copyrights. In two recent judgments, the Court of Justice of the European Union decided that the social network site and the internet access provider cannot be required to install the filter system that Sabam asked for. Are these judgments good news for fundamental rights? This article argues that little is won for privacy and freedom of information.
|
https://arxiv.org/abs/2601.09739
|
Academic Papers
|
svg
|
ccacdf71db9aa0676713a3d5ccd309b1abdd742eb3827a2ac15d0a1408a1b3c3
|
2026-01-16T00:00:00-05:00
|
Formal Safety Guarantees for Autonomous Vehicles using Barrier Certificates
|
arXiv:2601.09740v1 Announce Type: new Abstract: Modern AI technologies enable autonomous vehicles to perceive complex scenes, predict human behavior, and make real-time driving decisions. However, these data-driven components often operate as black boxes, lacking interpretability and rigorous safety guarantees. Autonomous vehicles operate in dynamic, mixed-traffic environments where interactions with human-driven vehicles introduce uncertainty and safety challenges. This work develops a formally verified safety framework for Connected and Autonomous Vehicles (CAVs) that integrates Barrier Certificates (BCs) with interpretable traffic conflict metrics, specifically Time-to-Collision (TTC) as a spatio-temporal safety metric. Safety conditions are verified using Satisfiability Modulo Theories (SMT) solvers, and an adaptive control mechanism ensures vehicles comply with these constraints in real time. Evaluation on real-world highway datasets shows a significant reduction in unsafe interactions, with up to 40\% fewer events where TTC falls below a 3 seconds threshold, and complete elimination of conflicts in some lanes. This approach provides both interpretable and provable safety guarantees, demonstrating a practical and scalable strategy for safe autonomous driving.
|
https://arxiv.org/abs/2601.09740
|
Academic Papers
|
svg
|
8382e44b6fec1d233aae20fe822eea0162c39aea8dc31dd2af635e4ef4e7c302
|
2026-01-16T00:00:00-05:00
|
Putting green software principles into practice
|
arXiv:2601.09741v1 Announce Type: new Abstract: The need and theoretical methods for measuring and reducing CO2 emitted by computing systems are well understood, but real-world examples are still limited. We describe a journey towards green software for a live product running on a public cloud. We discuss practical solutions found, in particular using the cost implications of serverless systems to drive efficiency. We end with some `green software' principles that worked well in this project.
|
https://arxiv.org/abs/2601.09741
|
Academic Papers
|
svg
|
d1642c5b03550e1e53e558402c9d916675b4b402bdd7007072e82a9b41290413
|
2026-01-16T00:00:00-05:00
|
Adaptive Orchestration: Scalable Self-Evolving Multi-Agent Systems
|
arXiv:2601.09742v1 Announce Type: new Abstract: As Large Language Models (LLMs) are increasingly deployed as autonomous agents, they face a critical scalability bottleneck known as the "Generalization-Specialization Dilemma." Monolithic agents equipped with extensive toolkits suffer from context pollution and attention decay, leading to hallucinations. Conversely, static multi-agent swarms introduce significant latency and resource overhead. This paper introduces a Self-Evolving Concierge System, a novel architecture utilizing a Dynamic Mixture of Experts (DMoE) approach. Unlike recent self-improving agents that rewrite their own codebase, our system preserves stability by dynamically restructuring its runtime environment: "hiring" specialized sub-agents based on real-time conversation analysis. We introduce an asynchronous "Meta-Cognition Engine" that detects capability gaps, a Least Recently Used (LRU) eviction policy for resource constraints, and a novel "Surgical History Pruning" mechanism to mitigate refusal bias. Experimental results demonstrate that this architecture maintains high task success rates while minimizing token consumption compared to static agent swarms.
|
https://arxiv.org/abs/2601.09742
|
Academic Papers
|
svg
|
54381d839bbaf736021cbf34de34bfb18301b68e403fcebb3e0d8ea8ddcd2acd
|
2026-01-16T00:00:00-05:00
|
A Governance Model for IoT Data in Global Manufacturing
|
arXiv:2601.09744v1 Announce Type: new Abstract: Industrial IoT platforms in global manufacturing environments generate continuous operational data across production assets, utilities, and connected products. While data ingestion and storage capabilities have matured significantly, enterprises continue to face systemic challenges in governing IoT data at scale. These challenges are not rooted in tooling limitations but in the absence of a governance model that aligns with the realities of distributed operational ownership, heterogeneous source systems, and continuous change at the edge. This paper presents a federated governance model that emphasizes contract-driven interoperability, policy-as-code enforcement, and asset-centric accountability across global manufacturing organizations. The model addresses governance enforcement at architectural boundaries, enabling semantic consistency, quality assurance, and regulatory compliance without requiring centralized control of operational technology systems. This work contributes a systems architecture and design framework grounded in analysis of manufacturing IoT requirements and constraints; empirical validation remains future work
|
https://arxiv.org/abs/2601.09744
|
Academic Papers
|
svg
|
7eecc221e48f128e13f24136a734db237f9cf7df0aee13de72281f3b6c6afca3
|
2026-01-16T00:00:00-05:00
|
Enhancing Formal Software Specification with Artificial Intelligence
|
arXiv:2601.09745v1 Announce Type: new Abstract: Formal software specification is known to enable early error detection and explicit invariants, yet it has seen limited industrial adoption due to its high notation overhead and the expertise required to use traditional formal languages. This paper presents a case study showing that recent advances in artificial intelligence make it possible to retain many of the benefits of formal specification while substantially reducing these costs. The necessity of a clear distinction between what is controlled by the system analyst and can highly benefits from the rigor of formal specification and what need not be controlled is demonstrated. We use natural language augmented with lightweight mathematical notation and written in \LaTeX\ as an intermediate specification language, which is reviewed and refined by AI prior to code generation. Applied to a nontrivial simulation of organizational knowledge growth, this approach enables early validation, explicit invariants, and correctness by design, while significantly reducing development effort and producing a correct implementation on the first attempt.
|
https://arxiv.org/abs/2601.09745
|
Academic Papers
|
svg
|
270262e2577aefc32c0af52b6a2bb2acbf9adc85998f5b58df0c157e75e151a1
|
2026-01-16T00:00:00-05:00
|
Multi-Agent Cooperative Learning for Robust Vision-Language Alignment under OOD Concepts
|
arXiv:2601.09746v1 Announce Type: new Abstract: This paper introduces a novel Multi-Agent Cooperative Learning (MACL) framework to address cross-modal alignment collapse in vision-language models when handling out-of-distribution (OOD) concepts. Four core agents, including image, text, name, and coordination agents, collaboratively mitigate modality imbalance through structured message passing. The proposed framework enables multi-agent feature space name learning, incorporates a context exchange enhanced few-shot learning algorithm, and adopts an adaptive dynamic balancing mechanism to regulate inter-agent contributions. Experiments on the VISTA-Beyond dataset demonstrate that MACL significantly improves performance in both few-shot and zero-shot settings, achieving 1-5% precision gains across diverse visual domains.
|
https://arxiv.org/abs/2601.09746
|
Academic Papers
|
svg
|
cb9d2b7aa0c1ffe3be384818c036dfeb509189dc873e59f2d38ff775c600788e
|
2026-01-16T00:00:00-05:00
|
Instalaci\'on, configuraci\'on y utilizaci\'on de un nodo Bitcoin en Linux
|
arXiv:2601.09748v1 Announce Type: new Abstract: This paper documents the installation, configuration, and operation of a full Bitcoin node in a Linux environment, from manual compilation of the source code to complete synchronization with the network. The technical phases of the process are described, the main files generated by Bitcoin Core are analyzed, and the effects of the parameters txindex, prune, dbcache, maxmempool, and maxconnections are empirically studied. System resources during the block download (IBD) mechanism are also documented, and the operational importance of each resource is explained. This paper provides a solid foundation for future research proposals on Bitcoin node performance or for the development of blockchain data query tools.
|
https://arxiv.org/abs/2601.09748
|
Academic Papers
|
svg
|
61e3f783008550a23952e0f75fc68919ead6339ef62c534e3374242c3cc94f50
|
2026-01-16T00:00:00-05:00
|
R-LAM: Reproducibility-Constrained Large Action Models for Scientific Workflow Automation
|
arXiv:2601.09749v1 Announce Type: new Abstract: Large Action Models (LAMs) extend large language models by enabling autonomous decision-making and tool execution, making them promising for automating scientific workflows. However, scientific workflows impose strict requirements on reproducibility, auditability, and deterministic execution, which are not satisfied by generic LLM-based agents. Unconstrained action generation can lead to silent state changes, non-deterministic executions, and irreproducible experimental results, limiting the applicability of LAMs in scientific settings. In this paper, we propose R-LAM, a reproducibility-constrained framework for applying Large Action Models to scientific workflow automation. R-LAM introduces structured action schemas, deterministic execution policies, and explicit provenance tracking to ensure that every action and intermediate artifact is auditable and replayable. The framework supports failure-aware execution loops and controlled workflow forking, enabling iterative experimentation without compromising reproducibility. We implement R-LAM as a lightweight Python framework and release it as an open-source PyPI package to facilitate reproducible research. An experimental evaluation of representative scientific workflows demonstrates that R-LAM improves reproducibility success rates and execution reliability compared to unconstrained LLM-based agents, while retaining adaptive control over workflow execution.
|
https://arxiv.org/abs/2601.09749
|
Academic Papers
|
svg
|
9848f89eaf3ccaaabea08ecb132a74d4b9f863571c9514e19ae3e572dcf02917
|
2026-01-16T00:00:00-05:00
|
SAGE: Tool-Augmented LLM Task Solving Strategies in Scalable Multi-Agent Environments
|
arXiv:2601.09750v1 Announce Type: new Abstract: Large language models (LLMs) have proven to work well in question-answering scenarios, but real-world applications often require access to tools for live information or actuation. For this, LLMs can be extended with tools, which are often defined in advance, also allowing for some fine-tuning for specific use cases. However, rapidly evolving software landscapes and individual services require the constant development and integration of new tools. Domain- or company-specific tools can greatly elevate the usefulness of an LLM, but such custom tools can be problematic to integrate, or the LLM may fail to reliably understand and use them. For this, we need strategies to define new tools and integrate them into the LLM dynamically, as well as robust and scalable zero-shot prompting methods that can make use of those tools in an efficient manner. In this paper, we present SAGE, a specialized conversational AI interface, based on the OPACA framework for tool discovery and execution. The integration with OPACA makes it easy to add new tools or services for the LLM to use, while SAGE itself presents rich extensibility and modularity. This not only provides the ability to seamlessly switch between different models (e.g. GPT, LLAMA), but also to add and select prompting methods, involving various setups of differently prompted agents for selecting and executing tools and evaluating the results. We implemented a number of task-solving strategies, making use of agentic concepts and prompting methods in various degrees of complexity, and evaluated those against a comprehensive set of benchmark services. The results are promising and highlight the distinct strengths and weaknesses of different task-solving strategies. Both SAGE and the OPACA framework, as well as the different benchmark services and results, are available as Open Source/Open Data on GitHub.
|
https://arxiv.org/abs/2601.09750
|
Academic Papers
|
svg
|
668f8e828a7ff18dd200995258b88b74079d7a9cd7f314908b7629611ef2809a
|
2026-01-16T00:00:00-05:00
|
Critically Engaged Pragmatism: A Scientific Norm and Social, Pragmatist Epistemology for AI Science Evaluation Tools
|
arXiv:2601.09753v1 Announce Type: new Abstract: Crises in peer review capacity, study replication, and AI-fabricated science have intensified interest in automated tools for assessing scientific research. However, the scientific community has a history of decontextualizing and repurposing credibility markers in inapt ways. I caution that AI science evaluation tools are particularly prone to these kinds of inference by false ascent due to contestation about the purposes to which they should be put, their portability across purposes, and technical demands that prioritize data set size over epistemic fit. To counter this, I argue for a social, pragmatist epistemology and a newly articulated norm of Critically Engaged Pragmatism to enjoin scientific communities to vigorously scrutinize the purposes and purpose-specific reliability of AI science evaluation tools. Under this framework, AI science evaluation tools are not objective arbiters of scientific credibility, but the object of the kinds of critical discursive practices that ground the credibility of scientific communities.
|
https://arxiv.org/abs/2601.09753
|
Academic Papers
|
svg
|
45afa25e128bb4b7ad94d621aa2cffb1292cc33198cb447084fd529881b88fa5
|
2026-01-16T00:00:00-05:00
|
Heterogeneous computing platform for real-time robotics
|
arXiv:2601.09755v1 Announce Type: new Abstract: After Industry 4.0 has embraced tight integration between machinery (OT), software (IT), and the Internet, creating a web of sensors, data, and algorithms in service of efficient and reliable production, a new concept of Society 5.0 is emerging, in which infrastructure of a city will be instrumented to increase reliability, efficiency, and safety. Robotics will play a pivotal role in enabling this vision that is pioneered by the NEOM initiative - a smart city, co-inhabited by humans and robots. In this paper we explore the computing platform that will be required to enable this vision. We show how we can combine neuromorphic computing hardware, exemplified by the Loihi2 processor used in conjunction with event-based cameras, for sensing and real-time perception and interaction with a local AI compute cluster (GPUs) for high-level language processing, cognition, and task planning. We demonstrate the use of this hybrid computing architecture in an interactive task, in which a humanoid robot plays a musical instrument with a human. Central to our design is the efficient and seamless integration of disparate components, ensuring that the synergy between software and hardware maximizes overall performance and responsiveness. Our proposed system architecture underscores the potential of heterogeneous computing architectures in advancing robotic autonomy and interactive intelligence, pointing toward a future where such integrated systems become the norm in complex, real-time applications.
|
https://arxiv.org/abs/2601.09755
|
Academic Papers
|
svg
|
3c45b9746099a95dcff9818f5e45a5589d5a3bcb7b12511f1e58ba8500b7dad8
|
2026-01-16T00:00:00-05:00
|
Synthetic Data for Veterinary EHR De-identification: Benefits, Limits, and Safety Trade-offs Under Fixed Compute
|
arXiv:2601.09756v1 Announce Type: new Abstract: Veterinary electronic health records (vEHRs) contain privacy-sensitive identifiers that limit secondary use. While PetEVAL provides a benchmark for veterinary de-identification, the domain remains low-resource. This study evaluates whether large language model (LLM)-generated synthetic narratives improve de-identification safety under distinct training regimes, emphasizing (i) synthetic augmentation and (ii) fixed-budget substitution. We conducted a controlled simulation using a PetEVAL-derived corpus (3,750 holdout/1,249 train). We generated 10,382 synthetic notes using a privacy-preserving "template-only" regime where identifiers were removed prior to LLM prompting. Three transformer backbones (PetBERT, VetBERT, Bio_ClinicalBERT) were trained under varying mixtures. Evaluation prioritized document-level leakage rate (the fraction of documents with at least one missed identifier) as the primary safety outcome. Results show that under fixed-sample substitution, replacing real notes with synthetic ones monotonically increased leakage, indicating synthetic data cannot safely replace real supervision. Under compute-matched training, moderate synthetic mixing matched real-only performance, but high synthetic dominance degraded utility. Conversely, epoch-scaled augmentation improved performance: PetBERT span-overlap F1 increased from 0.831 to 0.850 +/- 0.014, and leakage decreased from 6.32% to 4.02% +/- 0.19%. However, these gains largely reflect increased training exposure rather than intrinsic synthetic data quality. Corpus diagnostics revealed systematic synthetic-real mismatches in note length and label distribution that align with persistent leakage. We conclude that synthetic augmentation is effective for expanding exposure but is complementary, not substitutive, for safety-critical veterinary de-identification.
|
https://arxiv.org/abs/2601.09756
|
Academic Papers
|
svg
|
985935610148b74f13bae9adf5d40c9d00ba9a05931f17e75c89563570175556
|
2026-01-16T00:00:00-05:00
|
Democracy and Distrust in an Era of Artificial Intelligence
|
arXiv:2601.09757v1 Announce Type: new Abstract: This essay examines how judicial review should adapt to address challenges posed by artificial intelligence decision-making, particularly regarding minority rights and interests. As I argue in this essay, the rise of three trends-privatization, prediction, and automation in AI-have combined to pose similar risks to minorities. Here, I outline what a theory of judicial review would look like in an era of artificial intelligence, analyzing both the limitations and the possibilities of judicial review of AI. I draw on cases in which AI decision-making has been challenged in courts, to show how concepts of due process and equal protection can be recuperated in a modern AI era, and even integrated into AI, to provide for better oversight and accountability, offering a framework for judicial review in the AI era that protects minorities from algorithmic discrimination.
|
https://arxiv.org/abs/2601.09757
|
Academic Papers
|
svg
|
7d05e0171d956f518ea19c8e6032015be0302558d9570902afb5d8fc7a8665b7
|
2026-01-16T00:00:00-05:00
|
Investigating Tool-Memory Conflicts in Tool-Augmented LLMs
|
arXiv:2601.09760v1 Announce Type: new Abstract: Tool-augmented large language models (LLMs) have powered many applications. However, they are likely to suffer from knowledge conflict. In this paper, we propose a new type of knowledge conflict -- Tool-Memory Conflict (TMC), where the internal parametric knowledge contradicts with the external tool knowledge for tool-augmented LLMs. We find that existing LLMs, though powerful, suffer from TMC, especially on STEM-related tasks. We also uncover that under different conditions, tool knowledge and parametric knowledge may be prioritized differently. We then evaluate existing conflict resolving techniques, including prompting-based and RAG-based methods. Results show that none of these approaches can effectively resolve tool-memory conflicts.
|
https://arxiv.org/abs/2601.09760
|
Academic Papers
|
svg
|
7e09fba1bcd229c81560a6e193d0aa86684702aaa4a9463af8bb46e7d388f8ec
|
2026-01-16T00:00:00-05:00
|
Explicating Tacit Regulatory Knowledge from LLMs to Auto-Formalize Requirements for Compliance Test Case Generation
|
arXiv:2601.09762v1 Announce Type: new Abstract: Compliance testing in highly regulated domains is crucial but largely manual, requiring domain experts to translate complex regulations into executable test cases. While large language models (LLMs) show promise for automation, their susceptibility to hallucinations limits reliable application. Existing hybrid approaches mitigate this issue by constraining LLMs with formal models, but still rely on costly manual modeling. To solve this problem, this paper proposes RAFT, a framework for requirements auto-formalization and compliance test generation via explicating tacit regulatory knowledge from multiple LLMs. RAFT employs an Adaptive Purification-Aggregation strategy to explicate tacit regulatory knowledge from multiple LLMs and integrate it into three artifacts: a domain meta-model, a formal requirements representation, and testability constraints. These artifacts are then dynamically injected into prompts to guide high-precision requirement formalization and automated test generation. Experiments across financial, automotive, and power domains show that RAFT achieves expert-level performance, substantially outperforms state-of-the-art (SOTA) methods while reducing overall generation and review time.
|
https://arxiv.org/abs/2601.09762
|
Academic Papers
|
svg
|
e4fb5e1229937260b0f32b4bacb5e84758bbb9a6f48e6f2d8d26b4051f956100
|
2026-01-16T00:00:00-05:00
|
AI Survival Stories: a Taxonomic Analysis of AI Existential Risk
|
arXiv:2601.09765v1 Announce Type: new Abstract: Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy of survival stories, in which humanity survives into the far future. In each survival story, one of the two premises fails. Either scientific barriers prevent AI systems from becoming extremely powerful; or humanity bans research into AI systems, thereby preventing them from becoming extremely powerful; or extremely powerful AI systems do not destroy humanity, because their goals prevent them from doing so; or extremely powerful AI systems do not destroy humanity, because we can reliably detect and disable systems that have the goal of doing so. We argue that different survival stories face different challenges. We also argue that different survival stories motivate different responses to the threats from AI. Finally, we use our taxonomy to produce rough estimates of P(doom), the probability that humanity will be destroyed by AI.
|
https://arxiv.org/abs/2601.09765
|
Academic Papers
|
svg
|
b22a48d91b4767300c8db5cc9a254d8bfe710e2f725de58ff2671db776682ad3
|
2026-01-16T00:00:00-05:00
|
GUI-Eyes: Tool-Augmented Perception for Visual Grounding in GUI Agents
|
arXiv:2601.09770v1 Announce Type: new Abstract: Recent advances in vision-language models (VLMs) and reinforcement learning (RL) have driven progress in GUI automation. However, most existing methods rely on static, one-shot visual inputs and passive perception, lacking the ability to adaptively determine when, whether, and how to observe the interface. We present GUI-Eyes, a reinforcement learning framework for active visual perception in GUI tasks. To acquire more informative observations, the agent learns to make strategic decisions on both whether and how to invoke visual tools, such as cropping or zooming, within a two-stage reasoning process. To support this behavior, we introduce a progressive perception strategy that decomposes decision-making into coarse exploration and fine-grained grounding, coordinated by a two-level policy. In addition, we design a spatially continuous reward function tailored to tool usage, which integrates both location proximity and region overlap to provide dense supervision and alleviate the reward sparsity common in GUI environments. On the ScreenSpot-Pro benchmark, GUI-Eyes-3B achieves 44.8% grounding accuracy using only 3k labeled samples, significantly outperforming both supervised and RL-based baselines. These results highlight that tool-aware active perception, enabled by staged policy reasoning and fine-grained reward feedback, is critical for building robust and data-efficient GUI agents.
|
https://arxiv.org/abs/2601.09770
|
Academic Papers
|
svg
|
8876bceae5ea35059f32e3fa5b391bcd60e813e40a052139c8816e0fc528fc28
|
2026-01-16T00:00:00-05:00
|
PCN-Rec: Agentic Proof-Carrying Negotiation for Reliable Governance-Constrained Recommendation
|
arXiv:2601.09771v1 Announce Type: new Abstract: Modern LLM-based recommenders can generate compelling ranked lists, but they struggle to reliably satisfy governance constraints such as minimum long-tail exposure or diversity requirements. We present PCN-Rec, a proof-carrying negotiation pipeline that separates natural-language reasoning from deterministic enforcement. A base recommender (MF/CF) produces a candidate window of size W, which is negotiated by two agents: a User Advocate optimizing relevance and a Policy Agent enforcing constraints. A mediator LLM synthesizes a top-N slate together with a structured certificate (JSON) describing the claimed constraint satisfaction. A deterministic verifier recomputes all constraints from the slate and accepts only verifier-checked certificates; if verification fails, a deterministic constrained-greedy repair produces a compliant slate for re-verification, yielding an auditable trace. On MovieLens-100K with governance constraints, PCN-Rec achieves a 98.55% pass rate on feasible users (n = 551, W = 80) versus a one-shot single-LLM baseline without verification/repair, while preserving utility with only a 0.021 absolute drop in NDCG@10 (0.403 vs. 0.424); differences are statistically significant (p < 0.05).
|
https://arxiv.org/abs/2601.09771
|
Academic Papers
|
svg
|
464b67b89b2badcc08b35e995381715b42b4447d1ad58681c3f03d9960e8b83c
|
2026-01-16T00:00:00-05:00
|
Antisocial behavior towards large language model users: experimental evidence
|
arXiv:2601.09772v1 Announce Type: new Abstract: The rapid spread of large language models (LLMs) has raised concerns about the social reactions they provoke. Prior research documents negative attitudes toward AI users, but it remains unclear whether such disapproval translates into costly action. We address this question in a two-phase online experiment (N = 491 Phase II participants; Phase I provided targets) where participants could spend part of their own endowment to reduce the earnings of peers who had previously completed a real-effort task with or without LLM support. On average, participants destroyed 36% of the earnings of those who relied exclusively on the model, with punishment increasing monotonically with actual LLM use. Disclosure about LLM use created a credibility gap: self-reported null use was punished more harshly than actual null use, suggesting that declarations of "no use" are treated with suspicion. Conversely, at high levels of use, actual reliance on the model was punished more strongly than self-reported reliance. Taken together, these findings provide the first behavioral evidence that the efficiency gains of LLMs come at the cost of social sanctions.
|
https://arxiv.org/abs/2601.09772
|
Academic Papers
|
svg
|
3457a017822a3fd91bcd794fda7b322730197e1732be636293caced729706853
|
2026-01-16T00:00:00-05:00
|
Enhancing LUT-based Deep Neural Networks Inference through Architecture and Connectivity Optimization
|
arXiv:2601.09773v1 Announce Type: new Abstract: Deploying deep neural networks (DNNs) on resource-constrained edge devices such as FPGAs requires a careful balance among latency, power, and hardware resource usage, while maintaining high accuracy. Existing Lookup Table (LUT)-based DNNs -- such as LogicNets, PolyLUT, and NeuraLUT -- face two critical challenges: the exponential growth of LUT size and inefficient random sparse connectivity. This paper presents SparseLUT, a comprehensive framework that addresses these challenges through two orthogonal optimizations. First, we propose an architectural enhancement that aggregates multiple PolyLUT sub-neurons via an adder, significantly reducing LUT consumption by 2.0x-13.9x and lowering inference latency by 1.2x-1.6x, all while maintaining comparable accuracy. Building upon this foundation, we further introduce a non-greedy training algorithm that optimizes neuron connectivity by selectively pruning less significant inputs and strategically regrowing more effective ones. This training optimization, which incurs no additional area and latency overhead, delivers consistent accuracy improvements across benchmarks -- achieving up to a 2.13% gain on MNIST and 0.94% on Jet Substructure Classification compared to existing LUT-DNN approaches.
|
https://arxiv.org/abs/2601.09773
|
Academic Papers
|
svg
|
938b91f1418f8943bd2bf569391120bc7568592f8c015779eb68b35c533c89b0
|
2026-01-16T00:00:00-05:00
|
The Geometry of Thought: Disclosing the Transformer as a Tropical Polynomial Circuit
|
arXiv:2601.09775v1 Announce Type: new Abstract: We prove that the Transformer self-attention mechanism in the high-confidence regime ($\beta \to \infty$, where $\beta$ is an inverse temperature) operates in the tropical semiring (max-plus algebra). In particular, we show that taking the tropical limit of the softmax attention converts it into a tropical matrix product. This reveals that the Transformer's forward pass is effectively executing a dynamic programming recurrence (specifically, a Bellman-Ford path-finding update) on a latent graph defined by token similarities. Our theoretical result provides a new geometric perspective for chain-of-thought reasoning: it emerges from an inherent shortest-path (or longest-path) algorithm being carried out within the network's computation.
|
https://arxiv.org/abs/2601.09775
|
Academic Papers
|
svg
|
8c092b32907408dfdfc117e6cee3e1797fe9cfe9a52215598edcee8062677e27
|
2026-01-16T00:00:00-05:00
|
TimeSAE: Sparse Decoding for Faithful Explanations of Black-Box Time Series Models
|
arXiv:2601.09776v1 Announce Type: new Abstract: As black box models and pretrained models gain traction in time series applications, understanding and explaining their predictions becomes increasingly vital, especially in high-stakes domains where interpretability and trust are essential. However, most of the existing methods involve only in-distribution explanation, and do not generalize outside the training support, which requires the learning capability of generalization. In this work, we aim to provide a framework to explain black-box models for time series data through the dual lenses of Sparse Autoencoders (SAEs) and causality. We show that many current explanation methods are sensitive to distributional shifts, limiting their effectiveness in real-world scenarios. Building on the concept of Sparse Autoencoder, we introduce TimeSAE, a framework for black-box model explanation. We conduct extensive evaluations of TimeSAE on both synthetic and real-world time series datasets, comparing it to leading baselines. The results, supported by both quantitative metrics and qualitative insights, show that TimeSAE provides more faithful and robust explanations. Our code is available in an easy-to-use library TimeSAE-Lib: https://anonymous.4open.science/w/TimeSAE-571D/.
|
https://arxiv.org/abs/2601.09776
|
Academic Papers
|
svg
|
5f25f42b69479d8cc8fd5d0d47ecd3e2bbba1eec5e19e8b7ff8e35c8d28568be
|
2026-01-16T00:00:00-05:00
|
Improving Chain-of-Thought for Logical Reasoning via Attention-Aware Intervention
|
arXiv:2601.09805v1 Announce Type: new Abstract: Modern logical reasoning with LLMs primarily relies on employing complex interactive frameworks that decompose the reasoning process into subtasks solved through carefully designed prompts or requiring external resources (e.g., symbolic solvers) to exploit their strong logical structures. While interactive approaches introduce additional overhead, hybrid approaches depend on external components, which limit their scalability. A non-interactive, end-to-end framework enables reasoning to emerge within the model itself -- improving generalization while preserving analyzability without any external resources. In this work, we introduce a non-interactive, end-to-end framework for reasoning tasks. We show that introducing structural information into the few-shot prompt activates a subset of attention heads that patterns aligned with logical reasoning operators. Building on this insight, we propose Attention-Aware Intervention (AAI), an inference-time intervention method that reweights attention scores across selected heads identified by their logical patterns. AAI offers an efficient way to steer the model's reasoning toward leveraging prior knowledge through attention modulation. Extensive experiments show that AAI enhances logical reasoning performance across diverse benchmarks and model architectures, while incurring negligible additional computational overhead. Code is available at https://github.com/phuongnm94/aai_for_logical_reasoning.
|
https://arxiv.org/abs/2601.09805
|
Academic Papers
|
svg
|
66b3b5c48e792f396c1a660f7c6d07f04091f13d8870de0c187319a1b16c924a
|
2026-01-16T00:00:00-05:00
|
Diffusion-Driven Deceptive Patches: Adversarial Manipulation and Forensic Detection in Facial Identity Verification
|
arXiv:2601.09806v1 Announce Type: new Abstract: This work presents an end-to-end pipeline for generating, refining, and evaluating adversarial patches to compromise facial biometric systems, with applications in forensic analysis and security testing. We utilize FGSM to generate adversarial noise targeting an identity classifier and employ a diffusion model with reverse diffusion to enhance imperceptibility through Gaussian smoothing and adaptive brightness correction, thereby facilitating synthetic adversarial patch evasion. The refined patch is applied to facial images to test its ability to evade recognition systems while maintaining natural visual characteristics. A Vision Transformer (ViT)-GPT2 model generates captions to provide a semantic description of a person's identity for adversarial images, supporting forensic interpretation and documentation for identity evasion and recognition attacks. The pipeline evaluates changes in identity classification, captioning results, and vulnerabilities in facial identity verification and expression recognition under adversarial conditions. We further demonstrate effective detection and analysis of adversarial patches and adversarial samples using perceptual hashing and segmentation, achieving an SSIM of 0.95.
|
https://arxiv.org/abs/2601.09806
|
Academic Papers
|
svg
|
7801fecc1ad0ba89e6cb980d7e1d5501f1651cfcec13f1355e9ec6293f1b090d
|
2026-01-16T00:00:00-05:00
|
From Dynamic to Lexical: A Comparative Exploration of Scoping Rules in SAS and R
|
arXiv:2601.09808v1 Announce Type: new Abstract: Variable scoping dictates how and where variables are accessible within programming languages, playing a crucial role in code efficiency and organization. This paper examines the distinct scoping rules in SAS and R, focusing on SAS's dynamic scoping and R's lexical scoping. In SAS, dynamic scoping utilizes symbol tables, resolving variables at runtime by dynamically searching through active macro layers. R, in contrast, employs lexical scoping, using environments to resolve variables based on the structure in which functions are defined. Illustrative examples highlight the differences between these scoping strategies, showcasing their impact on code behavior. Additionally, the paper outlines methods for inspecting variables in SAS's symbol tables and R's environments, offering practical insights for debugging and optimization. Strategies for controlling variable scope in both languages are discussed, enhancing code precision and reliability. This exploration equips programmers with critical understanding to optimize variable management, improving their programming practices in SAS and R.
|
https://arxiv.org/abs/2601.09808
|
Academic Papers
|
svg
|
4a46c63e6638e6a0b65cb455448829a874fa0e83761ed6b8f04599c5b88fddf5
|
2026-01-16T00:00:00-05:00
|
QFed: Parameter-Compact Quantum-Classical Federated Learning
|
arXiv:2601.09809v1 Announce Type: new Abstract: Organizations and enterprises across domains such as healthcare, finance, and scientific research are increasingly required to extract collective intelligence from distributed, siloed datasets while adhering to strict privacy, regulatory, and sovereignty requirements. Federated Learning (FL) enables collaborative model building without sharing sensitive raw data, but faces growing challenges posed by statistical heterogeneity, system diversity, and the computational burden from complex models. This study examines the potential of quantum-assisted federated learning, which could cut the number of parameters in classical models by polylogarithmic factors and thus lessen training overhead. Accordingly, we introduce QFed, a quantum-enabled federated learning framework aimed at boosting computational efficiency across edge device networks. We evaluate the proposed framework using the widely adopted FashionMNIST dataset. Experimental results show that QFed achieves a 77.6% reduction in the parameter count of a VGG-like model while maintaining an accuracy comparable to classical approaches in a scalable environment. These results point to the potential of leveraging quantum computing within a federated learning context to strengthen FL capabilities of edge devices.
|
https://arxiv.org/abs/2601.09809
|
Academic Papers
|
svg
|
ef18025d45501c9a6313bb78b7cedc0251f88e22ac70a1eeac602209560e92de
|
2026-01-16T00:00:00-05:00
|
Learning Ecological and Epidemic Processes using Neural ODEs, Kolmogorov-Arnold Network ODEs and SINDy
|
arXiv:2601.09811v1 Announce Type: new Abstract: We consider epidemic and ecological models to investigate their coupled dynamics. Starting with the classical Susceptible-Infected-Recovered (SIR) model for basic epidemic behavior and the predator-prey (Lotka-Volterra, LV) system for ecological interactions, we then combine these frameworks into a coupled Lotka-Volterra-Susceptible-Infected-Susceptible (LVSIS) model. The resulting system consists of four differential equations describing the evolution of susceptible and infected prey and predator populations, incorporating ecological interactions, disease transmission, and spatial dispersal. To learn the underlying dynamics directly from data, we employ several data-driven modeling frameworks: Neural Ordinary Differential Equations (Neural ODEs), Kolmogorov-Arnold Network Ordinary Differential Equations (KANODEs), and Sparse Identification of Nonlinear Dynamics (SINDy). Numerical experiments based on synthetic data are conducted to investigate the learning ability of these models in capturing the epidemic and ecological behavior. We further extend our approach to spatio-temporal models, aiming to uncover hidden local couplings.
|
https://arxiv.org/abs/2601.09811
|
Academic Papers
|
svg
|
6238812d797753652e7663203e882ffc824f1ced2514d94f78b2883b1caa88e3
|
2026-01-16T00:00:00-05:00
|
LCF3D: A Robust and Real-Time Late-Cascade Fusion Framework for 3D Object Detection in Autonomous Driving
|
arXiv:2601.09812v1 Announce Type: new Abstract: Accurately localizing 3D objects like pedestrians, cyclists, and other vehicles is essential in Autonomous Driving. To ensure high detection performance, Autonomous Vehicles complement RGB cameras with LiDAR sensors, but effectively combining these data sources for 3D object detection remains challenging. We propose LCF3D, a novel sensor fusion framework that combines a 2D object detector on RGB images with a 3D object detector on LiDAR point clouds. By leveraging multimodal fusion principles, we compensate for inaccuracies in the LiDAR object detection network. Our solution combines two key principles: (i) late fusion, to reduce LiDAR False Positives by matching LiDAR 3D detections with RGB 2D detections and filtering out unmatched LiDAR detections; and (ii) cascade fusion, to recover missed objects from LiDAR by generating new 3D frustum proposals corresponding to unmatched RGB detections. Experiments show that LCF3D is beneficial for domain generalization, as it turns out to be successful in handling different sensor configurations between training and testing domains. LCF3D achieves significant improvements over LiDAR-based methods, particularly for challenging categories like pedestrians and cyclists in the KITTI dataset, as well as motorcycles and bicycles in nuScenes. Code can be downloaded from: https://github.com/CarloSgaravatti/LCF3D.
|
https://arxiv.org/abs/2601.09812
|
Academic Papers
|
svg
|
a5a736040f57346535a5416843ebfd2ec36f56752bd48a03f31fd1892316be1a
|
2026-01-16T00:00:00-05:00
|
Explainable Deep Learning for Pediatric Pneumonia Detection in Chest X-Ray Images
|
arXiv:2601.09814v1 Announce Type: new Abstract: Background: Pneumonia remains a leading cause of morbidity and mortality among children worldwide, emphasizing the need for accurate and efficient diagnostic support tools. Deep learning has shown strong potential in medical image analysis, particularly for chest X-ray interpretation. This study compares two state-of-the-art convolutional neural network (CNN) architectures for automated pediatric pneumonia detection. Methods: A publicly available dataset of 5,863 pediatric chest X-ray images was used. Images were preprocessed through normalization, resizing, and data augmentation to enhance generalization. DenseNet121 and EfficientNet-B0 were fine-tuned using pretrained ImageNet weights under identical training settings. Performance was evaluated using accuracy, F1-score, Matthews Correlation Coefficient (MCC), and recall. Model explainability was incorporated using Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-agnostic Explanations (LIME) to visualize image regions influencing predictions. Results: EfficientNet-B0 outperformed DenseNet121, achieving an accuracy of 84.6%, F1-score of 0.8899, and MCC of 0.6849. DenseNet121 achieved 79.7% accuracy, an F1-score of 0.8597, and MCC of 0.5852. Both models demonstrated high recall values above 0.99, indicating strong sensitivity to pneumonia detection. Grad-CAM and LIME visualizations showed consistent focus on clinically relevant lung regions, supporting the reliability of model decisions. Conclusions: EfficientNet-B0 provided a more balanced and computationally efficient performance compared to DenseNet121, making it a strong candidate for clinical deployment. The integration of explainability techniques enhances transparency and trustworthiness in AI-assisted pediatric pneumonia diagnosis.
|
https://arxiv.org/abs/2601.09814
|
Academic Papers
|
svg
|
916cc8df2e9799849cc9d0884908bb796a69efb52a61c21e0b2aadf65399dde4
|
2026-01-16T00:00:00-05:00
|
LLM-Based Agentic Systems for Software Engineering: Challenges and Opportunities
|
arXiv:2601.09822v1 Announce Type: new Abstract: Despite recent advancements in Large Language Models (LLMs), complex Software Engineering (SE) tasks require more collaborative and specialized approaches. This concept paper systematically reviews the emerging paradigm of LLM-based multi-agent systems, examining their applications across the Software Development Life Cycle (SDLC), from requirements engineering and code generation to static code checking, testing, and debugging. We delve into a wide range of topics such as language model selection, SE evaluation benchmarks, state-of-the-art agentic frameworks and communication protocols. Furthermore, we identify key challenges and outline future research opportunities, with a focus on multi-agent orchestration, human-agent coordination, computational cost optimization, and effective data collection. This work aims to provide researchers and practitioners with valuable insights into the current forefront landscape of agentic systems within the software engineering domain.
|
https://arxiv.org/abs/2601.09822
|
Academic Papers
|
svg
|
efecccfdb239bf259624b7fa0de19def2a1f55ac4d30c2ab3bd7c1065456cb9f
|
2026-01-16T00:00:00-05:00
|
NanoSD: Edge Efficient Foundation Model for Real Time Image Restoration
|
arXiv:2601.09823v1 Announce Type: new Abstract: Latent diffusion models such as Stable Diffusion 1.5 offer strong generative priors that are highly valuable for image restoration, yet their full pipelines remain too computationally heavy for deployment on edge devices. Existing lightweight variants predominantly compress the denoising U-Net or reduce the diffusion trajectory, which disrupts the underlying latent manifold and limits generalization beyond a single task. We introduce NanoSD, a family of Pareto-optimal diffusion foundation models distilled from Stable Diffusion 1.5 through network surgery, feature-wise generative distillation, and structured architectural scaling jointly applied to the U-Net and the VAE encoder-decoder. This full-pipeline co-design preserves the generative prior while producing models that occupy distinct operating points along the accuracy-latency-size frontier (e.g., 130M-315M parameters, achieving real-time inference down to 20ms on mobile-class NPUs). We show that parameter reduction alone does not correlate with hardware efficiency, and we provide an analysis revealing how architectural balance, feature routing, and latent-space preservation jointly shape true on-device latency. When used as a drop-in backbone, NanoSD enables state-of-the-art performance across image super-resolution, image deblurring, face restoration, and monocular depth estimation, outperforming prior lightweight diffusion models in both perceptual quality and practical deployability. NanoSD establishes a general-purpose diffusion foundation model family suitable for real-time visual generation and restoration on edge devices.
|
https://arxiv.org/abs/2601.09823
|
Academic Papers
|
svg
|
a2920813aeae1f14fd4790da6321766ba0b239ad5e76ca413425f1e39bb35fc9
|
2026-01-16T00:00:00-05:00
|
Eluder dimension: localise it!
|
arXiv:2601.09825v1 Announce Type: new Abstract: We establish a lower bound on the eluder dimension of generalised linear model classes, showing that standard eluder dimension-based analysis cannot lead to first-order regret bounds. To address this, we introduce a localisation method for the eluder dimension; our analysis immediately recovers and improves on classic results for Bernoulli bandits, and allows for the first genuine first-order bounds for finite-horizon reinforcement learning tasks with bounded cumulative returns.
|
https://arxiv.org/abs/2601.09825
|
Academic Papers
|
svg
|
57095d57d9ea62512af8d72b30746f8f7507119ab632625bb68050ee1429f85c
|
2026-01-16T00:00:00-05:00
|
UniHash: Unifying Pointwise and Pairwise Hashing Paradigms for Seen and Unseen Category Retrieval
|
arXiv:2601.09828v1 Announce Type: new Abstract: Effective retrieval across both seen and unseen categories is crucial for modern image retrieval systems. Retrieval on seen categories ensures precise recognition of known classes, while retrieval on unseen categories promotes generalization to novel classes with limited supervision. However, most existing deep hashing methods are confined to a single training paradigm, either pointwise or pairwise, where the former excels on seen categories and the latter generalizes better to unseen ones. To overcome this limitation, we propose Unified Hashing (UniHash), a dual-branch framework that unifies the strengths of both paradigms to achieve balanced retrieval performance across seen and unseen categories. UniHash consists of two complementary branches: a center-based branch following the pointwise paradigm and a pairwise branch following the pairwise paradigm. A novel hash code learning method is introduced to enable bidirectional knowledge transfer between branches, improving hash code discriminability and generalization. It employs a mutual learning loss to align hash representations and introduces a Split-Merge Mixture of Hash Experts (SM-MoH) module to enhance cross-branch exchange of hash representations. Theoretical analysis substantiates the effectiveness of UniHash, and extensive experiments on CIFAR-10, MSCOCO, and ImageNet demonstrate that UniHash consistently achieves state-of-the-art performance in both seen and unseen image retrieval scenarios.
|
https://arxiv.org/abs/2601.09828
|
Academic Papers
|
svg
|
c8744d13587fc9ba0ae413d0e0b211122c66bca2df004a1f2b32b417f0010904
|
2026-01-16T00:00:00-05:00
|
A New Convergence Analysis of Plug-and-Play Proximal Gradient Descent Under Prior Mismatch
|
arXiv:2601.09831v1 Announce Type: new Abstract: In this work, we provide a new convergence theory for plug-and-play proximal gradient descent (PnP-PGD) under prior mismatch where the denoiser is trained on a different data distribution to the inference task at hand. To the best of our knowledge, this is the first convergence proof of PnP-PGD under prior mismatch. Compared with the existing theoretical results for PnP algorithms, our new results removed the need for several restrictive and unverifiable assumptions.
|
https://arxiv.org/abs/2601.09831
|
Academic Papers
|
svg
|
b5dfb04844b91397e5634c28f845117ebdc8d8faaa868db3337dfb29b91bfa69
|
2026-01-16T00:00:00-05:00
|
Adoption and Evolution of Code Style and Best Programming Practices in Open-Source Projects
|
arXiv:2601.09832v1 Announce Type: new Abstract: Following code style conventions in software projects is essential for maintaining overall code quality. Adhering to these conventions improves maintainability, understandability, and extensibility. Additionally, following best practices during software development enhances performance and reduces the likelihood of errors. This paper analyzes 1,036 popular open-source JAVA projects on GITHUB to study how code style and programming practices are adopted and evolve over time, examining their prevalence and the most common violations. Additionally, we study a subset of active repositories on a monthly basis to track changes in adherence to coding standards over time. We found widespread violations across repositories, with Javadoc and Naming violations being the most common. We also found a significant number of violations of the GOOGLE Java Style Guide in categories often missed by modern static analysis tools. Furthermore, repositories claiming to follow code-style practices exhibited slightly higher overall adherence to code-style and best-practices. The results provide valuable insights into the adoption of code style and programming practices, highlighting key areas for improvement in the open-source development community. Furthermore, the paper identifies important lessons learned and suggests future directions for improving code quality in JAVA projects.
|
https://arxiv.org/abs/2601.09832
|
Academic Papers
|
svg
|
fb0f3ea06ca6d77ac5e6cccd9ba2128775b74a9cffa278561fddf30fc053c704
|
2026-01-16T00:00:00-05:00
|
Stable and Explainable Personality Trait Evaluation in Large Language Models with Internal Activations
|
arXiv:2601.09833v1 Announce Type: new Abstract: Evaluating personality traits in Large Language Models (LLMs) is key to model interpretation, comparison, and responsible deployment. However, existing questionnaire-based evaluation methods exhibit limited stability and offer little explainability, as their results are highly sensitive to minor variations in prompt phrasing or role-play configurations. To address these limitations, we propose an internal-activation-based approach, termed Persona-Vector Neutrality Interpolation (PVNI), for stable and explainable personality trait evaluation in LLMs. PVNI extracts a persona vector associated with a target personality trait from the model's internal activations using contrastive prompts. It then estimates the corresponding neutral score by interpolating along the persona vector as an anchor axis, enabling an interpretable comparison between the neutral prompt representation and the persona direction. We provide a theoretical analysis of the effectiveness and generalization properties of PVNI. Extensive experiments across diverse LLMs demonstrate that PVNI yields substantially more stable personality trait evaluations than existing methods, even under questionnaire and role-play variants.
|
https://arxiv.org/abs/2601.09833
|
Academic Papers
|
svg
|
5cae24ea5f1b6e6109aab9fbdc893008ad898d0cb275a75aef873891c673909f
|
2026-01-16T00:00:00-05:00
|
A Risk-Stratified Benchmark Dataset for Bad Randomness (SWC-120) Vulnerabilities in Ethereum Smart Contracts
|
arXiv:2601.09836v1 Announce Type: new Abstract: Many Ethereum smart contracts rely on block attributes such as block.timestamp or blockhash to generate random numbers for applications like lotteries and games. However, these values are predictable and miner-manipulable, creating the Bad Randomness vulnerability (SWC-120) that has led to real-world exploits. Current detection tools identify only simple patterns and fail to verify whether protective modifiers actually guard vulnerable code. A major obstacle to improving these tools is the lack of large, accurately labeled datasets. This paper presents a benchmark dataset of 1,752 Ethereum smart contracts with validated Bad Randomness vulnerabilities. We developed a five-phase methodology comprising keyword filtering, pattern matching with 58 regular expressions, risk classification, function-level validation, and context analysis. The function-level validation revealed that 49% of contracts initially classified as protected were actually exploitable because modifiers were applied to different functions than those containing vulnerabilities. We classify contracts into four risk levels based on exploitability: HIGH_RISK (no protection), MEDIUM_RISK (miner-exploitable only), LOW_RISK (owner-exploitable only), and SAFE (using Chainlink VRF or commit-reveal). Our dataset is 51 times larger than RNVulDet and the first to provide function-level validation and risk stratification. Evaluation of Slither and Mythril revealed significant detection gaps, as both tools identified none of the vulnerable contracts in our sample, indicating limitations in handling complex randomness patterns. The dataset and validation scripts are publicly available to support future research in smart contract security.
|
https://arxiv.org/abs/2601.09836
|
Academic Papers
|
svg
|
8e9d2c10c07e3450570c33857d33aca4424c262c0666ee3be8a4c6dd37ef69b7
|
2026-01-16T00:00:00-05:00
|
Interprofessional and Agile Development of Mobirobot: A Socially Assistive Robot for Pediatric Therapy Across Clinical and Therapeutic Settings
|
arXiv:2601.09838v1 Announce Type: new Abstract: Introduction: Socially assistive robots hold promise for enhancing therapeutic engagement in paediatric clinical settings. However, their successful implementation requires not only technical robustness but also context-sensitive, co-designed solutions. This paper presents Mobirobot, a socially assistive robot developed to support mobilisation in children recovering from trauma, fractures, or depressive disorders through personalised exercise programmes. Methods: An agile, human-centred development approach guided the iterative design of Mobirobot. Multidisciplinary clinical teams and end users were involved throughout the co-development process, which focused on early integration into real-world paediatric surgical and psychiatric settings. The robot, based on the NAO platform, features a simple setup, adaptable exercise routines with interactive guidance, motivational dialogue, and a graphical user interface (GUI) for monitoring and no-code system feedback. Results: Deployment in hospital environments enabled the identification of key design requirements and usability constraints. Stakeholder feedback led to refinements in interaction design, movement capabilities, and technical configuration. A feasibility study is currently underway to assess acceptance, usability, and perceived therapeutic benefit, with data collection including questionnaires, behavioural observations, and staff-patient interviews. Discussion: Mobirobot demonstrates how multiprofessional, stakeholder-led development can yield a socially assistive system suited for dynamic inpatient settings. Early-stage findings underscore the importance of contextual integration, robustness, and minimal-intrusion design. While challenges such as sensor limitations and patient recruitment remain, the platform offers a promising foundation for further research and clinical application.
|
https://arxiv.org/abs/2601.09838
|
Academic Papers
|
svg
|
71155163ebc6f6f9c8eb020a1759b0489affd9f6e44637bd00c9d4679eb33321
|
2026-01-16T00:00:00-05:00
|
Lazy Evaluation: A Comparative Analysis of SAS MACROs and R Functions
|
arXiv:2601.09839v1 Announce Type: new Abstract: Lazy evaluation is a powerful technique that can optimize code execution by deferring evaluations until their results are required, thus enhancing efficiency. In most modern programming languages, like R, lazy evaluation is commonly applied to function arguments. However, the application of lazy evaluation in SAS has not been extensively explored. This paper focuses on the mechanisms of lazy evaluation in SAS MACROs and R functions, offering a comparative analysis of the underlying principles that drive these processes. R's lazy evaluation is driven by a data structure called Promise, which postpones evaluation and does not occupy memory until the value is needed, utilizing a call-by-need strategy. SAS, on the other hand, achieves lazy evaluation through its symbol tables, employing memory to store parameters, and operates on a call-by-name basis. These discrepancies in lazy evaluation strategies can notably impact the results of R functions and SAS MACROs. By examining these distinct approaches, the paper illuminates the impact of lazy evaluation on programming efficiency, supported by illustrative examples. As the shift from SAS to R becomes increasingly prevalent in the pharmaceutical industry, understanding these techniques enables programmers to optimize their code for greater efficacy. This exploration serves as a guide to enhance programming capabilities and performance in both languages.
|
https://arxiv.org/abs/2601.09839
|
Academic Papers
|
svg
|
ded2f522da06a0a816f73cb30fc4be5e9a95e5c0ca86fae9356a5b2206a7c2c5
|
2026-01-16T00:00:00-05:00
|
A pipeline for enabling path-specific causal fairness in observational health data
|
arXiv:2601.09841v1 Announce Type: new Abstract: When training machine learning (ML) models for potential deployment in a healthcare setting, it is essential to ensure that they do not replicate or exacerbate existing healthcare biases. Although many definitions of fairness exist, we focus on path-specific causal fairness, which allows us to better consider the social and medical contexts in which biases occur (e.g., direct discrimination by a clinician or model versus bias due to differential access to the healthcare system) and to characterize how these biases may appear in learned models. In this work, we map the structural fairness model to the observational healthcare setting and create a generalizable pipeline for training causally fair models. The pipeline explicitly considers specific healthcare context and disparities to define a target "fair" model. Our work fills two major gaps: first, we expand on characterizations of the "fairness-accuracy" tradeoff by detangling direct and indirect sources of bias and jointly presenting these fairness considerations alongside considerations of accuracy in the context of broadly known biases. Second, we demonstrate how a foundation model trained without fairness constraints on observational health data can be leveraged to generate causally fair downstream predictions in tasks with known social and medical disparities. This work presents a model-agnostic pipeline for training causally fair machine learning models that address both direct and indirect forms of healthcare bias.
|
https://arxiv.org/abs/2601.09841
|
Academic Papers
|
svg
|
2fc9219d751e88f4f2a1c04948c45c579999f1167a979fa0152751b089994b16
|
2026-01-16T00:00:00-05:00
|
On Fun for Teaching Large Programming Courses
|
arXiv:2601.09842v1 Announce Type: new Abstract: Teaching software development basics to hundreds of students in a frontal setting is cost-efficient and thus still common in universities. However, in a large lecture hall, students can easily get bored, distracted, and disengaged. The frontal setting can also frustrate lecturers since interaction opportunities are limited and hard to scale. Fun activities can activate students and, if well designed, can also help remember and reflect on abstract software development concepts. We present a novel catalogue of ten physical fun activities, developed over years to reflect on basic programming and software development concepts. The catalogue includes the execution of a LA-OLA algorithm as in stadiums, using paper planes to simulate object messages and pointers, and traversing a lecture hall as a tree or a recursive structure. We report our experience of using the activities in a large course with 500+ students three years in a row. We also conducted an interview study with 15 former students of the course and 14 experienced educators from around the globe. The results suggest that the fun activities can enable students to stay focused, remember key concepts, and reflect afterwards. However, keeping the activities concise and clearly linked to the concepts taught seems to be key to their acceptance and effectiveness.
|
https://arxiv.org/abs/2601.09842
|
Academic Papers
|
svg
|
1eb8abe94c6dde75f9e232b8b44e3a03cfc5b7d5f104514e0714787ba8fda45b
|
2026-01-16T00:00:00-05:00
|
Strategies of cooperation and defection in five large language models
|
arXiv:2601.09849v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed to support human decision-making. This use of LLMs has concerning implications, especially when their prescriptions affect the welfare of others. To gauge how LLMs make social decisions, we explore whether five leading models produce sensible strategies in the repeated prisoner's dilemma, which is the main metaphor of reciprocal cooperation. First, we measure the propensity of LLMs to cooperate in a neutral setting, without using language reminiscent of how this game is usually presented. We record to what extent LLMs implement Nash equilibria or other well-known strategy classes. Thereafter, we explore how LLMs adapt their strategies to changes in parameter values. We vary the game's continuation probability, the payoff values, and whether the total number of rounds is commonly known. We also study the effect of different framings. In each case, we test whether the adaptations of the LLMs are in line with basic intuition, theoretical predictions of evolutionary game theory, and experimental evidence from human participants. While all LLMs perform well in many of the tasks, none of them exhibit full consistency over all tasks. We also conduct tournaments between the inferred LLM strategies and study direct interaction between LLMs in games over ten rounds with a known or unknown last round. Our experiments shed light on how current LLMs instantiate reciprocal cooperation.
|
https://arxiv.org/abs/2601.09849
|
Academic Papers
|
svg
|
b5228cee1c8b52c1ff8dad90b691b0afa204ab8654255275345612b155c055af
|
2026-01-16T00:00:00-05:00
|
ViSIL: Unified Evaluation of Information Loss in Multimodal Video Captioning
|
arXiv:2601.09851v1 Announce Type: new Abstract: Multimodal video captioning condenses dense footage into a structured format of keyframes and natural language. By creating a cohesive multimodal summary, this approach anchors generative AI in rich semantic evidence and serves as a lightweight proxy for high-efficiency retrieval. However, traditional metrics like BLEU or ROUGE fail to quantify information coverage across disparate modalities, such as comparing a paragraph of text to a sequence of keyframes. To address this, we propose the Video Summary Information Loss (ViSIL) score, an information-theoretic framework that quantifies the video information not captured by a summary via vision-language model (VLM) inference. By measuring the information loss, ViSIL is a unified metric that enables direct comparison across multimodal summary formats despite their structural discrepancies. Our results demonstrate that ViSIL scores show a statistically significant correlation with both human and VLM performance on Video Question Answering (VQA) tasks. ViSIL also enables summary selection to optimize the trade-off between information loss and processing speed, establishing a Pareto-optimal frontier that outperforms text summaries by $7\%$ in VQA accuracy without increasing processing load.
|
https://arxiv.org/abs/2601.09851
|
Academic Papers
|
svg
|
7512d88862c4453b5070dca3a4e5a0ef22f0ce791760e8b42c0965d9bc990080
|
2026-01-16T00:00:00-05:00
|
Bears, all bears, and some bears. Language Constraints on Language Models' Inductive Inferences
|
arXiv:2601.09852v1 Announce Type: new Abstract: Language places subtle constraints on how we make inductive inferences. Developmental evidence by Gelman et al. (2002) has shown children (4 years and older) to differentiate among generic statements ("Bears are daxable"), universally quantified NPs ("all bears are daxable") and indefinite plural NPs ("some bears are daxable") in extending novel properties to a specific member (all > generics > some), suggesting that they represent these types of propositions differently. We test if these subtle differences arise in general purpose statistical learners like Vision Language Models, by replicating the original experiment. On tasking them through a series of precondition tests (robust identification of categories in images and sensitivities to all and some), followed by the original experiment, we find behavioral alignment between models and humans. Post-hoc analyses on their representations revealed that these differences are organized based on inductive constraints and not surface-form differences.
|
https://arxiv.org/abs/2601.09852
|
Academic Papers
|
svg
|
c91d109e52f96035f5e8f5d54d838d6c3b6f1172828ffd449e02a7d0224e26cb
|
2026-01-16T00:00:00-05:00
|
MedRedFlag: Investigating how LLMs Redirect Misconceptions in Real-World Health Communication
|
arXiv:2601.09853v1 Announce Type: new Abstract: Real-world health questions from patients often unintentionally embed false assumptions or premises. In such cases, safe medical communication typically involves redirection: addressing the implicit misconception and then responding to the underlying patient context, rather than the original question. While large language models (LLMs) are increasingly being used by lay users for medical advice, they have not yet been tested for this crucial competency. Therefore, in this work, we investigate how LLMs react to false premises embedded within real-world health questions. We develop a semi-automated pipeline to curate MedRedFlag, a dataset of 1100+ questions sourced from Reddit that require redirection. We then systematically compare responses from state-of-the-art LLMs to those from clinicians. Our analysis reveals that LLMs often fail to redirect problematic questions, even when the problematic premise is detected, and provide answers that could lead to suboptimal medical decision making. Our benchmark and results reveal a novel and substantial gap in how LLMs perform under the conditions of real-world health communication, highlighting critical safety concerns for patient-facing medical AI systems. Code and dataset are available at https://github.com/srsambara-1/MedRedFlag.
|
https://arxiv.org/abs/2601.09853
|
Academic Papers
|
svg
|
704246adde3cca462a8bbe2c33ff9f9e06d58353cf55460fb27d507d8b3d0ac1
|
2026-01-16T00:00:00-05:00
|
Thinking Long, but Short: Stable Sequential Test-Time Scaling for Large Reasoning Models
|
arXiv:2601.09855v1 Announce Type: new Abstract: Sequential test-time scaling is a promising training-free method to improve large reasoning model accuracy, but as currently implemented, significant limitations have been observed. Inducing models to think for longer can increase their accuracy, but as the length of reasoning is further extended, it has also been shown to result in accuracy degradation and model instability. This work presents a novel sequential test-time scaling method, Min-Seek, which improves model accuracy significantly over a wide range of induced thoughts, stabilizing the accuracy of sequential scaling, and removing the need for reasoning length fine-tuning. Beyond improving model accuracy over a variety of reasoning tasks, our method is inherently efficient, as only the KV pairs of one additional induced thought are kept in the KV cache during reasoning. With a custom KV cache which stores keys without position embeddings, by dynamically encoding them contiguously before each new generated thought, our method can continue to reason well beyond a model's maximum context length, and under mild conditions has linear computational complexity.
|
https://arxiv.org/abs/2601.09855
|
Academic Papers
|
svg
|
b6d564e15edeec36ecc2e30d8b0376d42a4a88ea533b1451bbad025eb36e3ae5
|
2026-01-16T00:00:00-05:00
|
How Human Motion Prediction Quality Shapes Social Robot Navigation Performance in Constrained Spaces
|
arXiv:2601.09856v1 Announce Type: new Abstract: Motivated by the vision of integrating mobile robots closer to humans in warehouses, hospitals, manufacturing plants, and the home, we focus on robot navigation in dynamic and spatially constrained environments. Ensuring human safety, comfort, and efficiency in such settings requires that robots are endowed with a model of how humans move around them. Human motion prediction around robots is especially challenging due to the stochasticity of human behavior, differences in user preferences, and data scarcity. In this work, we perform a methodical investigation of the effects of human motion prediction quality on robot navigation performance, as well as human productivity and impressions. We design a scenario involving robot navigation among two human subjects in a constrained workspace and instantiate it in a user study ($N=80$) involving two different robot platforms, conducted across two sites from different world regions. Key findings include evidence that: 1) the widely adopted average displacement error is not a reliable predictor of robot navigation performance and human impressions; 2) the common assumption of human cooperation breaks down in constrained environments, with users often not reciprocating robot cooperation, and causing performance degradations; 3) more efficient robot navigation often comes at the expense of human efficiency and comfort.
|
https://arxiv.org/abs/2601.09856
|
Academic Papers
|
svg
|
390fbb43b095709deda3641b4a72a9142922593c8c9272b9d075719c598cd5ae
|
2026-01-16T00:00:00-05:00
|
OUTLINEFORGE: Hierarchical Reinforcement Learning with Explicit States for Scientific Writing
|
arXiv:2601.09858v1 Announce Type: new Abstract: Scientific paper generation requires document-level planning and factual grounding, but current large language models, despite their strong local fluency, often fail in global structure, input coverage, and citation consistency. We present a reinforcement learning framework that casts scientific outline construction as a long-horizon planning problem over hierarchical document structures. Our approach models edit evolving outlines through structured actions, enabling the system to incrementally build a complete scientific manuscript. To support effective and stabilize learning,we introduce a two-stage optimization procedure consisting of (i) backward outline reconstruction from partial plans to enforce global structural consistency, and (ii) forward value-guided reinforcement learning with rewards explicitly modeling scientific correctness, discourse coherence, and citation fidelity. In addition, We further introduce a benchmark for scientific paper generation that evaluates document planning, input utilization, reference faithfulness, outline organization, and content-level factual accuracy. Our results show consistent improvements over strong neural and LLM baselines, particularly in long-range structural coherence and citation reliability.
|
https://arxiv.org/abs/2601.09858
|
Academic Papers
|
svg
|
5bc525c3d57f65d854aff13c6d8e2443455a1195789e24ca27b61a0df5052e67
|
2026-01-16T00:00:00-05:00
|
Breaking the Limits of Open-Weight CLIP: An Optimization Framework for Self-supervised Fine-tuning of CLIP
|
arXiv:2601.09859v1 Announce Type: new Abstract: CLIP has become a cornerstone of multimodal representation learning, yet improving its performance typically requires a prohibitively costly process of training from scratch on billions of samples. We ask a different question: Can we improve the performance of open-weight CLIP models across various downstream tasks using only existing self-supervised datasets? Unlike supervised fine-tuning, which adapts a pretrained model to a single downstream task, our setting seeks to improve general performance across various tasks. However, as both our experiments and prior studies reveal, simply applying standard training protocols starting from an open-weight CLIP model often fails, leading to performance degradation. In this paper, we introduce TuneCLIP, a self-supervised fine-tuning framework that overcomes the performance degradation. TuneCLIP has two key components: (1) a warm-up stage of recovering optimization statistics to reduce cold-start bias, inspired by theoretical analysis, and (2) a fine-tuning stage of optimizing a new contrastive loss to mitigate the penalization on false negative pairs. Our extensive experiments show that TuneCLIP consistently improves performance across model architectures and scales. Notably, it elevates leading open-weight models like SigLIP (ViT-B/16), achieving gains of up to +2.5% on ImageNet and related out-of-distribution benchmarks, and +1.2% on the highly competitive DataComp benchmark, setting a new strong baseline for efficient post-pretraining adaptation.
|
https://arxiv.org/abs/2601.09859
|
Academic Papers
|
svg
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.