as evidence retrieval, extraction, synthesis, and summarization, as well as evidence adoption and evidence-based research, such as question answering, clinical trial design and identification, and other cutting-edge studies across various clinical specialties. Furthermore, we outline key benchmarks to facilitate the development of future NLP models. Finally, we explore several potential avenues for future research. To better support both clinicians and researchers in making more informed clinical decisions and producing more comprehensive review literature, we have made these resources publicly available at https://github.com/bionlplab/awesome-nlp-in-ebm.

arXiv:2505.22280v1 [cs.CL] 28 May 2025

2 Scope and Literature Selection

Our scoping review adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA, https://www.prisma-statement.org/) guidelines, as illustrated in Figure 1.

2.1 Information sources

We searched 4 databases: PubMed (https://pubmed.ncbi.nlm.nih.gov/), IEEE Xplore (https://ieeexplore.ieee.org/), ACM Digital Library (https://dl.acm.org/), and ACL Anthology (https://aclanthology.org/). The search covered studies from the past 5 years, spanning 2019 to 2024.

2.2 Search strategy

Our search strategy was designed to capture the most relevant studies at the intersection of NLP and EBM (Supplementary File A.3). We targeted key NLP concepts and technologies with terms such as 'natural language processing', 'language model', 'large language model', 'computational linguistics', 'information extraction', 'information retrieval', 'clinical trial retrieval', 'text summarization', 'question answering', 'sentence segmentation', 'named entity recognition', 'tokenization', and abbreviations such as 'NLP' and 'LLM'. In the domain of EBM, we included terms such as 'Evidence-Based Medicine', 'Evidence-Based Practice', 'Clinical Trial', and their abbreviations 'EBM' and 'EBP', likewise limited to appearances in the title or abstract. We used Boolean operators to combine any term from the NLP domain with any term from the EBM domain in our search.
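The Boolean combination described above (any NLP-domain term AND any EBM-domain term, restricted to title or abstract) can be sketched programmatically. This is an illustrative sketch only: the term lists mirror those quoted in the text, and the PubMed-style `[tiab]` field tag is an assumption about syntax; the full strategy is given in Supplementary File A.3.

```python
# Illustrative reconstruction of the search strategy: an OR-group of NLP terms
# combined with an OR-group of EBM terms via AND, restricted to title/abstract.
# The [tiab] field tag follows PubMed conventions; other databases differ.

NLP_TERMS = [
    "natural language processing", "language model", "large language model",
    "computational linguistics", "information extraction", "information retrieval",
    "clinical trial retrieval", "text summarization", "question answering",
    "sentence segmentation", "named entity recognition", "tokenization",
    "NLP", "LLM",
]
EBM_TERMS = [
    "Evidence-Based Medicine", "Evidence-Based Practice", "Clinical Trial",
    "EBM", "EBP",
]

def or_group(terms, field="tiab"):
    """Join terms into one parenthesized OR group with a field restriction."""
    return "(" + " OR ".join(f'"{t}"[{field}]' for t in terms) + ")"

def build_query(nlp_terms, ebm_terms):
    """AND the two OR groups together, as in the strategy described above."""
    return f"{or_group(nlp_terms)} AND {or_group(ebm_terms)}"

query = build_query(NLP_TERMS, EBM_TERMS)
```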
2.3 Study selection and metadata extraction

The references of all eligible studies were imported into Covidence (https://www.covidence.org), and duplicates were removed. We then screened the articles by title and abstract. Inclusion criteria were: (1) studies published in English, (2) research applying NLP techniques specifically for EBM, and (3) studies focusing on applications for humans. Exclusion criteria were: (1) articles unrelated to NLP for EBM, (2) non-English publications, and (3) secondary literature such as systematic reviews, retracted papers, survey papers, case studies, and descriptive papers lacking experimental results.

Figure 1: PRISMA flow diagram. Of the records screened, 386 were excluded at title/abstract screening (non-NLP studies: 19; not relevant research area: 267; not EBM studies: 71; secondary literature: 26; not English: 3) and 88 at full-text review (misaligned objectives: 52; lack of EBM tasks: 36), leaving 129 included studies.

After the screening, the
metadata was extracted from each paper, including models, diseases, tasks involved, results, and limitations. Two annotators cross-verified the study selection and metadata extraction processes and consulted a third in cases of disagreement.

2.4 Study Statistics

From an initial pool of 601 papers retrieved from databases and 9 additional sources, we removed 8 duplicates. Subsequently, 386 papers were excluded during the initial screening based on predefined exclusion criteria, and 88 more were removed during full-text screening due to misaligned objectives or lack of relevance to EBM tasks. Ultimately, 129 studies met the inclusion criteria and form the basis of this review, with detailed metadata provided in Supplementary Table 1.

Figure 2 illustrates the distribution of research papers across different years (2019-2024) and their corresponding NLP tasks. The number of papers has grown rapidly over the years, peaking in 2023. The most common tasks throughout the years are Entity Extraction, Classification, and Evaluation, showing their foundational role in NLP for EBM research. Emerging tasks like Question Answering and Quality Assessment have appeared more prominently in recent years, reflecting evolving research directions.

Figure 2: Distribution of papers in different EBM tasks over time. The color schema is the same as in Supplementary Table 1.

3 NLP Techniques for EBM

The entire EBM process consists of five steps, commonly referred to as the '5A's: Ask, Acquire, Appraise, Apply, and Assess (Ratnani et al., 2023). NLP can be leveraged at each step to enhance the process (Table 1).
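The mapping from the 5A cycle to NLP tasks (cf. Table 1) can be captured as a simple lookup, which is convenient when orchestrating an EBM support pipeline. The structure below is purely illustrative; the names are taken from the table, not from any existing library.

```python
# The 5A EBM cycle mapped to the NLP tasks surveyed in this paper (cf. Table 1).
EBM_NLP_TASKS = {
    "Ask": ["question answering", "information retrieval"],
    "Acquire": ["named entity recognition and normalization",
                "relation extraction"],
    "Appraise": ["quality assessment", "evidence ranking and screening",
                 "evidence synthesis", "evidence summarization"],
    "Apply": ["clinical trial identification and design",
              "question answering", "domain-specific applications"],
    "Assess": ["clinical trial identification and design",
               "question answering", "domain-specific applications"],
}

def tasks_for(step):
    """Look up the NLP tasks associated with one step of the EBM cycle."""
    return EBM_NLP_TASKS[step]
```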
For example, in the Ask step, clinicians or patients formulate precise clinical questions to address specific healthcare concerns. During the Acquire step, NLP can be employed to extract evidence, often leveraging the PICO framework. In the Appraise step, NLP tools can assist in evaluating and ranking the quality, validity, and relevance of the retrieved information to ensure its applicability to clinical decision-making. For the Apply and Assess steps, NLP can streamline the design and identification of relevant clinical trials and facilitate their integration into practice, enabling continuous assessment and refinement of patient care strategies. Detailed trends and advancements for each step of the EBM process are discussed in the following sections.

| EBM cycle | Description | NLP tasks |
|---|---|---|
| Ask | Search & select studies | Question answering, Information retrieval |
| Acquire | Collect data | Named entity recognition and normalization, Relation extraction |
| Appraise | Examine relevance, validity, and results | Quality assessment, Evidence ranking and screening, Evidence synthesis, Evidence summarization |
| Apply & Assess | Apply EBM in practice and research and evaluate their effectiveness | Clinical trial identification and design, Question answering, Domain-specific applications |

Table 1: Mapping of EBM cycle to corresponding NLP tasks.

4 Ask - Searching & Selecting Studies

EBM can help researchers and clinicians draft a successful systematic review. After the scope and questions have been determined, the first step is to search for studies to include in the review and ensure they remain up to date.

This step is typically achieved using NLP-based information retrieval techniques, which extract relevant information from large text corpora based on user queries. Early heuristic methods involved structured, keyword-based queries to retrieve articles from repositories like MEDLINE or PubMed. These methods, while foundational, are limited by the high cost of expert annotation, maintenance, and domain sensitivity (Névéol et al., 2011). Despite these limitations, recent methods often rely on predefined rule-based strategies, e.g., SR[pt] and CQrs (Navarro-Ruan and Haynes, 2022), to filter and compare the retrieved results for systematic reviews. In addition, while statistical machine learning and context-aware models (Kamath et al., 2021; Samuel et al., 2021) have been widely adopted, they often lack scalability and struggle with less representative text embeddings.

Recent advancements lean towards transformer-based deep learning frameworks (Ramprasad et al., 2023a; Jin et al., 2022) due to their scalability and their ability to integrate medical ontologies, improving domain-specific text representation through self-supervised pretraining. For example, Lokker et al. (2023) used BioBERT's (Lee et al., 2020) embeddings and attention mechanisms to improve query representation and biomedical literature retrieval in clinical practice. Furthermore, the integration of generative AI models has advanced literature retrieval despite challenges like hallucination. For example, Gwon et al. (2024) compared Microsoft Bing AI and ChatGPT in accelerating the systematic literature search for a clinical review on Peyronie disease treatment, finding that both can speed up the search process.

5 Acquire - Collecting Data

EBM is designed to identify all studies relevant to the research questions and synthesize data regarding the study design, risk of bias, and results. Therefore, the findings of EBM heavily depend on decisions about which data from these studies are presented and analyzed.
The data collected should be accurate, complete, and accessible for future review, updates, and data-sharing purposes. Here we describe NLP approaches used to extract data directly from journal articles and other study reports.

5.1 Entity Extraction and Normalization

Initially, entity (e.g., PICO) extraction relied on rule-based approaches, which utilize predefined lexical, syntactic, and contextual rules for extracting entities from clinical trial data (Chen et al., 2019c; Borchert et al., 2022). These methods are simple, transparent, and customizable, making them practical for high-precision tasks in structured contexts. Although they face challenges with complex or ambiguous data, their interpretability and ease of adaptation remain valuable for PICO extraction (Dhrangadhariya and Müller, 2023).

RNN/LSTM-based frameworks lack long-term memory capabilities. Nevertheless, they have been used for sequential sentence classification to enhance context utilization and improve classification accuracy in unstructured or less structured medical abstracts (Jin and Szolovits, 2018).

The current trend is towards the dominance of transformer-based frameworks due to their domain-aware pretraining benefits. For instance, models such as SciBERT and PubMedBERT have been specifically developed for extracting 'Intervention' (the 'I' in PICO) (Tsubota et al., 2022), and SrBERT (Aum and Choe, 2021) for classifying articles into "included" or "excluded" categories based on predefined inclusion criteria.

5.2 Relation Extraction

Following the identification of PICO elements, relation extraction approaches can be used to link these elements
within studies.

Initially, rule-based and machine-learning methods were used to extract meaningful relationships from medical literature (Alodadi and Janeja, 2019; Borchert et al., 2022). By 2021, transformative methodologies were developed, integrating deep learning frameworks like BERT and Argument Mining (AM). For example, srBERT (Aum and Choe, 2021) identified key elements and defined interrelations from the titles of articles. Stylianou and Vlahavas (2021) classified the relationships between argumentative components within the texts, such as claims and evidence, labeling the relationships as 'supporting' or 'opposing'.

In the systematic review process, understanding the connections between different study results can influence the review outcomes. Beyond systematic reviews, however, automated relation extraction has shifted towards more structured approaches, such as schema-based relation extraction. For example, Sanchez-Graillet et al. (2022) utilized a richly annotated corpus that aligns with the C-TrO ontology. Complementing these advances, graph-based approaches offer a novel way to encode complex relationships between clinical entities. A knowledge graph is a structured representation of information in which entities (e.g., symptoms, treatments, drugs) are represented as nodes and their relationships as edges. For example, a knowledge graph was used to organize and visualize relationships among clinical trial entities such as symptoms, treatments, and drug outcomes by structuring data into nodes and edges (Pan et al., 2021).

6 Appraise, Synthesize, and Summarize Evidence

This task screens the included studies for risk of bias and appraises them for quality to ensure that healthcare decisions are informed by the most reliable and relevant evidence.
Once the appraisal is complete, the next step is synthesizing evidence by combining findings from multiple studies, often using meta-analyses. Finally, these synthesized insights are summarized into concise, actionable conclusions.

6.1 Quality Assessment

Developing tools to assess evidence is crucial in EBM. One example is the fully automated tool by Brassey et al. (2021), which combines machine learning and rule-based techniques. It assessed evidence from randomized clinical trials and systematic reviews via sentiment analysis, indications of bias, and sample size calculation, and used these signals to estimate the potential effectiveness of the intervention. In addition, deep learning models such as BERT (Devlin et al., 2019) have been used to evaluate the quality of evidence by analyzing article titles and abstracts. For example, variants such as BioBERT, BlueBERT, and BERT-Base were fine-tuned to classify articles based on their adherence to methodological quality criteria (Lokker et al., 2023).

6.2 Evidence Ranking and Screening

After quality assessment, the next step is to screen and rank the evidence. Several ranking methods are available, with statistics-based methods being among the earliest used. For example, Norman et al. (2019b) developed a method to rank references by their likelihood of relevance. Compared with randomized screening, their study showed that prioritization methods (with technological assistance) allow fewer studies to be screened while still producing reliable results, effectively reducing both the time and cost of the screening process. Rybinski et al. (2020a)
introduced the platform A2A, which used Okapi Best Match 25 (BM25), which scores documents based on term frequency and document length, and Divergence from Randomness (DFR), which quantifies informativeness as the divergence of a term's distribution from randomness, for document ranking. Machine learning methods have also been applied: Rybinski et al. (2020b) designed a search system with a simple query formulation strategy for initial ranking and used pre-trained BERT models (SciBERT, BioBERT, and BlueBERT) for re-ranking in clinical trial searches, which improved robustness.

6.3 Evidence Synthesis

Evidence synthesis combines data from included studies to draw conclusions about a body of evidence. While the most common method is meta-analysis, which statistically combines results from studies to estimate overall effect sizes, NLP-based approaches have also been applied to synthesize studies or findings. Mutinda et al. (2022b) proposed a method to reproduce meta-analyses, computing summary statistics (e.g., risk ratio) and visualizing results with forest plots by extracting and normalizing PICO elements from breast cancer randomized controlled trials. However, this method was built on a small amount of data. Górska and Tacconelli (2024) developed a system to continuously update summary statistics from key publications, further improving the meta-analysis process. However, only binary outcomes were supported in both methods, limiting their applicability to broader meta-analysis needs. Beyond meta-analysis, EvidenceMap (Kang et al., 2023) effectively synthesized medical findings by employing a structured and hierarchical representation comprising Entities, Propositions, and Maps, which enhances the interpretability and retrievability of evidence.
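The kind of summary statistic these systems compute from extracted trial data, such as a pooled risk ratio for binary outcomes, follows standard fixed-effect, inverse-variance meta-analysis formulas. The sketch below illustrates the computation only; it is not the pipeline of Mutinda et al. (2022b), and the trial counts are invented.

```python
import math

def log_risk_ratio(a, n1, c, n2):
    """Log risk ratio and its approximate variance for one two-arm trial:
    a events out of n1 participants (intervention), c out of n2 (control)."""
    rr = (a / n1) / (c / n2)
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2
    return math.log(rr), var

def pooled_risk_ratio(studies):
    """Fixed-effect, inverse-variance pooled risk ratio across studies.
    `studies` is a list of (a, n1, c, n2) tuples, one per trial."""
    logs_vars = [log_risk_ratio(*s) for s in studies]
    weights = [1 / v for _, v in logs_vars]
    pooled_log = sum(w * lr for (lr, _), w in zip(logs_vars, weights)) / sum(weights)
    return math.exp(pooled_log)

# Two hypothetical trials, each with fewer events in the intervention arm,
# so the pooled risk ratio should fall below 1.
rr = pooled_risk_ratio([(10, 100, 20, 100), (8, 120, 15, 110)])
```

Per-study log risk ratios and variances are exactly what a forest plot displays, so the same intermediate values feed the visualization step described above.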
6.4 Evidence Summarization

Finally, EBM must present a clear statement of findings or conclusions to help people make better-informed decisions and to increase usability. This summary should include information on all important outcomes, evidence certainty, and the intervention's desirable and undesirable consequences.

From an NLP perspective, evidence summarization uses extractive and abstractive strategies. Extractive summarization selects the most important sentences from the original text. Gulden et al. (2019) generated a new dataset from clinicaltrials.gov to test various algorithms (e.g., LexRank, TextRank, and Latent Semantic Analysis), identifying TextRank as the best performer in creating summaries directly from the source texts without altering the original wording. However, these algorithms suffered from inefficiency and high computational complexity when processing large datasets. Sarker et al. (2020) developed a lightweight system that leverages Maximal Marginal Relevance (MMR) and word embeddings pre-trained on PubMed and PMC texts to integrate semantic relevance and reduce redundancy. Similarly, Xie et al. (2022) proposed a knowledge infusion training framework called KeBioSum, which incorporated PICO into pre-trained language models (PLMs). It utilized lightweight knowledge adapters to reduce computational costs while improving semantic understanding and contextual representation.

Abstractive summarization focuses on the most critical information and creates new text for the summary, usually with more advanced techniques. Lalitha et al. (2023) implemented neural models such as T5 (Text-to-Text Transfer Transformer), BART (Bidirectional and Auto-Regressive Transformers), and PEGASUS (Pre-training with Extracted Gap-sentences for
Abstractive Summarization Sequence-to-sequence) to address the challenge of obtaining useful information from a vast number of clinical documents.

With the increased demand for user-interactive summarization, Ramprasad et al. (2023b) presented TrialsSummarizer, a system that helps automate summarizing the most relevant evidence in a set of randomized controlled trials via a multi-headed architecture, enabling each token in the generated summary to be explicitly linked to specific input aspects (e.g., population, intervention, or outcome). It introduces template-infilling capabilities, allowing users to correct or adjust generated summaries dynamically. Moreover, the application of LLMs has evolved to address these tasks with growing precision and depth. Hamed et al. (2023) explored ChatGPT's capabilities in synthesizing diabetic ketoacidosis (DKA) guidelines by comparing, integrating, and abstracting content. Unlu et al. (2024) employed a Retrieval-Augmented Generation (RAG) framework with GPT-4 to generate responses to clinical trial eligibility questions based on retrieved patient data. Furthermore, TriSum (Jiang et al., 2024) stands out by using structured rationale-based abstractive summarization, in which large language models generate aspect-triple rationales that are distilled into smaller models through a dual-scoring selection mechanism and curriculum learning.

7 Apply and Assess: Adoption, Refinement, and Research

Transitioning from evidence generation and synthesis, the next critical step is its adoption and refinement, facilitated by an 'Evidence-based Research' approach. Adoption and refinement are crucial for consistently reassessing and enhancing clinical evidence, particularly when existing evidence gaps lead to unmet needs of clinicians and patients. Evidence-based research further ensures that these gaps inform future clinical studies.
Here, we summarize several applications identified in our literature review that align with this topic.

7.1 Specialty-specific adoption

In addition to general applications, we observed that NLP for EBM has been applied within specific medical specialties. Here, we summarize common specialties featured in the papers, such as oncology for conditions like non-small cell lung cancer (NSCLC) and cardiovascular conditions such as heart failure. Other diseases are detailed in Supplementary Table 1.

Oncology. Cancer is a central topic in EBM, as it demands continuous integration of new research findings to guide evidence-based decisions for accurate diagnosis, effective treatment, and long-term patient management. Saiz et al. (2021) introduced Watson Oncology Literature Insights (WOLI), an AI system that automatically identifies, prioritizes, and extracts relevant oncology research, facilitating the translation of evidence into clinical practice. Similarly, the Clinical Trial Matching (CTM) system (Alexander et al., 2020) was evaluated at a cancer center in Australia with an overall accuracy of 92% for screening lung cancer patients. These tools highlight how AI-driven systems are increasingly embedded in hospital workflows.

Cardiology. Cardiology demands robust evidence to support decisions due to the high prevalence of cardiovascular disease and the critical consequences of diagnostic errors, which can result in severe harm or loss of life. For example, the hybrid model proposed by Tun et al. (2023) automates patient eligibility assessment directly within clinical workflows. In a real-world application on a dataset of 40,000 patients across several clinical care pathways, such as heart failure
with reduced and preserved ejection fraction and atrial fibrillation, the deployed model achieved an accuracy of 87.3%.

7.2 Clinical trial design and identification

Not all medical specialties are fully addressed by current research, and even in those with significant focus, the integration of findings into real-world guidance remains insufficient. Automating clinical trial procedures is critical for rapid reaction to pandemics or public health emergencies. A crucial step in advancing future clinical trials or experiments is the design phase, where NLP plays a pivotal role. Effective clinical trial design involves structuring and optimizing trials to ensure they align with patient needs and research objectives. NLP tools can enhance the efficiency of clinical trial design by facilitating the automated matching of patients to suitable trials and ensuring trials are aligned with the right patient cohorts. This capability supports a more effective and timely deployment of research resources in emergency health situations.

Eligibility matching and cohort identification is the process of matching patients to clinical trials based on their eligibility and identifying groups of patients (cohorts) who meet specific inclusion criteria. There are several applications. Vydiswaran et al. (2019) proposed a hybrid approach to identify patient cohorts for clinical trials, combining pattern-based, knowledge-intensive, and feature-weighting techniques to determine whether patients meet specific selection criteria. Segura-Bedmar and Raez (2019) explored the use of deep learning models for cohort selection, framing it as a multi-label classification task. By employing CNNs and RNNs to process free-text eligibility criteria, this method learns representations automatically and directly from text. Building on these foundations, Liu et al.
(2022) developed Criteria2Query (C2Q) to extract and transform free-text eligibility criteria into structured, queryable data for cohort identification. More recently, Murcia et al. (2024) proposed the "TrialMatcher" algorithm to match veterans to clinical trials using existing information within EHRs. It extracted attributes from patient profiles and eligibility criteria from trial profiles and compared them using the Sørensen-Dice Index (SDI). These applications show the potential of streamlining recruitment and improving future clinical trial design. Researchers are now adding LLMs to these studies: LLMs like GPT-3.5 and GPT-4 enhance clinical trial workflows by processing complex natural language data, such as patient profiles and trial eligibility criteria. Examples include AutoTrial (Wang et al., 2023b), which focuses on trial design, specifically generating eligibility criteria using multi-step reasoning and hybrid prompting, and TrialGPT (Jin et al., 2024), which implements a comprehensive framework for large-scale patient-trial matching, emphasizing real-world deployment and time-saving efficiency.

7.3 Drug repurposing

Another frontier application in this field is drug repurposing, which utilizes NLP to analyze existing medical literature and uncover new therapeutic applications for established drugs. By automating the analysis of large datasets such as clinical trials and research papers, NLP speeds up the identification of potential treatments, offering a faster and more cost-effective alternative to traditional drug discovery methods. During the COVID-19 pandemic, there was an urgent
need for drugs for treatment. To quickly meet this need, the CovidX Network Algorithm (Gates and Hamed, 2020) was developed, which utilized NLP to analyze the vast COVID-19 biomedical literature. It ranked potential drug candidates for repurposing, highlighting NLP's power in automating and accelerating evidence synthesis during critical times. Alzheimer's disease (AD), a progressive neurodegenerative disorder, remains a major global health challenge with limited treatment options and no definitive cure. Despite significant investment in drug development, the failure rate for Alzheimer's-specific drugs in clinical trials remains exceedingly high. To address this, Daluwatumulle et al. (2022) employed knowledge graph embeddings to predict AD drug candidates by linking textual data and generating hypotheses from unstructured information.

7.4 Question Answering

While EBM is taught according to the five steps (ask, acquire, appraise, apply, and assess), a recent trend, driven by advances in LLMs, treats the entire process as a question-answering (QA) task. Xie et al. (2023) experimented with consulting ChatGPT on rhinoplasty questions, using its pre-learned knowledge and text summarization to respond, testing the potential of LLMs to offer valuable feedback. Moreover, Mohammed and Fiaidhi (2024) added bootstrapping to BioBERT and BioGPT so that they could better understand PICO questions from physicians and find potential answers in publications. Expanding on this trend, Chuan and Morgan (2021) introduced the chatbot SOPHIA, which helps users understand their eligibility for clinical trials by answering questions based on trial criteria. Addressing rare cancers, Jang et al. (2022) fine-tuned SapBERT for QA and NER tasks, ultimately summarizing potential drugs ranked by relevance, such as bevacizumab, temozolomide, lomustine, and nivolumab.
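Several of the matching systems in Section 7.2 score patient-trial overlap with simple set similarity; for example, TrialMatcher uses the Sørensen-Dice Index over extracted attributes. The index itself is easy to state. The sketch below is illustrative only, and the attribute vocabulary is invented, not drawn from any EHR schema.

```python
def sorensen_dice(set_a, set_b):
    """Sørensen-Dice index: 2*|A ∩ B| / (|A| + |B|), ranging over [0, 1]."""
    if not set_a and not set_b:
        return 1.0  # two empty sets are conventionally treated as identical
    return 2 * len(set_a & set_b) / (len(set_a) + len(set_b))

# Hypothetical attributes extracted from a patient profile and from a trial's
# eligibility criteria; only exact attribute matches count toward the overlap.
patient = {"type 2 diabetes", "age 60-69", "metformin", "egfr>60"}
trial = {"type 2 diabetes", "age 50-75", "metformin"}

score = sorensen_dice(patient, trial)
```

Note that exact string matching is a deliberate simplification here: a real system must first normalize attributes (e.g., resolve "age 60-69" against the range "age 50-75") before set overlap is meaningful.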
8 EBM Benchmark Datasets

Here, we summarize the benchmarks used in NLP for EBM (Supplementary Table 2). The tasks most frequently associated with these benchmarks are Evidence Retrieval, Evidence Extraction, and Clinical Trial Identification. There is a notable gap in datasets specifically tailored for Evidence Synthesis and Appraisal, as well as Question Answering. The existing datasets are often built upon general texts rather than medical-specific content. For example, CNN-DailyMail (See et al., 2017) is used for Evidence Summarization, but it is not medical-related. We also note that the primary data sources for these benchmarks are scholarly articles from PubMed and clinical trials.

9 Challenges and Future Directions

EBM is an important, rewarding, and dynamic field that organizes current data to improve healthcare decision-making. By integrating the best available evidence with a healthcare professional's experience and the patient's values, EBM aims to optimize health outcomes. Our focus here is on retrieving, extracting, appraising, synthesizing, and summarizing evidence from biomedical literature such as clinical trials, cohort studies, and case reports. However, conducting these analyses can be both demanding and time-consuming. In this study, we explore key NLP techniques that can streamline and facilitate this process.

Our review indicates that NLP-based systems and pipelines have achieved impressive results in EBM, such as extracting entities like PICO, enhancing information retrieval engines, automating evidence synthesis, assessing evidence quality,
ranking the evidence with the highest confidence, summarizing information, and answering questions. At the same time, as in any evolving area, challenges remain. For example, generative models in EBM tasks have demonstrated impressive fluency and scalability, yet their tendency to hallucinate facts, lack of source attribution, and sensitivity to prompt phrasing remain significant limitations for clinical use. A core challenge is the validation and trustworthiness of generated outputs, especially in high-stakes domains like medicine. Mechanisms such as RAG offer potential mitigations but require further development and evaluation.

Another challenge lies in handling diseases with limited literature or annotated data (Ge et al., 2023). Few-shot learning holds significant potential here, as it enables models to generalize effectively from a small number of examples, reducing the dependency on large, annotated datasets. This data-efficient approach is crucial for EBM tasks in under-researched areas, such as rare diseases, where annotated resources are scarce. Few-shot learning can help models adapt quickly to specific clinical needs, allowing for more accurate information extraction, question answering, and evidence synthesis, even with minimal training data.

Additionally, there is a pressing need for more benchmark datasets, especially for Evidence Synthesis and Appraisal and for Question Answering. Current resources often rely on general corpora rather than those specifically oriented toward medical content, limiting the development of specialized NLP applications. Researchers should consider building more purpose-built datasets. Moreover, NLP-based tools have not yet been widely applied across all medical specialties, such as Urology and Hepatology, indicating room for expansion in these areas.
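In practice, few-shot learning with LLMs often takes the form of few-shot prompting: a handful of labeled examples are prepended to the query so the model can generalize from them at inference time. The model-agnostic sketch below only assembles the prompt text; the example sentences and label set are invented for illustration and are not from any of the reviewed datasets.

```python
# Minimal sketch of few-shot prompt assembly for a PICO labeling task.
# A downstream LLM call would consume the resulting string; none is made here.

EXAMPLES = [
    ("Patients were randomly assigned to receive the drug or placebo.",
     "Intervention"),
    ("Adults aged 18-65 with confirmed type 2 diabetes were enrolled.",
     "Population"),
    ("The primary endpoint was change in HbA1c at 24 weeks.",
     "Outcome"),
]

def few_shot_prompt(query, examples=EXAMPLES):
    """Prepend labeled demonstrations to the query, one block per example,
    ending with an unlabeled slot for the model to complete."""
    blocks = ["Label each sentence with its PICO element."]
    for text, label in examples:
        blocks.append(f"Sentence: {text}\nLabel: {label}")
    blocks.append(f"Sentence: {query}\nLabel:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt("Participants received 10 mg of the study drug daily.")
```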
Another future direction for NLP in EBM involves incorporating real-world data from various sources, such as mobile devices, social media, and genomics. These data sources capture rich and diverse information beyond traditional clinical records, offering valuable insights into patient behaviors, lifestyle, environmental factors, and genetic predispositions. For example, data from mobile health apps and wearable devices can provide real-time health metrics, while social media posts may reveal patient self-reported outcomes or experiences that are often missed in clinical settings. Integrating genomic data adds another layer, enabling family history and personalized genomic information to inform disease risk and treatment response.

Furthermore, the "black box" nature of many NLP models limits their interpretability and accountability. Biases within training data can restrict NLP's effectiveness and fairness across diverse patient demographics. Additionally, the high computational demands and the need for domain expertise in both NLP and healthcare make these systems resource-intensive. Fully realizing the potential of NLP for EBM in real-world clinical workflows often involves interdisciplinary scenarios that span multiple conditions, comorbidities, and patient subpopulations. To address these complexities, NLP systems for EBM must evolve toward more holistic, adaptable frameworks capable of reasoning across diverse clinical questions and integrating heterogeneous data sources. Addressing these limitations is important for enabling efficiency and ultimately contributing to a safer, more equitable healthcare landscape.

10 Conclusion

Our comprehensive review of over 600 papers resulted in the
selection of 129 studies that focus on critical aspects of NLP within EBM. We first provide an overview of EBM, followed by a survey of NLP methods and techniques that address each step of the EBM process. We also explore use cases that demonstrate the application of EBM in various scenarios. Additionally, we review popular datasets and benchmarks. Finally, we present open challenges and future directions for research in this field. As NLP technologies evolve, they offer promising prospects for harnessing vast amounts of unstructured data, thus supporting clinical and research applications.

Limitations

Our study primarily focuses on English-language publications, potentially overlooking important research published in other languages. The inclusion criteria may have excluded studies indirectly related to EBM and NLP that could provide valuable insights. Additionally, our analysis only covers articles published between 2019 and 2024, which may have led to the omission of significant earlier works that contributed to the foundation of this field. Furthermore, the databases and search engines used in this review are limited, and it is possible that some relevant studies on NLP for EBM during the specified period were not identified.

Acknowledgments

This project was sponsored by the National Library of Medicine grants R01LM014344 and R01LM014573.

References

Md Abdullah Al Hafiz Khan, Md Shamsuzzaman, Sadid A. Hasan, Mohammad S Sorower, Joey Liu, Vivek Datla, Mladen Milosevic, Gabe Mankovich, Rob van Ommering, and Nevenka Dimitrova. 2019. Improving disease named entity recognition for clinical trial matching. In 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 2541–2548.

Marliese Alexander, Benjamin Solomon, David L Ball, Mimi Sheerin, Irene Dankwa-Mullan, Anita M Preininger, Gretchen Purcell Jackson, and Dishan M Herath. 2020.
Evaluation of an artificial intelligence clinical trial matching system in Australian lung cancer patients. JAMIA Open, 3(2):209–215.

Mohammad S. Alodadi and Vandana P. Janeja. 2019. Linking knowledge discovery in clinical notes and massive biomedical literature repositories. In 2019 IEEE International Conference on Big Data. IEEE.

Paul Arora, Devon Boyne, Justin J. Slater, Alind Gupta, Darren R. Brenner, and Marek J. Druzdzel. 2019. Bayesian networks for risk prediction using real-world data: A tool for precision medicine. Value in Health, 22(4):439–445.

Sungmin Aum and Seon Choe. 2021. srBERT: automatic article classification model for systematic review using BERT. Syst. Rev., 10(1):285.

Jacob Beattie, Sarah Neufeld, Daniel Yang, Christian Chukwuma, Ahmed Gul, Neil Desai, Steve Jiang, and Michael Dohopolski. 2024. Utilizing large language models for enhanced clinical trial matching: A study on automation in patient screening. Cureus, 16(5):e60044.

J Thaddeus Beck, Melissa Rammage, Gretchen P Jackson, Anita M Preininger, Irene Dankwa-Mullan, M Christopher Roebuck, Adam Torres, Helen Holtzen, Sadie E Coverdill, M Paul Williamson, Quincy Chau, Kyu Rhee, and Michael Vinegra. 2020. Artificial intelligence tool for optimizing eligibility screening for clinical trials in a large community cancer center. JCO Clin. Cancer Inform., 4(4):50–59.

Chris Blunt. 2022. The pyramid schema: The origins and impact of evidence pyramids. SSRN Electron. J.

Florian Borchert, Christina Lohr, Luise Modersohn, Thomas Langer, Markus Follmann, Jan
Philipp Sachs, Udo Hahn, and Matthieu-P. Schapranow. 2020. GGPONC: A corpus of German medical text with rich metadata based on clinical practice guidelines. In Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis, pages 38–48, Online. Association for Computational Linguistics.

Florian Borchert, Laura Meister, Thomas Langer, Markus Follmann, Bert Arnrich, and Matthieu-P Schapranow. 2022. Controversial trials first: Identifying disagreement between clinical guidelines and new evidence. In AMIA Annual Symposium Proceedings, pages 237–246.

Jon Brassey, Christopher Price, Jonny Edwards, Markus Zlabinger, Alexandros Bampoulidis, and Allan Hanbury. 2021. Developing a fully automated evidence synthesis tool for identifying, assessing and collating the evidence. BMJ Evid. Based Med., 26(1):24–27.

Austin J. Brockmeier, Meizhi Ju, Piotr Przybyła, and Sophia Ananiadou. 2019. Improving reference prioritisation with PICO recognition. BMC Medical Informatics and Decision Making, 19:256.

Tianrun Cai, Fiona Cai, Kumar P. Dahal, Gabrielle Cremone, Ethan Lam, Charlotte Golnik, Thany Seyok, Chuan Hong, and Katherine P. Liao. 2021. Improving the efficiency of clinical trial recruitment using an ensemble machine learning to assist with eligibility screening. ACR Open Rheumatology, 3:593–600.

Leonardo Campillos-Llanos, Ana Valverde-Mateos, Adrián Capllonch-Carrión, and Antonio Moreno-Sandoval. 2021. A clinical trials corpus annotated with UMLS entities to enhance the access to evidence-based medicine. BMC Medical Informatics and Decision Making, 21(69). This article has been corrected. See BMC Med Inform Decis Mak. 2021 Apr 7;21:118.

Boyu Chen, Hao Jin, Zhiwen Yang, Yingying Qu, Heng Weng, and Tianyong Hao. 2019a. An approach for transgender population information extraction and summarization from clinical trial text. BMC Med. Inform. Decis. Mak., 19(Suppl 2):62.

Chi-Jen Chen, Neha Warikoo, Yun Chun Chang, and Chen.
2019b. Medical knowledge infused convolutional neural networks for cohort selection in clinical trials. Journal of the American Medical Informatics Association, 26(11):1227–1236.

Long Chen, Yu Gu, Xin Ji, Chao Lou, Zhiyong Sun, Haodan Li, Yuan Gao, and Yang Huang. 2019c. Clinical trial cohort selection based on multi-level rule-based natural language processing system. Journal of the American Medical Informatics Association, 26(11):1218–1226.

Ching-Hua Chuan and Susan Morgan. 2021. Creating and evaluating chatbots as eligibility assistants for clinical trials: An active deep learning approach towards user-centered classification. ACM Trans. Comput. Healthc., 2(1):1–19.

Jonathan W Cunningham, Pulkit Singh, Christopher Reeder, Brian Claggett, Pablo M Marti-Castellote, Emily S Lau, Shaan Khurshid, Puneet Batra, Steven A Lubitz, Mahnaz Maddah, Anthony Philippakis, Akshay S Desai, Patrick T Ellinor, Orly Vardeny, Scott D Solomon, and Jennifer E Ho. 2024. Natural language processing for adjudication of heart failure in a multicenter clinical trial: A secondary analysis of a randomized clinical trial. JAMA Cardiol., 9(2):174–181.

Geesa Daluwatumulle, Rupika Wijesinghe, and Ruvan Weerasinghe. 2022. In silico drug repurposing using knowledge graph embeddings for Alzheimer's disease. In Proceedings of the 9th International Conference on Bioinformatics Research and Applications, pages 61–66, New York, NY, USA. ACM.

Surabhi Datta, Kyeryoung Lee, Hunki Paek, Frank J. Manion, Nneka Ofoegbu, Jingcheng Du, Ying Li, Liang-Chin Huang, Jingqi Wang, Bin Lin, Hua Xu, and
Xiaoyan Wang. 2024. AutoCriteria: a generalizable clinical trial eligibility criteria extraction system powered by large language models. Journal of the American Medical Informatics Association, 31(2):375–385. Published: 11 November 2023.

Yang Deng, Yaliang Li, Ying Shen, Nan Du, Wei Fan, Min Yang, and Kai Lei. 2019. MedTruth: A semi-supervised approach to discovering knowledge condition information from multi-source medical data. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19, page 719–728, New York, NY, USA. Association for Computing Machinery.

Arti Devi, Shashank Uttrani, Aryansh Singla, Sarthak Jha, Nataraj Dasgupta, Sayee Natarajan, Rajeshwari S Punekar, Larry A Pickett, and Varun Dutt. 2024a. Quantitative analysis of GPT-4 model: Optimizing patient eligibility classification for clinical trials and reducing expert judgment dependency. In Proceedings of the 2024 8th International Conference on Medical and Health Informatics, pages 230–237, New York, NY, USA. ACM.

Arti Devi, Shashank Uttrani, Aryansh Singla, Sarthak Jha, Nataraj Dasgupta, Sayee Natarajan, Rajeshwari S. Punekar, Larry A. Pickett, and Varun Dutt. 2024b. Automating clinical trial eligibility screening: Quantitative analysis of GPT models versus human expertise. In Proceedings of the 17th International Conference on PErvasive Technologies Related to Assistive Environments, New York, NY, USA. ACM.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Jay DeYoung, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl, and Lucy Wang. 2021.
MS^2: Multi-document summarization of medical studies. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7494–7513, Stroudsburg, PA, USA. Association for Computational Linguistics.

Houssein Dhayne, Rima Kilany, Rafiqul Haque, and Yehia Taher. 2021. EMR2vec: Bridging the gap between patient data and clinical trial. Computers & Industrial Engineering, 156:107236.

Anjani Dhrangadhariya, Roger Hilfiker, Roger Schaer, and Henning Müller. 2020. Machine learning assisted citation screening for systematic reviews. Digital Personalized Health and Medicine.

Anjani Dhrangadhariya, Gaetano Manzo, and Henning Müller. 2024. PICO to PICOS: Weak supervision to extend datasets with new labels. Digital Health and Informatics Innovations for Sustainable Health Care Systems, 316.

Anjani Dhrangadhariya and Henning Müller. 2023. Not so weak PICO: leveraging weak supervision for participants, interventions, and outcomes recognition for systematic review automation. JAMIA Open, 6(1).

Nhan V Do, Danne C Elbers, Nathanael R Fillmore, Samuel Ajjarapu, Steven J Bergstrom, John Bihn, June K Corrigan, Rupali Dhond, Svitlana Dipietro, Arkadiy Dolgin, Theodore C Feldman, Sergey D Goryachev, Linden B Huhmann, Jennifer La, Paul A Marcantonio, Kyle M McGrath, Stephen J Miller, Vinh Q Nguyen, George R Schneeloch, Feng-Chi Sung, Kaitlin N Swinnerton, Amelia H Tarren, Hannah M Tosi, Danielle Valley, Austin D Vo, Cenk Yildirim, Chunlei Zheng, Robert Zwolinski, Gisele A Sarosy, David Loose, Colleen Shannon, and Mary T
Brophy. 2024. Matching patients to accelerate clinical trials (MPACT): Enabling technology for oncology clinical trial workflow. Stud. Health Technol. Inform., 310:1086–1090.

Nicholas J. Dobbins, Bin Han, Weipeng Zhou, Kristine F. Lan, H. Nina Kim, Robert Harrington, Özlem Uzuner, and Meliha Yetisgen. 2023. LeafAI: query generator for clinical cohort discovery rivaling a human programmer. Journal of the American Medical Informatics Association, 30(12):1954–1964.

Nicholas J Dobbins, Tony Mullen, Özlem Uzuner, and Meliha Yetisgen. 2022. The Leaf clinical trials corpus: a new resource for query generation from clinical trial eligibility criteria. Sci. Data, 9(1):490.

Jingcheng Du, Qing Wang, Jingqi Wang, Prerana Ramesh, Yang Xiang, Xiaoqian Jiang, and Cui Tao. 2021. COVID-19 trial graph: a linked graph for COVID-19 clinical trials. J. Am. Med. Inform. Assoc., 28(9):1964–1969.

Abdelazeem Eldawlatly, Hussain Alshehri, Abdullah Alqahtani, Abdulaziz Ahmad, Fatma Al-Dammas, and Amir Marzouk. 2018. Appearance of population, intervention, comparison, and outcome as research question in the title of articles of three different anesthesia journals: A pilot study. Saudi Journal of Anaesthesia, 12(2):283–286.

Yilu Fang, Jae Hyun Kim, Betina Ross Idnay, Rebeca Aragon Garcia, Carmen E. Castillo, Yingcheng Sun, Hao Liu, Cong Liu, Chi Yuan, and Chunhua Weng. 2021. Participatory design of a clinical trial eligibility criteria simplification method. Studies in Health Technology and Informatics, 281:984–988.

Lyndsey Elaine Gates and Ahmed Abdeen Hamed. 2020. The anatomy of the SARS-CoV-2 biomedical literature: Introducing the CovidX network algorithm for drug repurposing recommendation. J. Med. Internet Res., 22(8):e21169.

Yao Ge, Yuting Guo, Sudeshna Das, Mohammed Ali Al-Garadi, and Abeed Sarker. 2023. Few-shot learning for medical text: A review of advances, trends, and opportunities. J. Biomed. Inform., 144(104458):104458.
Madhusudan Ghosh, Shrimon Mukherjee, Asmit Ganguly, Partha Basuchowdhuri, Sudip Kumar Naskar, and Debasis Ganguly. 2024a. AlpaPICO: Extraction of PICO frames from clinical trial documents using LLMs. Methods, 226:78–88.

Madhusudan Ghosh, Shrimon Mukherjee, Payel Santra, Girish Na, and Partha Basuchowdhuri. 2024b. BLINK_LSTM: BioLinkBERT and LSTM based approach for extraction of PICO frame from clinical trial text. In Proceedings of the 7th Joint International Conference on Data Science & Management of Data (11th ACM IKDD CODS and 29th COMAD), New York, NY, USA. ACM.

Meijian Guan, Samuel Cho, Robin Petro, Wei Zhang, Boris Pasche, and Umit Topaloglu. 2019. Natural language processing and recurrent network models for identifying genomic mutation-associated cancer treatment change from patient progress notes. JAMIA Open, 2(1):139–149.

Christian Gulden, Melanie Kirchner, Christina Schüttler, Marc Hinderer, Marvin Kampf, Hans-Ulrich Prokosch, and Dennis Toddenroth. 2019. Extractive summarization of clinical trial descriptions. Int. J. Med. Inform., 129:114–121.

YN Gwon, JH Kim, HS Chung, EJ Jung, J Chun, S Lee, and SR Shim. 2024. The use of generative AI for scientific literature searches for systematic reviews: ChatGPT and Microsoft Bing AI performance evaluation. JMIR Medical Informatics, 12:e51187.

Anna Górska and Evelina Tacconelli. 2024. Towards autonomous living meta-analyses: A framework for automation of systematic review and meta-analyses. Stud. Health Technol. Inform., 316:378–382.

Ehab Hamed, Ahmad Eid, and Medhat Alberry. 2023. Exploring ChatGPT's potential in facilitating adaptation of clinical guidelines: A case study of diabetic ketoacidosis guidelines. Cureus, 15(5):e38784.

Hamed Hassanzadeh, Sarvnaz Karimi, and Anthony Nguyen. 2020. Matching patients to clinical trials using semantically enriched document representation. Journal of Biomedical Informatics, 105:103406.

Hendrik Ter Horst, Nicole Brazda, Jessica Schira-Heinen, Julia Krebbers, Hans-Werner Müller, and Philipp Cimiano. 2023. Automatic knowledge graph population with model-complete text comprehension for pre-clinical outcomes in the field of spinal cord injury. Artificial Intelligence in Medicine, 137:102491.

Yan Hu, Vipina K Keloth, Kalpana Raja, Yong Chen, and Hua Xu. 2023. Towards precise PICO extraction from abstracts of randomized controlled trials using a section-specific learning approach. Bioinformatics, 39(9):btad542.

Andy S. Huang, Kyle Hirabayashi, Laura Barna, Deep Parikh, and Louis R. Pasquale. 2024. Assessment of a large language model's responses to questions and cases about glaucoma and retina management. JAMA Ophthalmology, 142(4):371–375.

Bum-Sup Jang, Andrew J Park, and In Ah Kim. 2022. Exploration of biomedical knowledge for recurrent glioblastoma using natural language processing deep learning models. BMC Med. Inform. Decis. Mak., 22(1):267.

Pengcheng Jiang, Cao Xiao, Zifeng Wang, Parminder Bhatia, Jimeng Sun, and Jiawei Han. 2024. TriSum: Learning summarization ability from large language models with structured rationale. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 2805–2819, Mexico City, Mexico. Association for Computational Linguistics.

Di Jin and Peter Szolovits. 2018. PICO element detection in medical text via long short-term memory neural networks. In Proceedings of the BioNLP 2018 workshop, Stroudsburg, PA, USA. Association for Computational Linguistics.
Qiao Jin, Chuanqi Tan, Mosha Chen, Ming Yan, Ningyu Zhang, Songfang Huang, and Xiaozhong Liu. 2022. State-of-the-art evidence retriever for precision medicine: Algorithm development and validation. JMIR Medical Informatics, 10(12):e40743.

Qiao Jin, Zifeng Wang, Charalampos S Floudas, Fangyuan Chen, Changlin Gong, Dara Bracken-Clarke, Elisabetta Xue, Yifan Yang, Jimeng Sun, and Zhiyong Lu. 2024. Matching patients to clinical trials with large language models. Nat. Commun., 15(1):9074.

Tom H Johnston, Alix M B Lacoste, Paula Ravenscroft, Jin Su, Sahar Tamadon, Mahtab Seifi, Anthony E Lang, Susan H Fox, Jonathan M Brotchie, and Naomi P Visanji. 2024. Using artificial intelligence to identify drugs for repurposing to treat l-DOPA-induced dyskinesia. Neuropharmacology, 248(109880):109880.

Katikapalli Subramanyam Kalyan, Ajit Rajasekharan, and Sivanesan Sangeetha. 2022. AMMU: A survey of transformer-based biomedical pretrained language models. Journal of Biomedical Informatics, 126:103982.

Sowmya Kamath, Veena Mayya, and Priyadarshini. 2021. A probabilistic precision information retrieval model for personalized clinical trial recommendation based on heterogeneous data. In 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT), pages 1–5. IEEE.

Lara J Kanbar, Benjamin Wissel, Yizhao Ni, Nathan Pajor, Tracy Glauser, John Pestian, and Judith W Dexheimer. 2022. Implementation of machine learning pipelines for clinical practice: Development and validation study. JMIR Medical Informatics, 10(12):e37833.

Tian Kang, Adler Perotte, Youlan Tang, Casey Ta, and Chunhua Weng. 2021. UMLS-based data augmentation for natural language processing of clinical research
literature. Journal of the American Medical Informatics Association, 28(4):812–823.

Tian Kang, Yingcheng Sun, Jae Hyun Kim, Casey Ta, Adler Perotte, Kayla Schiffer, Mutong Wu, Yang Zhao, Nour Moustafa-Fahmy, Yifan Peng, and Chunhua Weng. 2023. EvidenceMap: a three-level knowledge representation for medical evidence computation and comprehension. J. Am. Med. Inform. Assoc., 30(6):1022–1031.

Tian Kang, Shirui Zou, and Chunhua Weng. 2019. Pretraining to recognize PICO elements from randomized controlled trial literature. Stud. Health Technol. Inform., 264:188–192.

Samuel Kaskovich, Kirk D. Wyatt, Tomasz Oliwa, Luca Graglia, Brian Furner, Jooho Lee, Anoop Mayampurath, and Samuel L. Volchenboum. 2023. Automated matching of patients to clinical trials: A patient-centric natural language processing approach for pediatric leukemia. JCO Clinical Cancer Informatics, 7.

Jenna Kefeli and Nicholas Tatonetti. 2024. TCGA-Reports: A machine-readable pathology report resource for benchmarking text-based AI models. Patterns, 5(3):100933. Published online February 21, 2024.

AH Khan, A Abbe, B Falissard, P Carita, C Bachert, J Mullol, M Reaney, J Chao, LP Mannent, N Amin, P Mahajan, G Pirozzi, and L Eckert. 2021. Data mining of free-text responses: An innovative approach to analyzing patient perspectives on chronic rhinosinusitis with nasal polyps in a phase IIa proof-of-concept study for dupilumab. Dove Medical Press, 2021(15):2577–2586.

Jeongeun Kim, Mitchell Izower, and Yuri Quintana. 2023a. Parsable clinical trial eligibility criteria representation using natural language processing. In AMIA Annual Symposium Proceedings, pages 616–624. American Medical Informatics Association.

Jeongeun Kim, Mitchell Izower, and Yuri Quintana. 2023b. Parsable clinical trial eligibility criteria representation using natural language processing. In AMIA Annual Symposium Proceedings, pages 616–624. American Medical Informatics Association.
Su Nam Kim, David Martinez, Lawrence Cavedon, and Lars Yencken. 2024. NICTA-PIBOSO dataset. https://doi.org/10.57702/ne4r48m1. Dataset consists of 1,000 medical abstracts manually annotated with semantic tags based on the PICO criteria to support the automatic classification of sentences.

Bevan Koopman, Tracey Wright, Natacha Omer, Veronica McCabe, and Guido Zuccon. 2021. Precision medicine search for paediatric oncology. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '21, page 2536–2540, New York, NY, USA. Association for Computing Machinery.

Bevan Koopman and Guido Zuccon. 2022. Cohort-based clinical trial retrieval. In Proceedings of the 25th Australasian Document Computing Symposium, ADCS '21, New York, NY, USA. Association for Computing Machinery.

Fabrício Kury, Alex Butler, Chi Yuan, Li-Heng Fu, Yingcheng Sun, Hao Liu, Ida Sim, Simona Carini, and Chunhua Weng. 2020. Chia, a large annotated corpus of clinical trial eligibility criteria. Sci. Data, 7(1):281.

Mary R Kwaan and Genevieve B Melton. 2012. Evidence-based medicine in surgical education. Clin. Colon Rectal Surg., 25(3):151–155.

Evani Lalitha, Kasarapu Ramani, Dudekula Shahida, Esikela Venkata Sai Deepak, M Hima Bindu, and Diguri Shaikshavali. 2023. Text summarization of medical documents using abstractive techniques. In 2023 2nd International Conference on Applied Artificial Intelligence and Computing (ICAAIC), pages 939–943. IEEE.

Mengfei Lan, Mandy Cheng, Linh Hoang, Gerben Ter Riet, and Halil Kilicoglu. 2024. Automatic
categorization of self-acknowledged limitations in randomized controlled trial publications. J. Biomed. Inform., 152:104628.

Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240.

Kyeryoung Lee, Zongzhi Liu, Yun Mai, Tomi Jun, Meng Ma, Tongyu Wang, Lei Ai, Ediz Calay, William Oh, Gustavo Stolovitzky, Eric Schadt, and Xiaoyan Wang. 2024. Optimizing clinical trial eligibility design using natural language processing models and real-world data: Algorithm development and validation. JMIR AI, 3:e50800.

Chao Li, Harsha Gurulingappa, Prathamesh Karmalkar, Jana Raab, Aastha Vij, Gerard Megaro, and Christian Henke. 2021a. Automate clinical evidence synthesis by linking trials to publications with text analytics. In 2021 International Symposium on Electrical, Electronics and Information Engineering, New York, NY, USA. ACM.

Jianfu Li, Qiang Wei, Omid Ghiasvand, Miao Chen, Victor Lobanov, Chunhua Weng, and Hua Xu. 2022. A comparative study of pre-trained language models for named entity recognition in clinical trial eligibility criteria from multiple corpora. BMC Med. Inform. Decis. Mak., 22(Suppl 3):235.

Xinhang Li, Hao Liu, Fabrício Kury, Chi Yuan, Alex Butler, Yingcheng Sun, Anna Ostropolets, Hua Xu, and Chunhua Weng. 2021b. A comparison between human and NLP-based annotation of clinical trial eligibility criteria text using the OMOP common data model. In AMIA Joint Summits on Translational Science Proceedings, pages 394–403. AMIA.

Yizhen Li, Zhongzhi Luan, Yixing Liu, Heyuan Liu, Jiaxing Qi, and Dongran Han. 2024. Automated information extraction model enhancing traditional Chinese medicine RCT evidence extraction (EVI-BERT): algorithm development and validation.
Frontiers in Artificial Intelligence, 7:1454945.

Cong Liu, Hao Liu, Casey Ta, James Roger, Alex Butler, Junghwan Lee, Jaehyun Kim, Ning Shang, and Chunhua Weng. 2022. Evaluation of Criteria2Query: Towards augmented intelligence for cohort identification. Stud. Health Technol. Inform., 290:297–300.

Cong Liu, Chi Yuan, Alex M Butler, Richard D Carvajal, Ziran Ryan Li, Casey N Ta, and Chunhua Weng. 2019. DQueST: dynamic questionnaire for search of clinical trials. Journal of the American Medical Informatics Association, 26(11):1333–1343.

Hao Liu, Yuan Chi, Alex Butler, Yingcheng Sun, and Chunhua Weng. 2021. A knowledge base of clinical trial eligibility criteria. Journal of Biomedical Informatics, 117:103771.

Cynthia Lokker, Elham Bagheri, Wael Abdelkader, Rick Parrish, Muhammad Afzal, Tamara Navarro, Chris Cotoi, Federico Germini, Lori Linkins, R Brian Haynes, Lingyang Chu, and Alfonso Iorio. 2023. Deep learning to refine the identification of high-quality clinical research articles from the biomedical literature: Performance evaluation. J. Biomed. Inform., 142(104384):104384.

Khalid Mahmood Malik, Madan Krishnamurthy, Pawel Marcinek, and Ghaus M Malik. 2020. Impact of size, location, symptomatic-nature and gender on the rupture of saccular intracranial aneurysms. In Proceedings of the 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM '18, page 995–1001. IEEE Press.

Iain J Marshall, Benjamin Nye, Joël Kuiper, Anna Noel-Storr, Rachel Marshall, Rory Maclean, Frank Soboczenski, Ani Nenkova, James Thomas, and Byron C Wallace. 2020. Trialstreamer: A living, automatically updated database of clinical trial reports. J. Am. Med.
Inform. Assoc., 27(12):1903–1912.

Iain J Marshall, Thomas A Trikalinos, Frank Soboczenski, Hye Sun Yun, Gregory Kell, Rachel Marshall, and Byron C Wallace. 2023. In a pilot study, automated real-time systematic review updates were feasible, accurate, and work-saving. J. Clin. Epidemiol., 153:26–33.

Tobias Mayer, Santiago Marro, Elena Cabrio, and Serena Villata. 2021. Enhancing evidence-based medicine with natural language argumentative analysis of clinical trials. Artificial Intelligence in Medicine, 118:102098.

Chirag Mehta, David Cohen, Priya Jaisinghani, and Payal Parikh. 2022. Internal medicine resident adherence to evidence-based practices in management of diabetes mellitus. J. Med. Educ. Curric. Dev., 9:23821205221076659.

Stéphane M. Meystre, Paul M. Heider, Andrew Cates, Grace Bastian, Tara Pittman, Stephanie Gentilin, and Teresa J. Kelechi. 2023. Piloting an automated clinical trial eligibility surveillance and provider alert system based on artificial intelligence and standard data models. BMC Medical Research Methodology, 23(88).

Rashmi Mishra, Andrea Burke, Bonnie Gitman, Payal Verma, Mark Engelstad, Ilias Alevizos, William A. Gahl, Michael T. Collins, Janice S. Lee, and Murat Sincan. 2019. Data-driven method to enhance craniofacial and oral phenotype vocabularies. The Journal of the American Dental Association, 150(11):933–939.e2.

Sabah Mohammed and Jinan Fiaidhi. 2023. Investigation into scaling-up the SOAP problem-oriented medical record into a clinical case study. In 2023 IEEE 11th International Conference. IEEE.

Sabah Mohammed and Jinan Fiaidhi. 2024. Generative AI for evidence-based medicine: A PICO GenAI for synthesizing clinical case reports. In ICC 2024 - IEEE International Conference on Communications, volume 3, pages 1503–1508. IEEE.

Sabah Mohammed, Jinan Fiaidhi, and Rahul Kudadiya. 2023. Integrating a PICO clinical questioning to the QL4POMR framework for building evidence-based clinical case reports.
In 2023 IEEE International Conference on Big Data (BigData), volume 4, pages 4940–4947. IEEE.

Victor M Murcia, Vinod Aggarwal, Nikhil Pesaladinne, Ram Thammineni, Nhan Do, Gil Alterovitz, and Rafael B Fricks. 2024. Automating clinical trial matches via natural language processing of synthetic electronic health records and clinical trial eligibility criteria. AMIA Summits Transl. Sci. Proc., 2024:125–134.

Faith Mutinda, Kongmeng Liew, Shuntaro Yada, Shoko Wakamiya, and Eiji Aramaki. 2022a. PICO corpus: A publicly available corpus to support automatic data extraction from biomedical literature. In Proceedings of the first Workshop on Information Extraction from Scientific Publications, pages 26–31, Online. Association for Computational Linguistics.

Faith Wavinya Mutinda, Kongmeng Liew, Shuntaro Yada, Shoko Wakamiya, and Eiji Aramaki. 2022b. Automatic data extraction to support meta-analysis statistical analysis: a case study on breast cancer. BMC Med. Inform. Decis. Mak., 22(1):158.

Joshua J. Myszewski, Emily Klossowski, Patrick Meyer, Kristin Bevil, Lisa Klesius, and Kristopher M. Schroeder. 2022. Validating GAN-BioBERT: A methodology for assessing reporting trends in clinical trials. Frontiers in Digital Health, 4.

Tamara Navarro-Ruan and R. Brian Haynes. 2022. Preliminary comparison of the performance of the National Library of Medicine's systematic review publication type and the sensitive clinical queries filter for systematic reviews in PubMed. Journal of the Medical Library Association, 110(1).

Aurélie Névéol, Rezarta Islamaj Doğan, and Zhiyong Lu. 2011. Semi-automatic semantic annotation
of PubMed queries: a study on quality, efficiency, satisfaction. Journal of Biomedical Informatics, 44(2):310–318.

Abigail Newbury, Hao Liu, Betina Idnay, and Chunhua Weng. 2023. The suitability of UMLS and SNOMED-CT for encoding outcome concepts. J. Am. Med. Inform. Assoc., 30(12):1895–1903.

Vincent Nguyen, Sarvnaz Karimi, and Brian Jin. 2019. An experimentation platform for precision medicine. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '19, page 1357–1360, New York, NY, USA. Association for Computing Machinery.

Yizhao Ni, Monica Bermudez, Stephanie Kennebeck, Stacey Liddy-Hicks, and Judith Dexheimer. 2019. A real-time automated patient screening system for clinical trials eligibility in an emergency department: Design and evaluation. JMIR Medical Informatics, 7(3):e14185.

Mauro Nievas, Aditya Basu, Yanshan Wang, and Hrituraj Singh. 2024. Distilling large language models for matching patients to clinical trials. Journal of the American Medical Informatics Association, 31(9):1953–1963.

Christopher Norman, Mariska Leeflang, René Spijker, Evangelos Kanoulas, and Aurélie Névéol. 2019a. A distantly supervised dataset for automated data extraction from diagnostic studies. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 105–114, Florence, Italy. Association for Computational Linguistics.

Christopher R Norman, Mariska M G Leeflang, Raphaël Porcher, and Aurélie Névéol. 2019b. Measuring the impact of screening automation on meta-analyses of diagnostic test accuracy. Syst. Rev., 8(1):243.

Elvira Nurmambetova, Jie Pan, Zilong Zhang, Seungwon Lee, Danielle A Southern, Elliot A Martin, Guosong Wu, Chester Ho, and Cathy A Eastwood. 2023. Developing an inpatient electronic medical record phenotype for hospital-acquired pressure injuries: Case study using natural language processing models. JMIR AI, 2(2023):e41264.
Benjamin Nye, Junyi Jessy Li, Roma Patel, Yinfei Yang, Iain Marshall, Ani Nenkova, and Byron Wallace. 2018. A corpus with multi-level annotations of patients, interventions and outcomes to support language processing for medical literature. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 197–207, Melbourne, Australia. Association for Computational Linguistics.

Zhenhe Pan, Shuang Jiang, Juntao Su, Muzhe Guo, and Yuanlin Zhang. 2021. Knowledge graph based platform of COVID-19 drugs and symptoms. In Proceedings of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, New York, NY, USA. ACM.

Yifan Peng, Justin F Rousseau, Edward H Shortliffe, and Chunhua Weng. 2023. AI-generated text may have a role in evidence-based medicine. Nat. Med., 29(7):1593–1594.

Sanjana Ramprasad, Iain J. Marshall, Denis Jered McInerney, and Byron C. Wallace. 2023a. Automatically summarizing evidence from clinical trials: A prototype highlighting current challenges. In Proceedings of the Conference of the Association for Computational Linguistics Meeting, pages 236–247.

Sanjana Ramprasad, Jered Mcinerney, Iain Marshall, and Byron Wallace. 2023b. Automatically summarizing evidence from clinical trials: A prototype highlighting current challenges. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 236–247, Stroudsburg, PA, USA. Association for Computational Linguistics.

Iqbal Ratnani, Sahar Fatima, Muhammad Mohsin Abid, Zehra Surani, and Salim Surani. 2023.
Evidence-based medicine: History, review, criticisms, and pitfalls. Cureus, 15(2):e35266.

Omid Rohanian, Mohammadmahdi Nouriborji, Samaneh Kouchaki, Farhad Nooralahzadeh, Lei Clifton, and David A. Clifton. 2024. Exploring the effectiveness of instruction tuning in biomedical language processing. Artificial Intelligence in Medicine, 158:103007.

Mohammad Abu Tareq Rony, Mohammad Shariful Islam, Tipu Sultan, Samah Alshathri, and Walid El-Shafai. 2023. Medigpt: Exploring potentials of conventional and large language models on medical data. IEEE Access, 12.

Maciej Rybinski, Sarvnaz Karimi, Vincent Nguyen, and Cecile Paris. 2020a. A2A: a platform for research in biomedical literature search. BMC Bioinformatics, 21(Suppl 19):572.

Maciej Rybinski, Jerry Xu, and Sarvnaz Karimi. 2020b. Clinical trial search: Using biomedical language understanding models for re-ranking. J. Biomed. Inform., 109(103530):103530.

David L. Sackett, William M. C. Rosenberg, J. A. Muir Gray, R. Brian Haynes, and W. Scott Richardson. 1996. Evidence based medicine: what it is and what it isn't. BMJ, 312(7023):71–72.

Jawad Sadek, Alex Inskip, James Woltmann, Georgina Wilkins, Christopher Marshall, Maria Pokora, Amey Vedpathak, Anastasija Jadrevska, Dawn Craig, and Michael Trenell. 2023. Scanmedicine: An online search system for medical innovation. Contemporary Clinical Trials, 125:107042.

Fernando Suarez Saiz, Corey Sanders, Rick Stevens, Robert Nielsen, Michael Britt, Leemor Yuravlivker, Anita M Preininger, and Gretchen P Jackson. 2021. Artificial intelligence clinical evidence engine for automatic identification, prioritization, and extraction of relevant clinical oncology research. JCO Clin. Cancer Inform., 5(5):102–111.

Hamman Samuel, Osmar Zaiane, and Francois Bolduc. 2021. Evaluation of applied machine learning for health misinformation detection via survey of medical professionals on controversial topics in pediatrics.
In Proceedings of the 5th International Conference on Medical and Health Informatics, pages 1–6. ACM.

Olivia Sanchez-Graillet, Christian Witte, Frank Grimm, and Philipp Cimiano. 2022. An annotated corpus of clinical trial publications supporting schema-based relational information extraction. J. Biomed. Semantics, 13(1):14.

Abeed Sarker, Yuan-Chi Yang, Mohammed Ali Al-Garadi, and Aamir Abbas. 2020. A light-weight text summarization system for fast access to medical evidence. Front. Digit. Health, 2:585559.

Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368.

Isabel Segura-Bedmar and Pablo Raez. 2019. Cohort selection for clinical trials using deep learning models. J. Am. Med. Inform. Assoc., 26(11):1181–1188.

Makoto Shiraishi, Yoko Tomioka, Ami Miyakuni, Saaya Ishii, Asei Hori, Hwayoung Park, Jun Ohba, and Mutsumi Okazaki. 2024. Performance of ChatGPT in answering clinical questions on the practical guideline of blepharoptosis. Aesthetic Plast. Surg., 48(13):2389–2398.

Irena Spasic, David Krzeminski, Paul Corcoran, and Alexander Balinsky. 2019. Cohort selection for clinical trials from longitudinal patient records: Text mining approach. JMIR Medical Informatics, 7(4):e15980.

Nikolaos Stylianou, Gerasimos Razis, Dimitrios G. Goulis, and Ioannis Vlahavas. 2020. EBM+: Advancing evidence-based medicine via two level automatic identification of populations, interventions, outcomes in medical literature. Artificial Intelligence in Medicine, 108:101949.

Nikolaos Stylianou and Ioannis Vlahavas. 2021. TransforMED: End-to-end transformers for evidence-based medicine and argument mining in medical literature. J. Biomed. Inform., 117(103767):103767.

Davide Testa, Emmanuele Chersoni, and
Alessandro Lenci. 2023. We understand elliptical sentences, and language models should too: A new dataset for studying ellipsis and its interaction with thematic fit. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3340–3353, Toronto, Canada. Association for Computational Linguistics.

Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, et al. 2023. Large language models in medicine. Nature Medicine, 29:1930–1940.

Shubo Tian, Arslan Erdengasileng, Xi Yang, Yi Guo, Yonghui Wu, Jinfeng Zhang, Jiang Bian, and Zhe He. 2021. Transformer-based named entity recognition for parsing clinical trial eligibility criteria. ACM BCB, 2021.

Shubo Tian, Pengfei Yin, Hansi Zhang, Arslan Erdengasileng, Jiang Bian, and Zhe He. 2023. Parsing clinical trial eligibility criteria for cohort query by a multi-input multi-output sequence labeling model. In 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 4426–4430.

Hegler C. Tissot, Anoop D. Shah, David Brealey, Steve Harris, Ruth Agbakoba, and Amos Folarin. 2020. Natural language processing for mimicking clinical trial recruitment in critical care: A semi-automated simulation based on the LeoPARDS trial. IEEE Journal of Biomedical and Health Informatics, 24(10):2950–2959.

Tadashi Tsubota, Danushka Bollegala, Yang Zhao, Yingzi Jin, and Tomotake Kozu. 2022. Improvement of intervention information detection for automated clinical literature screening during systematic review. J. Biomed. Inform., 134(104185):104185.

Pyae Phyo Tun, Jiawen Luo, Jiecheng Xie, Sandi Wibowo, and Chen Hao. 2023. Automatic assessment of patient eligibility by utilizing NLP and rule-based analysis. In 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), page 10340494, Sydney, Australia. IEEE.

Ali Turfaha, Hao Liu, Latoya A Stewart, Tian Kang, and Chunhua Weng. 2022.
Extending PICO with observation normalization for evidence computing. In MEDINFO 2021: One World, One Health – Global Partnership for Digital Innovation, pages 268–272, New York, New York, USA. International Medical Informatics Association and IOS Press.

Ozan Unlu, Jiyeon Shin, Charlotte J Mailly, Michael F Oates, Michela R Tucci, Matthew Varugheese, Kavishwar Wagholikar, Fei Wang, Benjamin M Scirica, Alexander J Blood, and Samuel J Aronson. 2024. Retrieval augmented generation enabled generative pre-trained transformer 4 (GPT-4) performance for clinical trial screening. medRxiv.

Peter Van de Vliet, Tobias Sprenger, Linde F C Kampers, Jennifer Makalowski, Volker Schirrmacher, Wilfried Stücker, and Stefaan W Van Gool. 2023. The application of evidence-based medicine in individualized medicine. Biomedicines, 11(7).

Bianca Vora, Denison Kuruvilla, Chloe Kim, Michael Wu, Colby S. Shemesh, and Gillie A. Roth. 2023. Applying natural language processing to clinicaltrials.gov: mRNA cancer vaccine case study. Clinical and Translational Science, 16:2417–2420.

V G Vinod Vydiswaran, Asher Strayhorn, Xinyan Zhao, Phil Robinson, Mahesh Agarwal, Erin Bagazinski, Madia Essiet, Bradley E Iott, Hyeon Joo, Pingjui Ko, Dahee Lee, Jin Xiu Lu, Jinghui Liu, Adharsh Murali, Koki Sasagawa, Tianshi Wang, and Nalingna Yuan. 2019. Hybrid bag of approaches to characterize selection criteria for cohort identification. J. Am. Med. Inform. Assoc., 26(11):1172–1180.

Kunyuan Wang, Hao Cui, Yun Zhu, Xiaoyun Hu, Chang Hong, Yabing Guo, Lingyao
An, Qi Zhang, and Li Liu. 2024. Evaluation of an artificial intelligence-based clinical trial matching system in Chinese patients with hepatocellular carcinoma: a retrospective study. BMC Cancer, 24(1):246.

Yu Wang, Yuan Wang, Zhenwan Peng, Feifan Zhang, Luyao Zhou, and Fei Yang. 2023a. Medical text classification based on the discriminative pre-training model and prompt-tuning. Digit. Health, 9:20552076231193213.

Zifeng Wang and Jimeng Sun. 2022. Trial2Vec: Zero-shot clinical trial document similarity search using self-supervision. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6377–6390, Stroudsburg, PA, USA. Association for Computational Linguistics.

Zifeng Wang, Cao Xiao, and Jimeng Sun. 2023b. AutoTrial: Prompting language models for clinical trial design. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12461–12472, Singapore. Association for Computational Linguistics.

Christian Witte, David M Schmidt, and Philipp Cimiano. 2024. Comparing generative and extractive approaches to information extraction from abstracts describing randomized clinical trials. J. Biomed. Semantics, 15(1):3.

Qianqian Xie, Jennifer Amy Bishop, Prayag Tiwari, and Sophia Ananiadou. 2022. Pre-trained language models with domain knowledge for biomedical extractive summarization. Knowl. Based Syst., 252(109460):109460.

Shiyao Xie, Wenjing Zhao, Guanghui Deng, Guohua He, Na He, Zhenhua Lu, Weihua Hu, Mingming Zhao, and Jian Du. 2024. Utilizing ChatGPT as a scientific reasoning engine to differentiate conflicting evidence and summarize challenges in controversial clinical questions. Journal of the American Medical Informatics Association, 31(7):1551–1560. Published online: 17 May 2024.

Yi Xie, Ishith Seth, David J Hunter-Smith, Warren M Rozen, Richard Ross, and Matthew Lee. 2023.
Aesthetic surgery advice and counseling from artificial intelligence: A rhinoplasty consultation with ChatGPT. Aesthetic Plast. Surg., 47(5):1985–1993.

Quan Xu, Yueyue Liu, Dawei Sun, Xiaoqian Huang, Feihong Li, Jincheng Zhai, Yang Li, Qiming Zhou, Niansong Qian, and Beifang Niu. 2023. OncoCTMiner: streamlining precision oncology trial matching via molecular profile analysis. Database (Oxford), 2023:baad077.

Yumeng Yang, Soumya Jayaraj, Ethan Ludmir, and Kirk Roberts. 2023. Text classification of cancer clinical trial eligibility criteria. AMIA Annu. Symp. Proc., 2023:1304–1313.

Xiaoxi Yao, Zachi I. Attia, Emma M. Behnken, Kelli Walvatne, Rachel E. Giblon, Sijia Liu, Konstantinos C. Siontis, Bernard J. Gersh, Jonathan Graff-Radford, Alejandro A. Rabinstein, Paul A. Friedman, and Peter A. Noseworthy. 2021. Batch enrollment for an artificial intelligence-guided intervention to lower neurologic events in patients with undiagnosed atrial fibrillation: rationale and design of a digital clinical trial. American Heart Journal, 239:73–79.

Fatin Syafiqah Yazi, Wan-Tze Vong, Valliappan Raman, Patrick Hang Hui Then, and Mukulraj J Lunia. 2021. Towards automated detection of contradictory research claims in medical literature using deep learning approach. In 2021 Fifth International Conference on Information Retrieval and Knowledge Management (CAMP), pages 116–121.

Jiayi Yuan, Ruixiang Tang, Xiaoqian Jiang, and Xia Hu. 2024. Large language models for healthcare data augmentation: An example on patient-trial matching. In AMIA Annual Symposium Proceedings, volume 2024, pages 1324–1333. AMIA.

Kun Zeng, Zhiwei Pan, Yibin Xu, and Yingying Qu. 2020. An ensemble learning strategy for eligibility criteria text
classification for clinical trial recruitment: Algorithm development and validation. JMIR Medical Informatics, 8(7):e17832.

Gongbo Zhang, Qiao Jin, Yiliang Zhou, Song Wang, Betina Idnay, Yiming Luo, Elizabeth Park, Jordan G. Nestor, Matthew E. Spotnitz, Ali Soroush, Thomas R. Campion Jr, Zhiyong Lu, Chunhua Weng, and Yifan Peng. 2024a. Closing the gap between open source and commercial large language models for medical evidence summarization. NPJ Digital Medicine, 7(1):239.

Gongbo Zhang, Yiliang Zhou, Yan Hu, Hua Xu, Chunhua Weng, and Yifan Peng. 2024b. A span-based model for extracting overlapping PICO entities from RCT publications. J. Am. Med. Inform. Assoc.

Yixuan Zhang, Junzhen Liu, and Wei Lu. 2023. Medictgp: An accurate entity recognition model combining medical domain knowledge and globalization ideas. In Proceedings of the 2023 9th International Conference on Computing and Artificial Intelligence, ICCAI '23, pages 477–483, New York, NY, USA. Association for Computing Machinery.

Ce Zheng, Hongfei Ye, Jinming Guo, Junrui Yang, Ping Fei, Yuanzhi Yuan, Danqing Huang, Yuqiang Huang, Jie Peng, Xiaoling Xie, Meng Xie, Peiquan Zhao, Li Chen, and Mingzhi Zhang. 2024. Development and evaluation of a large language model of ophthalmology in Chinese. Br. J. Ophthalmol., 108(10):1390–1397.

A Appendix

A.1 Included Studies

Supplementary Table 1: Overview of Included Studies. P - Precision, R - Recall, Acc - Accuracy, NDCG - Normalized Discounted Cumulative Gain. S - Abstractive Summarization, T - Clinical Trial Design, N - Entity Extraction and Classification, E - Evaluation of Performance, L - Evidence Ranking and Screening, S - Extractive Summarization, Y - Evidence Synthesis, I - Information Retrieval, A - Quality Assessment, Q - Question Answering, R - Relation Extraction.

Study | Model | Disease | Task
Al Hafiz Khan et al. (2019) | RNN | – | TNE
Alexander et al. (2020) | Statistical | Lung Cancer | TNE
Alodadi and Janeja (2019) | Rules | – | NEIR
Aum and Choe (2021) | Transformer | Cognitive Impairment | NELR
Beattie et al. (2024) | LLM | – | TNE
Beck et al. (2020) | Rules | Breast Cancer | TNE
Borchert et al. (2022) | Rules | Cancer | NR
Brassey et al. (2021) | Rules | – | NYA
Brockmeier et al. (2019) | Transformer | – | NL
Cai et al. (2021) | Random Forest, Logistic LASSO | Rheumatoid Arthritis | TE
Campillos-Llanos et al. (2021) | Transformer | – | NE
Chen et al. (2019a) | Rules | – | STNER
Chen et al. (2019b) | CNN | Myocardial infarction | TNE
Chen et al. (2019c) | Rules | – | TNE
Chuan and Morgan (2021) | CNN | Cancer | TNEQ
Cunningham et al. (2024) | Transformer | Heart Failure | TNE
Daluwatumulle et al. (2022) | Graph | Alzheimer's Disease (AD) | NEI
Datta et al. (2024) | LLM | – | NEL
Devi et al. (2024b) | LLM | Non-Small Cell Lung Cancer (NSCLC) | TNE
DeYoung et al. (2021) | Transformer | – | SNER
Deng et al. (2019) | Graph, Statistical | Coronary heart disease, Chest pain, Bronchitis | EQ
Do et al. (2024) | Rules, Statistical | Cancer | TN
Dobbins et al. (2022) | Transformer | – | TNER
Dobbins et al. (2023) | Transformer, Rules | – | TNEIR
Dhrangadhariya et al. (2020) | Statistical, Learning to Rank | – | NEYA
Dhrangadhariya and Müller (2023) | Transformer | – | NE
Dhrangadhariya et al. (2024) | Rules, Statistical | – | NE
Dhayne et al. (2021) | SVM, CNN, RNN, Transformer | Viral Infection | TNE
Du et al. (2021) | Graph | COVID-19 | EI
Fang et al. (2021) | Rules | – | TN
Gates and Hamed (2020) | Graph, Statistical | SARS | NL
Ghosh et al. (2024a) | LLM | – | NE
Ghosh et al. (2024b) | Transformer, RNN | – | NE
Górska and Tacconelli (2024) | Transformer, LLM | – | NLY
Gulden et al. (2019) | Graph | – | NES
Gwon et al. (2024) | LLM | Peyronie Disease | NEIA
Hamed et al. (2023) | LLM | Diabetic Ketoacidosis | SEQ
Hassanzadeh et al. (2020) | Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Stochastic Gradient Descent (SGD), RNN | – | TNEI
Horst et al. (2023) | Graph, CRF, Statistical | Spinal Cord Injury | TNEY
Hu et al. (2023) | Transformer | COVID-19, AD | NE
Huang et al. (2024) | LLM | Glaucoma | EQ
Jang et al. (2022) | Transformer | Recurrent Glioblastoma | NEQ
Jiang et al. (2024) | LLM | – | SEQ
Jin et al. (2022) | Transformer | – | EIA
Jin et al. (2024) | Transformer, LLM | – | TELI
Johnston et al. (2024) | Vector Space Model, Statistical | l-DOPA-induced dyskinesia in Parkinson | NL
Kamath et al. (2021) | Statistical | – | TLI
Kanbar et al. (2022) | SVM, Naive Bayes, Random Forest | Epilepsy | TNE
Kang et al. (2019) | RNN | – | NE
Kang et al. (2021) | Transformer, RNN | – | NE
Kang et al. (2023) | Transformer | COVID-19 | SNEY
Kaskovich et al. (2023) | SVM | Pediatric Leukemia | TNEL
Kefeli and Tatonetti (2024) | BERT | Cancer | NE
Khan et al. (2021) | Statistical | Chronic Rhinosinusitis with Nasal Polyps | N
Kim et al. (2023a) | LR, NB, kNN, SVM, CNN, RNN, FastText, Transformer, ERNIE | Hepatocellular Carcinoma | TNE
Kim et al. (2023b) | Transformer | – | TNE
Koopman et al. (2021) | Graph | Cancer | NI
Koopman and Zuccon (2022) | MMR | Cancer | TLI
Kury et al. (2020) | Rules | – | TE
Lalitha et al. (2023) | T5 (Text-to-Text Transfer Transformer), BART (Bidirectional Auto-Regressive Transformer) and PEGASUS (Pre-training with Extracted Gap-sentences for Abstractive Summarization Sequence-to-sequence) | – | SE
Lan et al. (2024) | Transformer | – | NEA
Lee et al. (2024) | RNN | Cancer | TNE
Li et al. (2021a) | Vector Space model | – | LI
Li et al. (2021b) | Rules | – | N
Li et al. (2022) | Transformer | – | TNE
Li et al. (2024) | Transformer | Stroke, Colorectal Cancer, Coronary Heart Disease, Heart Failure, Chronic Obstructive Pulmonary Disease, Diabetes, Diabetic Nephropathy, Osteoarthritis, Obesity, Rheumatoid Arthritis, and Diarrhea | NE
Liu et al. (2019) | RNN | – | NEI
Liu et al. (2021) | Rules | – | TNER
Liu et al. (2022) | Rules, RNN | – | TNR
Lokker et al. (2023) | Transformer | – | EIA
Malik et al. (2020) | Statistical | – | N
Marshall et al. (2023) | Transformer | COVID-19 | NYI
Mayer et al. (2021) | GRU, CRF, RNN, Transformer | – | NER
Meystre et al. (2023) | Rules | – | TNL
Mishra et al. (2019) | Statistical | Craniofacial Abnormalities | NE
Mohammed and Fiaidhi (2023) | Graph | – | NI
Mohammed et al. (2023) | Graph | – | NES
Mohammed and Fiaidhi (2024) | LLM | – | NEYQ
Murcia et al. (2024) | Transformer | – | TN
Mutinda et al. (2022b) | Rules, Statistical, Transformer | Breast cancer | NEY
Myszewski et al. (2022) | Transformer | – | NEA
Navarro-Ruan and Haynes (2022) | Rules, Statistical | – | LI
Newbury et al. (2023) | Rules | – | NE
Nguyen et al. (2019) | Transformer, RNN, SVM | – | LI
Ni et al. (2019) | Rules | Respiratory Tract Infection, Traumatic Brain Injury, and Serious Bacterial Infections | TNE
Nievas et al. (2024) | LLM | – | TE
Norman et al. (2019b) | Statistical | – | L
Nurmambetova et al. (2023) | Random Forest, XGBoost | Acquired Pressure Injuries | NE
Pan et al. (2021) | Transformer, Graph | COVID-19 | NESQR
Ramprasad et al. (2023a) | Transformer, Longformer | – | SI
Rony et al. (2023) | LLM | – | EQ
Rybinski et al. (2020a) | Rules | – | I
Rybinski et al. (2020b) | Transformer | – | NELI
Sadek et al. (2023) | Knowledge | – | NI
Saiz et al. (2021) | Gradient-boosted Trees | Cancer | TNEL
Samuel et al. (2021) | Statistical | Autism | EI
Sanchez-Graillet et al. (2022) | Transformer, Schema | Glaucoma, Type 2 diabetes mellitus | NE
Sarker et al. (2020) | Statistical | COVID-19 | ES
Segura-Bedmar and Raez (2019) | CNN, RNN | – | TE
Shiraishi et al. (2024) | LLM | Blepharoptosis | EQ
Spasic et al. (2019) | SVM, Logistic Regression, Naive Bayes, Gradient Tree Boosting, Rules, Decision Trees, Random Forests | – | TNE
Stylianou et al. (2020) | Transformer, RNN | – | NE
Stylianou and Vlahavas (2021) | Transformer | – | NE
Tian et al. (2021) | Transformer | – | TNA
Tian et al. (2023) | Statistical, Transformer | AD | TNE
Tissot et al. (2020) | Rules | Organ dysfunction in septic shock | TNEI
Tsubota et al. (2022) | Transformer | – | NEI
Tun et al. (2023) | Rules | Cardiovascular Events | TE
Turfaha et al. (2022) | Rules | – | NE
Unlu et al. (2024) | LLM | Heart Failure | EQ
Vora et al. (2023) | Rules | mRNA Cancer | NI
Vydiswaran et al. (2019) | Rules | – | TNE
Wang and Sun (2022) | Transformer | – | NEI
Wang et al. (2023a) | Statistical, Transformer | – | NE
Wang et al. (2023b) | LLM | – | TEYR
Wang et al. (2024) | CNN, Graph, Rules | Hepatocellular Carcinoma | NE
Witte et al. (2024) | Transformer | Glaucoma, Type II Diabetes | NE
Xie et al. (2022) | Transformer | – | NS
Xie et al. (2023) | LLM | Rhinoplasty | Q
Xie et al. (2024) | LLM | – | SEYA
Xu et al. (2023) | Graph | Cancer | NLA
Yang et al. (2023) | Transformer | Cancer | TN
Yao et al. (2021) | Not specified | Atrial Fibrillation | TNE
Yazi et al. (2021) | Transformer | – | EA
Yuan et al. (2024) | LLM | – | TNE
Zeng et al. (2020) | Transformer | – | NA
Zhang et al. (2023) | Transformer, RNN | – | NE
Zhang et al. (2024b) | Transformer | COVID-19, AD | NEI
Zheng et al. (2024) | LLM | Glaucoma | EQ

A.2 Benchmark Dataset

Supplementary Table 2: Overview of recent benchmark datasets. P - Population. I - Intervention. C - Comparison. O - Outcome (Eldawlatly et al., 2018). RCT - Randomized controlled trial. ETC - Anything that doesn't fit into the categories above. GPG - GNU Privacy Guard. CMS - Content Management System.

Dataset | Avail. | Label | Annotation | Description
Alzheimer's disease RCT (Hu et al., 2023) | Public | P, I, C, O | Manual | 150 Alzheimer's disease RCT abstracts
Chia (Kury et al., 2020) | Public | Non-query-able, Post-eligibility, Informed consent, Pregnancy considerations, Parsing error, Non-representable, Competing trial, Context error, Subjective judgment, Not a criteria, Undefined semantics, Intoxication considerations | Manual | Eligibility statements from 1,000 clinical trials; includes 12,409 annotated eligibility criteria
Clinical trials on cancer (Rony et al., 2023) | Private | Eligible, Not eligible | Manual | 6M eligibility statements in clinical trials
COVID-19 corpus (Hu et al., 2023) | Public | P, I, C, O | Manual | 150 COVID-19 RCT abstracts
CT-EBM-SP (Campillos-Llanos et al., 2021) | Public | Anatomy, pharmacological and chemical substances, pathologies, and lab tests, diagnostic or therapeutic procedures | Manual | 1,200 texts about clinical trials with entities
EBM-COMET (Ghosh et al., 2024b) | Public | Physiological or clinical, Death, Life impact, Resource use, Adverse events | Manual | 300 RCT abstracts
EBM-NLP (Nye et al., 2018) | Public | P, I, O | Manual | 4,993 medical abstracts from the literature on PubMed
EliIE (Testa et al., 2023) | Public | Condition, observation, drug/substance, and procedure or device | Manual | 230 Alzheimer's disease RCT documents
GGPONC (Borchert et al., 2020) | Public | Recommendation creation date, Type of recommendation, Recommendation grade, Strength of consensus, Total vote in percentage, Literature references, Expert opinion, Level of evidence, Edit State | Automatic | 25 GPGs with 8,414 text segments from the CMS
LCT (Dobbins et al., 2022) | Public | Clinical, Demographic, Logical, Qualifiers, Temporal and Comparative, Other | Manual | 1,000+ eligibility statements
Limsi-Cochrane dataset (Norman et al., 2019a) | Public | Systematic reviews, Included studies, Data forms, Text entries, Excluded studies, Diagnostic tests, Test results, Study IDs, Numerical, Summary scores | Manual | 1,939 meta-analyses from 63 systematic reviews of diagnostic test accuracy from the Cochrane Library
MedReview (Zhang et al., 2024a) | Private | Medical-related topics (e.g. Wounds, Urology) | Manual | Meta-analysis results and narrative summaries from the Cochrane Library
MS^2 (DeYoung et al., 2021) | Public | Background, Goal, Methods, Detailed findings, Further study, Recommendation, Evidence quality, effect, ETC | Manual | 470K documents and 20K summaries from the scientific literature
NICTA-PIBOSO (Kim et al., 2024) | Public | P, I, O, Background, Study Design, Other | Manual | 1,000 biomedical abstracts
PICO-Corpus (Mutinda et al., 2022a) | Public | P, I, C, O | Manual | 1,011 breast cancer RCT abstracts
RedHOT (Ghosh et al., 2024b) | Public | P, I, O | Manual | 22,000 social media posts from Reddit spanning 24 health conditions
Illness dataset (Rony et al., 2023) | Private | Alzheimer's, Parkinson's, Cancer, and Diabetes domains | Manual | 22,660 tweets
Symptom2Disease dataset (Rony et al., 2023) | Private | 24 diseases, each described by 50 symptom profiles | Manual | 1,200 data points
Trialstreamer (Marshall et al., 2020) | Public | P, I, O, RCT classifiers | Manual | 191 RCT publications

A.3 Queries

{nlp_keywords} = (natural language processing OR nlp OR language model OR large language model OR llm OR computational linguistics OR information extraction OR information retrieval OR clinical trial retrieval OR text summarization OR question answering OR sentence segmentation OR ner OR named entity recognition OR tokenization)

{ebm_keywords} = (evidence-based medicine OR ebm OR evidence-based practice OR ebp OR clinical trial)

PubMed: (({nlp_keywords}[Title/Abstract]) AND ({ebm_keywords}[Title/Abstract]))

IEEE Xplore: (("Abstract":{nlp_keywords}) AND ("Abstract":{ebm_keywords})) OR (("Title":{nlp_keywords}) AND ("Title":{ebm_keywords}))

ACM: (Abstract:{nlp_keywords} AND Abstract:{ebm_keywords}) OR (Title:{nlp_keywords} AND Title:{ebm_keywords})

ACL: (Abstract:{nlp_keywords} AND Abstract:{ebm_keywords}) OR (Title:{nlp_keywords} AND Title:{ebm_keywords})
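As a rough illustration of how the Boolean strings in A.3 can be assembled programmatically, a script along these lines could build the PubMed query from the two keyword groups. The helper functions and variable names are our own, not part of the review protocol; only the keyword lists, the OR/AND structure, and PubMed's [Title/Abstract] field tag come from the queries above.

```python
# Illustrative sketch: assembling the A.3 PubMed search string from the
# NLP and EBM keyword groups. Helper names are hypothetical.
NLP_KEYWORDS = [
    "natural language processing", "nlp", "language model",
    "large language model", "llm", "computational linguistics",
    "information extraction", "information retrieval",
    "clinical trial retrieval", "text summarization",
    "question answering", "sentence segmentation", "ner",
    "named entity recognition", "tokenization",
]
EBM_KEYWORDS = [
    "evidence-based medicine", "ebm", "evidence-based practice",
    "ebp", "clinical trial",
]

def or_group(terms):
    """Join a keyword group into a single parenthesized OR expression."""
    return "(" + " OR ".join(terms) + ")"

def pubmed_query(nlp_terms, ebm_terms):
    """Restrict each group to title/abstract, then AND the two groups."""
    return (f"({or_group(nlp_terms)}[Title/Abstract]) AND "
            f"({or_group(ebm_terms)}[Title/Abstract])")

query = pubmed_query(NLP_KEYWORDS, EBM_KEYWORDS)
```

The IEEE Xplore, ACM, and ACL variants differ only in how the field restriction is written, so the same two OR groups can be reused with different wrappers.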
arXiv:2505.22287v1 [cs.CY] 28 May 2025

New Tools are Needed for Tracking Adherence to AI Model Behavioral Use Clauses

Daniel McDuff∗ Tim Korjakow Kevin Klyman Danish Contractor

Abstract

Foundation models have had a transformative impact on AI. A combination of large investments in research and development, growing sources of digital data for training, and architectures that scale with data and compute has led to models with powerful capabilities. Releasing assets is fundamental to scientific advancement and commercial enterprise. However, concerns over negligent or malicious uses of AI have led to the design of mechanisms to limit the risks of the technology. The result has been a proliferation of licenses with behavioral-use clauses and acceptable-use policies, which are increasingly being adopted by commonly used families of models (Llama [34], Gemma [33], DeepSeek [23]) and a myriad of smaller projects. We created and deployed a custom AI license generator to facilitate license creation and have quantitatively and qualitatively analyzed over 300 customized licenses created with this tool. Alongside this, we analyzed the licenses of 1.7 million models on the HuggingFace model hub. Our results show increasing adoption of these licenses, interest in tools that support their creation, and a convergence on common clause configurations. We take the position that tools for tracking adoption of, and adherence to, these licenses are the natural next step and are urgently needed to ensure they have the desired impact of promoting responsible use.

1 Introduction

The rapid development and deployment of artificial intelligence (AI) have raised concerns about its potential misuse, such as the generation of harmful content, violation of laws, and perpetuation of biases. To address these concerns, researchers and developers have begun exploring ways to manage the risks associated with AI, including the use of licenses with behavioral use clauses [5].
These licenses, sometimes referred to as Responsible AI Licenses (RAIL), allow developers to release AI assets while specifying restrictions on their use, thus promoting transparency and accountability, and are a key tool in the responsible development of foundation models [25]. The adoption of RAIL licenses (or licenses with behavioral-use clauses)* has gained momentum in recent years, with notable examples including the models from the BigScience [36] and BigCode [21] initiatives, Stable Diffusion [28], the Llama family of language models [34], the Gemma family of language models [32, 33], and many of the models from DeepSeek [22, 23]. These licenses restrict the use of AI assets in various ways, such as prohibiting the generation of harmful content or the use of AI for malicious purposes [26]. As RAIL licenses have become more common, regulatory bodies and think tanks have begun to encourage their use. The French Government's PEReN center of expertise, which serves digital regulation, expresses an explicit preference for models with a high degree of openness but with a "license which does not allow unethical usage".*

*Correspondence: dmcduff@uw.edu
*We make no distinction between licenses using behavioral-use clauses and licenses using the RAIL acronym in this work.
*https://www.peren.gouv.fr/en/compare-os-iag/

Preprint. Under review.

Despite the growing adoption of Responsible AI Licenses, there is a need for further research on their effectiveness and limitations. Previous studies
have highlighted the challenges of regulating AI, including the complexity of AI supply chains and the uncertainty surrounding copyright rules. Furthermore, [26] argue that the lack of standardization in licenses with behavioral use clauses can lead to confusion and inconsistency, and suggest the use of tooling to help with standardization. [26]'s detailed study highlighted how the share of such licenses applied to models grew from close to 0 to over 20% in the 18 months from mid-2022. Adoption of such licenses was driven in part by major foundation model developers [32, 34, 36]. The continued rise in the popularity of these licenses motivates our position in this paper. We take the position that new tools are needed for tracking the adoption of, and adherence to, behavioral use clauses in AI licenses. In this paper, we present two sets of analyses that provide evidence to motivate this position. The first is an in-depth examination of the usage of an open-source RAIL License Generator proposed by McDuff et al. [26]. The license generator allows practitioners to select usage restrictions from a library of pre-populated clauses, which are appended to a set of 10 mandatory clauses identified by an analysis of existing RAIL licenses [26]. Additionally, it allows specifying the artifacts being licensed (model, source code, or application) as well as the nature of the release: research-use, open, or proprietary (see Section 2). The second study is a large-scale analysis of the licenses used by 1,704,180 models available on the HuggingFace model hub, which highlights the continued large-scale adoption of RAIL licenses and the coherence of license clauses. Our studies provide a unique window into the practical application of RAIL licenses, revealing growing adoption and several canonical modes of adoption.
By analyzing the selection and combination of behavioral-use clauses, we can identify areas of convergence and divergence in the community's understanding of responsible AI development and use. We study the licenses generated by users to gain a deeper understanding of how practitioners are selecting and applying these clauses. Our analysis reveals interesting trends and patterns in the selection of behavioral-use clauses, shedding light on the types of restrictions that users deem most important for responsible AI development and use. For example, we find that clauses related to disinformation/fake news were among the most frequently selected, indicating particular concern that text and multimodal content generation models could be dangerous for these purposes. In contrast, clauses related to warfare and injury or death were less commonly chosen. We believe these outcomes are viewed not as less damaging but as much less likely to result from a generative foundation model. As embodied AI and robotics applications grow, such clauses may become more prominent. To summarize, while there are now tools to support the creation and customization of licenses, the evidence now supports the case for new tools that help practitioners track how licenses are being adopted and how licensed assets are being used. Without such tools, cynicism about the effectiveness and enforceability of such licenses might grow, and violations of licenses, with their associated negative impacts, might
increase. We argue that the availability, or unavailability, of such tools will have significant implications for the positive impact of Responsible AI Licenses. Furthermore, our findings can inform policymakers and regulators seeking to establish frameworks for the governance of AI, ensuring that these frameworks are grounded in the realities of AI development and use.

2 Case Study I: A RAIL License Generator

We adopt the open-source license generator proposed by McDuff et al. [26] for our work. The generator was hosted publicly at https://www.licenses.ai/rail-license-generator. It has a three-step workflow for generating licenses (see Fig. 1). First, users indicate whether the license being generated is for artifacts that can be used for research only (ResearchRAIL), proprietary use (RAIL), or open use (OpenRAIL). Next, users select the artifact being licensed: a model (M), source code (S), or an application (A). Existing behavioral-use clauses largely apply to models, though there are instances of some being applied to source code and applications. Finally, users are presented with a set of 10 mandatory clauses and 15 optional clauses that they can choose to include in their license (see Table 1). We use the clauses identified previously by [26], but as we report in Section 2.2, the adoption of clauses by the community suggests some revision may be necessary. The process of generating a license is summarized in Figure 1.

Figure 1: License Generator. The RAIL License Generator enables a user to select a type of license and the asset they want to license and then customize the behavioral use clauses. The final license text can be exported in several ways along with a quick response (QR) code.

The license generator was launched on March 13th, 2024, and the data used in the analysis for this paper was pulled on May 16th, 2025. Figure 2[A] shows the cumulative number of licenses created each month over the past year.
ResearchRAIL, RAIL and OpenRAIL are different "flavors" of RAIL licenses as described in [26]. Over a period of 14 months, over 300 complete licenses were generated. We exclude licenses from the analysis that were clearly generated for testing purposes, specifically those whose names contain the substring "test" or have an empty name. This exclusion reduces noise in the collected dataset and allows for a clearer picture when analyzing the connections between restrictions. Examples of artifacts licensed using the generator span various modalities and use cases, including vision models in maritime settings [6], [29], autonomous driving [11], earth observation [35], medical analysis [16], large language models [24], [30], [14] and 3D foundation models [31].

2.1 Architecture and Deployment

The license generator was implemented using a FastAPI backend paired with a frontend composed exclusively of HTML and CSS. Persistent storage and template version management are handled by a PostgreSQL database. Each generated license includes the complete license text, user-selected restrictions and artifacts, a unique license identifier, and the Git commit hash corresponding to the specific template version used. Given that the codebase is open source, this design provides transparent and publicly verifiable provenance
for each generated license. Furthermore, the inclusion of unique identifiers and immutable license records enables distribution not only in text form, but also through QR codes. The license generator has been publicly deployed, making it openly accessible to the general community.

2.2 Analysis

Number of Licenses Created. Figure 2[A] shows the rate at which licenses were created using our tool. With little additional marketing or publicity, over 300 customized licenses were created in just over one year. The number of licenses created each week has also accelerated. All three license types were popular, with ResearchRAIL licenses (which restrict assets to research-only use) being the most commonly selected.

Figure 2: License Generator Usage. [A] Number of Licenses Created. The number of RAIL, OpenRAIL and ResearchRAIL licenses created using our license generator has been accelerating over the year from April 2024. [B] Clause Adoption by Asset Type. Clauses selected for every license generated for each type of asset. [C] Connectivity Plot of Non-Mandatory Clauses. The circular plot highlights the heterogeneity across licenses in terms of the non-mandatory clauses selected. The opacity of each line reflects the number of licenses that include both clauses.

License Types and Artifacts Selected. Figure 2[B] summarizes how the license clauses were selected for different artifacts.
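The test-license exclusion and the clause co-occurrence counting behind the connectivity plot (Figure 2[C]) can be sketched roughly as follows; the record structure and field names are assumptions for illustration, not the paper's analysis code.

```python
from itertools import combinations
from collections import Counter

# Toy records mimicking the generator's stored licenses
# (name + selected non-mandatory clause IDs).
records = [
    {"name": "my-model-license", "clauses": [11, 14, 20]},
    {"name": "test", "clauses": [11]},   # excluded: test artifact
    {"name": "", "clauses": [14]},       # excluded: empty name
    {"name": "vision-app", "clauses": [11, 14]},
]

# Exclusion rule from the paper: drop names containing "test" or empty names.
kept = [r for r in records if r["name"] and "test" not in r["name"].lower()]

# Count how often each pair of non-mandatory clauses appears together,
# which is what the opacity of each connectivity-plot line encodes.
pairs = Counter()
for r in kept:
    for a, b in combinations(sorted(r["clauses"]), 2):
        pairs[(a, b)] += 1

print(len(kept))        # 2
print(pairs[(11, 14)])  # 2
```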
The first 10 clauses are the mandatory clauses and are therefore present in all licenses. Models are the most common asset that licenses were generated for: of the 308 licenses, more than 50% included models as the artifact being licensed. Additionally, we find that users of our license generator also find value in licensing other artifacts, such as source code and end applications, with behavioral-use restrictions. Notably, nearly 50% of the licenses included restrictions on the use of source code. This perhaps indicates that AI developers are also concerned about the inappropriate use of powerful AI source code, though it is unclear to what extent this reflects a wider need for such AI license types, as existing licenses with behavioral-use restrictions have traditionally only been applied to models or end-use.

Figure 3: RAIL License Release Timeline. Notable milestones in the adoption and standardization of RAIL and Open Source AI licenses.

Artifacts and Clauses Selected. The choice of license clauses varies by asset type (see Figure 2[B]). We find that, on average, users tend to select more behavioral-use restrictions when licensing models and the fewest when licensing source code. When licensing applications together with source code, users appear to select more clauses than when licensing applications alone. This perhaps
suggests that applications, because they are designed for more specific purposes than generic models or code, are viewed as needing fewer behavioral-use restrictions.

Behavioral-use clauses selected by users. The license generator had a minimum set of clauses, indicated in Table 1 by a green dot. The most popular clauses related to uses that intentionally deceive or mislead, violate laws, or create/disseminate malware. The generation of deceptive or misleading content (e.g., fake news) is an easily imaginable malicious application of a text generation model. The least selected were related to uses connected with activities that present a risk of death or bodily harm. It is likely that these were less frequently selected because it is harder to imagine how a text generation model would lead to such an application. Figure 2[C] shows a connectivity (circular) plot that reflects which non-mandatory clauses were selected together.

3 Case Study II: Commonly Adopted AI Licenses

Next, we performed a large-scale analysis of the models hosted on the HuggingFace Hub to identify the licenses associated with AI models. We analyzed 1.7M model repos, of which almost 650,000 had licenses. As Figure 4[A] shows, RAIL licenses continue to represent a sizable minority of projects compared to Open Source licensed projects (those using the MIT License, Apache License 2.0, Berkeley Software Distribution (BSD) or GNU General Public License (GPL) families of licenses). RAIL licenses are used for 12.1% of models, compared to 61.5% for Open Source. The timeline in Fig. 3 shows some notable milestones in RAIL adoption and standardization. How consistent are the behavioral-use clauses across these licenses? A motivation for the RAIL License Generator was the observation that customized licenses were desired [26]. With the proliferation of behavioral-use licenses created by other parties, not only have clauses been added/removed but clause texts have also been changed.
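A rough sketch of how license tags might be bucketed into the RAIL / Open Source / Other categories of Figure 4[A]; the tag strings and matching rules are simplified assumptions, not the exact classification used in the paper.

```python
# Bucketing HuggingFace license tags into RAIL / Open Source / Other.
# Prefixes cover the MIT, Apache, BSD and GPL families named in the paper.
OS_PREFIXES = ("mit", "apache", "bsd", "gpl", "lgpl", "agpl")

def bucket(license_tag):
    """Classify one license tag string into a coarse family."""
    tag = license_tag.lower()
    if "rail" in tag:          # openrail, bigscience-openrail-m, ...
        return "RAIL"
    if tag.startswith(OS_PREFIXES):
        return "OS"
    return "Other"             # e.g. custom community licenses

tags = ["openrail", "apache-2.0", "mit", "llama2", "bigscience-openrail-m"]
counts = {}
for t in tags:
    counts[bucket(t)] = counts.get(bucket(t), 0) + 1
print(counts)  # {'RAIL': 2, 'OS': 2, 'Other': 1}
```

Applied to the license tags exposed as metadata by the Model Hub, a pass like this yields the family shares (12.1% RAIL vs. 61.5% Open Source) reported above.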
Table 1 shows the 25 license clauses in the RAIL License Generator and how they appear in some recent RAIL licenses. The last column in the table indicates how many (in %) of the licenses generated by our license generator include those clauses.

Table 1: Summary of Behavioral-Use Clauses. Clauses included in popular responsible AI licenses (AIPubs RAIL, BigScience OpenRAIL, CodeML OpenRAIL, LLaMA 2, FALCON, ImpACT L/M/H, GRID, DBRX, DeepSeek, Tencent, Gemma, Claude, OpenAI, FLUX, IEEE P2840). The ✓ marks indicate which of these licenses include each clause; the final percentage is the share of generator-created licenses that include it.

Discrimination
(1) To discriminate or exploit individuals or groups based on legally protected characteristics and/or vulnerabilities. ✓✓✓✓ ✓✓✓✓ ✓✓ ✓ — 100%
(2) For purposes of administration of justice, law enforcement, immigration, or asylum processes, such as predicting that a natural person will commit a crime or the likelihood thereof. ✓✓ ✓✓✓ ✓ ✓ — 100%
(3) To defame, disparage or otherwise harass or exploit others. ✓✓✓✓✓ ✓✓✓✓✓✓✓ — 24%
(4) To engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, or other essential goods and services. — 18%

Disinformation
(5) To create, present or disseminate verifiably false or misleading information for economic gain or to intentionally deceive the public, including creating false impersonations of natural persons. ✓✓✓✓ ✓✓✓✓✓✓✓ ✓ — 100%
(6) To synthesize or modify a natural person’s appearance, voice, or other individual characteristics, unless prior informed consent of said natural person is obtained. ✓✓✓✓ ✓ ✓✓ ✓ — 100%
(7) To generate or disseminate information (including - but not limited to - images, code, posts, articles), and place the information in any public context without expressly and intelligibly disclaiming that the information and/or content is machine generated. ✓✓✓✓ ✓✓✓ ✓✓✓✓ — 18%
(8) To autonomously interact with a natural person, in text or audio format, unless disclosure and consent is given prior to interaction that the system engaging in the interaction is not a natural person. ✓ ✓ — 100%
(9) To defame or harm a natural person’s reputation, such as by generating, creating, promoting, or spreading defamatory content (statements, images, or other content). ✓ ✓ — 18%

Legal
(10) To engage or enable fully automated decision-making that creates, modifies or terminates a binding, enforceable obligation between entities; whether these include natural persons or not. ✓✓✓ ✓✓✓✓✓✓✓ ✓ — 100%
(11) In any way that violates any applicable national, federal, state, local or international law or regulation. ✓✓✓✓✓ ✓✓✓✓✓✓✓✓ — 23%
(12) To engage or enable fully automated decision-making that adversely impacts a natural person’s legal rights without expressly and intelligibly disclosing the impact to such natural person and providing an appeal process. ✓ ✓ ✓ — 100%

Privacy
(13) To utilize personal information to infer additional personal information about a natural person, including but not limited to legally protected characteristics, vulnerabilities or categories; unless informed consent from the data subject to collect said inferred personal information for a stated purpose and defined duration is received. ✓ ✓✓✓ ✓ — 100%
(14) To generate or disseminate personal identifiable information that can be used to harm an individual or to invade the personal privacy of an individual. ✓✓✓✓ ✓✓✓✓✓✓✓ — 20%
(15) To engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals. ✓✓✓✓✓ ✓ ✓✓✓ — 21%

Health
(16) To provide medical advice or make clinical decisions without necessary (external) accreditation of the system; unless the use is (i) in an internal research context with independent and accountable oversight and/or (ii) with medical professional oversight that is accompanied by any related compulsory certification and/or safety/quality standard for the implementation of the technology. ✓✓ ✓ ✓ ✓ ✓ ✓ — 100%
(17) In connection with any activities that present a risk of death or bodily harm to individuals, including self-harm or harm to others, or in connection with regulated or controlled substances. ✓ ✓✓✓✓ — 14%
(18) To provide medical advice and medical results interpretation without external, human validation of such advice or interpretation. ✓ — 14%
(19) In connection with activities that present a risk of death or bodily harm to individuals, including inciting or promoting violence, abuse, or any infliction of bodily harm. ✓ ✓✓✓ — 13%

Military
(20) For weaponry or warfare. ✓ ✓✓ ✓ ✓✓ — 100%
(21) For purposes of building or optimizing military weapons or in the service of nuclear proliferation or nuclear weapons technology. ✓ ✓ ✓ ✓ — 17%
(22) For purposes of military surveillance, including any research or development relating to military surveillance. ✓ ✓ ✓ ✓ — 16%

Other
(23) Generate/disseminate malware/ransomware or other content for the purpose of harming electronic systems. ✓ ✓ ✓✓✓✓ — 19%
(24) To intentionally deceive or mislead others, including failing to appropriately disclose to end users any known dangers of your system. ✓ ✓ — 25%
(25) In connection with any academic dishonesty, including submitting any informational content or output of a Model as Your own work in any academic setting. ✓ ✓ — 17%

Figure 4: License Adoption. [A] Number of Models Licensed. The number of RAIL, OS and other licensed models on the HuggingFace Model Hub from 2023 to 2025. All other models did not have a license. [B] Bi-gram Overlap in Behavioral-Use Clauses. The percentage of two-grams that overlap between clauses. There is significant overlap in the text content of certain licenses and most licenses have at least 10% overlap with at least one other license.

Notable large language models that have leveraged RAIL licenses include BLOOM [36], StarCoder [21], Meta’s Llama [34], Google’s Gemma [32] (acceptable use policy), DeepSeek [22, 23], MiniMax-01 [19], Tencent’s Hunyuan-Large [1], Falcon [3] and Databricks’ DBRX.* Prominent model families with open source licenses without behavioral-use clauses include Mistral [15], IBM’s Granite [12], Alibaba’s Qwen [4] and Microsoft’s Phi [2]. Figure 4[B] shows the bi-gram (sequences of two consecutive words) overlap between behavioral-use clauses in different licenses. This analysis shows similarity between almost all the licenses, with some containing almost identical language. As an example, the BigScience OpenRAIL-M, Llama and Gemma licenses all contain similar restrictions related to defamation and harassment:

BigScience - https://huggingface.co/spaces/bigscience/license Clause 1.d. - "To defame, disparage or otherwise harass others;"
Llama 2/3 - https://ai.meta.com/llama/use-policy/ Clause 1.c. - "Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals."

Gemma - https://ai.google.dev/gemma/prohibited_use_policy Clause 2.e.ii. - "Generation of content that may harm or promote the harm of individuals or a group, such as: Generating content that promotes or encourages hatred; Facilitating methods of harassment or bullying to intimidate, abuse, or insult others;"

Apart from behavioral-use restrictions, licenses such as the Llama license include considerations for commercial use. Some licenses, such as the AIPubs OpenRAIL, permit use for research purposes only. While a study of licenses on dimensions beyond behavioral-use clauses is outside the scope of this work, we encourage interested readers to review recent work by [8, 18, 20].

*https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm

4 Tools are Needed for Tracking Adoption of and Adherence to Licenses

RAIL licenses are being adopted. The choice to release AI code, models and applications has significant implications for how the technology impacts people and society. Adoption of behavioral-use licenses for machine learning models, particularly foundation models, illustrates the desire of researchers to adopt mechanisms to control how these assets are used. Based on analysis of the HuggingFace Model Hub, a sizable portion of models have licenses that include behavioral-use restrictions that reflect how they can be used. The restrictions across these licenses have a large amount of overlap, both in terms of the intention
behind clauses and their specific wording. The release of models such as Llama 2 [34], Gemma [32] and DeepSeek V2 and V3 [22, 23] accelerated the adoption of certain types of licenses. Our license generator is a demonstration of the demand for well-tailored licenses: the availability of our tool organically led to the creation of over 300 customized licenses in the space of a year with little additional promotion or advertising.

Tools exist for tracking license adoption but could be improved. As evidenced by the analyses in this work, the task of tracking license adoption has become substantially easier due to GitHub enabling search of a vast number of code repositories, the HuggingFace Model Hub API* exposing license categories as metadata, and our RAIL License Generator providing a standardized set of language for license clauses. The use of similar language across licenses makes them easier to search. A recently published IEEE standard* could help to further increase the coherence of these licenses, making it easier to track adoption. This tooling builds on prior work to build more standardized and transparent licenses for machine learning models [9]. The lack of a standardized filetype for licenses (some are txt or markdown files, while others are word documents) limits systematic analysis, especially as many license files are titled “LICENSE,” and Hugging Face’s Model Hub API does not enable search by license filetype; our custom AI license generator allows the creation of licenses as markdown files. Tools for analyzing license content also exist [8], such as automated text analysis tools and platforms for carrying out qualitative coding and simplifying subsequent analysis [7, 13], though increasingly capable language models may reduce the need for such tooling over time.

Tools for tracking license adherence are severely lacking. Identifying violations of license terms is a serious challenge.
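The overlap measure of Figure 4[B], which underlies the searchability point above, can be sketched minimally as follows. The paper only specifies "percentage of two-grams that overlap", so the Jaccard formulation and tokenization here are assumptions for illustration.

```python
import re

def bigrams(text):
    """Lowercased word bi-grams (pairs of consecutive words) of a clause."""
    words = re.findall(r"[a-z']+", text.lower())
    return {(a, b) for a, b in zip(words, words[1:])}

def overlap(a, b):
    """Fraction of bi-grams shared between two clause texts (Jaccard)."""
    ba, bb = bigrams(a), bigrams(b)
    if not (ba | bb):
        return 0.0
    return len(ba & bb) / len(ba | bb)

# Two near-identical defamation clauses, echoing the examples in Section 3.
x = "To defame, disparage or otherwise harass others"
y = "To defame, disparage or otherwise harass or exploit others"
print(round(overlap(x, y), 2))  # 0.56
```

Computed pairwise over full clause texts, a measure like this reproduces the pattern reported above: most license pairs share at least some bi-grams, and a few are almost identical.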
Existing approaches to Open Source code monitoring broadly fall into three buckets: 1) automated code scanning, 2) manual code reviews, and 3) community monitoring and reporting. While existing automated code scanning tools would still be helpful to identify when code that is under a RAIL license is being used, they may be less effective at detecting how that code is being used and whether a behavioral restriction is being violated. The solution to tracking adherence to license clauses is more complex, and will likely need socio-technical solutions. Taking inspiration from the Open Source Initiative, greater adoption of tools for monitoring public complaints, such as those on the Open Source Stack Exchange*, would provide a way for community-reported violations to be escalated. While larger organizations might have the resources to track model use, practitioners from academic or smaller organizations (as is the case for many people who used the license generator) may struggle to identify how their assets are being used and whether any of the license terms are being violated. Novel methods for model provenance, such as model fingerprinting [37], or output watermarking, such as robust distortion-free watermarks [10, 17], may in time provide a useful substrate for tracking violations of license terms. However, the lack of viable methods for tracing the provenance of
models or their outputs, coupled with little infrastructure for identifying violations of license terms even when a model and its outputs can be correctly tracked, may lead developers to seek out ways to “short-circuit” model performance in certain domains in an effort to promote adherence to behavioral-use clauses [38].

*https://huggingface.co/docs/hub/en/models-the-hub
*https://standards.ieee.org/ieee/2840/7673/
*https://opensource.stackexchange.com/

5 Alternative Views

There are alternative views to the position taken in this paper about how to release and license AI technology. For instance, the Open Source Initiative (OSI) launched a co-design process to define an Open Source AI definition, partly in response to the community-led adoption of licenses with behavioral-use clauses.*,* The open-source community values the four ‘freedoms’ of free software:* (i) freedom to run the software for any purpose, (ii) freedom to study how the software works, (iii) freedom to re-distribute, and (iv) freedom to distribute modified versions to others. It is therefore clear how licenses that restrict commercial use or impose behavioral-use requirements contradict these freedoms. The recently released Open Source AI Definition (OSAID) v1.0* reinforces the principles of free software and also makes the distinction between ‘Open Source AI’ and ‘Open-Weights’. Open Source AI systems need to release training data when possible and release the weights of trained models, along with the source code to train and evaluate AI models. These artifacts are deemed necessary to comply with the four freedoms, though the definition makes room for practical considerations, such as acknowledging that training data may not always be possible to release (e.g., data including PII). Open-weight models, on the other hand, refer to models whose weights are released without restrictions of any kind but which may not be accompanied by the release of code or data.
Most models released under popular licenses such as the Apache 2.0 and MIT Licenses are thus Open-Weight models (including the ones previously referred to as “Open-Source” models in Section 3). In fact, very few models conform to the definition prescribed by OSAID; for example, models such as OLMo 2* and Google T5 [27] would not.

5.1 Open Weights versus RAIL

The tension between unrestricted access and responsible use of AI is most notable in instances of model providers switching between RAIL and Open Weight releases. For instance, the Stable Diffusion model was first released under a license with behavioral-use restrictions*, then switched to an open source license, before finally reverting to a RAIL license. DeepSeek’s early models, including DeepSeek V3*, are licensed with behavioral-use clauses, but its recent DeepSeek R1 model is licensed under an MIT license.* Model families such as Phi release weights under open source licenses but include guidance to comply with laws and request users to carefully consider the use of the model, particularly in high-risk use cases.*,* This suggests that model providers are actively thinking about the use of their models but grapple with the uncertainty that the inclusion of responsible-use clauses can bring, especially when these have not been standardized or tested in court. Considerations on responsible use have also been applied to artifacts such as datasets - for instance, the Dolma dataset was initially released under an ImpACT license (a license
with behavioral-use restrictions) and then switched to an ODC-By 2.0 license,* which enabled the OLMo family of models (trained on this dataset) to be released under an Apache 2.0 license.

6 Conclusion

Increasingly, AI software assets (models, source code, applications) are being released with licenses that include behavioral-use clauses and acceptable-use policies. While the specific combinations of clauses vary, we find that most continue to use clauses from a relatively short list (N=25). A field study of a license generation tool found organic adoption (N=308 licenses created in 12 months); analysis of these licenses found that 50% of practitioners chose to customize the choice of license clauses, including behavioral-use clauses more often when licensing models than source code or applications. With the trend of license adoption clearly continuing and tools to help people create customized licenses validated, attention needs to move to supporting the tracking of asset usage and of adherence to these licenses.

*https://opensource.org/blog/towards-a-definition-of-open-artificial-intelligence-first-meeting-recap
*https://opensource.org/ai/open-weights
*https://www.gnu.org/philosophy/free-sw.en.html#four-freedoms
*https://opensource.org/ai
*https://allenai.org/blog/olmo2
*https://raw.githubusercontent.com/CompVis/stable-diffusion/main/LICENSE
*https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-MODEL
*https://huggingface.co/deepseek-ai/DeepSeek-R1/blob/main/LICENSE
*https://huggingface.co/microsoft/Phi-3.5-vision-instruct
*https://huggingface.co/microsoft/phi-4#intended-use
*https://allenai.org/blog/making-a-switch-dolma-moves-to-odc-by-8f0e73852f44

References

[1] Hunyuan-Large: An open-source MoE model with 52 billion activated parameters by Tencent. arXiv preprint arXiv:2411.02265, 2024.
[2] Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al.
Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219, 2024.
[3] Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, et al. Falcon-40B: an open large language model with state-of-the-art performance. Findings of the Association for Computational Linguistics: ACL, 2023:10755–10773, 2023.
[4] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023.
[5] Danish Contractor, Daniel McDuff, Julia Katherine Haines, Jenny Lee, Christopher Hines, Brent Hecht, Nicholas Vincent, and Hanlin Li. Behavioral use licensing for responsible AI. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 778–788, 2022.
[6] Ze Cui. orcasound/orca-eye-aye. original-date: 2023-03-17T19:11:42Z.
[7] Kasper Drazewski, Andrea Galassi, Agnieszka Jabłonowska, Francesca Lagioia, Marco Lippi, Hans-Wolfgang Micklitz, Giovanni Sartor, Giacomo Tagiuri, and Paolo Torroni. A corpus for multilingual analysis of online terms of service. Association for Computational Linguistics, 2021.
[8] Moming Duan, Qinbin Li, and Bingsheng He. ModelGo: A practical tool for machine learning license analysis. In Proceedings of the ACM Web Conference 2024, WWW ’24, pages 1158–1169, New York, NY, USA, 2024. Association for Computing Machinery.
[9] Moming Duan, Rui Zhao, Linshan Jiang, Nigel Shadbolt, and Bingsheng He. "They’ve stolen my GPL-licensed model!": Toward standardized and transparent model licensing, 2024.
[10] Pierre Fernandez, Guillaume Couairon, Hervé Jégou, Matthijs Douze, and Teddy Furon. The stable signature: Rooting watermarks in latent diffusion models, 2023.
[11] Florent Bartoccioni, Elias Ramzi, Victor Besnier, Shashanka Venkataramanan, Tuan-Hung Vu, Yihong Xu, Loick Chambon, Spyros Gidaris, Serkan Odabas, David Hurych, Renaud Marlet, Alexandre Boulch, Mickael Chen, Eloi Zablocki, Andrei Bursuc, Eduardo Valle, and Matthieu Cord. VaViM and VaVAM: Autonomous driving through video generative modeling, 2024. original-date: 2024-02-09T07:46:03Z.
[12] IBM Granite
Team. Granite 3.0 language models, 2024.
[13] Alfonso Guarino, Nicola Lettieri, Delfina Malandrino, and Rocco Zaccagnino. A machine learning-based approach to identify unlawful practices in online terms of service: analysis, implementation and evaluation. Neural Computing and Applications, 33:17569–17587, 2021.
[14] Takashi Isobe, He Cui, Dong Zhou, Mengmeng Ge, Dong Li, and Emad Barsoum. AMD-Hummingbird: Towards an efficient text-to-video model, 2025.
[15] Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mixtral of experts, 2024.
[16] Jitesh Joshi, Sos Agaian, and Youngjun Cho. FactorizePhys: Matrix factorization for multidimensional attention in remote physiological sensing. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
[17] Rohith Kuditipudi, John Thickstun, Tatsunori Hashimoto, and Percy Liang. Robust distortion-free watermarks for language models, 2024.
[18] Mark A. Lemley and Peter Henderson. The mirage of artificial intelligence terms of use restrictions. Research Paper 2025-04, Princeton University Program in Law & Public Affairs, 2024.
[19] Aonian Li, Bangwei Gong, Bo Yang, Boji Shan, Chang Liu, Cheng Zhu, Chunhao Zhang, Congchao Guo, Da Chen, Dong Li, et al. MiniMax-01: Scaling foundation models with lightning attention. arXiv preprint arXiv:2501.08313, 2025.
[20] Peihao Li, Jie Huang, and Shuaishuai Zhang. LicenseNet: Proactively safeguarding intellectual property of AI models through model license. Journal of Systems Architecture, 159:103330, 2025.
[21] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. StarCoder: may the source be with you! arXiv preprint arXiv:2305.06161, 2023.
[22] Aixin Liu, Bei Feng, Bin Wang, Bingxuan Wang, Bo Liu, Chenggang Zhao, Chengqi Deng, Chong Ruan, Damai Dai, Daya Guo, et al. DeepSeek-V2: A strong, economical, and efficient mixture-of-experts language model. arXiv preprint arXiv:2405.04434, 2024.
[23] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437, 2024.
[24] Jiang Liu, Jialian Wu, Xiaodong Yu, Sudhanshu Ranjan, Prakamya Mishra, Zicheng Liu, Chaitanya Manem, Yusheng Su, Pratik Prabhanjan Brahma, Gowtham Ramesh, Ximeng Sun, Ze Wang, and Emad Barsoum. Instella: Fully open language models with stellar performance, 2025.
[25] Shayne Longpre, Stella Biderman, Alon Albalak, Hailey Schoelkopf, Daniel McDuff, Sayash Kapoor, Kevin Klyman, Kyle Lo, Gabriel Ilharco, Nay San, et al. The responsible foundation model development cheatsheet: A review of tools & resources. Transactions on Machine Learning Research.
[26] Daniel McDuff, Tim Korjakow, Scott Cambo, Jesse Josua Benjamin, Jenny Lee, Yacine Jernite, Carlos Muñoz Ferrandis, Aaron Gokaslan, Alek Tarkowski, Joseph Lindley, et al. Position: Standardization of behavioral use clauses is necessary for the adoption of responsible licensing of AI. In International Conference on Machine Learning, pages 35255–35266. PMLR, 2024.
[27] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan
Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(1), January 2020.
[28] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
[29] SalmonVision. Salmon-computer-vision/salmon-computer-vision. original-date: 2020-09-14T21:14:11Z.
[30] Ximeng Sun, Aditya Singh, Gowtham Ramesh, Jiang Liu, Ze Wang, Sudhanshu Ranjan, Pratik Prabhanjan Brahma, Prakamya Mishra, Jialian Wu, Xiaodong Yu, Yusheng Su, Emad Barsoum, and Zicheng Liu. Instella-VL-1B: First AMD vision language model, 2025.
[31] Foundation AI Team, Kiran Bhat, Nishchaie Khanna, Karun Channa, Tinghui Zhou, Yiheng Zhu, Xiaoxia Sun, Charles Shang, Anirudh Sudarshan, Maurice Chu, Daiqing Li, Kangle Deng, Jean-Philippe Fauconnier, Tijmen Verhulsdonck, Maneesh Agrawala, Kayvon Fatahalian, Alexander Weiss, Christian Reiser, Ravi Kiran Chirravuri, Ravali Kandur, Alejandro Pelaez, Akash Garg, Michael Palleschi, Jessica Wang, Skylar Litz, Leon Liu, Anying Li, David Harmon, Derek Liu, Liangjun Feng, Denis Goupil, Lukas Kuczynski, Jihyun Yoon, Naveen Marri, Peiye Zhuang, Yinan Zhang, Brian Yin, Haomiao Jiang, Marcel van Workum, Thomas Lane, Bryce Erickson, Salil Pathare, Kyle Price, Anupam Singh, and David Baszucki. Cube: A Roblox view of 3D intelligence, 2025.
[32] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on Gemini research and technology. arXiv preprint arXiv:2403.08295, 2024.
[33] Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al.
Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118 , 2024. [34] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 , 2023. [35] Thomas J Vandal, Kate Duffy, Daniel McDuff, Yoni Nachmany, and Chris Hartshorn. Global atmospheric data assimilation with multi-modal masked autoencoders. 2024. [36] BigScience Workshop, Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, et al. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100 , 2022. [37] Jiashu Xu, Fei Wang, Mingyu Derek Ma, Pang Wei Koh, Chaowei Xiao, and Muhao Chen. Instructional fingerprinting of large language models, 2024. [38] Andy Zou, Long Phan, Justin Wang, Derek Duenas, Maxwell Lin, Maksym Andriushchenko, Rowan Wang, Zico Kolter, Matt Fredrikson, and Dan Hendrycks. Improving alignment and robustness with circuit breakers, 2024. A Broader Impacts Licenses have broad implications. The open source community has contributed substantially to the development of software and to the progress in artificial intelligence. There are reasons that restrictive licenses might impact research and development and justified fears that they could slow down progress or cause confusion about what is and what is not acceptable with a given asset. We are not arguing behavioral use clauses to be a required component of licenses, | https://arxiv.org/abs/2505.22287v1 |
arXiv:2505.22288v1 [cs.AI] 28 May 2025

Compression versus Accuracy: A Hierarchy of Lifted Models

Jan Speller^a,*, Malte Luttermann^b,c, Marcel Gehrke^c and Tanya Braun^a

^a Computer Science Department, University of Münster, Germany
^b German Research Center for Artificial Intelligence (DFKI), Lübeck, Germany
^c Institute for Humanities-Centered Artificial Intelligence, University of Hamburg, Germany

ORCID (Jan Speller): https://orcid.org/0000-0003-1106-8177
ORCID (Malte Luttermann): https://orcid.org/0009-0005-8591-6839
ORCID (Marcel Gehrke): https://orcid.org/0000-0001-9056-7673
ORCID (Tanya Braun): https://orcid.org/0000-0003-0282-4284

Abstract. Probabilistic graphical models that encode indistinguishable objects and relations among them use first-order logic constructs to compress a propositional factorised model for more efficient (lifted) inference. To obtain a lifted representation, the state-of-the-art algorithm Advanced Colour Passing (ACP) groups factors that represent matching distributions. In an approximate version using ε as a hyperparameter, factors are grouped that differ by a factor of at most (1 ± ε). However, finding a suitable ε is not obvious and may need a lot of exploration, possibly requiring many ACP runs with different ε values. Additionally, varying ε can yield wildly different models, leading to decreased interpretability. Therefore, this paper presents a hierarchical approach to lifted model construction that is hyperparameter-free. It efficiently computes a hierarchy of ε values that ensures a hierarchy of models, meaning that once factors are grouped together given some ε, these factors will be grouped together for larger ε as well. The hierarchy of ε values also leads to a hierarchy of error bounds. This allows for explicitly weighing compression versus accuracy when choosing specific ε values to run ACP with and enables interpretability between the different models.
* Corresponding Author. Email: jan.speller@uni-muenster.de

1 Introduction

Probabilistic graphical models (PGMs) allow for modelling environments under uncertainty by encoding features in random variables (randvars) and relations between them in factors. Lifted or first-order versions of PGMs such as parametric factor graphs (FGs) [17] and Markov logic networks [18] incorporate logic constructs to encode indistinguishable objects and relations among them in a compact way. Probabilistic inference on such first-order models becomes tractable in the domain size when using representatives for indistinguishable objects [16], a technique referred to as lifting [17]. Lifting has been used to great effect in probabilistic inference, including lifting various query answering algorithms [1, 2, 8, 9, 19, 22] next to lifting queries [3, 20] or evidence [19, 21].

Advanced Colour Passing (ACP) is the state-of-the-art algorithm to obtain a first-order model from a propositional one, specifically turning FGs into parametric FGs by grouping factors with identical potentials [1, 13]. The newest version, ε-Advanced Colour Passing (ε-ACP), considers approximate indistinguishability, using a number ε to group factors whose potentials differ by a factor of at most (1 ± ε) and are therefore considered ε-equivalent, with ε as a hyperparameter [14]. ε-ACP allows for compressing a propositional model to a larger degree with increasing ε, leading to better runtime for inference tasks. It also allows for computing an approximation error for such inference tasks, which helps assess the accuracy versus the compression gained. However, if a chosen ε does not fulfil either a requirement for compression or accuracy, shortcomings become apparent: A new run of ε-ACP with a different
ε does not guarantee a model consistent with the previous one. E.g., with a larger ε, making more factors ε-equivalent, factors that were previously grouped together might no longer be part of the same group, because a different grouping appears more suitable. That is, the models do not form a hierarchy, where groups of factors under a larger ε can only form by merging groups under a smaller ε. This inconsistency in models from one ε to the next makes it hard to interpret the models regarding each other. Additionally, ε is a hyperparameter that has to be chosen by the user. It may not be obvious what a suitable value for ε is, requiring many runs of ε-ACP to find a suitable one with the necessary compression and accuracy.

To counteract these shortcomings, this paper presents a hierarchical approach to lifted model construction called Hierarchical Advanced Colour Passing (HACP), which is hyperparameter-free, i.e., there is no need to choose a value for ε in advance. Specifically, we calculate a hierarchy of ε values that guarantees a hierarchy of compressed models when running HACP with those ε values. To do so, this paper contributes the following:

(i) a one-dimensional ε-equivalence distance (1DEED) measure that reduces the ε-equivalence after computations to a single number for comparing different factors, in contrast to the previous definition of ε-equivalence,
(ii) an efficient algorithm for computing a hierarchical ordering of ε values,
(iii) HACP using the hierarchical ordering of ε values, yielding a hierarchy of parametric FGs, and
(iv) an analysis of the error bounds in the hierarchy of parametric FGs, showing a hierarchical order as well.

The hierarchical approach has the advantage that it is hyperparameter-free. The hierarchy of ε values can be computed before running HACP and without needing a starting value.
In combination with the error bounds, the hierarchy of ε values allows the user to choose for which ε values to actually run HACP, which means that no hyperparameter exploration is needed to find the most suitable one. It also has the upside that HACP has to be run only for those ε values that are actually of interest. We approach this from the perspective of distributional deviation (accuracy), but also in the context of group merging of ε-equivalent factors by controlling ε (compression). Finally, the different models resulting from the ε values in the hierarchy are consistent and as such interpretable with respect to each other.

The remainder of this paper is structured as follows: The paper starts by introducing necessary notations and briefly recapping ε-ACP. The main part follows, providing a definition of 1DEED, specifying how to calculate a hierarchical ordering of ε values, and presenting HACP. Then, it shows error bounds for HACP and ends with a discussion and conclusion.

2 Background

We start by defining an FG as a propositional probabilistic graphical model to compactly encode a full joint probability distribution over a set of randvars [7, 12].

Definition 1 (Factor Graph). An FG M = (V, E) is an undirected bipartite graph consisting of a node set
V = R ∪ Φ, where R = {R_1, ..., R_n} is a set of randvars and Φ = {ϕ_1, ..., ϕ_m} is a set of factors (functions), as well as a set of edges E ⊆ R × Φ. There is an edge between a randvar R_i ∈ R and a factor ϕ_j ∈ Φ in E if R_i appears in the argument list of ϕ_j. A factor ϕ_j(R_j) defines a function ϕ_j : ×_{R ∈ R_j} range(R) → R_{>0} that maps the ranges of its arguments R_j (a sequence of randvars from R) to a positive real number, called potential. The term range(R) denotes the possible values a randvar R can take. We further define the joint potential for an assignment r (with r being a shorthand notation for R = r) as

    ψ(r) = ∏_{j=1}^{m} ϕ_j(r_j),    (1)

where r_j is a projection of r to the argument list of ϕ_j. With Z = ∑_r ∏_{j=1}^{m} ϕ_j(r_j) as the normalisation constant, the full joint probability distribution encoded by M is then given by

    P_M(r) = (1/Z) ∏_{j=1}^{m} ϕ_j(r_j) = (1/Z) ψ(r).    (2)

Example 1. Consider the FG illustrated in Fig. 1. It holds that R = {A, B, C}, Φ = {ϕ_1, ϕ_2}, and E = {{A, ϕ_1}, {B, ϕ_1}, {B, ϕ_2}, {C, ϕ_2}}. For the sake of this example, let range(A) = range(B) = range(C) = {true, false}. The potential tables of ϕ_1 and ϕ_2 are shown on the right of Fig. 1 with ϕ_1(true, true) = φ_1 and so on, where φ_i ∈ R_{>0}, i = 1, ..., 4, are arbitrary positive real numbers.

In Def. 1, we stipulate that all potentials are strictly greater than zero to avoid division by zero when analysing theoretical bounds in subsequent sections of this paper. In general, it is sufficient to have at least one non-zero potential in every potential table to ensure a well-defined semantics of an FG. However, our requirement of having strictly positive potentials is no restriction in practice, as zeros can easily be replaced by tiny numbers that are close to zero. An FG can be queried to compute marginal distributions of randvars given observations for other randvars (referred to as probabilistic inference).

Definition 2 (Query). A query P(Q | E_1 = e_1, ..., E_k = e_k) consists of a query term Q and a set of events {E_j = e_j}_{j=1}^{k}, where Q and all E_j, j = 1, ..., k, are randvars.
To query a specific probability instead of a distribution, the query term is an event Q = q.

    A      B      ϕ_1(A, B)
    true   true   φ_1
    true   false  φ_2
    false  true   φ_3
    false  false  φ_4

    C      B      ϕ_2(C, B)
    true   true   φ_1
    true   false  φ_2
    false  true   φ_3
    false  false  φ_4

Figure 1. An exemplary FG encoding a full joint probability distribution over three randvars A, B, and C.

Lifted inference exploits identical behaviour of indistinguishable objects to answer queries more efficiently. The idea behind lifting is to use a representative of indistinguishable objects for computations. Formally, this corresponds to making use of exponentiation instead of multiplying identical potentials several times (for an example, see Appendix 3). To exploit exponentiation during inference, equivalent factors have to be grouped. We next introduce the concept of ε-equivalent factors [14], which allows us to determine factors that can potentially be grouped for lifted inference.

Definition 3 (ε-Equivalence). Let ε ∈ R_{>0} be a
positive real number. Two potentials φ_1, φ_2 ∈ R_{>0} are ε-equivalent, denoted as φ_1 =_ε φ_2, if φ_1 ∈ [φ_2 · (1 − ε), φ_2 · (1 + ε)] and φ_2 ∈ [φ_1 · (1 − ε), φ_1 · (1 + ε)]. Further, two factors ϕ_1(R_1, ..., R_n) and ϕ_2(R′_1, ..., R′_n) are ε-equivalent, denoted as ϕ_1 =_ε ϕ_2, if there exists a permutation π of {1, ..., n} such that for all assignments (r_1, ..., r_n) ∈ ×_{i=1}^{n} range(R_i), where ϕ_1(r_1, ..., r_n) = φ_1 and ϕ_2(r_{π(1)}, ..., r_{π(n)}) = φ_2, it holds that φ_1 =_ε φ_2.

Example 2. Consider the potentials φ_1 = 0.49, φ_2 = 0.5, and ε = 0.1. Since it holds that φ_2 = 0.5 ∈ [φ_1 · (1 − ε) = 0.441, φ_1 · (1 + ε) = 0.539] and φ_1 = 0.49 ∈ [φ_2 · (1 − ε) = 0.45, φ_2 · (1 + ε) = 0.55], φ_1 and φ_2 are ε-equivalent.

The notion of ε-equivalence is symmetric. Moreover, it might happen that indistinguishable objects are located at different positions in the argument list of their respective factors, which is the reason the definition considers permutations of arguments. For simplicity, in this paper, we stipulate that π is the identity function. However, all presented results also apply to any other choice of π [14].

The ε-ACP algorithm [14] computes groups of pairwise ε-equivalent factors to compress a given FG. In particular, as potentials are often estimated in practice, potentials that should actually be considered equal might slightly differ, and the ε-ACP algorithm accounts for such deviations using a hyperparameter ε, which controls the trade-off between compression and accuracy of the resulting lifted representation. To allow for exponentiation, ε-ACP computes the mean potentials for each group of pairwise ε-equivalent factors and replaces the original potentials of the factors by the respective mean potentials. Thus, the semantics of the FG changes after the replacement of potentials. However, replacing potentials by their mean guarantees specific error bounds (more details follow in Section 4).

Note 1. ε-ACP uses the introduced concept of ε-equivalence including the corresponding hyperparameter ε ∈ R_{>0}.
This can be interpreted as a regularisation approach from the original model (with ε ≈ 0) to the most trivial model, reducing complexity by increasing ε. For arbitrarily large ε ≫ 0, it reduces the model to an FG that considers all factors of the same dimension as pairwise ε-equivalent, where the dimension of a factor ϕ refers to the number of rows in the potential table of ϕ. ε-ACP does not yield hierarchical models, since the grouping process is independent for each choice of ε.

3 Hierarchical Lifted Model Construction

As mentioned above, ε-ACP lacks a mechanism to ensure the core property of hierarchical methods: the consistent embedding of simpler models into more complex ones. To construct a principled hierarchical organisation of models, a clear mechanism for defining and transferring group membership across levels is essential. Concretely, we define a hierarchy in which higher levels inherit structural properties from lower levels, thereby inducing a consistent reduction in the complexity of the FG. To this end, we present the 1DEED, a more effective criterion for determining ε-equivalence. To do so, we treat the potential table of a factor ϕ as a vector in R^n, where ϕ(k) denotes the k-th entry, i.e.,
the potential associated with the k-th row in the potential table of ϕ. For example, factor ϕ_1(A, B) in Fig. 1 is represented as the vector (φ_1, φ_2, φ_3, φ_4), with, e.g., ϕ_1(true, false) = ϕ_1(2) = φ_2. After introducing 1DEED, we use it to set up a hierarchical ordering of ε-equivalent factors, which forms the backbone of a hierarchical approach for lifted model construction based on ε-ACP.

3.1 One-dimensional ε-Equivalence Distance

We define 1DEED to compare two n-dimensional, strictly positive vectors representing factors in an FG, i.e., ϕ_i ∈ R^n_{>0} with ϕ_i(k) > 0 for all k. The name of this distance anticipates its properties, which we formalise after its definition.

Definition 4 (One-dimensional ε-equivalence distance). 1DEED, defined as the mapping d_∞ : R^n_{>0} × R^n_{>0} → R, for two n-dimensional vectors ϕ_1, ϕ_2 ∈ R^n_{>0} is given by:

    d_∞(ϕ_1, ϕ_2) := max_{k=1,...,n} max{ |ϕ_1(k) − ϕ_2(k)| / ϕ_1(k), |ϕ_1(k) − ϕ_2(k)| / ϕ_2(k) }
                   = max_{k=1,...,n} |ϕ_1(k) − ϕ_2(k)| / min{|ϕ_1(k)|, |ϕ_2(k)|}    (3)

Since ϕ_1, ϕ_2 are strictly positive, the absolute value in the denominator is not necessary. Two vectors ϕ_1, ϕ_2 are ε-equivalent if and only if d_∞(ϕ_1, ϕ_2) ≤ ε.

Lemma 1. 1DEED is positive and symmetric.

Proof. Consider two vectors ϕ_1, ϕ_2 ∈ R^n_{>0}. The 1DEED d_∞ between ϕ_1 and ϕ_2 is given by Eq. (3), which is greater than or equal to 0 due to the absolute value function. The symmetry of d_∞ follows from the symmetric nature of the absolute value and minimum functions:

    d_∞(ϕ_1, ϕ_2) = max_{k=1,...,n} |ϕ_1(k) − ϕ_2(k)| / min{ϕ_1(k), ϕ_2(k)}
                  = max_{k=1,...,n} |ϕ_2(k) − ϕ_1(k)| / min{ϕ_2(k), ϕ_1(k)}
                  = d_∞(ϕ_2, ϕ_1).

Corollary 2. It holds that d_∞(ϕ_1, ϕ_2) = 0 if and only if |ϕ_1(k) − ϕ_2(k)| = 0 for all k = 1, ..., n, which holds only if ϕ_1 = ϕ_2.

Proof. Follows directly from the proof of Lemma 1.

This distance is based on the maximum metric (Chebyshev distance) with an additional deviation. It does not satisfy the triangle inequality (∆-inequality) and thus is not a metric. In addition, the 1DEED definition for n = 1 lacks the transitivity property.
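To make Definition 4 concrete, the following is a minimal Python sketch of d_∞ (our own illustration, not the authors' code), checking the potentials of Example 2 and giving a small numeric instance of the failed ∆-inequality just mentioned:

```python
# Illustrative sketch of the 1DEED d_inf from Definition 4 (our own code,
# not the authors' implementation). All potentials must be > 0.

def d_inf(phi1, phi2):
    # max over k of |phi1(k) - phi2(k)| / min(phi1(k), phi2(k))
    return max(abs(a - b) / min(a, b) for a, b in zip(phi1, phi2))

# Example 2: phi1 = 0.49 and phi2 = 0.5 are 0.1-equivalent; by Theorem 3
# this is exactly the condition d_inf <= eps.
assert d_inf([0.49], [0.5]) <= 0.1
assert d_inf([0.49], [0.5]) == d_inf([0.5], [0.49])  # symmetry (Lemma 1)
assert d_inf([0.49], [0.49]) == 0.0                  # identity (Corollary 2)

# No triangle inequality: d(1,2) = 1 and d(2,4) = 1, but d(1,4) = 3 > 1 + 1.
assert d_inf([1.0], [4.0]) > d_inf([1.0], [2.0]) + d_inf([2.0], [4.0])
```

The last assertion is a counterexample of the kind deferred to Appendix 1.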
Both can be shown through counterexamples (see Appendix 1). Nonetheless, 1DEED is well-suited for bounding maximum relative deviations in all dimensions, aligning with practical use in probabilistic inference. This overall behaviour is consistent with the ε-equivalence definition in Def. 3. This strong connection is no coincidence, but rather a consequence of the fact that the original definition, although not explicitly stated, also satisfies these properties. Furthermore, the following theorem shows that both ε-equivalence definitions are equivalent, meaning that whenever one holds, the other also holds.

Theorem 3. Definitions 3 and 4 of ε-equivalence for two vectors ϕ_1, ϕ_2 ∈ R^n_{>0} are equivalent.

Proof Sketch. In mathematical terms, the claim can be summarised as ϕ_1 =_ε ϕ_2 ⇔ d_∞(ϕ_1, ϕ_2) ≤ ε, which we prove for any ε > 0 in Appendix 2 via equivalence transformations.

The use of relative deviation is essential in probabilistic querying, where percentage-based error bounds directly influence the quality of inference. However, classical definitions of ε-equivalence require checking all component-wise comparisons, leading to inefficiencies in practice and no clear order of ε-equivalent factor groups. The one-dimensional formulation via d_∞ addresses this by providing a closed-form, computationally minimal characterisation of ε-equivalence. Specifically, it allows for an efficient computation of the
smallest admissible ε for each pair of factors, facilitating both storage and comparison.

Corollary 4. For two vectors ϕ_1, ϕ_2 ∈ R^n_{>0}, we have ϕ_1 =_ε ϕ_2 if and only if ε ≥ ε_0, where ε_0 = d_∞(ϕ_1, ϕ_2) is the threshold value below which ε-equivalence no longer holds.

Proof. It follows from Def. 4 that there exists a j ∈ {1, ..., n} such that

    ε_0 = d_∞(ϕ_1, ϕ_2) = |ϕ_1(j) − ϕ_2(j)| / min{ϕ_1(j), ϕ_2(j)}
        = max_{k=1,...,n} |ϕ_1(k) − ϕ_2(k)| / min{ϕ_1(k), ϕ_2(k)} ≤ ε.

The advantage of 1DEED lies in its canonical representation: d_∞(ϕ_1, ϕ_2) uniquely determines the minimal ε for which ϕ_1 =_ε ϕ_2 holds. This scalar value enables direct comparison, indexing, and efficient classification of equivalence classes without enumerating all component-wise ratios. The formal operational benefits, both in terms of computational complexity and structural hierarchy, are established in the next subsection.

3.2 Hierarchical Ordering of ε-Equivalent Factors

To obtain a meaningful hierarchical structure, we need a starting point, or level 0, which is simply the full model. On the other hand, if ε is arbitrarily large, all factors of the same dimension would be grouped together, resulting in a nearly trivial model. However, these properties alone are insufficient for creating a clear hierarchy between these extremes. A desired structure would resemble the one depicted in Fig. 2. Each level in the hierarchy is represented by a line on the left side of the figure. If we were to cut the figure horizontally at this point, all connected subtrees would form a group, while the remaining factors would stay separate. Our goal is to maintain the property that once multiple factors are grouped together at a lower level, they also stay together at higher levels. This property is known as structure preservation. Ensuring it, however, is not trivial due to the lack of transitivity in d_∞.
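The construction this subsection develops resembles complete-linkage agglomerative clustering under d_∞: when two groups merge, their distance to every other group is the maximum over all cross pairs, which keeps the recorded merge thresholds non-decreasing and hence structure-preserving. A condensed, self-contained sketch (our own simplification with invented names, not the paper's pseudocode):

```python
# Hedged sketch (our simplification, not the paper's algorithm) of the
# hierarchical ordering idea: greedily merge factor groups under d_inf,
# where a merged group's distance to any other group is the MAXIMUM over
# all cross pairs, so every threshold is valid for the whole group.

def d_inf(phi1, phi2):
    return max(abs(a - b) / min(a, b) for a, b in zip(phi1, phi2))

def hierarchy(factors):
    """Return (eps, merged_group) steps with non-decreasing eps values."""
    groups = {i: [i] for i in range(len(factors))}
    steps = []
    while len(groups) > 1:
        keys = sorted(groups)
        # group distance = max pairwise d_inf (complete linkage)
        eps, i, j = min(
            (max(d_inf(factors[p], factors[q])
                 for p in groups[a] for q in groups[b]), a, b)
            for ai, a in enumerate(keys) for b in keys[ai + 1:])
        groups[i] = sorted(groups[i] + groups.pop(j))  # merge j into i
        steps.append((eps, groups[i]))
    return steps

factors = [[1.0], [1.05], [2.0], [2.1]]  # four 1-dimensional factors
for eps, group in hierarchy(factors):
    print(round(eps, 3), group)  # merge thresholds come out non-decreasing
```

The merge thresholds are exactly the candidate ε values of the hierarchy: cutting at any of them yields a model whose groups only coarsen as ε grows.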
Thus, we impose a hierarchical pre-ordering on the set of factors based on pairwise ε-equivalence, determined by the minimal deviation under 1DEED d_∞. This unique ordering forms the basis for HACP described in Section 3.3. Smaller d_∞ values correspond to higher indistinguishability, and the hierarchical construction ensures that higher-level aggregations preserve the nested structure of previously merged subgroups.

Figure 2. Exemplary visualisation of a factor ordering with increasing ε. This information is easily stored in the list and easily readable from the matrix. The ε_i are ordered by size (ε_1 being the smallest value of them). Note that a root ε is always the maximal ε-distance of all pairwise factor comparisons of all leaves. E.g., ε_4 = max{ε_{1,2}, ε_{1,3}, ε_{1,4}, ε_{2,3}, ε_{2,4}, ε_{3,4}}.

    Factor   ϕ_1  ϕ_2      ϕ_3      ···  ϕ_{m−1}    ϕ_m
    ϕ_1      0    ε_{1,2}  ε_{1,3}  ···  ε_{1,m−1}  ε_{1,m}
    ϕ_2      0    0        ε_{2,3}  ···  ε_{2,m−1}  ε_{2,m}
    ...
    ϕ_{m−1}  0    0        0        ···  0          ε_{m−1,m}
    ϕ_m      0    0        0        ···  0          0

Table 1. Upper triangular matrix Λ = (Λ_{ij})_{1≤i,j≤m} illustrating the matrix output of Alg. 1. The entries are defined as Λ_{ij} = ε_{i,j} := d_∞(ϕ_i, ϕ_j) for 1 ≤ i < j ≤ m, and Λ_{ij} = 0 otherwise.

To determine this ordering, we follow a two-phase procedure as described in Alg. 1 with an FG as input. In Phase I, the algorithm
creates a matrix Λ as shown in Table 1. Its cell entries are Λ_{ij} := ε_{i,j} := d_∞(ϕ_i, ϕ_j) for 1 ≤ i < j ≤ m, where m = |Φ| is the number of factors in the FG. Due to the symmetric property of d_∞, additional information is not required, allowing for filling the remaining cells with zeros, thus forming an upper triangular matrix.

Algorithm 1 Hierarchical Ordering of ε-Equivalent Groups
Input: An FG M = (R ∪ Φ, E) with m = |Φ|.
Output: An ordered nested list L of lists, an ordered vector ε = (ε_1, ..., ε_{m−1}) with ε_i < ε_{i+1}.
▷ Phase I: Generate upper triangular matrix Λ
1: Initialise Λ ∈ R^{m×m} with zeros
2: for i = 1 to m − 1 do
3:   for j = i + 1 to m do
4:     Compute d_∞(ϕ_i, ϕ_j) =: ε_{i,j}
5:     Store result in Λ_{ij}
6: Save Λ
▷ Phase II: Generate ordered list L and vector ε
7: Initialise empty list L ← [ ]
8: Initialise vector ε = (ε_1, ..., ε_{m−1}) with zeros
9: Initialise active index set A ← {1, ..., m}
10: Initialise active matrix Λ̃ := Λ
11: for ℓ = 1 to m − 1 do
12:   Find (i′, j′) = arg min {Λ̃_{ij} | i < j, i, j ∈ A}
13:   Save ε_ℓ := Λ̃_{i′j′}
14:   if i′ appears in direct parent group G_p ∈ L then
15:     if j′ appears in direct parent group G′_p ∈ L then
16:       Update G_p and G′_p as one new group [G_p, G′_p, ℓ + m]
17:     else
18:       Update G_p as G_p = [G_p, j′, ℓ + m]
19:   else if j′ appears in direct parent group G_p ∈ L then
20:     Update G_p as G_p = [G_p, i′, ℓ + m]
21:   else  ▷ Neither i′ nor j′ appears in any parent group
22:     Append [[i′, j′, ℓ + m]] to L
23:   for k ∈ A \ {i′, j′} do
24:     if k > j′ then
25:       Λ̃_{i′k} ← max(Λ̃_{i′k}, Λ̃_{j′k})
26:     else if k < i′ then
27:       Λ̃_{ki′} ← max(Λ̃_{ki′}, Λ̃_{kj′})
28:     else if i′ < k < j′ then
29:       Λ̃_{i′k} ← max(Λ̃_{i′k}, Λ̃_{kj′})
30:   Remove j′ from active set: A ← A \ {j′}

Next, we examine Phase II of the algorithm in more detail, which performs a hierarchical ordering of ε-equivalent group selections. The algorithm iteratively chooses ε values from Λ that allow (groups of) factors that are pairwise ε-equivalent to be grouped. The algorithm runs for m − 1 iterations as there are m factors to merge, meaning there are m − 1 hierarchical levels at the end. The outputs are an ordered vector ε of length m − 1 of increasing ε values as well as an ordered nested list of lists L containing a nested grouping of indices according to the ε values and their hierarchy level (with an index shift of m for easier identification compared to the indices identifying factors). Specifically, the algorithm picks the next two (groups of) factors to merge by selecting the minimal entry ε_{i′,j′} in Λ, which is then stored in ε at the current level. Before dealing with L, let us consider how Λ is updated: Since both (groups of) factors are now considered as a single group, their respective rows in Λ need to be merged by keeping the maximum of the two ε values in each column, which ensures that if an entry of this row is picked in another iteration, all factors are ε-equivalent given this larger ε. To avoid resizing Λ, there is a set of active indices, and merging (groups of) factors removes the second index, essentially deactivating the row. The entries of the row of the first index are then updated
to the maximum value. Regarding L, a new entry is formed, which is essentially a 3-tuple l = (e_{i′}, e_{j′}, h): one element e_{i′} for the first (group of) factor(s), one element e_{j′} for the second (group of) factor(s), and the last element h being the current hierarchy level shifted by m. If i′ or j′ identify a single factor, then e_{i′} or e_{j′} store the index identifying the factor. If i′ or j′ identify a group of factors, then there already exists an entry l′ in L for it from a previous merging, which is removed from L and then stored in e_{i′} or e_{j′}. Next, we look at an example.

Example 3. Let M = (R ∪ Φ, E) be an FG with m = |Φ| = 10 factors, all of identical dimension. Assume distances between factors leading to a hierarchy corresponding to Fig. 2, that is, factors ϕ_1 and ϕ_2 have the smallest distance among all factors, ϕ_3 and ϕ_4 the next smallest distance, followed by ϕ_5 and ϕ_6, after which the first two pairings have the smallest distance, and so on. During the first iteration of Phase II in Alg. 1, ε_{1,2} is minimal in Λ. Therefore, an entry in L is created with [1, 2, 11], containing the two indices 1, 2 identifying the factors and the current hierarchy level 1 shifted by m = 10. The matrix update proceeds as follows: the row of index 1 is updated to the maximum value of the two entries of the rows 1 and 2 (e.g., the entry in column ϕ_3 becomes max_{i=1,2} ε_{i,3}), and the row of index 2 is deactivated. In the next iteration, ε_{3,4} is minimal in Λ̃, which means adding an entry [3, 4, 12] to L, with the matrix updated analogously (e.g., the entry of row 1 in column ϕ_3 becomes max_{i=1,2; j=3,4} ε_{i,j} and the entry of row 3 in column ϕ_5 becomes max_{i=3,4} ε_{i,5}). When Λ̃_{13} is the next value to choose, both indices identify groups of factors. As such, their entries in L, namely [1, 2, 11] and [3, 4, 12], are replaced by an entry [[1,2,11],[3,4,12],14].
At the end, the output of Alg. 1 looks as follows:

    L = [ [[[1,2,11],[3,4,12],14],[[5,6,13],7,15],17], [[8,9,16],10,18], 19 ]
    ε = (ε_1, ..., ε_9) with
        ε_1 = min_{i,j=1,...,10; i<j} {ε_{i,j}} = Λ_{12},
        ε_2 = min_{i,j=1,3,...,10; i<j} {ε_{i,j}} = Λ̃_{34},
        ε_3 = min_{i,j=1,3,5,...,10; i<j} {ε_{i,j}} = Λ̃_{56},
        ε_4 = min_{i,j=1,3,5,7,...,10; i<j} {ε_{i,j}} = Λ̃_{13},
        ...

This results in a total hierarchy of at most 10 different levels (and models). Appendix 4 shows an overview of the group sizes in the hierarchy, illustrating the compression possible with increasing ε.

Thus, in the output, each ε_i ∈ ε corresponds to an increasingly coarse partitioning, reflecting group memberships under growing tolerance thresholds for higher levels. Selecting a specific ε_i implies fixing a hierarchical level i, which determines the groupings from L. Running Alg. 1 is rather efficient, depending only on the number of factors and needing to compute pairwise distances only once.

3.3 Hierarchical Advanced Colour Passing Algorithm

HACP provides a hyperparameter-free hierarchical approach to lifted model construction. It uses the output of Alg. 1 to determine for a given level
which groups get the same colour assigned, which is then the input to standard ACP, which runs independently of ε. Specifically, HACP proceeds in three phases: loading groups, running ACP, and updating potentials. Alg. 2 shows an overview, which is specified for a given hierarchy level i for the sake of brevity but could easily be extended to build parametric models for all levels of the hierarchy. It takes an FG, an index i, and the output of Alg. 1 for the FG as input.

Algorithm 2 Hierarchical Advanced Colour Passing
Input: An FG M = (R ∪ Φ, E), an index i ∈ {1, ..., m − 1}, and the outcome of Alg. 1 run on M.
Output: A lifted representation M′, encoded as a parametric FG, which is approximately equivalent to M.
▷ Phase I: Load groups of pairwise ε-equivalent factors for ε_i
1: Let L be the current list of candidate groups.
2: for k = m + i to m + 1 do
3:   if k occurs in any group in L then
4:     Load global parent group G_p(k) ∈ L
5:     Store group of ε-equivalent factors: G_Φ(k) := {ϕ_j | j ∈ G_p(k), j < m} ∈ G
6:     Update L ← L \ G_p(k)
7: for each factor ϕ_j ∈ Φ \ ∪_{k=m+1}^{m+i} G_Φ(k) do
8:   Store group G_Φ(j) := {ϕ_j} ∈ G
▷ Phase II: Assign colours to factors and run ACP
9: for each group G_j ∈ G do
10:   for each factor ϕ_i ∈ G_j do
11:     ϕ_i.colour ← j
12: G′ ← Call ACP on M and G using the assigned colours
▷ Phase III: Update potentials
13: for each group G_j ∈ G′ do
14:   if G_j ≠ ∅ then
15:     ϕ∗(r) ← (1/|G_j|) ∑_{ϕ_i ∈ G_j} ϕ_i(r) for all assignments r
16:     for each factor ϕ_i ∈ G_j do
17:       ϕ_i ← ϕ∗
18: M′ ← Construct parametric FG from groupings of ACP

Phase I provides a systematic procedure for forming the groups of factors in the input FG given the nested list L of the output of Alg. 1. For instance, consider Example 3 and level 4 with ε_4. The grouping induced at this level is: G = {G_1 = {ϕ_1, ϕ_2, ϕ_3, ϕ_4}, G_2 = {ϕ_5, ϕ_6}, G_3 = {ϕ_7}, G_4 = {ϕ_8}, G_5 = {ϕ_9}, G_6 = {ϕ_10}}. Subsequently, Phase II applies the standard ACP with colours assigned to each identified group.
In Phase III, each group that ACP outputs is reduced to a representative factor, replacing all associated potentials with their arithmetic mean. That is, for each group of potentials {φ_1, ..., φ_k}, a new potential is computed as φ∗ = arg min_{φ̂} ∑_{i=1}^{k} (φ_i − φ̂)². This minimises intra-group variance and yields an optimal compressed representation. The constructed factor ϕ∗ = (φ∗_1, ..., φ∗_n) is, by design, also pairwise ε-equivalent to all original factors from its generating group.

4 Hierarchical Bounds: Compression vs. Accuracy

A crucial property of the ε-ACP algorithm is its induced bound on the change in probabilistic queries based on its regularisation parameter ε. HACP uses predefined groups (selected via Alg. 2) and still allows choosing a compressed model of reduced complexity, while preserving the same asymptotic bounds as its predecessor, ε-ACP, to ensure consistent and reliable performance (accuracy). The following subsections explore the properties of HACP, examining how to control this information to apply the algorithm with specific accuracy.

4.1 Asymptotic Properties

A desirable property of the HACP algorithm is that it preserves the same bounds on the change in probabilistic queries as the ε-ACP algorithm. After running ε-ACP,
the deviation-wise worst-case scenario for an assignment is bounded. Notably, HACP relies on the mean values of the potentials of pairwise ε-equivalent factors. To quantify the difference in probabilistic queries between the original FG and the hierarchically processed FG after applying the HACP algorithm, we use the symmetric distance measure between two distributions P_M and P_{M′} introduced by Chan and Darwiche [4], which effectively bounds the maximal deviation of any assignment r:

    D_CD(P_M, P_{M′}) := ln max_r [P_{M′}(r) / P_M(r)] − ln min_r [P_{M′}(r) / P_M(r)].    (4)

For ε-ACP, the following asymptotic bound has been proven [14]:

Theorem 5 (Luttermann et al. [14]). Let M = (R ∪ Φ, E) be an FG and let M′ be the output of ε-ACP when run on M. With P_M and P_{M′} being the underlying full joint probability distributions encoded by M and M′, respectively, and m = |Φ|, it holds that

    D_CD(P_M, P_{M′}) ≤ ln ( (1 + ((m−1)/m) ε)(1 + ε) / (1 + (1/m) ε) )^m    (5)
                      < ln (1 + ε)^{2m} < ln ((1 + ε)/(1 − ε))^m,    (6)

where the bound given in Eq. (5) is optimal (sharp).

We next show that this bound also applies to the HACP algorithm.

Proposition 6. Theorem 5 holds in the same way for M′ being the output of the HACP algorithm (Alg. 2).

Proof. The core components of the ε-ACP algorithm and its hierarchical counterpart HACP (Alg. 2) are identical, aside from enforcing predefined group structures to guarantee a hierarchical structure. Therefore, the proof can be conducted in the same manner as the original proof [14, Appendix A]. The same proposed example can be used to hit the bound of Eq. (5), showing its optimality.

4.2 Compression versus Accuracy

We continue with a finer analysis of the theoretical properties entailed by HACP regarding the trade-off between compression and accuracy. All upcoming results hold for both ε-ACP and HACP.

Theorem 7. The maximal absolute deviation between any initial probability p = P_M(r | e) of r given e in model M and the probability p′ = P_{M′}(r | e) in the modified model M′ resulting from running HACP (Alg.
2) or ε-ACP on $M$ can be bounded by

$$p^{\max}_{\Delta} := \max_{\text{any } r \mid e} |p - p'| \le \frac{\sqrt{e^d} - 1}{\sqrt{e^d} + 1} \quad \text{with } d = D_{CD}(P_M, P_{M'}).$$

Proof Sketch. From Chan and Darwiche [4], we use

$$\frac{p\,e^{-d}}{p(e^{-d} - 1) + 1} \le p' = P_{M'}(r \mid e) \le \frac{p\,e^{d}}{p(e^{d} - 1) + 1} \tag{7}$$

to define an upper bound function $f^{\max}_{\Delta}$ for $p^{\max}_{\Delta}$, which is symmetric around $p = 1/2$ (see Fig. 3):

$$f^{\max}_{\Delta}(p) := \max(f_{\mathrm{upper}}(p), f_{\mathrm{lower}}(p)) \quad \text{for } p \in [0,1]$$
$$\text{with } f_{\mathrm{upper}}(p) := \frac{p\,e^{d}}{p(e^{d}-1)+1} - p = \frac{p(1-p)(e^{d}-1)}{p(e^{d}-1)+1}$$
$$\text{and } f_{\mathrm{lower}}(p) := p - \frac{p\,e^{-d}}{p(e^{-d}-1)+1} = \frac{p(1-p)(1-e^{-d})}{p(e^{-d}-1)+1},$$

and we calculate its extrema by first- and second-order conditions.

Figure 3. $f_{\mathrm{upper}}$ (maxima on the left) and $f_{\mathrm{lower}}$ (maxima on the right) over $[0,1]$, plotted with the $d_2$ values from Corollary 8 to obtain an upper estimate for $p^{\max}_{\Delta}$ by bounding it from above; colours denote ε = 0.01 and ε = 0.001, line types denote m = 10, 100, 1000.

Theorem 7 is crucial in practice, since it lets us translate $D_{CD}(P_M, P_{M'})$ into a query-deviation bound efficiently, without inspecting the full factor graph, and thus has direct practical impact. Since $\frac{\sqrt{e^d}-1}{\sqrt{e^d}+1} = \tanh\!\left(\frac{d}{4}\right) \le 1$ is monotonically strictly increasing in $d$, the results of Theorem 5 can be substituted into this formula while the directions of the inequalities are preserved.

Corollary 8. With the previous notation, the change in any probabilistic query in
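The closed form of Theorem 7 can be checked numerically: the maximum of $f_{\mathrm{upper}}$ over a fine grid should not exceed $(\sqrt{e^d}-1)/(\sqrt{e^d}+1) = \tanh(d/4)$, and the bound should be attained at $p_1 = 1/(\sqrt{e^d}+1)$. A small Python sketch with an illustrative $d$:

```python
import math

def f_upper(p, d):
    """Upper deviation p' - p under the Chan-Darwiche bound of Eq. (7)."""
    return p * (1 - p) * (math.exp(d) - 1) / (p * (math.exp(d) - 1) + 1)

d = 0.4                      # illustrative distance value
bound = math.tanh(d / 4)     # equals (sqrt(e^d) - 1) / (sqrt(e^d) + 1)

grid_max = max(f_upper(i / 1000, d) for i in range(1001))

# The grid maximum stays below the closed-form bound, which is attained
# exactly at p1 = 1 / (sqrt(e^d) + 1).
assert grid_max <= bound + 1e-12
p1 = 1 / (math.sqrt(math.exp(d)) + 1)
assert abs(f_upper(p1, d) - bound) < 1e-12
```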
an initial model $M$ and a modified model $M'$ obtained by running HACP (Alg. 2) or ε-ACP is bounded by

$$p^{\max}_{\Delta} \le \frac{\sqrt{e^{d_1}}-1}{\sqrt{e^{d_1}}+1} \quad\text{with } d_1 = D_{CD}(P_M, P_{M'})$$
$$\le \frac{\sqrt{e^{d_2}}-1}{\sqrt{e^{d_2}}+1} \quad\text{with } d_2 = \ln\!\left(\left(\frac{\left(1+\frac{m-1}{m}\varepsilon\right)(1+\varepsilon)}{1+\frac{1}{m}\varepsilon}\right)^{\!m}\right)$$
$$\le \frac{\sqrt{e^{d_3}}-1}{\sqrt{e^{d_3}}+1} \quad\text{with } d_3 = \ln\!\left((1+\varepsilon)^{2m}\right)$$
$$\le \frac{\sqrt{e^{d_4}}-1}{\sqrt{e^{d_4}}+1} \quad\text{with } d_4 = \ln\!\left(\left(\frac{1+\varepsilon}{1-\varepsilon}\right)^{\!m}\right).$$

This implies that for a given ε > 0, we can determine the maximum deviation $p^{\max}_{\Delta}$ (see Fig. 4). However, it is also essential to consider the reverse perspective to gain insight into the overall diversity and structural complexity of the FG. Therefore, we investigate the following question: What choice of ε guarantees that our probabilities remain within a specified distance, i.e., $p^{\max}_{\Delta} \le p^{*}_{\Delta}$? This question is addressed in the subsequent theorem by reversing the preceding inequalities and solving for ε.

Theorem 9. For any given $p^{*}_{\Delta} \in (0, \frac{1}{2}]$, the output of HACP guarantees the bound $p^{\max}_{\Delta} \le p^{*}_{\Delta}$ for any $\varepsilon \in (0,1)$ that is smaller than or equal to

$$\varepsilon_1 = -\frac{1 + \frac{m-1}{m} - \frac{1}{m}e^{d/m}}{2 \cdot \frac{m-1}{m}} + \sqrt{\left(\frac{1 + \frac{m-1}{m} - \frac{1}{m}e^{d/m}}{2 \cdot \frac{m-1}{m}}\right)^{2} - \frac{1 - e^{d/m}}{\frac{m-1}{m}}}$$

with

$$d = \ln\!\left(\left(\frac{p^{*}_{\Delta}+1}{1-p^{*}_{\Delta}}\right)^{2}\right).$$

Figure 4. The bound on $p^{\max}_{\Delta}$ depending on the choice of ε for different numbers of factors m = 10, 100, 1000.

This means that we can bound the maximal deviation of HACP and ε-ACP, respectively, by $p^{*}_{\Delta}$, which is tight [14], by calculating $\varepsilon_1(p^{*}_{\Delta})$ before we run the algorithm.

Proof. Using Theorem 7, we get for $d = D_{CD}(P_M, P_{M'})$:

$$p^{\max}_{\Delta} \le \frac{\sqrt{e^{d}}-1}{\sqrt{e^{d}}+1} =: p^{*}_{\Delta} \quad\Leftrightarrow\quad \ln\!\left(\left(\frac{p^{*}_{\Delta}+1}{1-p^{*}_{\Delta}}\right)^{2}\right) = d = D_{CD}(P_M, P_{M'}).$$

Additionally, we use Eq. (5) from Theorem 5 and the corresponding version for HACP (Proposition 6) and solve the inequality for ε at the point where it reaches equality:

$$d = \ln\!\left(\left(\frac{\left(1+\frac{m-1}{m}\varepsilon\right)(1+\varepsilon)}{1+\frac{1}{m}\varepsilon}\right)^{\!m}\right).$$

These bounds are mostly of a theoretical nature, and the deviations are rarely encountered in practice [14].

Note 2. Choosing, for instance, a 10 times larger ε has essentially the same effect on the bound on the change in probabilistic queries as choosing 10 times the number of factors m (cf. Fig. 3).
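Theorem 9 can be turned into a small pre-computation: given a target deviation $p^{*}_{\Delta}$ and the number of factors m, solve the quadratic for ε, then check the round trip through the forward bound of Corollary 8. A Python sketch with illustrative numbers:

```python
import math

def eps_for_target(p_star, m):
    """Largest epsilon guaranteeing p_max <= p_star (Theorem 9)."""
    d = math.log(((p_star + 1) / (1 - p_star)) ** 2)
    a = (m - 1) / m                         # quadratic coefficient
    q1 = (1 + a - math.exp(d / m) / m) / a
    q2 = (1 - math.exp(d / m)) / a
    return -q1 / 2 + math.sqrt((q1 / 2) ** 2 - q2)

def p_max_bound(eps, m):
    """Forward bound of Corollary 8: tanh(d2 / 4) with d2 from Eq. (5)."""
    d2 = m * math.log((1 + (m - 1) / m * eps) * (1 + eps) / (1 + eps / m))
    return math.tanh(d2 / 4)

eps1 = eps_for_target(0.1, 10)
# Plugging eps1 back into the forward bound recovers the target deviation.
assert abs(p_max_bound(eps1, 10) - 0.1) < 1e-9
```

The round trip is exact up to floating-point error because ε₁ is defined as the equality point of the inequality chain.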
Thus, this subsection lays the groundwork for leveraging the supplementary insights from Alg. 1 to develop a holistic understanding of the structural and computational complexity of the FG. Depending on the sensitivity to ε, $p^{\max}_{\Delta}$ / $p^{*}_{\Delta}$, and m, one can assess the implications of specific parameter choices on the effect of compressing a given FG, or alternatively prioritise model fidelity. The HACP algorithm enables such assessments across multiple structural levels.

5 Discussion

Related work. Lifted inference exploits the indistinguishability of objects in a probabilistic (relational) model, allowing query answering (i.e., the computation of marginal distributions for randvars given observations for other randvars) to be carried out more efficiently while maintaining exact answers [16]. First introduced by Poole [17], parametric FGs, which combine relational logic and probabilistic modelling, and lifted variable elimination enable lifted probabilistic inference to speed up query answering by exploiting
the indistinguishability of objects. Over the past years, lifted variable elimination has continuously been refined by many researchers to reach its current form [3, 5, 6, 11, 15, 19]. To construct a lifted (i.e., first-order) representation such as a parametric FG, the ACP algorithm [13], which generalises the CompressFactorGraph algorithm [1, 10], is the current state of the art. ACP runs a colour passing procedure to detect symmetric subgraphs in a probabilistic graphical model, similar to the Weisfeiler-Leman algorithm [23], a well-known algorithm for testing graph isomorphism. The idea is then to group symmetric subgraphs and exploit exponentiation during probabilistic inference. While ACP is able to construct a parametric FG whose semantics are equivalent to those of a given propositional model, it requires the potentials of factors to match exactly before grouping them. In practical applications, however, potentials are often estimates and hence might differ slightly even for indistinguishable objects. To account for small deviations between potentials, the ε-ACP algorithm [14] has been introduced, generalising ACP by introducing a hyperparameter ε that controls the trade-off between the exactness and the compactness of the resulting lifted representation.

Pre-ordering means pre-analysing. Our hierarchical algorithm (Alg. 1) imposes a predetermined nesting structure on the factor graph before any colour passing procedure, enabling application-oriented level selection that is specified a priori. By fixing the levels to consider before invoking Alg. 2 on the adjusted graph, one can predict the resulting complexity of the ε-equivalent group structure and thereby enhance interpretability. In contrast to ε-ACP, which may produce ε-equivalent groupings that lack consistent nesting across runs or parameter settings, our hierarchical approach (HACP) ensures structural coherence and comparability across instances.
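The colour passing idea behind ACP mirrors Weisfeiler-Leman colour refinement: nodes repeatedly exchange colours with their neighbours until the colouring stabilises, and nodes sharing a final colour become candidates for grouping. A minimal, generic sketch of one such refinement loop (not the full ACP algorithm, which additionally handles potentials and commutative factors):

```python
def colour_refinement(adj):
    """1-WL-style colour refinement on an undirected graph given as an
    adjacency dict {node: set_of_neighbours}. Returns the stable colours."""
    colours = {v: 0 for v in adj}  # start with a uniform colouring
    while True:
        # Signature: own colour plus sorted multiset of neighbour colours.
        sigs = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
                for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        new = {v: relabel[sigs[v]] for v in adj}
        if new == colours:       # fixed point reached
            return colours
        colours = new

# A path a-b-c: the two endpoints are indistinguishable, the middle is not.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
col = colour_refinement(adj)
assert col["a"] == col["c"] != col["b"]
```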
Moreover, the explicit composition of each level can be monitored to trace the impact of modifications throughout the hierarchy.

Trade-off: Compression versus Accuracy. Both ε-ACP and HACP inherit the deviation bounds introduced by Luttermann et al. [14], yielding identical sharp bounds and dependency structures for any choice of ε. In practice, the precise grouping composition governs the magnitude and sign of probabilistic deviations for downstream queries: as the hierarchy level (or ε) increases, the theoretical bounds grow, yet actual query deviations may fluctuate depending on the group aggregations. Crucially, our hierarchical bounds facilitate pre-specification of maximal permissible ε values and corresponding levels. Rather than relying on the generally intractable $D_{CD}$ for approximated models [4], one can derive $p^{\max}_{\Delta}$ for a given ε, or select an admissible level that guarantees both the desired compression and sufficient accuracy.

6 Conclusion

We introduce a novel framework for hierarchical lifting and model reconciliation in FGs. By presenting a more practical one-dimensional notion of ε-equivalent factors, we enable the identification of (possibly inexact) symmetries, the number and sizes of ε-equivalent groups, and the resulting reduction of computational complexity, thereby allowing for lifted inference. Our theoretical analysis provides a solid foundation for understanding the structural properties of FGs. Crucially, the entire hierarchy is fixed prior to initiating colour passing or inference, ensuring structural consistency and enabling theoretical error bounds.
This work provides a foundation for future advances in efficient and interpretable probabilistic inference.

Acknowledgements

This work was partially funded by the Ministry of Culture and Science of the German State of North Rhine-Westphalia. The research of Malte Luttermann was funded by the BMBF project AnoMed 16KISA057.

References

[1] B. Ahmadi, K. Kersting, M. Mladenov, and S. Natarajan. Exploiting Symmetries for Scaling Loopy Belief Propagation and Relational Training. Machine Learning, 92(1):91–132, 2013.
[2] T. Braun and R. Möller. Lifted Junction Tree Algorithm. In Proceedings of the Thirty-Ninth German Conference on Artificial Intelligence (KI-2016), pages 30–42. Springer, 2016.
[3] T. Braun and R. Möller. Parameterised Queries and Lifted Query Answering. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-2018), pages 4980–4986. IJCAI Organization, 2018.
[4] H. Chan and A. Darwiche. A Distance Measure for Bounding Probabilistic Belief Change. International Journal of Approximate Reasoning, 38:149–174, 2005.
[5] R. De Salvo Braz, E. Amir, and D. Roth. Lifted First-Order Probabilistic Inference. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-2005), pages 1319–1325. Morgan Kaufmann Publishers Inc., 2005.
[6] R. De Salvo Braz, E. Amir, and D. Roth. MPE and Partial Inversion in Lifted Probabilistic Variable Elimination. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-2006), pages 1123–1130. AAAI Press, 2006.
[7] B. J. Frey, F. R. Kschischang, H.-A. Loeliger, and N. Wiberg. Factor Graphs and Algorithms. In Proceedings of the Thirty-Fifth Annual Allerton Conference on Communication, Control, and Computing, pages 666–680. Allerton House, 1997.
[8] V. Gogate and P. Domingos. Probabilistic Theorem Proving.
In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence (UAI-2011), pages 256–265. AUAI Press, 2011.
[9] M. Hartwig, R. Möller, and T. Braun. An Extended View on Lifting Gaussian Bayesian Networks. Artificial Intelligence, 330:104082, 2024.
[10] K. Kersting, B. Ahmadi, and S. Natarajan. Counting Belief Propagation. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI-2009), pages 277–284. AUAI Press, 2009.
[11] J. Kisyński and D. Poole. Constraint Processing in Lifted Probabilistic Inference. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence (UAI-2009), pages 293–302. AUAI Press, 2009.
[12] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger. Factor Graphs and the Sum-Product Algorithm. IEEE Transactions on Information Theory, 47:498–519, 2001.
[13] M. Luttermann, T. Braun, R. Möller, and M. Gehrke. Colour Passing Revisited: Lifted Model Construction with Commutative Factors. In Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-2024), pages 20500–20507. AAAI Press, 2024.
[14] M. Luttermann, J. Speller, M. Gehrke, T. Braun, R. Möller, and M. Hartwig. Approximate Lifted Model Construction. In Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence (IJCAI-2025), 2025. https://arxiv.org/abs/2504.20784.
[15] B. Milch, L. S. Zettlemoyer, K. Kersting, M. Haimes, and L. P. Kaelbling. Lifted Probabilistic Inference with Counting Formulas. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (AAAI-2008), pages 1062–1068. AAAI
Press, 2008.
[16] M. Niepert and G. Van den Broeck. Tractability through Exchangeability: A New Perspective on Efficient Probabilistic Inference. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI-2014), pages 2467–2475. AAAI Press, 2014.
[17] D. Poole. First-Order Probabilistic Inference. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI-2003), pages 985–991. IJCAI Organization, 2003.
[18] M. Richardson and P. Domingos. Markov Logic Networks. Machine Learning, 62(1–2):107–136, 2006.
[19] N. Taghipour, D. Fierens, J. Davis, and H. Blockeel. Lifted Variable Elimination: Decoupling the Operators from the Constraint Language. Journal of Artificial Intelligence Research, 47(1):393–439, 2013.
[20] G. Van den Broeck. Lifted Inference and Learning in Statistical Relational Models. PhD thesis, KU Leuven, 2013.
[21] G. Van den Broeck and J. Davis. Conditioning in First-Order Knowledge Compilation and Lifted Probabilistic Inference. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI-2012), pages 1961–1967. AAAI Press, 2012.
[22] G. Van den Broeck, N. Taghipour, W. Meert, J. Davis, and L. De Raedt. Lifted Probabilistic Inference by First-Order Knowledge Compilation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI-2011), pages 2178–2185. IJCAI Organization, 2011.
[23] B. Weisfeiler and A. A. Leman. The Reduction of a Graph to Canonical Form and the Algebra which Appears Therein. NTI, Series, 2:12–16, 1968. English translation by Grigory Ryabov available at https://www.iti.zcu.cz/wl2018/pdf/wl_paper_translation.pdf.

Appendix

1 Counterexamples

Proposition 10. The 1DEED is not a metric.

Proof. The 1DEED is not a metric because the triangle inequality does not hold, as the following counterexample shows:

$$\varphi_1 = \begin{pmatrix} 2 \\ 0.5 \end{pmatrix}, \quad \varphi_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad \varphi_3 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$$

with

$$d_\infty(\varphi_1, \varphi_2) + d_\infty(\varphi_2, \varphi_3) = 1 + 1 < 3 = d_\infty(\varphi_1, \varphi_3).$$
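The 1DEED $d_\infty(\varphi_1, \varphi_2) = \max_k |\varphi_1(k) - \varphi_2(k)| / \min\{\varphi_1(k), \varphi_2(k)\}$ (as characterised in Theorem 3 below) and the counterexample above can be checked directly in a few lines of Python:

```python
def d_inf(phi1, phi2):
    """1DEED between two strictly positive potential vectors:
    max_k |phi1[k] - phi2[k]| / min(phi1[k], phi2[k])."""
    return max(abs(a - b) / min(a, b) for a, b in zip(phi1, phi2))

phi1, phi2, phi3 = [2.0, 0.5], [1.0, 1.0], [1.0, 2.0]

# Triangle inequality fails: d(1,2) + d(2,3) = 2 < 3 = d(1,3).
assert d_inf(phi1, phi2) == 1.0
assert d_inf(phi2, phi3) == 1.0
assert d_inf(phi1, phi3) == 3.0
```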
Proposition 11. The 1DEED lacks the transitivity property.

Proof. Given a Boolean random variable $Var$, consider three factors $\varphi_1, \varphi_2, \varphi_3$ defined as follows:

$Var$ | $\varphi_1(Var)$ | $\varphi_2(Var)$ | $\varphi_3(Var)$
true | 0.95 | 1.00 | 1.08
false | 2.05 | 1.95 | 2.10

Using the 1DEED with ε = 0.1, we end up with $\varphi_1 =_\varepsilon \varphi_2$ and $\varphi_2 =_\varepsilon \varphi_3$, but $\varphi_1 \neq_\varepsilon \varphi_3$:

$$\varphi_1(\text{true}) = 0.95 < 0.972 = (1-\varepsilon) \cdot 1.08 = (1-\varepsilon)\,\varphi_3(\text{true}).$$

2 Missing Proofs

Theorem 3. Def. 3 and Def. 4 (main paper) of ε-equivalence for two vectors $\varphi_1, \varphi_2 \in \mathbb{R}^n_{>0}$ are equivalent.

Proof. In mathematical terms, the claim can be summarised as $\varphi_1 =_\varepsilon \varphi_2 \Leftrightarrow d_\infty(\varphi_1, \varphi_2) \le \varepsilon$, which we prove for any $\varepsilon > 0$:

$\varphi_1 =_\varepsilon \varphi_2$ for two factors $\varphi_1, \varphi_2 \in \mathbb{R}^n_{>0}$
$\overset{\text{def.}}{\Leftrightarrow} \varphi_1(k) \in [(1-\varepsilon)\varphi_2(k), (1+\varepsilon)\varphi_2(k)]$ and $\varphi_2(k) \in [(1-\varepsilon)\varphi_1(k), (1+\varepsilon)\varphi_1(k)]$ for $k = 1, \ldots, n$
$\Leftrightarrow \varphi_2(k) - \varphi_2(k)\varepsilon \le \varphi_1(k) \le \varphi_2(k) + \varepsilon\varphi_2(k)$ and $\varphi_1(k) - \varphi_1(k)\varepsilon \le \varphi_2(k) \le \varphi_1(k) + \varepsilon\varphi_1(k)$ for $k = 1, \ldots, n$
$\Leftrightarrow -\varphi_2(k)\varepsilon \le \varphi_1(k) - \varphi_2(k) \le \varepsilon\varphi_2(k)$ and $-\varphi_1(k)\varepsilon \le \varphi_2(k) - \varphi_1(k) \le \varepsilon\varphi_1(k)$ for $k = 1, \ldots, n$
$\Leftrightarrow |\varphi_1(k) - \varphi_2(k)| \le \varepsilon\varphi_2(k)$ and $|\varphi_1(k) - \varphi_2(k)| \le \varepsilon\varphi_1(k)$ for $k = 1, \ldots, n$
$\Leftrightarrow \frac{|\varphi_1(k) - \varphi_2(k)|}{\varphi_2(k)} \le \varepsilon$ and $\frac{|\varphi_1(k) - \varphi_2(k)|}{\varphi_1(k)} \le \varepsilon$ for $k = 1, \ldots, n$
$\Leftrightarrow \frac{|\varphi_1(k) - \varphi_2(k)|}{\min\{\varphi_1(k), \varphi_2(k)\}} \le \varepsilon$ for $k = 1, \ldots, n$
$\Leftrightarrow \max_{k=1,\ldots,n} \frac{|\varphi_1(k) - \varphi_2(k)|}{\min\{\varphi_1(k), \varphi_2(k)\}} \le \varepsilon$
$\overset{\text{def.}}{\Leftrightarrow} d_\infty(\varphi_1, \varphi_2) \le \varepsilon$

Theorem 7. The maximal absolute deviation between any initial probability $p = P_M(r \mid e)$ of $r$ given $e$ in model $M$ and the probability $p' = P_{M'}(r \mid e)$ in the modified model $M'$ resulting from

Figure 5. Bounds of Eq. (8) in
comparison to $p$ for different $d$ values depending on $m$ and ε (Eq. (5) of the main paper); circles and crosses mark the maximum distances from those bounds to the function $p$. Distances to the function $f(p) = p$ are later also referred to as $f = f_{\mathrm{upper}}$ for $p \in [0, \frac{1}{2}]$ and $f = f_{\mathrm{lower}}$ for $p \in (\frac{1}{2}, 1]$.

running Hierarchical Advanced Colour Passing (HACP, main paper, Alg. 2) or ε-Advanced Colour Passing (ε-ACP) on $M$ can be bounded by

$$p^{\max}_{\Delta} := \max_{\text{any } r \mid e} |p - p'| \le \frac{\sqrt{e^{d}}-1}{\sqrt{e^{d}}+1} \quad \text{with } d = D_{CD}(P_M, P_{M'}).$$

Proof. From Chan and Darwiche [1], we already know that

$$\frac{p\,e^{-d}}{p(e^{-d}-1)+1} \le p' = P_{M'}(r \mid e) \le \frac{p\,e^{d}}{p(e^{d}-1)+1} \tag{8}$$

holds (see Fig. 5), where $p = P_M(r \mid e)$ is the probability of $r$ given $e$ in the original model $M$ and $d = D_{CD}(P_M, P_{M'})$ is the value of the distance measure introduced by Chan and Darwiche between $P_M$ and $P_{M'}$. Hence, for any $r$ given $e$, in the worst case we get

$$|p - p'| = \begin{cases} p' - p & \text{for } p \le p' \\ p - p' & \text{for } p' < p \end{cases} \;=\; \begin{cases} \frac{p\,e^{d}}{p(e^{d}-1)+1} - p & \text{for } p \le p' \\ p - \frac{p\,e^{-d}}{p(e^{-d}-1)+1} & \text{for } p' < p. \end{cases}$$

Becoming independent of $p'$ guarantees one maximal bound for all possible queries and can be achieved by using the maximum of both cases on $[0,1]$ as an upper bound for $p^{\max}_{\Delta}$, which is given by the following function $f^{\max}_{\Delta}(p)$:

$$f^{\max}_{\Delta}(p) := \max(f_{\mathrm{upper}}(p), f_{\mathrm{lower}}(p)) \quad \text{for } p \in [0,1]$$
$$\text{with } f_{\mathrm{upper}}(p) := \frac{p\,e^{d}}{p(e^{d}-1)+1} - p = \frac{p(1-p)(e^{d}-1)}{p(e^{d}-1)+1}$$
$$\text{and } f_{\mathrm{lower}}(p) := p - \frac{p\,e^{-d}}{p(e^{-d}-1)+1} = \frac{p(1-p)(1-e^{-d})}{p(e^{-d}-1)+1}.$$

It is easy to see that $f^{\max}_{\Delta}$ is a symmetric function around $p = 0.5$

Figure 6. The $f_{\mathrm{upper}}$ and $f_{\mathrm{lower}}$ functions over $[0,1]$ with $d_2$ values from Corollary 8, used as an upper estimate for $p^{\max}_{\Delta}$ by bounding it from above; colours denote ε = 0.01 and ε = 0.001, line types denote m = 10, 100, 1000.

(see Fig.
6), because $f_{\mathrm{upper}}(p) = f_{\mathrm{lower}}(1-p)$ holds:

$$f_{\mathrm{lower}}(1-p) = \frac{(1-p)p(1-e^{-d})}{(1-p)(e^{-d}-1)+1} = \frac{(1-p)p(e^{d}-1)e^{-d}}{(1-p)(e^{-d}-1)+1} = \frac{(1-p)p(e^{d}-1)}{\left((1-p)(e^{-d}-1)+1\right)e^{d}} = \frac{p(1-p)(e^{d}-1)}{(1-p)(1-e^{d})+e^{d}} = \frac{p(1-p)(e^{d}-1)}{1-p+p\,e^{d}} = \frac{p(1-p)(e^{d}-1)}{p(e^{d}-1)+1} = f_{\mathrm{upper}}(p).$$

Now, the point at which the maximum switches between the two functions is $p = 0.5$, since $f_{\mathrm{upper}}(p)$ decreases for $p > 0.5$ and $f_{\mathrm{lower}}(p)$ increases for $p < 0.5$ for $d \ge 0$. Therefore, we get

$$f^{\max}_{\Delta}(p) := \begin{cases} f_{\mathrm{upper}}(p) & \text{for } 0 \le p \le 0.5 \\ f_{\mathrm{lower}}(p) & \text{for } 0.5 < p \le 1. \end{cases} \tag{9}$$

This means that our search for the maximum deviation leads us to calculate the derivatives with respect to $p$,

$$f'_{\mathrm{upper}}(p) = \frac{-(e^{d}-1)\left(p^{2}(e^{d}-1) + 2p - 1\right)}{\left(p(e^{d}-1)+1\right)^{2}}, \qquad f''_{\mathrm{upper}}(p) = \frac{-2e^{d}(e^{d}-1)}{\left(p(e^{d}-1)+1\right)^{3}},$$
$$f'_{\mathrm{lower}}(p) = \frac{(1-e^{-d})\left(p^{2}(1-e^{-d}) - 2p + 1\right)}{\left(p(e^{-d}-1)+1\right)^{2}}, \qquad f''_{\mathrm{lower}}(p) = \frac{-2e^{d}(e^{d}-1)}{\left(p + e^{d}(1-p)\right)^{3}},$$

and to consider the first-order conditions initially. For this purpose, we first obtain

$$f'_{\mathrm{upper}}(p) = 0 \;\Leftrightarrow\; p^{2}(e^{d}-1) + 2p - 1 = 0 \;\Leftrightarrow\; p_{\mathrm{upper},1/2} = \frac{-1}{e^{d}-1} \pm \sqrt{\left(\frac{-1}{e^{d}-1}\right)^{2} + \frac{1}{e^{d}-1}} = \frac{-1 \pm \sqrt{e^{d}}}{e^{d}-1}.$$

As $p_{\mathrm{upper},2}$ is smaller than zero, the potential maximum in $[0,1]$ is at

$$p_1 = p_{\mathrm{upper},1} = \frac{\sqrt{e^{d}}-1}{e^{d}-1} = \frac{1}{\sqrt{e^{d}}+1}.$$

Analogously, we find a potential maximum for $f_{\mathrm{lower}}$:

$$f'_{\mathrm{lower}}(p) = 0 \;\Leftrightarrow\; p^{2}(1-e^{-d}) - 2p + 1 = 0 \;\Leftrightarrow\; p_{\mathrm{lower},1/2} = \frac{1}{1-e^{-d}} \pm \sqrt{\left(\frac{1}{1-e^{-d}}\right)^{2} - \frac{1}{1-e^{-d}}} = \frac{1 \pm \sqrt{e^{-d}}}{1-e^{-d}}.$$

As $p_{\mathrm{lower},1}$ is larger than one, the possible maximum in $[0,1]$ is at

$$p_2 = p_{\mathrm{lower},2} = \frac{1-\sqrt{e^{-d}}}{1-e^{-d}} = \frac{1}{\sqrt{e^{-d}}+1} = \frac{\sqrt{e^{d}}}{\sqrt{e^{d}}+1},$$

and the second-order conditions can also easily be checked: since $e^{d}-1 > 0$ and $1-p > 0$, we get $f''_{\mathrm{upper}}(p_1) < 0$ and $f''_{\mathrm{lower}}(p_2) < 0$ and can conclude that $p_1$ is a local maximum of $f_{\mathrm{upper}}$ and $p_2$ is a local maximum of $f_{\mathrm{lower}}$. The
boundary values $0$ and $1$ are no possible points for a global maximum, because both functions take on the value $0$ there. Therefore, the only possible extreme point for the global maximum of $f_{\mathrm{upper}}$ is $p_1 = \frac{1}{\sqrt{e^{d}}+1}$, and of $f_{\mathrm{lower}}$ it is $p_2 = \frac{\sqrt{e^{d}}}{\sqrt{e^{d}}+1}$. Note that $p_1$ and $p_2$ are symmetrically distanced to $p = 1/2$. Both reach exactly the same maximal deviation: with $1 - p_1 = \frac{\sqrt{e^{d}}}{\sqrt{e^{d}}+1}$ and $p_1(e^{d}-1) + 1 = \frac{e^{d}+\sqrt{e^{d}}}{\sqrt{e^{d}}+1} = \sqrt{e^{d}}$, we obtain

$$f_{\mathrm{upper}}(p_1) = \frac{p_1(1-p_1)(e^{d}-1)}{p_1(e^{d}-1)+1} = \frac{\sqrt{e^{d}}\,(e^{d}-1)}{\left(\sqrt{e^{d}}+1\right)^{2}\sqrt{e^{d}}} = \frac{\left(\sqrt{e^{d}}-1\right)\left(\sqrt{e^{d}}+1\right)}{\left(\sqrt{e^{d}}+1\right)^{2}} = \frac{\sqrt{e^{d}}-1}{\sqrt{e^{d}}+1},$$

and the same holds for

$$f_{\mathrm{lower}}(p_2) = \frac{p_2(1-p_2)(1-e^{-d})}{p_2(e^{-d}-1)+1} = \frac{\sqrt{e^{d}}-1}{\sqrt{e^{d}}+1}.$$

This means:

$$p^{\max}_{\Delta} = \max_{\text{any } r \mid e} |p - p'| \le f_{\mathrm{upper}}(p_1) = f_{\mathrm{lower}}(p_2) = \frac{\sqrt{e^{d}}-1}{\sqrt{e^{d}}+1}.$$

Theorem 9. For any given $p^{*}_{\Delta} \in (0, \frac{1}{2}]$, the output of HACP guarantees the bound $p^{\max}_{\Delta} \le p^{*}_{\Delta}$ for any $\varepsilon \in (0,1)$ that is smaller than or equal to

$$\varepsilon_1 = -\frac{1 + \frac{m-1}{m} - \frac{1}{m}e^{d/m}}{2 \cdot \frac{m-1}{m}} + \sqrt{\left(\frac{1 + \frac{m-1}{m} - \frac{1}{m}e^{d/m}}{2 \cdot \frac{m-1}{m}}\right)^{2} - \frac{1 - e^{d/m}}{\frac{m-1}{m}}}$$

with

$$d = \ln\!\left(\left(\frac{p^{*}_{\Delta}+1}{1-p^{*}_{\Delta}}\right)^{2}\right).$$

This means that we can bound the maximal deviation of HACP and ε-ACP, respectively, by $p^{*}_{\Delta}$ by calculating $\varepsilon_1(p^{*}_{\Delta})$ before we run the algorithm. In [2], it is shown that the bound is tight.

Proof. Using Theorem 7, we get for $d = D_{CD}(P_M, P_{M'})$:

$$p^{\max}_{\Delta} \le \frac{\sqrt{e^{d}}-1}{\sqrt{e^{d}}+1} =: p^{*}_{\Delta} \;\Leftrightarrow\; p^{*}_{\Delta}\left(\sqrt{e^{d}}+1\right) = \sqrt{e^{d}}-1 \;\Leftrightarrow\; p^{*}_{\Delta}+1 = \left(1-p^{*}_{\Delta}\right)\sqrt{e^{d}} \;\Leftrightarrow\; \frac{p^{*}_{\Delta}+1}{1-p^{*}_{\Delta}} = \sqrt{e^{d}} \;\Leftrightarrow\; \ln\!\left(\left(\frac{p^{*}_{\Delta}+1}{1-p^{*}_{\Delta}}\right)^{2}\right) = d = D_{CD}(P_M, P_{M'}).$$

Figure 7. The maximal choice of ε depending on the maximal deviation $p^{*}_{\Delta}$ for different numbers of factors m = 10, 100, 1000, guaranteeing $p^{\max}_{\Delta} \le p^{*}_{\Delta}$ as proven in Theorem 9.

Additionally, we know from Theorem 5 and the corresponding version for HACP (Proposition 6) that

$$D_{CD}(P_M, P_{M'}) \le \ln\!\left(\left(\frac{\left(1+\frac{m-1}{m}\varepsilon\right)(1+\varepsilon)}{1+\frac{1}{m}\varepsilon}\right)^{\!m}\right).$$
(10)

Thus, the question we now answer is for which ε this inequality reaches equality:

$$d = \ln\!\left(\left(\frac{\left(1+\frac{m-1}{m}\varepsilon\right)(1+\varepsilon)}{1+\frac{1}{m}\varepsilon}\right)^{\!m}\right)$$
$$\Leftrightarrow\; e^{d/m} = \frac{\left(1+\frac{m-1}{m}\varepsilon\right)(1+\varepsilon)}{1+\frac{1}{m}\varepsilon}$$
$$\Leftrightarrow\; \left(1+\tfrac{1}{m}\varepsilon\right)e^{d/m} = \left(1+\tfrac{m-1}{m}\varepsilon\right)(1+\varepsilon)$$
$$\Leftrightarrow\; 0 = \frac{m-1}{m}\varepsilon^{2} + \left(1 + \frac{m-1}{m} - \frac{1}{m}e^{d/m}\right)\varepsilon + 1 - e^{d/m},$$

which can be solved for ε with

$$q_1 = \frac{1 + \frac{m-1}{m} - \frac{1}{m}e^{d/m}}{\frac{m-1}{m}} \quad\text{and}\quad q_2 = \frac{1 - e^{d/m}}{\frac{m-1}{m}},$$

resulting in

$$\varepsilon_{1/2} = -\frac{q_1}{2} \pm \sqrt{\left(\frac{q_1}{2}\right)^{2} - q_2}.$$

Since $q_1 \ge 0 \Leftrightarrow \frac{m-1}{m} \ge p^{*}_{\Delta}$, the minus option $\varepsilon_2$ is smaller than $0$, and knowing that $m \ge 2$ already guarantees the result of Theorem 9 for all cases in which applying the algorithm makes sense (better than guessing, i.e., $p^{*}_{\Delta} \le 0.5$), the only reasonable solution is $\varepsilon_1$.

3 The Basic Idea of Lifting

To illustrate the idea behind lifting, consider the following example.

Example 4. Take a look at the FG illustrated in Fig. 8 and assume we want to answer the query $P(B = \text{true})$. We obtain

$$P(B = \text{true}) = \sum_{a \in \mathrm{range}(A)} \sum_{c \in \mathrm{range}(C)} P(A = a, B = \text{true}, C = c) = \frac{1}{Z} \sum_{a \in \mathrm{range}(A)} \sum_{c \in \mathrm{range}(C)} \phi_1(a, \text{true}) \cdot \phi_2(c, \text{true}) = \frac{1}{Z}\left(\varphi_1\varphi_1 + \varphi_1\varphi_3 + \varphi_3\varphi_1 + \varphi_3\varphi_3\right).$$

Since $\phi_1(A, B)$ and $\phi_2(C, B)$ are equivalent
(in particular, it holds that $\phi_1(a, \text{true}) = \phi_2(c, \text{true})$ for all assignments where $a = c$), we can exploit this property to simplify the computation and get

$$P(B = \text{true}) = \frac{1}{Z} \sum_{a \in \mathrm{range}(A)} \sum_{c \in \mathrm{range}(C)} \phi_1(a, \text{true}) \cdot \phi_2(c, \text{true}) = \frac{1}{Z} \sum_{a \in \mathrm{range}(A)} \phi_1(a, \text{true}) \sum_{c \in \mathrm{range}(C)} \phi_2(c, \text{true}) = \frac{1}{Z}\left(\sum_{a \in \mathrm{range}(A)} \phi_1(a, \text{true})\right)^{2} = \frac{1}{Z}\left(\sum_{c \in \mathrm{range}(C)} \phi_2(c, \text{true})\right)^{2} = \frac{1}{Z}\left(\varphi_1 + \varphi_3\right)^{2}.$$

This example illustrates the idea of using a representative of indistinguishable objects for computations (here, either $A$ or $C$ can be chosen as a representative for the group consisting of $A$ and $C$).

Figure 8. An exemplary FG encoding a full joint probability distribution over three random variables $A$, $B$, and $C$, with factors $\phi_1(A, B)$ and $\phi_2(C, B)$ sharing the same potential table: $(\text{true}, \text{true}) \mapsto \varphi_1$, $(\text{true}, \text{false}) \mapsto \varphi_2$, $(\text{false}, \text{true}) \mapsto \varphi_3$, $(\text{false}, \text{false}) \mapsto \varphi_4$.

The idea of exploiting exponentiation can be generalised to groups consisting of $k$ indistinguishable objects to significantly reduce the computational effort for query answering. To be able to exploit exponentiation during probabilistic inference, we need to ensure that the potential tables of factors within the same group are identical. Indistinguishable objects frequently occur in many real-world domains. For example, in an epidemic domain, each person impacts the probability of having an epidemic equally. That is, the probability of an epidemic depends on the number of sick people in the universe but is independent of which specific individual people are sick.

4 Group Sizes of a Hierarchical Ordering

Table 2 shows, for each level of the hierarchy in Fig. 2 of the main paper, how many ε-equivalent groups of which size exist. Thus, it illustrates the increasing compression that becomes possible with increasing ε values.

Level | Number of total groups | Group size (frequency)
0 | 10 | 1 (10)
1 | 9 | 2 (1), 1 (8)
2 | 8 | 2 (2), 1 (6)
3 | 7 | 2 (3), 1 (4)
4 | 6 | 4 (1), 2 (1), 1 (4)
5 | 5 | 4 (1), 3 (1), 1 (3)
6 | 4 | 4 (1), 3 (1), 2 (1), 1 (1)
7 | 3 | 7 (1), 2 (1), 1 (1)
8 | 2 | 7 (1), 3 (1)
9 | 1 | 10 (1)

Table 2.
Implicit group sizes for each level for the given structure and pre-ordered FG for Ex. 3 from the main paper.

References

[1] H. Chan and A. Darwiche. A Distance Measure for Bounding Probabilistic Belief Change. International Journal of Approximate Reasoning, 38:149–174, 2005.
[2] M. Luttermann, J. Speller, M. Gehrke, T. Braun, R. Möller, and M. Hartwig. Approximate Lifted Model Construction. In Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence (IJCAI-2025), 2025. https://arxiv.org/abs/2504.20784.
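The lifted computation in Example 4 (Appendix 3) can be checked numerically: summing the factor product over all assignments of $A$ and $C$ gives the same unnormalised value as squaring a single representative sum. A Python sketch with illustrative potential values $\varphi_1 = 2$, $\varphi_3 = 3$ (the concrete $\varphi$ values are not fixed in the example):

```python
from itertools import product

# Shared potential table of phi1(A, B) and phi2(C, B); values illustrative.
phi = {(True, True): 2.0, (True, False): 1.0,
       (False, True): 3.0, (False, False): 4.0}

# Ground (propositional) computation: sum over all assignments of A and C.
ground = sum(phi[(a, True)] * phi[(c, True)]
             for a, c in product([True, False], repeat=2))

# Lifted computation: one representative sum, then exponentiation.
lifted = sum(phi[(a, True)] for a in [True, False]) ** 2

assert ground == lifted == 25.0  # (phi_1 + phi_3)^2 = (2 + 3)^2
```

For a group of $k$ indistinguishable objects, the exponent 2 becomes $k$, replacing $|range|^k$ products with a single sum and one exponentiation.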
arXiv:2505.22290v1 [cs.AI] 28 May 2025

Rethinking the Unsolvable: When In-Context Search Meets Test-Time Scaling

Fanzeng Xia*1, Yidong Luo*1, Tinko Sebastian Bartels1, Yaqi Xu2, and Tongxin Li†1

1The Chinese University of Hong Kong, Shenzhen
2Beijing University of Posts and Telecommunications

Abstract

Recent research has highlighted that Large Language Models (LLMs), even when trained to generate extended long reasoning steps, still face significant challenges on hard reasoning problems. However, much of the existing literature relies on direct prompting with simple in-context learning examples for evaluation, which largely overlooks advanced techniques to elicit LLMs' deliberate reasoning before drawing conclusions that LLMs hit a performance ceiling. In this paper, we systematically explore the combined potential of in-context search and test-time scaling on super hard reasoning tasks. We find that by employing advanced in-context search prompting to LLMs augmented with internal scaling, one can achieve transformative performance breakthroughs on tasks previously deemed "unsolvable" (e.g., reported success rates below 5%). We provide both empirical results and theoretical analysis of how this combination can unleash LLM reasoning capabilities: i) Empirically, on controlled NP-Hard tasks and complex real-world planning benchmarks, our approach achieves up to a 30× improvement in success rates compared to previously reported results without any external mechanisms; ii) Theoretically, we show that in-context search prompting, when combined with internal scaling, significantly extends the complexity class of solvable reasoning problems. These findings challenge prevailing assumptions about the limitations of LLMs on complex tasks, indicating that current evaluation paradigms systematically underestimate their true potential.
Our work calls for a critical reassessment of how LLM reasoning is benchmarked and a more robust evaluation strategy that fully captures the true capabilities of contemporary LLMs, which can lead to a better understanding of their operational reasoning boundaries in real-world deployments. 1 Introduction Large Language Models (LLMs) have demonstrated significant progress in solving diverse reasoning and planning tasks, such as instruction following (Wei et al., 2021; Ouyang et al., 2022), commonsense reasoning (Talmor et al., 2018; Brown et al., 2020), and code generation (Chen et al., 2021; Austin et al., 2021). Despite their achievements, LLMs still face significant challenges when applied to complex, long-horizon problem instances. The emergence of reasoning-oriented models, as exemplified by (Jaech et al., 2024; Guo et al., 2025), has led to substantial progress in solving problems that require extensive reasoning and planning. Trained through reinforcement learning and high-quality trajectories to develop systematic reasoning processes, these models demonstrate improved performance on tasks that require multi-step deduction and self-correction. This enhanced capability, evidenced by benchmarks such as PlanBench (Valmeekam et al., 2023), marks a significant advancement in areas where earlier LLMs exhibited notable deficiencies. However, this improvement is not uniform across all problem complexities; the models particularly struggle with more challenging instances, such as controlled NP-hard tasks like Vertex Cover and 3-Dimensional Matching (Yang et al., 2025), and sophisticated real-world planning scenarios such as the Natural Plan Benchmark (Zheng et al., 2024). Recent critiques (Valmeekam et al., 2024a,b) emphasize that difficulties persist in solving these complex problem instances, highlighting the need for further strategies to address the full spectrum of reasoning challenges in real-world applications. *Equal Contribution. 
| https://arxiv.org/abs/2505.22290v1 |
†Corresponding Author.

Figure 1: Overview of promoted LLM reasoning boundaries for hard problems via the combination of in-context search and test-time scaling. Our findings reveal that tasks previously reported as unsolvable are actually solvable and thus require a rethink of the current evaluation configurations. Evaluation is based on the Trip Planning task as an illustrative example.

To optimize the deployment of LLMs across diverse problem difficulties, recent studies (Chen et al., 2024a; Sun et al., 2025) have sought to define their reasoning limitations. A common observation is that current LLMs encounter a significant performance ceiling (often when success rates on complex tasks drop below 10-20%), suggesting that additional training with new architectures may be needed for breakthroughs on hard problems. However, our analysis of experimental settings in recent studies (see Table 3 in Appendix B) reveals a pattern: most evaluations rely on direct prompting strategies. These include few-shot in-context learning without task decomposition (Wang et al., 2020), automatic Chain-of-Thought (Auto-CoT) prompting (Zhang et al., 2022) where models generate reasoning steps autonomously, and hints before prompting (Fu et al., 2024). Focusing mostly on direct prompting, prior work might have missed the power of advanced in-context search prompting techniques to elicit deeper reasoning, possibly leading to a biased assessment of LLMs' actual reasoning boundaries. Inspired by this, our work aims to systematically probe and potentially elevate the perceived upper limits of LLM reasoning on "super hard" problem instances (those with previously reported success rates below 5%).
This investigation motivates the following central question: Are LLMs truly incapable of solving tasks deemed "unsolvable" by current evaluation paradigms, or can their full potential on such problems yet be unleashed? To answer this question, we adopt a standard prompt-based approach (without external mechanisms or additional fine-tuning) to evaluate the upper-bound reasoning performance of out-of-the-box LLMs on two challenging problem categories: controlled NP-hard tasks and complex real-world planning scenarios. Overall, there are two types of methods commonly employed to enhance the on-the-fly reasoning capabilities of LLMs: (i) In-context search prompting, a class of prompting strategies that enables LLMs to internally simulate search processes through learned in-context examples. Our experiments explore a variety of prompting techniques designed to optimize model performance across diverse tasks, including direct prompting, Chain-of-Thought (CoT), and Algorithm-of-Thought (AoT); (ii) Test-time scaling, a class of techniques that augment LLM reasoning at inference time by expanding computational effort or diversifying reasoning pathways. We experiment with methods including parallel scaling, sequential scaling, and internal scaling. Further details on both these categories of methods are provided in Appendix B. Recent representative works have sought to provide insights into improving LLM reasoning capabilities from different perspectives: (i) Through in-context search prompting, Ge et al. (2025) explored how different direct prompting strategies affect models with internal scaling, while Sel et al. (2025) developed advanced in-context search prompting methods that enable LLMs to better follow structured in-context search processes; (ii) Regarding test-time scaling, Snell et al. (2024) investigated the impact of parallel scaling and sequential scaling with direct prompting, leaving
more advanced in-context search techniques unexplored, concluding that these test-time scaling schemes yielded only small gains on hard problems; (iii) In the context of super-hard tasks, (Chen et al., 2024b) examined sequential scaling in conjunction with external pipelines for complex real-world planning. Other investigations into NP-hard problems, such as 3-SAT and Sum-of-Squares (Hazra et al., 2025; Li et al., 2025), have found that LLMs show promise on NP-hard instances and have probed the models to understand different cognitive behaviors. These studies provide initial explorations of specific NP-hard tasks, but an overall understanding and analysis are still lacking. While these prior works offer valuable insights, we identify a crucial gap in the current literature as shown in Table 3: the potential of advanced in-context search methods, particularly when deeply integrated with test-time scaling strategies, remains unexplored. Specifically, how their fine-grained combinations might advance the current understanding and quantification of the true reasoning capacity of LLMs on unsolvable tasks is still a critical open problem. As illustrated in Figure 1, our ablation studies reveal a significant empirical finding: by combining advanced in-context search prompting as algorithmic guidance for LLMs endowed with internal scaling, we achieve up to 30 ×improvement on problem instances previously considered unsolvable and intractable (reported success rates below 5%). These results indicate that commonly employed configurations for evaluating the true reasoning ability of contemporary LLMs systematically underestimate the attainable reasoning ceiling, thereby masking their operational boundaries in real deployments. 
To further investigate the underlying reasons behind these improvements, we extend prior theoretical work in (Merrill and Sabharwal, 2024; Sel et al., 2023), providing an analysis of the expressivity of LLMs augmented with in-context search and internal scaling. Collectively, our empirical and theoretical results necessitate a systematic reassessment of current methodologies for probing LLM reasoning boundaries and call for the development of more robust and faithful evaluation techniques. Our main contributions are summarized as follows:
• Identifying Systematic Bias. We reveal systematic evaluation bias in the commonly used evaluation configurations of existing research, which underestimates the attainable performance ceiling of LLMs' reasoning ability and can yield misleading operational boundaries.
• Improved LLM Reasoning Abilities. By conducting ablation studies on fine-grained combinations of in-context search prompting with test-time scaling, we achieve up to a 30× improvement over previously reported success rates on two controlled NP-hard tasks and two complex real-world planning problems.
• Theoretical Insights. We provide a theoretical analysis of the reasoning boundary achievable with the combination of in-context search and internal scaling, offering valuable insights into the expressive power of contemporary LLMs.
2 Empirical Results
In this section, we present the empirical results supporting our central claim. We begin by describing the setup of our experiments on two hard reasoning problem categories. We then present our findings through a detailed three-level ablation study that examines combinations of in-context search and test-time scaling methods to identify systematic bias in commonly used evaluation configurations.
2.1 Experimental Setup
Tasks.
Our investigation incorporates two categories of super-hard task instances: (i) Controlled NP-Hard Tasks: we begin by conducting experiments on controlled Nondeterministic Polynomial-time (NP)-hard problems, which are widely believed to require exponential time
to find solutions. Specifically, we select Vertex Cover and 3-Dimensional Matching (3DM) and generate 100 controlled problem instances according to the methods in (Yang et al., 2025), enabling an understanding of their average behaviors and reliable performance estimates. We evaluate the LLMs' reasoning capabilities on tasks at the most challenging difficulty (level 10). (ii) Complex Real-World Planning: furthermore, we disentangle the complexity of numerical inputs and evaluate LLMs while planning in natural language, using the Trip Planning and Meeting Planning tasks from the Natural Plan benchmark (Zheng et al., 2024). Similarly, we randomly select 100 instances for each of these tasks. This type of task can essentially be understood as a natural-language variation of the NP-hard Traveling Salesperson Problem (TSP). We also select the most difficult level (level 10) for these tasks to explore the performance ceiling of LLM reasoning capabilities.

Table 1: Performance on Vertex Cover (Difficulty Level = 10) for the controlled NP-hard task. Qwen3 showed almost no improvement across all methods, possibly because it struggles with complex numerical abstract reasoning.

| Model | Evaluation Strategy | Direct Prompting | Greedy Search (CoT) | Depth-First Search (AoT) |
|---|---|---|---|---|
| Qwen3 | No Scaling (Base Model) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| | Parallel Scaling (Best-of-N) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| | Sequential Scaling (Self-Refine) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| | Internal Scaling (Thinking Mode) | 0/100 = 0% | 1/100 = 1% | 2/100 = 2% |
| Claude 3.7 | No Scaling (Base Model) | 0/100 = 0% | 1/100 = 1% | 1/100 = 1% |
| | Parallel Scaling (Best-of-N) | 0/100 = 0% | 4/100 = 4% | 4/100 = 4% |
| | Sequential Scaling (Self-Refine) | 0/100 = 0% | 5/100 = 5% | 3/100 = 3% |
| | Internal Scaling (Thinking Mode) | 2/100 = 2% | 21/100 = 21% | 31/100 = 31% |

Table 2: Performance on Trip Planning (Difficulty Level = 10) for complex real-world planning. When the complexity of numerical inputs in language-based planning is disentangled, Qwen3 improved.

| Model | Evaluation Strategy | Direct Prompting | Greedy Search (CoT) | Depth-First Search (AoT) |
|---|---|---|---|---|
| Qwen3 | No Scaling (Base Model) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| | Parallel Scaling (Best-of-N) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| | Sequential Scaling (Self-Refine) | 0/100 = 0% | 0/100 = 0% | 1/100 = 1% |
| | Internal Scaling (Thinking Mode) | 0/100 = 0% | 24/100 = 24% | 30/100 = 30% |
| Claude 3.7 | No Scaling (Base Model) | 0/100 = 0% | 2/100 = 2% | 3/100 = 3% |
| | Parallel Scaling (Best-of-N) | 0/100 = 0% | 8/100 = 8% | 9/100 = 9% |
| | Sequential Scaling (Self-Refine) | 0/100 = 0% | 9/100 = 9% | 7/100 = 7% |
| | Internal Scaling (Thinking Mode) | 4/100 = 4% | 26/100 = 26% | 40/100 = 40% |

Models. We use an open-source LLM (Qwen3 235B (QwenLM, 2025) from Alibaba Cloud) and a closed-source API (Anthropic's Claude 3.7 Sonnet (Anthropic, 2025)) in our experiments. These particular models were chosen because they offer functionalities that allow the explicit activation or deactivation of an internal "thinking" mode. This feature is crucial because it enables us to perform ablation studies on the same base model, thereby isolating the effect of using internal scaling versus not using it. All prompting strategies were evaluated using these two models to
ensure a comprehensive understanding of their capabilities and responses to different experimental conditions. Prompts. We investigate three in-context search prompting strategies to instruct the LLMs on the tasks outlined above:
• Direct Prompting: this approach provides the model with a few problem-solution pairs (Wang et al., 2020) without any explicit intermediate reasoning steps in the examples. It relies on the model's internalized reasoning to bridge from the problem to the solution and generally does not delve into deep task decomposition.
• CoT Prompting: CoT (Wei et al., 2022) guides LLMs to generate a sequence of intermediate reasoning steps before arriving at a final solution by providing an in-context example that demonstrates a successive, step-by-step solution path (typically representing a greedy search without algorithmic operations such as backtracking).
• AoT Prompting: this approach (Sel et al., 2025) guides the model through explicit, high-level algorithmic search pathways. It uses detailed examples to demonstrate structured algorithmic operations, helping the model simulate decision-making processes based on algorithmic principles.
The constructed prompts are also evaluated under three different test-time scaling methods:
• Parallel Scaling: this method improves reasoning performance by generating multiple outputs in parallel and then aggregating them. In our experiments, we employed a Best-of-N (BoN) approach with N = 3; the highest accuracy was recorded.
• Sequential Scaling: this is an iterative method in which each computation is explicitly directed by the results of the previous one, allowing the model to refine solutions round by round. We implemented this using the Self-Refine technique (Madaan et al., 2023), where initial responses were augmented with Self-Refine prompts and resubmitted for improvement.
• Internal Scaling: this approach relies on the model autonomously determining the computational effort to allocate for reasoning, using its internal parameters and a learned policy. For our experiments, this involved activating the "thinking" mode of the selected base models for extended reasoning paths.
Templates for these prompting strategies can be found in Appendix C, with further background provided in the Related Work (Appendix B). Metrics. We measure reasoning performance using Success Rate (referred to as Accuracy in Yang et al. (2025) and Solve Rate in Zheng et al. (2024)), defined as the percentage of problem instances for which the LLM generates a complete and verifiably correct solution within each problem domain. For the controlled NP-hard tasks, a success is a solution that satisfies all defined problem constraints, while for the complex real-world planning tasks, it requires the generated plan to exactly match the ground-truth or optimal solution.
2.2 Ablation Studies
In this section, we present the results of a three-level ablation study focusing on the hard problem instances described in Section 2.1. Detailed empirical results are provided in Tables 1, 2, 4, and 5. Our discussion draws upon these tables, using the Trip Planning task (trends illustrated in Figure 1) as an illustrative example. Results on the other tasks are quantitatively similar, with details deferred to Appendix D. We aim to progressively raise the reasoning boundary of LLMs and identify systematic bias in commonly adopted evaluation configurations. Level 1: Test-time Scaling. We begin with the most commonly
used evaluation configuration: direct prompting augmented with four test-time scaling variants: without scaling (Direct-WS), parallel scaling (Direct-PS), sequential scaling (Direct-SS), and internal scaling (Direct-IS). Figure 1 shows that: (i) Both Qwen 3 and Claude 3.7 achieve a 0% success rate under Direct-WS/PS/SS. This echoes results reported in (Valmeekam et al., 2024a; Chen et al., 2024b; Lee et al., 2025), confirming that this instance is effectively unsolvable for basic LLMs; (ii) When the model is endowed with internal scaling capabilities by switching from a "no-thinking" to a "thinking" mode, Claude 3.7 improves marginally to 4%, whereas Qwen 3 remains at 0%. This aligns with the common understanding that internal scaling can raise the upper bound of a model's reasoning ability. However, the success rate is still below 5%. Related literature evaluating the reasoning capabilities of state-of-the-art LLMs often stops at this point, concluding that LLM reasoning cannot solve such difficult tasks. Takeaway 1: Under direct prompting (which includes methods like Auto-CoT, zero-shot, few-shot, and hints), LLMs generally struggle with hard reasoning tasks. Enabling test-time scaling yields only marginal gains and does not resolve the unsolved instances. This aligns with the conclusions of most existing research evaluations, indicating that state-of-the-art test-time scaling methods alone are still insufficient to solve these hard problems. Level 2: In-Context Search. The second level of our ablation study explores the efficacy of advanced in-context search prompting, specifically CoT prompting and AoT prompting. Both methods require an understanding of the task's solution path and correspondingly designed in-context examples. We evaluate these methods without test-time scaling. As Figure 1 illustrates: (i) under both CoT-WS and AoT-WS, Claude 3.7's success rate rises to 2% and 3%, respectively, whereas Qwen 3 remains fixed at 0%.
This indicates that advanced in-context search prompting alone can provide a performance lift on hard reasoning tasks; (ii) Together with the results from Level 1, we find that Qwen 3 consistently stays at 0% when either test-time scaling or advanced in-context search is applied in isolation. In contrast, Claude 3.7 shows marginal gains in both scenarios. This suggests that the effectiveness of all in-context search and test-time scaling variants is highly correlated with the inherent capabilities of the base model. Takeaway 2: Under advanced in-context search prompting with task decomposition, such as CoT and AoT, LLMs exhibit improved performance on hard reasoning tasks, even without any test-time scaling. However, their standalone impact on difficult tasks is still marginal. Level 3: Combining In-Context Search and Test-time Scaling. The final level explores the upper-bound reasoning capabilities achieved by combining in-context search and test-time scaling across six different configurations. As illustrated in Figure 1, these include CoT with parallel scaling (CoT-PS), CoT with sequential scaling (CoT-SS), CoT with internal scaling (CoT-IS), AoT with parallel scaling (AoT-PS), AoT with sequential scaling (AoT-SS), and AoT with internal scaling (AoT-IS). The results reveal the following key findings: (i) Under CoT-PS/SS and AoT-PS/SS, Qwen 3 shows for the first time a marginal improvement of only 1% on AoT-SS, while Claude 3.7 demonstrates improvements of 7–9%; (ii) The most significant breakthroughs occur with
CoT-IS and AoT-IS. We observe substantial improvements: Qwen 3's success rate jumps to 24% (CoT-IS) and 30% (AoT-IS), while Claude 3.7 reaches 26% (CoT-IS) and 40% (AoT-IS). This represents up to a 30-fold improvement in success rate compared to the previously common configurations. These results demonstrate a significant improvement in reasoning performance, breaking the previously observed evaluation thresholds for LLMs on this benchmark even though no external mechanisms were utilized. This suggests that current evaluation methods do not fully unleash the reasoning potential of LLMs, and that the current understanding of their operational boundaries is subject to systematic bias. Takeaway 3: When combining in-context search prompting with test-time scaling, especially CoT/AoT with internal scaling, LLMs achieve up to a 30× improvement in success rates on hard reasoning tasks. This reveals significant untapped potential of LLMs that commonly used evaluation methods fail to capture, highlighting the need for more robust and faithful evaluation techniques to define LLMs' true operational boundaries in real deployments.
3 Theoretical Analysis
Driven by our empirical results, we explore the theoretical foundations of why combining in-context search methods (CoT and AoT) with internal scaling enables LLMs to solve tasks that were previously unsolvable and intractable. We aim to establish a formal basis for understanding how these methods can redefine and extend the perceived reasoning boundaries of LLMs.
3.1 Preliminaries
To lay the groundwork for our theoretical analysis, we first define the key in-context search prompting strategies that are central to our investigation and subsequent theorems.
Definition 1 (In-Context Search Prompting). The following defines the primary in-context search methods employed and analyzed in this work, progressing from simpler to more structured forms of reasoning guidance:
• Direct Prompting (Few-Shot Learning): The model is augmented only with a set of a few problem-solution pairs, without any intermediate reasoning steps. Concretely, given a target input (prompt) $x = (x_1, \ldots, x_n)$, $m$ in-context examples are prepended. These examples form the demonstration sequence $S_{\mathrm{DP}}$:
$$S_{\mathrm{DP}} = \big( (x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \ldots, (x^{(m)}, y^{(m)}) \big),$$
where each $x^{(j)}$ (for $j = 1, \ldots, m$) is an example problem instance and each $y^{(j)}$ is its corresponding solution. Note that the target input $x$ is a distinct problem instance from the example inputs $x^{(j)}$ provided in $S_{\mathrm{DP}}$. The model then emits its answer $y_{\mathrm{ans}}$ for the target input $x$ conditioned on the full sequence formed by $S_{\mathrm{DP}}$. Direct prompting provides only question-answer supervision in each example pair, relying on the model's internalized reasoning to bridge from problem to solution without explicit in-context search guidance.
• Chain of Thought (CoT): CoT is a greedy in-context search paradigm wherein a transformer model, upon receiving an input $x = (x_1, \ldots, x_n)$ and a successive in-context search example (without algorithmic steps, e.g., backtracking) $\langle \text{Problem}, \text{Greedy Search Trace}, \text{Solution} \rangle$ that demonstrates step-by-step reasoning, generates a sequence of intermediate tokens $S_{\mathrm{CoT}} = (s_1, s_2, \ldots, s_{t(n)})$ of up to $t(n)$ auxiliary tokens through its autoregressive decoding process. Before giving a final answer token $y_{\mathrm{ans}}$, each intermediate token $s_j$ is produced conditioned on the input
and all previously generated tokens:
$$s_j = \mathrm{Transformer}_{\theta}\big(x, s_1, \ldots, s_{j-1}\big), \quad j = 1, \ldots, t(n).$$
The class of languages recognizable by a Transformer that uses at most $t(n)$ such intermediate decoding steps (i.e., generates up to $t(n)$ intermediate tokens) is denoted $\mathrm{CoT}(t(n))$. In essence, CoT allows the model to use these generated intermediate tokens as a form of working memory or recurrent state to enable sequential reasoning.
• Algorithm of Thought (AoT): AoT further extends CoT by providing algorithmic in-context search examples mimicking an explicit search algorithm. Upon processing an input $x = (x_1, \ldots, x_n)$, a transformer model is elicited by an algorithmic in-context search example $\langle \text{Problem}, \text{Algorithmic Search Trace}, \text{Solution} \rangle$ to generate a sequence of intermediate tokens $S_{\mathrm{AoT}} = (s_1, s_2, \ldots, s_{a(n)})$ of up to $a(n)$ auxiliary tokens. These tokens collectively instantiate an algorithmic search trace for the given problem, following the same underlying autoregressive augmentation mechanism as CoT. This paradigm empowers the model to generate tokens representing distinct algorithmic operations, including but not limited to:
1. Initialization: defining the search space or partitioning the problem into subproblems;
2. Expansion: traversing a path or branch within the search tree (e.g., depth-first exploration);
3. Evaluation: assessing the branch's promise (pruning if needed);
4. Backtracking: reverting to a previous node to explore alternative branches or strategies.
We denote by $\mathrm{AoT}(a(n))$ the class of languages the transformer can recognize when augmented by generating $a(n)$ reasoning tokens. By structuring the prompt in this way, AoT unlocks the model's ability to carry out complex, multi-path reasoning in a single query without relying on external search mechanisms.
Definition 2 (Internal Scaling). Internal scaling refers to a mechanism that enables a model to autonomously scale the number of intermediate tokens it allocates to a given problem at test time.
This determination is guided by the model's learned parameters $\theta$ and an internal control policy $\pi_\theta$, acquired from reasoning-oriented training (Anthropic, 2025; QwenLM, 2025). The total number of generated tokens or reasoning steps $T$ is thus dynamically determined by this internally trained mechanism rather than being preset. Halting of the language-deciding process is ensured by the learned policy $\pi_\theta$ stopping within a finite $T$. By adapting its computational effort to the perceived requirements of the input, the model can exhibit emergent test-time behaviors such as generating more detailed and extended reasoning chains for complex problems, or performing self-correction and evaluation steps during the process. In our work, we switch between thinking and no-thinking modes to simulate the settings with and without internal scaling:
• Without internal scaling (no-thinking mode), the policy $\pi_\theta$ leads to a small $T$, which we assume is bounded by a polynomial function of the input size $n$ (i.e., $T = \mathrm{poly}(n)$).
• With internal scaling (thinking mode), the policy $\pi_\theta$, when faced with a task requiring deeper reasoning, can allow $T$ to scale significantly further, potentially to an exponential function of $n$ (i.e., $T = \exp(n)$).
The dynamically allocated number of reasoning steps $T$ directly corresponds to the length of the generated sequence of thought tokens (such as $t(n)$ in the definition of $\mathrm{CoT}(t(n))$ or $a(n)$ in $\mathrm{AoT}(a(n))$). In the next section, we show that the ability of
internal scaling to modulate $T$ from polynomial to exponential scales is what allows such models to potentially address correspondingly more complex problems. Further theoretical definitions, including Turing machines, computational complexity classes, and decoder-only transformers, are provided in Appendix E.
3.2 Theoretical Results
Standard Transformer models, characterized by a fixed number of layers, face inherent constraints on the depth of sequential computation they can natively perform. Without augmentation, their expressive power is typically confined to complexity classes significantly weaker than P (Chen et al., 2024c). CoT prompting has revolutionized the approach to complex reasoning tasks with LLMs. This technique guides the model to generate an explicit sequence of intermediate reasoning steps before producing a final output. This explicit articulation of reasoning allows the model to undertake multi-step computations that are more complex than would be possible with direct-answer generation.
Figure 2: Conceptual roadmap illustrating the power of in-context search and test-time scaling in pushing the reasoning boundary of LLMs. LLMs can solve problems in P (Theorem 3.1) and NP (Theorem 3.2) using standard CoT and AoT with polynomial-length traces. Internal scaling, by extending these thought processes to exponential lengths, significantly pushes the reasoning boundary towards EXP (Theorem 3.3) and NEXP (Theorem 3.4).
Drawing upon recent theoretical advances (Li et al., 2024; Merrill and Sabharwal, 2024), we begin with the established understanding that Transformers using a polynomial number of CoT steps are computationally equivalent to polynomial-time Deterministic Turing Machines (DTMs), enabling them to solve problems in the class P. Figure 2 offers a conceptual roadmap that illustrates how the subsequent theorems progressively extend Transformers' reasoning boundary to higher complexity classes under different configurations.
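In symbols, the roadmap of Figure 2 can be summarized compactly; each equality is given by the corresponding theorem, and the containments are the standard complexity-theoretic inclusions:

```latex
\begin{align*}
\underbrace{\mathrm{CoT}(\mathrm{poly}(n))}_{=\,\mathsf{P}}
\;\subseteq\;
\underbrace{\mathrm{AoT}(\mathrm{poly}(n))}_{=\,\mathsf{NP}}
\;\subseteq\;
\underbrace{\mathrm{CoT}(\exp(n))}_{=\,\mathsf{EXP}}
\;\subseteq\;
\underbrace{\mathrm{AoT}(\exp(n))}_{=\,\mathsf{NEXP}}
\end{align*}
```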
We first recall the following theorem from (Merrill and Sabharwal, 2024):
Theorem 3.1 (CoT(poly(n)) = P; Merrill and Sabharwal (2024)). The class of languages decidable by decoder-only transformer models (satisfying the architectural assumptions in Assumption 2, detailed in the supplementary appendix), when augmented with a CoT of length $t(n) = \mathrm{poly}(n)$, is the complexity class P.
Therefore, Transformers equipped with a polynomial number of CoT steps are computationally equivalent to polynomial-time DTMs. Building on this, when Transformers are prompted with AoT to leverage their capacity for simulating non-deterministic search processes, their computational power extends to the class NP for polynomial-length reasoning traces.
Theorem 3.2 (AoT(poly(n)) = NP). The class of languages decidable by decoder-only Transformer models (satisfying the architectural assumptions in Assumption 2), when prompted with AoT such that the total number of intermediate tokens generated along any single accepting computational path is bounded by a polynomial $a(n) = \mathrm{poly}(n)$, is the complexity class NP.
The proof of Theorem 3.2 is given in Appendix E.3. The length of the generated thought sequence is a crucial factor. If the number of Chain of Thought steps is allowed to scale exponentially with the input size, Transformers can solve problems from correspondingly higher complexity classes. This exponential scaling might initially seem impractical due to computational demands, but for practical instances of hard problems, as demonstrated in Section 2, it remains entirely feasible. The proof is given in Appendix E.4.
Theorem 3.3 (CoT(exp(n)) = EXP). The class of languages decidable by decoder-only transformer models (satisfying
the architectural assumptions in Assumption 2), when augmented with a Chain of Thought (CoT) of length $t(n) = \exp(n)$ (i.e., $t(n) = O(2^{p(n)})$ for some polynomial $p(n)$ in the input length $n$), is the complexity class EXP.
Similarly, this exponential scaling of reasoning steps also enhances the power of AoT, enabling Transformers to tackle problems within non-deterministic exponential time. The proof is given in Appendix E.5.
Theorem 3.4 (AoT(exp(n)) = NEXP). The class of languages decidable by decoder-only Transformer models (satisfying the architectural assumptions in Assumption 2), when prompted with Algorithm of Thought (AoT) such that the total number of intermediate tokens generated along any single accepting computational path is bounded by an exponential function $a(n) = \exp(n)$ (i.e., $a(n) = O(2^{p_A(n)})$ for some polynomial $p_A(n)$ in the input length $n$), is the complexity class NEXP.
While these theorems establish a clear link between the length of generated thought processes and computational power, it is important to emphasize that the mere quantity of tokens is not the sole determinant. The following theorem highlights the critical role of computationally relevant core reasoning steps:
Theorem 3.5 (Core Reasoning Tokens). For a decoder-only Transformer model (satisfying Assumption 2), the decidable complexity class is predicated on the length of its core computational trace, $k_{\mathrm{core}}(n)$, consisting of only the computationally essential steps; generating additional redundant tokens (increasing the total token count $k_{\mathrm{total}}(n)$ beyond $k_{\mathrm{core}}(n)$) does not elevate this class. Specifically, if $k_{\mathrm{core}}(n) = O(\mathrm{poly}(n))$, the model decides languages within P (if its internal algorithm $A$ is deterministic) or NP (if $A$ is non-deterministic). Consequently, to decide languages in EXP or NEXP, respectively, a core computational trace of $k_{\mathrm{core}}(n) = O(\exp(n))$ is necessary.
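To make the notion of a core computational trace concrete, the following minimal sketch performs a backtracking search for the Vertex Cover task, with comments marking the four algorithmic operations of Definition 1. This is illustrative only: the paper elicits such searches in natural language via AoT prompts, not as executable code.

```python
def vertex_cover_dfs(edges, k):
    """Search for a vertex cover of size <= k via depth-first search
    with pruning, mirroring the operations an AoT trace verbalizes."""
    def solve(remaining, cover):
        if not remaining:            # evaluation: all edges covered, success
            return sorted(cover)
        if len(cover) >= k:          # evaluation: budget exhausted, prune branch
            return None
        u, v = remaining[0]          # expansion: branch on one uncovered edge;
        for pick in (u, v):          # either endpoint must enter the cover
            rest = [e for e in remaining if pick not in e]
            result = solve(rest, cover | {pick})
            if result is not None:
                return result
            # backtracking: this branch failed, try the other endpoint
        return None
    return solve(list(edges), set())  # initialization: start from an empty cover

# Example: on the path graph 1-2-3, the single vertex {2} covers both edges.
print(vertex_cover_dfs([(1, 2), (2, 3)], 1))  # → [2]
```

Every recursive call here is a computationally essential step in the sense of Theorem 3.5; padding the trace with redundant tokens would lengthen it without changing what the search can decide.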
Theorem 3.5 aligns with recent empirical observations that the quality and computational relevance of generated thoughts are more critical than the length of reasoning tokens for effective complex reasoning (Zeng et al., 2025; Li et al., 2025; Wu et al., 2025; Ballon et al., 2025; Su et al., 2025). The proof is given in Appendix E.6.
4 Conclusion
Our results demonstrate that LLMs can significantly outperform previous expectations on challenging tasks, such as NP-hard problems and complex real-world planning, achieving success rates up to 30 times higher through advanced in-context search and test-time scaling techniques. These findings, together with the theoretical evidence, challenge the assumption that LLMs have reached their performance limits without external mechanisms, highlighting the need for improved evaluation methods to fully uncover and assess LLMs' true reasoning capabilities.
Limitations and Future Directions. While this work reveals substantial untapped LLM reasoning potential, future efforts should focus on developing more efficient, automated reasoning strategies and deepening our mechanistic understanding to fully realize these extended capabilities for broader deployment: i) extending the theoretical analysis, which currently emphasizes internal scaling, to comprehensively cover parallel and sequential scaling methods; ii) diversifying the in-context search algorithm examples beyond Depth-First Search to include techniques like Monte Carlo Tree Search and Graph Search; iii) developing more efficient and automated reasoning strategies, for instance to reduce the current reliance on potentially well-defined, hand-crafted prompts or examples; iv) exploring the potential of hybrid test-time scaling approaches, which were not considered in this work; v) investigating how all these strategies interact with varying model architectures and training paradigms, which remains crucial to fully realize and enhance LLM reasoning capabilities for broader deployment.
References
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu,
Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2021. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730–27744, 2022. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937, 2018. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021. Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024. Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. Karthik Valmeekam, Matthew Marquez, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati.
Planbench: An extensible benchmark for evaluating large language models on planning and reasoning about change. Advances in Neural Information Processing Systems, 36:38975–38987, 2023. Chang Yang, Ruiyu Wang, Junzhe Jiang, Qi Jiang, Qinggang Zhang, Yanchen Deng, Shuxin Li, Shuyue Hu, Bo Li, Florian T Pokorny, et al. Nondeterministic polynomial-time problem challenge: An ever-scaling reasoning benchmark for llms. arXiv preprint arXiv:2504.11239, 2025. Huaixiu Steven Zheng, Swaroop Mishra, Hugh Zhang, Xinyun Chen, Minmin Chen, Azade Nova, Le Hou, Heng-Tze Cheng, Quoc V Le, Ed H Chi, et al. Natural plan: Benchmarking llms on natural language planning. arXiv preprint arXiv:2406.04520, 2024. Karthik Valmeekam, Kaya Stechly, and Subbarao Kambhampati. Llms still can't plan; can lrms? a preliminary evaluation of openai's o1 on planbench. arXiv preprint arXiv:2409.13373, 2024a. Karthik Valmeekam, Kaya Stechly, Atharva Gundawar, and Subbarao Kambhampati. Planning in strawberry fields: Evaluating and improving the planning and scheduling capabilities of lrm o1. arXiv preprint arXiv:2410.02162, 2024b. Qiguang Chen, Libo Qin, Jiaqi Wang, Jingxuan Zhou, and Wanxiang Che. Unlocking the capabilities of thought: A reasoning boundary framework to quantify and optimize chain-of-thought. Advances in Neural Information Processing Systems, 37:54872–54904, 2024a. Yiyou Sun, Georgia Zhou, Hao Wang, Dacheng Li, Nouha Dziri, and Dawn Song. Climbing the ladder of reasoning: What llms can-and still can't-solve after sft? arXiv preprint arXiv:2504.11741, 2025. Yaqing Wang, Quanming Yao, James T Kwok, and Lionel M Ni. Generalizing from a few examples:
A survey on few-shot learning. ACM computing surveys (csur) , 53(3):1–34, 2020. Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large language models. arXiv preprint arXiv:2210.03493 , 2022. Jinlan Fu, Shenzhen Huangfu, Hang Yan, See-Kiong Ng, and Xipeng Qiu. Hint-before-solving prompting: Guiding llms to effectively utilize encoded knowledge. arXiv preprint arXiv:2402.14310 , 2024. Yuyao Ge, Shenghua Liu, Yiwei Wang, Lingrui Mei, Lizhe Chen, Baolong Bi, and Xueqi Cheng. Innate reasoning is not enough: In-context learning enhances reasoning large language models with less overthinking. arXiv preprint arXiv:2503.19602 , 2025. Bilgehan Sel, Ruoxi Jia, and Ming Jin. Llms can plan only if we tell them. arXiv preprint arXiv:2501.13545 , 2025. Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314 , 2024. Weizhe Chen, Sven Koenig, and Bistra Dilkina. Reprompt: Planning by automatic prompt engineering for large language models agents. arXiv preprint arXiv:2406.11132 , 2024b. Rishi Hazra, Gabriele Venturato, Pedro Zuidberg Dos Martires, and Luc De Raedt. Have large language models learned to reason? a characterization via 3-sat phase transition. arXiv preprint arXiv:2504.03930 , 2025. Kechen Li, Wenqi Zhu, Coralia Cartis, Tianbo Ji, and Shiwei Liu. Sos1: O1 and r1-like reasoning llms are sum-of-square solvers. arXiv preprint arXiv:2502.20545 , 2025. William Merrill and Ashish Sabharwal. The expressive power of transformers with chain of thought. In The Twelfth International Conference on Learning Representations , 2024. Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Ruoxi Jia, and Ming Jin. Algorithm of thoughts: Enhancing exploration of ideas in large language models. arXiv preprint arXiv:2308.10379 , 2023. QwenLM. Qwen3 technical report, 2025. 
URL https://github.com/QwenLM/Qwen3/blob/main/Qwen3_ Technical_Report.pdf . Accessed: 2025-05-15. Anthropic. Claude 3.7 sonnet: System card, 2025. URL https://www.anthropic.com/ claude-3-7-sonnet-system-card . Accessed: 2025-05-15. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems , 35:24824–24837, 2022. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self- feedback. Advances in Neural Information Processing Systems , 36:46534–46594, 2023. Kuang-Huei Lee, Ian Fischer, Yueh-Hua Wu, Dave Marwood, Shumeet Baluja, Dale Schuurmans, and Xinyun Chen. Evolving deeper llm thinking. arXiv preprint arXiv:2501.09891 , 2025. Lijie Chen, Binghui Peng, and Hongxun Wu. Theoretical limitations of multi-layer transformer. arXiv preprint arXiv:2412.02975 , 2024c. Zhiyuan Li, Hong Liu, Denny Zhou, and Tengyu Ma. Chain of thought empowers transformers to solve inherently serial problems. arXiv preprint arXiv:2402.12875 , 1, 2024. Zhiyuan Zeng, Qinyuan Cheng, Zhangyue Yin, Yunhua Zhou, and Xipeng Qiu. Revisiting the test- time scaling of o1-like models: Do they truly possess test-time scaling capabilities? arXiv preprint arXiv:2502.12215 , 2025. 11 Yuyang Wu, Yifei Wang, Tianqi Du, Stefanie Jegelka, and Yisen Wang. When more is less: Understanding chain-of-thought length in llms. arXiv preprint arXiv:2502.07266 , 2025. Marthe Ballon, Andres Algaba, and Vincent Ginis. The relationship between reasoning and performance in large language models–o3 (mini) thinks harder, not | https://arxiv.org/abs/2505.22290v1 |
longer. arXiv preprint arXiv:2502.15631 , 2025. Jinyan Su, Jennifer Healey, Preslav Nakov, and Claire Cardie. Between underthinking and overthinking: An empirical study of reasoning length and correctness in llms. arXiv preprint arXiv:2505.00127 , 2025. Gabriel Maher. Llmpc: Large language model predictive control. arXiv preprint arXiv:2501.02486 , 2025. Zangir Iklassov, Yali Du, Farkhad Akimov, and Martin Takac. Self-guiding exploration for combinatorial problems. Advances in Neural Information Processing Systems , 37:130569–130601, 2024. Yanan Chen, Ali Pesaranghader, Tanmana Sadhu, and Dong Hoon Yi. Can we rely on llm agents to draft long-horizon plans? let’s take travelplanner as an example. arXiv preprint arXiv:2408.06318 , 2024d. Yilun Hao, Yongchao Chen, Yang Zhang, and Chuchu Fan. Large language models can solve real-world planning rigorously with formal verification tools. arXiv preprint arXiv:2404.11891 , 2024. Jian Xie, Kexun Zhang, Jiangjie Chen, Siyu Yuan, Kai Zhang, Yikai Zhang, Lei Li, and Yanghua Xiao. Revealing the barriers of language agents in planning. arXiv preprint arXiv:2410.12409 , 2024. Zhenting Qi, Hongyin Luo, Xuliang Huang, Zhuokai Zhao, Yibo Jiang, Xiangjun Fan, Himabindu Lakkaraju, and James Glass. Quantifying generalization complexity for large language models. arXiv preprint arXiv:2410.01769 , 2024. Junlin Wang, Shang Zhu, Jon Saad-Falcon, Ben Athiwaratkun, Qingyang Wu, Jue Wang, Shuaiwen Leon Song, Ce Zhang, Bhuwan Dhingra, and James Zou. Think deep, think fast: Investigating efficiency of verifier-free inference-time-scaling methods. arXiv preprint arXiv:2504.14047 , 2025. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Treeofthoughts: Deliberateproblemsolvingwithlargelanguagemodels. Advances in neural information processing systems , 36:11809–11822, 2023a. 
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 38, pages 17682–17690, 2024. Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992 , 2023. Qiyuan Zhang, Fuyuan Lyu, Zexu Sun, Lei Wang, Weixu Zhang, Zhihan Guo, Yufei Wang, Irwin King, Xue Liu, and Chen Ma. What, how, where, and how well? a survey on test-time scaling in large language models. arXiv preprint arXiv:2503.24235 , 2025. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171 , 2022. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR) , 2023b. R. Impagliazzo and R. Paturi. Complexity of k-sat. In Proceedings. Fourteenth Annual IEEE Conference on Computational Complexity (Formerly: Structure in Complexity Theory Conference) (Cat.No.99CB36317) , pages 237–240, 1999. doi: 10.1109/CCC.1999.766282. 12 Appendix A Outline This appendix provides supplementary information and additional experimental results to support the main text. The content is organized into four main parts: Related Work. Appendix B presents a review of existing research configurations for evaluating LLMs’ in-context reasoning, discusses key concepts such as In-Context Search, External Search, | https://arxiv.org/abs/2505.22290v1 |
and Test-Time Scaling, and positions our work within this broader landscape.

Prompt Design. Appendix C presents the specific prompt templates employed for Direct Prompting, Chain of Thought (CoT) Prompting, and Algorithm of Thought (AoT) Prompting, using the Trip Planning task as an illustrative example.

Experimental Results. Appendix D presents supplementary experimental results, including detailed tables (e.g., Table 4, Table 5) and line graphs (e.g., Figure 3), which expand on the findings discussed in the main text.

Theoretical Results. Appendix E illustrates the theoretical foundations of our work, including definitions of Turing Machines, computational complexity classes (such as P, NP, EXP, NEXP, CoT(t(n)), and AoT(a(n))), and Transformer model assumptions, and provides formal proofs for key theorems (Theorems 3.1, 3.2, 3.3, 3.4, and 3.5).

B Related Work

Table 3: Comparison with existing research configurations evaluating LLMs' in-context reasoning; external mechanisms are not considered in this work. Our study is the first to systematically explore the combined potential of Test-Time Scaling and In-Context Search on complex reasoning tasks. (The first three columns fall under Test-Time Scaling; the last three under In-Context Search.)

| Work | Parallel Scaling | Sequential Scaling | Internal Scaling | Direct (Zero-shot/Few-shot/Hints) | CoT (Greedy Search) | AoT (Algorithmic Search) |
|---|---|---|---|---|---|---|
| Snell et al. (2024) | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ |
| Maher (2025) | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ |
| Iklassov et al. (2024) | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ |
| Chen et al. (2024d) | ✗ | ✓ | ✗ | ✓ | ✓ | ✗ |
| Chen et al. (2024a) | ✓ | ✗ | ✓ | ✓ | ✓ | ✗ |
| Lee et al. (2025) | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ |
| Hao et al. (2024) | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ |
| Chen et al. (2024b) | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ |
| Valmeekam et al. (2024a) | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ |
| Xie et al. (2024) | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| Qi et al. (2024) | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ |
| Zheng et al. (2024) | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ |
| Wang et al. (2025) | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ |
| Sun et al. (2025) | ✓ | ✗ | ✓ | ✓ | ✗ | ✗ |
| Ge et al. (2025) | ✗ | ✗ | ✓ | ✓ | ✓ | ✗ |
| Hazra et al. (2025) | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ |
| Sel et al. (2025) | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ |
| Our Work | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

In-Context Search.
In-context search prompting enables LLMs to learn and execute algorithmic behaviors for searching the solution space of complex problems, often within a single query. (i) Direct Prompting: this approach provides the model with a few problem-solution pairs (Wang et al., 2020) without any explicit intermediate reasoning steps in the examples. It relies on the model's internalized reasoning to bridge from problem to solution and generally does not involve deep task decomposition. Variations include strategies such as Automatic Chain-of-Thought (Auto-CoT) prompting (Zhang et al., 2022) and the use of hints before prompting (Fu et al., 2024). (ii) Chain-of-Thought (CoT) Prompting: CoT (Wei et al., 2022) guides LLMs to generate a sequence of intermediate reasoning steps before arriving at a final solution by providing an in-context example that demonstrates a successive, step-by-step solution path (typically representing a greedy search without algorithmic operations such as backtracking). (iii) Algorithm-of-Thought (AoT) Prompting: AoT (Sel et al., 2025) is a more advanced in-context prompting strategy that guides LLMs through explicit algorithmic search pathways. This is achieved by employing detailed examples demonstrating algorithmic operations.

External Search. In contrast to in-context search methods, external search involves an external algorithmic pipeline designed to halt, modify, and then resume the LLM's generation process, often relying on multiple queries or external tools (e.g., verifiers or state evaluators). Representative works include the Tree-of-Thoughts (ToT) (Yao et al., 2023a), Graph-of-Thoughts (GoT) (Besta et al., 2024), and Reasoning-via-Planning (RAP) (Hao et al., 2023) approaches. In this paper, we focus on evaluating and enhancing LLMs' in-context reasoning abilities without the aid of external mechanisms such as additional model training, external reward signals, or explicit process supervision.

Test-Time Scaling. This class of methods aims to boost on-the-fly reasoning capabilities by allocating additional computational resources during inference (Zhang et al., 2025). Major strategies include (i) Parallel Scaling: improves performance by generating multiple outputs in parallel and then aggregating them, as in Best-of-N (BoN) sampling and Self-Consistency (Wang et al., 2022); (ii) Sequential Scaling: an iterative method in which subsequent computations are explicitly directed by the results of intermediate steps, allowing the model to refine solutions step by step, exemplified by Self-Refine (Madaan et al., 2023) and ReAct (Yao et al., 2023b); and (iii) Internal Scaling: the model autonomously determines the computational effort to allocate for reasoning using its internal parameters and a learned policy (typically trained via reinforcement learning (Guo et al., 2025)), enabling more detailed or self-evaluated reasoning chains without external multi-call mechanisms.
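The parallel-scaling strategy above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: `sample_answer` is a hypothetical callable standing in for one stochastic LLM query, and self-consistency reduces to a majority vote over the sampled final answers.

```python
from collections import Counter

def self_consistency(sample_answer, n=5):
    """Parallel scaling sketch: draw n candidate answers, return the majority vote."""
    answers = [sample_answer() for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stub sampler standing in for a stochastic LLM call (hypothetical).
samples = iter(["16 days", "15 days", "16 days", "16 days", "17 days"])
answer = self_consistency(lambda: next(samples), n=5)
print(answer)  # the majority answer, "16 days"
```

Best-of-N differs only in the aggregation step: instead of voting, candidates are scored by a verifier or reward model and the highest-scoring one is kept.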
C Prompt Design

This section shows the different prompting methodologies using the Trip Planning task from (Zheng et al., 2024) as an illustrative example, detailing the specific templates employed for Direct Prompting, CoT Prompting, and AoT Prompting.

Direct Prompting

Problem Description: You are an expert at planning trips. You are given a few constraints regarding the cities to visit and the durations of staying at each city. You are also given the flight information between the cities.

5-Shot In-Context Examples:
### Task1 ###
You plan to visit 6 European cities for 16 days in total. You only take direct flights to commute between cities. You plan to stay in Riga for 3 days. You plan to stay in Edinburgh for 4 days. You want to spend 2 days in Milan. You are going to attend a wedding in Milan between day 4 and day 5. You plan to stay in Copenhagen for 4 days. You plan to visit relatives in Copenhagen between day 5 and day 8. You would like to visit Vilnius for 5 days. You have to attend a workshop in Vilnius between day 10 and day 14. You plan to stay in Brussels for 3 days.\n\nHere are the cities that have direct flights:\nEdinburgh and Copenhagen, Vilnius and Brussels, Copenhagen and Riga, Milan and Vilnius, Milan and Brussels, Edinburgh and Milan, Edinburgh and Riga, Edinburgh and Brussels, Milan and Copenhagen, Copenhagen and Brussels, from Riga to Vilnius, Copenhagen and Vilnius, Milan and Riga, Riga and Brussels.\n\nFind a trip plan of visiting the cities for 16 days by taking direct flights to commute between them.

### Solution1 ###
Here is the trip plan for visiting the 6 European cities for 16 days:\n\n**Day 1-4:** Arriving in Edinburgh and visit Edinburgh for 4 days.\n**Day 4:** Fly from Edinburgh to Milan.\n**Day 4-5:** Visit Milan for 2 days.\n**Day 5:** Fly from Milan to Copenhagen.\n**Day 5-8:** Visit Copenhagen for 4 days.\n**Day 8:** Fly from Copenhagen to Riga.\n**Day 8-10:** Visit Riga for 3 days.\n**Day 10:** Fly from Riga to Vilnius.\n**Day 10-14:** Visit Vilnius for 5 days.\n**Day 14:** Fly from Vilnius to Brussels.\n**Day 14-16:** Visit Brussels for 3 days.

...... (4 more examples)

Question:
### Target Question ###
You plan to visit 10 European cities for 25 days in total. You only take direct flights to commute between cities. You plan to stay in Berlin for 4 days. You have to attend a workshop in Berlin between day 12 and day 15. You would like to visit Prague for 2 days. You plan to stay in Stuttgart for 5 days. You would like to visit Manchester for 3 days. You want to spend 2 days in Nice. You would like to visit Reykjavik for 2 days. You would like to visit Florence for 3 days. You plan to stay in Vilnius for 5 days. You would like to meet your friends at Vilnius between day 15 and day 19 to tour together. You plan to stay in Oslo for 4 days. You would like to visit Dubrovnik for 4 days. You plan to visit relatives in Dubrovnik between day 1 and day 4.\n\nHere are the cities that have direct flights:\nfrom Reykjavik to Stuttgart, Manchester and Stuttgart, Nice and Berlin, Oslo and Prague, Stuttgart and Berlin, Manchester and Nice, Reykjavik and Oslo, Reykjavik and Prague, Manchester and Prague, Reykjavik and Berlin, Dubrovnik and Manchester, Manchester and Oslo, Manchester and Berlin, Prague and Florence, Berlin and Vilnius, Dubrovnik and Oslo, Nice and Oslo, Berlin and Oslo, Nice and Reykjavik, Vilnius and Oslo.\n\nFind a trip plan of visiting the cities for 25 days by taking direct flights to commute between them.
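A correct answer to such a task can be checked and even found mechanically. The sketch below (city names, stays, windows, and flights transcribed from the worked 6-city example above; the brute-force enumeration strategy is our own, not the paper's method) searches visit orders and validates the adjacency-overlap, fixed-window, and direct-flight constraints:

```python
from itertools import permutations

# Transcribed from the 6-city, 16-day in-context example.
stays = {"Edinburgh": 4, "Milan": 2, "Copenhagen": 4, "Riga": 3,
         "Vilnius": 5, "Brussels": 3}
windows = {"Milan": (4, 5), "Copenhagen": (5, 8), "Vilnius": (10, 14)}
undirected = [("Edinburgh", "Copenhagen"), ("Vilnius", "Brussels"),
              ("Copenhagen", "Riga"), ("Milan", "Vilnius"), ("Milan", "Brussels"),
              ("Edinburgh", "Milan"), ("Edinburgh", "Riga"), ("Edinburgh", "Brussels"),
              ("Milan", "Copenhagen"), ("Copenhagen", "Brussels"),
              ("Copenhagen", "Vilnius"), ("Milan", "Riga"), ("Riga", "Brussels")]
flights = {(a, b) for a, b in undirected} | {(b, a) for a, b in undirected}
flights.add(("Riga", "Vilnius"))  # "from Riga to Vilnius" is one-way

def schedule(order, total_days=16):
    """Return day intervals if the visit order satisfies all constraints, else None."""
    day, plan = 1, []
    for prev, city in zip((None,) + order, order):
        if prev is not None and (prev, city) not in flights:
            return None  # no direct flight between consecutive cities
        start, end = day, day + stays[city] - 1  # last day overlaps the next city
        lo, hi = windows.get(city, (start, end))
        if not (start <= lo and hi <= end):
            return None  # fixed-day window not covered
        plan.append((city, start, end))
        day = end
    return plan if day == total_days else None

solutions = [p for p in map(schedule, permutations(stays)) if p]
print(solutions)
```

Running the search recovers exactly one valid plan, the Edinburgh → Milan → Copenhagen → Riga → Vilnius → Brussels itinerary given in Solution1, confirming the example's claim that the solution path is unique.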
Chain of Thought Prompting

Problem Description: You are an expert at planning trips. You are given a few constraints regarding the cities to visit and the durations of staying at each city. You are also given the flight information between the cities.

Greedy Search Thinking Process:
### Objective ###
Plan a **16-day** trip that visits the **six** European cities below, using only the direct flights provided.
### Constraints ###
1. **Adjacency overlap** The last day of city *i* is also the first day of city *i + 1*.
2. **Stay-length & fixed-window requirements**
City: Riga | Required stay: 3 days | Fixed-day window: –
City: Edinburgh | Required stay: 4 days | Fixed-day window: –
City: Milan | Required stay: 2 days | Fixed-day window: **must cover Day 4–5** (wedding)
City: Copenhagen | Required stay: 4 days | Fixed-day window: **must cover Day 5–8** (visit relatives)
City: Vilnius | Required stay: 5 days | Fixed-day window: **must cover Day 10–14** (workshop)
City: Brussels | Required stay: 3 days | Fixed-day window: –
The sum of **independent days** (Σ stay − overlaps) must equal **16** exactly.
3. **Flights requirements** A direct flight exists **only** when explicitly listed:
Riga→Vilnius, Riga→Edinburgh, Riga→Milan, Riga→Copenhagen, Riga→Brussels
Edinburgh→Copenhagen, Edinburgh→Milan, Edinburgh→Riga, Edinburgh→Brussels
Milan→Copenhagen, Milan→Vilnius, Milan→Brussels, Milan→Riga
Copenhagen→Edinburgh, Copenhagen→Riga, Copenhagen→Vilnius, Copenhagen→Milan, Copenhagen→Brussels
Vilnius→Brussels, Vilnius→Copenhagen, Vilnius→Milan
Brussels→Vilnius, Brussels→Copenhagen, Brussels→Milan, Brussels→Edinburgh, Brussels→Riga
### Solution Thinking Process ###
1. State definition
path: ordered list of visited cities
UD: current independent-day total
start: calendar start-day of the last city
used: set of visited cities
2. Initialization
Initial state: (path=[Edinburgh], UD=4, start=1) # Edinburgh spans Day 1-4
3. Greedy Search
Step G0 | Transition tried: **Start Edinburgh** | Calendar preview & test: Edinburgh Day 1-4 (no window) | Outcome: keep
Step G1 | Transition tried: Edinburgh→**Milan** | Calendar preview & test: Milan Day 4–5 covers wedding (Day 4–5) | Outcome: keep, UD = 5
Step G2 | Transition tried: Milan→**Copenhagen** | Calendar preview & test: Copenhagen Day 5–8 covers relatives (Day 5–8) | Outcome: keep, UD = 8
Step G3 | Transition tried: Copenhagen→**Riga** | Calendar preview & test: Riga Day 8–10 (no hard window broken) | Outcome: keep, UD = 10
Step G4 | Transition tried: Riga→**Vilnius** | Calendar preview & test: Vilnius Day 10–14 covers workshop (Day 10–14) | Outcome: keep, UD = 14
Step G5 | Transition tried: Vilnius→**Brussels** | Calendar preview & test: Brussels Day 14–16 (all constraints now met) | Outcome: **Success**, UD = 16
4. Unique solution path found
Edinburgh→Milan→Copenhagen→Riga→Vilnius→Brussels
5. Output Format
Day 1–4 Edinburgh
Day 4–5 Milan (wedding)
Day 5–8 Copenhagen (visit relatives)
Day 8–10 Riga
Day 10–14 Vilnius (workshop)
Day 14–16 Brussels

Question:
### Target Question ###
You plan to visit 10 European cities for 25 days in total.
You only take direct flights to commute between cities. You plan to stay in Berlin for 4 days. You have to attend a workshop in Berlin between day 12 and day 15. You would like to visit Prague for 2 days. You plan to stay in Stuttgart for 5 days. You would like to visit Manchester for 3 days. You want to spend 2 days in Nice. You would like to visit Reykjavik for 2 days. You would like to visit Florence for 3 days. You plan to stay in Vilnius for 5 days. You would like to meet your friends at Vilnius between day 15 and day 19 to tour together. You plan to stay in Oslo for 4 days. You would like to visit Dubrovnik for 4 days. You plan to visit relatives in Dubrovnik between day 1 and day 4.\n\nHere are the cities that have direct flights:\nfrom Reykjavik to Stuttgart, Manchester and Stuttgart, Nice and Berlin, Oslo and Prague, Stuttgart and Berlin, Manchester and Nice, Reykjavik and Oslo, Reykjavik and Prague, Manchester and Prague, Reykjavik and Berlin, Dubrovnik and Manchester, Manchester and Oslo, Manchester and Berlin, Prague and Florence, Berlin and Vilnius, Dubrovnik and Oslo, Nice and Oslo, Berlin and Oslo, Nice and Reykjavik, Vilnius and Oslo.\n\nFind a trip plan of visiting the cities for 25 days by taking direct flights to commute between | https://arxiv.org/abs/2505.22290v1 |
them.

Algorithm of Thought Prompting

Problem Description: You are an expert at planning trips. You are given a few constraints regarding the cities to visit and the durations of staying at each city. You are also given the flight information between the cities.

Depth-First Search Thinking Process:
### Objective ###
Plan a **16-day** trip that visits the **six** European cities below, using only the direct flights provided.
### Constraints ###
1. **Adjacency overlap** The last day of city *i* is also the first day of city *i + 1*.
2. **Stay-length & fixed-window requirements**
City: Riga | Required stay: 3 days | Fixed-day window: –
City: Edinburgh | Required stay: 4 days | Fixed-day window: –
City: Milan | Required stay: 2 days | Fixed-day window: **must cover Day 4–5** (wedding)
City: Copenhagen | Required stay: 4 days | Fixed-day window: **must cover Day 5–8** (visit relatives)
City: Vilnius | Required stay: 5 days | Fixed-day window: **must cover Day 10–14** (workshop)
City: Brussels | Required stay: 3 days | Fixed-day window: –
The sum of **independent days** (Σ stay − overlaps) must equal **16** exactly.
3. **Flights requirements** A direct flight exists **only** when explicitly listed:
Riga→Vilnius, Riga→Edinburgh, Riga→Milan, Riga→Copenhagen, Riga→Brussels
Edinburgh→Copenhagen, Edinburgh→Milan, Edinburgh→Riga, Edinburgh→Brussels
Milan→Copenhagen, Milan→Vilnius, Milan→Brussels, Milan→Riga
Copenhagen→Edinburgh, Copenhagen→Riga, Copenhagen→Vilnius, Copenhagen→Milan, Copenhagen→Brussels
Vilnius→Brussels, Vilnius→Copenhagen, Vilnius→Milan
Brussels→Vilnius, Brussels→Copenhagen, Brussels→Milan, Brussels→Edinburgh, Brussels→Riga
### Solution Thinking Process ###
1. State definition
path: ordered list of visited cities
UD: current independent-day total
start: calendar start-day of the last city
used: set of visited cities
2. Initialization
Pick an initial state: (path=[Riga], UD=3, start=1) # Riga spans Day 1-3
3. Depth-First Search with pruning
3-A. Riga-rooted subtree (cut in one shot)
Step A | Transition tried: Riga→**Milan** | Calendar preview & test: Milan Day 3-4 (wedding Day 5 missing) | Outcome: **Prune (window)**
Step B | Transition tried: Riga→**Edinburgh** | Calendar preview & test: Edinburgh Day 3-6 → Milan can't own Day 4-5 | Outcome: **Prune**
Step C | Transition tried: Riga→**Brussels** | Calendar preview & test: Brussels Day 3-5 occupies Day 4-5 | Outcome: **Prune**
Step D | Transition tried: Riga→**Copenhagen** | Calendar preview & test: Copenhagen Day 3-6 (relatives window broken; Milan missing) | Outcome: **Prune**
Step E | Transition tried: Riga→**Vilnius** | Calendar preview & test: Vilnius Day 3-7 (workshop window broken) | Outcome: **Prune**
No child of the Riga root survives, so the algorithm back-tracks to choose a new start city.
3-B. Edinburgh-rooted search (full trace)
Step S0 | Transition tried: **Start Edinburgh** | Calendar preview & test: Edinburgh Day 1-4 | Outcome: keep
Step A | Transition tried: Edinburgh→**Copenhagen** | Calendar preview & test: Copenhagen Day 4-7; relatives window 5-8 not fully covered, wedding lost | Outcome: **Prune (windows)**
Step B | Transition tried: Edinburgh→**Brussels** | Calendar preview & test: Brussels Day 4-6; Milan cannot cover Day 4-5 | Outcome: **Prune (window)**
Step C | Transition tried: Edinburgh→**Milan** | Calendar preview & test: Milan Day 4–5 covers wedding → UD = 5 | Outcome: keep
Step C1 | Transition tried: ...→**Brussels** | Calendar preview & test: Brussels Day 5-7; relatives window lost | Outcome: **Prune**
Step C2 | Transition tried: ...→**Riga** | Calendar preview & test: Riga Day 5-7; relatives window lost | Outcome: **Prune**
Step C3 | Transition tried: ...→**Vilnius** | Calendar preview & test: Vilnius Day 5-9; relatives & workshop windows broken | Outcome: **Prune**
Step C4 | Transition tried: ...→**Copenhagen** | Calendar preview & test: Copenhagen Day 5–8 covers relatives → UD = 8 | Outcome: keep
Step C4a | Transition tried: ...→**Brussels** | Calendar preview & test: Brussels Day 8-10; workshop window still unmet | Outcome: **Prune**
Step C4b | Transition tried: ...→**Vilnius** | Calendar preview & test: Vilnius Day 8-12; workshop window 10-14 not fully covered | Outcome: **Prune**
Step C4c | Transition tried: ...→**Riga** | Calendar preview & test: Riga Day 8–10 → UD = 10 | Outcome: keep
Step C4c1 | Transition tried: ...→**Brussels** | Calendar preview & test: Brussels Day 10-12; workshop window unmet | Outcome: **Prune**
Step C4c2 | Transition tried: ...→**Edinburgh** | Calendar preview & test: revisit city | Outcome: **Prune (visited)**
Step C4c3 | Transition tried: ...→**Milan** | Calendar preview & test: revisit city | Outcome: **Prune (visited)**
Step C4c4 | Transition tried: ...→**Vilnius** | Calendar preview & test: Vilnius Day 10–14 covers workshop → UD = 14 | Outcome: keep
Step C4c4a | Transition tried: ...→**Brussels** | Calendar preview & test: Brussels Day 14–16 → UD = 16 (check) | Outcome: **Success**
4. Unique solution path found
Edinburgh→Milan→Copenhagen→Riga→Vilnius→Brussels
5. Output Format
Day 1–4 Edinburgh
Day 4–5 Milan (wedding)
Day 5–8 Copenhagen (visit relatives)
Day 8–10 Riga
Day 10–14 Vilnius (workshop)
Day 14–16 Brussels

Question:
### Target Question ###
You plan to visit 10 European cities for 25 days in total. You only take direct flights to commute between cities. You plan to stay in Berlin for 4 days. You have to attend a workshop in Berlin between day 12 and day 15. You would like to visit Prague for 2 days. You plan to stay in Stuttgart for 5 days. You would like to visit Manchester for 3 days.
You want to spend 2 days in Nice. You would like to visit Reykjavik for 2 days. You would like to visit Florence for 3 days. You plan to stay in Vilnius for 5 days. You would like to meet your friends at Vilnius between day 15 and day 19 to tour together. You plan to stay in Oslo for 4 days. You would like to visit Dubrovnik for 4 days. You plan to visit relatives in Dubrovnik between day 1 and day 4.\n\nHere are the cities that have direct flights:\nfrom Reykjavik to Stuttgart, Manchester and Stuttgart, Nice and Berlin, Oslo and Prague, Stuttgart and Berlin, Manchester and Nice, Reykjavik and Oslo, Reykjavik and Prague, Manchester and Prague, Reykjavik and Berlin, Dubrovnik and Manchester, Manchester and Oslo, Manchester and Berlin, Prague and Florence, Berlin and Vilnius, Dubrovnik and Oslo, Nice and Oslo, Berlin and Oslo, Nice and Reykjavik, Vilnius and Oslo.\n\nFind a trip plan of visiting the cities for 25 days by taking direct flights to commute between them.

D Experimental Results

This section provides additional experimental results for four tasks: Vertex Cover and 3-Dimensional Matching (3DM) from the controlled NP-Hard problems category (Yang et al., 2025), and Trip Planning and Meeting Planning from
the complex real-world planning tasks (Zheng et al., 2024), supplementing those presented in the main body of the paper and featuring detailed tables and line graphs.

D.1 Tables

Table 4: Performance on 3-Dimensional Matching (3DM) (Difficulty Level = 10) for the controlled NP-hard task. Qwen3 showed almost no improvement across all methods, possibly because it struggles with complex numerical abstract reasoning.

| Model | Evaluation Strategy | Direct Prompting | Greedy Search (CoT) | Depth-First Search (AoT) |
|---|---|---|---|---|
| Qwen3 | No Scaling (Base Model) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| Qwen3 | Parallel Scaling (Best-of-N) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| Qwen3 | Sequential Scaling (Self-Refine) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| Qwen3 | Internal Scaling (Thinking Mode) | 0/100 = 0% | 1/100 = 1% | 1/100 = 1% |
| Claude 3.7 | No Scaling (Base Model) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| Claude 3.7 | Parallel Scaling (Best-of-N) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| Claude 3.7 | Sequential Scaling (Self-Refine) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| Claude 3.7 | Internal Scaling (Thinking Mode) | 1/100 = 1% | 2/100 = 2% | 15/100 = 15% |

Table 5: Performance on Meeting Planning (Difficulty Level = 10) for complex real-world planning. When the complexity of numerical input in language-based planning is disentangled, Qwen3 improved.

| Model | Evaluation Strategy | Direct Prompting | Greedy Search (CoT) | Depth-First Search (AoT) |
|---|---|---|---|---|
| Qwen3 | No Scaling (Base Model) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| Qwen3 | Parallel Scaling (Best-of-N) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| Qwen3 | Sequential Scaling (Self-Refine) | 0/100 = 0% | 0/100 = 0% | 1/100 = 1% |
| Qwen3 | Internal Scaling (Thinking Mode) | 0/100 = 0% | 1/100 = 1% | 8/100 = 8% |
| Claude 3.7 | No Scaling (Base Model) | 0/100 = 0% | 0/100 = 0% | 0/100 = 0% |
| Claude 3.7 | Parallel Scaling (Best-of-N) | 0/100 = 0% | 1/100 = 1% | 1/100 = 1% |
| Claude 3.7 | Sequential Scaling (Self-Refine) | 0/100 = 0% | 1/100 = 1% | 1/100 = 1% |
| Claude 3.7 | Internal Scaling (Thinking Mode) | 1/100 = 1% | 8/100 = 8% | 20/100 = 20% |

D.2 Line Graphs

Figure 3: Success rates (%) of the Qwen3 and Claude 3.7 models across various configurations for four challenging tasks: Vertex Cover, 3-Dimensional Matching (3DM), Trip Planning, and Meeting Planning. (Four panels: Vertex Cover; 3-Dimensional Matching (3DM); Trip Planning; Meeting Planning.)

E Theoretical Results

E.1 Preliminaries

In this section, we provide theoretical foundations for Turing machines, computational complexity classes, and decoder-only Transformers.

Definition 3 (Turing Machine). A (general) Turing machine $M$ is defined as a tuple $M = (Q, \Sigma, \Gamma, \delta, q_0, q_{\text{accept}}, q_{\text{reject}})$, where

- $Q$ is a finite set of states.
- $\Sigma$ is a finite input alphabet, not containing the blank symbol $\sqcup$.
- $\Gamma$ is a finite tape alphabet with $\Sigma \subset \Gamma$ and $\sqcup \in \Gamma$.
- $q_0 \in Q$ is the start state; $q_{\text{accept}}, q_{\text{reject}} \in Q$ are the unique halting states, $q_{\text{accept}} \neq q_{\text{reject}}$.
- $\delta$ is the transition function, which differs for deterministic vs. nondeterministic machines:
  if $M$ is deterministic: $\delta : Q \times \Gamma \to Q \times \Gamma \times \{L, R\}$,
  if $M$ is nondeterministic: $\delta : Q \times \Gamma \to \mathcal{P}(Q \times \Gamma \times \{L, R\})$.
  $L$ signifies a left movement and $R$ a right movement of the tape head. In the deterministic case, exactly one action is specified for each $(q, a)$. In the nondeterministic case, $\delta(q, a)$ is a finite set of possible moves.

On input $w \in \Sigma^*$, the machine initially places $w$ on the tape, with the head at the leftmost cell and state $q_0$.
It then follows transitions until it reaches either $q_{\text{accept}}$ (accept) or $q_{\text{reject}}$ (reject).

- A Deterministic Turing Machine (DTM) $M$ decides a language $L \subseteq \Sigma^*$ if on every input $w$, it halts in either $q_{\text{accept}}$ or $q_{\text{reject}}$, and accepts exactly those $w \in L$.
- A Non-deterministic Turing Machine (NTM) $M$ decides $L$ if on every $w \in L$ there is at least one accepting branch (reaching $q_{\text{accept}}$), and on every $w \notin L$ no branch accepts.
- The time complexity of $M$, denoted $T_M(n)$, is the maximum number of steps (over all computation paths, in the nondeterministic case) on inputs of length $n$.

Definition 4 (Deterministic and Nondeterministic Time Classes). Let $t : \mathbb{N} \to \mathbb{N}$ be a time-constructible function.
$$\mathrm{TIME}(t(n)) = \{ L \mid \exists \text{ DTM } M \text{ that decides } L \text{ in } O(t(n)) \text{ time} \}.$$
A language $L$ is in $\mathrm{TIME}(t(n))$ precisely when there is a deterministic Turing machine that, on every input $w$ of length $n$, halts within $O(t(n))$ steps and correctly decides whether $w \in L$. In other words, $\mathrm{TIME}(t(n))$ captures exactly those problems solvable by a DTM within the specified time bound.
$$\mathrm{NTIME}(t(n)) = \{ L \mid \exists \text{ NTM } M \text{ that decides } L \text{ in } O(t(n)) \text{ time} \}.$$
Equivalently, $L \in \mathrm{NTIME}(t(n))$ if there is a nondeterministic Turing machine which, on each branch and for each input of length $n$, halts within $O(t(n))$ steps and accepts exactly the strings in $L$. One can also view this via a polynomial-time verifier: there is a deterministic machine $V$ and for every $w \in L$ a "witness" $u$ of length $O(t(n))$ such that $V(\langle w, u \rangle)$ accepts in $O(t(n))$ time.

Definition 5 (Polynomial-Time and Exponential-Time Classes). Using the above classes, we define the most common time-based complexity classes:
$$\mathrm{P} = \bigcup_{k \in \mathbb{N}} \mathrm{TIME}(n^k).$$
A language $L$ is in $\mathrm{P}$ if and only if it can be decided by some deterministic Turing machine in time polynomial in the input length. Equivalently, there exists $k$ such that on every input of length $n$, the machine halts in at most $O(n^k)$ steps.
$$\mathrm{NP} = \bigcup_{k \in \mathbb{N}} \mathrm{NTIME}(n^k).$$
A language $L$ is in $\mathrm{NP}$ if there is a nondeterministic machine running in polynomial time that accepts exactly the strings in $L$. Equivalently, membership in $L$ can be certified by a polynomial-length witness which a polynomial-time deterministic verifier checks.
$$\mathrm{EXP} = \bigcup_{k \in \mathbb{N}} \mathrm{TIME}\!\left(2^{n^k}\right).$$
The class $\mathrm{EXP}$ contains those problems solvable by a deterministic Turing machine in time bounded by some exponential function $2^{n^k}$. It strictly generalizes $\mathrm{P}$ by allowing much larger time bounds.
$$\mathrm{NEXP} = \bigcup_{k \in \mathbb{N}} \mathrm{NTIME}\!\left(2^{n^k}\right).$$
Similarly, $\mathrm{NEXP}$ consists of languages decidable by a nondeterministic Turing machine in exponential time, or equivalently those with exponentially long witnesses checkable in exponential time.

Definition 6 (Polynomial-Time Reducibility ($\leq_P$)). A language $L_1$ is said to be polynomial-time reducible to a language $L_2$, denoted $L_1 \leq_P L_2$, if there exists a function $f : \Sigma^* \to \Sigma^*$, computable by a DTM in polynomial time, such that for every $w \in \Sigma^*$:
$$w \in L_1 \iff f(w) \in L_2.$$

Definition 7 (The Complexity Class NP-Hard). A problem (or language) $H$ is classified as NP-Hard if, for every language $L' \in \mathrm{NP}$, $L' \leq_P H$. An NP-Hard problem is at least as difficult as any problem in $\mathrm{NP}$. If an NP-Hard problem $H$ is also a member of $\mathrm{NP}$, then $H$ is termed NP-Complete.

Definition 8 (The Complexity Class $\mathrm{CoT}(t(n))$). Following (Merrill and Sabharwal, 2024), we denote by $\mathrm{CoT}(t(n))$ the set of languages $L$ such that there is some transformer $f$ that recognizes $L$ in $t(n)$ decoding steps.

Definition 9 (The Complexity Class $\mathrm{AoT}(a(n))$). Following (Sel et al., 2023), we denote by $\mathrm{AoT}(a(n))$ the set of languages $L$ such that there is some transformer $f$ and $b$
>1, so that fthat recognizes L along some successful computational trace of a(n)decoding steps, using O ba(n) intermediate tokens to generate a tree of guesses (e.g. when using DFS). Remark 1 (On the Total Computational Effort for AoT) .It is important to distinguish between the length of the successful computational trace as defined in AoT(a(n))and the total number of intermediate tokens that an AoT-prompted Transformer might generate when engaged in an exhaustive search strategy to guarantee finding such a trace for a challenging problem instance (e.g., an NP-hard problem). •For a language L∈NP, while Theorem 3.2 states that an accepting AoT trace will have a(n) =poly(n) tokens, a Transformer prompted to exhaustively search for this trace (e.g., by systematically exploring all possible certificates up to a certain polynomial length) might generate a total number of intermediate tokens that is exponential in n, potentially O(exp(poly(n))), before the correct polynomial-length accepting trace is found or all possibilities are exhausted. This reflects the inherent difficulty of search in NPproblems under the exponential time hypothesis (Impagliazzo and Paturi, 1999). The AoT(poly(n))classification is based on the length of the specific path, not the effort to find it. 24 •ForAoT(exp(n)), where an accepting trace itself could be exponentially long (e.g., for problems in NEXPwhere certificates can be exponentially long), a Transformer prompted to guarantee finding such an exponentially long solution trace through an exhaustive search might generate a total number of intermediate tokens that is doubly exponential in n, potentially on the order of O(exp(exp(poly(n)))). This is because the search space could be exponential in the length of the already exponentially long certificate. 
While generating token sequences representing such exhaustive searches would clearly be computationally infeasible, the empirical results presented in 2 suggest that, in practice, LLMs tend to produce good guesses sufficiently early in their search to find solutions before exhausting compute limitations in many instances.

Assumption 1 ($\mathrm{P} \neq \mathrm{NP}$, $\mathrm{EXP} \neq \mathrm{NEXP}$). Throughout this paper, we adopt the standard and widely accepted conjectures that $\mathrm{P} \neq \mathrm{NP}$ and $\mathrm{EXP} \neq \mathrm{NEXP}$.

Assumption 2 (Decoder-Only Transformers). We consider transformers with a decoder-only architecture operating under several key assumptions, following (Chen et al., 2024c; Merrill and Sabharwal, 2024):

• Constant Depth: The number of Transformer layers, $L_{\mathrm{depth}}$, is a constant, independent of the input length $n$: $L_{\mathrm{depth}} = O(1)$. This implies that the intrinsic sequential processing capability of the base Transformer (without CoT) does not scale with $n$.

• Logarithmic Precision for Parameters and Activations: The model operates with finite numerical precision for its weights $W$ and activations $A$. This precision, $P_{\mathrm{bits}}$, is logarithmic with respect to the combined length of the input $n$ and the Chain of Thought $t(n)$: $P_{\mathrm{bits}}(W, A) = O(\log(n + t(n)))$. This constraint is important for simulating Transformers on Turing machines efficiently.

• Logarithmic Embedding Dimension: The dimensionality of token embeddings, $d_{\mathrm{emb}}$, is logarithmic with respect to the input length $n$: $d_{\mathrm{emb}} = O(\log n)$. This keeps the model size relatively small in terms of its width.

• Strict Causal Masking: The self-attention mechanism ensures that the computation for a token at sequence position $i$ can only attend to tokens at positions $j$ such that $j < i$. If $M_{\mathrm{mask}}$ is the attention mask, then for attention scores $\alpha_{i,j}$: $\alpha_{i,j} \neq 0 \implies j < i$. This is inherent to autoregressive generation
in decoder-only models.

• Advanced Normalization Schemes: The architecture is assumed to employ specific layer normalization variants, such as projected pre-norm (where a learnable linear projection $P_{\mathrm{proj}}$ is applied to the input $h_{\mathrm{sub}}$ of a sublayer before layer normalization LN) or multi-pre-norm: $\mathrm{Output}_{\mathrm{sub}} = \mathrm{Sublayer}(\mathrm{LN}(P_{\mathrm{proj}}(h_{\mathrm{sub}})))$ (illustrative for projected pre-norm). These are noted as important for enabling certain constructive proofs.

• Saturated Attention: For every attention head, the score matrix $\alpha \in [0,1]^{n \times n}$ satisfies $\alpha_{i,j} \in \{0, \tfrac{1}{k}\}$ for some $k \in \{1, \dots, n\}$, with $\sum_{j=1}^{n} \alpha_{i,j} = 1$ for all $i$. Thus each token $i$ either ignores a position ($\alpha_{i,j} = 0$) or attends with equal weight $1/k$ to exactly $k$ positions. Uniform attention ($k = n$) and hard attention ($k = 1$) are special cases. This "averaging hard attention" regime yields deterministic routing that is convenient for expressivity analyses.

• Projected Pre-Norm: Let $v \in \mathbb{R}^m$ be the sub-layer input and $M \in \mathbb{R}^{m \times m}$ a learnable projection: $\mathrm{proj\_layer\_norm}(v; M) := \mathrm{layer\_norm}(Mv)$. By projecting first and normalizing afterward, a layer can isolate and renormalize any linear subspace of its hidden state, effectively simulating multiple independent pre-norm paths with modest extra depth.

• Layer-Norm Hash: For scalars $x, y \in \mathbb{R}$ define $\phi(x, y) := \mathrm{layer\_norm}([x, y, -x, -y]) \in \mathbb{R}^4$. Key properties used throughout the lower-bound constructions: (a) Scale-invariance: $\phi(x/i, 1/i) = \phi(x, 1)$ for any $i > 0$. (b) Exact-match test: $\phi(x, 1) \cdot \phi(k, 1) = 1 \iff x = k$. These allow attention to perform reliable equality checks and value retrieval across positions, even when earlier averaging operations have rescaled the stored pairs.

E.2 Proof of Theorem 3.1: $\mathrm{CoT}(\mathrm{poly}(n)) = \mathrm{P}$ (Merrill and Sabharwal, 2024)

Proof. The proof is demonstrated in two parts:

Part I: $\mathrm{P} \subseteq \mathrm{CoT}(\mathrm{poly}(n))$

This inclusion is established by demonstrating that a CoT-augmented Transformer, under specific architectural assumptions, can simulate any DTM operating within a polynomial time bound.
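Properties (a) and (b) of the layer-norm hash can be checked numerically. The sketch below assumes a parameter-free layer norm (mean-centering followed by scaling to unit norm; learnable gain and bias are omitted), which suffices because $[x, y, -x, -y]$ is already mean-zero:

```python
import numpy as np

def layer_norm(v):
    # Simplified, parameter-free layer norm: mean-center, then
    # scale to unit Euclidean norm (gain and bias omitted).
    c = v - v.mean()
    return c / np.linalg.norm(c)

def phi(x, y):
    # Layer-norm hash: phi(x, y) = layer_norm([x, y, -x, -y]).
    return layer_norm(np.array([x, y, -x, -y], dtype=float))

# (a) Scale-invariance: phi(x/i, 1/i) == phi(x, 1) for any i > 0.
assert np.allclose(phi(3 / 7, 1 / 7), phi(3, 1))

# (b) Exact-match test: phi(x,1) . phi(k,1) = 1 iff x == k
# (unit vectors, Cauchy-Schwarz equality iff parallel).
assert abs(np.dot(phi(3, 1), phi(3, 1)) - 1.0) < 1e-12
assert np.dot(phi(3, 1), phi(4, 1)) < 1.0 - 1e-6
```

Property (a) is why the hash survives the positional rescaling introduced by averaging attention: both coordinates are divided by the same position index $i$, which the normalization cancels.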
Theorem 2 in Merrill and Sabharwal (2024): Let $M$ be a DTM that, on inputs of length $1 + n$, runs for at most $t(n)$ steps (at most polynomial). There is a decoder-only projected pre-norm transformer with strict causal saturated attention (with or without positional encodings, as stated in the saturated-attention assumption of Assumption 2) that, on input $x$, takes $t(n)$ decoding steps and then, with $|M(x)|$ additional steps, outputs $M(x)$.

Corresponding Proof. The proof constructs a Transformer that simulates the DTM $M$ by dedicating each decoding step to a single computational step of $M$. The Transformer's sequence of generated CoT tokens serves to encode the evolving configuration of $M$. This configuration includes $M$'s current state, the symbols on its tapes, and the positions of its tape heads. Specifically, at each step $j$ of the simulation (corresponding to the $j$-th CoT token), the Transformer may store a "diff" representing the changes to $M$'s tape and state from step $j - 1$ to $j$. A crucial component of this simulation is the "layer-norm hash" mechanism. This mechanism allows the Transformer, at any step $i$, to retrieve information corresponding to specific tape cells or past DTM states from the previously generated CoT sequence (tokens $1, \dots, i - 1$). This is achieved under the assumption of strict causal masking in Assumption 2, effectively simulating random access to $M$'s tape. For instance, to determine the symbol $\gamma^{\tau}_i$ on tape $\tau$ at head position
$h^{\tau}_i$ during the $i$-th CoT step, the Transformer attends to the CoT history. The layer-norm hash $\phi(h^{\tau}_i / i, 1/i) = \phi(h^{\tau}_i, 1)$ is used. An attention query, such as $\langle \phi(h^{\tau}_i, 1), e_1 \rangle$, can be matched against keys derived from previous CoT steps, e.g., $\langle \phi(h^{\tau}_j, 1), -\phi(f(j), 1) \rangle$ (where $j < i$), to identify and retrieve the most recent symbol written to cell $h^{\tau}_i$.

The step-by-step simulation proceeds as follows. For each computational step of $M$, the Transformer performs one CoT generation step (say, step $i$, for $i > n$, where $n$ is the input length). First, the Transformer reconstructs the DTM's current configuration. The DTM state $q_{i-n-1}$ is derived from the $(i-1)$-th CoT token. The symbols under $M$'s tape heads are retrieved from the CoT history using the layer-norm hash mechanism. For the input tape, the symbol $\sigma_{h^0_i}$ at head position $h^0_i$ is retrieved via attention, potentially using a query like $\langle \phi(h^0_i / i, -1), 1 \rangle$. For work and output tapes, the symbol $\gamma^{\tau}_i$ at head position $h^{\tau}_i$ is determined by finding the latest "diff" token $\delta_{j-n-1}$ (for some $j < i$) in the CoT history that corresponds to a write operation at cell $h^{\tau}_j$. Second, the Transformer's feed-forward networks implement $M$'s transition function $\delta$. Given the reconstructed state $q_{i-n-1}$ and the symbols read from the tapes ($\sigma_{h^0_i}, \gamma^1_i, \dots, \gamma^k_i$), these networks compute the new state of $M$, the symbols to be written to the tapes, and the movements of the tape heads. Finally, this output (new state, symbols written, head movements) is encoded and emitted as the next CoT token, $\delta_{i-n}$. This cycle is repeated for $t(n)$ steps, thereby simulating $t(n)$ steps of $M$. After $t(n)$ simulation steps, if $M$ has halted, the Transformer can then output $M(x)$ using $|M(x)|$ additional decoding steps by retrieving the contents of $M$'s output tape from the CoT sequence.
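The one-token-per-machine-step idea can be made concrete with a toy DTM whose step trace plays the role of the CoT sequence. The machine (flip every bit, then halt on blank), its transition table, and the diff encoding below are illustrative assumptions, not the paper's exact construction:

```python
# Toy single-tape DTM: flips every bit until the first blank "_".
# delta: (state, read) -> (new_state, write, head_move)
delta = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),
}

def run_dtm(tape_str):
    tape = dict(enumerate(tape_str))
    state, head, trace = "scan", 0, []
    while state != "halt":
        read = tape.get(head, "_")
        state, write, move = delta[(state, read)]
        tape[head] = write
        head += move
        # One "CoT token" per DTM step: the diff of this step.
        trace.append((state, write, move))
    out = "".join(tape[i] for i in sorted(tape) if tape[i] != "_")
    return out, trace

out, trace = run_dtm("0110")
assert out == "1001"
assert len(trace) == 5  # t(n) = n + 1 steps for input length 4
```

The trace has exactly one diff record per machine step, so a transformer emitting one such token per decoding step needs $t(n)$ decoding steps, which is the accounting the theorem formalizes.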
The proof of Theorem 2 above, demonstrating the capability of a Transformer to simulate a DTM step for step, directly leads to the following corollary relating their computational power:

Corollary 2.1 in Merrill and Sabharwal (2024): $\mathrm{TIME}(t(n)) \subseteq \mathrm{CoT}(t(n))$.

With Theorem 2 and Corollary 2.1 established, we can now prove the first part of Theorem 3.1. Let $L$ be an arbitrary language in the class P. By the definition of P, there exists a DTM $M_L$ and a polynomial function $p_L(k)$ such that $M_L$ decides $L$, and for any input $w$ of length $n = |w|$, $M_L$ halts in $O(p_L(n))$ time. This implies $L \in \mathrm{TIME}(p_L(n))$. To show that $L \in \mathrm{CoT}(\mathrm{poly}(n))$, we utilize a CoT-augmented Transformer $M_{\mathrm{CoT}}$ configured to simulate $M_L$ as per the construction in Theorem 2. We set the number of intermediate CoT steps generated by $M_{\mathrm{CoT}}$ to be $t(n) = p_L(n)$. Since $p_L(n)$ is a polynomial function of $n$, it follows that $t(n) = \mathrm{poly}(n)$. According to Corollary 2.1, if a language is in $\mathrm{TIME}(t(n))$, then it is also in $\mathrm{CoT}(t(n))$. Since $L \in \mathrm{TIME}(p_L(n))$, applying Corollary 2.1 with $t(n) = p_L(n)$ gives $L \in \mathrm{CoT}(p_L(n))$. Given our choice $t(n) = p_L(n) = \mathrm{poly}(n)$, it follows that $L \in \mathrm{CoT}(\mathrm{poly}(n))$. As $L$ was an arbitrary language in P, this establishes the inclusion $\mathrm{P} \subseteq \mathrm{CoT}(\mathrm{poly}(n))$.

Part II: $\mathrm{CoT}(\mathrm{poly}(n)) \subseteq \mathrm{P}$

This inclusion is demonstrated by showing that the computation performed by a CoT-augmented Transformer can be simulated by a DTM in polynomial time. Theorem
3 in Merrill and Sabharwal (2024): The relationship between CoT and Turing machine time complexity is given by $\mathrm{CoT}(t(n)) \subseteq \widetilde{\mathrm{TIME}}(n^2 + t(n)^2)$. (The tilde in $\widetilde{\mathrm{TIME}}$ hides polylogarithmic factors, specifically $\log^k(n + t(n))$ for some constant $k$.)

Corresponding Proof. The proof proceeds by constructing a DTM that simulates the operations of a CoT-augmented Transformer. The DTM simulates the Transformer for each of its $t(n)$ intermediate CoT generation steps, plus one final step to produce the answer. To generate a single CoT token, the Transformer performs a forward pass. This pass involves several stages: first, the current input sequence, consisting of the original input of length $n$ and all previously generated CoT tokens up to that point, is embedded. Let $k$ be the number of CoT tokens generated so far; the length of this sequence is $N_k = n + k$. This embedded sequence is then processed through a constant number of layers (see the constant-depth assumption in Assumption 2). Each layer typically comprises a self-attention mechanism followed by feed-forward networks. Finally, a classification layer selects the next token to be generated. The DTM simulates each of these operations. A key component in the complexity analysis is the self-attention mechanism. For a sequence of length $N_k$, each self-attention layer performs computations that are roughly quadratic in $N_k$, i.e., $O(N_k^2)$, multiplied by factors related to the model's embedding dimension and number of attention heads (which are assumed to be constant or at most logarithmic in $N_k$ according to Assumption 2). The architectural assumptions include log-precision arithmetic, meaning that numbers are represented with $O(\log(n + t(n)))$ bits. Consequently, each arithmetic operation (such as addition or multiplication) on these numbers can be simulated by the DTM in $O(\mathrm{polylog}(N_k))$ time. The DTM performs a total of approximately $n + t(n)$ such forward passes (one for each token processed or generated).
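Ignoring polylog factors, summing per-pass costs that grow linearly with position reduces to an arithmetic series, which a toy numeric check (illustrative only, not part of the proof) confirms is quadratic in $n + t$:

```python
# Per-pass cost model: pass j attends over j positions, so cost ~ j
# (polylog factors dropped). Total over n + t passes is quadratic.
def total_cost(n, t):
    return sum(j for j in range(1, n + t + 1))

n, t = 100, 900
total = total_cost(n, t)
assert total == (n + t) * (n + t + 1) // 2  # arithmetic-series identity
assert total <= (n + t) ** 2                # within the quadratic bound
```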
The $j$-th forward pass in this sequence processes an input of length $j$. The cost of simulating the $j$-th forward pass is polynomial in $j$ and polylogarithmic in the precision. Summing the costs of all forward passes, where the $j$-th pass attends over $j$ key-value pairs at a per-pair cost of $O(\mathrm{polylog}(j))$, the total time for the DTM to simulate all $n + t(n)$ forward passes is roughly $\sum_{j=1}^{n+t(n)} O(j \cdot \mathrm{polylog}(j)) = O\big((n + t(n))^2 \cdot \mathrm{polylog}(n + t(n))\big)$, which is denoted $\widetilde{O}(n^2 + t(n)^2)$.

With Theorem 3 established, we can now prove the second part of Theorem 3.1. Let $L$ be an arbitrary language in $\mathrm{CoT}(\mathrm{poly}(n))$. By definition, this implies that there exists a CoT-augmented Transformer, $M_{\mathrm{CoT}}$, and a polynomial function $p_M(n)$ such that $M_{\mathrm{CoT}}$ decides the language $L$ by generating $t(n) = p_M(n)$ intermediate CoT steps. According to Theorem 3, the Transformer $M_{\mathrm{CoT}}$, which generates $t(n)$ CoT tokens for an input of length $n$, can be simulated by a deterministic Turing machine in $\widetilde{O}(n^2 + (t(n))^2)$ time. We substitute $t(n) = p_M(n)$ into this time bound. The simulation time for $M_{\mathrm{CoT}}$ by the DTM thus becomes $\widetilde{O}(n^2 + (p_M(n))^2)$. Since $p_M(n)$ is a polynomial function of $n$, its square $(p_M(n))^2$ is also a polynomial in $n$. Consequently, the expression $n^2 + (p_M(n))^2$ is a polynomial function of $n$. The $\widetilde{O}$ notation
encapsulates polylogarithmic factors, which do not alter the overall polynomial nature of this time bound. Therefore, the DTM simulates $M_{\mathrm{CoT}}$ in time that is polynomial with respect to the input length $n$. This signifies that the language $L$, decided by $M_{\mathrm{CoT}}$, belongs to the complexity class P. As $L$ was an arbitrary language in $\mathrm{CoT}(\mathrm{poly}(n))$, we conclude that $\mathrm{CoT}(\mathrm{poly}(n)) \subseteq \mathrm{P}$.

Combining the inclusions from Part I and Part II, it directly follows that $\mathrm{CoT}(\mathrm{poly}(n)) = \mathrm{P}$.

E.3 Proof of Theorem 3.2: $\mathrm{AoT}(\mathrm{poly}(n)) = \mathrm{NP}$

Proof. The proof is demonstrated in two parts: $\mathrm{NP} \subseteq \mathrm{AoT}(\mathrm{poly}(n))$ and $\mathrm{AoT}(\mathrm{poly}(n)) \subseteq \mathrm{NP}$. We rely on the standard definitions of these complexity classes and the specified architectural assumptions for the Transformer (Assumption 2).

Part I: $\mathrm{NP} \subseteq \mathrm{AoT}(\mathrm{poly}(n))$

This inclusion is established by demonstrating that an AoT-prompted Transformer can decide any language $L \in \mathrm{NP}$. Let $L$ be an arbitrary language in the complexity class NP. By the verifier-based definition of NP, there exists a polynomial function $p_c(k)$ (bounding the certificate length) and a DTM $V$ (the verifier), along with another polynomial function $p_V(k)$ (bounding the verifier's runtime), such that for any input string $w$ of length $n = |w|$:

$w \in L \iff \exists u \in \Sigma^*$ such that $|u| \le p_c(n)$ and $V$ accepts the pair $\langle w, u \rangle$ within $p_V(|w| + |u|)$ time.

The string $u$ is the certificate (or witness) for the membership of $w$ in $L$. Since $|u| \le p_c(n)$, the length of the input to $V$, $|w| + |u|$, is bounded by $n + p_c(n)$. Consequently, the runtime of $V$, $p_V(n + p_c(n))$, is also a polynomial in $n$. Let this polynomial runtime of $V$ be denoted $p'_V(n)$. We will construct an AoT-prompted Transformer, $M_{\mathrm{AoT}}$, that decides $L$. The AoT process for $M_{\mathrm{AoT}}$ will simulate the nondeterministic guess of a certificate $u$ through guided exploration, and then deterministically verify this certificate using a simulation of the DTM $V$.

1. Certificate Generation (Simulating Nondeterministic Guess via AoT Exploration): AoT prompting enables the Transformer to explore various computational pathways.
This capability is harnessed to search for or generate a candidate certificate $u$. The in-context examples provided to $M_{\mathrm{AoT}}$ are designed to demonstrate an effective search strategy for such certificates (e.g., by showing how to construct them incrementally, explore a constrained search space, or emulate steps of a search algorithm). If $w \in L$, a valid certificate $u$ with $|u| \le p_c(n)$ exists and, by Definition 9, the search space is sufficiently large to include all such candidates, in particular $u$. The AoT-prompted Transformer, through its explorative generation capabilities, can produce a sequence of intermediate thought tokens representing such a candidate certificate $u$. These tokens are generated autoregressively. The segment of an AoT computational path corresponding to the generation of $u$ consists of a number of tokens proportional to $|u|$. Assuming each symbol of $u$ can be represented by a constant number of tokens (or a number of tokens polylogarithmic in $n$, fitting within the model's tokenization scheme), this requires at most $k_u = O(p_c(n))$ tokens. This phase effectively emulates the guessing or existential quantification aspect of NP. The AoT framework allows the model to try different potential certificates if earlier attempts do not lead to verification, using backtracking and exploration as guided by its prompting.

2. Verification (Simulating Verifier DTM $V$): Once a candidate certificate $u$ (represented by at most $k_u$ tokens) has been generated as part of an AoT computational path, the Transformer then simulates the execution of the verifier DTM $V$ on the input pair $\langle w,$
$u \rangle$. The verifier $V$ runs in $p'_V(n)$ time on this input. According to Theorem 2 in Merrill and Sabharwal (2024) (as cited in the proof of Theorem 3.1), a decoder-only Transformer satisfying Assumption 2 can simulate a DTM (like $V$) running in $p'_V(n)$ steps by generating $p'_V(n)$ intermediate tokens. These tokens effectively form a Chain of Thought detailing the verifier's computation steps. These verification tokens are generated sequentially after the tokens representing $u$, forming a continuous part of the same AoT computational path. Let $k_V = p'_V(n)$ be the number of tokens for this verification phase.

3. Overall AoT Accepting Path and its Length: If $w \in L$, then there exists at least one certificate $u$ (of length $\le p_c(n)$) that $V$ accepts. The AoT-prompted Transformer $M_{\mathrm{AoT}}$ is designed to explore the space of potential certificates. An accepting computational path for $M_{\mathrm{AoT}}$ consists of:

• a sequence of tokens representing the generation of a valid certificate $u$: requiring $k_u \le O(p_c(n))$ tokens;

• a sequence of tokens representing the simulation of $V(\langle w, u \rangle)$ leading to acceptance by $V$: requiring $k_V = p'_V(n)$ tokens.

The total number of intermediate tokens along this single accepting computational path is $a(n) = k_u + k_V$. This sum is bounded as follows: $a(n) \le O(p_c(n)) + p'_V(n)$. Since $p_c(n)$ and $p'_V(n)$ are both polynomials in $n$, their sum $a(n)$ is also a polynomial in $n$. Note that this upper bound applies to the accepting path specifically, not necessarily the full exploration (see Remark 1). If $V$ accepts $\langle w, u \rangle$, $M_{\mathrm{AoT}}$ is instructed to output an accept signal. The AoT framework, as guided by its in-context algorithmic examples, allows the model to find such a path if one exists.
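The distinction between the accepting path's length and the total search effort (Remark 1) can be sketched with guess-and-verify on SUBSET-SUM. The problem choice, the index encoding, and the token accounting below are illustrative assumptions:

```python
from itertools import chain, combinations

# Guess-and-verify on SUBSET-SUM: exhaustively enumerate candidate
# certificates (exponentially many in the worst case), but report the
# length of the single accepting trace separately from the total
# number of "intermediate tokens" spent searching.

def verify(nums, target, cert):
    return sum(nums[i] for i in cert) == target

def aot_search(nums, target):
    explored = 0  # total tokens over the whole exhaustive search
    for cert in chain.from_iterable(
            combinations(range(len(nums)), r)
            for r in range(len(nums) + 1)):
        explored += len(cert) + 1
        if verify(nums, target, cert):
            # Accepting trace: certificate tokens + verification steps.
            return cert, len(cert) + len(nums), explored
    return None, 0, explored

cert, trace_len, explored = aot_search([3, 34, 4, 12, 5, 2], 9)
assert cert is not None and verify([3, 34, 4, 12, 5, 2], 9, cert)
assert trace_len <= explored  # accepting path is shorter than the search
```

The `AoT(poly(n))` classification counts only `trace_len`; `explored` is the (possibly exponential) search effort that Remark 1 sets aside.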
If $w \notin L$, no such certificate exists, and consequently no accepting computational path of polynomial length can be completed by $M_{\mathrm{AoT}}$ leading to acceptance; it will eventually halt and reject (as per the requirement that it decides $L$, which implies halting on all inputs after exploring relevant paths or exhausting search bounds defined by the AoT prompting strategy). The definition of $\mathrm{AoT}(\mathrm{poly}(n))$ in Theorem 3.2 specifically refers to the length $a(n)$ of this successful computational trace. The existence of such a polynomially bounded accepting path for all $w \in L$ is guaranteed if the underlying NP problem has such certificates and verifiers. Therefore, any language $L \in \mathrm{NP}$ can be decided by an AoT-prompted Transformer where an accepting computation involves generating $a(n) = \mathrm{poly}(n)$ intermediate tokens along that specific accepting path. Thus, $\mathrm{NP} \subseteq \mathrm{AoT}(\mathrm{poly}(n))$.

Part II: $\mathrm{AoT}(\mathrm{poly}(n)) \subseteq \mathrm{NP}$

This inclusion requires showing that if a language $L$ is decided by an AoT-prompted Transformer $M_{\mathrm{AoT}}$ using at most $a(n) = \mathrm{poly}(n)$ intermediate tokens along any single accepting computational path, then $L \in \mathrm{NP}$. To show $L \in \mathrm{NP}$, we must construct a polynomial-time deterministic verifier DTM, $V_{\mathrm{AoT}}$, and demonstrate that for any $w \in L$, there exists a certificate $u_{\mathrm{cert}}$ of polynomial length that $V_{\mathrm{AoT}}$ can use to verify $w$'s membership in $L$ in polynomial time.

1. Certificate Definition: Let $w \in L$. Since $M_{\mathrm{AoT}}$ decides $L$ and accepts $w$, by the definition of $\mathrm{AoT}(\mathrm{poly}(n))$ in Theorem 3.2, there exists at least one specific computational path (a sequence of generated intermediate thought tokens) that leads to $M_{\mathrm{AoT}}$'s acceptance of $w$. Let this sequence of tokens be $S_{\mathrm{AoT}} = (s_1, s_2, \dots, s_k)$, where $k \le a(n)$ and $a(n)$ is a polynomial in $n$. This sequence $S_{\mathrm{AoT}}$ serves as our certificate, $u_{\mathrm{cert}}$. The
length of this certificate, measured in the number of tokens, is $k$, which is polynomial in $n$. The bit length of $u_{\mathrm{cert}}$ is $k \cdot P_{\mathrm{bits}}$. With $P_{\mathrm{bits}} = O(\log(n + k))$ (from Assumption 2 on logarithmic precision for parameters and activations, applied to input $n$ and CoT/AoT length $k$), and since $k \le a(n) = \mathrm{poly}(n)$, $n + k$ is also polynomial in $n$. Thus, $\log(n + k)$ is polylogarithmic in $n$. The bit length of $u_{\mathrm{cert}}$ is $k \cdot O(\log(n + a(n)))$, which is $\mathrm{poly}(n) \cdot \mathrm{polylog}(n)$ and therefore polynomial in $n$.

2. Verifier DTM ($V_{\mathrm{AoT}}$) Construction: The verifier DTM $V_{\mathrm{AoT}}$ takes as input the pair $\langle w, u_{\mathrm{cert}} \rangle = \langle w, S_{\mathrm{AoT}} \rangle$. $V_{\mathrm{AoT}}$ must perform two main tasks in polynomial time:

• verify that $S_{\mathrm{AoT}}$ is a valid computational path that $M_{\mathrm{AoT}}$ could have legitimately generated starting from input $w$;

• verify that this path $S_{\mathrm{AoT}}$ results in $M_{\mathrm{AoT}}$ accepting $w$.

The DTM $V_{\mathrm{AoT}}$ operates by simulating the AoT-prompted Transformer $M_{\mathrm{AoT}}$ step by step, forced along the path dictated by the provided certificate $S_{\mathrm{AoT}}$. For each token $s_j$ in $S_{\mathrm{AoT}}$ (for $j$ from $1$ to $k$):

• $V_{\mathrm{AoT}}$ considers the original input $w$ and the sequence of previously verified tokens from the certificate, $(s_1, \dots, s_{j-1})$, as the current prefix for $M_{\mathrm{AoT}}$.

• $V_{\mathrm{AoT}}$ then simulates a single forward pass of the Transformer $M_{\mathrm{AoT}}$ (as defined by its architecture in Assumption 2) to determine the token $s'_j$ that $M_{\mathrm{AoT}}$ would deterministically generate given this prefix $(w, s_1, \dots, s_{j-1})$.

• $V_{\mathrm{AoT}}$ compares this internally computed $s'_j$ with the token $s_j$ from the certificate. If $s'_j \neq s_j$, then $S_{\mathrm{AoT}}$ is not a valid computational path that $M_{\mathrm{AoT}}$ would have taken, so $V_{\mathrm{AoT}}$ halts and rejects.

3. Time Complexity of $V_{\mathrm{AoT}}$: The dominant operation within $V_{\mathrm{AoT}}$ is the simulation of a single forward pass of the Transformer $M_{\mathrm{AoT}}$ for each of the $k$ tokens in the certificate. Let $N_j = n + (j - 1)$ be the length of the sequence (input $w$ plus $j - 1$ preceding thought tokens) upon which the $j$-th token is conditioned.
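Step 2's token-by-token replay can be sketched as follows, with a toy deterministic next-token rule standing in for the Transformer's forward pass (the rule and names are assumptions for illustration, not a model):

```python
# Replay-style verification: recompute each token of the claimed
# trace with a deterministic next-token function and reject on the
# first mismatch (s'_j != s_j).

def next_token(prefix):
    # Toy deterministic rule standing in for a forward pass:
    # next token is the running sum of the prefix mod 10.
    return sum(prefix) % 10

def replay_verify(w, trace):
    prefix = list(w)
    for s_j in trace:
        if next_token(prefix) != s_j:  # s'_j != s_j  =>  reject
            return False
        prefix.append(s_j)
    return True

w = [4, 7]
good = []
for _ in range(3):
    good.append(next_token(w + good))
assert replay_verify(w, good)          # the honest trace replays
assert not replay_verify(w, [9, 9, 9]) # a forged trace is rejected
```

Because the generator is deterministic given the prefix, the certificate need only pin down the path; the verifier never searches, which is what keeps it in polynomial time.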
The proof of Theorem 3 in Merrill and Sabharwal (2024) ($\mathrm{CoT}(t(n)) \subseteq \widetilde{\mathrm{TIME}}(n^2 + t(n)^2)$) indicates that a single forward pass on a sequence of length $N_j$ can be simulated by a DTM in time polynomial in $N_j$, specifically $O(N_j^2 \cdot \mathrm{polylog}(N_j))$, denoted $\widetilde{O}(N_j^2)$. Since $j \le k \le a(n)$, and $a(n)$ is polynomial in $n$, $N_j = n + j - 1$ is also polynomial in $n$. Thus, the time to verify each token $s_j$ is $\widetilde{O}((n + j - 1)^2)$, which is polynomial in $n$. The DTM $V_{\mathrm{AoT}}$ performs this verification for all $k = |S_{\mathrm{AoT}}|$ tokens. The total time for $V_{\mathrm{AoT}}$ is the sum of times for each step:

$$\mathrm{Time}(V_{\mathrm{AoT}}) = \sum_{j=1}^{k} \widetilde{O}\big((n + j - 1)^2\big).$$

Since $k \le a(n)$ and $a(n)$ is a polynomial in $n$ (let $a(n) = O(n^{c_1})$ for some constant $c_1 \ge 0$), this sum is bounded by $k \cdot \widetilde{O}((n + k)^2)$. Substituting $k = O(n^{c_1})$:

$$\mathrm{Time}(V_{\mathrm{AoT}}) = O(n^{c_1}) \cdot \widetilde{O}\big((n + O(n^{c_1}))^2\big).$$

Let $c_{\max} = \max(1, c_1)$. Then $n + O(n^{c_1}) = O(n^{c_{\max}})$. The expression becomes:

$$\mathrm{Time}(V_{\mathrm{AoT}}) = O(n^{c_1}) \cdot \widetilde{O}\big((n^{c_{\max}})^2\big) = O(n^{c_1}) \cdot \widetilde{O}(n^{2 c_{\max}}) = \widetilde{O}(n^{c_1 + 2 c_{\max}}).$$

This is a polynomial function of $n$ (the $\widetilde{O}$ notation hides polylogarithmic factors, which are absorbed into the overall polynomial bound).

4. Final Acceptance by $V_{\mathrm{AoT}}$: If $V_{\mathrm{AoT}}$ successfully verifies all $k$ tokens in $S_{\mathrm{AoT}}$ (i.e., $s'_j = s_j$ for all $j = 1, \dots, k$), it then needs to determine whether this now-validated computational path $S_{\mathrm{AoT}}$ indeed corresponds to an acceptance of $w$ by $M_{\mathrm{AoT}}$. This typically involves checking the nature of the last generated token $s_k$ (e.g., whether it is a special accept token) or simulating the final classification layer of the Transformer after the full sequence $(w, S_{\mathrm{AoT}})$ has been processed to see if it outputs an accept decision. This final
check is part of the simulation of the $k$-th step (or an implicit $(k+1)$-th decision step based on the state after $s_k$) and is therefore completed within the polynomial time bound. If the path $S_{\mathrm{AoT}}$ leads to an acceptance state for $M_{\mathrm{AoT}}$, then $V_{\mathrm{AoT}}$ accepts $\langle w, S_{\mathrm{AoT}} \rangle$. Otherwise (e.g., if the path does not end in acceptance, or if any $s_j$ was inconsistent during verification), $V_{\mathrm{AoT}}$ rejects. Since the certificate $S_{\mathrm{AoT}}$ has polynomial length (in terms of bits, as derived from $k$ tokens and logarithmic precision), and the deterministic verifier DTM $V_{\mathrm{AoT}}$ verifies it in polynomial time with respect to $n$, it follows by the definition of NP that the language $L$ decided by $M_{\mathrm{AoT}}$ is in NP. Therefore, $\mathrm{AoT}(\mathrm{poly}(n)) \subseteq \mathrm{NP}$.

Combining the inclusions from Part I ($\mathrm{NP} \subseteq \mathrm{AoT}(\mathrm{poly}(n))$) and Part II ($\mathrm{AoT}(\mathrm{poly}(n)) \subseteq \mathrm{NP}$), we conclude that $\mathrm{AoT}(\mathrm{poly}(n)) = \mathrm{NP}$.

E.4 Proof of Theorem 3.3: $\mathrm{CoT}(\exp(n)) = \mathrm{EXP}$

Proof. The notation $\exp(n)$ signifies a function $t(n)$ such that $t(n) = O(2^{p(n)})$, where $p(n)$ is a polynomial in $n$. The complexity class EXP is defined according to Definition 5 as $\mathrm{EXP} = \bigcup_{k \in \mathbb{N}} \mathrm{TIME}(2^{n^k})$. The proof is demonstrated in two parts.

Part I: $\mathrm{EXP} \subseteq \mathrm{CoT}(\exp(n))$

This inclusion is established by demonstrating that a CoT-augmented Transformer, under the specified architectural assumptions, can simulate any DTM that operates within an exponential time bound. Let $L$ be an arbitrary language in the class EXP. By the definition of EXP, there exists a DTM $M_L$ and a non-negative integer constant $k_0$ such that $M_L$ decides $L$, and for any input $w$ of length $n = |w|$, $M_L$ halts in $T_{M_L}(n) = O(2^{n^{k_0}})$ time. Let $p_L(n) = n^{k_0}$; thus, $M_L$ runs in $O(2^{p_L(n)})$ time. This means $L \in \mathrm{TIME}(O(2^{p_L(n)}))$. We leverage Theorem 2 in Merrill and Sabharwal (2024) for $L \in \mathrm{EXP}$, and set the DTM running time $t(n)$ to correspond to $T_{M_L}(n)$. If $T_{M_L}(n) \le c \cdot 2^{p_L(n)}$ for some constant $c$, we can choose the number of CoT steps to be $t_{\mathrm{CoT}}(n) = 2^{p'_L(n)}$, where $p'_L(n)$ is a polynomial such that $c \cdot 2^{p_L(n)} \le 2^{p'_L(n)}$ for sufficiently large $n$ (e.g., $p'_L(n) = p_L(n) + \lceil \log_2 c \rceil$). This $t_{\mathrm{CoT}}(n)$ is an $\exp(n)$ function.
The Transformer will thus generate $t_{\mathrm{CoT}}(n)$ CoT tokens to simulate $M_L$. The architectural assumption of logarithmic precision ($P_{\mathrm{bits}}(W, A) = O(\log(n + t(n)))$ from Assumption 2) must be compatible. With $t(n) = t_{\mathrm{CoT}}(n) = O(2^{p'_L(n)})$, the precision becomes $P_{\mathrm{bits}} = O(\log(n + 2^{p'_L(n)}))$. For large $n$, $2^{p'_L(n)}$ dominates, leading to $P_{\mathrm{bits}} = O(\log(2^{p'_L(n)})) = O(p'_L(n))$. This polynomial precision (in $n$) is consistent with the $O(\log(T + N))$ precision requirement for their DTM simulation construction (where $T$ is the number of DTM steps and $N$ is the input length). This is a crucial point: the logarithmic precision is relative to the total sequence length; if the CoT is exponential, the required bit precision becomes polynomial in $n$. The Transformer simulates $M_L$ using $t_{\mathrm{CoT}}(n)$ CoT steps. Next, by Corollary 2.1 in Merrill and Sabharwal (2024), $\mathrm{TIME}(t(n)) \subseteq \mathrm{CoT}(t(n))$. Since $L \in \mathrm{TIME}(T_{M_L}(n))$ and $T_{M_L}(n) \le t_{\mathrm{CoT}}(n)$, where $t_{\mathrm{CoT}}(n)$ is $\exp(n)$, applying Corollary 2.1 gives $L \in \mathrm{CoT}(t_{\mathrm{CoT}}(n))$. Therefore, $L \in \mathrm{CoT}(\exp(n))$. As $L$ was an arbitrary language in EXP, this establishes the inclusion $\mathrm{EXP} \subseteq \mathrm{CoT}(\exp(n))$.

Part II: $\mathrm{CoT}(\exp(n)) \subseteq \mathrm{EXP}$

This inclusion is demonstrated by showing that the computation performed by a CoT-augmented Transformer using $\exp(n)$ CoT steps can be simulated by a DTM in exponential time. Let $L$ be an arbitrary language in $\mathrm{CoT}(\exp(n))$. By definition, there exists a CoT-augmented Transformer, $M_{\mathrm{CoT}}$ (satisfying Assumption 2), that decides $L$ by generating $t_{\mathrm{CoT}}(n)$ intermediate CoT tokens, where $t_{\mathrm{CoT}}(n) = O(2^{p_M(n)})$ for some polynomial $p_M(n)$. According to Theorem 3 in Merrill and Sabharwal (2024), the relationship is $\mathrm{CoT}(t(n)) \subseteq \widetilde{\mathrm{TIME}}(n^2 + t(n)^2)$. The $\widetilde{\mathrm{TIME}}$ notation hides
polylogarithmic factors of its argument, i.e., the actual DTM simulation time is $O((n^2 + t(n)^2) \cdot (\log(n + t(n)))^j)$ for some constant $j$. Substituting $t_{\mathrm{CoT}}(n) = O(2^{p_M(n)})$ into this time bound, the term $(t_{\mathrm{CoT}}(n))^2$ becomes $(t_{\mathrm{CoT}}(n))^2 = (O(2^{p_M(n)}))^2 = O(2^{2 p_M(n)})$. Let $p'_M(n) = 2 p_M(n)$, which is also a polynomial; then $(t_{\mathrm{CoT}}(n))^2 = O(2^{p'_M(n)})$. The base complexity $n^2 + (t_{\mathrm{CoT}}(n))^2 = n^2 + O(2^{p'_M(n)})$ is, for large $n$ (assuming $p'_M(n)$ is not identically zero), dominated by $O(2^{p'_M(n)})$. Now consider the polylogarithmic factor $(\log(n + t_{\mathrm{CoT}}(n)))^j$. Since $t_{\mathrm{CoT}}(n) = O(2^{p_M(n)})$, for large $n$, $n + t_{\mathrm{CoT}}(n)$ is $O(2^{p_M(n)})$, so $\log(n + t_{\mathrm{CoT}}(n)) = \log(O(2^{p_M(n)})) = O(p_M(n))$. The polylogarithmic factor is thus $(O(p_M(n)))^j$, a polynomial in $n$; call it $P_{\mathrm{poly\_log}}(n)$. The total DTM simulation time is $O(2^{p'_M(n)} \cdot P_{\mathrm{poly\_log}}(n))$. To show this is an exponential time bound of the form $2^{\mathrm{poly}(n)}$, we rewrite $P_{\mathrm{poly\_log}}(n)$ as $2^{\log_2(P_{\mathrm{poly\_log}}(n))}$. The time bound then becomes $O(2^{p'_M(n)} \cdot 2^{\log_2(P_{\mathrm{poly\_log}}(n))}) = O(2^{p'_M(n) + \log_2(P_{\mathrm{poly\_log}}(n))})$. Let $p''_M(n) = p'_M(n) + \log_2(P_{\mathrm{poly\_log}}(n))$. Since $p'_M(n)$ is a polynomial in $n$ and $P_{\mathrm{poly\_log}}(n)$ is also a polynomial in $n$, $\log_2(P_{\mathrm{poly\_log}}(n))$ grows slower than any polynomial of positive degree (e.g., if $P_{\mathrm{poly\_log}}(n) = n^a$, then $\log_2(n^a) = a \log_2 n$).

• If $p_M(n)$ (and thus $p'_M(n)$) is a polynomial of degree $d \ge 1$, then $p'_M(n)$ dominates $\log_2(P_{\mathrm{poly\_log}}(n))$. Thus $p''_M(n) = O(p'_M(n))$, which is polynomial, and the DTM runs in $O(2^{\mathrm{poly}(n)})$ time.

• If $p_M(n)$ is a constant (degree 0), say $c_0$, then $t_{\mathrm{CoT}}(n) = O(2^{c_0}) = O(1)$. In this case, the original simulation time is $\widetilde{O}(n^2) = O(n^2 \cdot (\log n)^j)$, which is polynomial time. Since $\mathrm{P} \subseteq \mathrm{EXP}$, this scenario is consistent: the $n^2$ base term dominates and the overall time remains (at most) exponential.

More generally, for $t_{\mathrm{CoT}}(n) = O(2^{p_M(n)})$, the exponent $p''_M(n)$ is bounded by some polynomial in $n$. The DTM simulates $M_{\mathrm{CoT}}$ in time $O(2^{p''_M(n)})$. By the definition of EXP (time $2^{\mathrm{poly}(n)}$), this signifies that $L \in \mathrm{EXP}$.
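The precision collapse used in Part I — that $\log_2$ of an exponential CoT length is only polynomial — can be sanity-checked numerically (an illustrative check with an assumed example polynomial $p(n) = n^2$, not part of the proof):

```python
import math

# When t(n) = 2**p(n), the precision bound log2(n + t(n)) collapses
# to roughly p(n): polynomial bit precision despite exponential CoT.
def p(n):
    return n ** 2

for n in range(2, 12):
    bits = math.log2(n + 2 ** p(n))
    # log2(n + 2^p) lies within 1 bit of p(n) once 2^p dominates n.
    assert p(n) <= bits <= p(n) + 1
```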
As $L$ was an arbitrary language in $\mathrm{CoT}(\exp(n))$, we conclude that $\mathrm{CoT}(\exp(n)) \subseteq \mathrm{EXP}$. By combining the inclusions from Part I ($\mathrm{EXP} \subseteq \mathrm{CoT}(\exp(n))$) and Part II ($\mathrm{CoT}(\exp(n)) \subseteq \mathrm{EXP}$), it directly follows that $\mathrm{CoT}(\exp(n)) = \mathrm{EXP}$. This equivalence demonstrates that decoder-only Transformers (under Assumption 2) augmented with a number of CoT tokens that is exponential in the input length possess computational power equivalent to that of exponential-time deterministic Turing machines.

E.5 Proof of Theorem 3.4: $\mathrm{AoT}(\exp(n)) = \mathrm{NEXP}$

Proof. The complexity class NEXP is defined as $\mathrm{NEXP} = \bigcup_{k \in \mathbb{N}} \mathrm{NTIME}(2^{n^k})$. An exponential function $a(n) = \exp(n)$ means $a(n) = O(2^{p_A(n)})$ for some polynomial $p_A(n)$. The proof consists of two parts: $\mathrm{NEXP} \subseteq \mathrm{AoT}(\exp(n))$ and $\mathrm{AoT}(\exp(n)) \subseteq \mathrm{NEXP}$. We rely on the definitions provided in the Preliminaries (Appendix E) and the architectural assumptions in Assumption 2.

Part I: $\mathrm{NEXP} \subseteq \mathrm{AoT}(\exp(n))$

This part demonstrates that any language $L \in \mathrm{NEXP}$ can be decided by an AoT-prompted Transformer using a number of intermediate tokens that is exponential in the input length $n$ along any accepting path. Let $L$ be an arbitrary language in NEXP. By definition, $L \in \mathrm{NTIME}(T(n))$ for $T(n) = 2^{p_L(n)}$ for some polynomial $p_L(n)$. As stated in the definition of deterministic and nondeterministic time classes (Definition 5), membership in $\mathrm{NTIME}(t(n))$ implies there exists a deterministic Turing machine (DTM) $V$ (the verifier) such that for every input string $w$ of length $n = |w|$:

$w \in L \iff \exists u \in \Sigma^*$ (certificate/witness) such that $|u| = O(2^{p_L(n)})$ and $V$ accepts $\langle w, u \rangle$ in $O(2^{p_L(n)})$ time.

Let $p'(n)$ be a polynomial (potentially $p_L(n)$ adjusted by constants) such that $|u| \le c_1 \cdot 2^{p'(n)}$ and $V$ runs in at most $T_V(n) = c_2 \cdot 2^{p'(n)}$ steps for constants $c_1, c_2$. We construct an AoT-prompted Transformer,
denoted $M_{\mathrm{AoT}}$, satisfying Assumption 2, that decides $L$. On input $w$, the AoT process for $M_{\mathrm{AoT}}$ proceeds as follows:

1. Certificate Generation (Simulating Nondeterministic Guess via AoT Exploration): The AoT prompting supplies $M_{\mathrm{AoT}}$ with examples demonstrating algorithmic search or construction. This capability is leveraged to explore the space of potential certificates and generate a candidate $u$. If $w \in L$, a valid certificate $u$ of length $O(2^{p'(n)})$ exists and, by Definition 9, the search space is sufficiently large to include all such candidates, in particular $u$. $M_{\mathrm{AoT}}$ generates a sequence of intermediate tokens representing this candidate $u$. Assuming each symbol of $u$ is encoded by a constant (or polylogarithmic) number of tokens, the number of tokens required to represent $u$, denoted $k_u$, is $k_u = O(|u|) = O(2^{p'(n)})$. This is an $\exp(n)$ number of tokens.

2. Verification (Simulating Verifier DTM $V$): After generating the tokens for a candidate $u$, $M_{\mathrm{AoT}}$ simulates the execution of the DTM verifier $V$ on $\langle w, u \rangle$. The verifier $V$ runs in $T_V(n) = O(2^{p'(n)})$ time. Based on the established capability of Transformers under Assumption 2 to simulate DTMs (using the logic from Theorem 2 in Merrill and Sabharwal (2024)), $M_{\mathrm{AoT}}$ can simulate the $T_V(n)$ steps of $V$ by generating $k_V = O(T_V(n)) = O(2^{p'(n)})$ intermediate tokens. This simulation is feasible because the required precision is $P_{\mathrm{bits}} = O(\log(n + k_u + k_V)) = O(\log(n + 2^{p'(n)}))$, which simplifies for large $n$ to $P_{\mathrm{bits}} = O(p'(n))$. This polynomial precision (in $n$) is consistent with Assumption 2.

3. Overall AoT Accepting Path and its Length: If $w \in L$, there exists at least one certificate $u$ for which $V(\langle w, u \rangle)$ accepts. The AoT prompting guides $M_{\mathrm{AoT}}$ to find such a computational path. This accepting path comprises:

• $k_u = O(2^{p'(n)})$ tokens for generating $u$;

• $k_V = O(2^{p'(n)})$ tokens for simulating $V$'s accepting computation.

The total number of intermediate tokens along this single accepting path is $a_{\mathrm{path}}(n) = k_u + k_V = O(2^{p'(n)}) + O(2^{p'(n)}) = O(2^{p'(n)})$.
Since $p'(n)$ is a polynomial, $a_{\mathrm{path}}(n)$ is an $\exp(n)$ function. This path length must be bounded by the overall AoT capacity $a(n) = O(2^{p_A(n)})$, which holds if we choose $p_A(n)$ appropriately (e.g., $p_A(n) = p'(n)$). Note that this upper bound applies to the accepting path specifically, not necessarily the full exploration (see Remark 1). If $w \notin L$, no certificate $u$ exists that $V$ accepts. Consequently, no AoT path corresponding to guessing and successfully verifying a certificate will lead to acceptance. $M_{\mathrm{AoT}}$ halts and rejects (as it decides $L$). The existence of such an exponentially bounded accepting path for all $w \in L$ (and no such path for $w \notin L$) means $L \in \mathrm{AoT}(\exp(n))$. Therefore, $\mathrm{NEXP} \subseteq \mathrm{AoT}(\exp(n))$.

Part II: $\mathrm{AoT}(\exp(n)) \subseteq \mathrm{NEXP}$

This part shows that if a language $L$ is decided by an AoT-prompted Transformer $M_{\mathrm{AoT}}$ using $a(n) = \exp(n)$ tokens along any accepting path ($a(n) = O(2^{p_A(n)})$ for polynomial $p_A(n)$), then $L \in \mathrm{NEXP}$. To prove $L \in \mathrm{NEXP}$, we construct a NEXP verifier for $L$: we define a certificate $u_{\mathrm{cert}}$ of length $\exp(n)$ and a DTM verifier $V_{\mathrm{AoT}}$ that runs in $\exp(n)$ time, such that $V_{\mathrm{AoT}}$ accepts $\langle w, u_{\mathrm{cert}} \rangle$ if and only if $w \in L$.

1. Certificate Definition: If $w \in L$, by the definition of $M_{\mathrm{AoT}}$ deciding $L$ in $\mathrm{AoT}(\exp(n))$, there exists at least one accepting sequence of intermediate AoT tokens $S_{\mathrm{AoT}} = (s_1, s_2, \dots, s_k)$, where $k \le a(n) = O(2^{p_A(n)})$. This sequence $S_{\mathrm{AoT}}$ is our certificate, $u_{\mathrm{cert}}$. The number of tokens is $k = O(2^{p_A(n)})$. The bit length requires precision $P_{\mathrm{bits}}$. From Assumption 2, $P_{\mathrm{bits}} = O(\log(n + k))$. Since $k = O(2^{p_A(n)})$, $P_{\mathrm{bits}} = O(\log(n + 2^{p_A(n)})) = O(p_A(n))$ (for polynomial $p_A(n)$). The total bit length of $u_{\mathrm{cert}}$ is $|u_{\mathrm{cert}}| = k \cdot P_{\mathrm{bits}}$, which evaluates to $|u_{\mathrm{cert}}| = O(2^{p_A(n)}) \cdot O(p_A(n)) = O(2^{p_A(n) + \log_2(p_A(n))})$. Let
A(n) =pA(n) +log2(pA(n)). Since pA(n)is polynomial, p′ A(n)is also polynomial (as log2of a polynomial grows slower than the polynomial). Thus, the bit length |ucert|=O(2p′ A(n)), which is exp(n). 2. Verifier DTM ( VAoT) Construction: The DTM VAoTtakes ⟨w, u cert⟩=⟨w, S AoT⟩as input. It deterministically simulates MAoTstep-by-step, verifying against SAoT: For j= 1tok: •Consider the prefix (w, s 1, . . . , s j−1). •Simulate one forward pass of MAoTon this prefix to compute the next token s′ j. •Compare s′ jwith sj. Ifs′ j̸=sj, reject. If all ktokens match, check if the path SAoTcorresponds to an acceptance state in MAoT. If yes, accept ⟨w, S AoT⟩. Otherwise, reject. 3. Time Complexity of VAoT:The core cost is simulating kforward passes. Let Nj=n+ (j−1) be the sequence length at step j. From the analysis supporting Theorem 3 in Merrill and Sabharwal (2024), the simulation time for one step, Tstep(Nj), isO(N2 j·poly(Pbits)). Here, Pbits=O(pA(n)), so let Plog(n) =poly(pA(n)), a polynomial in n. Thus, Tstep(Nj) =O(N2 j·Plog(n)). The total time for VAoTisPk j=1Tstep(Nj). This sum is bounded by k×Tstep(Nk), where Nk=n+k−1. This leads to: Time (VAoT)≤k·O(N2 k·Plog(n)). 33 Substituting k=O(2pA(n))andNk=O(n+ 2pA(n)) =O(2pA(n))(for large n), we get: Time (VAoT)≤O(2pA(n))·O((O(2pA(n)))2·Plog(n)) =O(23pA(n)·Plog(n)). Since Plog(n)is polynomial, we write Plog(n) = 2log2(Plog(n)). The time complexity becomes: Time (VAoT)≤O(23pA(n)+log2(Plog(n))). Letp′′ A(n) = 3 pA(n) +log2(Plog(n)). As pA(n)andPlog(n)are polynomials, p′′ A(n)is also a polynomial inn. Therefore, the time complexity is Time (VAoT) =O(2p′′ A(n)), which is exp(n). We have constructed a DTM verifier VAoTthat takes an input wand a certificate ucert=SAoTof length |ucert|=O(2p′ A(n)) =exp(n), and verifies in time Time (VAoT) =O(2p′′ A(n)) =exp(n)whether w∈L. This precisely matches the verifier definition of NEXP. Therefore, AoT(exp(n))⊆NEXP. 
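The token-by-token re-simulation performed by $V_{\mathrm{AoT}}$ (step 2 of the verifier construction) can be illustrated with a minimal, runnable sketch. Here `model_step` is a toy deterministic stand-in of our own for a Transformer forward pass, and the acceptance convention (last token equals a designated accepting token) is likewise invented for the example:

```python
# Minimal sketch of the V_AoT verification loop: re-simulate the model
# deterministically and compare each token of the claimed trace S_AoT.

def model_step(prefix):
    """Toy deterministic 'next-token' function: sum of prefix mod 7."""
    return sum(prefix) % 7

def v_aot(w, s_aot, accepting_token):
    """Accept <w, S_AoT> iff the trace replays exactly and ends in an
    accepting token -- k forward passes for a trace of length k."""
    prefix = list(w)
    for s_j in s_aot:
        if model_step(prefix) != s_j:  # s'_j != s_j  =>  reject
            return False
        prefix.append(s_j)             # prefix becomes (w, s_1, ..., s_j)
    return bool(s_aot) and s_aot[-1] == accepting_token

# The only traces V_AoT accepts are those the model itself would generate:
print(v_aot([3, 5], [1, 2, 4, 1], accepting_token=1))  # True
print(v_aot([3, 5], [1, 2, 3], accepting_token=3))     # False (mismatch at step 3)
```

The loop performs exactly $k$ forward-pass simulations, mirroring the $k \cdot T_{\mathrm{step}}$ cost analysis in step 3.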
Combining the inclusions from Part I ($\mathrm{NEXP} \subseteq \mathrm{AoT}(\exp(n))$) and Part II ($\mathrm{AoT}(\exp(n)) \subseteq \mathrm{NEXP}$), we conclude that $\mathrm{AoT}(\exp(n)) = \mathrm{NEXP}$. This establishes that decoder-only Transformers, under Assumption 2, when augmented with an AoT generating an exponential number of tokens along any single accepting path, possess computational power equivalent to that of exponential-time Nondeterministic Turing Machines.

E.6 Proof of Theorem 3.5: Core Reasoning Tokens

Proof. Let $M_T$ be a decoder-only Transformer (satisfying Assumption 2) that decides a language $L$ by implicitly running an internal algorithm $A_{M_T}$ through its generation of intermediate tokens. We define the core computational trace $S_{\mathrm{core}}$ (of length $k_{\mathrm{core}}(n)$) to be the shortest subsequence of tokens capturing all essential, progressive computational steps of $A_{M_T}$ (redundant tokens, self-corrections, and filler content are not included). Evidently (Zeng et al., 2025; Li et al., 2025; Wu et al., 2025; Ballon et al., 2025; Su et al., 2025), this core length is bounded by the total: $k_{\mathrm{core}}(n) \le k_{\mathrm{total}}(n)$, where $k_{\mathrm{total}}(n)$ is the total number of tokens produced by $M_T$ on inputs of size $n$. The complexity class governing $L$ depends jointly on the growth of $k_{\mathrm{core}}(n)$ and on the nature of $A_{M_T}$:

• Deterministic (CoT case). If $A_{M_T}$ operates as a step-by-step deterministic computation (analogous to a DTM), then $k_{\mathrm{core}}(n) = O(\mathrm{poly}(n)) \Rightarrow L \in \mathrm{P}$, and $k_{\mathrm{core}}(n) = O(\exp(n)) \Rightarrow L \in \mathrm{EXP}$.

• Nondeterministic (AoT case). If an accepting computation of $A_{M_T}$ amounts to guessing and then verifying a witness (analogous to an NTM), then $k_{\mathrm{core}}(n) = O(\mathrm{poly}(n)) \Rightarrow L \in \mathrm{NP}$, and $k_{\mathrm{core}}(n) = O(\exp(n)) \Rightarrow L \in \mathrm{NEXP}$.

Let $C_{\mathrm{poly}}(A_{M_T})$ denote the polynomial complexity class and $C_{\mathrm{exp}}(A_{M_T})$ the exponential complexity class that corresponds to the computational nature of $A_{M_T}$,
defined as: $C_{\mathrm{poly}}(A_{M_T}) = \mathrm{P}$ if $A_{M_T}$ is deterministic and $\mathrm{NP}$ if $A_{M_T}$ is nondeterministic; $C_{\mathrm{exp}}(A_{M_T}) = \mathrm{EXP}$ if $A_{M_T}$ is deterministic and $\mathrm{NEXP}$ if $A_{M_T}$ is nondeterministic. The following hold:

(1) If $k_{\mathrm{core}}(n) = O(\mathrm{poly}(n))$, then $L$ lies in $C_{\mathrm{poly}}(A_{M_T})$, regardless of any super-polynomial overhead in $k_{\mathrm{total}}(n)$.

(2) To decide a language $L \in C_{\mathrm{exp}}(A_{M_T}) \setminus C_{\mathrm{poly}}(A_{M_T})$, the algorithm must exhibit $k_{\mathrm{core}}(n) = O(\exp(n))$; mere exponential length of $k_{\mathrm{total}}(n)$ does not suffice if $k_{\mathrm{core}}(n)$ remains polynomial.

The key idea is that computational power is governed by the length $k_{\mathrm{core}}(n)$ of the minimal, redundancy-free trace $S_{\mathrm{core}}$, not by the possibly much larger $k_{\mathrm{total}}(n)$. Imagine an idealized Transformer $M^*_T$ that outputs exactly $S_{\mathrm{core}}$ and nothing more. Because $M^*_T$ runs the same algorithm $A_{M_T}$, its complexity is governed by $k_{\mathrm{core}}(n)$ and by whether $A_{M_T}$ is deterministic or nondeterministic.

Part I: Suppose $k_{\mathrm{core}}(n) = O(\mathrm{poly}(n))$. Two cases:

• Deterministic (CoT case). Then $M^*_T$ performs a polynomial-length simulation of a DTM, so by Theorem 3.1 we have $L \in \mathrm{P}$.

• Nondeterministic (AoT case). Here $M^*_T$ outputs a polynomial-size witness-verification pair; applying Theorem 3.2 yields $L \in \mathrm{NP}$.

Either way, $L \in C_{\mathrm{poly}}(A_{M_T})$. The extra $k_{\mathrm{total}}(n) - k_{\mathrm{core}}(n)$ tokens never raise the underlying complexity, even though simulating the raw output stream could take exponential time.

Part II: Assume $M_T$ decides a language $L \in C_{\mathrm{exp}}(A_{M_T}) \setminus C_{\mathrm{poly}}(A_{M_T})$. If, contrary to the claim, $k_{\mathrm{core}}(n) = O(\mathrm{poly}(n))$, Part I would force $L$ into $C_{\mathrm{poly}}(A_{M_T})$, a contradiction. Hence $k_{\mathrm{core}}(n)$ is super-polynomial. Moreover, the constructive inclusions from Theorems 3.3 and 3.4 show that an exponential-length core computational trace is both necessary and sufficient for languages that provably separate P from EXP or NP from NEXP. Therefore, $k_{\mathrm{core}}(n) = O(\exp(n))$ is required.

In conclusion, only by scaling the effective part of its reasoning tokens can a Transformer transcend polynomial-time capabilities.
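The four regimes established by the theorem can be summarized in one display (a LaTeX recap of the case analysis above):

```latex
% Complexity of L as a function of k_core(n) and the character of A_{M_T}:
\[
L \in
\begin{cases}
\mathrm{P}, & A_{M_T}\ \text{deterministic},\ k_{\mathrm{core}}(n) = O(\mathrm{poly}(n)),\\
\mathrm{EXP}, & A_{M_T}\ \text{deterministic},\ k_{\mathrm{core}}(n) = O(\exp(n)),\\
\mathrm{NP}, & A_{M_T}\ \text{nondeterministic},\ k_{\mathrm{core}}(n) = O(\mathrm{poly}(n)),\\
\mathrm{NEXP}, & A_{M_T}\ \text{nondeterministic},\ k_{\mathrm{core}}(n) = O(\exp(n)).
\end{cases}
\]
```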
Neural Restoration of Greening Defects in Historical Autochrome Photographs Based on Purely Synthetic Data

Saptarshi Neil Sinha^a, P. Julius Kühn^a, Johannes Koppe^b, Arjan Kuijper^{a,b}, Michael Weinmann^c
^a Fraunhofer IGD, Darmstadt, Germany; ^b TU Darmstadt, Darmstadt, Germany; ^c Delft University of Technology, Delft, Netherlands

Email addresses: saptarshi.neil.sinha@igd.fraunhofer.de (Saptarshi Neil Sinha), julius.kuehn@igd.fraunhofer.de (P. Julius Kühn), johannes.koppe@tu-darmstadt.de (Johannes Koppe), arjan.kuijper@igd.fraunhofer.de (Arjan Kuijper), michael.weinmann@tudelft.nl (Michael Weinmann)

| https://arxiv.org/abs/2505.22291v1 |

Abstract

The preservation of early visual arts, particularly color photographs, is challenged by deterioration caused by aging and improper storage, leading to issues such as blurring, scratches, color bleeding, and fading defects. In this paper, we present the first approach for the automatic removal of greening color defects in digitized autochrome photographs. Our main contributions include a method based on synthetic dataset generation and the use of generative AI with a carefully designed loss function for the restoration of visual arts. To address the lack of suitable training datasets for analyzing greening defects in damaged autochromes, we introduce a novel approach for accurately simulating such defects in synthetic data. We also propose a modified weighted loss function for the ChaIR method to account for color imbalances between defected and non-defected areas. While existing methods struggle to accurately reproduce original colors and may require significant manual effort, our method allows for efficient restoration with reduced time requirements.

Keywords: Image restoration; Synthetic data; Deep learning; Defect detection; Visual arts; Cultural heritage

Figure 1: Pipeline for synthetic data generation and network training to restore greening defects in digitized autochromes (real autochrome defects → generate representative synthetic defects → train deep learning network → test → post-processing → restored autochromes).

1. Introduction

Autochromes, invented in 1903 and introduced to the market in 1907 by Auguste and Louis Lumière, represent the first widely and commercially adopted method for color photography [1]. Utilizing glass plates coated with colored potato starch grains as color filters over a black-and-white emulsion, autochromes allowed for the creation of vivid and painterly color negatives through light projection, capturing the imagination of early 20th-century photographers and audiences. This principle dominated color photography for nearly three decades until it was replaced in the 1930s by Kodachrome films, which went beyond autochromes by using multiple layers of light-sensitive emulsions, each sensitive to a specific color (red, green, or blue), and by adding the dyes to the emulsion layers, thereby reproducing colors more accurately and offering finer grain and better resolution, similar in principle to the Bayer filter in modern digital sensors. In contrast to the archival stability of Kodachrome slides and films, which can retain their colors for decades without significant fading, autochromes are fragile and sensitive to physical damage and environmental conditions. Aging processes and inadequate storage may hence lead to deterioration in terms of blur, scratches, color bleeding, and color fading. In particular, the dyes and emulsion layers are prone to fading or discoloration. Furthermore, the involvement of sensitive glass plates poses challenges for damage-free conservation, leading to various types of defects such as trapped dust, air bubbles, and moisture-related color issues. A common defect, known
as greening, occurs when the green potato starch grains bleed into adjacent areas due to their high sensitivity to water, resulting in unwanted green spots in the final image (see Fig. 2). Such artifacts distort the original appearance of these historical images, complicating their interpretation and diminishing their aesthetic and documentary value.

Figure 2: Greening defects in autochromes: (a) large-region defects and (b) small green spots (highlighted with zoom-in) due to bleeding of dyes into the surrounding regions [1].

arXiv:2505.22291v1 [cs.CV] 28 May 2025

To address the aforementioned challenges, previous investigations on autochromes focused on the analysis of defects from delamination and faded dyes [1] as well as the restoration of delamination [2] and of faded dyes [3]. Furthermore, the pre-trained Real-ESRGAN model has been introduced to enhance old photographs, including autochromes, however without specifically addressing greening defects [4]. AI tools like Midjourney [5] and Adobe Photoshop aim to enhance autochromes; however, they also do not specifically address greening defects or utilize image pairs trained for this type of restoration. Hence, whereas modern deep learning methods, particularly image segmentation networks for defect detection and image restoration networks, offer promising solutions, the scarcity of reference data for historical artworks as well as the aforementioned systematic defects pose challenges for the training of models for restoring defects in autochromes like greening. In this paper, we present, to the best of our knowledge, the first approach for successful automatic removal of greening color defects in digitized autochrome photographs. For this purpose, we present the following main contributions:

• An approach based on synthetic dataset generation and the use of generative AI with a carefully designed loss function for the restoration of visual arts (see Fig. 1).
• To address the lack of suitable training datasets for analyzing greening defects in damaged autochromes, we present a novel approach for accurately simulating such defects in synthetic data.

• A modified weighted loss function for the ChaIR [6] method to account for color imbalances between defected and non-defected areas.

Our approach enables efficient restoration of greening defects while minimizing time requirements, addressing the challenges faced by existing methods. We will provide the code and dataset upon acceptance to facilitate further research.

2. Related work

Generative AI based image color restoration: Digital image processing can assist in artwork restoration, serving as a guide for traditional methods or enabling fully digital restorations [7, 8]. Recent generative methods, such as inpainting techniques, allow for masking and replacing areas or filling in missing parts [9, 10]. Other approaches modify the color of old photographs, including GAN-based colorization methods like ChromaGAN [11], CycleGAN [12], and Pix2Pix [13]. Additionally, models for white-balancing or color-balancing adjust color temperature globally [14, 15, 16]. Image decomposition methods separate layers, enabling applications like image deraining or dehazing [6, 17, 18], which help restore structures obscured by semi-transparent objects. The Channel Interaction Restoration (ChaIR) model [6] achieves state-of-the-art performance on 13 benchmark datasets for dehazing, deblurring, and deraining by introducing a dual-domain channel attention mechanism that enhances interactions through lightweight convolutions in the spatial domain and integrates information from various frequency components. This paper explores the training, application, evaluation, and discussion of generative AI methods like ChaIR [6], CycleGAN [12], and Pix2Pix [13] for restoring greening defects in autochrome images, excluding diffusion-based approaches, which tend to overwrite image structures due to their noise-based generation [19]. We modify the loss function proposed in the ChaIR [6] model so that it enhances the accuracy of color correction by specifically targeting defect areas, allowing for a more significant adjustment of color representations within these regions compared to the original model, which applied uniform corrections across the entire image.

Synthetic defect generation for visual arts: While multiple labeled datasets, such as the ART500K dataset [20, 21], are available for developing digital artwork restoration methods and encompass a variety of artworks from different painters and epochs, challenges persist, including the scarcity of paired images of damaged and non-damaged artworks, the need to preserve colors and artistic styles, restoring large degraded areas with limited local information, generating masks for restoration areas, and identifying appropriate evaluation methods [8]. A key factor in developing an effective AI model is the availability of a challenging and representative dataset. However, for artwork datasets in general, and for our use case of greening defects occurring in autochromes in particular, there is insufficient annotated data for specialized image-to-image translation tasks. One solution is to create synthetic datasets [22, 23], which have gained importance in computer vision in recent years, allowing for an unlimited amount of training data and facilitating faster automatic data labeling.
However, the generation and use of synthetic data can raise ethical and social concerns, as well as security and compliance issues, which is why it is essential to document these processes transparently [22]. We address these concerns by incorporating domain knowledge provided by an expert on autochrome defects to adequately mimic the defects in the synthesized data, thereby overcoming the lack of ground truth (GT) data or image pairs exhibiting greening defects in old digitized autochrome images. In turn, the resulting synthetic data allows the training of restoration models that can handle greening defects.

3. Methodology

In this section, we introduce our core contributions on the generation of accurate synthetic data including greening defects, as required for the training of powerful restoration networks. Furthermore, we present extensions to restoration networks relevant for the restoration of autochromes.

Synthetic generation of greening defects: To address the key challenge of the absence of suitable datasets for damaged autochromes, we present an approach to create a synthetic dataset suitable for training and testing autochrome restoration methods. To ensure the quality of the synthetic dataset, we closely examined real defects in autochromes, revealing that they can be spot-shaped, wide-spread, a combination of both defect types, or cover the entire image. One to eleven spot-shaped defects are found on the
seven autochromes selected for closer evaluation, with diameters ranging from 1% to 5% of the image width. Larger defects can appear alone, covering up to one third of the image, and may merge with one another. We also analyzed greening defects and their effects on individual color channels (see Fig. 3) based on a per-channel investigation. Our findings revealed that the defects varied in color and transparency, were often bordered by orange tones, and appeared as dot-shaped rings with different shades of green and a dark core, originating from a single point and fading out in various directions due to liquid spreading.

Figure 3: Channel composition of an example autochrome from the Harold Taylor collection [24]: (a) greening defect, (b) red channel, (c) blue channel, (d) green channel.

Notably, the green color channel was the least affected, exhibiting increased intensity in damaged areas, whereas the red and blue channels showed significant decreases in color intensity (see Fig. 3). In an initially generated dataset (v1), we simulated defects based on color filters used in photography, specifically utilizing green filters (representing the foreground layer) for defect generation on an autochrome in the background layer. However, this approach ultimately failed to produce realistic results, as the defects exhibited low color variance, making them appear unrealistic. Hence, we created a second dataset (v2) that directly represents observable changes in the color channels at the defective areas. The process of defect generation is described below:

• Images can contain point defects (60%), large defects (30%), or both (10%). In addition, we generate defect masks following the characteristics of known defects.

• Defect origins may be point-shaped or linear, with intensity defined by $I = -d^2 + 1$, where $d$ is the normalized distance to the origin.
• We defined a set of rings with varying colors for each defect type, with ring diameters based on the defect's underlying intensity and distance from the center; for each ring, a percentage change in the individual color channels is defined and randomly adjusted by a factor of 0.2 in both directions (see the supplemental materials¹ for the respective algorithm).

• The combined defect pattern is smoothed with a Gaussian filter. The intensity change $\Delta I_c$ for color channel $c$ is calculated as $\Delta I_c = (p_c \cdot I_c) - I_c$, where $p_c$ is the corruption percentage. This change is applied to the final image.

¹ https://drive.google.com/file/d/1uj6hLeeWKRMTid6O2rhBvVGiv_jxtMtJ/view?usp=sharing

Figure 4: Example synthetic greening defects (v2): (a) spotting defect, (b) wide-area defect.

Training on restoration networks: Being used for different use cases, we trained image-to-image models (Pix2Pix [13] and CycleGAN [12]) on the synthetic data (v1) with the original settings. Furthermore, we trained a ChaIR model [6], since it achieved state-of-the-art performance for deraining and dehazing tasks, on both of our datasets. The original formulation relies on spatial and frequency loss functions to assess the model's performance in both domains, thereby supporting dual-channel attention mechanisms. The spatial and frequency loss functions were defined as:

$l_s = \sum_{i=1}^{3} \frac{1}{S_i} \lVert \hat{Y}_i - Y_i \rVert_1$ (1)

$l_f = \sum_{i=1}^{3} \frac{1}{S_i} \lVert F(\hat{Y}_i) - F(Y_i) \rVert_1$ (2)

where $\hat{Y}_i$ stands for the predicted images and $Y_i$ for the ground truth (GT) images, and $F$ describes
the Fast Fourier transform. The final loss is computed as $l = l_s + 0.1\, l_f$. However, the main challenge in autochrome restoration is the correct representation of the original colors of the defective area. To place a stronger focus on the proper color representation of the defective area, we therefore replace the spatial loss $l_s$ by:

$l_s = \frac{1}{N} \sum_{x,y} W(x,y) \cdot \lvert I_{\mathrm{pred}}(x,y) - I_{\mathrm{GT}}(x,y) \rvert$ (3)

where

$W(x,y) = \begin{cases} 1.0 & \text{if } \lvert I_{\mathrm{in}}(x,y) - I_{\mathrm{GT}}(x,y) \rvert > 0.1 \\ w & \text{otherwise,} \end{cases}$ with $w \in \{0.1, 0.5\}$ (4)

The weight matrix $W$ matches the size of the processed images and assigns a weight to incorrect color representations in defect areas that is two to ten times higher than in non-defective areas, while the remaining components of the loss function remain unchanged; defect areas are determined through a before-and-after comparison of input images and ground truth (GT) during training.

Figure 5: Effect of weighting the defected areas 10 times with our loss function: (a) defected region, (b) chair_v2_default, (c) chair_v2_loss10.

Semi-automated restoration: To gain evidence on practical relevance, we let an expert designer post-process the de-greened result using Photoshop to correct defects. According to the received feedback, the defect mask facilitated quick identification, and the de-greened result allowed for effective corrections, enabling prompt issue resolution (see the supplementary material for example results), thereby demonstrating the efficiency and potential of our approach.

4. Dataset and implementation

We utilized the Harold Taylor collection [24], comprising 420 autochrome images available under a public domain license for free use. We labeled the defects with assistance from an expert, identifying 306 images without visual damage and 95 with greening defects: 15 were classified as having strong greening defects affecting a substantial part of the image, while the remaining 80 showed milder defects.
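As an illustration, the weighted spatial loss of Eqs. (3) and (4) can be sketched in a few lines of NumPy; the array names and the toy 2×2 example are ours, not the authors' implementation:

```python
import numpy as np

# Pixels whose input already deviates from the ground truth by more than
# 0.1 (the defect areas, found by an input-vs-GT comparison) get weight
# 1.0; all other pixels get a down-weight w in {0.1, 0.5}.

def weighted_spatial_loss(pred, gt, inp, w=0.1, thresh=0.1):
    W = np.where(np.abs(inp - gt) > thresh, 1.0, w)  # Eq. (4)
    return float(np.mean(W * np.abs(pred - gt)))     # Eq. (3)

# Toy 2x2 single-channel image with one "defected" pixel (top-left).
gt   = np.array([[0.5, 0.5], [0.5, 0.5]])
inp  = np.array([[0.9, 0.5], [0.5, 0.5]])  # input deviates at top-left
pred = np.array([[0.7, 0.6], [0.6, 0.6]])

# The defected pixel's error dominates; errors elsewhere are damped by w.
print(weighted_spatial_loss(pred, gt, inp, w=0.1))  # ~0.0575
```

With $w = 0.1$ the defect area is weighted ten times higher than intact pixels, and with $w = 0.5$ two times higher, which plausibly corresponds to the chair_v2_loss10 and chair_v2_loss2 settings in Table 1 (our reading, not stated explicitly in the text).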
We used the 306 defect-free images to synthetically generate defects, creating defected-undamaged pairs. Training was conducted on a cluster equipped with NVIDIA A100 GPUs. The corresponding training parameters are presented in Table 1. For details of the conducted experiments, please refer to the supplementary material. The entries for chair_itspretrain and chair_otspretrain were included for clarity and were not trained. Models based on Pix2Pix and CycleGAN were initially trained for 100 epochs with an initial learning rate, followed by fine-tuning for another 100 epochs with a linearly decaying learning rate.

Table 1: Quantitative results

Experiment Name | Dataset | Epochs | Loss Type | Mean PSNR | Mean MS-SSIM
p2p_v1_netdpixel | v1 | LR | vanilla | 35.088 | 0.983
cg_v1_default | v1 | LR | vanilla | 29.311 | 0.959
chair_itspretrain | RI [25] | 300 | - | 19.746 | 0.928
chair_otspretrain | RO [25] | 30 | - | 22.216 | 0.947
chair_v1_default | v1 | 300 | S+F | 34.333 | 0.987
chair_v2_default | v2 | 300 | S+F | 34.795 | 0.985
chair_v2_transfer | RO+v2 | 300 | S+F | 39.492 | 0.995
chair_v2_loss2 | v2 | 300 | custom | 34.754 | 0.984
chair_v2_loss10 | v2 | 300 | custom | 34.776 | 0.985

* S+F = spatial + frequency, RI = RESIDE Indoor, RO = RESIDE Outdoor, LR = 100 epochs (initial) + 100 epochs (decay), v1 = version 1 synthetic dataset, v2 = version 2 synthetic dataset.

5. Evaluation

To demonstrate the potential of our approach, we conducted both quantitative and qualitative analyses. We selected the ChaIR model for training on the v2 dataset because the evaluation of all
models with the v1 dataset (see Fig. 6) clearly showed that the image-to-image models impacted regions beyond the defect areas.

Figure 6: Difference maps showing the regions affected by image restoration in comparison to the ground truth (GT): (a) synthetic defect, (b) GT, (c) p2p_v1_netdpixel, (d) cg_v1_default, (e) chair_v1_default.

Quantitative results: The results in Table 1 present PSNR and SSIM scores between synthetic references and outputs, divided into two groups based on datasets v1 and v2 for comparability. Metrics are averaged over all test images. chair_v2_transfer shows the best results among all methods. Our loss function yields similar results for defect-area weighting with w = 0.1 and w = 0.5. To evaluate our method against state-of-the-art photo editing software like Photoshop, we compared our results with AI-based tools integrated into Photoshop (ps_genfill) and manual corrections by a designer (ps_mancc). As shown in Table 2, the ChaIR model trained on synthetic data outperforms the Photoshop results.

Table 2: Comparison with photo editing tools (Photoshop)

Experiment Name | MS-SSIM | SSIM Cropout
ps_genfill | 0.991 | 0.504
ps_mancc | 0.996 | 0.946
chair_v2_default | 0.997 | 0.964
chair_v2_transfer | 0.998 | 0.972

Qualitative analysis: The results of the 'Generative Fill' method in Adobe Photoshop (see Fig. 8) indicate a noticeable change in the defective areas, suggesting that this method is unsuitable for restoration as it ignores and overwrites underlying structures. Although it effectively removes small stationary defects, it struggles with larger areas. We therefore exclude this method from our qualitative color-histogram analysis (see Fig. 10).

Figure 8: Degreening of defects using the ps_genfill method in Photoshop; the image structure is not preserved: (a) large-area defect, (b) de-greened result (defect removal in stationary areas).

A visual comparison in Fig. 5 shows no significant qualitative differences between the loss model and the default model, with only slight improvements in color shading. Histograms indicate that all methods shift color values towards a more normal distribution and, contrary to initial assumptions, the number of green tones remains stable while other colors adjust to align with the green values. Additionally, Fig. 7 demonstrates that our method outperforms classical techniques like histogram matching around defected regions, even when the characteristic noise of the autochrome is modified and the greening effect is not completely removed.

Figure 7: Comparison of our de-greened result (chair_v2_transfer) and histogram matching of the region of interest with the reference region.

Fig. 9 presents a qualitative analysis of real defects, demonstrating that the chair_v2_transfer model effectively detects defected regions. We selected this model due to its superior quantitative results. While greening defects are significantly reduced, small regions may still be inadvertently affected, particularly in images with larger defected areas.

Figure 9: Qualitative analysis on the real dataset shows the degreening (b) of the affected regions (a). The affected areas are shown using a difference mask (c).

Figure 10: Histogram analysis of the images after degreening: (a) defected image, (b) ps_mancc, (c) chair_v2_default, (d) chair_v2_transfer.
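The histogram-matching baseline referenced in Fig. 7 can be reproduced with standard per-channel quantile (CDF) matching; this NumPy sketch is our own illustrative implementation with invented data, not the one used in the paper:

```python
import numpy as np

# Map each channel of a "greened" region onto the value distribution of
# a clean reference region by matching empirical CDFs.

def match_channel(src, ref):
    """Remap src values so their empirical CDF matches ref's."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_q = np.cumsum(s_cnt) / src.size        # source quantiles
    r_q = np.cumsum(r_cnt) / ref.size        # reference quantiles
    matched = np.interp(s_q, r_q, r_vals)    # quantile -> reference value
    return matched[s_idx].reshape(src.shape)

def match_histograms(src, ref):
    """Apply CDF matching independently to each color channel."""
    return np.stack([match_channel(src[..., c], ref[..., c])
                     for c in range(src.shape[-1])], axis=-1)

rng = np.random.default_rng(1)
ref = rng.uniform(0.4, 0.6, (32, 32, 3))  # clean reference region
src = rng.uniform(0.0, 1.0, (32, 32, 3))  # "greened" defect region
out = match_histograms(src, ref)
print(out.min() >= 0.4 - 1e-9, out.max() <= 0.6 + 1e-9)
```

As the paper observes, such global per-channel remapping cannot localize corrections to defect areas, which is one reason the trained restoration model compares favorably around defected regions.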
6. Conclusion

Our method effectively identifies defects of all sizes and locations by adjusting the colors of detected areas, making them less recognizable. However, it struggles to accurately reproduce the original colors of autochrome images, often resulting in bluish tones, and it misses very small defects. In contrast, Photoshop-based methods enable targeted defect removal in stationary regions but require significant manual effort and expertise in non-stationary areas, where removal is often ineffective without de-greening. Our method's output can enhance editing tools, allowing designers to achieve similar results more efficiently and tackle defects that are otherwise difficult to manage. Additionally, deep inpainting algorithms show promise in restoring greening defects in stationary areas using masks generated by our algorithms. Future work should focus on expanding the synthetic dataset and identifying suitable no-reference metrics to evaluate restoration quality across a wider range of samples. Furthermore, the methodologies developed could be adapted to restore other autochrome defects, such as oranging, and applied to various artworks and image types.

References

[1] B. Lavedrine, J.-P. Gandolfo, C. Capdero, The Lumiere Autochrome: History, Technology, and Preservation, Getty Publications, 2013.
[2] U. Müller, A method of consolidating delaminated autochrome plates from the photograph collection of the Albertina Museum in Vienna, in: AICCM Symposium 2006, Conservation of Paper, Books and Photographic Materials. Post-prints and Posters. 4th Book, Paper & Photographs Symposium, 2006.
[3] G. Barker, J. Hubicka, M. Jacobs, L. Kimrová, K. Meyer, D. Peterson, Finlay, Thames, Dufay, and Paget color screen process collections: Using digital registration of viewing screens to reveal original color, CoRR abs/2211.16076 (2022).
[4] N. Saunders, Enhancement of old colour photographs using generative adversarial networks, accessed: 22 Jan 2025 (2021).
[5] A. Kovalev, Autochrome print Midjourney style, accessed: 26 Oct 2024 (2024).
[6] Y. Cui, A. Knoll, Exploring the potential of channel interactions for image restoration, Knowledge-Based Systems 282 (2023).
[7] M. Barni, F. Bartolini, V. Cappellini, Image processing for virtual restoration of artworks, IEEE Multimedia 7 (2) (2000) 34–37.
[8] P. Kumar, V. Gupta, Preserving artistic heritage: A comprehensive review of virtual restoration methods for damaged artworks, Archives of Computational Methods in Engineering (2024).
[9] H. Zheng, Z. Lin, J. Lu, S. Cohen, E. Shechtman, C. Barnes, J. Zhang, N. Xu, S. Amirghodsi, J. Luo, CM-GAN: Image inpainting with cascaded modulation GAN and object-aware training, CoRR abs/2203.11947 (2022).
[10] R. Suvorov, E. Logacheva, A. Mashikhin, A. Remizova, A. Ashukha, A. Silvestrov, N. Kong, H. Goka, K. Park, V. Lempitsky, Resolution-robust large mask inpainting with Fourier convolutions, in: WACV, 2022, pp. 3172–3182.
[11] P. Vitoria, L. Raad, C. Ballester, ChromaGAN: Adversarial picture colorization with semantic class distribution, in: WACV, 2020, pp. 2434–2443.
[12] J. Zhu, T. Park, P. Isola, A. A. Efros, Unpaired image-to-image translation using cycle-consistent adversarial networks, in: ICCV, 2017, pp. 2242–2251.
[13] P. Isola, J. Zhu, T. Zhou, A. A. Efros, Image-to-image translation with conditional adversarial networks, in: CVPR, 2017, pp. 5967–5976.
[14] M. Afifi, B. L. Price, S. Cohen, M. S. Brown, When color constancy goes wrong: Correcting improperly white-balanced images, in: CVPR, 2019, pp. 1535–1544.
[15] M. Afifi, M. S. Brown, Deep white-balance editing, in: CVPR, 2020, pp. 1394–1403.
[16] M. Afifi, Semantic white balance: Semantic color constancy using convolutional neural network, CoRR abs/1802.00153 (2018).
[17] Z. Zou, S. Lei, T. Shi, Z. Shi, J. Ye, Deep adversarial decomposition: A unified framework for separating superimposed images, in: CVPR, 2020, pp. 12803–12813.
[18] Z. Zhang, J. Han, C. Gou, H. Li, L. Zheng, Strong and controllable blind image decomposition, CoRR abs/2403.10520 (2024).
[19] C. O'Brien, J. Hutson, T. Olsen, J. Ratican, Limitations and possibilities of digital restoration techniques using generative AI tools: Reconstituting Antoine François Callet's Achilles Dragging Hector's Body Past the Walls of Troy, Arts & Communication (11 2023).
[20] H. Mao, M. Cheung, J. She, DeepArt: Learning joint representations of visual arts, in: ACM Multimedia, 2017, pp. 1183–1191.
[21] H. Mao, J. She, M. Cheung, Visual arts search on mobile devices, ACM Trans. Multim. Comput. Commun. Appl. 15 (2s) (2019) 60:1–60:23.
[22] S. Hao, W. Han, T. Jiang, Y. Li, H. Wu, C. Zhong, Z. Zhou, H. Tang, Synthetic data in AI: Challenges, applications, and ethical implications, CoRR abs/2401.01629 (2024).
[23] G. Paulin, M. Ivasic-Kos, Review and analysis of synthetic dataset generation methods and techniques for application in computer vision, Artif. Intell. Rev. 56 (9) (2023) 9221–9265.
[24] H. A. Taylor, Harold Taylor slide collection, California Revealed, visited on 11/26/2024 (1896).
[25] B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, Z. Wang, Benchmarking single-image dehazing and beyond, IEEE Trans. Image Process. 28 (1) (2019) 492–505.
arXiv:2505.22303v1 [cs.HC] 28 May 2025

VOICE CMS: UPDATING THE KNOWLEDGE BASE OF A DIGITAL ASSISTANT THROUGH CONVERSATION

A PREPRINT

Grzegorz Wolny
Orange Research
Warsaw, Poland
grzegorz.wolny@orange.com

Michał K. Szczerbak
Orange Research
Warsaw, Poland
michal.szczerbak@orange.com

May 29, 2025

ABSTRACT

In this study, we propose a solution based on a multi-agent LLM architecture and a voice user interface (VUI) designed to update the knowledge base of a digital assistant. Its usability is evaluated against a more traditional graphical content management system (CMS), with a focus on understanding the relationship between user preferences and the complexity of the information being provided. The findings demonstrate that, while the overall usability of the VUI is rated lower than that of the graphical interface, users already prefer it for less complex tasks. Furthermore, the quality of content entered through the VUI is comparable to that achieved with the graphical interface, even for highly complex tasks. The qualitative results suggest that a hybrid interface combining the strengths of both approaches could address the key challenges identified during the experiment, such as reducing cognitive load through graphical feedback while maintaining the intuitive nature of voice-based interactions. This work highlights the potential of conversational interfaces as a viable and effective method for knowledge management in specific business contexts.

Keywords voice user interface · knowledge management · digital assistant · bot · user experience · human-AI interaction · knowledge-grounded conversation · knowledge extraction · comparative / qualitative study

1 Motivation

The last couple of years have revolutionized the field of bots thanks to rapid advances in large language models (LLMs), the main promise being more natural conversations between assistants and humans Kouba et al. [2023].
In fact, LLM technology is increasingly replacing classical bot products built on intent recognition models, which required carefully curated knowledge bases organized as conversation trees and often resulted in very inflexible dialogues that users had to adjust to. User experience is thus expected to improve with bots employing LLMs. The opposite happens, however, when one discovers that behind well-rounded words perfectly matching the question asked hides a lie Kim et al. [2024] Oelschlager [2024]. Due to their very nature and the way they were trained, LLMs readily invent a chain of words unrelated to any truth when they lack access to the true information. To cope with the problem of knowledge missing at the time an LLM is trained, different methods have been introduced, including retrieval-augmented generation (RAG), critical reasoning, tool use, etc. Tonmoy et al. [2024]. Still, the quality of the information provided to LLM-based conversation systems plays a major role in creating trustworthy assistants. What if this knowledge is not fixed, though? What if it changes slightly but constantly, at the pace of company decisions and an ever-changing business context? We consider a videobot in the role of a customer-facing digital assistant used in a hotel lobby to provide guests with up-to-date information about, for example, hotel events or the restaurant. Whereas a city-level attractions calendar can be provided
by a specialized service provider, through an API for instance, ensuring that the digital assistant's knowledge is always valid falls to the hotel staff, who have the information in the first place. Indeed, a bot degenerating over time, holding obsolete information and misleading users, can very swiftly become more of a problem than a benefit to any company. On the other hand, it might be too much to ask hotel employees to keep the digital assistant's knowledge valid, should the method of doing so be too complex. Our current research therefore focuses on providing users with an effective interface that facilitates maintaining the knowledge base of digital assistants in the service of a real small to medium-sized company. We take the example of a hotel in Poland, which provides us with an environment for our tests and experiments on digital assistants. In this particular work, we explore an interface that is natural for both humans and digital assistants, namely the voice user interface (VUI). In fact, it is already the interface used between the end users seeking information and our videobot, which we describe in more detail in Lesiak et al. [2025]. When surveyed, the hotel owners indicated that adding a new graphical interface, for example in the form of a dedicated web application, would be ineffective: employees would first need to learn it and then log into it every time a new event needs to be added or the restaurant menu changes. They assume that passing information vocally, as one would to a junior co-worker, would increase the chances of employees actually transmitting the information to the system.
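The retrieval-augmented generation approach mentioned above can be made concrete with a minimal sketch of grounding an assistant's answers in a small store of hotel facts. This is a toy illustration only: the word-overlap retriever, the `knowledge_base` contents, and the prompt template are our assumptions, not part of the system described in this paper.

```python
# Minimal retrieval-augmented generation (RAG) sketch: ground the
# model's answer in retrieved facts instead of letting it improvise
# when it lacks the knowledge. (Illustrative; a real system would use
# embedding-based retrieval and an actual LLM call.)

knowledge_base = [
    "The hotel restaurant serves breakfast from 7:00 to 10:30.",
    "The rooftop bar is closed on Mondays.",
    "Live jazz plays in the lobby every Friday at 19:00.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Toy retriever: rank facts by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda fact: len(q_words & set(fact.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Prepend the retrieved facts so the LLM answers from evidence."""
    context = "\n".join(retrieve(question))
    return f"Answer using only these facts:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("When does the restaurant serve breakfast?")
print(prompt)
```

The key point for the hotel scenario is that whatever staff add to `knowledge_base` immediately constrains what the assistant says, which is why keeping that store current matters so much.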
In this paper, we describe a solution to the challenge of creating both a backend system for managing the incremental knowledge base of the LLM-based digital assistant and a frontend VUI augmented with graphical elements to enhance the experience, answering the business need of the hotel. Moreover, we report on an experiment studying the conditions under which the vocal interface can prove useful as a complement to classical GUI-based content management systems (CMS). Ultimately, we aim to answer the research question (RQ): what is the difference in user experience between GUI and VUI in the context of providing new content to a system, and what does it depend on? The remainder of this document is structured as follows. First, we review relevant and recent literature in Section 2. Then, in Section 3, the proposed solution is described. Section 4 outlines the design of the experiment, from the hypothesis through the setup to its actual execution; the results are presented in Section 5. This leads to a discussion of both the contributions of our findings and their limitations in Section 6. Finally, we conclude in Section 7 by distilling the main messages and suggesting future research dimensions.
2 Background

Management of the knowledge base for a digital assistant is still more often than not restricted to its offline preparation, where an assistant is designed either to have a fixed and infrequently changing information base or to be linked to an external data source, e.g., in RAG solutions for LLMs. Hence, even slight changes to the knowledge require intervention on the data source and most probably at least some proficiency in development or digital data manipulation. We have identified only one work, Grassi et al. [2021], focusing on adding new knowledge to a conversational agent at run-time. Their use case centered on a humanoid robot in a care home and was technologically based on a pre-LLM knowledge base grounded in an ontology. The authors report on experiments assessing the technical performance of new concept extraction and of four methods developed to insert the recognized concepts into the ontology. They did not, however, include usability tests of teaching the system new topics via voice. Another case study of inserting data into a system through a VUI is discussed in Olivares-Rojas et al. [2025]. This method is introduced as an alternative to a traditional GUI in the context of a medical visit, where doctors should focus on the patients and the consultation itself instead of on a screen for capturing the information. There is a strong parallel here with our hotel use case, even if the latter concerns operational efficiency and aims to impact the knowledge base of the voice assistant itself. Another extension of a graphical interface with a vocal one is studied in Reicherts et al. [2022], where collaborative tasks of interpreting different visualizations on a screen were performed with or without a vocal assistant, effectively yielding insights into the differences in how humans interact with GUIs versus VUIs.
Ultimately, the results show the effectiveness of voice interfaces in the context of human-like collaboration. This is an encouraging conclusion for an assistant employed in a hotel, which, in the context of updating the knowledge base, might be treated by the staff as a junior co-worker. There have been several other studies over the last two decades evaluating voice interfaces, frequently in comparison to graphical interfaces. Both Chavez-Sanchez and Colin [2020] and Zhang et al. [2010] report on preliminary experiments around several tasks performed on iPhones, with promising results for VUI in terms of effectiveness and user preference. Even more optimistic results are discussed in Chandel et al. [2013], where the voice part of the system was prepared in a Wizard of Oz fashion and the focus was on differences between more and less literate users. An earlier study showed, however, that even though a vocal interface might be preferred and work faster, it may also be less accurate and more error-prone Damianos et al. [2003]. It should be noted that some of the previously mentioned publications refer to relatively old first attempts to compare the two types
of interfaces at a very different maturity level at that time. A more recent study by Cha and Ji [2024] emphasizes the importance of activity context for the preference between voice and touch. The reference GUI in that experiment consisted of buttons with Korean text, and the researchers observed a user-preference switching point depending on the number of syllables per button, which is a very specific use case. On the other hand, Limerick et al. [2015] gives a somewhat more general perspective on the problem of employing VUIs, showing that they suffer from a weaker sense of agency compared to GUIs. The authors argue that this does not depend on the voice-related technology and should hold true despite its progress. This relates to the natural human preference for voice in cooperation and for graphical interactions when focusing on a task Le Bigot et al. [2007], and to the question asked by the researchers in Qvarfordt et al. [2003] of whether interfaces should become human-like or stay tool-like. In the latter study, we learn that the more anthropomorphic the voice feedback given to experiment participants, the more they preferred the system to be human-like. In our work, we wish to further deepen the understanding of human preferences regarding VUIs and of the factors that impact both expressed satisfaction and measured cognitive load for such complex communication tasks as updating a digital voice assistant's knowledge base in a very specific business setting. As voice interfaces gain more and more interest, especially in the context of modern digital assistants using LLM-based conversational systems, there is growing attention to better evaluating their usability Deshmukh and Chalmeta [2024]. Some researchers focus on deriving new measures and qualities specific to digital virtual assistants, moving away from the focus on effectiveness Dutsinma et al.
[2022] and towards correlation with user satisfaction, intention of continued use, and recommendation Chen and Gong [2024]. Others advocate preparing new VUI design guidelines to address the differences from well-studied GUIs Nowacki et al. [2020] Murad et al. [2023], which we implement in our solution.

3 Solution

Figure 1: Digital assistant

The solution presented in this study builds upon a digital assistant equipped with a voice user interface (VUI). The assistant, implemented as a 3D-rendered digital character displayed on a large touchscreen in a hotel lobby (see Figure 1), serves as a customer-facing videobot. Its primary role is to provide hotel guests with up-to-date information about the hotel's offerings, assist in selecting menu items from the hotel restaurant, recommend nearby attractions, and help plan routes by displaying QR codes linking to navigation tools. The assistant is developed using the Unity engine, which not only renders the 3D character in real time but also supports the display of graphical elements such as banners, buttons, and QR codes. The system integrates speech recognition and synthesis capabilities, while the conversational engine is built on a multi-agent architecture using LangGraph. This architecture employs nodes to
define workflows aimed at responding to user queries, with most of the nodes using large language models, specifically Gemini 1.5 Flash, for natural language interpretation and generation. To ensure the assistant remains a reliable source of information, a method for providing and maintaining its knowledge base is required. This posed a challenge, as introducing a new IT system for knowledge management would impose a new obligation on hotel staff, requiring additional training and time investment. To address this, a voice-based content management system (Voice CMS) was developed, allowing staff to update the assistant's knowledge base through natural conversation. This approach aligns with the intuitive interaction style already employed by the assistant when communicating with guests, thus reducing the cognitive and operational load on hotel employees. The Voice CMS operates in a secure mode, activated by tapping a specific area of the touchscreen and entering an access code. The solution's architecture is illustrated in Figure 2, where the nodes represented as rectangles leverage large language models. Once activated (the true branch after VCMS mode), hotel staff can add new information by conversing with the assistant. During the initial phase of the interaction, the assistant summarizes the information provided, ensuring clarity and completeness. If the assistant detects that some details are unclear or missing, it can actively prompt the user for clarification (node Ask user). Simultaneously, the system constructs a draft entry for the VCMS in the background, which includes details such as the validity period of the information, its category, and the timeframe during which the assistant can use it in guest interactions (node Fill entry). Once the information is complete, the assistant vocally presents a full summary for confirmation. At this stage, staff can make corrections or approve the entry.
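The entry-building loop around the Fill entry and Ask user nodes can be sketched in plain Python. This is an illustrative approximation only: the actual system drives LLM-backed nodes in a LangGraph workflow, whereas here each node is a stub function, and the field names (`text`, `category`, `valid_from`, `valid_until`) are our assumptions rather than the authors' schema.

```python
import json
from datetime import date

# Illustrative sketch of the Voice CMS entry-building loop. LLM calls
# are replaced with trivial stubs; all field names are assumptions.

REQUIRED_FIELDS = ("text", "category", "valid_from", "valid_until")

def fill_entry(utterance: str, draft: dict) -> dict:
    """'Fill entry' node: extract structured fields from the staff's
    spoken input (stubbed; an LLM would perform this extraction)."""
    draft.setdefault("text", utterance)
    if "jazz" in utterance.lower():
        draft.setdefault("category", "event")
    return draft

def ask_user(draft: dict) -> list[str]:
    """'Ask user' node: list the details still missing from the draft,
    so the assistant can prompt the staff member for each one."""
    return [f for f in REQUIRED_FIELDS if f not in draft]

def process_entry(draft: dict) -> str:
    """'Process entry' node: serialize the confirmed entry as JSON,
    ready to be aggregated into the knowledge base."""
    return json.dumps(draft, ensure_ascii=False)

draft: dict = {}
fill_entry("Live jazz in the lobby every Friday at 19:00", draft)
for missing in ask_user(draft):          # assistant prompts for each gap
    draft[missing] = {"valid_from": str(date.today()),
                      "valid_until": "2025-12-31"}.get(missing, "n/a")
entry_json = process_entry(draft)        # added to the knowledge base
print(entry_json)
```

The design point this mirrors is that the draft accumulates in the background while the conversation continues, and only a complete, confirmed draft is serialized and handed to the knowledge base.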
Upon confirmation, the information is added to the knowledge base in the form of a JSON object, and the assistant is ready to receive another piece of information (node Process entry). It is worth noting that the Voice CMS mode can query the existing conversational engine to supplement knowledge or build context for the information currently being processed in the conversation (node Ask assistant). The Voice CMS is designed as an independent module integrated with the previously existing conversational engine. This modular approach ensures compatibility with other conversational systems, as the knowledge updates from the Voice CMS are aggregated with the assistant's existing knowledge base at a single integration node in the LangGraph workflow (node Aggregate VCMS updates). This design enhances the system's flexibility and scalability, making it suitable for deployment in various contexts beyond the hotel industry. By enabling natural, conversational updates to the knowledge base, the Voice CMS addresses the dual challenge of maintaining the assistant's reliability while minimizing the operational burden on hotel staff. This solution not only ensures that the assistant remains a trustworthy source of information but also aligns with its intuitive interaction style.

4 Evaluation

This study investigates the usability and effectiveness of voice and graphical interfaces in completing tasks of varying complexity, with