text string | source string |
|---|---|
multi-head self-attention mechanism [15], layer normalization [4], a position-wise feed-forward network, and residual connections [29]. The encoder-decoder structure serves as the foundation for many early influential LLMs, such as T5 [66] and BART [45]. These models have demonstrated strong capabilities on certain generation tasks [39, 67]. On the other hand, (causal) decoder-only Transformer models [63, 64], trained with the autoregressive language modeling objective [70, 79], have demonstrated exceptional performance in open-ended generation and reasoning tasks [13, 83, 82]. Their superior generalization capabilities have established decoder-only Transformers as the backbone of recent state-of-the-art LLMs such as PaLM [16], Falcon [3], LLaMA [77, 21], and ChatGPT [1, 51, 36]. In this work, we develop the Transformer Copilot framework to support both encoder-decoder and decoder-only Transformer architectures. Our intuition is to provide flexibility across a broad range of model configurations and downstream task scenarios. LLM Adaptation with Fine-tuning. Large language models perform well across many NLP tasks, yet adapting them to specialized tasks remains challenging [81]. The standard solution, full-parameter fine-tuning [52], retrains all model parameters on task-specific data. Full fine-tuning has proven effective at improving performance, but can be computationally costly [74]. Recent work on parameter-efficient fine-tuning approaches [40, 32, 86, 98, 28], such as prefix-tuning [47] and LoRA [32], aims to reduce the computational overhead by tuning only a small subset of model parameters while still leveraging the expressive power of pre-trained models. Our learning framework builds upon the fine-tuning paradigm of these methods and aims to refine both fine-tuning and inference by utilizing mistake information gathered along the model's learning trajectory.
Additionally, since the Copilot model retains the decoder module structure, our framework can seamlessly integrate with various adaptation techniques such as DoRA [50] and ReFT [85]. Differences from Boosting and Neural Exploration. The core idea of boosting [9, 24] is to train a series of models, where each subsequent model focuses on correcting the errors made by the previous ones. However, the proposed Copilot framework is distinct from boosting in several key ways. First, in boosting, the subsequent model is trained to correct the errors of a fixed, post-trained weak model, whereas the Copilot learns from the mistakes (errors) made by a strong, pre-trained Pilot model during its fine-tuning trajectory. Second, the labels (errors) that the subsequent model in boosting attempts to predict are derived from the fixed parameters of the preceding weak model, whereas the labels that the Copilot learns from are based on the fine-tuning dynamics of the Pilot's parameters. Third, while all models in boosting take only data features as inputs, the Copilot also takes the internal state of the Pilot model as part of its input. Fourth, boosting does not require modifications to the base models, whereas the Copilot framework involves modifying the model structure, specifically the Transformer architecture. Another related line of work is neural exploration methods [37, 87, 62, 5, 7]. For example, EE-Net [6] introduces an exploration neural network to manage trade-offs between exploitation and exploration in the contextual bandits setting. In contrast, the Copilot | https://arxiv.org/abs/2505.16270v1 |
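To make the contrast concrete, classic boosting can be sketched in a few lines: each new weak learner is fitted to the residual errors of the fixed ensemble built so far. This is an illustrative toy (plain residual boosting with one-split regressors), not the Copilot method; all names and data are ours:

```python
import numpy as np

def fit_stump(x, residual):
    """Fit a one-split regressor minimizing squared error (toy version)."""
    best = None
    for t in np.unique(x)[:-1]:  # splitting at the largest x would leave one side empty
        left, right = residual[x <= t], residual[x > t]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda q: np.where(q <= t, lv, rv)

def boost(x, y, n_rounds=10, lr=0.5):
    """Each round fits a stump to the residuals of the fixed ensemble so far."""
    pred = np.full(len(y), y.mean())
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)  # labels derive from the previous, frozen models
        pred = pred + lr * stump(x)
    return pred

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 1.5, 3.5, 3.8, 4.0])
pred = boost(x, y)  # training error shrinks compared to predicting the mean
```

Note how every round's labels come from already-frozen predecessors; the Copilot instead conditions on a strong model's evolving fine-tuning trajectory and internal state.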
arXiv:2505.16276v1 [cs.AI] 22 May 2025. Peer reviewed and to appear in the ESWC 2025 Workshops and Tutorials Joint Proceedings. How do Scaling Laws Apply to Knowledge Graph Engineering Tasks? The Impact of Model Size on Large Language Model Performance. Desiree Heim (1,2,*), Lars-Peter Meyer (3,4), Markus Schröder (1), Johannes Frey (3,5), and Andreas Dengel (1,2). 1: DFKI, Kaiserslautern, Germany; 2: RPTU, Kaiserslautern, Germany; 3: InfAI, Leipzig, Germany; 4: TU Chemnitz, Germany; 5: Uni Leipzig, Germany. Abstract: When using Large Language Models (LLMs) to support Knowledge Graph Engineering (KGE), one of the first indications when searching for an appropriate model is its size. According to the scaling laws, larger models typically show higher capabilities. However, in practice, resource costs are also an important factor, and thus it makes sense to consider the ratio between model performance and costs. The LLM-KG-Bench framework enables the comparison of LLMs in the context of KGE tasks and assesses their capabilities of understanding and producing KGs and KG queries. Based on a dataset created in an LLM-KG-Bench run covering 26 open state-of-the-art LLMs, we explore the model size scaling laws specific to KGE tasks. In our analyses, we assess how benchmark scores evolve between different model size categories. Additionally, we inspect how the general score development of single models and families of models correlates to their size. Our analyses revealed that, with a few exceptions, the model size scaling laws generally also apply to the selected KGE tasks. However, in some cases, plateau or ceiling effects occurred, i.e., the task performance did not change much between a model and the next larger model. In these cases, smaller models could be considered to achieve high cost-effectiveness. Regarding models of the same family, sometimes larger models performed worse than smaller models of the same family. These effects occurred only locally.
Hence it is advisable to additionally test the next smallest and largest model of the same family. Keywords: LLM, RDF, SPARQL, Knowledge Graph, LLM Evaluation, Knowledge Graph Engineering, Scaling Laws. ESWC 2025: 2nd Workshop on Evaluation of Language Models in Knowledge Engineering (ELMKE), June 01–05, 2025, Portoroz, Slovenia. *Corresponding author. Contact: desiree.heim@dfki.de (D. Heim); lpmeyer@infai.org (L. Meyer); markus.schroeder@dfki.de (M. Schröder); frey@informatik.uni-leipzig.de (J. Frey); andreas.dengel@dfki.de (A. Dengel). ORCID: 0000-0003-4486-3046 (D. Heim); 0000-0001-5260-5181 (L. Meyer); 0000-0001-8416-0535 (M. Schröder); 0000-0003-3127-0815 (J. Frey); 0000-0002-6100-8255 (A. Dengel). © 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). 1. Introduction. Knowledge Graphs (KGs) [1] store facts about real-world domains in a structured way that facilitates reasoning to derive new information based on rules and existing knowledge. However, their creation and maintenance, commonly known as Knowledge Graph Engineering (KGE), usually require manual, labour-intensive efforts including activities such as drafting an appropriate ontology, transforming data sources to fit the required format, and solving data integrity problems. With the emergence of Large Language Models (LLMs), various approaches were developed with LLMs to support KGE tasks [2, 3, 4, 5, 6, 7, 8]. Once LLMs are employed, the question arises of how well they can cope with KGs and KGE challenges. To address this, the LLM-KG-Bench benchmark framework [9, 10] assesses the performance of LLMs on tasks requiring | https://arxiv.org/abs/2505.16276v1 |
the comprehension of KGs [11, 12], their schemata, and query languages [10, 13]. The results on this benchmark not only show single model performances but might also provide valuable indications on KGE-specific scaling laws of LLMs. Such scaling laws typically examine LLM task performance with respect to model size, training data size, or utilized computational training resources [14]. Especially regarding model sizes, a usual expectation is that the larger the LLM, the better its performance on downstream tasks. Yet, this assumption can be wrong. Moreover, larger models typically involve higher costs. In particular, the higher memory consumption of larger models compared to smaller ones is a highly relevant cost factor, since more, or more powerful, hardware such as GPUs is required. At the same time, the parameter count also influences inference time on the same hardware setting, as more parameters mean more weights need to be computed. Here, Mixture-of-Experts (MoE) models form an exception, since only the number of active parameters, i.e., the proportion of the total parameters selected at runtime, influences the inference time. Hence, when choosing between an MoE LLM and another one with the same number of total parameters and a similar task performance, the MoE model has a higher cost-effectiveness. However, using the same hardware to exclusively host either smaller or larger LLMs is, in practice, not necessarily realistic, since smaller models would not fully exhaust the hardware's (e.g., the GPUs') capability for the same setting, e.g., the same targeted number of concurrent requests and maximum input lengths. Hence, except for MoE models in comparison with similarly-sized LLMs, considering the memory requirement of LLMs is preferable. In conclusion, considering the cost-effectiveness, i.e. | https://arxiv.org/abs/2505.16276v1 |
particularly the memory requirements, the largest models may not be the best choice, and a good trade-off between model performance and the model's resource demand has to be found. In this paper, we therefore analyse LLM scaling laws on KGE tasks with respect to model sizes. The data for our analysis are drawn from a recently published LLM-KG-Bench benchmark run [10]. It covers 26 open state-of-the-art LLMs from five providers and in total eleven model families, i.e., series of models released by a specific provider. Using the benchmark results and a combination of statistical analysis and visualizations, we would like to give initial answers to the following questions: How do benchmark scores ... 1. ... relate to different LLM model size groups? 2. ... develop with respect to model sizes in general? 3. ... develop with respect to model sizes within specific model families? By answering these questions, we aim to gain more general insights about model size scaling laws on KGE tasks that can also be transferred to models not included in the benchmark run. This paper is structured as follows: Section 2 introduces related work. In Section 3, we describe the LLM-KG-Bench run and the obtained dataset used in our analyses. In Section 4, we analyse the dataset, in particular with respect to the correlation between model size and performance on the benchmark. | https://arxiv.org/abs/2505.16276v1 |
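The memory and inference-cost considerations above can be made concrete with a back-of-the-envelope sketch. The bytes-per-parameter figure and the helper names are our own illustrative assumptions; the MoE numbers (57B total, 14B active) are those of the Qwen2 MoE model in the benchmark run:

```python
# Weight memory scales with *total* parameters; for Mixture-of-Experts
# (MoE) models, per-token compute is driven by the *active* parameters only.

def weight_memory_gb(total_params_b: float, bytes_per_param: float = 2.0) -> float:
    """GB needed for the weights alone (2 bytes/param = fp16/bf16).
    Activations, KV cache, and framework overhead come on top."""
    return total_params_b * bytes_per_param  # 1e9 params * bytes, divided by 1e9 bytes/GB

def active_fraction(total_b: float, active_b: float) -> float:
    """Share of parameters used per forward pass (1.0 for dense models)."""
    return active_b / total_b

dense_72b = weight_memory_gb(72.0)       # 144.0 GB of fp16 weights
moe_57b = weight_memory_gb(57.0)         # 114.0 GB -- memory follows total size
moe_ratio = active_fraction(57.0, 14.0)  # ~0.25 -- compute follows active size
```

This is exactly the asymmetry discussed above: the MoE model still has to be stored at its full size, but only about a quarter of its weights participate in each forward pass.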
Following the analysis, we summarize and discuss the gained insights in Section 5. Section 6 concludes this paper and gives an outlook on future work. 2. Related Work. In order to compare the vast number of LLMs, there are several LLM leaderboards, which rank various LLMs based on a selection of benchmarks or workloads. Among the well-known leaderboards are Chatbot Arena [15], which evaluates models using human preference on interactive tasks, and the OpenLLM-Leaderboard [16], covering numerous standard tasks like MMLU, BBH, and GPQA across more than 2,000 models. Similarly, HELM [17] provides comprehensive evaluations including domain-specific benchmarks such as LegalBench and MedQA. Regarding code generation, which shares a few conceptual similarities with KGE benchmarking, several dedicated benchmarks and leaderboards exist too. Prominent code benchmarks include HumanEval and MultiPL-E, evaluated in the Big Code Models Leaderboard1, as well as EvalPlus [18], employing both HumanEval and the Mostly Basic Python Programming (MBPP) benchmark. The CanAiCode Leaderboard2 specifically targets text-to-code tasks for smaller LLMs. These code-focused benchmarks emphasize structured output, syntactical correctness, and execution correctness, mirroring the evaluation criteria in KGE tasks, thereby offering insights relevant to benchmarking structured outputs from LLMs. However, the mentioned attempts do not cover the evaluation of tasks specifically relevant to Knowledge Graph Engineering (KGE) [19], such as RDF syntax correctness, SPARQL query generation, or graph comprehension. Efforts addressing KG-related evaluations frequently target specific problems like Text-to-RDF conversion [20, 21], Knowledge Graph Question Answering (KGQA) [22], and SPARQL query generation [7, 23]. These evaluations typically focus only on isolated tasks and often involve manual assessments, which limits scalability and adaptability to newer LLMs and task variations.
An exception closely related to our interest in structured output is StructuredRAG [24], evaluating JSON-based structured responses from LLMs. To address gaps in existing benchmarking efforts, especially regarding RDF and SPARQL tasks, LLM-KG-Bench [9, 10] provides a specialized automated benchmarking environment for evaluating semantic correctness and syntax handling in RDF and SPARQL tasks. In contrast to general benchmarks like HELM or BigBench [25], LLM-KG-Bench emphasizes semantic and syntactic correctness rather than multiple-choice accuracy, significantly reducing technological complexity for creating and evaluating KG-related tasks [11, 12, 13]. Prior research has already investigated the correlation between LLM parameter size and task performance [14, 26]. Larger LLMs typically outperform smaller models on the same tasks and also exhibit emergent capabilities (not present in smaller models) such as complex reasoning or nuanced instruction-following abilities [26, 27]. However, this relationship is not universally linear; task type, complexity, and input and output structure can significantly influence whether larger models yield proportionally better performance. Scenarios and tasks w.r.t. Knowledge Graph Engineering, which typically requires dealing with RDF serialization formats and paradigms, remain underexplored. This study addresses this gap by explicitly examining how model size impacts performance across diverse RDF and SPARQL tasks within the context of KG engineering. 3. Dataset. This work analyses data generated by the LLM-KG-Bench framework [9, 10]. The LLM-KG-Bench framework offers the infrastructure to define KG engineering-related automated tasks that can be repeatedly executed on many LLMs | https://arxiv.org/abs/2505.16276v1 |
to evaluate their performance. Since the evaluation is also automated, the same experiments can be repeated, which increases reproducibility and gives a broader sample size for statistical analysis to take the probabilistic nature of LLM-generated answers into account. The dataset used in this work evaluated over 30 open and proprietary LLMs on 26 RDF- and SPARQL-related task variations. The dataset contains LLMs from three open LLM providers: Qwen, Meta-LLama, and Microsoft-Phi. They were selected because they provide official instruction-finetuned Large Language Models that were the highest-ranked on the Open LLM Leaderboard [16] in December 2024 with respect to their average score across all benchmarks included in the leaderboard3. We restricted our selection to models of up to 80B parameters due to restrictions on the hardware resources available to us. In addition, the dataset includes three LLMs fine-tuned or optimized for code understanding and generation, which also requires handling structured data similar to KG-related tasks: Qwen2.5-Coder-Instruct-32B, Infly-OpenCoder-8B-Instruct, and deepseek-coder-33b-instruct. For the selection of these models, we consulted the Mostly Basic Python Programming (MBPP) benchmark score reported by the EvalPlus Leaderboard [18] and decided on the top-ranked instruction-finetuned models not larger than 80B parameters that are reported to be explicitly optimized or fine-tuned for code. The model sizes range from 0.5 billion parameters up to 72 billion parameters. Footnotes: 1 https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard 2 https://huggingface.co/spaces/mike-ravkine/can-ai-code-results 3 Upstage, providing the Solar LLM family, was excluded here since the models only support a maximum context length of up to 4096 tokens, which was not sufficient for all tasks.
Two included LLMs are Mixture-of-Experts models: Qwen2-Instruct with 57 billion parameters (14 billion active) and Phi-3.5-instruct with 42 billion parameters (6.6 billion active). With Mixture-of-Experts models, only a subset of parameters is active during inference, resulting in a lower effective parameter count compared to the total model size. An overview of the evaluated models can be found in Table 1. More details on the models and their selection can be found in the dedicated paper [10]. In addition to the open LLMs, several proprietary LLMs from the OpenAI GPT, Google Gemini, and Anthropic Claude families that achieved constantly high scores on the Chatbot Arena Leaderboard [15] were included in the benchmark run, namely ChatGPT 3.5 turbo, ChatGPT 4o, ChatGPT 4o-mini, ChatGPT o1, ChatGPT o1-mini, Gemini 2.0 Flash, Gemini 1.5 Pro, Gemini 1.5 Flash, Claude 3.5 Sonnet, and Claude 3.5 Haiku. However, since model sizes for proprietary LLMs are not documented, we selected only the remaining 26 open LLMs for our main analysis and refer to the achieved scores of the proprietary models only briefly for comparison, to better classify the open LLM performance. From the 26 task variations included in the dataset, we analyse 23 variations of seven task classes in the KG engineering areas of RDF and SPARQL handling. To keep the comparison focused on the various input formats, three task variations of Text2Sparql using other KGs as datasets were excluded, and eight task variations of RdfFriendCount were aggregated into four task variations for the analysis. Task variations of a task class | https://arxiv.org/abs/2505.16276v1 |
have a similar prompt and evaluation but differ, e.g., by the serialization format (JSON-LD, N-Triples, Turtle, XML) presented to the LLM. For each open LLM, 50 repetitions per task variation were executed. To assess the performance of the LLMs, tasks compute several measures based on the LLM answers, with values in the interval [0, 1] and better answers resulting in higher scores. These measures often include ones based on recall, precision, and F1, as well as, e.g., brevity measures or ones indicating whether the answer was at least syntactically correct. For some tasks, there are variations of measures defined with different levels of strictness in the evaluation. We selected measures that examine the result correctness in sufficiently different ways to provide a concise overview. Therefore, brevity measures are not included, and F1-based measures were selected over precision- and recall-based measures. In the case of similar measures, only one representative was chosen, e.g., measures that check the responses relying on the requested output format or measures that search the answer for expected components. Here, stricter measures have been preferred over more relaxed ones. Regarding measures that operate on output lists, we selected measures that remove leading and trailing whitespace, since it is only a minor correction. Additionally, for tasks yielding RDF graphs or SPARQL queries, measures indicating their syntactical correctness were included. The different calculated measures can be classified into three types: Central: These format-sensitive answer quality measures assess the answer correctness sensitive to the instructed output format, i.e., the output accuracy is assessed assuming that the requested format is respected. They are listTrimF1, f1, strSimilarity, and trimF1.
Fragment: The fragment-based answer quality measures assess the answer correctness but are less strict regarding the answer format and account for correct answer parts. They include textHttpF1, contentF1, and sparqlIrisF1. Syntax: Syntactical answer correctness measures inspect whether the output is syntactically correct, i.e., fulfills all criteria for valid graphs or queries. The two measures parsableSyntax and answerParse belong to this category. In the following, the seven task classes and the measures selected for this analysis are briefly described. Table 1: Model sizes for the 26 open LLMs analysed, grouped by model family. Two of these models utilize the mixture-of-experts architecture (denoted by *), where only a subset of parameters is active during inference. Qwen2-Instruct: 0.5B, 1.5B, 7B, 57B*, 72B. Qwen2.5-Instruct: 0.5B, 1.5B, 3B, 7B, 14B, 32B, 72B. Qwen2.5-Coder-Instruct: 32B. Meta-LLama-3-Instruct: 8B, 70B. Meta-LLama-3.1-Instruct: 8B, 70B. Meta-LLama-3.2-Instruct: 1B, 3B, 70B. Meta-LLama-3.3-Instruct: 70B. Microsoft-Phi-3-instruct: 3.8B, 7B, 14B. Microsoft-Phi-3.5-instruct: 3.8B, 42B*. Infly-OpenCoder-8B-Instruct: 8B. deepseek-coder-33b-instruct: 33B. More information can be found in the LLM-KG-Bench documentation4 or in the articles introducing them [10, 11, 12, 13]. RdfConnectionExplain: This task consists of finding the shortest connection between two nodes in a small KG, which requires a basic understanding of serialization formats and RDF concepts. There are four variations of this task. Each presents the graph in a different serialization format: JSON-LD, N-Triples, Turtle, or RDF/XML. Here, a list of IRIs representing the shortest path is the expected answer | https://arxiv.org/abs/2505.16276v1 |
format. For the given answer, the task computes listTrimF1 as an F1-measure on trimmed list entries without leading or trailing whitespace. The textHttpF1 measure is an F1-measure on IRI-like answer parts starting, e.g., with "http://". RdfFriendCount: This task presents a small KG with nodes of one type and edges of one type. The LLM is asked to return the node with the most incoming edges. There are 4 KG serialization format variations: JSON-LD, N-Triples, Turtle, and RDF/XML. The task computes the f1 measure on the nodes found in the answer. RdfSyntaxFixing: A KG with syntax errors is provided and the LLM is queried to correct it. There are 3 variations introduced with the serialization formats JSON-LD, N-Triples, and Turtle. The measure parsableSyntax equals 1 if the RDF syntax in the answer is parsable (0 otherwise). strSimilarity is computed by comparing the given RDF with the expected answer, and contentF1 is the F1-measure on the given RDF graph on a triple level. Sparql2Answer: In this task, the LLM is asked to respond with the result set for a given SPARQL SELECT query given the KG. There are 2 variations with the graph serialization formats JSON-LD and Turtle. The answer should be a list of entities, and the trimF1 measure is computed as an F1-measure on the trimmed entities, where leading and trailing whitespaces are removed. SparqlSyntaxFixing: Similar to the RdfSyntaxFixing task, the LLM is asked to fix syntactically erroneous SPARQL SELECT queries. The measure answerParse equals 1 if the adapted SPARQL query syntax is correct (0 otherwise). The sparqlIrisF1 measure is the F1-measure on the IRIs found in the modified SPARQL query. The f1 measure refers to the result set obtained when executing the corrected SPARQL SELECT query. Footnote 4: Task documentation: https://github.com/AKSW/LLM-KG-Bench/blob/v3.0.0/LlmKgBench/tasks/README.md Text2Answer: Similar to the Sparql2Answer task, the LLM is asked to respond with the result set for a given natural language question given a small KG.
There are 2 variations of the graph presented in the serialization formats JSON-LD and Turtle. Similar to the Sparql2Answer task, the answer is expected as a list and the trimF1 measure is computed on the trimmed list elements. Text2Sparql: Here, a natural language question is presented together with information on a KG, and the LLM is asked to translate the question into a suitable SPARQL SELECT query. There are 3 variations with the KG presented in the form of a complete schema, only the relevant schema, or the relevant subgraph, all in Turtle syntax. The same measures as described for the SparqlSyntaxFixing task were selected: answerParse, sparqlIrisF1measure, and f1measure. For all tasks, the prompts are kept relatively simple and are not specifically optimized using prompt engineering, in order to assess the basic capabilities of the LLMs. Moreover, we refrain from using advanced prompting techniques to prevent certain models from gaining an unfair advantage that could occur if they were used in the prompt engineering process. In the following section, we analyse the described dataset. 4. Result Analysis. In this section, we report and analyse the results of the benchmark run. First, the overall task performance is examined to explore task-centered tendencies (Section 4.1). Second, we take | https://arxiv.org/abs/2505.16276v1 |
a closer look at task performances with respect to model sizes and shed light on two aspects: the comparison of performances between different size categories (Section 4.2) and the development of scores with respect to model sizes and families (Section 4.3). 4.1. Overall Task Performance. To get an overview of the benchmark scores achieved by the open LLMs included in the experiments, Table 2 lists the means and standard deviations of all LLM scores per task variation. Additionally, mean scores of individual LLMs as well as the highest and lowest intra-LLM mean are reported. Regarding the mean calculation, missing values of central and fragment measures originating from unparsable RDF or SPARQL outputs were filled with 0 to account for the fact that such outputs do not even meet the minimum quality criterion of syntactical correctness. Concerning tasks that allow corrections to initial answers (i.e., multiple prompt-answer-evaluate loops), only the last answer scores are considered in the table, since the means of all scores for the first and last answers show only minor differences. In the following, we examine the scores for each measure type listed in Table 2. The mean scores of the central measures are on average medium-high, being close to a score of 0.6 on the tasks SparqlSyntaxFixing, RdfConnectionExplain, Text2Answer, RdfSyntaxFixing, and Sparql2Answer. In contrast, RdfFriendCount stands out with low means between 0.06 and 0.29. For the Text2Sparql task, the two input variations turtle schema and subschema also got low scores of 0.13 and 0.10, while the input variation turtle subgraph achieved a comparably high mean score of 0.57. For the other tasks, the difference between the mean scores for input variations is comparably small, with a maximum difference of 0.25 between the lowest and highest mean in the task class RdfSyntaxFixing. Moreover, no clear task-overarching preference for a specific KG format (turtle, nt, jsonld, xml) is recognizable.
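The fill-with-zero aggregation described above can be sketched as follows; the record structure is hypothetical, and `None` stands in for a score missing due to an unparsable output:

```python
import statistics

# Scores from unparsable RDF/SPARQL outputs (None here) are filled with 0
# before computing the mean and standard deviation per task variation.
records = [
    {"task": "Sparql2Answer", "variation": "turtle", "score": 0.9},
    {"task": "Sparql2Answer", "variation": "turtle", "score": None},  # unparsable output
    {"task": "Sparql2Answer", "variation": "turtle", "score": 0.6},
]

def aggregate(records, task, variation):
    """Mean and population std of scores for one task variation, None -> 0."""
    scores = [r["score"] if r["score"] is not None else 0.0
              for r in records
              if r["task"] == task and r["variation"] == variation]
    return statistics.mean(scores), statistics.pstdev(scores)

mean, std = aggregate(records, "Sparql2Answer", "turtle")  # mean == 0.5
```

Filling with 0 rather than dropping the repetition is what pulls the reported means down for models that frequently produce unparsable output.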
Looking at the standard deviation, the scores on the central measures are widespread. Roughly 20% of the central measures have a standard deviation between 0.2 and 0.3, 40% have one larger than 0.3 up to 0.4, and the remaining 40% have a dispersion larger than 0.4 up to 0.5. This is also reflected in the minimum and maximum average central measure score per LLM. The highest minimum mean is 0.12 while, except for three outliers, the maximum means are 0.75 or higher, and most even have a mean of 1 or close to it. Here, only the turtle input variation of the RdfFriendCount task with a maximum score of 0.47 and the turtle schema and turtle subschema input variations of the Text2Sparql task with maximum intra-LLM means of 0.31 and 0.28 differ substantially from the other means. In all of these cases, this circumstance is also apparent in a low overall mean score.

Table 2: Basic statistics for the selected benchmark scores grouped by tasks and task variations: mean, standard deviation (std), and the minimum (min) and maximum (max) mean achieved by one specific LLM. For each measure, the respective type is provided. Tasks that allow for up to two retries are marked with (*); for these, the scores achieved by the last output are given.

| Task | Variation | Measure | Type | mean | std | min | max |
|---|---|---|---|---|---|---|---|
| RdfConnectionExplain | jsonld | listTrimF1 | central | 0.64 | 0.36 | 0.03 | 1.00 |
| | jsonld | textHttpF1 | fragment | 0.71 | 0.30 | 0.03 | 1.00 |
| | nt | listTrimF1 | central | 0.51 | 0.34 | 0.02 | 0.95 |
| | nt | textHttpF1 | fragment | 0.68 | 0.25 | 0.34 | 0.98 |
| | turtle | listTrimF1 | central | 0.68 | 0.32 | 0.08 | 1.00 |
| | turtle | textHttpF1 | fragment | 0.75 | 0.27 | 0.30 | 1.00 |
| | xml | listTrimF1 | central | 0.70 | 0.30 | 0.12 | 1.00 |
| | xml | textHttpF1 | fragment | 0.77 | 0.23 | 0.42 | 1.00 |
| RdfFriendCount | jsonld | f1 | central | 0.17 | 0.37 | 0.00 | 1.00 |
| | nt | f1 | central | 0.17 | 0.37 | 0.00 | 1.00 |
| | turtle | f1 | central | 0.06 | 0.22 | 0.00 | 0.47 |
| | xml | f1 | central | 0.29 | 0.45 | 0.00 | 1.00 |
| RdfSyntaxFixing (*) | jsonld | parsableSyntax | syntax | 0.81 | 0.39 | 0.00 | 1.00 |
| | jsonld | strSimilarity | central | 0.67 | 0.38 | 0.10 | 0.87 |
| | jsonld | contentF1 | fragment | 0.78 | 0.40 | 0.00 | 1.00 |
| | nt | parsableSyntax | syntax | 0.74 | 0.44 | 0.00 | 1.00 |
| | nt | strSimilarity | central | 0.61 | 0.43 | 0.08 | 1.00 |
| | nt | contentF1 | fragment | 0.65 | 0.46 | 0.00 | 1.00 |
| | turtle | parsableSyntax | syntax | 0.68 | 0.47 | 0.02 | 1.00 |
| | turtle | strSimilarity | central | 0.42 | 0.34 | 0.10 | 0.90 |
| | turtle | contentF1 | fragment | 0.67 | 0.46 | 0.01 | 1.00 |
| Sparql2Answer | jsonld | trimF1 | central | 0.54 | 0.47 | 0.01 | 1.00 |
| | turtle | trimF1 | central | 0.58 | 0.47 | 0.01 | 1.00 |
| SparqlSyntaxFixing (*) | | answerParse | syntax | 0.68 | 0.47 | 0.00 | 1.00 |
| | | f1measure | central | 0.60 | 0.49 | 0.00 | 1.00 |
| | | sparqlIrisF1measure | fragment | 0.66 | 0.46 | 0.00 | 1.00 |
| Text2Answer | jsonld | trimF1 | central | 0.57 | 0.48 | 0.02 | 1.00 |
| | turtle | trimF1 | central | 0.63 | 0.47 | 0.03 | 1.00 |
| Text2Sparql (*) | turtle schema | answerParse | syntax | 0.72 | 0.45 | 0.00 | 1.00 |
| | turtle schema | f1measure | central | 0.13 | 0.27 | 0.00 | 0.31 |
| | turtle schema | sparqlIrisF1measure | fragment | 0.30 | 0.30 | 0.00 | 0.53 |
| | turtle subschema | answerParse | syntax | 0.74 | 0.44 | 0.06 | 1.00 |
| | turtle subschema | f1measure | central | 0.10 | 0.25 | 0.00 | 0.28 |
| | turtle subschema | sparqlIrisF1measure | fragment | 0.35 | 0.29 | 0.01 | 0.57 |
| | turtle subgraph | answerParse | syntax | 0.81 | 0.39 | 0.06 | 1.00 |
| | turtle subgraph | f1measure | central | 0.57 | 0.45 | 0.00 | 0.93 |
| | turtle subgraph | sparqlIrisF1measure | fragment | 0.71 | 0.40 | 0.04 | 0.96 |
| https://arxiv.org/abs/2505.16276v1 |

The fragment measures show tendencies similar to the central measures. As expected, for all task variations their means are higher compared to those of the central measures. This is also reflected in the minimum and maximum intra-LLM means.
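The F1-style measures reported above (e.g., listTrimF1, computed on trimmed list entries) can be sketched as below. This is our simplified reading of the measure descriptions; the actual LLM-KG-Bench implementation may differ in details such as duplicate handling:

```python
# F1 over trimmed list entries, in the spirit of the listTrimF1 measure:
# entries are stripped of leading/trailing whitespace, then precision and
# recall are computed against the expected entries (duplicates collapse here).

def trimmed_f1(answer_items, expected_items):
    pred = {s.strip() for s in answer_items}
    gold = {s.strip() for s in expected_items}
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# A model answer with stray whitespace still scores full marks:
score = trimmed_f1(["  http://ex.org/a", "http://ex.org/b "],
                   ["http://ex.org/a", "http://ex.org/b"])  # score == 1.0
```

Trimming only whitespace is what makes these the stricter "central" measures: any other format deviation still costs precision or recall.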
Only for the RdfSyntaxFixing task is the minimum mean per LLM lower than that of the central measures. Notably, in contrast to the central measures, the fragment measures are only calculated if the output graph is syntactically correct; otherwise, they are 0. Last but not least, the means of the syntax measures for the RdfSyntaxFixing, SparqlSyntaxFixing, and Text2Sparql tasks are rather high, ranging between 0.68 and 0.81. However, the dispersion of values around the means is relatively high, with standard deviations between 0.39 and 0.47. Without exception, the minimum intra-LLM means are all close or equal to 0, and the maximums close or equal to 1. This subsection gave an overview of the models' performances on all tasks to characterize the task classes and their variations. Building upon that, the following two subsections focus on comparing the models' answer quality based on their sizes, i.e., trained parameters. First, in Section 4.2, task performances are compared between model size categories, and in Section 4.3 the development of scores in general and within families is visually assessed with respect to model sizes. 4.2. Size Category Performance Similarities. In the following analyses, only the central measures are included since they indicate most accurately whether a given answer is correct, also taking into account the adherence to the requested output format. | https://arxiv.org/abs/2505.16276v1 |
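The size grouping used in the following analysis (tiny [0–3B], small (3B–8B], medium (8B–33B], large (33B–72B]) can be expressed as a small helper; the bin edges follow the paper, while the function name is ours:

```python
# Map a model's parameter count (in billions) to its size category.
# Intervals follow the paper: tiny [0-3B], small (3B-8B],
# medium (8B-33B], large (33B-72B].

def size_category(params_billion: float) -> str:
    if params_billion <= 3:
        return "tiny"
    if params_billion <= 8:
        return "small"
    if params_billion <= 33:
        return "medium"
    if params_billion <= 72:
        return "large"
    raise ValueError("larger than the 72B upper bound of the studied models")

# Examples from Table 1: Qwen2.5-Instruct-3B is tiny (boundary is inclusive),
# Phi-3.5-instruct-3.8B is small, deepseek-coder-33b is medium.
categories = [size_category(p) for p in (3.0, 3.8, 33.0, 70.0)]
```

Note the half-open intervals: a 3B model lands in tiny, while 3.8B already counts as small.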
To examine whether LLM size affects task performance, we first divided the LLMs into four groups with respect to their sizes. We classify models into the size categories tiny [0–3B], small (3B–8B], medium (8B–33B], and large (33B–72B]. Subsequently, to assess whether there are any significant differences in the achieved central benchmark scores between the LLM size groups, we conducted Kruskal–Wallis tests [28] for each task variation with the null hypothesis that the score distributions of all groups are identical. For all tests performed, the null hypotheses were rejected with a significance level of less than 0.001, indicating that for all task variations there are significant differences between model size groups. The highest p-value was obtained for the RdfConnectionExplain xml variation with p ≈ 5e−12, and the lowest was p ≈ 7e−122 for the Text2Sparql turtle subschema variation, indicating highly significant differences between the groups. Since the Kruskal–Wallis test only measures whether there is a significant difference between a set of groups, post-hoc Dunn tests [29] with Bonferroni correction [30] were subsequently conducted to examine which groups are dissimilar. Again, the null hypothesis was that there is no difference between the group pairs. Table 3 shows the results of the post-hoc tests for each task variation. Cells for group pairs that are dissimilar with a significance of 5% or less are blank. For all pairs not classified as dissimilar, the p-value is provided; the higher this value, the more similar the groups can be considered. Additionally, on the left-hand side of the table, the mean scores per model size group are given as a reference. Groups with a high standard deviation (0.3, 0.4] were marked with ∼, and those with a very high standard deviation (0.4, 0.5] were marked with ≈. All other groups have a standard deviation of 0.3 or lower.
Overall, as expected, most comparisons show, at a significance level of 5%, a dissimilarity between the respective group pairs, i.e., their respective scores were significantly different (the null hypotheses were rejected). Except for six dissimilar pairs, the differences were also very significant with p < 0.001. Typically, the identified significant score differences between groups are associated with rising scores from groups containing smaller LLMs to groups with larger model sizes. However, only the input variations turtle and xml of the RdfFriendCount task show unexpectedly decreasing scores from a group of smaller to a group of larger LLMs. Additionally, deviating from the assumptions, there were also pairs for which no significant differences were recognizable. For these groups, the p-value is given in Table 3. Higher values indicate that size categories can be considered more similar with respect to their task performance. For the pairs of medium and large groups, this applies to roughly half of the cases. Mostly, for both groups, the average scores were high, i.e., a ceiling effect occurred. In three cases, for the RdfFriendCount nt, Text2Sparql turtle schema, and Text2Sparql turtle subschema task variations, the overall scores are low and show plateau effects, i.e., they do not change perceivably, so no significant differences were detected. The pair with the second highest number of insignificant score differences is small-medium with five cases. Here,
mainly plateau effects occurred. Similarly, the three insignificant differences between scores of the pair tiny-small are plateaus. In contrast to groups adjacent with respect to the size class they represent, pairs that are not directly adjacent have predominantly significantly different scores. Hence, there are only two cases of rather similar scores forming plateaus between the groups tiny and medium and one case between the pair small and large.

Table 3: Mean scores of the central measures per model size group and the group-wise similarity. The models are grouped into the size groups tiny (t, [0-3B]), small (s, (3B-8B]), medium (m, (8B-33B]) and large (l, (33B-72B]). Mean scores with a high standard deviation (0.3 to 0.4) are marked with ∼ and those with a very high standard deviation (0.4 to 0.5) with ≈. For the group-wise similarity, only p-values > 0.05 of the post-hoc tests are shown; in all other cases, the groups show statistically significant differences. Higher p-values indicate more similar groups.

Task                  Variation    t      s      m      l     p-values of similar pairs
RdfConnectionExplain  jsonld     ∼0.41   0.61  ∼0.75   0.87   -
                      nt         ∼0.39   0.51   0.49  ∼0.68   0.07, 1.00
                      turtle     ∼0.50   0.59   0.82   0.87   0.13
                      xml         0.53   0.60   0.79   0.93   0.09
RdfFriendCount        jsonld      0.09   0.09   0.06  ≈0.41   1.00, 1.00, 1.00
                      nt          0.04  ∼0.13  ≈0.26  ≈0.32   1.00
                      turtle      0.05   0.03  ∼0.18   0.01   1.00
                      xml        ≈0.31  ∼0.11   0.07  ≈0.57   1.00
RdfSyntaxFixing       jsonld     ≈0.59  ≈0.68   0.98   0.96   1.00
                      nt         ≈0.36  ≈0.61  ∼0.86   0.91   0.60
                      turtle     ≈0.34  ≈0.66   0.95   0.92   1.00
Sparql2Answer         jsonld     ∼0.25  ≈0.42  ≈0.75  ∼0.88   -
                      turtle     ∼0.24  ≈0.54  ∼0.76   0.92   -
SparqlSyntaxFixing               ≈0.25  ≈0.50  ∼0.86   0.95   0.12
Text2Answer           jsonld     ≈0.26  ≈0.50  ≈0.73   0.92   -
                      turtle     ≈0.28  ≈0.64  ≈0.76   0.97   -
Text2Sparql           schema      0.04   0.13   0.15  ∼0.23   1.00, 0.21
                      subschema  ∼0.04  ≈0.12   0.16   0.13   1.00, 1.00, 0.34
                      subgraph    0.19   0.58   0.84   0.86   1.00
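The dispersion markers used in Table 3 can be produced mechanically from per-answer scores. The following is a small sketch with made-up score arrays; only the marker convention (∼ for a standard deviation in (0.3, 0.4], ≈ for (0.4, 0.5]) is taken from the table caption.

```python
import numpy as np

def mark_mean(scores: np.ndarray) -> str:
    """Format a group mean with the table's dispersion markers:
    '∼' for std in (0.3, 0.4], '≈' for std in (0.4, 0.5], none otherwise."""
    mean, std = scores.mean(), scores.std()
    marker = "≈" if std > 0.4 else "∼" if std > 0.3 else ""
    return f"{marker}{mean:.2f}"

# illustrative per-answer scores for two size groups of one task variation
groups = {
    "t": np.array([0.0, 0.1, 0.9, 1.0, 0.0, 1.0]),  # bimodal, very high dispersion
    "l": np.array([0.85, 0.90, 0.88, 0.92, 0.87]),  # consistently high, low dispersion
}
for name, s in groups.items():
    print(name, mark_mean(s))  # e.g. "t ≈0.50" and "l 0.88"
```

This mirrors how a high group mean with a ≈ marker signals that the group's answers are split between very good and very bad rather than uniformly good.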
In summary, overall the most frequent perceivable effect is a rise of the average scores from smaller to larger model groups. Notable exceptions are the turtle and xml variations of RdfFriendCount, for which the scores decrease significantly between smaller and larger model size groups. Other than that, plateaus occur for which the scores between adjacent groups do not change significantly. However, in some cases, the plateaus occur only locally and a rise of scores is detectable, in particular between the medium and large groups (see e.g. RdfFriendCount jsonld or RdfConnectionExplain nt). For some task variations, the scores of the groups medium and large also almost reach the upper score bound of 1.

4.3. Task Performance by Model Size and Family

Complementary to the last section, Figure 1 shows the average central measure scores for each task class by LLM relative to their sizes. In addition, dashed lines connect LLMs of the same family. For the tasks Text2Sparql and RdfFriendCount, the plots again expose the overall poor performance of the included LLMs. Moreover, other previously found patterns are visible in the figures. Hence, the overall tendency of scores to rise with the model size is noticeable. In
addition, the plateau and ceiling effects can be seen. Furthermore, additional insights are visible in the plots. For reference, the plots also show the highest average score achieved by a proprietary model included in the benchmark run as a red horizontal line. Here, we see that, except for the Text2Sparql task, the best proprietary LLM always reaches an average score of 0.99 or 1.00. With that, the best-performing proprietary model is on par with the best-performing open LLM, except for the RdfFriendCount task, in which the best open LLM achieves only a score of 0.55.

[Figure 1: seven panels, (a) RdfConnectionExplain, (b) RdfFriendCount, (c) RdfSyntaxFixing, (d) Sparql2Answer, (e) SparqlSyntaxFixing, (f) Text2Answer, (g) Text2Sparql, each plotting score (0 to 1) against model size in billions of parameters (0 to 70B) for the families DeepseekCoder, Llama3, Llama3.2, Llama3.3, OpenCoder, Phi3, Phi3.5, Qwen2, Qwen2.5, and Qwen2.5Coder.]

Figure 1: Plots showing the achieved central measure mean scores per model in relation to the model size for all tasks. Models of the same family are connected through a dashed line. As a reference, for each task, the highest score achieved by a proprietary model included in the benchmark run is depicted as a red horizontal line.

Especially the mean scores of tiny [0-3B] and small (3B-8B] models differ greatly. Moreover, for most tasks, the highest score growth occurs within the range of tiny to smaller medium-sized models around 13B. Frequently, for tasks showing ceiling effects, already some smaller models of around 8B or 14B reach average scores of 0.8 or higher. Here, especially the 8 and 14B
Qwen2.5 models stand out. The two included Mixture-of-Experts (MoE) LLMs, namely Phi-3.5-MoE-instruct (42B parameters, thereof 6.6B active) and Qwen2-57B-A14B-Instruct (57B parameters, thereof 14B active), show for most tasks scores similar to dense models of a similar total parameter count that use all parameters at inference time. Nevertheless, there are models with lower total parameter counts, in the range of the MoE models' active parameter counts, that perform comparably. Besides, the code-specialized models Qwen2.5-Coder (32B), OpenCoder (8B), and Deepseek-Coder (33B) are all roughly on par with, or in a few cases slightly better than, similarly sized LLMs regarding their task performance on RdfSyntaxFixing, SparqlSyntaxFixing, and Text2Sparql, which all yield either an RDF graph or a SPARQL query. Here, DeepSeek-Coder and Qwen2.5-Coder perform similarly to the best-performing open models, but this also holds for some non-code-specialized models. For the other tasks, except for the RdfFriendCount task, Qwen2.5-Coder also performs comparably to or even slightly better than similarly sized models. In contrast, DeepSeek-Coder and OpenCoder perform worse than other similarly sized models on the tasks not yielding a KG or SPARQL query. In the next paragraphs, we look at intra-family and inter-family developments of benchmark scores with respect to the model size. Models of the same family also reflect the overall tendency of scores to rise with the model size. In addition, the largest models of the families are typically the best performing. However, on the family level, the task performance occasionally also drops between size-wise adjacent smaller and larger models. These drops in performance with larger models typically remain only local, and the next larger model of the family often shows a higher or at least steady task performance compared to the model before the drop.
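The score-versus-size view with dashed family lines and a red proprietary reference line can be reproduced with a short plotting sketch. All data points and family entries below are made up for illustration; only the plot layout follows the figure's description.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

# illustrative (model size in B parameters, mean central score) per family;
# the numbers are invented for this sketch, not benchmark results
families = {
    "Family A": [(0.5, 0.15), (3, 0.35), (7, 0.62), (14, 0.80), (32, 0.85), (72, 0.90)],
    "Family B": [(8, 0.55), (70, 0.88)],
}
best_proprietary = 0.99  # red reference line, as in Figure 1

fig, ax = plt.subplots()
for name, points in families.items():
    sizes, scores = zip(*points)
    ax.plot(sizes, scores, "o--", label=name)  # dashed line connecting one family
ax.axhline(best_proprietary, color="red")      # best proprietary model reference
ax.set_xlabel("Size in number of parameters [in B]")
ax.set_ylabel("Score")
ax.set_ylim(0.0, 1.0)
ax.legend()
fig.savefig("score_vs_size.png")
```

Plotting each family as one dashed line makes intra-family drops immediately visible as downward segments, which is how the local performance drops discussed above were identified.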
Global, family-overarching saturation effects, i.e., ceiling and plateau effects, are also visible, in particular for families with many models covering all size bands. Between different families, in some cases, the score development with rising model size is similar and almost parallel. In general, however, no clear global parallels in the score developments with rising model sizes are recognizable.

5. Discussion

In this section, we summarize and discuss key insights from our analysis, structured into paragraphs each covering a different insight.

Larger models typically achieve higher scores than smaller ones, but there are plateau and ceiling effects. In the size category analyses, we saw that, as expected, larger model size groups typically achieved significantly higher benchmark scores (see also Table 3). For easier tasks, especially the medium and large category pairs achieved similarly high scores. Hence, here medium-sized models could be a good choice to optimize cost-effectiveness. In contrast, especially for more difficult tasks, plateaus occurred. Some were only local, and larger models achieved significantly higher scores than the models within the plateau range. Consequently, it makes sense to take the detected local plateaus into account and decide on a larger LLM since, even though the costs increase, the performance also increases significantly. For global plateaus that are not yet close to the maximum score and span up to the large models, it could also
make sense to use smaller models since this saves costs and does not affect the task performance much.

Some smaller models also perform comparably well. However, the performance across individual smaller models varies. Furthermore, Figure 1 also confirms that even some small (8B) or medium-sized (13B) models might be a good choice since they already achieve reasonably high scores. Nevertheless, individual models within the same size band have to be tested explicitly, since their performance also varies. However, the insights help to guide the overall model search and indicate whether it seems promising to consider models within a certain size band or not.

Performance drops occur between smaller and larger models from the same family. Moreover, within model families, we again saw that local task performance drops can occur between smaller and larger members. Hence, it is advisable to also study models adjacent with respect to their size within a family.

The examined open LLMs cannot cope well with the RdfFriendCount and Text2Sparql tasks. In addition to guiding the model choice within open LLMs, the results also indicate that current state-of-the-art open LLMs as of December 2024, up to a parameter count of 70B, cannot cope well with the RdfFriendCount and the Text2Sparql tasks. Here, the tasks likely require an even larger model. For the RdfFriendCount task, a proprietary LLM included in the benchmark run achieved a mean central score of 0.99 or 1.00. Hence, current proprietary models can cope well with this task, in contrast to the comparably much smaller open LLMs of the large category. Nevertheless, for the Text2Sparql task, the proprietary LLMs also do not achieve substantially higher mean scores. Here, the best-performing model achieves only a mean score of 0.49, indicating that the identified plateau effect continues even for much larger proprietary models.

The examined code-specialized models showed better performance on tasks where a KG or a SPARQL query was requested.
Especially DeepseekCoder and OpenCoder performed best on the tasks RdfSyntaxFixing, SparqlSyntaxFixing, and Text2Sparql compared to the other tasks.

The examined Mixture-of-Experts (MoE) models do not show superior performance in comparison with models of the size of the MoEs' active parameter counts. Looking at the individual task scores, the MoE LLMs Phi-3.5-MoE-instruct and Qwen2-57B-A14B-Instruct performed mostly comparably to models having a similar total parameter count. However, models with a size in the range of the MoE models' active parameters also performed similarly. Hence, for the given tasks, it makes sense to prefer these smaller models over the MoE models with regard to cost-effectiveness.

6. Conclusion

In this paper, we analysed the scores of open LLMs from a run of the LLM-KG-Bench benchmark for knowledge graph engineering-related tasks, with a focus on the correlation between model size and achieved scores. Overall, we saw that, as expected, usually the larger the model was, the higher the scores were. However, our analysis also showed plateau and ceiling effects in which model scores did not differ substantially between smaller and larger models. Hence, for comparably easy tasks, smaller models already achieved reasonably high scores. Consequently, it makes sense
to also consider smaller models for similarly complex tasks. For the RdfFriendCount and Text2Sparql tasks, the benchmark scores were overall low, and plateau effects spanned up to the largest models analysed. Here, we can conclude that the capabilities of SOTA open LLMs are not yet sufficient to solve tasks of this complexity. While the RdfFriendCount task can be solved by much larger proprietary models, for Text2Sparql the plateau effects continue, and potentially even larger models are required to sufficiently solve this task. For future work, we believe that similar analyses of benchmark runs are meaningful to get an overview of the status of SOTA models, but also to derive generalizable insights that might help to classify whether newly introduced models, or models not part of the benchmark run, seem promising to consider. Here, it would also be interesting to additionally examine other scaling-law-related factors like the training data, the number of training steps, and factors related to the model architecture. This would allow for further examination and possible explanations of effects that became apparent in this work, like the performance differences between similarly sized models or performance drops of larger models compared to smaller ones belonging to the same model family. Moreover, it is meaningful to extend the LLM-KG-Bench framework with more complex variations of already well-solved tasks to be able to figure out whether medium-sized models are still on par with large models in more difficult scenarios. Besides, also motivated by the advantage of code-specialized LLMs on tasks requiring a KG or SPARQL query as output, exploring which kinds of capabilities are required for specific tasks and why some tasks seem particularly challenging would be a meaningful future contribution to guide targeted solutions.
Acknowledgments

This work was partially supported by grants from the German Federal Ministry of Education and Research (BMBF) to the projects ScaleTrust (16DTM312D) and KupferDigital2 (13XP5230L), as well as from the German Federal Ministry for Economic Affairs and Climate Action (BMWK) to the KISS project (01MK22001A).

Declaration on Generative AI

During the preparation of this work, the authors used ChatGPT4o and ChatGPT4.5-RP to check grammar and spelling and to paraphrase and reword text in order to improve the writing style. After using these tools/services, the authors reviewed and edited the content as needed and take full responsibility for the publication's content.

References

[1] A. Hogan, E. Blomqvist, M. Cochez, C. d'Amato, G. de Melo, C. Gutierrez, J. E. L. Gayo, S. Kirrane, S. Neumaier, A. Polleres, R. Navigli, A.-C. N. Ngomo, S. M. Rashid, A. Rula, L. Schmelzeisen, J. Sequeda, S. Staab, A. Zimmermann, Knowledge graphs, ACM Computing Surveys (CSUR) 54 (2020) 1-37. doi:10.1145/3447772.
[2] S. Pan, L. Luo, Y. Wang, C. Chen, J. Wang, X. Wu, Unifying large language models and knowledge graphs: A roadmap, IEEE Transactions on Knowledge and Data Engineering (TKDE) (2024). doi:10.1109/TKDE.2024.3352100.
[3] B. P. Allen, L. Stork, P. Groth, Knowledge engineering using large language models (2023). doi:10.4230/TGDK.1.1.3.
[4] R. Buchmann, J. Eder, H.-G. Fill, U. Frank, D. Karagiannis, E. Laurenzi, J. Mylopoulos, D. Plexousakis, M. Y. Santos, Large language models: Expectations for semantics-driven systems engineering, Data and Knowledge Engineering 152
(2024) 102324. doi:10.1016/j.datak.2024.102324.
[5] T. A. Taffa, R. Usbeck, Leveraging LLMs in scholarly knowledge graph question answering (2023). doi:10.48550/ARXIV.2311.09841.
[6] M. Hofer, J. Frey, E. Rahm, Towards self-configuring knowledge graph construction pipelines using LLMs - a case study with RML, in: Fifth International Workshop on Knowledge Graph Construction @ ESWC2024, volume 3718 of CEUR Workshop Proceedings, CEUR-WS.org, 2024. URL: https://ceur-ws.org/Vol-3718/paper6.pdf.
[7] L. Kovriguina, R. Teucher, D. Radyush, D. Mouromtsev, Sparqlgen: One-shot prompt-based approach for SPARQL query generation, in: International Conference on Semantic Systems, volume 3526 of CEUR Workshop Proceedings, CEUR-WS.org, 2023. URL: https://ceur-ws.org/Vol-3526/paper-08.pdf.
[8] H. Babaei Giglou, J. D'Souza, S. Auer, LLMs4OL: Large Language Models for Ontology Learning, Springer Nature Switzerland, 2023, pp. 408-427. doi:10.1007/978-3-031-47240-4_22.
[9] L.-P. Meyer, J. Frey, K. Junghanns, F. Brei, K. Bulert, S. Gründer-Fahrer, M. Martin, Developing a scalable benchmark for assessing large language models in knowledge graph engineering, in: N. Keshan, S. Neumaier, A. L. Gentile, S. Vahdati (Eds.), Proceedings of the Posters and Demo Track of the 19th International Conference on Semantic Systems (SEMANTICS 2023), volume 3526 of CEUR Workshop Proceedings, CEUR-WS.org, 2023. URL: https://ceur-ws.org/Vol-3526/paper-04.pdf.
[10] L.-P. Meyer, J. Frey, D. Heim, F. Brei, C. Stadler, K. Junghanns, M. Martin, LLM-KG-Bench 3.0: A compass for semantic technology capabilities in the ocean of LLMs, in: Proceedings of ESWC 2025 Resources Track, 2025. Accepted for publication.
[11] J. Frey, L. Meyer, N. Arndt, F. Brei, K. Bulert, Benchmarking the abilities of large language models for RDF knowledge graph creation and comprehension: How well do LLMs speak Turtle?, in: M. Alam, M.
Cochez (Eds.), Proceedings of the Workshop on Deep Learning for Knowledge Graphs (DL4KG 2023) co-located with the 21st International Semantic Web Conference (ISWC 2023), Athens, November 6-10, 2023, volume 3559 of CEUR Workshop Proceedings, CEUR-WS.org, 2023. URL: https://ceur-ws.org/Vol-3559/paper-3.pdf.
[12] J. Frey, L.-P. Meyer, F. Brei, S. Gruender, M. Martin, Assessing the evolution of LLM capabilities for knowledge graph engineering in 2023, in: Proceedings of Special Track Large Language Models for Knowledge Engineering at Extended Semantic Web Conference 2024 (ESWC24), 2024. doi:10.1007/978-3-031-78952-6_5.
[13] L.-P. Meyer, J. Frey, F. Brei, N. Arndt, Assessing SPARQL capabilities of large language models, in: E. Vakaj, S. Iranmanesh, R. Stamartina, N. Mihindukulasooriya, S. Tiwari, F. Ortiz-Rodríguez, R. Mcgranaghan (Eds.), Proceedings of the 3rd International Workshop on Natural Language Processing for Knowledge Graph Creation co-located with the 20th International Conference on Semantic Systems (SEMANTiCS 2024), volume 3874 of CEUR Workshop Proceedings, 2024, pp. 35-53. URL: https://ceur-ws.org/Vol-3874/paper3.pdf.
[14] J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, D. Amodei, Scaling laws for neural language models (2020). arXiv:2001.08361.
[15] W.-L. Chiang, L. Zheng, Y. Sheng, A. N. Angelopoulos, T. Li, D. Li, H. Zhang, B. Zhu, M. Jordan, J. E. Gonzalez, I. Stoica, Chatbot Arena: An open platform for evaluating LLMs by human preference, 2024. arXiv:2403.04132.
[16] C. Fourrier, N. Habib, A. Lozovskaya, K. Szafer, T. Wolf, Open LLM Leaderboard v2, https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard, 2024.
[17] P. Liang, R. Bommasani, T. Lee, D. Tsipras, D. Soylu, M. Yasunaga, Y.
Zhang, D. Narayanan, Y. Wu, A. Kumar, B. Newman, B. Yuan, B. Yan, C. Zhang, C. Cosgrove, C. D. Manning, C. Ré, D. Acosta-Navas, D. A. Hudson, E. Zelikman, E. Durmus, F. Ladhak, F. Rong, H. Ren, H. Yao, J. Wang, K. Santhanam, L. Orr, L. Zheng, M. Yuksekgonul, M. Suzgun, N. Kim, N. Guha, N. Chatterji, O. Khattab, P. Henderson, Q. Huang, R. Chi, S. M. Xie, S. Santurkar, S. Ganguli, T. Hashimoto, T. Icard, T. Zhang, V. Chaudhary, W. Wang, X. Li, Y. Mai, Y. Zhang, Y. Koreeda, Holistic evaluation of language models, 2023. arXiv:2211.09110.
[18] J. Liu, C. S. Xia, Y. Wang, L. Zhang, Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation, Advances in Neural Information Processing Systems 36 (2024).
[19] L.-P. Meyer, C. Stadler, J. Frey, N. Radtke, K. Junghanns, R. Meissner, G. Dziwis, K. Bulert, M. Martin, LLM-assisted knowledge graph engineering: Experiments with ChatGPT, in: C. Zinke-Wehlmann, J. Friedrich (Eds.), First Working Conference on Artificial Intelligence Development for a Resilient and Sustainable Tomorrow (AITomorrow) 2023, Informatik aktuell, 2023, pp. 101-112. doi:10.1007/978-3-658-43705-3_8.
[20] N. Mihindukulasooriya, S. Tiwari, C. F. Enguix, K. Lata, Text2KGBench: A benchmark for ontology-driven knowledge graph generation from text, in: T. R. Payne, V. Presutti, G. Qi, M. Poveda-Villalón, G. Stoilos, L. Hollink, Z. Kaoudi, G. Cheng, J. Li (Eds.), The Semantic Web - ISWC 2023, Springer Nature Switzerland, Cham, 2023, pp. 247-265.
[21] Y. Zhu, X. Wang, J. Chen, S. Qiao, Y. Ou, Y. Yao, S. Deng, H. Chen, N. Zhang, LLMs for knowledge graph construction and reasoning: Recent capabilities and future opportunities, 2023. arXiv:2305.13168.
[22] R. Usbeck, M. Röder, M. Hoffmann, F. Conrads, J. Huthmann, A.-C. Ngonga-Ngomo, C. Demmler, C. Unger, Benchmarking question answering systems, Semantic Web 10 (2019) 293-304. doi:10.3233/sw-180312.
[23] H. M. Zahera, M. Ali, M. A. Sherif, D.
Moussallem, A. N. Ngomo, Generating SPARQL from natural language using chain-of-thoughts prompting, in: A. A. Salatino, M. Alam, F. Ongenae, S. Vahdati, A. L. Gentile, T. Pellegrini, S. Jiang (Eds.), Knowledge Graphs in the Age of Language Models and Neuro-Symbolic AI - Proceedings of the 20th International Conference on Semantic Systems, 17-19 September 2024, Amsterdam, The Netherlands, volume 60 of Studies on the Semantic Web, IOS Press, 2024, pp. 353-368. doi:10.3233/SSW240028.
[24] C. Shorten, C. Pierse, T. B. Smith, E. Cardenas, A. Sharma, J. Trengrove, B. van Luijt, StructuredRAG: JSON response formatting with large language models, 2024. arXiv:2408.11061.
[25] A. Srivastava, et al., Beyond the imitation game: Quantifying and extrapolating the capabilities of language models, Transactions on Machine Learning Research (2023). arXiv:2206.04615.
[26] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, E. H. Chi, Q. V. Le, Emergent abilities of large language models, Transactions on Machine Learning Research (2022). arXiv:2206.07682.
[27] J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. de Las Casas, L. A. Hendricks, J.
Welbl, A. Clark, T. Hennigan, E. Noland, K. Millican, G. van den Driessche, B. Damoc, A. Guy, S. Osindero, K. Simonyan, E. Elsen, J. W. Rae, O. Vinyals, L. Sifre, Training compute-optimal large language models (2022). arXiv:2203.15556.
[28] W. H. Kruskal, W. A. Wallis, Use of ranks in one-criterion variance analysis, Journal of the American Statistical Association 47 (1952) 583-621.
[29] O. J. Dunn, Multiple comparisons using rank sums, Technometrics 6 (1964) 241-252.
[30] C. E. Bonferroni, Il calcolo delle assicurazioni su gruppi di teste, 1935, pp. 13-60.

A. Online Resources

The LLM-KG-Bench framework is available here: LLM-KG-Bench. The raw benchmark run results and further figures are available here: Results LLM-KG-Bench v3. The code written to perform this analysis can be found here: Analysis Code.
arXiv:2505.16277v1 [cs.CL] 22 May 2025

Spontaneous Speech Variables for Evaluating LLMs' Cognitive Plausibility

Sheng-Fu Wang, Academia Sinica Institute of Linguistics, Taipei, Taiwan, sftwang@gate.sinica.edu.tw
Laurent Prévot, CNRS & MEAE, CEFC, Taipei, Taiwan, laurent.prevot@cnrs.fr
Jou-An Chi, Graduate Institute of Linguistics, National Taiwan University, Taipei, Taiwan, r11142005@ntu.edu.tw
Ri-Sheng Huang, Department of CSIE, National Taiwan University, Taipei, Taiwan, r13922102@csie.ntu.edu.tw
Shu-Kai Hsieh, Graduate Institute of Linguistics, National Taiwan University, Taipei, Taiwan, shukaihsieh@ntu.edu.tw

Abstract

The achievements of Large Language Models in Natural Language Processing, especially for high-resource languages, call for a better understanding of their characteristics from a cognitive perspective. Researchers have attempted to evaluate artificial models by testing their ability to predict behavioral (e.g., eye-tracking fixations) and physiological (e.g., brain responses) variables during language processing (e.g., reading/listening). In this paper, we propose using spontaneous speech corpora to derive production variables (speech reductions, prosodic prominences) and applying them in a similar fashion. More precisely, we extract these two variables from corpora in three languages (English, French, and Taiwan Mandarin). We then test models trained with a standard procedure on different pretraining datasets (written, spoken, and mixed genres) for their ability to predict these two variables. Our results show that, after some fine-tuning, the models can predict these production variables well above baselines. We also observe that spoken-genre training data provides more accurate predictions than written genres. These results contribute to the broader effort of using high-quality speech corpora as benchmarks for LLMs.

1 Introduction

The success of Large Language Models in Natural Language Processing, especially for high-resource languages, calls for a better understanding of their characteristics from a cognitive perspective.
Yet, the difference in scale between the data size required to train LLMs and the amount of linguistic exposure needed for humans to achieve language proficiency calls for a better understanding of these models from a cognitive viewpoint. Researchers have attempted to evaluate artificial models by testing their ability to predict behavioral (e.g., eye-tracking fixations) and (neuro-)physiological (e.g., brain responses) variables during language processing (e.g., reading/listening). In this paper, we propose using spontaneous speech corpora to derive production variables and applying them in a similar fashion. More precisely, we extract speech reduction and prosodic prominences from three corpora in different languages (English, French, and Taiwan Mandarin). Speech reduction shortens word durations by dropping sounds or merging syllables. Prosodic prominence makes certain syllables or words stand out through prosodic marking.

The BabyLM initiative (Choshen et al., 2024; Hu et al., 2024; Warstadt et al., 2023b,a) has created an impulse for creating and evaluating cognitively plausible LLMs, both for the machine learning purpose of designing an efficient model with a small amount of training data (10-100M tokens) and for the cognitive science purpose of better understanding the similarities and differences between artificial and human language learning. An observation about the initiative to date is that the datasets used are all in English. Although this is a natural starting point, it represents a significant limitation. Expanding the scope to include more languages is not only about better representing linguistic communities or potential model users; it is also about achieving comparable, contrastive
results across different languages, which should offer valuable insights into both the learning models and the underlying learning processes. While gathering a 100M-token conversational dataset based on real spoken conversational data might be challenging, a 10M-token dataset is accessible for languages like English, French, Mandarin, and some other languages studied from a corpus linguistics perspective. Conversational speech is the genre within which humans acquire their basic language skills. It is a genre quite distant from the usual written or web content on which LMs are trained, increasing the risk of biases in the LMs produced. Moreover, it has been argued that this genre is highly relevant to language emergence (Levinson, 2020; Christiansen and Chater, 2022). How could a purely interactional dataset, including both child-directed and general conversation transcripts, be compared to written genres or more balanced mixtures (Feng et al., 2024)?

In this context, current evaluation metrics, such as the ones used in BabyLM, while a good starting point, appear biased in two ways: they tend to favor canonical written forms and to prioritize syntactic, semantic, and commonsense pragmatics. However, language and communicative competence include many other dimensions. Although the initiative clearly emphasizes the importance of using speech transcripts, both child-directed and everyday conversations, as training data, to our knowledge, none of the evaluation metrics employed explicitly address the specificities of spontaneous speech. To summarize, we argue that datasets and evaluation metrics are just as crucial as models for understanding the computational learning of language structure.
More precisely, researchers have attempted to evaluate artificial models by testing their ability to predict behavioral (e.g., eye-tracking fixations) (Hollenstein et al., 2021) and physiological (e.g., brain responses) (Bingel et al., 2016; Hollenstein et al., 2018; Pasquiou et al., 2022) variables during language processing (e.g., reading/listening). We propose using spontaneous speech corpora to derive production variables and applying them in a similar fashion. More precisely, we extract speech reductions and prosodic prominences from three corpora in different languages (English, French, and Taiwan Mandarin). The three languages were chosen because of the availability of high-quality speech corpora (from which these variables can be reliably extracted), the availability of enough spontaneous speech transcripts (to train the models), and to ensure the inclusion of languages from different typological families. We then test models trained with a standard procedure on different pretraining datasets (written, spoken, and mixed genres) to assess their ability to predict these two variables. Our results show that the models can, after some fine-tuning, predict these production variables well above baselines. We also observe that spoken-genre training data provides more accurate predictions than written genres. These results open the possibility of using high-quality speech corpora as benchmarks for LLMs.

2 Related Work

Since the emergence of large language models, there has been strong interest in the computational linguistics community in understanding why they are so successful. Warstadt et al. (2020b) explored the conditions (e.g., the amount of training data) under which RoBERTa develops and leverages linguistic features, such as part of
speech (POS) and morphology, as opposed to relying on simpler surface-level features like position-based or length-based features. More recently, several studies have probed LLMs to better characterize their performance across various domains, particularly with regard to their linguistic competence versus commonsense reasoning. These studies have also examined the relationship between model performance and the amount of training data required for different tasks. In particular, Zhang et al. (2021) used training sets of varying sizes (1M, 10M, 100M, and 1B tokens) to show that syntactic and semantic competence becomes robust within the 10M–100M range, whereas larger datasets are needed to achieve strong results in pragmatic and commonsense reasoning tasks.

More broadly, there have been proposals for evaluating the performance of LLMs on diverse linguistic tasks. Warstadt et al. (2019b) leveraged a substantial body of generative syntax-semantics literature to develop benchmarks based on acceptability judgments drawn from the linguistic literature, like the CoLA benchmark, which was further extended by exploiting more sources and data-augmentation methods in BLiMP (Warstadt et al., 2020a). In addition to these binary decision tasks, Zhang et al. (2021) combined three other types of evaluation metrics: classifier probing (following Ettinger et al., 2016; Adi et al., 2017), which includes tasks from POS tagging to coreference resolution; information-theoretic probing based on the minimum description length (MDL) principle; and fine-tuning on higher-level tasks such as those in the SuperGLUE benchmark. Most of the benchmarks have been proposed for English. However, BLiMP Warstadt et al.
(2019a) has inspired a series of language-specific benchmarks, such as CLiMP for Mandarin Chinese (Xiang et al., 2021), as well as benchmarks for other languages like Japanese (Someya and Oseki, 2023), Dutch (Suijkerbuijk et al.), and Russian (Taktasheva et al., 2024). While these benchmarks are important additions to the evaluation landscape and important extensions of the general evaluation framework, they all rely on syntax-semantics structures derived from introspection and textbook data, as will be discussed in the next section. In parallel with these efforts, monolingual language models have been developed using large amounts of data (Chang et al., 2024), as well as experiments involving varied data quantities (Micheli et al., 2020).

In another line of research, several studies have tested the ability of large language models (LLMs) to perform tasks inspired by cognitive science, particularly in the domains of semantics and pragmatics (Ettinger, 2020; Binz and Schulz, 2023).

Our approach of using actual speech data to extract production-based metrics can be related to studies that use behavioral or neurophysiological data linked with linguistic datasets. Specifically, there has been significant work focusing on textual datasets combined with eye-tracking (Hollenstein et al., 2021) or neurophysiological (Bingel et al., 2016; Hollenstein et al., 2018) measures. Additionally, datasets from passive listening tasks, linked to fMRI, have been released for various languages (e.g., French, Mandarin, and English) (Li et al., 2022). These datasets have been used, for instance, to study the impact of training parameters on a language model's
ability to predict neurophysiological data (Pasquiou et al., 2022). Focusing on spontaneous speech, Rauchbauer et al. (2019) and Hmamouche et al. (2024) examined the predictability of fMRI-derived signals from conversational variables, including lexical information.

In terms of specialized language models, Cabiddu et al. (2025) developed LMs based on child-directed speech transcripts and evaluated them on word-sense disambiguation tasks. They concluded that word-acquisition trajectories could be better captured by multimodal models that incorporate acoustic features, among other aspects. Regarding tokenizers, Beinborn and Pinter (2023) proposed an evaluation paradigm focusing on the cognitive plausibility of subword tokenization. They compared BPE, WordPiece, and UnigramLM and revealed a lower “cognitive correlation” for the latter. Furthermore, in the most recent BabyLM edition, Martinez et al. (2023) introduced an interesting learning curriculum that constrained vocabulary in the early stages to simulate more cognitively plausible learning curves. Although this approach did not yield consistent overall results, marginal gains were observed in selected tasks.

Lastly, a recent audio-based question-answering benchmark for spoken language models incorporates spontaneous speech phenomena such as filler words, disfluencies, false starts, and corrections into different “speech style” conditions for testing the robustness of these models (Cui et al., 2025). Notably, the injection of speech phenomena into the benchmark is text-based and LLM-generated rather than collected from natural speech.

3 A proposal for a new source of metrics

Most of the work mentioned above is grounded in text-based and/or handcrafted paradigms, potentially coupled with behavioral or physiological lab measures.
In contrast, we propose using actual spontaneous conversational transcripts to build complementary benchmarks that test not only syntactic-semantic dimensions but also real-world language use. These metrics will remain fundamentally linguistic in nature rather than focusing on task-specific or end-to-end evaluation.

Language is acquired, especially in its early stages, within spontaneous, conversational environments. While conversational language shares grammatical structures with other genres, its unique characteristics suggest that simply listing syntactic “errors” or semantic incongruities does not fully capture linguistic competence. Moreover, in a conversational context, what may be considered a production error from a formal grammatical perspective is often perfectly acceptable and successfully achieves its communicative purpose. Therefore, alongside existing benchmarks, we aim to develop a complementary approach that provides a broader set of metrics for evaluating language models from both cognitive and communicative perspectives.

Specifically, we propose using spontaneous speech corpora, as they offer insights into human language processing through various observable production phenomena. Our approach is a kind of classifier probing (Ettinger et al., 2016; Adi et al., 2017; Warstadt et al., 2019b), but rather than focusing on metalinguistic tasks (e.g., predicting syntactic categories), we aim to predict phenomena that serve as partial indicators of language processing. We propose two preliminary metrics, speech reductions and prosodic prominences, which are grounded in spontaneous speech production and have each been the subject of extensive research.

3.1 Speech reductions

Speech reductions have been studied across a range of linguistic levels, especially when considering the issue of signal information density.
In spontaneous speech, some chunks of speech are produced in a reduced manner, both in terms of duration and articulatory amplitude. The location of these reductions is not random. For example, studies have suggested that speakers tend to smooth the information density of their speech signal over time, with reductions serving as a mechanism to achieve this smoothing effect (Aylett and Turk, 2004).

The relationship between information density and speech reduction has prompted a range of research approaches. These approaches may differ in the probabilistic measures used to predict reductions, such as lexical frequency, contextual probability, and informativity (Aylett and Turk, 2004; Gahl, 2008; Cohen Priva, 2012; Seyfarth, 2014). They also differ in the linguistic level at which reductions occur, whether at the phoneme, syllable, or word level, or in terms of overall speech rate. Many of these studies include and compare different types of probabilistic measurements (e.g., lexical frequency and contextual probability) within a single study (e.g., Seyfarth, 2014; Cohen Priva and Jaeger, 2018), and some also compare probabilistic measurements calculated at different linguistic levels (e.g., segment and syllable levels in Van Son et al. (1999), segment- and word-level measurements in Van Son and Pols (2003), syllable- and word-level measurements in Wang (2022)). Inclusion and comparison of reductions or phonetic variability across various linguistic levels within the same study have also been done, albeit less frequently (e.g., for individual segments and prefixes as a whole in Pluymaekers et al. (2005); morphemes and words in Tang and Bennett (2018)).

These studies show that phonetic reduction can be predicted to varying degrees from the statistical distribution of linguistic units, a finding repeatedly replicated with different types of measurements at various levels of linguistic description.
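As a toy illustration of the contextual-probability measures used in the studies above, per-token surprisal can be estimated from simple n-gram statistics. The following minimal sketch (the function name is ours, not from any cited study) computes add-one-smoothed bigram surprisal over a token list:

```python
import math
from collections import Counter

def bigram_surprisal(tokens):
    """Surprisal -log2 P(w_i | w_{i-1}) for each token after the first,
    using add-one-smoothed bigram counts from the same token list."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab_size = len(unigrams)
    return [
        -math.log2((bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size))
        for prev, word in zip(tokens, tokens[1:])
    ]
```

Under the smooth-signal-redundancy account, tokens with low surprisal (highly predictable in context) are the ones most likely to surface in reduced form.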
This motivates the development of a reduction-labeling task for evaluating language models.

3.2 Prosodic Prominences

Prosodic prominence refers to the emphasis placed on certain units, often demarcated at the level of words or syllables, within a spoken utterance. This emphasis can be measured through (and perceived on the basis of) acoustic cues such as movements in fundamental frequency, duration, intensity, and segmental properties such as the formant structure of vowels. Recent work by Wolf et al. (2023) has shown a significant degree of redundancy between the representations encoded from tokens alone and those derived from acoustic-prosodic information. Acoustic-prosodic features such as word-level energy, fundamental frequency, duration, pause, and composite measurements derived using a wavelet-based algorithm (Suni et al., 2017) were used to quantify this redundancy. Their findings suggest that prosodic information can be predicted, to some extent, from the word itself and its surrounding context. There are also studies showing that language models (e.g., BERT) can predict prosodically prominent tokens from speech transcripts (Kakouros and O'Mahony, 2023; Talman et al., 2019). These findings suggest that part of prosodic prominence can indeed be predicted by a language model, even though prediction based on text alone should never be sufficient (as also noted by Wolf
et al. (2023)).

4 Method

4.1 Pre-training LLMs

We pretrained multiple RoBERTa models (an architecture choice motivated in part by a more direct comparison with our topline, XLM-RoBERTa, which has a more manageable size for fine-tuning) using three corpora in different languages: English, French, and Taiwan Mandarin. For each language, the pretraining data is categorized into three distinct types: conversational, written, and a mix of these two genres.

For English, the spoken data was drawn from the BNC and Switchboard, and the written data was drawn from Simple Wikipedia (both extracted from the BabyLM 100M dataset). The mixed data is a 9M-token subset of the BabyLM 10M training data, which contains the CHILDES and Gutenberg collections in addition to the three aforementioned sources. The 1M-token dev sets were built from the corresponding validation sets in the BabyLM data. For French, the conversational data consists of 10 million tokens from conversational datasets, specifically ORFEO (Benzitoun et al., 2016) and CHILDES-FR (MacWhinney, 2014; Rose and MacWhinney, 2014), and the written data consists of an additional 10 million tokens sourced from Wikipedia. For Taiwan Mandarin, the 10M-token conversational data was derived from the NCCU Spoken Corpus of Taiwan Mandarin (Chui and Lai, 2008), the Taiwan Corpus of Child Mandarin (TCCM)¹ (Chang et al., 2011), OpenSubtitles (Lison and Tiedemann, 2016), and the transcripts of the Taiwan Legislative Yuan². The written data came from a subset of Traditional Chinese Wikipedia³. For French and Mandarin, the mixed data was built from a 50-50 mix of the conversational and Wiki data, and the prepared datasets for all three types underwent a 90-10 split to build the training and development sets.

We employed the SentencePiece tokenizer (Kudo, 2018a) with the Unigram model (Kudo, 2018b).
SentencePiece allows tokenization without requiring predefined word boundaries, making it particularly suitable for languages with non-segmented scripts, such as Taiwan Mandarin. Additionally, we applied a minimum token frequency threshold of 2. The vocabulary size was set to 10,000 tokens, determined empirically in preliminary experiments. Each model was initialized with XLM-RoBERTa as the pretrained base model and trained for 100 epochs with a batch size of 32. The English, French, and Mandarin models were pretrained with learning rates of 1e-4, 1e-4, and 2e-4 respectively, with early stopping if the validation loss failed to decrease within 10 epochs.

4.2 Benchmarks

For these experiments, we used three sources to build benchmarks: the Buckeye Corpus for English⁴ (Pitt et al., 2005), the Corpus of Interactional Data (CID) for French⁵ (Blache et al., 2017), and the Sinica Mandarin Conversational Dialogue Corpus (Sinica MCDC8)⁶ for Mandarin (Tseng, 2013). CID is an 8-hour corpus of 1-hour conversations between friends (16 speakers). It features highly spontaneous conversational speech. Buckeye is a corpus with 38.1 hours of spontaneous speech (40 speakers) recorded in an interview format. MCDC8 features 8 one-hour natural conversations between recruited participants (16 speakers) who did not know their interlocutors in advance. The main reason for the choice of these corpora is the high quality of their speech-transcript alignment, down to the syllable or even the segment level.

¹ https://lope.linguistics.ntu.edu.tw/tccm/
² https://www.parliamentarytv.org.tw/
³ https://huggingface.co/datasets/zetavg/zh-tw-wikipedia/
⁴ https://buckeyecorpus.osu.edu/
⁵ https://hdl.handle.net/11403/sldr000720
⁶ https://www.aclclp.org.tw/use_mat.php#mcdc

4.2.1 Speech Reduction

There are several methods to determine whether a portion of speech is reduced. Following approaches in the literature, we first derived the ratio of every word token's actual duration to its expected duration. For the French and Mandarin benchmarks, we used the corpus annotations of syllable boundaries and developed a model that predicts syllable duration from the segments it contains, similar to Wang (2022). The model is trained on one half of the corpus and then applied to estimate expected token durations in the remaining half. For the English benchmark, we followed the literature (Bell et al., 2009; Gahl et al., 2012; Seyfarth, 2014) and used the segmental durations available in the corpus to calculate each word's expected duration. The corpus was similarly divided in halves, with expected durations in one half calculated from segment durations in the other.

In both cases, we then converted the ratios into binary labels by applying a threshold of 0.5 (i.e., a reduction of at least 50%) for English and French, and a threshold of 0.6 (a reduction of at least 40%) for Mandarin. The threshold was adjusted for Mandarin to keep the proportion of reduced tokens closer to that of the other two languages, as Mandarin's syllable-timed nature makes substantial reduction a rare event. These thresholds resulted in 13.58%, 17.54%, and 11.09% of the tokens in English, French, and Mandarin, respectively, being labeled as reduced. These labels were then encoded in BIO format.

4.2.2 Prosodic Prominences

To detect prosodically prominent tokens, we used Suni et al.'s (2017) wavelet-based method, which combines acoustic features (fundamental frequency, energy, duration) to determine prominence at the token level.
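The ratio-thresholding and BIO-encoding step used for the reduction labels above can be sketched as a small helper (a minimal illustration; the function and tag names are ours, not from the paper's code):

```python
def bio_reduction_labels(actual, expected, threshold=0.5):
    """Convert duration ratios into BIO labels: a token counts as reduced
    when actual/expected <= threshold (0.5 for English/French, i.e. a
    reduction of at least 50%; 0.6 for Mandarin, i.e. at least 40%)."""
    labels = []
    prev_reduced = False
    for a, e in zip(actual, expected):
        reduced = a / e <= threshold
        if not reduced:
            labels.append("O")       # not reduced
        elif prev_reduced:
            labels.append("I-RED")   # continues a reduced span
        else:
            labels.append("B-RED")   # begins a reduced span
        prev_reduced = reduced
    return labels
```

The same thresholding logic applies per language, with only the threshold value changing.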
One of the reasons for choosing this wavelet-based tool is that it has already been used in the LM literature (Wolf et al., 2023) to quantify the amount of redundancy between the textual and prosodic levels. We applied the default configuration of the tool and set a threshold score of 1.25 (see Figure 8 in Appendix A for details of the score distribution), which yielded 14%, 13.79%, and 12.77% of prosodically prominent tokens in English, French, and Mandarin, respectively.

4.3 Fine-tuning Experiments

The experiments evaluated different pre-trained models on our set of tasks.⁷ More precisely, we fine-tuned the pretrained models separately on a token classification task to predict which tokens were labeled (reduced / prominent) and which were not. A simple cross-validation was conducted across groups of speakers to maximize diversity across the folds. The models were fine-tuned for 10 epochs, with the validation fold used for early stopping (patience = 5). For each model-task combination, the fine-tuning experiment was run for 10 iterations, i.e., five learning rates (2e-5, 4e-5, 6e-5, 8e-5, 1e-4) × two batch sizes (32, 16), with the combination that yielded the highest F1 score averaged across the 8 folds used in the analysis. Finally, we include two conditions fine-tuning XLM-RoBERTa BASE and LARGE.
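The hyperparameter sweep just described (five learning rates × two batch sizes, with the winner chosen by F1 averaged over the folds) can be sketched as follows; the fine-tuning run itself is abstracted behind a user-supplied scoring function, and all names are ours:

```python
from itertools import product
from statistics import mean

LEARNING_RATES = (2e-5, 4e-5, 6e-5, 8e-5, 1e-4)
BATCH_SIZES = (32, 16)

def select_best_config(fold_f1_scores):
    """Return the (lr, batch_size) pair with the highest F1 averaged
    across folds. `fold_f1_scores(lr, bs)` is expected to fine-tune the
    model with that configuration and return one F1 score per fold."""
    return max(
        product(LEARNING_RATES, BATCH_SIZES),
        key=lambda cfg: mean(fold_f1_scores(*cfg)),
    )
```

Averaging over folds before selecting guards against a configuration that happens to do well on a single favorable speaker group.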
This comparison is not fair to the other models, since XLM-RoBERTa is pretrained on a much larger dataset; it was originally included both as a topline and to assess the feasibility of the task. For more efficient fine-tuning, these models underwent vocabulary pruning (Yang et al., 2022) on the training texts for our three languages.

⁷ Notebooks for pretraining LMs and performing the experiments can be accessed at ***.

5 Results

The main results are presented in Figure 1 (see more detailed results in Appendix B). For reduction, the results indicate that the models comfortably outperform the label-distribution baselines (dotted lines), especially for English and French. Across models, in all three languages, the Wikipedia model is outperformed at least by the conversational model. Furthermore, the topline RoBERTa models do not exhibit obvious advantages over the smaller models. It is also worth noting that the models perform much worse in Mandarin than in the other two languages.

As for prominence, performance is more comparable across the three languages. There is also a similar advantage for the conversational and mixed models over the Wikipedia model. Contrary to the reduction task, the topline RoBERTa models do exhibit an advantage over the smaller models.

5.1 Perplexity and benchmarking

As language models' capacity is often evaluated via their perplexity on a given corpus, we calculated per-token perplexity (handling multi-token words with the masking method of Kauf and Ivanova, 2023) for each language-model combination on each fold of the benchmarks, shown in Figure 2. For the smaller models, the three languages exhibit distinct patterns: for English, the perplexity scores were very similar, while for French, the conversational model achieved the lowest perplexity, followed by the mixed model and the Wiki model.
For Mandarin, the pattern is reversed, with the Wiki model having the lowest perplexity. In other words, for English and especially Mandarin, there is a divergence between perplexity scores on the benchmark and the fine-tuning results.

We also examined the correlation between token-level surprisal and gold labels, an analysis similar to the aforementioned studies that examine whether surprisal/perplexity predicts behavioral measurements such as reading time. The results are shown in Figure 3. Based on observations from phonetic research, the correlation is expected to be negative for reduction and positive for prominence (Aylett and Turk, 2004; Seyfarth, 2014). For prominence, the correlation is more positive for the conversational and mixed models, as observed in the fine-tuning experiments, suggesting that the models' fine-tuning performance on categorical prominence prediction is partially influenced by the behavior of the base models.

For reduction, the connection is less established: the expected negative correlation between perplexity and reduction is only borne out for the conversational and mixed models in English, even though the relative difference between the Wiki and conversational/mixed models in French is similar. Notably, the Wiki model is the only small model that showed the expected correlation in Mandarin. As these patterns are not perfect matches of the
fine-tuning results, it appears that, at least for reduction, the fine-tuning task cannot be trivially predicted from the base models' behavior.

Figure 1: F-scores on the benchmarks as a function of model (x-axis), task, and language. Model comparisons are based on Bayesian regression analyses where MODEL is the fixed predictor and FOLD is a random intercept. The models were run with weak (uniform) priors using the brms package in R, with post hoc hypothesis testing focused on comparing the three small models (Wiki, conversational, mixed). Stars in the figure indicate a one-sided hypothesis with a posterior probability above 95%. Dotted lines correspond to random baselines.

Figure 2: Perplexity on the benchmarks.

Figure 3: The correlation between perplexity and labels.

These observations, summarized in terms of the winner of the comparison between the conversational and Wiki models in Table 1, paint a more nuanced picture of the relationship between fine-tuning performance, perplexity, and how well perplexity predicts reduction and prominence. They also add to previous discussions of whether better language models in terms of perplexity guarantee better cognitive modeling (Kuribayashi et al., 2021).

Table 1: Winners of the comparison between the conversational and Wiki models under different criteria: higher F1 in fine-tuning (FT); lower perplexity (ppl); lower perplexity-label correlation (ppl-label cor.) for reduction and higher correlation for prominence.

lge   task   FT    ppl   ppl-label cor.
Eng.  both   conv  n.s.  conv
Fre.  both   conv  conv  conv
Man.  reduc  conv  wiki  wiki
Man.  prom   conv  wiki  conv

5.2 Error Analysis

To better understand model errors in predicting prominence and reduction, we calculated the most over-predicted and under-predicted words for each language based on the difference between the models' predictions and gold-label rates, as shown in Appendix C. The variances are very low across models, showing that models with different pretraining data have similar tendencies in over- and under-prediction.

Across languages, the most over-predicted words for reduction are function words and discourse markers. As for prominence, over-predicted words include discourse markers and content words directly linked to the topic of the discourse. Under-predicted reductions are mostly content words and less commonly reduced expressions. For prominence, under-predicted words also tend to be content words. We refer the reader to Appendix C for examples of these words. A general explanation may be that the models have learned that some words are intrinsically more likely to be reduced (Bodur et al., 2023) or to be prominent, and are therefore biased towards the more common class. This tendency is likely related to lexical frequency and the distribution of words within the discourse structure. It is worth noting that, even though over- and under-predictions are observed, the overall correlation between frequency and label assignments did not differ greatly between model predictions and gold labels (Figures 10 and 11 in Appendix D).

6 Potential shortcomings and Limitations

Information-centric nature. One potential limitation is that the models may only capture the information-theoretic contribution to our tasks, i.e., the well-noted relationship between information-theoretic notions such as information
density, entropy, and predictability, and phonetic realizations. However, the prediction of these phenomena cannot be reduced to information-theoretic explanations alone, and our analyses do show some divergences between the fine-tuning experiments and the direct correlations between perplexity and benchmark labels. As each metric introduces its own set of subtleties related to language processing, our goal is to evaluate LLMs in terms of their ability to grasp these subtleties.

Text-only. The proposed phenomena for probing the models are inherently related to speech processing, which goes beyond what can be achieved with a text-only approach. Beyond the acoustic modality, the visual channel also plays a role. Our goal in proposing these metrics is not to achieve state-of-the-art performance in predicting these phenomena. Rather, we aim to treat them as ‘traces’ of human language processing visible at the surface level, and to test which models are better at predicting these traces from text-only input.

Surface-level shortcuts. A concern related to the previous point is the risk that models may rely on surface-level elements as shortcuts to predict the target variables. Addressing such a concern is inherently challenging, as speech phenomena are deeply intertwined with observable surface patterns. Nonetheless, we believe this line of investigation is still worth pursuing, and it can be extended in particular through controlled evaluation sets, similar to the approach of McCoy et al. (2019), who compared models with heuristics to isolate the influence of surface-level cues. The cross-lingual phenomena in our proposed benchmarks also invite future work on principled methodologies for evaluating the ‘cognitive’ processes inside a language model.

Triviality of the main result.
From a machine-learning perspective, it might seem a trivial result that models trained on data similar to the test sets perform better than models trained on other types of data. First, it is worth emphasizing that the pretraining datasets and benchmarks in our experiments are completely independent: they do not come from the same raw corpora, and the pretraining datasets and the corpora used to build the benchmarks were curated by different teams and transcribed following different conventions. Nevertheless, we cannot deny that the conversational datasets are in every respect (sentence-length distribution, lexical frequencies, etc.) more similar to the benchmarks than the Wikipedia datasets are. As trivial as it seems, this may be one of our main points: to produce models more closely related to human cognition, one should use datasets made of spontaneous speech (and not generic textual/web content). Furthermore, the additional analyses of perplexity, and of how perplexity directly correlates with the benchmark labels, show that the advantage of conversational models on these conversational benchmarks is not guaranteed under all possible ways of ‘evaluating’ these models.

7 Conclusion

In this paper, we propose advancing the benchmarking of language models to include spontaneous speech phenomena and to extend beyond English. Motivated by the BabyLM initiative, we evaluated models trained on developmentally plausible amounts of data and showed how models trained on different genres of
texts perform differently. In the future, it will be crucial to explore more nuanced variations in training data, such as balancing conversational speech, child-directed speech, and simple texts, as well as developing complementary evaluation metrics.

From a broader perspective, we hope to have shown that benchmarks like BLiMP, which require a significant amount of expert and naive human input to build, can be complemented, as explored in this paper, with benchmarks derived from existing high-quality linguistic corpora, without additional human effort.

Acknowledgments

We would like to thank Shu-Chuan Tseng and Pierre Magistry for discussions in relation to this paper. This study was supported by Taiwan's National Science and Technology Council (NSTC-112-2410-H-001-098-MY2).

References

Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In Proceedings of ICLR Conference Track, Toulon, France.

Matthew Aylett and Alice Turk. 2004. The smooth signal redundancy hypothesis: A functional explanation for relationships between redundancy, prosodic prominence, and duration in spontaneous speech. Language and Speech, 47(1):31–56.

Lisa Beinborn and Yuval Pinter. 2023. Analyzing cognitive plausibility of subword tokenization. In The 2023 Conference on Empirical Methods in Natural Language Processing.

Alan Bell, Jason M. Brenier, Michelle Gregory, Cynthia Girand, and Dan Jurafsky. 2009. Predictability effects on durations of content and function words in conversational English. Journal of Memory and Language, 60(1):92–111.

Christophe Benzitoun, Jeanne-Marie Debaisieux, and Henri-José Deulofeu. 2016. Le projet Orféo: un corpus d'étude pour le français contemporain. Corpus, (15).

Joachim Bingel, Maria Barrett, and Anders Søgaard. 2016. Extracting token-level signals of syntactic processing from fMRI, with an application to POS induction.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 747–755.

Marcel Binz and Eric Schulz. 2023. Using cognitive psychology to understand GPT-3. Proceedings of the National Academy of Sciences, 120(6):e2218523120.

Philippe Blache, Roxane Bertrand, Gaëlle Ferré, Berthille Pallaud, Laurent Prévot, and Stéphane Rauzy. 2017. The Corpus of Interactional Data: A large multimodal annotated resource. Handbook of Linguistic Annotation, pages 1323–1356.

Kübra Bodur, Corinne Fredouille, Stéphane Rauzy, and Christine Meunier. 2023. Exploring the nuances of reduction in conversational speech: Lexicalized and non-lexicalized reductions. PsyArXiv.

Francesco Cabiddu, Mitja Nikolaus, and Abdellah Fourtassi. 2025. Comparing children and large language models in word sense disambiguation: Insights and challenges. Language Development Research, 5(1).

Lih-Huei Chang, Zhong-Ru Chang, Yan-Chang Ke, and Su-Hui Xiao. 2011. Taiwan Child Language Corpus (TCCM). National Science Council Research Project, NSC96-2420-H-002-030. Available at http://lope.linguistics.ntu.edu.tw/tccm/.

Tyler A. Chang, Catherine Arnett, Zhuowen Tu, and Benjamin K. Bergen. 2024. Goldfish: Monolingual language models for 350 languages. arXiv preprint arXiv:2408.10441.

Leshem Choshen, Ryan Cotterell, Michael Y. Hu, Tal Linzen, Aaron Mueller, Candace Ross, Alex Warstadt, Ethan Wilcox, Adina Williams, and Chengxu Zhuang. 2024. [Call for papers] The 2nd BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus. arXiv preprint arXiv:2404.06214.

Morten H. Christiansen and Nick Chater. 2022. The Language Game: How Improvisation Created Language and
changed the world. Random House.
Kawai Chui and Huei-ling Lai. 2008. The NCCU corpus of spoken Chinese: Mandarin, Hakka, and Southern Min. Taiwan Journal of Linguistics, 6(2).
Uriel Cohen Priva. 2012. Sign and signal: Deriving linguistic generalizations from information utility. Ph.D. thesis, Stanford University.
Uriel Cohen Priva and T Florian Jaeger. 2018. The interdependence of frequency, predictability, and informativity in the segmental domain. Linguistics Vanguard, 4(s2).
Wenqian Cui, Xiaoqi Jiao, Ziqiao Meng, and Irwin King. 2025. VoxEval: Benchmarking the knowledge understanding capabilities of end-to-end spoken language models. arXiv preprint arXiv:2501.04962.
Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics.
Allyson Ettinger, Ahmed Elgohary, and Philip Resnik. 2016. Probing for semantic evidence of composition by means of simple classification tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 134–139.
Steven Feng, Noah Goodman, and Michael C Frank. 2024. Is child-directed speech effective training data for language models? In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 22055–22071.
Susanne Gahl. 2008. Time and thyme are not homophones: The effect of lemma frequency on word durations in spontaneous speech. Language, 84(3):474–496.
Susanne Gahl, Yao Yao, and Keith Johnson. 2012. Why reduce? Phonological neighborhood density and phonetic reduction in spontaneous speech. Journal of Memory and Language, 66(4):789–806.
Youssef Hmamouche, Magalie Ochs, Laurent Prévot, and Thierry Chaminade. 2024. Interpretable prediction of brain activity during conversations from multimodal behavioral signals. PLOS ONE, 19(3):e0284342.
Nora Hollenstein, Emmanuele Chersoni, Cassandra L Jacobs, Yohei Oseki, Laurent Prévot, and Enrico Santus. 2021. CMCL 2021 shared task on eye-tracking prediction. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 72–78.
Nora Hollenstein, Jonathan Rotsztejn, Marius Troendle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. ZuCo, a simultaneous EEG and eye-tracking resource for natural sentence reading. Scientific Data, 5(1):1–13.
Michael Y Hu, Aaron Mueller, Candace Ross, Adina Williams, Tal Linzen, Chengxu Zhuang, Ryan Cotterell, Leshem Choshen, Alex Warstadt, and Ethan Gotlieb Wilcox. 2024. Findings of the second BabyLM Challenge: Sample-efficient pretraining on developmentally plausible corpora. arXiv preprint arXiv:2412.05149.
Sofoklis Kakouros and Johannah O'Mahony. 2023. What does BERT learn about prosody? In 20th International Congress of Phonetic Sciences (ICPhS). International Phonetics Association.
Carina Kauf and Anna Ivanova. 2023. A better way to do masked language model scoring. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 925–935.
Taku Kudo. 2018a. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226.
Taku Kudo. 2018b. Subword regularization: Improving neural network translation models with multiple subword candidates. arXiv preprint arXiv:1804.10959.
Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, and Kentaro Inui. 2021. Lower perplexity is not always human-like. In Proceedings of the
59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 5203–5217. Association for Computational Linguistics.
Stephen C Levinson. 2020. On the human "interaction engine". In Roots of Human Sociality, pages 39–69. Routledge.
Jixing Li, Shohini Bhattasali, Shulin Zhang, Berta Franzluebbers, Wen-Ming Luh, R Nathan Spreng, Jonathan R Brennan, Yiming Yang, Christophe Pallier, and John Hale. 2022. Le Petit Prince multilingual naturalistic fMRI corpus. Scientific Data, 9(1):530.
Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles.
Brian MacWhinney. 2014. The CHILDES project: Tools for analyzing talk, Volume I: Transcription format and programs. Psychology Press.
Richard Diehl Martinez, Hope McGovern, Zebulon Goriely, Christopher Davis, Andrew Caines, Paula Buttery, and Lisa Beinborn. 2023. CLIMB: Curriculum learning for infant-inspired model building. In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, pages 112–127.
Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics.
Vincent Micheli, Martin d'Hoffschmidt, and François Fleuret. 2020. On the importance of pre-training data volume for compact language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7853–7858, Online. Association for Computational Linguistics.
Alexandre Pasquiou, Yair Lakretz, John T Hale, Bertrand Thirion, and Christophe Pallier. 2022. Neural language models are not born equal to fit brain data, but training helps.
In International Conference on Machine Learning, pages 17499–17516. PMLR.
Mark A Pitt, Keith Johnson, Elizabeth Hume, Scott Kiesling, and William Raymond. 2005. The Buckeye corpus of conversational speech: Labeling conventions and a test of transcriber reliability. Speech Communication, 45(1):89–95.
Mark Pluymaekers, Mirjam Ernestus, and R Harald Baayen. 2005. Lexical frequency and acoustic reduction in spoken Dutch. The Journal of the Acoustical Society of America, 118(4):2561–2569.
Birgit Rauchbauer, Bruno Nazarian, Morgane Bourhis, Magalie Ochs, Laurent Prévot, and Thierry Chaminade. 2019. Brain activity during reciprocal social interaction investigated using conversational robots as control condition. Philosophical Transactions of the Royal Society B, 374(1771):20180033.
Yvan Rose and Brian MacWhinney. 2014. The PhonBank project: Data and software-assisted methods for the study of phonology and phonological development.
Scott Seyfarth. 2014. Word informativity influences acoustic duration: Effects of contextual predictability on lexical representation. Cognition, 133(1):140–155.
Taiga Someya and Yohei Oseki. 2023. JBLiMP: Japanese benchmark of linguistic minimal pairs. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1581–1594.
Michelle Suijkerbuijk, Zoë Prins, Marianne de Heer Kloots, Willem Zuidema, and Stefan L Frank. BLiMP-NL.
Antti Suni, Juraj Šimko, Daniel Aalto, and Martti Vainio. 2017. Hierarchical representation and estimation of prosody using continuous wavelet transform. Computer Speech & Language, 45:123–136.
Ekaterina Taktasheva, Maxim Bazhukov, Kirill Koncha, Alena Fenogenova, and Ekaterina Artemova. 2024. RuBLiMP: Russian benchmark of linguistic minimal
pairs. arXiv preprint arXiv:2406.19232.
Aarne Talman, Antti Suni, Hande Celikkanat, Sofoklis Kakouros, Jörg Tiedemann, and Martti Vainio. 2019. Predicting prosodic prominence from text with pre-trained contextualized word representations. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 281–290.
Kevin Tang and Ryan Bennett. 2018. Contextual predictability influences word and morpheme duration in a morphologically complex language (Kaqchikel Mayan). The Journal of the Acoustical Society of America, 144(2):997–1017.
Shu-Chuan Tseng. 2013. Lexical coverage in Taiwan Mandarin conversation. In International Journal of Computational Linguistics & Chinese Language Processing, Volume 18, Number 1, March 2013.
RJJH Van Son and Louis CW Pols. 2003. How efficient is speech. In Proceedings of the Institute of Phonetic Sciences, volume 25, pages 171–184.
RJJH Van Son, Louis CW Pols, et al. 1999. Effects of stress and lexical structure on speech efficiency. In EUROSPEECH.
Sheng-Fu Wang. 2022. The interaction between predictability and pre-boundary lengthening on syllable duration in Taiwan Southern Min. Phonetica, 79(4):315–352.
Alex Warstadt, Leshem Choshen, Aaron Mueller, Adina Williams, Ethan Wilcox, and Chengxu Zhuang. 2023a. Call for papers: The BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus. arXiv preprint arXiv:2301.11796.
Alex Warstadt, Aaron Mueller, Leshem Choshen, Ethan Wilcox, Chengxu Zhuang, Juan Ciro, Rafael Mosquera, Bhargavi Paranjabe, Adina Williams, Tal Linzen, et al. 2023b. Findings of the BabyLM Challenge: Sample-efficient pretraining on developmentally plausible corpora. In Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning, pages 1–34.
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2019a. BLiMP: The benchmark of linguistic minimal pairs for English.
Transactions of the Association for Computational Linguistics.
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R Bowman. 2020a. BLiMP: The benchmark of linguistic minimal pairs for English. Transactions of the Association for Computational Linguistics, 8:377–392.
Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019b. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641.
Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun Liu, and Samuel R. Bowman. 2020b. Learning which features matter: RoBERTa acquires a preference for linguistic generalizations (eventually). In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 217–235, Online. Association for Computational Linguistics.
Lukas Wolf, Tiago Pimentel, Evelina Fedorenko, Ryan Cotterell, Alex Warstadt, Ethan Wilcox, and Tamar Regev. 2023. Quantifying the redundancy between prosody and text. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9765–9784, Singapore. Association for Computational Linguistics.
Beilei Xiang, Changbing Yang, Yu Li, Alex Warstadt, and Katharina Kann. 2021. CLiMP: A benchmark for Chinese language model evaluation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2784–2790, Online. Association for Computational Linguistics.
Ziqing Yang, Yiming Cui, and Zhigang Chen. 2022. TextPruner: A model pruning toolkit for pre-trained language models. In
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 35–43, Dublin, Ireland. Association for Computational Linguistics.
Yian Zhang, Alex Warstadt, Xiaocheng Li, and Samuel R. Bowman. 2021. When do you need billions of words of pretraining data? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1112–1125, Online. Association for Computational Linguistics.

A Distribution of benchmark labels

Figure 4: Distribution of reduction ratios as calculated in the English Dataset and the threshold selected.
Figure 5: Distribution of reduction ratios as calculated in the French Dataset and the threshold selected.
Figure 6: Distribution of reduction ratios as calculated in the Mandarin Dataset and the threshold selected.
Figure 7: Distribution of prominence scores as calculated in the English Dataset and the threshold selected.
Figure 8: Distribution of prominence scores as calculated in the French Dataset and the threshold selected.
Figure 9: Distribution of prominence scores as calculated in the Mandarin Dataset and the threshold selected.
B Complete scores

Table 2: Full results on the proposed speech-based benchmarks (standard deviations in parentheses).

Language  Task        Model        F1           Precision    Recall
English   reduction   wiki         .433 (.020)  .451 (.038)  .422 (.052)
                      conv         .442 (.014)  .465 (.023)  .426 (.041)
                      mixed        .436 (.019)  .448 (.025)  .429 (.048)
                      xlm-r-base   .452 (.013)  .458 (.029)  .451 (.041)
                      xlm-r-large  .448 (.013)  .433 (.033)  .473 (.055)
          prominence  wiki         .378 (.028)  .407 (.035)  .356 (.044)
                      conv         .394 (.027)  .428 (.027)  .366 (.041)
                      mixed        .390 (.025)  .414 (.040)  .372 (.035)
                      xlm-r-base   .414 (.018)  .463 (.046)  .378 (.032)
                      xlm-r-large  .422 (.024)  .447 (.039)  .405 (.051)
French    reduction   wiki         .441 (.034)  .472 (.051)  .415 (.031)
                      conv         .455 (.023)  .488 (.036)  .429 (.038)
                      mixed        .456 (.024)  .498 (.038)  .424 (.035)
                      xlm-r-base   .460 (.027)  .491 (.047)  .437 (.042)
                      xlm-r-large  .454 (.028)  .473 (.036)  .438 (.030)
          prominence  wiki         .416 (.015)  .415 (.039)  .422 (.032)
                      conv         .443 (.017)  .453 (.035)  .436 (.032)
                      mixed        .445 (.031)  .456 (.030)  .435 (.045)
                      xlm-r-base   .474 (.030)  .478 (.037)  .472 (.038)
                      xlm-r-large  .464 (.020)  .463 (.036)  .468 (.029)
Mandarin  reduction   wiki         .169 (.016)  .154 (.023)  .195 (.038)
                      conv         .241 (.024)  .248 (.044)  .239 (.033)
                      mixed        .230 (.029)  .251 (.051)  .220 (.042)
                      xlm-r-base   .192 (.031)  .179 (.036)  .215 (.045)
                      xlm-r-large  .180 (.026)  .174 (.034)  .194 (.043)
          prominence  wiki         .365 (.019)  .345 (.023)  .390 (.033)
                      conv         .401 (.008)  .398 (.018)  .405 (.025)
                      mixed        .395 (.008)  .379 (.019)  .413 (.015)
                      xlm-r-base   .444 (.020)  .447 (.029)  .444 (.037)
                      xlm-r-large  .426 (.031)  .429 (.037)  .425 (.036)

C Word-level error analysis

For each language and task, we measured the most over-predicted and the most under-predicted words by rate difference, which is the difference between the rate of "true" labels in the model's predictions and in the ground truth. The top five and bottom five words ranked by rate difference are listed below. Since a single word may be split into multiple tokens, we determine a word's prediction based on majority voting among its tokens.
In the case of a tie, the word is classified as “true” (reduced/prominent). We only include words that | https://arxiv.org/abs/2505.16277v1 |
appear at least 50 times.

Table 3: The most over-predicted and under-predicted words for each task. For each pretraining scheme (wiki, conversational, mixed, RoBERTa-base and RoBERTa-large), we selected the best-performing model and calculated every word's rate difference. Only the top-5 and bottom-5 words in rate difference are listed, along with the mean across the 5 models (and the standard deviation in parentheses).

Language  Task        Top-5           Rate difference   Bottom-5      Rate difference
English   reduction   youre           .284 (.021)       u¹            -.255 (.033)
                      dont            .219 (.033)       everybody     -.232 (.070)
                      theyre          .209 (.063)       only          -.232 (.111)
                      a               .179 (.008)       ill           -.226 (.057)
                      at              .170 (.022)       many          -.209 (.055)
          prominence  basically       .215 (.082)       vote          -.225 (.023)
                      obviously       .165 (.145)       grade         -.144 (.047)
                      okay            .134 (.022)       wanted        -.126 (.036)
                      yeah            .121 (.048)       ten           -.125 (.044)
                      um              .110 (.027)       times         -.125 (.024)
French    reduction   jesuis          .302 (.036)       ont           -.183 (.047)
                      jesais          .236 (.012)       vais          -.180 (.071)
                      peutêtre        .205 (.026)       vois          -.160 (.022)
                      ai              .160 (.041)       oh            -.154 (.021)
                      de              .147 (.031)       te            -.149 (.036)
          prominence  mhm             .352 (.179)       etc           -.167 (.108)
                      problème        .185 (.074)       école         -.144 (.107)
                      accord          .178 (.055)       soit          -.117 (.048)
                      gamin           .161 (.051)       oh            -.115 (.069)
                      moment          .134 (.039)       pense         -.106 (.037)
Mandarin  reduction   de0             .201 (.191)       xue2-xiao4    -.241 (.094)
                      ke3-shi4        .192 (.059)       zhe4-ge0      -.089 (.015)
                      yin1-wei1       .148 (.044)       bian4-cheng2  -.084 (.040)
                      le0             .143 (.072)       da4-jia1      -.079 (.040)
                      suo3-yi3        .142 (.053)       bu4-neng2     -.070 (.048)
          prominence  dang1-bing1     .272 (.044)       san1          -.136 (.049)
                      xue2-xiao4      .183 (.104)       jiao4         -.131 (.027)
                      wei4-shen2-me0  .119 (.057)       na4-me0       -.091 (.047)
                      ru2-guo3        .112 (.102)       dong1-xi1     -.079 (.019)
                      da4-jia         .105 (.049)       che1          -.076 (.011)

¹ u as in the initialism "OSU (Ohio State University)", which appears frequently in the Buckeye corpus.

D Token frequency vs. gold and predicted labels

Figure 10: Predicted and gold reduction label rates as a function of log frequency (x axis, z-scored) and model type × language (panels).
Figure 11: Predicted and gold prominence label rates as a function of log frequency (x axis, z-scored) and model type × language (panels).
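The word-level aggregation used in the error analysis above (majority voting over a word's tokens, ties resolved as "true", and a 50-occurrence frequency floor for rate differences) can be sketched as follows. The function names and the (word, token_labels, gold_label) record format are illustrative assumptions, not taken from the original code:

```python
from collections import Counter, defaultdict

def word_label(token_labels):
    # A word's predicted label is the majority vote over its tokens;
    # a tie is resolved as True (i.e. reduced/prominent), as in Appendix C.
    votes = Counter(token_labels)
    return votes[True] >= votes[False]

def rate_differences(records, min_count=50):
    # records: iterable of (word, token_labels, gold_label) triples.
    # Returns {word: predicted "true" rate - gold "true" rate} for words
    # seen at least `min_count` times; positive values mark over-predicted
    # words, negative values under-predicted ones.
    pred, gold, count = defaultdict(int), defaultdict(int), defaultdict(int)
    for word, token_labels, gold_label in records:
        count[word] += 1
        pred[word] += word_label(token_labels)
        gold[word] += bool(gold_label)
    return {w: pred[w] / count[w] - gold[w] / count[w]
            for w in count if count[w] >= min_count}
```

Ranking the resulting dictionary by value and taking the five largest and five smallest entries reproduces the "top-5 / bottom-5" lists of Table 3.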
arXiv:2505.16281v1 [cs.CL] 22 May 2025

HiMATE: A Hierarchical Multi-Agent Framework for Machine Translation Evaluation

Shijie Zhang1*, Renhao Li1,2*†, Songsheng Wang1*, Philipp Koehn3, Min Yang2, and Derek F. Wong1
1NLP2CT Lab, Department of Computer and Information Science, University of Macau
2Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
3Johns Hopkins University
nlp2ct.{shijie,songsheng}@gamil.com, li.renhao@connect.um.edu.mo, phi@jhu.edu, min.yang@siat.ac.cn, derekfw@um.edu.mo

Abstract

The advancement of Large Language Models (LLMs) enables flexible and interpretable automatic evaluations. In the field of machine translation evaluation, utilizing LLMs with translation error annotations based on Multidimensional Quality Metrics (MQM) yields more human-aligned judgments. However, current LLM-based evaluation methods still face challenges in accurately identifying error spans and assessing their severity. In this paper, we propose HiMATE, a Hierarchical Multi-Agent Framework for Machine Translation Evaluation. We argue that existing approaches inadequately exploit the fine-grained structural and semantic information within the MQM hierarchy. To address this, we develop a hierarchical multi-agent system grounded in the MQM error typology, enabling granular evaluation of subtype errors. Two key strategies are incorporated to further mitigate systemic hallucinations within the framework: the utilization of the model's self-reflection capability and the facilitation of agent discussion involving asymmetric information. Empirically, HiMATE outperforms competitive baselines across different datasets in conducting human-aligned evaluations. Further analyses underscore its significant advantage in error span detection and severity assessment, achieving an average F1-score improvement of 89% over the best-performing baseline. We make our code and data publicly available at https://anonymous.4open.science/r/HiMATE-Anony.
1 Introduction

Translation capability reflects the cross-lingual comprehension and generation quality of Large Language Models (LLMs) (Hendy et al., 2023; Yang et al., 2024; Dubey et al., 2024). Compared to traditional machine translation models, applying LLMs significantly enhances the translation capabilities of machine translation systems and introduces new challenges to Machine Translation Evaluation (MTE) (Kocmi et al., 2024).

*Equal contribution. †Under the Joint Ph.D. Program between UM and SIAT. Corresponding author.

Figure 1: A comparison of HiMATE and other representative MQM-based metrics for MTE.

Previously, statistics-based MTE metrics such as BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) are utilized for their efficiency, despite inherent limitations in accurately capturing semantic similarity. To address this gap, learned metrics such as COMET-20 (Rei et al., 2020) and RoBLEURT (Wan et al., 2021) have been developed to enhance evaluations. However, these approaches primarily yield numeric scores and cannot identify the specific error spans in translations (Fernandes et al., 2023; Leiter et al., 2022). The machine translation data, annotated based on the guidelines of Multidimensional Quality Metrics (MQM),¹ includes detailed information regarding error spans, categories, and severity for each translated sentence, which facilitates more precise finetuning and thorough analysis of MTE models. The xCOMET model (Guerreiro et al., 2024) is finetuned on such data for interpretable scores, while it necessitates resource-intensive multi-stage training and shows a lack of generalization ability. The LLM-based single-agent evaluation systems, including EAPrompt (Lu et al., 2024) and GEMBA-MQM (Kocmi and Federmann, 2023a), utilize MQM-informed prompts to guide LLMs in error detection. Nevertheless, the hallucination and position bias inherent within LLM evaluators affect their performance (Wang et al., 2024). In response to these challenges, the recently proposed method M-MAD (Feng et al., 2024) employs a multi-agent system to decompose complex MTE tasks into simpler assessments, showcasing the potential of this paradigm. However, its effectiveness primarily stems from the self-consistency of agents operating within identical contexts, which may limit nuanced, multi-level error analysis. We argue that the rich semantic and hierarchical structure of the MQM framework has been largely underutilized in current methodological designs. By developing a more fine-grained, hierarchical agent-based system explicitly tailored to leverage the untapped potential of MQM's structured hierarchy, error detection accuracy could be significantly enhanced, thereby narrowing the gap with human expert performance.

In this paper, we propose HiMATE, a Hierarchical Multi-Agent Framework for Machine Translation Evaluation. Specifically, we establish the agent topology based on the MQM error typology, facilitating hierarchical agent nodes to exchange information in alignment with human error classification methodology.

¹themqm.org
Error descriptions associated with each node are then employed to initialize multi-agent evaluators, allowing them to focus on detailed, simplified discrimination tasks rather than evaluations against broad, comprehensive criteria. This design facilitates fine-grained subtype error evaluations through multi-agent collaboration. Furthermore, we propose two post-stages to enhance the accuracy of the system in error detection and severity assessment. Concretely, the subtype evaluator first revises the original translations based on its error detection results, then re-evaluates the validity of the initial judgments through comparison. Cases with low confidence during this self-reflection phase are forwarded to the next stage, where agents from different tiers collaboratively determine the presence and severity of translation errors through iterative discussion. In contrast to prior methods, the proposed method leverages MQM information more effectively for decomposing complex tasks and constructing hierarchical multi-agent systems. By guiding these agents through self-reflection and collaboration, we achieve more human-aligned automatic evaluations. Figure 1 highlights the key differences between HiMATE and other representative MTE metrics. We summarize our contributions as follows:
· We propose HiMATE, a novel multi-agent machine translation evaluation framework that leverages MQM hierarchy-derived structural and semantic information to define agent topology, resulting in reliable, human-aligned judgments.
· We develop a three-stage strategy to enhance error detection and severity assessment within the framework, facilitated by self-reflection and collaborative discussions among agents.
· Empirical results across various models and datasets demonstrate the effectiveness of HiMATE. Further analyses highlight its superiority in both error span identification
and severity evaluation, achieving an average 89% improvement in F1-score and 95% enhancement in Recall over the best-performing baseline.

2 Related Work

2.1 Machine Translation Evaluation

The objective of MTE is to assess the quality of machine-translated text. Traditional statistics-based metrics (Papineni et al., 2002; Lin, 2004; Zhang et al., 2019) often inadequately capture semantic similarity (Rei et al., 2020). Certain learned metrics (Rei et al., 2022a,b; Guerreiro et al., 2024) have shown improvement, while high iteration costs and limited generalization ability constrain these methods. Recent advancements harness the power of LLMs for translation evaluation. GEMBA-DA (Kocmi and Federmann, 2023b) implements direct scoring through prompt tuning, whereas EAPrompt (Lu et al., 2024) identifies errors without numerical scoring to mitigate bias. AUTOMQM (Fernandes et al., 2023) and GEMBA-MQM (Kocmi and Federmann, 2023a) prompt models to produce a score within the MQM framework. However, these methods insufficiently utilize the hierarchical information of the MQM framework.

2.2 Evaluation based on Multi-Agent

Figure 2: An overview of the proposed framework HiMATE, where translation errors are categorized into tier-1 high-level errors and tier-2 fine-grained subtype errors. The tier-2 agent first evaluates a subtype error. Then the judgment is refined through the self-reflection stage. If needed, the tier-2 agent discusses with its upper-level tier-1 agent for further confirmation. The final score is computed by summing the weighted values of errors. (The figure also tabulates the MQM error weighting: major errors weigh -5 for all categories; minor errors -1, or -0.1 for Fluency/Punctuation; neutral errors 0.)

Multi-agent systems have demonstrated their potential in enhancing the accuracy of evaluation within Natural Language Generation (NLG). ChatEval (Chan et al., 2023) employs LLMs in a framework similar to human group discussions. MATEval (Li et al., 2024) introduces architectures that rely on multi-round discussion and summarization for evaluation. Similarly, a courtroom-inspired architecture has been proposed, utilizing LLMs as adversarial judges engaging in multi-round debates for NLG assessment (Bandi and Harrasse, 2024). M-MAD (Feng et al., 2024) integrates multi-agent systems into MTE, leveraging the collaborative reasoning capabilities of LLMs. Nevertheless, existing approaches for MTE, such as M-MAD, insufficiently leverage fine-grained error definitions and semantic information within the MQM hierarchy.

3 Methodology

In this section, we introduce our proposed framework HiMATE for conducting human-aligned machine translation evaluations. Adhering to the hierarchical structure of the MQM error typology, HiMATE organizes translation errors into two tiers: high-level error categories (tier-1) and fine-grained error subtypes (tier-2). Two distinct types of agents are assigned to manage these tiers, with each agent specializing in assessing specific error categories, as depicted in Figure 2. Initially, tier-2 agents individually perform evaluations on their corresponding error subtypes. This preliminary assessment is then refined by harnessing the self-reflection capabilities of LLMs. Finally, results exhibiting low confidence are subjected to collaborative discussion involving tier-1 and tier-2 agents. The entire evaluation process operates without reliance on reference, ensuring broad applicability for evaluation.
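As a sketch, the two-tier organization can be encoded as a mapping from tier-1 categories to tier-2 subtypes, with one agent instantiated per node. The subtype lists below follow the error typology shown in Figure 2 and are an illustrative subset of the full MQM typology; the dictionary-based agent records are an assumption for illustration, not the paper's implementation:

```python
# Tier-1 MQM categories mapped to tier-2 subtypes (subset from Figure 2).
MQM_HIERARCHY = {
    "Accuracy": ["Mistranslation", "Omission", "Addition", "Untranslated text"],
    "Fluency": ["Punctuation", "Spelling", "Grammar", "Register",
                "Inconsistency", "Character encoding"],
    "Style": ["Awkward"],
}

def build_agents(hierarchy):
    # One tier-1 agent per category and one tier-2 agent per subtype.
    # Each tier-2 agent keeps a link to its parent category so it knows
    # which tier-1 agent to consult during the discussion stage.
    tier1 = {cat: {"tier": 1, "category": cat, "subtypes": subs}
             for cat, subs in hierarchy.items()}
    tier2 = {sub: {"tier": 2, "subtype": sub, "parent": cat}
             for cat, subs in hierarchy.items() for sub in subs}
    return tier1, tier2

tier1_agents, tier2_agents = build_agents(MQM_HIERARCHY)
```

In this sketch, an "Omission" evaluation that needs confirmation would be routed to the "Accuracy" tier-1 agent via the stored parent link.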
3.1 Agent Topology

The hierarchical error structure of the evaluation framework comprises high-level core error types e_i (e.g., Accuracy), which are allocated to tier-1 agents A_i^[1], and fine-grained error subtypes e_j (e.g., Omission, Addition) that belong under these core errors and are assigned to tier-2 agents A_j^[2]. Specific descriptions are provided in Appendix A.

3.2 Subtype Evaluation

Initially, the tier-2 agents are required to seek the specific errors individually within the translations. For a given source text x, translation y, and system prompt p_SE, each agent A_j^[2] is provided with distinct error definitions corresponding to tier-2, enabling them to leverage detailed information effectively. All the agents adhere to a uniform definition of severity when evaluating identified errors.

Upon completion of this stage, a preliminary assessment concerning the error and the corresponding severity is produced. Evaluators also provide explanations supporting their judgments. The initial evaluation information I_SE generated above is then passed on to the subsequent stage:

    I_SE ← A_j^[2](x, y, p_SE)    (1)

If no errors are detected, results proceed directly to the final stage; otherwise, sentences flagged with errors advance to the subsequent stage.

3.3 Self-Reflection

To enhance the robustness of the preliminary evaluation, we further validate the initial assessment by harnessing the self-reflection capabilities of LLMs. The prompt p_SR consists of two components, p_c for error correction and p_v for comparative verification. Initially, the tier-2 agents scrutinize all error annotations and justifications from the preliminary assessment. Utilizing the information I_SE, tier-2 agents generate the corrected versions y_c:

    y_c ← A_j^[2](x, y, p_c, I_SE)    (2)

Subsequently, the evaluators compare the original translation y with its corrected version y_c. If an
error is accurately identified during the initial phase, the correction should either rectify the issue or mitigate its severity, thereby validating the initial judgment. Otherwise, this may indicate a false positive in the previous evaluation, suggesting that the purported error might not exist. This verification mechanism yields more precise error judgments, diminishing the workload concerning error type and severity in subsequent stages. Following this stage, additional confirmation regarding the error information I_SR can be procured:

    I_SR ← A_j^[2](y, y_c, p_v), I_SE    (3)

Responses with confidence scores, calculated by summing token logarithmic probabilities, below a predefined stage-transition threshold advance to the subsequent phase, whereas those surpassing this threshold proceed directly to the final stage.

3.4 Collaborative Discussion

The final stage is designed to validate the ambiguous assessment through collaborative discussion. This methodology stems from the observation that judgments with low confidence often indicate potential inaccuracies requiring expert validation. By focusing specifically on these instances, HiMATE achieves enhanced precision without compromising computational efficiency.

In this phase, the tier-1 agents A_i^[1], responsible for high-level error categorization, collaborate and engage in discussions with the tier-2 agents. These tier-1 agents possess comprehensive knowledge of all tier-2 subcategory error definitions within their respective domains, enabling them to evaluate translation from a broader perspective. Throughout the discussion period, tier-1 agents first analyze the information, initially set as I_SR at the start of the discussion, from the tier-2 agents. Subsequently, tier-1 agents provide expert evaluations, integrated with the prior chat history to form an updated discussion record.
Tier-2 agents then contribute their perspectives, indicating agreement or disagreement with the assessments provided by the tier-1 agents, thereby further updating the discussion record. The discussion continues iteratively until consensus is reached or the predefined maximum number of discussion rounds is exceeded. The final evaluation outcome is derived from the history of the concluding conversation.

3.5 Weighted Scoring

Commencing with evaluation of subtype errors, HiMATE refines judgments through a self-reflection process to validate identified errors, eventually engaging in a collaborative discussion to reach consensus. After these three stages, the evaluation outcome for the translation sentence pair (x, y) regarding subtype error e_j is obtained, consisting of the severity label s_j and its corresponding weight w_j. The weights associated with each error subtype and severity level are detailed in Appendix B. The score S can be calculated as:

    S = -Σ_j w_j    (4)

Table 1: Results of MTE methods on the ZH-EN and EN-DE subsets of the MQM22 dataset. A higher Kendall's correlation coefficient (τ) and Spearman's correlation coefficient (s), as well as a lower Mean Absolute Error (MAE) and Mean Squared Error (MSE), indicate better alignment with human evaluations. The abbreviations 4o-mini and qwen2.5 refer to the gpt-4o-mini and qwen2.5-72b-instruct models, respectively. Reference-based methods are indicated with a gray background. The best result in each column is bolded, and the second-best is underlined.

| Model | Method | ZH-EN: τ↑ | s↑ | MAE↓ | MSE↓ | EN-DE: τ↑ | s↑ | MAE↓ | MSE↓ |
|  | BLEU | 0.172 | 0.233 | 0.6849 | 0.5064 | 0.161 | 0.213 | 0.6184 | 0.4322 |
|  | BERTSCORE | 0.300 | 0.400 | 0.3334 | 0.1349 | 0.208 | 0.275 | 0.3936 | 0.1970 |
|  | COMET-22 | 0.369 | 0.489 | 0.1527 | 0.0306 | 0.301 | 0.390 | 0.0948 | 0.0144 |
|  | COMETKIWI | 0.365 | 0.487 | 0.1821 | 0.0386 | 0.212 | 0.277 | 0.1351 | 0.0221 |
|  | xCOMET-XL | 0.394 | 0.515 | 0.1210 | 0.0317 | 0.283 | 0.355 | 0.0567 | 0.0115 |
| 4o-mini | GEMBA-MQM | 0.381 | 0.464 | 0.0755 | 0.0131 | 0.270 | 0.316 | 0.0420 | 0.0052 |
| 4o-mini | M-MAD | 0.350 | 0.435 | 0.0379 | 0.0033 | 0.282 | 0.327 | 0.0164 | 0.0007 |
| 4o-mini | HiMATE (ours) | 0.404 | 0.502 | 0.0421 | 0.0051 | 0.293 | 0.338 | 0.0250 | 0.0026 |
| qwen2.5 | GEMBA-MQM | 0.407 | 0.510 | 0.0419 | 0.0039 | 0.229 | 0.279 | 0.0428 | 0.0045 |
| qwen2.5 | M-MAD | 0.297 | 0.365 | 0.0323 | 0.0027 | 0.227 | 0.266 | 0.0286 | 0.0018 |
| qwen2.5 | HiMATE (ours) | 0.413 | 0.498 | 0.0281 | 0.0028 | 0.248 | 0.274 | 0.0198 | 0.0016 |
| qwen-max | GEMBA-MQM | 0.417 | 0.527 | 0.0873 | 0.0159 | 0.292 | 0.347 | 0.0471 | 0.0071 |
| qwen-max | M-MAD | 0.350 | 0.425 | 0.0417 | 0.0050 | 0.216 | 0.253 | 0.0258 | 0.0017 |
| qwen-max | HiMATE (ours) | 0.425 | 0.531 | 0.0386 | 0.0045 | 0.318 | 0.358 | 0.0187 | 0.0014 |

4 Experiments

4.1 Settings

Dataset. We conduct our main experiments on the MQM22 dataset (Freitag et al., 2021). Due to cost constraints, we randomly select the outputs from the HuaweiTSC system for Chinese-English (ZH-EN) and the comet-bestmbr system for English-German (EN-DE), comprising 1,875 and 1,315 annotated translations, respectively. The human-annotated MQM scores serve as the ground truth for evaluating machine translation quality. To mitigate potential data-leakage risks due to overlaps between LLM pre-training corpora and evaluation datasets, we also perform experiments on the recently released MQM24 dataset. Experimental results are provided in Appendix C.

Baseline. The reference-based evaluation metrics, including BLEU (Papineni et al., 2002), BERTSCORE (Zhang et al., 2019), and COMET-22 (Rei et al., 2022a), as well as the reference-free ones, such as COMETKIWI (Rei et al., 2022b), xCOMET-XL (Guerreiro et al., 2024), GEMBA-MQM (Kocmi and Federmann, 2023a), and M-MAD (Feng et al., 2024), are adopted as baselines for comparison. Among these, GEMBA-MQM and M-MAD are LLM-based methods like ours.

Evaluation Method. Four widely adopted metrics are employed to evaluate the alignment of HiMATE with human judgments.
Specifically, we use Kendall's correlation coefficient τ and Spearman's correlation coefficient s to measure correlation, while using Mean Absolute Error (MAE) and Mean Squared Error (MSE) to quantify similarity.

Experiment Settings. The gpt-4o-mini (Hurst et al., 2024), qwen2.5-72b-instruct, and qwen-max (Yang et al., 2024) models are adopted as backbones for the LLM-based MTE methods. The error typology used in HiMATE is categorized and further refined into 5 core error types and 19 distinct subtypes. The temperature parameter is set to 0. Several instances from the MQM20 and MQM21 datasets are selected to serve as 2-shot demonstration examples for the subtype evaluation stage. Stage-transition thresholds t are adaptively configured for different models (refer to Appendix D for specific procedures). The maximum number of dialogue turns during the collaborative discussion stage is set to four, with each agent allowed at most two statements.

Table 2: Ablation study of HiMATE across different backbone models. The abbreviations 4o-mini and qwen2.5 refer to the gpt-4o-mini and qwen2.5-72b-instruct models, respectively. SE, SR, and CD respectively denote the subtype evaluation, self-reflection, and collaborative discussion stages. SA denotes the single-agent baseline method. The best result for each metric within the same backbone model group is bolded.

| Model | Method | ZH-EN: τ↑ | s↑ | MAE↓ | MSE↓ | EN-DE: τ↑ | s↑ | MAE↓ | MSE↓ |
| 4o-mini | SA | 0.345 | 0.417 | 0.0427 | 0.0042 | 0.210 | 0.230 | 0.0166 | 0.0008 |
| 4o-mini | HiMATE-SE | 0.359 | 0.468 | 0.1614 | 0.0614 | 0.278 | 0.329 | 0.0698 | 0.0205 |
| 4o-mini | -SE+SR | 0.370 | 0.477 | 0.1178 | 0.0362 | 0.283 | 0.332 | 0.0497 | 0.0106 |
| 4o-mini | -SE+SR+CD | 0.404 | 0.502 | 0.0421 | 0.0051 | 0.293 | 0.338 | 0.0250 | 0.0026 |
| qwen2.5 | SA | 0.341 | 0.413 | 0.0356 | 0.0031 | 0.183 | 0.200 | 0.0197 | 0.0017 |
| qwen2.5 | HiMATE-SE | 0.397 | 0.489 | 0.0572 | 0.0172 | 0.240 | 0.267 | 0.0341 | 0.0100 |
| qwen2.5 | -SE+SR | 0.406 | 0.496 | 0.0479 | 0.0110 | 0.245 | 0.271 | 0.0281 | 0.0054 |
| qwen2.5 | -SE+SR+CD | 0.413 | 0.498 | 0.0281 | 0.0028 | 0.248 | 0.274 | 0.0198 | 0.0016 |
| qwen-max | SA | 0.395 | 0.480 | 0.0374 | 0.0049 | 0.286 | 0.325 | 0.0200 | 0.0014 |
| qwen-max | HiMATE-SE | 0.409 | 0.523 | 0.1047 | 0.0405 | 0.292 | 0.333 | 0.0436 | 0.0140 |
| qwen-max | -SE+SR | 0.409 | 0.517 | 0.0925 | 0.0326 | 0.290 | 0.329 | 0.0354 | 0.0085 |
| qwen-max | -SE+SR+CD | 0.425 | 0.531 | 0.0386 | 0.0045 | 0.318 | 0.358 | 0.0187 | 0.0014 |

4.2 Main Results

Table 1 illustrates the experimental results of different MTE methods on the MQM22 dataset. According to the results, HiMATE consistently exhibits improvements across different evaluation metrics and backbone models. Stable enhancements in correlation and similarity metrics are observed for both the ZH-EN and EN-DE translation tasks.

Specifically, HiMATE showcases improved alignment with human evaluations, as evidenced by the correlation coefficients. For ZH-EN, when utilizing the qwen-max model, HiMATE achieves peak correlation values of 0.425 in Kendall's correlation coefficient and 0.531 in Spearman's correlation coefficient. This advantage is sustained across diverse backbone models, with correlation metrics generally outperforming the other LLM-based methods. Similar improvements are observed on the EN-DE dataset, where HiMATE surpasses the other LLM-based methods, achieving the highest τ and second-best s.

On the other hand, HiMATE achieves either the best or second-best performance in MAE and MSE, showing evaluation similarity comparable to M-MAD and markedly exceeding the other baseline methods. The significant improvement in similarity with human evaluation suggests the effectiveness of aligning human-built evaluation criteria within the proposed method.
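For reference, the four alignment metrics reported in Table 1 can be computed as in the sketch below. The score vectors are purely illustrative, and the Kendall implementation is the simple τ-a variant; library implementations such as scipy.stats.kendalltau default to the tie-corrected τ-b.

```python
from itertools import combinations

def kendall_tau_a(x, y):
    """Kendall tau (tau-a variant, no tie correction)."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        sign = (x[i] - x[j]) * (y[i] - y[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def average_ranks(v):
    """Ranks starting at 1, with tied values sharing their average rank."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman(x, y):
    """Spearman s = Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

def mae(x, y):
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def mse(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

# Hypothetical metric scores vs. human MQM scores for five translations.
pred = [-1.0, -0.2, -5.0, 0.0, -2.0]
gold = [-1.0, 0.0, -6.0, 0.0, -1.1]
```

Higher τ and s, and lower MAE and MSE, on such pairs of vectors correspond to better alignment with human judgments, exactly as reported in Tables 1 and 2.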
In addition, HiMATE demonstrates consistently strong performance with different backbone models, highlighting its robustness to changes in model capacity. All of these observations serve as compelling evidence of the efficacy of HiMATE in conducting high-quality evaluations.

5 Ablation and Analysis

5.1 Ablation Study

To verify the contribution of each stage within HiMATE, we conduct an ablation study of the framework. Concretely, we introduce a single-agent evaluation method (denoted as SA) as a baseline, which conducts an all-in-one evaluation encompassing all subtype errors from the subtype evaluation stage in HiMATE. Evaluation results of the ablated framework on the MQM22 dataset are shown in Table 2. According to the results, HiMATE-SE consistently outperforms SA in terms of correlation, notably improving Kendall's correlation coefficient τ from 0.341 to 0.397 when using a qwen2.5 backbone model on the ZH-EN subset, which underscores the importance of fine-grained subtype error evaluation through a multi-agent approach. The performance improvements from the self-reflection stage are relatively modest; however, this stage helps reduce the computational cost of the subsequent stage by filtering out the high-confidence, reliable judgments. The collaborative discussion stage contributes more substantially, achieving the best or second-best results across all four evaluation metrics. It is worth noting that the performance improvements from collaborative discussion vary across different backbone LLMs, likely influenced by the instruction-following and divergent-thinking capabilities of the model.

Table 3: Evaluation results of different MTE methods on MQM22 ZH-EN under different sentence lengths. We use gpt-4o-mini as the backbone model. The thresholds for separating sentences are set to 12 and 27.

| Method | short: τ | s | F1-score | medium: τ | s | F1-score | long: τ | s | F1-score |
| GEMBA-MQM | 0.369 | 0.433 | 0.085 | 0.382 | 0.460 | 0.093 | 0.344 | 0.427 | 0.100 |
| M-MAD | 0.333 | 0.386 | 0.124 | 0.316 | 0.392 | 0.108 | 0.311 | 0.408 | 0.116 |
| HiMATE | 0.368 | 0.436 | 0.218 | 0.425 | 0.529 | 0.260 | 0.377 | 0.491 | 0.294 |

[Figure 3: Error span detection results for different MTE methods on the MQM22 ZH-EN dataset, based on gpt-4o-mini. We report Precision, Recall, and F1-score across matching thresholds from 10% to 90%.]

5.2 Error Span Detection

As mentioned earlier, current LLM-based MTE methods struggle to accurately identify translation error spans. In this section, we evaluate the error span detection performance of HiMATE and several baseline approaches. Our experiments are conducted on the MQM22 ZH-EN subset, using gpt-4o-mini as the backbone model. Following Ye et al. (2024), we utilize Precision, Recall, and F1-score for evaluation. Figure 3 illustrates the evaluation results at varying matching thresholds, which are defined by the minimum overlap ratio between detected error spans and gold-standard annotations. According to the results, HiMATE consistently surpasses GEMBA-MQM and M-MAD across all matching thresholds on all three metrics. Notably, compared to the best-performing baseline M-MAD, HiMATE achieves average improvements of approximately 89% in F1-score and 95% in Recall, demonstrating its superior effectiveness in error span detection. These results further validate the effectiveness of HiMATE's hierarchical agent design.
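As a minimal sketch of threshold-based span evaluation, the snippet below represents spans as character offsets, matches them greedily under a minimum-overlap threshold, and derives Precision, Recall, and F1. The spans are hypothetical; the paper's exact token-level matching procedure is Algorithm 1 in Appendix E.

```python
def overlap_match(gold, pred, theta):
    """Spans are (start, end) offsets, end exclusive. A detected span
    matches a gold span if their intersection covers at least a
    `theta` fraction of BOTH spans."""
    lo, hi = max(gold[0], pred[0]), min(gold[1], pred[1])
    inter = max(0, hi - lo)
    return (inter / (gold[1] - gold[0]) >= theta
            and inter / (pred[1] - pred[0]) >= theta)

def span_prf(gold_spans, pred_spans, theta):
    """Greedy one-to-one matching, then Precision/Recall/F1."""
    used = set()
    tp = 0
    for g in gold_spans:
        for k, p in enumerate(pred_spans):
            if k not in used and overlap_match(g, p, theta):
                used.add(k)
                tp += 1
                break
    prec = tp / len(pred_spans) if pred_spans else 0.0
    rec = tp / len(gold_spans) if gold_spans else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Hypothetical annotations: two gold error spans, two detected spans.
gold_spans = [(0, 5), (10, 20)]
pred_spans = [(1, 5), (30, 35)]
```

Sweeping `theta` over 10% to 90% and plotting the resulting scores reproduces the kind of curves shown in Figure 3.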
More details regarding the error span matching algorithm are provided in Appendix E.

5.3 Performance on Distinct Sentence Lengths

Longer contexts present significant challenges to both machine translation quality and evaluation accuracy. To investigate how sentence length affects MTE performance, we conduct experiments on the ZH-EN subset of MQM22, grouping samples by token count. As shown in Table 3, HiMATE consistently outperforms the baselines across all sentence-length groups. While the baseline methods achieve comparable results on shorter samples, HiMATE maintains superior performance as sentence length increases. In contrast, GEMBA-MQM and M-MAD exhibit limited or declining accuracy with rising complexity. Notably, at a matching threshold of 50%, the F1-score gap between HiMATE and the best-performing baseline widens from 0.094 for short sentences to 0.194 for long sentences. These findings demonstrate HiMATE's robustness in accurately identifying translation errors, particularly in longer sentences.

5.4 Analysis of Domain-Specific Evaluation

The MQM22 dataset encompasses four distinct text domains: news, social, conversation, and ecommerce. We present an analysis of the performance of different MTE methods across these domains using gpt-4o-mini as the backbone model. As illustrated in Table 5, HiMATE significantly surpasses the other two LLM-based methods in the conversation, social, and ecommerce domains. Although M-MAD slightly outperforms HiMATE in the news domain, it exhibits the least effective performance in the other three domains. We also conduct experiments using other metrics and datasets, the details of which can be found in Appendix F. Overall, these findings underscore the robustness of HiMATE across diverse textual contexts.

Table 4: A case of responses generated by gpt-4o-mini on MQM22 ZH-EN among different LLM-based methods. The parts marked in red indicate incorrect judgments, while the parts marked in green indicate correct judgments.

| Source | 希望你们了解一下 |
| Translation | I hope you know about it. |
| Reference | I hope you can find out about it. |
| Annotation | Major-Mistranslation-'know about it' |
| GEMBA-MQM | Major-Mistranslation-'I hope you know about it', Minor-Awkward-'I hope you know about it' |
| M-MAD Stage 1 | Minor-Omission-'know about it' |
| M-MAD Stages 2&3 | Minor-Omission-'know about it' |
| HiMATE-SE | Minor-Inappropriate for context-'know', Major-Omission-'希望', Minor-Awkward-'I hope you know about it', Minor-Addition-'know about it', Major-Mistranslation-'know about it' |
| -SE+SR | Minor-Awkward-'I hope you know about it', Minor-Addition-'know about it', Major-Mistranslation-'know about it' |
| -SE+SR+CD | Major-Mistranslation-'know about it' |

Table 5: Domain-specific results of various MTE methods on MQM22 ZH-EN, measured by Spearman's correlation coefficient s, using gpt-4o-mini as the backbone.

| Method | news | conversation | social | ecommerce |
| GEMBA-MQM | 0.333 | 0.367 | 0.471 | 0.538 |
| M-MAD | 0.365 | 0.347 | 0.430 | 0.444 |
| HiMATE | 0.355 | 0.399 | 0.529 | 0.577 |

5.5 Case Study

Through specific case studies, we compare the evaluation results of the baseline methods with our proposed framework in Table 4, highlighting the distinct behavior of HiMATE. As shown, while GEMBA-MQM correctly identifies the type and severity of one error, it inaccurately expands the error span to the entire sentence, making the assessment unreliable. Additionally, it falsely flags non-existent errors. Due to its single-agent, single-stage design, the system lacks the ability to refine or correct its judgments.
In the case of M-MAD, while the system precisely pinpoints the error span, it inaccurately classifies the error type and severity, and fails to rectify the previous misjudgment in the subsequent multi-agent debate stage. In contrast, while HiMATE initially over-identifies errors during the subtype evaluation stage, its self-reflection and collaborative discussion stages effectively filter out these incorrect assessments while retaining valid judgments. This demonstrates HiMATE's effectiveness in allocating different tiers of hierarchical MQM information to distinct agents, enabling each to focus specifically on the error types under its responsibility, thereby enhancing evaluation accuracy. More detailed case examples with staged evaluation processes across agents are provided in Appendix H.

6 Conclusion

This paper introduces HiMATE, a hierarchical multi-agent framework for LLM-based machine translation evaluation. The framework assembles a three-stage strategy to enhance error detection and severity assessment, leveraging self-reflection and collaborative discussion mechanisms after subtype evaluation. The experimental results based on correlation and similarity metrics demonstrate the superior performance of HiMATE. An ablation study further validates the contribution of each stage in aligning closely with human judgments. Detailed analyses highlight that HiMATE exhibits pronounced advantages in error span detection compared to existing LLM-based methods. Additionally, evaluations across varying sentence lengths indicate robust and consistent performance. HiMATE also achieves stable advancements across various text domains. The achievement of HiMATE emphasizes the significance of effectively utilizing the human evaluation framework in designing multi-agent collaborative evaluation strategies.

Limitations

Due to constraints in time and resources, the validation process presented in this study primarily focuses on selected models and language pairs, rather than an exhaustive evaluation across diverse models and languages. The current study does not evaluate the applicability of the proposed framework to advanced reasoning models, which have shown promising potential on comprehensive tasks. Additionally, the current experiments do not include an evaluation of the cross-lingual comprehension and generation quality of LLMs.

References

Chaithanya Bandi and Abir Harrasse. 2024. Adversarial multi-agent evaluation of large language models through iterative debates. arXiv preprint arXiv:2410.04663.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.

Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. 2023. ChatEval: Towards better LLM-based evaluators through multi-agent debate. arXiv preprint arXiv:2308.07201.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, and 1 others. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.

Zhaopeng Feng, Jiayuan Su, Jiamei Zheng, Jiahan Ren, Yan Zhang, Jian Wu, Hongwei Wang, and Zuozhu Liu. 2024. M-MAD: Multidimensional multi-agent debate framework for fine-grained machine translation evaluation. arXiv preprint arXiv:2412.20127.

Patrick Fernandes, Daniel Deutsch, Mara Finkelstein, Parker Riley, André Martins, Graham Neubig, Ankush Garg, Jonathan Clark, Markus Freitag, and Orhan Firat. 2023. The devil is in the errors: Leveraging large language models for fine-grained machine translation evaluation.
In Proceedings of the Eighth Conference on Machine Translation, pages 1066–1083, Singapore. Association for Computational Linguistics.

Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, errors, and context: A large-scale study of human evaluation for machine translation. Transactions of the Association for Computational Linguistics, 9:1460–1474.

Nuno M. Guerreiro, Ricardo Rei, Daan van Stigt, Luisa Coheur, Pierre Colombo, and André F. T. Martins. 2024. xCOMET: Transparent machine translation evaluation through fine-grained error detection. Transactions of the Association for Computational Linguistics, 12:979–995.

Amr Hendy, Mohamed Abdelrehim, Amr Sharaf, Vikas Raunak, Mohamed Gabr, Hitokazu Matsushita, Young Jin Kim, Mohamed Afify, and Hany Hassan Awadalla. 2023. How good are GPT models at machine translation? A comprehensive evaluation. arXiv preprint arXiv:2302.09210.

Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, and 1 others. 2024. GPT-4o system card. arXiv preprint arXiv:2410.21276.

Tom Kocmi, Eleftherios Avramidis, Rachel Bawden, Ondřej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Markus Freitag, Thamme Gowda, Roman Grundkiewicz, Barry Haddow, Marzena Karpinska, Philipp Koehn, Benjamin Marie, Christof Monz, Kenton Murray, Masaaki Nagata, Martin Popel, Maja Popović, and 3 others. 2024. Findings of the WMT24 general machine translation shared task: The LLM era is here but MT is not solved yet. In Proceedings of the Ninth Conference on Machine Translation, pages 1–46, Miami, Florida, USA. Association for Computational Linguistics.

Tom Kocmi and Christian Federmann. 2023a. GEMBA-MQM: Detecting translation quality error spans with GPT-4. In Proceedings of the Eighth Conference on Machine Translation, pages 768–775, Singapore. Association for Computational Linguistics.

Tom Kocmi and Christian Federmann. 2023b. Large language models are state-of-the-art evaluators of translation quality. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, pages 193–203, Tampere, Finland. European Association for Machine Translation.

Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, and Steffen Eger. 2022. Towards explainable evaluation metrics for natural language generation. arXiv preprint arXiv:2203.11131.

Yu Li, Shenyu Zhang, Rui Wu, Xiutian Huang, Yongrui Chen, Wenhao Xu, Guilin Qi, and Dehai Min. 2024. MATEval: A multi-agent discussion framework for advancing open-ended text evaluation. In International Conference on Database Systems for Advanced Applications, pages 415–426. Springer.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.

Qingyu Lu, Baopu Qiu, Liang Ding, Kanjian Zhang, Tom Kocmi, and Dacheng Tao. 2024. Error analysis prompting enables human-like translation evaluation in large language models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 8801–8816, Bangkok, Thailand. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Ricardo Rei, José G. C. de Souza, Duarte Alves, Chrysoula Zerva, Ana C Farinha, Taisiya Glushkova, Alon Lavie, Luisa Coheur, and André F. T. Martins. 2022a. COMET-22: Unbabel-IST 2022 submission for the metrics shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 578–585, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.

Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.

Ricardo Rei, Marcos Treviso, Nuno M. Guerreiro, Chrysoula Zerva, Ana C Farinha, Christine Maroti, José G. C. de Souza, Taisiya Glushkova, Duarte Alves, Luisa Coheur, Alon Lavie, and André F. T. Martins. 2022b. CometKiwi: IST-Unbabel 2022 submission for the quality estimation shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 634–645, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.

Yu Wan, Dayiheng Liu, Baosong Yang, Tianchi Bi, Haibo Zhang, Boxing Chen, Weihua Luo, Derek F. Wong, and Lidia S. Chao. 2021. RoBLEURT submission for WMT2021 metrics task. In Proceedings of the Sixth Conference on Machine Translation, pages 1053–1058, Online. Association for Computational Linguistics.

Peiyi Wang, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Lingpeng Kong, Qi Liu, Tianyu Liu, and Zhifang Sui. 2024. Large language models are not fair evaluators. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9440–9450, Bangkok, Thailand. Association for Computational Linguistics.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, and 1 others. 2024. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.

Xiaojun Ye, Junhao Chen, Xiang Li, Haidong Xin, Chao Li, Sheng Zhou, and Jiajun Bu. 2024. MMAD: Multi-modal movie audio description. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 11415–11428, Torino, Italia. ELRA and ICCL.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.

A MQM Hierarchy

The error typology used in HiMATE is categorized and further refined into 5 core error types and 19 distinct subtypes. Considering the characteristics and practical requirements of the proposed framework, specific error definitions within MQM have been further refined without altering their original meanings. These refinements are based on the MQM-Full Master File Instructions and build upon the previous work (Freitag et al., 2021), as detailed in Table 8.

B MQM Error Weight

Following the architectural design and operational requirements of HiMATE, we eliminate the "non-translation" category and assign error weights that remain closely aligned with established human assessment practices (Freitag et al., 2021) for machine translation evaluation. The MQM error weights used for calculating the final score are shown in Table 6.
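Concretely, combining the weights in Table 6 with Eq. (4) gives a simple scoring routine; the sketch below uses a hypothetical error list, and only the 'Fluency/Punctuation' category string is special-cased, mirroring the table.

```python
def error_weight(severity, category):
    """Severity/category weights as in Table 6. The category strings
    here are illustrative labels, not the framework's exact identifiers."""
    if severity == "Major":
        return 5.0
    if severity == "Minor":
        return 0.1 if category == "Fluency/Punctuation" else 1.0
    return 0.0  # Neutral errors carry no penalty

def mqm_score(errors):
    """Eq. (4): S = -sum of per-error weights w_j over retained errors."""
    return -sum(error_weight(sev, cat) for sev, cat in errors)

# A hypothetical set of errors retained after the three stages:
errors = [
    ("Major", "Accuracy/Mistranslation"),
    ("Minor", "Fluency/Punctuation"),
    ("Minor", "Style/Awkward"),
]
# Total penalty: 5 + 0.1 + 1 = 6.1, so S = -6.1
```

An error-free translation receives a score of 0, and scores grow more negative as errors accumulate, matching the direction of the human MQM ground truth.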
C Evaluation Experiment on MQM24

To eliminate potential data-leakage risks arising from overlaps between LLM pre-training corpora and evaluation datasets, we perform additional evaluation experiments on outputs from the IKUN system on the MQM24 EN-DE dataset, encompassing 487 annotated translations. We select gpt-4o-mini as the backbone model for this experiment, whose training-data cut-off precedes the release of the MQM24 dataset. The experimental results, presented in Table 7, closely align with the conclusions drawn from the experiments on the MQM22 dataset. On the one hand, HiMATE achieves excellent performance in terms of correlation and similarity. On the other hand, HiMATE demonstrates stable improvements across all evaluation stages. These findings are consistent with the primary experimental results, further validating the effectiveness and robustness of HiMATE in enhancing evaluation accuracy.

Table 6: MQM error weights used to calculate the final score in HiMATE.

| Severity | Category | Weight |
| Major | all others | 5 |
| Minor | Fluency/Punctuation | 0.1 |
| Minor | all others | 1 |
| Neutral | all | 0 |

Table 7: Results of MTE methods on the EN-DE subset of the MQM24 dataset, where the backbone model of the LLM-based methods is gpt-4o-mini. A higher Kendall's correlation coefficient (τ) and Spearman's correlation coefficient (s), as well as a lower Mean Absolute Error (MAE) and Mean Squared Error (MSE), indicate better alignment with human evaluations. Reference-based methods are indicated with a gray background. The best result in each column is bolded, and the second-best is underlined.

| Methods | τ↑ | s↑ | MAE↓ | MSE↓ |
| BLEU | 0.050 | 0.071 | 0.1664 | 0.0394 |
| BERTSCORE | 0.109 | 0.153 | 0.0537 | 0.0095 |
| COMET-22 | 0.245 | 0.343 | 0.1611 | 0.0376 |
| COMETKIWI | 0.190 | 0.263 | 0.0574 | 0.0099 |
| xCOMET-XL | 0.344 | 0.467 | 0.0940 | 0.0178 |
| GEMBA-MQM | 0.281 | 0.350 | 0.0563 | 0.0087 |
| M-MAD | 0.267 | 0.347 | 0.0275 | 0.0029 |
| HiMATE-SE | 0.254 | 0.344 | 0.0975 | 0.0322 |
| HiMATE-SR | 0.280 | 0.372 | 0.0732 | 0.0191 |
| HiMATE-CD | 0.321 | 0.424 | 0.0440 | 0.0072 |

D Stage-Transition Threshold Acquisition

To determine an appropriate stage-transition threshold, several sentences unrelated to the primary experiment and not utilized as two-shot demonstrations are randomly selected from the MQM20 and MQM21 datasets, both for ZH-EN and EN-DE, forming the validation set. The size of this validation set is approximately one-tenth of the MQM22 dataset utilized in the main experiments.[2] Each sentence undergoes subtype evaluation and subsequent self-reflection to produce confidence scores. These confidence scores are then ranked, and empirical analysis suggests the confidence score at the 60th percentile as the optimal stage-transition threshold. Threshold values vary across language pairs and models.

[2] The validation set can be found at https://anonymous.4open.science/r/HiMATE-Anony

Algorithm 1: Matching Algorithm

1:  Given: gold-standard error span G, detected error span E, matching threshold θ ∈ (0, 1].
2:  Tokenize G → G_t, E → E_t
3:  n ← |G_t|, m ← |E_t|
4:  L ← min(n, m)
5:  while L ≥ 1 do
6:    for i = 0 to n − L do
7:      if ∃ j s.t. G_t[i : i+L] = E_t[j : j+L] then
8:        α ← L/n, β ← L/m
9:        if α ≥ θ and β ≥ θ then
10:         return true
11:       end if
12:     end if
13:   end for
14:   L ← L − 1
15: end while
16: return false

E Matching Algorithm of Error Span Detection

A matching algorithm is employed to validate the accuracy of error span detection.
Specifically, given a gold-standard error span G and a detected error span E, the match is confirmed only if both overlap ratios satisfy |I|/|G| ≥ θ and |I|/|E| ≥ θ, where I denotes the intersection segment G ∩ E, |·| represents length measurement (e.g., measured by token or character count), and θ is a predefined matching threshold with 0 < θ ≤ 1. For instance, when θ = 50%, if the gold span G is "go back to the lab" (4 tokens) and the detected span E is "back to the lab tomorrow" (5 tokens), their intersection I ("back to the lab", 3 tokens) yields overlap ratios of 75% (3/4 for G) and 60% (3/5 for E), both exceeding the matching threshold of 50%. The pseudocode is shown as Algorithm 1.

F Supplementary Analysis of Domain-Specific Performance

As illustrated in Figure 4, although HiMATE may exhibit slightly lower performance in specific domains, it demonstrates strong alignment with manual evaluation across most scenarios. Incorporating a hierarchical multi-agent structure combined with fine-grained error detection ensures robust performance of HiMATE across diverse text domains.

G Prompts of HiMATE Agents

The prompts utilized by the proposed HiMATE are presented in Tables 9 and 10. A three-stage pipeline is adopted, where the self-reflection stage comprises two sequential steps, and the collaborative discussion stage involves a dialogue procedure between two tiers of agents.

H Detailed Case Example

This section presents three illustrative cases demonstrating the complete workflow of each agent within HiMATE. These examples are generated utilizing the gpt-4o-mini model. The first example, sourced from MQM22 ZH-EN, corresponds to the case discussed in Section 5.5, detailed in
Ta- bles 11 and 12. The second example, also derived from MQM22 ZH-EN, is illustrated from Table 13 to Table 15. The third example, originating from MQM22 EN-DE, is presented in Tables 16 and 17. 12 Table 8: Core error subtypes and descriptions after refinement. Here, ’Core’ refers to high-level error categories at tier-1, while ’Subtype’ denotes fine-grained error classifications at tier-2. ’Description’ are utilized within the prompt to characterize each error type. Core Subtype Description AccuracyAdditionTranslation includes information (including the punctuation) not present in the source. OmissionA paragraph present in the source (including the noun, verb, adverb, adverbial, punctuation, and so on) is missing in the translation. MistranslationIncorrect use in target content of a word, inconsistent match, and the incorrect segmentation. Untranslated textSource text has been left untranslated. Not translating special symbols or placeholders is not an untranslated text error. TerminologyInappropriate for contextUse of a term that differs from term usage required by a speci- fied termbase or other resource. Inconsistent use Terminology is used inconsistently. FluencyPunctuationUnpaired quote marks or parentheses. Missing mark from a set of paired punctuation marks, such as a missing parenthesis or quote mark. And the omission or addition of punctuation. Spelling Error occurring when a word is misspelled. GrammarSubject-verb disagreement, incorrect verb tenses or forms, and improper declension of nouns, pronouns, or adjectives. RegisterWrong grammatical register (e.g. inappropriately informal pronouns). Inconsistency Internal inconsistency (not related to terminology). Character encodingError occurring when characters are garbled due to incorrect application of an encoding. Style AwkwardSome languages provide grammatical and stylistic features that support complex, embedded ideas, which can result in awkward style if a target text retains these features. 
Locale Convention
- Address format: Error involving inappropriate address format for locale, such as the wrong form used for postal codes for the target locale.
- Currency format: Incorrect currency format for its locale.
- Date format: Error involving inappropriate date format for its locale.
- Name format: Name format used in text or a data field inappropriate for its locale, such as switching the order of last and first names inappropriately.
- Telephone format: Error involving inappropriate telephone number form for locale.
- Time format: Error involving incorrect time format for its locale. For example, unless specified as using a 24-hour clock, US time formats report time after noon using 12-hour notation.

Figure 4: The Kendall's correlation coefficient τ and Spearman's correlation coefficient s across multiple text domains (news, conversation, social, ecommerce) on the MQM22 ZH-EN and EN-DE datasets using the gpt-4o-mini model, comparing GEMBA-MQM, M-MAD, and HiMATE. [bar charts omitted]

Table 9: The prompt of the Subtype Evaluation and Self-Reflection stages. Here, the Self-Reflection stage comprises two sequential steps, error correction and comparative verification.

Subtype Evaluation, Role-Play Prompt: You are an evaluator conducting a manual translation evaluation based on the MQM framework, and you need to determine whether the following translation from Chinese to English below contains an error.

Self-Reflection (error correction), Role-Play Prompt: You are an evaluator conducting a manual translation evaluation based on the MQM framework. Here you are given a pair of sentences: a
source Chinese sentence and its English translation. You need to correct the errors in the translation.

Self-Reflection (comparative verification), Role-Play Prompt: You are an evaluator conducting a manual translation evaluation based on the MQM framework. Here you need to compare the following original translation and the corrected translation based on the error definition.

Subtype Evaluation, Task Assignment Prompt: {Source} {Translation} {Error Definition} Please check if there are errors in the translation and whether they meet the definition of error types. You also need to indicate the degree of error. Error severities are assigned independently of category, and consist of Major, Minor, and Neutral levels, corresponding, respectively, to actual translation or grammatical errors, smaller imperfections, and purely subjective opinions about the translation.

Self-Reflection (error correction), Task Assignment Prompt: {Source} {Translation} {Error Definition} {Error Information} Previously, you have identified specific types of errors and the severity based on the error definition. Based on the information below, please correct the errors in the translation.

Self-Reflection (comparative verification), Task Assignment Prompt: {Original Translation} {Corrected Translation} Previously, you have corrected the translation. Now, determine whether the error mentioned previously in the original translation has been corrected.

Table 10: The prompt of the Collaborative Discussion stage. Two tiers of agents participate in this stage and engage in discussion.

Tier-1 Agent Round, Role-Play Prompt: You are an expert in manual translation evaluation based on the MQM framework and are responsible for a high-level error type. In previous work, an evaluator responsible for a subtype has expressed his lack of confidence in his answers. Please review his work and provide your opinions.

Tier-2 Agent Round, Role-Play Prompt: You are an evaluator who manually evaluates translations based on the MQM framework. In your previous work, you were not sure about the accuracy of your evaluation.
Therefore, you communicated with the expert in charge of coordinating the evaluation of the error subtype for which you are responsible.

Tier-1 Agent Round, Task Assignment Prompt: {Source Sentence} {Translation} {Previous Error Information} {Previous Chat History} The following is the source sentence and its corresponding translation, as well as the assessment of the subcategory evaluator on the existence and severity of the error. Please provide your opinion. You need to determine whether an error exists, and then judge whether the degree of error is correct. You don't need to completely negate it.

Tier-2 Agent Round, Task Assignment Prompt: {Source Sentence} {Translation} {Previous Error Information} {Previous Chat History} Regarding the source sentence and its translation, as well as the evaluation answer you provided, your superior evaluation expert has provided the following opinions. Please indicate whether you agree with his viewpoint.

Table 11: The workflow for each agent in the first case from the MQM22 ZH-EN dataset generated by gpt-4o-mini. The table illustrates the subtype evaluation stage, with segments highlighted in red indicating incorrect judgments and those in green indicating correct ones. Subsequent tables will consistently adopt this convention.

Source: 希望你们了解一下
Translation: I hope you know about it.
Reference: I hope you can find out about it.
Annotation: Major-Mistranslation-‘know about it’

HiMATE-SE
Major-Mistranslation-‘know about it’ (tier-2 agent): "The translation "I hope you know about it" does not accurately convey the
meaning of the source text "希望你们了解一下", which more appropriately translates to "I hope you all understand" or "I hope you all are aware". The use of "know about it" implies a lesser degree of understanding and does not capture the intent of encouraging awareness or understanding, thus constituting a major mistranslation."

Minor-Inappropriate for context-‘know’ (tier-2 agent): "The translation "I hope you know about it" does not accurately convey the nuance of "希望你们了解一下," which suggests a desire for the audience to gain a better understanding or awareness of something, rather than simply knowing about it. The term "了解" implies a deeper comprehension than "know," which could be seen as inappropriate for the context. However, the overall meaning is still conveyed, hence the error is classified as minor."

Major-Omission-‘希望’ (tier-2 agent): "The translation omits the word "希望" ("hope") in the source, which changes the tone and intention of the message. The translation does not convey the speaker's hope or desire for understanding, resulting in a significant difference in meaning."

Minor-Awkward-‘I hope you know about it’ (tier-2 agent): "The translation "I hope you know about it" is somewhat awkward because it implies a level of familiarity that may not be intended in the original Chinese phrase "希望你们了解一下", which suggests a desire for the listeners to gain some understanding or awareness rather than simply knowing about something. A more natural translation could be "I hope you can understand this" or "I hope you can learn about this", which would better convey the intent of the original message."

Minor-Addition-‘know about it’ (tier-2 agent): "The translation "I hope you know about it" is somewhat awkward because it implies a level of familiarity that may not be intended in the original Chinese phrase "希望你们了解一下", which suggests a desire for the listeners to gain some understanding or awareness rather than simply knowing about something.
A more natural translation could be "I hope you can understand this" or "I hope you can learn about this", which would better convey the intent of the original message."

Table 12: The workflow for each agent in the first case from the MQM22 ZH-EN dataset generated by gpt-4o-mini. The table illustrates the self-reflection and collaborative discussion stages.

Source: 希望你们了解一下
Translation: I hope you know about it.
Reference: I hope you can find out about it.
Annotation: Major-Mistranslation-‘know about it’

HiMATE-SR-Correction
Major-Mistranslation-‘know about it’ (tier-2 agent): "I hope you all understand."
Minor-Inappropriate for context-‘know’ (tier-2 agent): "I hope you can understand it better."
Major-Omission-‘希望’ (tier-2 agent): "I hope you understand this."
Minor-Awkward-‘I hope you know about it’ (tier-2 agent): "I hope you can understand this."
Minor-Addition-‘know about it’ (tier-2 agent): "I hope you understand a bit about it."

HiMATE-SR-Verification
Major-Mistranslation-‘know about it’ (tier-2 agent): "Error has been corrected."
Minor-Inappropriate for context-‘know’ (tier-2 agent): "No significant difference between the original translation and the corrected translation."
Major-Omission-‘希望’ (tier-2 agent): "No significant difference between the original translation and the corrected translation."
Minor-Awkward-‘I hope you know about it’ (tier-2 agent): "Error has been corrected."
Minor-Addition-‘know
about it’ (tier-2 agent): "Error has been corrected."

HiMATE-CD-Round1
Major-Mistranslation-‘know about it’ (tier-1 agent): "Error Exist: Yes. Error Severity: Major. The evaluator's assessment is correct. The translation "I hope you know about it" does not accurately capture the meaning of the source sentence "希望你们了解一下", which conveys a sense of encouraging understanding or familiarity rather than merely hoping that the audience knows something. This misrepresentation of intent in the translation qualifies as a major error, as it alters the original message significantly."

Minor-Awkward-‘I hope you know about it’ (tier-1 agent): "Error Exist: No. Error Severity: Neutral. The translation "I hope you know about it" is grammatically correct but does not fully capture the nuance of the source sentence "希望你们了解一下", which implies a desire for understanding rather than mere knowledge. The awkwardness stems from the choice of words rather than a clear error, making it more subjective. Therefore, I would classify the severity as Neutral rather than Minor."

Minor-Addition-‘know about it’ (tier-1 agent): "Error Exist: No. Error Severity: Neutral. The evaluator correctly identified the addition error, as the translation introduces a nuance that is not present in the source text. However, the severity should be considered neutral rather than minor, as the addition does not significantly distort the original meaning or create a major misunderstanding. It is more of a stylistic difference rather than a critical error."

HiMATE-CD-Round2
Major-Mistranslation-‘know about it’ (tier-2 agent): "I agree with him."
Minor-Awkward-‘I hope you know about it’ (tier-2 agent): "I agree with him."
Minor-Addition-‘know about it’ (tier-2 agent): "I agree with him."

Table 13: The workflow for each agent in the second case from the MQM22 ZH-EN dataset generated by gpt-4o-mini. The table illustrates the subtype evaluation stage.
Source: 无论从接待、化妆、服装、灯光、摄影到后期都无可挑剔。
Translation: From reception, makeup, costumes, lighting, photography to the late stage, it is impeccable.
Reference: It was perfect in reception, makeup, clothing, lamplight, photography and post-editing.
Annotation: Major-Mistranslation-‘late stage’

HiMATE-SE
Major-Mistranslation-‘the late stage’ (tier-2 agent): "The phrase "to the late stage" in the translation does not accurately convey the meaning of the original Chinese text, which implies that everything is impeccable "from reception to post-production" without a clear distinction or segmentation of "the late stage." This mistranslation leads to confusion about the intended meaning and scope of the description."

Minor-Addition-‘the late stage’ (tier-2 agent): "The translation introduces the phrase "the late stage", which is not present in the source text ("后期" could refer to a general later stage rather than a specific term). This addition alters the original meaning slightly by being more specific than the original context intended."

Major-Omission-‘无论从’ (tier-2 agent): "The translation omits the phrase "无论从" (which means "regardless of" or "no matter if") from the original Chinese sentence, altering the meaning and emphasis of the statement. This results in a significant loss of context and intent, qualifying it as a major omission."

Minor-Grammar-‘to the late stage’ (tier-2 agent): "The phrase "to the late stage" does not accurately reflect the intended meaning of "后期", which typically refers to "post-production" or "editing" in a context involving photography or film. Although
the overall translation is understandable, this specific phrase could lead to ambiguity regarding the stages mentioned, making it a minor grammatical error."

Minor-Inconsistency-‘the late stage’ (tier-2 agent): "The translation uses "the late stage", which is not a commonly used term in English for "后期" in this context. A more appropriate term would be "post-production" or "final stage." This creates a slight inconsistency in the translation, but it does not significantly alter the meaning."

Minor-Awkward-‘to the late stage’ (tier-2 agent): "The translation "From reception, makeup, costumes, lighting, photography to the late stage, it is impeccable." feels awkward due to the phrasing "to the late stage", which is not a commonly used expression in English. A more natural phrasing like "post-production" would make it smoother. However, the overall meaning is conveyed, which keeps the severity at a minor level."

Table 14: The workflow for each agent in the second case from the MQM22 ZH-EN dataset generated by gpt-4o-mini. The table illustrates the self-reflection stage.

Source: 无论从接待、化妆、服装、灯光、摄影到后期都无可挑剔。
Translation: From reception, makeup, costumes, lighting, photography to the late stage, it is impeccable.
Reference: It was perfect in reception, makeup, clothing, lamplight, photography and post-editing.
Annotation: Major-Mistranslation-‘late stage’

HiMATE-SR-Correction
Major-Mistranslation-‘the late stage’ (tier-2 agent): "From reception, makeup, costumes, lighting, photography to post-production, everything is impeccable."
Minor-Addition-‘the late stage’ (tier-2 agent): "From reception, makeup, costumes, lighting, photography to post-production, it is impeccable."
Major-Omission-‘无论从’ (tier-2 agent): "Regardless of reception, makeup, costumes, lighting, photography, or the late stage, it is impeccable."
Minor-Grammar-‘to the late stage’ (tier-2 agent): "From reception, makeup, costumes, lighting, photography to post-production, it is impeccable."
Minor-Inconsistency-‘the late stage’ (tier-2 agent): "From reception, makeup, costumes, lighting, photography to post-production, it is impeccable."
Minor-Awkward-‘to the late stage’ (tier-2 agent): "From reception, makeup, costumes, lighting, photography to post-production, it is impeccable."

HiMATE-SR-Verification
Major-Mistranslation-‘the late stage’ (tier-2 agent): "Error has been corrected."
Minor-Addition-‘the late stage’ (tier-2 agent): "Error has been corrected."
Major-Omission-‘无论从’ (tier-2 agent): "No significant difference between the original translation and the corrected translation."
Minor-Grammar-‘to the late stage’ (tier-2 agent): "No significant difference between the original translation and the corrected translation."
Minor-Inconsistency-‘the late stage’ (tier-2 agent): "Error has been corrected."
Minor-Awkward-‘to the late stage’ (tier-2 agent): "Error has been corrected."

Table 15: The workflow for each agent in the second case from the MQM22 ZH-EN dataset generated by gpt-4o-mini. The table illustrates the collaborative discussion stage.

Source: 无论从接待、化妆、服装、灯光、摄影到后期都无可挑剔。
Translation: From reception, makeup, costumes, lighting, photography to the late stage, it is impeccable.
Reference: It was perfect in reception, makeup, clothing, lamplight, photography and post-editing.
Annotation: Major-Mistranslation-‘late stage’

HiMATE-CD-Round1
Major-Mistranslation-‘the late stage’ (tier-1 agent): "Error Exist: Yes. Error Severity: Major. The translation "From reception, makeup, costumes, lighting, photography to the late stage, it is impeccable" misrepresents the original meaning. The term "后期" typically refers to "post-production" rather than "the late stage." This distortion significantly alters the intended message about the quality of the various aspects involved, thus justifying a major severity rating."

Minor-Addition-‘the late stage’ (tier-1 agent): "Error Exist: No. Error Severity: Neutral.
The evaluator correctly identified the addition error, as the phrase "late stage" | https://arxiv.org/abs/2505.16281v1 |
in the translation does not correspond to the source text. However, the severity should be considered neutral rather than minor, as this error does not belong to an addition error."

Minor-Inconsistency-‘the late stage’ (tier-1 agent): "Error Exist: No. Error Severity: Neutral. The evaluator correctly identified the existence of an inconsistency in the translation, as the phrase "late stage" does not accurately reflect the original meaning of "后期", which typically refers to "post-production" in this context. However, the severity should be considered Neutral rather than Minor, as this does not significantly impact the overall understanding of the text. The translation remains comprehensible, and the error does not lead to a misunderstanding of the main message."

Minor-Awkward-‘to the late stage’ (tier-2 agent): "Error Exist: No. Error Severity: Neutral. The translation retains grammatical acceptability but lacks the fluidity and coherence expected in the target language. While it does exhibit awkwardness, the severity should be assessed as neutral rather than minor, as it primarily involves a subjective opinion rather than causing a notable impact on the overall meaning or readability. The phrase "to the late stage" feels particularly awkward and could confuse readers, but it does not constitute a significant error affecting comprehension."

HiMATE-CD-Round2
Major-Mistranslation-‘the late stage’ (tier-2 agent): "I agree with him."
Minor-Addition-‘the late stage’ (tier-1 agent): "I agree with him."
Minor-Inconsistency-‘the late stage’ (tier-1 agent): "I agree with him."
Minor-Awkward-‘to the late stage’ (tier-2 agent): "I agree with him."

Table 16: The workflow for each agent in the third case from the MQM22 EN-DE dataset generated by gpt-4o-mini. The table illustrates the subtype evaluation stage.
Source: This move is just the latest twist in the Dallas-Washington rivalry, one that got juiced up even more this week when Cowboys head coach Mike McCarthy predicted a win for his team, prompting some fireworks back and forth with Washington's Ron Rivera and players.
Translation: Dieser Schritt ist nur die jüngste Wendung in der Rivalität zwischen Dallas und Washington, die diese Woche noch stärker gemacht wurde, als Cowboys-Cheftrainer Mike McCarthy einen Sieg für sein Team vorhersagte und ein Feuerwerk zwischen Ron Rivera und Spielern aus Washington auslöste.
Reference: Diese Aktion ist nichts als ein weiteres Kapitel in der Rivalität zwischen Dallas und Washington, die diese Woche noch mehr an Fahrt aufnahm, als der leitende Coach der Cowboys Mike McCarthy einen Sieg seines Teams voraussagte und ein hitziges Wortgefecht mit Ron Rivera von Washington und seinen Spielern auslöste.
Annotation: Minor-Awkward-‘stärker gemacht’, Minor-Awkward-‘Feuerwerk’, Minor-Mistranslation-‘Feuerwerk’

HiMATE-SE
Minor-Awkward-‘stärker gemacht wurde’ (tier-2 agent): "The phrase "stärker gemacht wurde" is somewhat awkward in German. A more natural phrasing could be "intensiver wurde" or "mehr an Intensität gewann." The current translation retains the original structure but does not flow as smoothly in German, leading to a minor awkwardness in style."

Major-Mistranslation-‘Feuerwerk’ (tier-2 agent): "The translation uses "Feuerwerk", which is more commonly associated with fireworks in a literal sense, while the context implies a figurative sense of heated exchanges or confrontations. A more appropriate term might be "Auseinandersetzungen" or "Konflikte" to convey the intended meaning of verbal disputes."

Minor-Inconsistency-‘die diese Woche
noch stärker gemacht wurde’ (tier-2 agent): "The phrase "die diese Woche noch stärker gemacht wurde" implies that the rivalry was made stronger, whereas the original phrase suggests a progression in intensity due to recent events. A more accurate translation could clarify that the rivalry was intensified rather than simply "made stronger." This is a minor inconsistency that doesn't convey the precise meaning of the source text."

Table 17: The workflow for each agent in the third case from the MQM22 EN-DE dataset generated by gpt-4o-mini. The table illustrates the self-reflection and collaborative discussion stages.

Source: This move is just the latest twist in the Dallas-Washington rivalry, one that got juiced up even more this week when Cowboys head coach Mike McCarthy predicted a win for his team, prompting some fireworks back and forth with Washington's Ron Rivera and players.
Translation: Dieser Schritt ist nur die jüngste Wendung in der Rivalität zwischen Dallas und Washington, die diese Woche noch stärker gemacht wurde, als Cowboys-Cheftrainer Mike McCarthy einen Sieg für sein Team vorhersagte und ein Feuerwerk zwischen Ron Rivera und Spielern aus Washington auslöste.
Reference: Diese Aktion ist nichts als ein weiteres Kapitel in der Rivalität zwischen Dallas und Washington, die diese Woche noch mehr an Fahrt aufnahm, als der leitende Coach der Cowboys Mike McCarthy einen Sieg seines Teams voraussagte und ein hitziges Wortgefecht mit Ron Rivera von Washington und seinen Spielern auslöste.
Annotation: Minor-Awkward-‘stärker gemacht’, Minor-Awkward-‘Feuerwerk’, Minor-Mistranslation-‘Feuerwerk’

HiMATE-SR-Correction
Minor-Awkward-‘stärker gemacht wurde’ (tier-2 agent): "Dieser Schritt ist nur die jüngste Wendung in der Rivalität zwischen Dallas und Washington, die diese Woche noch intensiver wurde, als Cowboys-Cheftrainer Mike McCarthy einen Sieg für sein Team vorhersagte und ein Feuerwerk zwischen Ron Rivera und Spielern aus Washington auslöste."
Major-Mistranslation-‘Feuerwerk’ (tier-2 agent): "Dieser Schritt ist nur die jüngste Wendung in der Rivalität zwischen Dallas und Washington, die diese Woche noch stärker gemacht wurde, als Cowboys-Cheftrainer Mike McCarthy einen Sieg für sein Team vorhersagte und Auseinandersetzungen zwischen Ron Rivera und Spielern aus Washington auslöste."

Minor-Inconsistency-‘die diese Woche noch stärker gemacht wurde’ (tier-2 agent): "Dieser Schritt ist nur die jüngste Wendung in der Rivalität zwischen Dallas und Washington, die diese Woche noch intensiver wurde, als Cowboys-Cheftrainer Mike McCarthy einen Sieg für sein Team vorhersagte und ein Feuerwerk zwischen Ron Rivera und Spielern aus Washington auslöste."

HiMATE-SR-Verification
Minor-Awkward-‘stärker gemacht wurde’ (tier-2 agent): "Error has been corrected."
Major-Mistranslation-‘Feuerwerk’ (tier-2 agent): "Error has been corrected."
Minor-Inconsistency-‘die diese Woche noch stärker gemacht wurde’ (tier-2 agent): "No significant difference between the original translation and the corrected translation."

HiMATE-CD-Round1
Minor-Awkward-‘stärker gemacht wurde’ (tier-1 agent): "No explanation due to exceeding the stage-transition threshold."
Minor-Mistranslation-‘Feuerwerk’ (tier-1 agent): "Error Exist: Yes. Error Severity: Minor. While there is a mistranslation present, the errors noted do not significantly alter the overall meaning of the source text. The translation captures the core sentiment and information, though with some imprecision. Therefore, although there is an error, it does not warrant a major severity classification, as it does not impede comprehension or alter the message's intent in a substantial way. A minor severity classification is more appropriate here."

HiMATE-CD-Round2
Minor-Mistranslation-‘Feuerwerk’ (tier-2 agent): "I agree with him."
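As a concrete illustration of the span-matching criterion described earlier (both overlap ratios $|I|/|G|$ and $|I|/|E|$ must reach $\theta$), here is a minimal sketch with spans represented as half-open token-index ranges; the function name is ours, not from the paper:

```python
def spans_match(gold, detected, theta=0.5):
    """Two-sided overlap check: |I|/|G| >= theta and |I|/|E| >= theta.

    Spans are half-open token-index ranges (start, end); |.| is token count.
    """
    # Intersection length of the two index ranges.
    inter = max(0, min(gold[1], detected[1]) - max(gold[0], detected[0]))
    if inter == 0:
        return False
    return (inter / (gold[1] - gold[0]) >= theta
            and inter / (detected[1] - detected[0]) >= theta)

# Worked example from the text: gold is 4 tokens, detected is 5 tokens,
# and they overlap on 3 tokens -> ratios 0.75 and 0.60, both >= 0.5.
print(spans_match((0, 4), (1, 6), theta=0.5))  # True
```

Raising the threshold tightens the match: with theta=0.7 the same pair fails, since the detected-side ratio is only 0.60.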
arXiv:2505.16293v1 [cs.CL] 22 May 2025

Augmenting LLM Reasoning with Dynamic Notes Writing for Complex QA

Rishabh Maheshwary§, Masoud Hashemi§, Khyati Mahajan§, Shiva Krishna Reddy Malay§, Sai Rajeswar†, Sathwik Tejaswi Madhusudhan§, Spandana Gella†, Vikas Yadav§
§ServiceNow †ServiceNow Research
{rishabh.maheshwary, vikas.yadav}@servicenow.com

Abstract

Iterative RAG for multi-hop question answering faces challenges with lengthy contexts and the buildup of irrelevant information. This hinders a model's capacity to process and reason over retrieved content and limits performance. While recent methods focus on compressing retrieved information, they are either restricted to single-round RAG, require finetuning or lack scalability in iterative RAG. To address these challenges, we propose NotesWriting, a method that generates concise and relevant notes from retrieved documents at each step, thereby reducing noise and retaining only essential information. This indirectly increases the effective context length of Large Language Models (LLMs), enabling them to reason and plan more effectively while processing larger volumes of input text. NotesWriting is framework agnostic and can be integrated with different iterative RAG methods. We demonstrate its effectiveness with three iterative RAG methods, across two models and four evaluation datasets. NotesWriting yields an average improvement of 15.6 percentage points overall, with minimal increase in output tokens.
1 Introduction

The retrieval augmented generation (RAG) paradigm has advanced open domain question answering (Zhang et al., 2022; Kamalloo et al., 2023) by incorporating external knowledge (Lewis et al., 2020; Guu et al., 2020; Borgeaud et al., 2022; Shi et al., 2023b; Izacard et al., 2023), enabling Large Language Models (LLMs) (Hurst et al., 2024; Dubey et al., 2024) to refresh outdated parametric knowledge (Dhingra et al., 2022; Kasai et al., 2023; Vu et al., 2023) and mitigate hallucinations (Ji et al., 2023; Zhang et al., 2023). However, for tasks like multi-hop question answering (Yang et al., 2018; Zhu et al., 2024; Krishna et al., 2024), which require reasoning over multiple documents, a single-round RAG based solely on the initial question often falls short, as it fails to capture all the necessary information. To overcome this, iterative RAG methods such as IRCoT (Trivedi et al., 2022), FLARE (Jiang et al., 2023), and ReAct (Yao et al., 2023) interleave retrieval and reasoning over multiple steps, progressively accumulating the evidence needed to answer complex queries.

Nevertheless, retrieved information can be noisy, and prior work has shown that excessive noise in the retrieved context can significantly degrade RAG performance (Petroni et al., 2020; Shi et al., 2023a; Zhang et al., 2024; Leng et al., 2024; Wu et al., 2024). This challenge is amplified in iterative retrieval settings, where new information must be retrieved at each reasoning step. Therefore, simply concatenating all retrieved documents at each step leads to several problems:

• Context Overload: Exceeding the LLM's context window limit (Krishna et al., 2024).
• Computational Cost & Scalability: Increasing processing time and resources (Yue et al., 2024).
• Distraction: Including irrelevant or redundant information that hinders the LLM's reasoning and planning ability (Yu et al., 2023; Chen et al., 2024; Xie et al., 2024; Aghzal et al., 2025).
•Readability: Excessively long | https://arxiv.org/abs/2505.16293v1 |
reasoning traces created from multiple documents pose challenges for users to interpret precise reasoning. Redundant information can further affect readability.

To address these issues, we propose a simple yet effective and scalable method called NotesWriting. At each retrieval step, NotesWriting produces concise notes based upon the retrieved documents and the sub-question, thus providing only the essential information required at each step. This increases the effective context length as it does not overload the LLM context with irrelevant information, which helps the LLMs plan and reason better. Furthermore, NotesWriting is generic and can be coupled with any iterative RAG framework. While recent methods (Edge et al., 2024; Xu et al., 2023a; Kim et al., 2024) have explored summarizing retrieved content, they are often limited to single-round RAG or require synthetic data for fine-tuning. Jiang et al. (2025) extend this idea by summarizing retrieved documents at each step; however, this approach lacks scalability as it is still limited to three iterations at maximum and depends on multiple modules, leading to multiple LLM calls in each iteration. This work makes the following key contributions:

• We propose NotesWriting to improve the effective context length of iterative RAG. This reduces context overload, the number of reasoning steps, and redundancy.
• NotesWriting is plug-and-play and can be coupled with any iterative RAG framework, benefiting planning and reasoning abilities by reducing tokens in context (thus indirectly increasing the effective context length per step) and reducing retention of irrelevant information.
• Our experiments across three iterative RAG baselines (IRCoT, FLARE & ReAct), four multi-hop QA datasets and two LLMs demonstrate that NotesWriting achieves a 15.6 percentage point improvement by increasing the volume of ingested text with minimal increase in output tokens.
•NotesWriting with ReAct (ReNAct) achieves the highest performance, enabling better planning by guiding the model to generate more accurate search queries and retrieving correct documents as demonstrated in Section 6. 2 Background and Related Work Single-Step vs. Iterative RAG. Traditional RAG often operates in a single step: retrieve relevant documents based on the initial query, then gen- erate the final response conditioned on both the query and the retrieved context. While effective for simpler questions, this retrieve-then-read approach struggles for multi-hop QA, where the information needed evolves throughout the reasoning process. Iterative RAG addresses this limitation by inter- leaving retrieval and generation. The model can issue multiple queries, gather information incre- mentally, and refine its reasoning path based on newly retrieved evidence. This dynamic interaction between the LLM and the retriever is better suited for complex, multi-step reasoning. Formulation of Iterative RAG. Letxbe the user input question, and D={di}|D| i=1represent the external knowledge corpus (e.g., Wikipedia).An iterative RAG process aims to generate a se- quence of reasoning steps or partial outputs s= [s1,s2, . . . ,sn]. We denote the language model as LM(·)and the retrieval function, which returns the top-kdocuments for a query q, as ret (q). At each step t≥1, the typical process involves: 1.Query Formulation: A query qtis generated based on the initial input xand the preceding stepss<t= [s1, . . | https://arxiv.org/abs/2505.16293v1 |
$\dots, s_{t-1}]$. This is governed by a query formulation function $Q(\cdot)$:

$$q_t = Q(x, s_{<t}) \qquad (1)$$

For the first step, $s_{<1} = \emptyset$, and often $q_1 = x$.

2. Retrieval: The retriever fetches the top-$k$ relevant documents: $D_{q_t} = \mathrm{ret}(q_t)$.

3. Generation: The LM generates the next reasoning step $s_t$ using the original input, previous steps, and the newly retrieved documents:

$$s_t = \mathrm{LM}([D_{q_t}, x, s_{<t}]) \qquad (2)$$

This process continues until a final answer is generated, or a maximum number of steps is reached.

Advances in Iterative RAG. Several approaches have explored different strategies within this iterative framework. IRCoT (Interleaving Retrieval and Chain-of-Thought) (Trivedi et al., 2022) directly uses the last generated sentence of the Chain-of-Thought (CoT) (Wei et al., 2022) reasoning as the query for the next retrieval step. This tightly couples reasoning progress with information gathering. Formally, its query function is:

$$q_t = Q_{\mathrm{IRCoT}}(x, s_{<t}) = s_{t-1} \qquad (3)$$

IRCoT demonstrates improved performance over single-step RAG across various QA datasets, showing robustness even with smaller models.

FLARE (Forward-Looking Active Retrieval Augmented Generation) (Jiang et al., 2023) builds on similar ideas but retrieves proactively. It anticipates future content needs by checking the confidence of generated tokens. Retrieval is triggered only when the model generates low-confidence tokens (often indicating a need for information), using the preceding sentence as the query. If confidence is high, retrieval is skipped. The query function is conditional:

$$q_t = \begin{cases} s_{t-1}, & \text{if } \mathrm{conf}(s_{t-1}) > \theta \\ \emptyset, & \text{otherwise} \end{cases} \qquad (4)$$

While effective in long-form generation, its reliance on predicting future needs might face challenges if the context shifts unexpectedly.

ReAct (Reason and Action) (Yao et al., 2023) explicitly separates reasoning (Thought) from information gathering (Action), where the action often involves generating a specific search query:

$$q_t = Q_{\mathrm{ReAct}}(x, s_{<t}) = \mathrm{Action}_t \qquad (5)$$

[Figure 1: Overview of NotesWriting within an iterative RAG framework. The figure traces an example question, "What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?": at each step the LLM generates a Thought and a search Action (e.g., search[Colorado orogeny; Where does the eastern sector of the Colorado orogeny extend?]), a notes LLM condenses the top retrieved Wikipedia pages into concise notes added to the context, and after two retrieval rounds the model concludes that the eastern sector extends into the High Plains, which rise in elevation from around 1,800 to 7,000 ft (550 to 2,130 m), and answers finish[1,800 to 7,000 ft].]

Context Management in Iterative RAG. Despite advancements in iterative RAG, a core challenge persists: managing the retrieved context effectively across iterations. Even with long-context LLMs, studies have found that complex tasks which require compositional reasoning, like multi-hop QA, are solved better with retrieval (Xu et al., 2023b; Lee et al., 2024a). However, long-context LLMs have been shown to face issues in handling information within the long context (needle-in-the-haystack issues) (Kamradt, 2023; Hsieh et al., 2024), which limit performance even when combined with RAG (Jiang et al., 2024). Thus, addressing context management requires mechanisms to condense, filter, or summarize the retrieved information. Several such approaches have been explored in recent research. RECOMP (Xu et al., 2023a) compresses retrieved documents using extractive or abstractive summarization before passing them to the main LLM in a single-turn RAG setting. This helps with query-relevant compression, but does not directly handle iterative context accumulation. Chain-of-Note (CON) (Yu et al., 2023) generates sequential notes during training to assess retrieved document relevance and reliability. This improves robustness against noise, but lacks explicit planning or iterative refinement at inference time.
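The loop in Eqs. (1)–(2) and the query-formulation choices of IRCoT and FLARE (Eqs. 3–4) can be sketched in a few lines. This is a minimal illustration, not code from any of the cited systems: `lm`, `retrieve`, and `confidence` are hypothetical callables standing in for the generation model, the retriever, and a token-confidence estimate, and the `finish[` stop marker follows the ReAct convention.

```python
from typing import Callable, List, Optional

def ircot_query(x: str, steps: List[str]) -> str:
    # Eq. (3): the last generated CoT sentence becomes the next query
    # (q_1 = x on the first step).
    return steps[-1] if steps else x

def flare_query(x: str, steps: List[str],
                confidence: Callable[[str], float],
                theta: float = 0.8) -> Optional[str]:
    # Eq. (4): query with the previous sentence when the confidence test
    # fires; otherwise return None, meaning retrieval is skipped.
    if steps and confidence(steps[-1]) > theta:
        return steps[-1]
    return None

def iterative_rag(x: str,
                  lm: Callable[[List[str], str, List[str]], str],
                  retrieve: Callable[[str], List[str]],
                  make_query: Callable[[str, List[str]], Optional[str]],
                  max_steps: int = 10) -> List[str]:
    steps: List[str] = []
    for _ in range(max_steps):
        q = make_query(x, steps)          # Eq. (1): q_t = Q(x, s_<t)
        docs = retrieve(q) if q else []   # top-k retrieval, D_qt = ret(q_t)
        s_t = lm(docs, x, steps)          # Eq. (2): s_t = LM([D_qt, x, s_<t])
        steps.append(s_t)
        if "finish[" in s_t:              # stop once a final answer appears
            break
    return steps
```

With IRCoT, `make_query=ircot_query`; FLARE needs a closure binding its confidence function, e.g. `lambda x, s: flare_query(x, s, confidence)`.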
PlanRAG (Lee et al., 2024b) proposes a two-stage approach, generating a decision plan and then executing retrieval operations by adding Plan and Re-plan steps to ReAct. SmartRAG (Gao et al., 2024) similarly includes a policy network, which decides whether to retrieve, and a retriever, which are jointly optimized to reduce retrieval while improving performance. However, retrieved documents, while more relevant, can still accumulate in context, affecting performance. Self-RAG (Asai et al., 2023) uses reflection tokens to self-reflect on the retrieved documents as a means to reduce the number of documents included in the context. Self-reflection is achieved by fine-tuning the model, which improves factual accuracy and citation integrity when benchmarked against existing models such as ChatGPT and Llama2-Chat. However, the required fine-tuning can be costly and needs updates over time. More recently, modular approaches have been suggested for iterative retrieval and summarization. Infogent (Reddy et al., 2024) proposes two modules: an Aggregator, whose textual feedback guides further retrieval by a Navigator-Extractor. The Extractor extracts readable relevant content from the Navigator's web-based API access and forwards it to the Aggregator for evaluation. However, context management remains an important issue. ReSP (Retrieve, Summarize, Plan) (Jiang et al., 2025) uses query-focused summarization in multi-hop QA, maintaining global and
local evidence summaries across iterations to prevent context overload. It involves multiple LLM calls per iteration, focused on planning the next step with sub-questions, summarizing retrieved documents, generating the next sub-questions, and judging whether there is sufficient information to answer the question. While specialized modules for each stage could boost performance further, this approach faces several drawbacks, such as the possibility of cascading failures if any module fails during an iteration, and multiple LLM calls, which can further increase latency and information repetition between local and global evidence.

Our proposed NotesWriting method overcomes the aforementioned challenges by flexibly generating concise and relevant notes from retrieved documents at each iterative step. This addresses the critical need for noise reduction and effective context length, thereby allowing LLMs to reason and plan more effectively in complex multi-hop scenarios.

3 Method

To address the challenges of context overload and information noise in iterative RAG, particularly for multi-hop QA, we introduce NotesWriting, a method for generating concise, query-relevant notes from retrieved documents at each step. Instead of feeding raw retrieved documents to the main LM, NotesWriting first processes them to extract key information, thereby reducing context length and filtering irrelevant content.

3.1 NotesWriting: Iterative Note Extraction

The core idea is to use a dedicated, smaller language model ($\mathrm{LM}_{\text{notes}}$) to act as a note-taker. At each iteration $t$, after retrieving the top-$k$ documents $D_{q_t} = \{d_1, d_2, \ldots, d_k\}$ based on the query $q_t$, NotesWriting performs the following:

1. Note Extraction: For each retrieved document $d_i$, $\mathrm{LM}_{\text{notes}}$ is prompted (using prompt $P_{\text{notes}}$, see Appendix A.6) to extract concise notes $r_i$ relevant to the current query $q_t$:

$$r_i = \mathrm{LM}_{\text{notes}}(q_t, d_i) \qquad (6)$$

2. Note Aggregation: The extracted notes from all $k$ documents are aggregated as $O_t$:

$$O_t = \bigcup_{i=1}^{k} r_i \qquad (7)$$

This process replaces the direct feeding of potentially long and noisy documents $D_{q_t}$ with the much shorter and focused notes $O_t$.

3.2 ReNAct: ReAct with NotesWriting

While NotesWriting is a generic module that can be integrated with different iterative RAG methods, the results in Section 5 demonstrate that it works best with the ReAct framework (Yao et al., 2023). Therefore, we propose leveraging the ReAct framework as a suitable base for our approach. ReAct's structure explicitly separates reasoning (Thought) from information gathering (Action), where the action often involves generating a specific search query. This explicit query generation aligns naturally with the goal of targeted retrieval followed by focused note-taking. We combine ReAct with NotesWriting, with the process at step $t$ as follows:

1. The LM generates a Thought step outlining its reasoning, along with an Action step, typically containing a search query. This query becomes $q_t$:

$$q_t = Q_{\mathrm{ReAct}}(x, s_{<t}) = \mathrm{SearchQueryFrom}(\mathrm{Action}_t) \qquad (8)$$

Retrieval is performed using $q_t$ to get $D_{q_t}$.

2. NotesWriting processes $D_{q_t}$ using $\mathrm{LM}_{\text{notes}}$ to generate aggregated notes, presented as the observation $O_t$.

3. The main LM receives $O_t$ and uses it along with $x$ and $s_{<t}$ to generate the next Thought and Action pair:

$$s_t \ (\text{next Thought+Action}) = \mathrm{LM}([\text{Observation: } O_t, x, s_{<t}]) \qquad (9)$$

Iterations continue until the model generates a final answer within $s_t$ or reaches a maximum number
of iterations $T$, after which a final answer is synthesized based on the full history $s$ and the collected notes $\{O_t\}_{t=1}^{T}$. This approach (illustrated in Figure 1) aims to combine the structured reasoning of ReAct with the context management benefits of NotesWriting, leading to a more robust and efficient iterative RAG system for complex QA.

3.3 NotesWriting: A Plug-and-Play Module for Iterative RAG

NotesWriting is designed as a complementary module that can be integrated into various iterative RAG frameworks. It modifies the generation step (Eq. 5) while keeping the specific query formulation $Q$ of the base framework. We demonstrate this integration with two SOTA iterative RAG frameworks: IRCoT and FLARE.

IRCoT: The query remains the last generated sentence ($q_t = s_{t-1}$). The generation step becomes:

$$s_t = \mathrm{LM}([O_t, x, s_{<t}]), \text{ where } O_t \text{ is derived from } D_{q_t} = \mathrm{ret}(s_{t-1}) \qquad (10)$$

FLARE: Query formulation remains conditional on the confidence threshold $\theta$ ($q_t = Q_{\mathrm{FLARE}}(x, s_{<t})$). If retrieval occurs ($q_t \neq \emptyset$), the generation step uses the extracted notes:

$$s_t = \mathrm{LM}([O_t, x, s_{<t}]), \text{ where } O_t \text{ is derived from } D_{q_t} \qquad (11)$$

If retrieval is skipped ($q_t = \emptyset$), $O_t = \emptyset$ and generation proceeds without new retrieved context.

Model               | Dataset  | Over limit (n / %)
GPT-4o-mini (128K)  | Frames   | 463 / 549 (84.3%)
GPT-4o-mini (128K)  | FanoutQA | 244 / 310 (78.7%)
LLaMA 3.1 70B (64K) | Frames   | 488 / 549 (88.8%)
LLaMA 3.1 70B (64K) | FanoutQA | 255 / 310 (82.2%)

Table 1: Number of questions exceeding the LLM context window when the top-5 Wikipedia pages (markdown format) are inserted into the LLM context at each step.

4 Experiments

4.1 Datasets

(1) FanoutQA (Zhu et al., 2024) focuses on "fanout" multi-hop, multi-document complex questions that require gathering information about a large set of entities. We report results on the dev set containing 310 questions.

(2) FRAMES (Krishna et al., 2024) is a challenging multi-hop QA dataset requiring 2–15 hops to answer questions. We exclude questions requiring tabular reasoning and evaluate on 549 examples.
(3) HotpotQA (Yang et al., 2018) is a popular multi-hop QA dataset that requires reasoning over 2–3 Wikipedia articles. We report results on 500 examples from the dev set.

(4) MultiHop-RAG (Tang and Yang, 2024) is a non-Wikipedia-based benchmark that involves retrieval over recent news articles. It has ~600 news articles. For each question, we used BM25 to get the top five news articles in each iteration.

Evaluation metrics. We report the F1 score between the predicted and ground-truth answer, and, following (Krishna et al., 2024), we also use a GPT-4-as-Judge score with the prompt in Appendix 6. We also measure the effective context length by reporting the average number of input and output tokens processed by the main LLM and the notes-writing $\mathrm{LM}_{\text{notes}}$ across all steps/iterations. Finally, we report the average number of steps, defined as the number of search queries the main LLM needs to answer the question.

4.2 Models

We experiment with two LLMs, representing closed and open weights: GPT-4o-mini (footnote 1) and Llama-3.1-70B-Instruct (Dubey et al., 2024). We set the temperature to 0.7 and use the same LLM for generating the reasoning step and for NotesWriting (i.e., LM = $\mathrm{LM}_{\text{notes}}$). Llama-3.1-70B-Instruct was hosted using vLLM (Kwon et al., 2023) across 8 A100-80GB GPUs, supporting a maximum context length of 64K. GPT-4o-mini has a

Footnote 1: https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/
context length of 128K.

4.3 NotesWriting Implementation Details

For NotesWriting, we utilize the Wikipedia API to fetch the top 5 relevant pages based on the query $q_t$. Each retrieved Wikipedia page is converted to Markdown format using markdownify (footnote 2) before being processed by $\mathrm{LM}_{\text{notes}}$.

Initial experiments revealed that feeding the full content of the top-5 retrieved Wikipedia pages directly into the main LM (as standard iterative RAG baselines do) frequently caused context-length errors, especially on challenging benchmarks like FanoutQA and FRAMES. Table 1 shows that approximately 80% of the questions become unanswerable as the context builds up and exceeds the context window. This observation, corroborated by the large average token counts reported for the baselines in Tables 2, 8, and 9 (which often exceed typical LLM context limits), necessitated adjustments. Therefore, for a fair comparison, the baseline methods (IRCoT, FLARE, and ReAct without NotesWriting) were re-implemented using a chunked document setup as detailed in Section 4.4, while NotesWriting operates on the full retrieved pages.

4.4 Baseline Implementation Details

ReAct — As simply passing the top retrieved documents to the LLM at each step exceeds the context length (see Table 1), we re-implement the original baseline (Yao et al., 2023), which allows the LLM to search (retrieving the first paragraph of the top 5 Wikipedia pages), select (choosing the relevant page among the first 10 passages), and lookup (returning paragraphs containing a specific string in the selected page).

IRCoT & FLARE — These were originally designed for older completion-based models such as text-davinci-003, which relied heavily on internal parametric knowledge to generate CoTs. However, such a design is not directly applicable to questions requiring step-by-step planning and up-to-date knowledge.
To address this, we adapt the few-shot prompting strategy to be compatible with chat models, enabling them not only to generate CoTs but also to explicitly plan ahead (see Appendix A.6). Moreover, the original baselines used BM25 over an older Wikipedia dump. In initial experiments, however, we observed that the older dump is outdated for the latest datasets. Therefore, we used a recent dump, 20231101.en (footnote 3), and dense passage retrieval with e5-base-v2 embeddings (Wang et al., 2022). We set the selective retrieval parameter $\theta$ to 0.8 for all our experiments (footnote 4).

Footnote 2: https://pypi.org/project/markdownify/

Model         | Setting | Benchmark    | F1 (%) | GPT-4 (%) | Avg steps | Main in | Main out | Notes in | Notes out
GPT-4o-mini   | ReAct   | Fanout-QA    | 28.6   | 12.9      | 10.4      | 116K    | 916      | -        | -
GPT-4o-mini   | ReAct   | Frames       | 8.7    | 31.1      | 8.36      | 67K     | 707      | -        | -
GPT-4o-mini   | ReAct   | Hotpot-QA    | 42.2   | 56.4      | 3.33      | 26K     | 319      | -        | -
GPT-4o-mini   | ReAct   | MultiHop-RAG | 58.0   | 64.2      | 5.6       | 188K    | 278      | -        | -
GPT-4o-mini   | ReNAct  | Fanout-QA    | 50.0   | 28.0      | 7.8       | 17K     | 598      | 359K     | 675
GPT-4o-mini   | ReNAct  | Frames       | 46.8   | 52.3      | 6.51      | 16K     | 543      | 277K     | 607
GPT-4o-mini   | ReNAct  | Hotpot-QA    | 51.0   | 64.0      | 3.2       | 9K      | 326      | 130K     | 321
GPT-4o-mini   | ReNAct  | MultiHop-RAG | 58.0   | 70.6      | 6.0       | 46K     | 368      | 68K      | 390
LLaMA-3.1-70B | ReAct   | Fanout-QA    | 13.5   | 8.7       | 7.5       | 113K    | 506      | -        | -
LLaMA-3.1-70B | ReAct   | Frames       | 21.7   | 26.8      | 6.83      | 85K     | 433      | -        | -
LLaMA-3.1-70B | ReAct   | Hotpot-QA    | 43.7   | 52.6      | 4.3       | 49K     | 289      | -        | -
LLaMA-3.1-70B | ReAct   | MultiHop-RAG | 53.6   | 61.4      | 5.24      | 180K    | 295      | -        | -
LLaMA-3.1-70B | ReNAct  | Fanout-QA    | 43.0   | 26.1      | 5.80      | 15K     | 485      | 265K     | 1116
LLaMA-3.1-70B | ReNAct  | Frames       | 49.0   | 57.6      | 4.81      | 13K     | 412      | 193K     | 717
LLaMA-3.1-70B | ReNAct  | Hotpot-QA    | 55.5   | 67.4      | 3.34      | 8K      | 274      | 109K     | 391
LLaMA-3.1-70B | ReNAct  | MultiHop-RAG | 63.5   | 73.0      | 5.9       | 47K     | 262      | 76K      | 425

Table 2: ReAct and NotesWriting (ReNAct = ReAct + NotesWriting) results for GPT-4o-mini and LLaMA. Main tokens represent the total number of input and output tokens for the main LLM across all steps (averaged over all questions). Similarly, notes tokens represent the total number of input and output tokens across all steps for the notes-writing LLM (averaged over all questions). Token counts are rounded to the nearest thousand.

5 Results

Enhanced performance with NotesWriting. Tables 2, 3, and 4 show that, in comparison to the respective baselines, NotesWriting brings significant improvements across all models and benchmarks. Specifically, from Table 2, on complex long-form multi-hop QA datasets like FRAMES and Fanout-QA, ReNAct achieves an average absolute improvement of 29.1 points in F1 score and 21.1 points in GPT-4 score. On relatively easier datasets such as Hotpot-QA and MultiHop-RAG, ReNAct yields absolute improvements of 10.3 and 5.0 points, respectively. The strong results compared to the baseline demonstrate that the LLM receives correct and relevant information at each step with NotesWriting.
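The note-extraction step behind these gains (Section 3.1, Eqs. 6–8) can be sketched as follows. This is a minimal illustration, assuming a hypothetical `lm_notes` callable for the note-taking model and a ReAct-style `Action: search[...]` string; neither is the paper's actual prompt or parser.

```python
import re
from typing import Callable, List

def extract_query(action: str) -> str:
    # Eq. (8): pull the search query out of a ReAct-style
    # "Action: search[...]" line (hypothetical regex parsing).
    m = re.search(r"search\s*\[(.+?)\]", action)
    return m.group(1).strip() if m else action.strip()

def notes_writing(query: str, docs: List[str],
                  lm_notes: Callable[[str, str], str]) -> str:
    # Eq. (6): r_i = LM_notes(q_t, d_i) for each retrieved document d_i;
    # Eq. (7): aggregate the non-empty notes into the observation O_t.
    notes = [lm_notes(query, d) for d in docs]
    return "\n".join(n for n in notes if n)
```

The aggregated string then replaces the raw pages as the Observation fed back to the main LM (Eq. 9), so per-step context growth is bounded by the notes rather than by full documents.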
From Tables 3 and 4, on the challenging datasets, NotesWriting coupled with each of IRCoT and FLARE leads to improvements of 14.4 and 10.5 points on F1 and GPT-4 score. Similarly, on Hotpot-QA and MultiHop-RAG we find improvements of 7.0 and 10.8 points on F1 and GPT-4 score, respectively.

Footnote 3: https://huggingface.co/datasets/wikimedia/wikipedia
Footnote 4: We also compare NotesWriting with Infogent (Reddy et al., 2024), with details and results in Appendix A.1.

Model         | Setting            | Benchmark | F1 (%) | GPT-4 (%)
GPT-4o-mini   | IRCoT              | FanoutQA  | 33.6   | 15.2
GPT-4o-mini   | IRCoT              | Frames    | 24.0   | 22.0
GPT-4o-mini   | IRCoT              | HotpotQA  | 36.4   | 42.8
GPT-4o-mini   | IRCoT              | M-RAG     | 23.7   | 48.0
GPT-4o-mini   | IRCoT+NotesWriting | FanoutQA  | 41.9   | 21.3
GPT-4o-mini   | IRCoT+NotesWriting | Frames    | 43.9   | 42.3
GPT-4o-mini   | IRCoT+NotesWriting | HotpotQA  | 46.2   | 53.8
GPT-4o-mini   | IRCoT+NotesWriting | M-RAG     | 36.0   | 65.6
LLaMA-3.1-70B | IRCoT              | FanoutQA  | 21.0   | 8.4
LLaMA-3.1-70B | IRCoT              | Frames    | 19.4   | 21.1
LLaMA-3.1-70B | IRCoT              | HotpotQA  | 31.5   | 38.2
LLaMA-3.1-70B | IRCoT              | M-RAG     | 35.4   | 64.8
LLaMA-3.1-70B | IRCoT+NotesWriting | FanoutQA  | 36.0   | 22.9
LLaMA-3.1-70B | IRCoT+NotesWriting | Frames    | 26.9   | 33.3
LLaMA-3.1-70B | IRCoT+NotesWriting | HotpotQA  | 36.5   | 53.0
LLaMA-3.1-70B | IRCoT+NotesWriting | M-RAG     | 38.0   | 64.8

Table 3: IRCoT performance for GPT-4o-mini and LLaMA-3.1-70B. M-RAG represents MultiHop-RAG.

Increased effective context length. Tables 2, 8, and 9 show the average number of input and output tokens across all steps for the baselines and NotesWriting. The total number of tokens processed by the system (the sum of input tokens across the main and notes-writing LLMs) increases, allowing the model to reason over more retrieved content. However, it is important to note that this information cannot be naively appended to the main LLM's context (summing columns 7 and 9 would exceed the context window). This demonstrates that NotesWriting enables scalable use of a large retrieval context by delegating information management to a specialized LLM.

With ReAct (Table 2), the number of input tokens for the main LLM reduces significantly from the baseline to ReNAct across all benchmarks, by 77K tokens for GPT-4o-mini and 86K tokens for LLaMA-3.1-70B on average. This demonstrates that concise notes are being added at each retrieval step. Similarly, output tokens decrease across benchmarks, with an average reduction of 96 tokens for GPT-4o-mini and 53 tokens for LLaMA-3.1-70B. The same trend is observed with IRCoT (Table 8) and FLARE (Table 9),
where the main LLM input tokens reduce by at least 4x and 1.5x for GPT-4o-mini and LLaMA-3.1-70B, respectively, with output tokens being almost comparable.

Reduced average steps. ReNAct reduces the average number of steps across all the benchmarks except MultiHop-RAG (Table 2). For Frames and Fanout-QA, the average reduction is 2.23 and 1.86 steps for GPT-4o-mini and LLaMA-3.1-70B, respectively. The reduction is smaller but still present for Hotpot-QA, with an average drop of 0.13 and 0.96 steps, respectively. We further analyze the reduction in redundant queries and the correlation with ground-truth steps in Section 6.

NotesWriting is cost effective. Tables 2, 8, and 9 show that with NotesWriting the combined output tokens of the main and note-taking LLMs are on average 2-3x more than the baselines. However, this tradeoff is justified by the performance improvement, and the output tokens are significantly fewer (about 100x) than the effective number of input tokens. As output tokens are the major contributing factor to cost (footnote 5) and latency, NotesWriting is a much more cost-, compute-, and performance-effective approach.

6 Analysis

Reasoning Quality Analysis.
We evaluate the reasoning chains generated by ReAct and ReNAct using GPT-4o as a judge across three axes: (1) Efficiency — to measure redundant searches and how well each step contributes to the final answer; (2) Redundancy — to assess repeated search queries, or unnecessary repetition or duplication of steps; (3) Coherence — to check if the chain is comprehensible, logically connected, and free from unnecessary complexity or ambiguity. The evaluation prompt is in Appendix 7.

Footnote 5: https://openai.com/api/pricing/

Model         | Setting            | Benchmark | F1 (%) | GPT-4 (%)
GPT-4o-mini   | FLARE              | FanoutQA  | 35.1   | 14.2
GPT-4o-mini   | FLARE              | Frames    | 26.3   | 23.7
GPT-4o-mini   | FLARE              | HotpotQA  | 34.8   | 39.0
GPT-4o-mini   | FLARE              | M-RAG     | 28.9   | 65.7
GPT-4o-mini   | FLARE+NotesWriting | FanoutQA  | 42.3   | 22.2
GPT-4o-mini   | FLARE+NotesWriting | Frames    | 27.7   | 29.8
GPT-4o-mini   | FLARE+NotesWriting | HotpotQA  | 34.5   | 45.8
GPT-4o-mini   | FLARE+NotesWriting | M-RAG     | 30.2   | 66.6
LLaMA-3.1-70B | FLARE              | FanoutQA  | 23.0   | 11.4
LLaMA-3.1-70B | FLARE              | Frames    | 16.4   | 18.6
LLaMA-3.1-70B | FLARE              | HotpotQA  | 24.7   | 31.2
LLaMA-3.1-70B | FLARE              | M-RAG     | 36.1   | 67.0
LLaMA-3.1-70B | FLARE+NotesWriting | FanoutQA  | 35.8   | 24.2
LLaMA-3.1-70B | FLARE+NotesWriting | Frames    | 20.0   | 25.3
LLaMA-3.1-70B | FLARE+NotesWriting | HotpotQA  | 34.0   | 47.0
LLaMA-3.1-70B | FLARE+NotesWriting | M-RAG     | 30.5   | 66.4

Table 4: FLARE performance for GPT-4o-mini and LLaMA-3.1-70B. M-RAG represents MultiHop-RAG.

Model       | Dataset  | ReNAct | ReAct
GPT-4o-mini | Fanout   | 22.83  | 36.84
GPT-4o-mini | Frames   | 23.18  | 29.89
GPT-4o-mini | HotpotQA | 30.56  | 46.02
LLaMA       | Fanout   | 49.44  | 35.71
LLaMA       | Frames   | 28.22  | 30.94
LLaMA       | HotpotQA | 31.33  | 32.82

Table 5: Percentage of correctly answered questions with fewer search steps than the number of ground-truth Wikipedia pages.

Figure 2 shows the results. ReNAct is better than ReAct across all three axes, on all models and datasets. Specifically, on Frames and FanoutQA, across both models, efficiency, redundancy, and coherence improve by at least 1.5x. On HotpotQA, the improvement is 1.2x.

Search Steps Comparison. Figure 3 compares the number of ground-truth steps with the ReAct and ReNAct search steps for each question in each dataset across both models. The dashed lines for each method represent the incorrect answers and the solid lines represent the correct ones. The x-axis is the index of the question in the dataset, sorted by the number of ground-truth search steps.
From Figure 3, it can be observed that ReNAct (solid blue line) is much closer to the ground-truth steps, with ReAct (solid red line) being relatively far, demonstrating the effectiveness of
NotesWriting in coming up with a correct stepwise plan and search query for retrieval. Figure 3 also shows that the incorrect questions (dashed red and blue lines) have a higher number of steps, indicating that the model fails after many retries.

[Figure 2: Quality evaluation of the ReAct and ReNAct reasoning chains.]

[Figure 3: Steps (smoothed) by ReNAct and ReAct vs. the ground-truth steps for GPT-4o-mini and LLaMA-3.1-70B.]

The difference among ReNAct, ReAct, and the ground-truth steps is more significant in challenging datasets like Frames and Fanout-QA, as opposed to HotpotQA.

Correct Answers with Fewer Searches than Ground Truth. Figure 3 shows cases where ReNAct and ReAct (solid blue and red lines) are below the ground-truth steps. Table 5 reports the percentage of correctly answered questions which took fewer searches than the number of ground-truth Wikipedia pages required to answer the question correctly.

7 Conclusion

We presented NotesWriting, a plug-and-play module that improves effective context length in iterative RAG by accumulating only the most relevant information at each reasoning step. Experiments on three RAG baselines (IRCoT, FLARE, and ReAct), four multi-hop QA datasets, and two LLMs show that NotesWriting improves performance by up to 15.6 points, while also reducing context overload, the number of reasoning steps, and redundancy. In the ReAct setting, NotesWriting enables better planning by guiding the model to generate more accurate search queries and retrieve the correct documents. Moreover, NotesWriting consistently improves the coherence and efficiency of planning and search across models in ReAct. Therefore, we suggest ReNAct as an effective iterative RAG framework. Our results show that ReNAct (ReAct + NotesWriting) makes iterative RAG more scalable and precise.

Limitations and Societal Impact

Our approach has several limitations.
First, our experiments are limited to the two models we evaluate; the study could be extended to newer, smaller open-source models. Second, we limit online searches to the Wikipedia API (footnote 6), which only supports searching for text-matching Wiki pages; and third, Wiki pages change often, which could lead to a mismatch with static benchmarks' ground truth. While these could affect performance, we ensure that the same setup is followed in all baselines we experiment with, to keep the evaluation comparable while reducing the need for paid search APIs. Fourth, with retrievals based on iterative notes writing, there is a possibility of conflicting information being received (Table 18). It is possible that the model starts hallucinating facts, and this remains a weakness at large. Lastly, we impose a maximum iteration limit to ensure computational efficiency, which could also impact performance. Further exploration toward improving on these weaknesses remains future work.

Potential risks of our work include usage in scenarios where the requested retrieval information is toxic or harmful. While we cannot control how our method is used for prompting, we expect content moderation policies to help reduce the impact of such queries. Moreover, hallucinations (Tables 19, 20) can affect the QA experience, although manual observation of the reasoning traces shows that recovery can be
better with NotesWriting. We expect our work to significantly enhance the QA user experience, as focused information improves performance and reduced context lengths lower computational costs. We hope our NotesWriting method can contribute toward better task handling at large. We will make our code publicly available upon acceptance toward this goal.

Footnote 6: https://www.mediawiki.org/wiki/API:Search

References

Mohamed Aghzal, Erion Plaku, Gregory J Stein, and Ziyu Yao. 2025. A survey on large language models for automated planning. arXiv preprint arXiv:2502.12435.

Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2023. Self-rag: Learning to retrieve, generate, and critique through self-reflection. arXiv preprint arXiv:2310.11511.

Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, pages 2206–2240. PMLR.

Yanan Chen, Ali Pesaranghader, Tanmana Sadhu, and Dong Hoon Yi. 2024. Can we rely on llm agents to draft long-horizon plans? let's take travelplanner as an example. arXiv preprint arXiv:2408.06318.

Bhuwan Dhingra, Jeremy R Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W Cohen. 2022. Time-aware language models as temporal knowledge bases. Transactions of the Association for Computational Linguistics, 10:257–273.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.

Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt, Dasha Metropolitansky, Robert Osazuwa Ness, and Jonathan Larson. 2024.
From local to global: A graph rag approach to query-focused summarization. arXiv preprint arXiv:2404.16130.

Jingsheng Gao, Linxu Li, Weiyuan Li, Yuzhuo Fu, and Bin Dai. 2024. Smartrag: Jointly learn rag-related tasks from the environment feedback. arXiv preprint arXiv:2410.18141.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In International Conference on Machine Learning, pages 3929–3938. PMLR.

Cheng-Ping Hsieh, Simeng Sun, Samuel Kriman, Shantanu Acharya, Dima Rekesh, Fei Jia, Yang Zhang, and Boris Ginsburg. 2024. Ruler: What's the real context size of your long-context language models? arXiv preprint arXiv:2404.06654.

Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. 2024. Gpt-4o system card. arXiv preprint arXiv:2410.21276.

Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2023. Atlas: Few-shot learning with retrieval augmented language models. Journal of Machine Learning Research, 24(251):1–43.

Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38.

Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv
preprint arXiv:2305.06983.

Zhouyu Jiang, Mengshu Sun, Lei Liang, and Zhiqiang Zhang. 2025. Retrieve, summarize, plan: Advancing multi-hop question answering with an iterative approach. Preprint, arXiv:2407.13101.

Ziyan Jiang, Xueguang Ma, and Wenhu Chen. 2024. Longrag: Enhancing retrieval-augmented generation with long-context llms. arXiv preprint arXiv:2406.15319.

Ehsan Kamalloo, Nouha Dziri, Charles LA Clarke, and Davood Rafiei. 2023. Evaluating open-domain question answering in the era of large language models. arXiv preprint arXiv:2305.06984.

Greg Kamradt. 2023. Needle in a haystack - pressure testing llms. Github Repository, page 28.

Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A Smith, Yejin Choi, Kentaro Inui, et al. 2023. Realtime qa: What's the answer right now? Advances in Neural Information Processing Systems, 36:49025–49043.

Jaehyung Kim, Jaehyun Nam, Sangwoo Mo, Jongjin Park, Sang-Woo Lee, Minjoon Seo, Jung-Woo Ha, and Jinwoo Shin. 2024. Sure: Summarizing retrievals using answer candidates for open-domain qa of llms. arXiv preprint arXiv:2404.13081.

Satyapriya Krishna, Kalpesh Krishna, Anhad Mohananey, Steven Schwarcz, Adam Stambler, Shyam Upadhyay, and Manaal Faruqui. 2024. Fact, fetch, and reason: A unified evaluation of retrieval-augmented generation. arXiv preprint arXiv:2409.12941.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 611–626.

Jinhyuk Lee, Anthony Chen, Zhuyun Dai, Dheeru Dua, Devendra Singh Sachan, Michael Boratko, Yi Luan, Sébastien MR Arnold, Vincent Perot, Siddharth Dalmia, et al. 2024a. Can long-context language models subsume retrieval, rag, sql, and more? arXiv preprint arXiv:2406.13121.
Myeonghwa Lee, Seonho An, and Min-Soo Kim. 2024b. Planrag: A plan-then-retrieval augmented generation for generative large language models as decision makers. arXiv preprint arXiv:2406.12430.

Quinn Leng, Jacob Portes, Sam Havens, Matei Zaharia, and Michael Carbin. 2024. Long context rag performance of large language models. arXiv preprint arXiv:2411.03538.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474.

Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2020. How context affects language models' factual predictions. arXiv preprint arXiv:2005.04611.

Revanth Gangi Reddy, Sagnik Mukherjee, Jeonghwan Kim, Zhenhailong Wang, Dilek Hakkani-Tur, and Heng Ji. 2024. Infogent: An agent-based framework for web information aggregation. arXiv preprint arXiv:2410.19054.

Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H Chi, Nathanael Schärli, and Denny Zhou. 2023a. Large language models can be easily distracted by irrelevant context. In International Conference on Machine Learning, pages 31210–31227. PMLR.

Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023b. Replug: Retrieval-augmented black-box language models. arXiv preprint arXiv:2301.12652.

Yixuan Tang and Yi Yang. 2024. Multihop-rag: Benchmarking retrieval-augmented generation for multi-hop queries. arXiv preprint arXiv:2401.15391.

Harsh
Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. Interleav- ing retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. arXiv preprint arXiv:2212.10509 . Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, et al. 2023. Freshllms: Refreshing large language models with search engine augmenta- tion. arXiv preprint arXiv:2310.03214 . Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2022. Text embeddings by weakly- supervised contrastive pre-training. arXiv preprint arXiv:2212.03533 . Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits rea- soning in large language models. Advances in neural information processing systems , 35:24824–24837. Siye Wu, Jian Xie, Jiangjie Chen, Tinghui Zhu, Kai Zhang, and Yanghua Xiao. 2024. How easily do irrelevant inputs skew the responses of large language models? arXiv preprint arXiv:2404.03302 . Jian Xie, Kexun Zhang, Jiangjie Chen, Siyu Yuan, Kai Zhang, Yikai Zhang, Lei Li, and Yanghua Xiao. 2024. Revealing the barriers of language agents in planning. arXiv preprint arXiv:2410.12409 . Fangyuan Xu, Weijia Shi, and Eunsol Choi. 2023a. Re- comp: Improving retrieval-augmented lms with com- pression and selective augmentation. arXiv preprint arXiv:2310.04408 . Peng Xu, Wei Ping, Xianchao Wu, Lawrence McAfee, Chen Zhu, Zihan Liu, Sandeep Subramanian, Evelina Bakhturina, Mohammad Shoeybi, and Bryan Catan- zaro. 2023b. Retrieval meets long context large lan- guage models. In The Twelfth International Confer- ence on Learning Representations . Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christo- pher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. 
InProceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing , pages 2369–2380, Brussels, Belgium. Association for Com- putational Linguistics. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023. React: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR) . Wenhao Yu, Hongming Zhang, Xiaoman Pan, Kaixin Ma, Hongwei Wang, and Dong Yu. 2023. Chain-of- note: Enhancing robustness in retrieval-augmented language models. arXiv preprint arXiv:2311.09210 . Zhenrui Yue, Honglei Zhuang, Aijun Bai, Kai Hui, Rolf Jagerman, Hansi Zeng, Zhen Qin, Dong Wang, Xuan- hui Wang, and Michael Bendersky. 2024. Inference scaling for long-context retrieval augmented genera- tion. arXiv preprint arXiv:2410.04343 . Qin Zhang, Shangsi Chen, Dongkuan Xu, Qingqing Cao, Xiaojun Chen, Trevor Cohn, and Meng Fang. 2022. A survey for efficient open domain question answering. arXiv preprint arXiv:2211.07886 . Tianjun Zhang, Shishir G Patil, Naman Jain, Sheng Shen, Matei Zaharia, Ion Stoica, and Joseph E Gon- zalez. 2024. Raft: Adapting language model to do- main specific rag. In First Conference on Language Modeling . Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. 2023. Siren’s song in the ai ocean: a survey on hallucination in large language models. arXiv preprint arXiv:2309.01219 . Andrew Zhu, Alyssa Hwang, Liam Dugan, and Chris Callison-Burch. 2024. FanOutQA: A multi-hop, multi-document question | https://arxiv.org/abs/2505.16293v1 |
answering benchmark for large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 18–37, Bangkok, Thailand. Association for Computational Linguistics.

A Appendix

A.1 Infogent Implementation Details

We use the official code provided by the Infogent authors (Apache 2.0 license) with the following modifications:

• Due to our limitations in accessing OpenAI, we modify the code to use AzureOpenAI.
• The OpenAI embedding model is replaced by sentence-transformers' all-mpnet-base-v2 [7].
• Serper Google Search [8] is replaced by the Wikipedia search API, both due to credit limitations and to use the same open knowledge tools as our method, reducing the cost of the RAG experiments.

A.2 Results

Setting       Benchmark   F1 (%)   GPT-4 (%)
Infogent      FanoutQA    47.2     22.9
              Frames      28.0     29.9
NotesWriting  FanoutQA    50.0     28.0
              Frames      46.8     52.3

Table 6: Infogent vs. NotesWriting performance comparison on GPT-4o-mini.

A.3 Benchmarks

We evaluated four multi-hop QA datasets: (1) FanOutQA (Zhu et al., 2024), which features complex fan-out questions; (2) FRAMES (Krishna et al., 2024), requiring reasoning over 2–15 articles; (3) MultiHop-RAG (Tang and Yang, 2024), which involves retrieval and reasoning over news articles; and (4) HotpotQA (Yang et al., 2018), which requires multi-article reasoning. For FanOutQA, we evaluated all 310 examples from the development set, while for FRAMES, we used the 549 questions tagged with multiple constraints. For MultiHop-RAG and HotpotQA, we assessed performance on 500 examples from the test and development splits, respectively. FanOutQA, HotpotQA, and Wikipedia come under CC BY-SA 4.0 (Creative Commons Attribution-ShareAlike 4.0 International License), FRAMES under the Apache 2.0 license, and MultiHop-RAG under ODC-By (Open Data Commons Attribution License).
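The tables in this appendix report F1 alongside a GPT-4 judge score. The exact F1 implementation is not spelled out here; as a rough sketch, assuming the standard SQuAD-style bag-of-tokens F1 commonly used for open-domain QA (production evaluators typically also strip punctuation and articles before tokenizing), the metric can be computed as:

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-level F1: harmonic mean of precision and recall over the
    bag of whitespace-separated tokens (case-insensitive)."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Multiset intersection counts tokens shared by prediction and gold.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, token_f1("Harvard Law School", "Harvard University Law School") gives 6/7 ≈ 0.86 (precision 3/3, recall 3/4), which is why partially matching answers still earn credit even when the GPT-4 judge marks them wrong.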
[7] https://huggingface.co/sentence-transformers/all-mpnet-base-v2
[8] https://serper.dev/
[9] https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/

A.4 Models

We conduct experiments with two models, representing both closed and open weights: GPT-4o-mini [9] and Llama 3.1-70B-Instruct (Dubey et al., 2024). The temperature is set to 0.7, and the same LLM is used for generating reasoning steps and NotesWriting (i.e., Ms = M). Llama 3.1-70B-Instruct was hosted using vLLM (Kwon et al., 2023) across 8 A100-80GB GPUs, supporting a maximum context length of 64K. With parallelization, evaluation runs took approximately 9–10 hours for MultiHop-RAG, HotpotQA, and FRAMES, and around 15 hours for FanOutQA. GPT-4o-mini, which has a context length of 128K, completed evaluations in approximately 7 hours for FRAMES and FanOutQA, 2 hours for HotpotQA, and 27 minutes for MultiHop-RAG. The reported times include the full end-to-end process, accounting for rate limits, Wikipedia queries, and NotesWriting.

A.5 Standard deviation across runs

We ran NotesWriting and ReNAct three times across all datasets and models to measure the variance across runs. We report the results in Table 7.

Model          Dataset   Avg F1   GPT-4 Score
GPT-4o-mini    Fanout    ±1.86    ±2.45
               Frames    ±1.10    ±2.35
Llama-3.1 70B  Fanout    ±3.79    ±1.54
               Frames    ±4.42    ±5.76

Table 7: Standard deviation across Frames & FanoutQA.

A.6 Examples comparing ReNAct with baselines

Model          Setting       Benchmark     F1 (%)  GPT-4 (%)  Main In  Main Out  Notes In  Notes Out
GPT-4o-mini    Baseline      Fanout-QA     33.6    15.2       273K     385       -         -
                             Frames        24.0    22.0       183K     312       -         -
                             Hotpot-QA     36.4    42.8       99K      205       -         -
                             MultiHop-RAG  23.7    48.0       909K     322       -         -
               NotesWriting  Fanout-QA     41.9    21.3       68K      444       902K      1.8K
                             Frames        43.9    42.3       37K      280       658K      862
                             Hotpot-QA     46.2    53.8       26K      193       433K      650
                             MultiHop-RAG  36.0    65.6       189K     412       40K       324
LLaMA-3.1-70B  Baseline      Fanout-QA     21.0    8.4        197K     511       -         -
                             Frames        19.4    21.1       192K     732       -         -
                             Hotpot-QA     31.5    38.2       167K     557       -         -
                             MultiHop-RAG  35.4    64.8       429K     313       -         -
               NotesWriting  Fanout-QA     36.0    22.9       154K     552       1.4M      7.6K
                             Frames        26.9    33.3       116K     735       1.2M      3.9K
                             Hotpot-QA     36.5    53.0       100K     561       1.1M      3.3K
                             MultiHop-RAG  38.0    64.8       99K      680       338K      2.9K

Table 8: IRCoT results for GPT-4o-mini and LLaMA-3.1-70B across Baseline and NotesWriting. F1 and GPT-4 Score are percentages; Main and Notes columns give input/output token counts, rounded to the nearest thousand.

Model          Setting       Benchmark     F1 (%)  GPT-4 (%)  Main In  Main Out  Notes In  Notes Out
GPT-4o-mini    Baseline      Fanout-QA     35.1    14.2       268K     384       -         -
                             Frames        26.3    23.7       50K      314       -         -
                             Hotpot-QA     34.8    39.0       36K      218       -         -
                             MultiHop-RAG  28.9    65.7       204K     319       -         -
               NotesWriting  Fanout-QA     42.3    22.2       65K      409       968K      2.2K
                             Frames        27.7    29.8       43K      358       47K       56
                             Hotpot-QA     34.5    45.8       33K      258       45K       49
                             MultiHop-RAG  30.2    66.6       205K     312       14K       49
LLaMA-3.1-70B  Baseline      Fanout-QA     23.0    11.4       195K     453       -         -
                             Frames        16.4    18.6       86K      640       -         -
                             Hotpot-QA     24.7    31.2       83K      573       -         -
                             MultiHop-RAG  36.1    67.0       301K     558       -         -
               NotesWriting  Fanout-QA     35.8    24.2       154K     555       1.3M      7.5K
                             Frames        20.0    25.3       67K      592       47K       66
                             Hotpot-QA     34.0    47.0       65K      563       45K       77
                             MultiHop-RAG  30.5    66.4       67K      710       15K       38

Table 9: FLARE results for GPT-4o-mini and LLaMA-3.1-70B across Baseline and NotesWriting. F1 and GPT-4 Score are percentages; Main and Notes columns give input/output token counts, rounded to the nearest thousand.

Question: What law schools did the most recent four Supreme Court justices attend?

Thought: I need to gather information on the law schools attended by the most recent four Supreme Court justices. First, I will search for the names of the most recent four justices to find out where they studied law.

Action: search[Supreme Court justices; Who are the most recent four Supreme Court justices?]
Observation:
(Result 1) List of justices of the Supreme Court of the United States - The most recent four Supreme Court justices are: Amy Coney Barrett, Ketanji Brown Jackson, Neil Gorsuch, and Brett Kavanaugh.
(Result 2) Ideological leanings of United States Supreme Court justices - The most recent four Supreme Court justices are: Amy Coney Barrett (appointed in 2020), Ketanji Brown Jackson (appointed in 2022), Brett Kavanaugh (appointed in 2018), and Neil Gorsuch (appointed in 2017).
(Result 3) Demographics of the Supreme Court of the United States - The most recent four Supreme Court justices are: Ketanji Brown Jackson (appointed in 2022), Amy Coney Barrett (appointed in 2020), Brett Kavanaugh (appointed in 2018), and Neil Gorsuch (appointed in 2017).

Thought: The most recent four Supreme Court justices are Amy Coney Barrett, Ketanji Brown Jackson, Neil Gorsuch, and Brett Kavanaugh. Next, I need to find out which law schools they attended. I will start by searching for the law school attended by Amy Coney Barrett.

Action: search[Amy Coney Barrett; Which law school did Amy Coney Barrett attend?]

Observation:
(Result 1) Amy Coney Barrett attended Rhodes College (BA) and University of Notre Dame (JD). She graduated from Notre Dame Law School in 1997 with a Juris Doctor, summa cum
laude, ranked first in her class.

Thought: Amy Coney Barrett attended Notre Dame Law School. Next, I need to find out which law school Ketanji Brown Jackson attended.

Action: search[Ketanji Brown Jackson; Which law school did Ketanji Brown Jackson attend?]

Observation:
(Result 1) Ketanji Brown Jackson attended Harvard Law School, where she graduated in 1996 with a Juris Doctor, cum laude.
(Result 2) Joe Biden Supreme Court candidates - Ketanji Brown Jackson attended Harvard University for her law degree.
(Result 3) Demographics of the Supreme Court of the United States - Ketanji Brown Jackson attended Harvard Law School.

Thought: Ketanji Brown Jackson attended Harvard Law School. Now, I need to find out which law school Neil Gorsuch attended.

Action: search[Neil Gorsuch; Which law school did Neil Gorsuch attend?]

Observation:
(Result 1) Neil Gorsuch received his legal education at Harvard Law School and graduated in 1991 with a Juris Doctor, cum laude.

Thought: Neil Gorsuch also attended Harvard Law School. Finally, I need to find out which law school Brett Kavanaugh attended.

Action: search[Brett Kavanaugh; Which law school did Brett Kavanaugh attend?]

Observation:
(Result 1) Brett Kavanaugh - Kavanaugh graduated from Yale Law School with a Juris Doctor degree in 1990.

Thought: Brett Kavanaugh graduated from Yale Law School. Now I have gathered the law schools attended by the most recent four Supreme Court justices: Amy Coney Barrett attended Notre Dame Law School, Ketanji Brown Jackson and Neil Gorsuch both attended Harvard Law School, and Brett Kavanaugh attended Yale Law School.

Action: finish[Notre Dame Law School, Harvard Law School, Harvard Law School, Yale Law School]

Table 10: An example of NotesWriting on the FanoutQA dataset using GPT-4o-mini.

Question: According to the 2000 United States census, what was the 2000 population of the birth city of the only 21st-century mayor of Austin, Texas who also served as mayor in the 1990s?
Round your answer to the nearest thousand.

Thought: I need to determine the birth city of the only 21st-century mayor of Austin, Texas who also served as mayor in the 1990s, and then find its population in the 2000 United States census. First, I will identify the mayor who meets this criterion.

Action: search[Austin, Texas mayors; Who was the 21st-century mayor of Austin, Texas who also served as mayor in the 1990s?]

Observation:
(Result 1) Mayor of Austin - Kirk Watson took office as mayor on January 6, 2023, for a second term, having served as mayor from 1997 to 2001.
(Result 2) 2024 Austin mayoral election - Kirk Watson
(Result 3) Austin, Texas - Kirk Watson

Thought: The only 21st-century mayor of Austin, Texas who also served as mayor in the 1990s is Kirk Watson. I now need to find his birth city.

Action: search[Kirk Watson; Where was Kirk Watson born?]

Observation:
(Result 1) Kirk Watson - Oklahoma City, Oklahoma, U.S.
(Result 2) Austin, Texas - Kirk Watson was born in 1958 in Oak Park, Illinois

Thought: The information provided contains conflicting birth cities for Kirk Watson. To confirm his birth city, I will perform another search.

Action: search[Kirk
Watson (American politician); What is the confirmed birth city of Kirk Watson?]

Observation:
(Result 1) Kirk Watson - Oklahoma City

Thought: Based on the confirmed information, Kirk Watson's birth city is Oklahoma City. Now, I need to find the population of Oklahoma City in the 2000 United States census.

Action: search[Oklahoma City, Oklahoma; What was the population of Oklahoma City in the 2000 United States census?]

Observation:
(Result 1) Oklahoma City - 506,132

Thought: The population of Oklahoma City in the 2000 United States census was 506,132. To answer the question, I need to round this number to the nearest thousand.

Action: finish[506,000]

Table 11: An example of NotesWriting on the Frames dataset using Llama-3.1-70B.

Question: Diago Costa played for which club when he was awarded the first FIFA World Cup Goal based on a VAR Decision?

Thought: I need to determine which club Diago Costa was playing for when he was awarded the first FIFA World Cup Goal based on a VAR Decision. First, I will gather information on the first FIFA World Cup Goal awarded based on a VAR Decision.

Action: search[FIFA World Cup Goal; What was the first FIFA World Cup Goal awarded based on a VAR Decision, and who scored it?]

Observation:
(Result 1) 2022 FIFA World Cup - Ao Tanaka

Thought: It seems like the search result did not provide the relevant information about the first FIFA World Cup Goal awarded based on a VAR Decision.

Action: search[FIFA World Cup VAR Decision; What was the first FIFA World Cup Goal awarded based on a VAR Decision, and who scored it?]

Observation:
(Result 1) 2018 FIFA World Cup - Diego Costa's first goal against Portugal became the first World Cup goal based on a VAR decision.
(Result 2) 2022 FIFA World Cup - The first FIFA World Cup Goal awarded based on a VAR Decision was scored by Ao Tanaka of Japan against Spain.
(Result 3) 2019 FIFA Women's World Cup - The first FIFA World Cup goal awarded based on a VAR decision was scored by French midfielder Antoine Griezmann in the 58th minute of a group stage match against Australia on June 16, 2018.
(Result 4) Video assistant referee - The first VAR decision at the World Cup came on 16 June 2018 in a group stage match between France and Australia, where referee Andres Cunha awarded a penalty to France after consulting with the VAR. The resulting penalty kick was successfully converted by Antoine Griezmann of France, but the goal was initially disallowed for offside. However, the VAR team correctly advised the referee that the offside infraction was committed by an Australian player, not a French player, and the goal was awarded.

Thought: It seems like the search results provided conflicting information about the first FIFA World Cup Goal awarded based on a VAR Decision. However, according to Result 1 of the second search, Diego Costa's first goal against Portugal became the first World Cup goal based on a VAR
decision.

Action: search[Diego Costa; Which club did Diego Costa play for in the 2018 FIFA World Cup?]

Observation:
(Result 1) Diego Costa - Atlético Madrid

Thought: Diego Costa played for Atlético Madrid when he was awarded the first FIFA World Cup Goal based on a VAR Decision.

Action: finish[Atlético Madrid]

Observation: Atlético Madrid

Table 12: An example of NotesWriting on the Frames dataset using GPT-4o-mini.

Question: If my future wife has the same first name as the 15th first lady of the United States' mother and her surname is the same as the second assassinated president's mother's maiden name, what is my future wife's name?

Thought: I need to determine the first name of the 15th first lady of the United States and the surname of the second assassinated president's mother's maiden name. Let's start by identifying the 15th first lady of the United States and her mother's name.

Action: search[15th first lady of the United States; Who is the 15th first lady and what is her mother's name?]

Observation: No relevant information, try a different search term.

Thought: I need to rephrase my search to find the correct information about the 15th first lady of the United States and her mother's name. I will search for the name of the 15th first lady directly.

Action: search[15th first lady of the United States; Who is the 15th first lady?]

Observation: No relevant information, try a different search term.

Thought: It seems that I'm having trouble finding information about the 15th first lady of the United States. I will now search for a list of all first ladies to identify her name and then find her mother's name.

Action: search[List of first ladies of the United States; What is the name of the 15th first lady?]

Observation:
(Result 1) List of first ladies of the United States - The 15th first lady was Harriet Lane, who served from March 4, 1857, to March 4, 1861.

Thought: I have identified the 15th first lady of the United States as Harriet Lane.
Now, I need to find information about her mother's name.

Action: search[Harriet Lane; What is the name of Harriet Lane's mother?]

Observation:
(Result 1) Harriet Lane - Harriet Lane's mother was Jane Ann Buchanan Lane.

Thought: I have found that Harriet Lane's mother was named Jane Ann Buchanan Lane. Now, I need to identify the second assassinated president of the United States and find his mother's maiden name.

Action: search[second assassinated president of the United States; Who is the second assassinated president?]

Observation:
(Result 1) List of United States presidential assassination attempts and plots - James A. Garfield was the second U.S. president to be assassinated, occurring on July 2, 1881, less than four months after he took office.
(Result 2) List of presidents of the United States by time in office - The second assassinated president is James A. Garfield.

Thought: I need to rephrase my search to find the correct information about James A. Garfield's mother's maiden name. I will search for information about James A. Garfield and his family.

Action:
search[James A. Garfield's mother; What was the maiden name of James A. Garfield's mother?]

Observation: No relevant information, try a different search term.

Thought: It seems I'm having difficulty finding the maiden name of James A. Garfield's mother. I will search for biographies of James A. Garfield to find more details about his family.

Action: search[James A. Garfield biography; What is known about James A. Garfield's family?]

Observation:
(Result 1) James A. Garfield - James Abram Garfield was born on November 19, 1831, in a log cabin in Moreland Hills, Ohio. He was the youngest of five children. Garfield married Lucretia Rudolph in 1858, and they had seven children, five of whom survived infancy.

Thought: I still need to find the maiden name of James A. Garfield's mother. I will focus my search specifically on his mother's name.

Action: search[James A. Garfield's family; What is the name of James A. Garfield's mother?]

Observation:
(Result 1) James A. Garfield - James A. Garfield's mother was named Eliza.

Thought: I am struggling to find the maiden name of James A. Garfield's mother. I will search for historical records or references that detail Eliza Garfield's family background.

Action: search[Eliza Garfield biography; What is known about Eliza Garfield's family and maiden name?]

Observation: No relevant information, try a different search term.

Thought: It seems I am having persistent difficulties finding the maiden name of James A. Garfield's mother, Eliza. I will try searching for historical documentation or genealogical resources that might provide her maiden name.

Action: search[Eliza Garfield maiden name; What is the maiden name of James A. Garfield's mother?]

Observation:
(Result 1) Abbie C. B. Robinson - The maiden name of President James A. Garfield's mother is Eliza Ballou.

Thought: I have found that the maiden name of James A. Garfield's mother is Eliza Ballou. Now, I have the necessary information to form the name of my future wife.
Her first name is Jane (from Jane Ann Buchanan Lane) and her last name is Ballou (from Eliza Ballou).

Action: finish[Jane Ballou]

Table 13: An example of NotesWriting on the Frames dataset using GPT-4o-mini.

Question: What 2016 film stars actor Xavier Samuel and an actress who portrayed Vasilisa Dragomir in the film "Vampire Academy"?

Thought: I need to search for the film that stars Xavier Samuel, then find out who played Vasilisa Dragomir in "Vampire Academy" to identify the actress. Finally, I will check if both appeared in the same film in 2016.

Action: search[Xavier Samuel filmography]

Observation:
Wikipedia Title: Xavier Samuel
Xavier Samuel (born 10 December 1983) is an Australian film and theatre actor. He has appeared in leading roles in the feature films Adore, September, Further We Search, Newcastle, The Loved Ones, Frankenstein, A Few Best Men, and played Riley Biers...
Wikipedia Title: Mohanlal filmography
Mohanlal is an Indian actor, producer, distributor, host, and playback singer who has starred in mainstream blockbuster and art-house films. He has made over 355 appearances in feature films during the span of his career, primarily in Malayalam...
Wikipedia Title: Benedict Samuel
Benedict Samuel is an Australian actor, writer,
and director best known for playing Jervis Tetch / Mad Hatter in the Fox crime series Gotham and as Owen in The Walking...
Wikipedia Title: Gérard Depardieu
Gérard Xavier Marcel Depardieu is a multinational French actor, known to be one of the most prolific in film history. An icon of French cinema, considered a world star in...
Wikipedia Title: Jagadish filmography
P. V. Jagadish Kumar, known as Jagadish, is an Indian actor, screenwriter, and television presenter. He has starred in over 400 Malayalam...

Thought: The search still did not yield the specific filmography of Xavier Samuel. I will try searching for his name along with the year 2016 to see if I can find a film he starred in that year.

Action: search[Xavier Samuel 2016 film]

Observation:
Wikipedia Title: Xavier Samuel
Xavier Samuel (born 10 December 1983) is an Australian film and theatre actor. He has appeared in leading roles in the feature films Adore, September, Further We Search, Newcastle, The Loved Ones, Frankenstein, A Few Best Men, and played Riley Biers...
Wikipedia Title: Mr. Church
Mr. Church is a 2016 American drama film directed by Bruce Beresford and written by Susan McMartin. The film stars Eddie Murphy as the title character with Britt Robertson, Xavier Samuel, Lucy Fry, Christian Madsen and Natascha McElhone also...
Wikipedia Title: Benedict Samuel
Benedict Samuel is an Australian actor, writer, and director best known for playing Jervis Tetch / Mad Hatter in the Fox crime series Gotham and as Owen in The Walking...
Wikipedia Title: Samuel Hunt (actor)
Samuel Caleb Hunt (born October 10, 1986) is an American actor best known for his recurring role as Greg "Mouse" Gerwitz in the television series Chicago Fire and Chicago P.D., playing Xavier Rosen in the third season of Empire, and for starring...
Wikipedia Title: Spin Out (film)
Spin Out is a 2016 Australian romantic comedy film directed by Tim Ferguson and Marc Gracie and starring Xavier Samuel and Morgan...
Table 14: An example of ReAct on the HotpotQA dataset using GPT-4o-mini. It starts by repeatedly issuing similar search queries.

Thought: I have found that Xavier Samuel stars in the film "Spin Out" in 2016. Now I need to find out who portrayed Vasilisa Dragomir in "Vampire Academy."

Action: search[Vasilisa Dragomir Vampire Academy actress]

Observation:
Wikipedia Title: Vampire Academy (film)
Vampire Academy is a 2014 American fantasy comedy horror film directed by Mark Waters from a screenplay by Daniel Waters, based on the first book of the novel series of the same name by Richelle Mead. The film stars an ensemble cast led by Zoey...
Wikipedia Title: Bloodlines (book series)
Bloodlines is a book series by Richelle Mead. It is a spin-off of Mead's Vampire Academy series. The books follow alchemist Sydney Sage. After the incident with Rose and Dimitri, Sydney is being closely watched by the Alchemists (humans who help...
Wikipedia Title: List of fictional princesses
This is a list of fictional princesses that have appeared in various works of fiction. This list is organized by medium and limited to well-referenced, notable examples of fictional...

Thought: The search
did not provide the specific actress who portrayed Vasilisa Dragomir. I will select the "Vampire Academy" film page to find the cast details.

Action: select[Vampire Academy (film)]

Observation: 2014 fantasy comedy horror film by Mark Waters
| Vampire Academy |
| Theatrical release poster |
| Directed by | Mark Waters |
| Screenplay by | Daniel Waters |
| Based on | Vampire Academy by Richelle Mead |
| Produced by | Don Murphy, Susan Montford, Michael Preger, Deepak Nayar |
| Cinematography | Tony Pierce-Roberts |
| Edited by | Chris Gill |
| Music by | Rolfe Kent |
| Production companies | Kintop Pictures, Reliance Entertainment, Angry Films, Preger Entertainment |
| Distributed by | The Weinstein Company |
| Release dates | February 7, 2014 (US), July 14, 2014 (UK straight-to-DVD) |
Vampire Academy is a 2014 American fantasy comedy horror film directed by Mark Waters from a screenplay by Daniel Waters, based on the first book of the novel series by Richelle Mead. The film stars an ensemble cast led by Zoey Deutch as Rose Hathaway, a Dhampir, and Lucy Fry as Lissa Dragomir, her Moroi best friend, and follows their return to the St Vladimir's Academy boarding school, after being on the run for one year.

Thought: I found that Lucy Fry portrays Vasilisa Dragomir in "Vampire Academy." Now, I will check if both Xavier Samuel and Lucy Fry starred in the same film in 2016. Since Xavier Samuel is in "Spin Out," I need to find out if Lucy Fry is also in this film.

Action: search[Spin Out film cast]

Table 15: Continued example of ReAct on the HotpotQA dataset using GPT-4o-mini. It focuses on "Spin Out" but misses the potential answer "Mr. Church".

Observation:
Wikipedia Title: Spin Out (film)
Spin Out is a 2016 Australian romantic comedy film directed by Tim Ferguson and Marc Gracie and starring Xavier Samuel and Morgan...
Wikipedia Title: Spinning Out
Spinning Out is an American drama television series, created by Samantha Stratton, that premiered on Netflix on January 1, 2020. In February 2020, the series was canceled after one season. The series stars Kaya Scodelario as Kat Baker, a young ice...
Wikipedia Title: Spin (2021 film)
Spin is a 2021 American comedy film released as a Disney Channel Original Movie (DCOM). It is directed by Manjari Makijany from a screenplay by Carley Steiner and Josh A. Cagan. Produced by Disney Channel Original Productions, the film stars...
Wikipedia Title: Spinning Man
Spinning Man is a 2018 thriller film directed by Simon Kaijser based on a novel by George Harrar and adapted to screenplay by Matthew Aldrich. It stars Guy Pearce, Pierce Brosnan, Minnie Driver, Alexandra Shipp, Clark Gregg and Odeya Rush. The film...
Wikipedia Title: Dog Man (film)
Dog Man is a 2025 American animated superhero comedy film based on Dav Pilkey's children's graphic novel series Dog Man, produced by DreamWorks Animation and Scholastic Entertainment, and distributed by Universal Pictures. It is a spin-off and story...

Thought: The search did not provide the cast details for "Spin
Out." I will select the "Spin Out (film)" page to find the cast.

Action: select[Spin Out (film)]

Observation: 2016 Australian film
| Spin Out |
| Theatrical film poster |
| Directed by | Tim Ferguson, Marc Gracie |
| Written by | Edwina Exton, Tim Ferguson |
| Produced by | Marc Gracie, David Redman |
| Starring | Xavier Samuel, Morgan Griffin, Lincoln Lewis |
| Cinematography | Justin Brickle |
| Edited by | Ken Sallows |
| Music by | Jon Hume |
Spin Out is a 2016 Australian romantic comedy film directed by Tim Ferguson and Marc Gracie and starring Xavier Samuel and Morgan Griffin.

Thought: The search results confirm that "Spin Out" stars Xavier Samuel, but there is no mention of Lucy Fry in the cast list. Thus, they did not appear in the same film. I will now summarize the findings to answer the question.

Action: finish[There is no common film between Xavier Samuel and Lucy Fry in 2016.]

Observation: There is no common film between Xavier Samuel and Lucy Fry in 2016.

Table 16: Continued example of ReAct on the HotpotQA dataset using GPT-4o-mini. It fails by focusing exclusively on Xavier Samuel's film "Spin Out" while completely overlooking "Mr. Church," despite it being mentioned in the search results alongside both actors' names. This prevents it from discovering that "Mr. Church" is the 2016 film starring both Xavier Samuel and Lucy Fry (who played Vasilisa Dragomir in "Vampire Academy").

Question: What 2016 film stars actor Xavier Samuel and an actress who portrayed Vasilisa Dragomir in the film "Vampire Academy"?

Thought: I need to identify the actress who portrayed Vasilisa Dragomir in "Vampire Academy" and then find a 2016 film that stars Xavier Samuel alongside her. First, I will search for the cast of "Vampire Academy" to determine who played Vasilisa Dragomir.

Action: search[Vampire Academy cast; Who portrayed Vasilisa Dragomir in the film "Vampire Academy"?]
Observation:
(Result 1) Vampire Academy (film) - Lucy Fry portrayed Vasilisa "Lissa" Dragomir in the film "Vampire Academy".

Thought: Lucy Fry portrayed Vasilisa Dragomir in "Vampire Academy." Now, I need to find a 2016 film that stars both Xavier Samuel and Lucy Fry.

Action: search[Xavier Samuel; What films did Xavier Samuel star in during 2016?]

Observation:
(Result 1) Xavier Samuel - In 2016, Xavier Samuel played the titular character in *The Death and Life of Otto Bloom*, and performed alongside Kate Beckinsale, Chloe Sevigny, Stephen Fry, and James Fleet in the critically-acclaimed film *Love and Friendship*, adapted from the Jane Austen novel *Lady Susan*.

Thought: I found two films that Xavier Samuel starred in during 2016: *The Death and Life of Otto Bloom* and *Love and Friendship*. Now, I need to check if Lucy Fry appeared in either of these films.

Action: search[Lucy Fry; What films did Lucy Fry star in during 2016?]

Observation:
(Result 1) Lucy Fry - In 2016, Lucy Fry starred in the films *Mr. Church* as Poppy and *The Darkness* as Stephanie Taylor.

Thought: Lucy Fry starred in *Mr. Church*
and *The Darkness* in 2016. I need to check if either of these films also features Xavier Samuel. Action: search[Mr. Church; Did Xavier Samuel star in Mr. Church?] Observation: (Result 1) Mr. Church - Xavier Samuel is listed as one of the stars in the film *Mr. Church*. Thought: I have confirmed that both Xavier Samuel and Lucy Fry starred in the film *Mr. Church* in 2016. Action: finish[*Mr. Church*] Observation: *Mr. Church* Table 17: The previous example with NotesWriting on the HotpotQA dataset using GPT-4o-mini. Question: As of August 1, 2024, what is the largest city of the 9th largest country by land area in Europe? Thought: I need to determine the 9th largest country by land area in Europe and then find its largest city. First, I will identify the 9th largest country by land area in Europe. Action: search[Largest countries in Europe by land area; Which is the 9th largest country by land area in Europe?] Observation: (Result 1) List of countries and dependencies by area - The 9th largest country by land area in Europe is Kazakhstan (2,699,700 km²). (Result 2) List of European countries by area - Poland Thought: It seems like there are multiple conflicting sources for the 9th largest country by land area in Europe, with one source saying it’s Kazakhstan, but Kazakhstan is not in Europe, and another source saying it’s Poland but without providing more information. In this case, I’ll try to find the correct 9th largest country by land area in Europe. Action: search[Largest countries in Europe by land area; What are the top 10 largest countries by land area in Europe?] Observation: (Result 1) List of countries and dependencies by area - 1. Russia - 16,376,870 km² 2. Ukraine - 579,330 km² 3. France - 640,427 km² 4. Spain - 498,980 km² 5. Sweden - 407,284 km² 6. Norway - 366,704 km² 7. Germany - 349,390 km² 8. Finland - 303,815 km² 9. Poland - 304,255 km² 10.
Italy - 295,717 km² Thought: The 9th largest country by land area in Europe is indeed Poland, with a land area of 312,679 km² or 304,255 km². Now, I need to determine the largest city of Poland. Action: search[Poland; What is the largest city in Poland as of August 1, 2024?] Observation: (Result 1) Poland - Warsaw (Result 2) Invasion of Poland - The largest city in Poland as of August 1, 2024, is Warsaw (Polish: Warszawa), which is also the capital of Poland. Thought: The largest city of the 9th largest country by land area in Europe as of August 1, 2024, is Warsaw. Action: finish[The largest city of the 9th largest country by land area in Europe as of August 1, 2024, is Warsaw.] Table 18: An example from the FRAMES dataset where note taking leads to conflicting information (in bold). To clarify the confusion, the agent retries the search to find the correct information. Question: A General Motors vehicle is named after the largest ward in the country of Monaco. How many people had walked on the moon as of
the first model year of the vehicle? Note: the model year is not the same as the year the model was first produced. Thought: I need to determine which General Motors vehicle is named after the largest ward in Monaco and find out the first model year of that vehicle. After that, I will find out how many people had walked on the moon by that year. Let’s start by searching for the General Motors vehicle named after the largest ward in Monaco. Action: search[General Motors vehicle named after the largest ward in Monaco; What is the largest ward in Monaco and which GM vehicle is named after it?] Observation: No relevant information, try a different search term. Thought: I need to find the largest ward in Monaco first and then see if there is a General Motors vehicle named after it. I will search for the largest ward in Monaco. Action: search[largest ward in Monaco; What is the largest ward in Monaco?] Observation: (Result 1) Monaco - The largest quarter in Monaco is Monte Carlo. Thought: The largest ward in Monaco is Monte Carlo. Now, I need to search for any General Motors vehicle named after Monte Carlo and find out its first model year. Action: search[Monte Carlo; What is the General Motors vehicle named after Monte Carlo and what is its first model year?] Observation: (Result 1) Chevrolet Monte Carlo - The Chevrolet Monte Carlo is the General Motors vehicle named after Monte Carlo, and it was introduced for the 1970 model year. Thought: The Chevrolet Monte Carlo is named after the largest ward in Monaco and was introduced for the 1970 model year. Now, I need to find out how many people had walked on the moon by the end of 1970. Action: search[moon landings; How many people had walked on the moon by the end of 1970?] Observation: (Result 1) Moon landing - A total of twelve astronauts have landed on the Moon . (Result 2) Apollo program - By the end of 1970, twelve people had walked on the Moon. 
Thought: By the end of 1970, a total of twelve astronauts had walked on the Moon. Now I can provide the final answer regarding how many people had walked on the Moon as of the first model year of the Chevrolet Monte Carlo. Action: finish[12] Observation: 12 Table 19: An example of hallucination by the note-taking LLM in NotesWriting on the Frames dataset by GPT-4o-mini. The incorrect note is shown in bold. Instead, the note-writing LLM should have calculated the number of people who had walked on the Moon before 1970, which is 4. Question: How old would the founder of the publishing company of the magazine that serialized the manga series Raw Hero have been the year the magazine ended publication? Thought: I need to first identify the founder of the publishing company that serialized the manga series Raw Hero. Then I will determine the year the magazine ended publication and calculate the founder’s age at that time. Action: search[Raw Hero; Which publishing company serialized the manga series
Raw Hero?] Observation: (Result 1) Raw Hero - *Raw Hero* was serialized in Kodansha’s *Evening*. Thought: The manga series Raw Hero was serialized in Kodansha’s *Evening* magazine. Now, I need to find out who the founder of Kodansha is. Action: search[Kodansha; Who is the founder of Kodansha?] Observation: (Result 1) Kodansha - Kodansha was founded by Seiji Noma in 1909. Thought: Seiji Noma is the founder of Kodansha, which serialized *Raw Hero*. Next, I need to determine the year *Evening* magazine ended publication to calculate his age at that time. Action: search[Evening magazine; When did Evening magazine cease publication?] Observation: (Result 1) *Evening* (magazine) - The magazine ended publication on February 28, 2023. (Result 2) PM Magazine - *Evening Magazine* ceased publication on August 30, 1991. Thought: The *Evening* magazine ceased publication on February 28, 2023. I will use the founding year of Kodansha (1909) to calculate the age of Seiji Noma at the time of the magazine’s end in 2023. Action: finish[113] Observation: 113 Table 20: An example of hallucination by the main LLM in NotesWriting on the Frames dataset by GPT-4o-mini. The LLM uses the founding year of the publishing company as the birth year of the founder (highlighted in bold) instead of first identifying the birth year. IRCoT and FLARE fewshot prompt Task: Given a question and relevant documents, generate a step-by-step reasoning process to solve it. Instructions: - Enclose each reasoning step within <s> </s> tags. - Reason in short sentences as the sentences will be used as queries to retrieve relevant documents for subsequent steps. - If provided with a partial reasoning chain, complete it by generating the next set of steps. - If you do not have up-to-date knowledge, try to think of a plan to solve the question. - Provide the final answer inside <answer> </answer> tags.
DOCUMENTS: Nobody Loves You (When You’re Down and Out) - "Nobody Loves You (When You’re Down and Out)" is a song written by John Lennon released on his 1974 album "Walls and Bridges". The song is included on the 1986 compilation "Menlove Ave.", the 1990 boxset "Lennon", the 1998 boxset "John Lennon Anthology", the 2005 two-disc compilation "", and the 2010 boxset "Gimme Some Truth". John Lennon/Plastic Ono Band - John Lennon/Plastic Ono Band is the debut studio album by English rock musician John Lennon. It was released in 1970, after Lennon had issued three experimental albums with Yoko Ono and "Live Peace in Toronto 1969", a live performance in Toronto credited to the Plastic Ono Band. The album was recorded simultaneously with Ono’s debut avant garde solo album, "Yoko Ono/Plastic Ono Band", at Ascot Sound Studios and Abbey Road Studios using the same musicians and production team and nearly identical cover artwork. Walls and Bridges - Walls and Bridges is the fifth studio album by English musician John Lennon. It was issued by Apple Records on 26 September 1974 in the United States and on 4 October in the United Kingdom. Written, recorded and released during his 18-month separation from Yoko Ono, the album captured Lennon in the | https://arxiv.org/abs/2505.16293v1 |
midst of his "Lost Weekend". "Walls and Bridges" was an American "Billboard" number-one album and featured two hit singles, "Whatever Gets You thru the Night" and "#9 Dream". The first of these was Lennon’s first number-one hit in the United States as a solo artist, and his only chart-topping single in either the US or Britain during his lifetime. Question: Nobody Loves You was written by John Lennon and released on what album that was issued by Apple Records, and was written, recorded, and released during his 18 month separation from Yoko Ono? Step-by-step reasoning: <s>Identify album issued by Apple Records and recorded during John Lennon’s 18-month separation from Yoko Ono.</s> <s>The album "Walls and Bridges" was issued by Apple Records and recorded during this period.</s> <s>Determine the album on which "Nobody Loves You (When You’re Down and Out)" was released.</s> <s>"Nobody Loves You (When You’re Down and Out)" was released on "Walls and Bridges".</s> <answer>Walls and Bridges</answer> DOCUMENTS: Route 13 (Laos) - Route 13 is the most important highway in the country of Laos. It begins at Boten in the North of Laos at the Chinese border. It connects the city of Vientiane to Luang Prabang in the north and roughly follows the line of the Mekong River down to the border with Cambodia. The road then continues at National Highway 7 in Cambodia. Also, this national highway leads past all three international airports in Laos: Vientiane Airport, Luang Prabang Airport, and Pakse Airport. Between Boten and Nateuy, Route 13 is at a length of 20 km as part of the Kunming-Bangkok Expressway, which is also part of the Asian Highway AH3. From Nateuy to Vientiane, Route 13 is part of AH12, and from Vientiane to the border of Cambodia, it is part of AH11. National Route 13 (Vietnam) - National Route 13 is a highway in southern Vietnam stretching from the northeastern outskirts of Ho Chi Minh City, the commercial center of the country, towards the border to Cambodia. 
The highway starts around on the northern outskirts of Ho Chi Minh City, once the site of the military academy of the Army of the Republic of Vietnam, and travels north through the provinces. Cambodia - Cambodia (Khmer: or Kampuchea), officially known as the Kingdom of Cambodia , is a country located in the southern portion of the Indochina Peninsula in Southeast Asia. It is 181,035 km² in area, bordered by Thailand to the northwest, Laos to the northeast, Vietnam to the east, and the Gulf of Thailand to the southwest. Question: What is known as the Kingdom and has National Route 13 stretching towards its border? Step-by-step reasoning: <s>Cambodia is officially known as the Kingdom of Cambodia.</s> <s>National Route 13 in Vietnam stretches towards the border of Cambodia.</s> <s>Since Cambodia is referred to as the Kingdom and National Route 13 leads to its border, Cambodia is the correct answer.</s> <answer>Cambodia</answer> DOCUMENTS: Vandenbrink GTO - The Vandenbrink GTO is a limited re-bodied version of the Ferrari 599 GTB Fiorano. It features an entirely new coachwork designed by Michiel van den Brink | https://arxiv.org/abs/2505.16293v1 |
of Vandenbrink Design, inspired by the 1962 Ferrari 250 GTO as a tribute. 1969 World Sportscar Championship - The 1969 World Sportscar Championship was the 17th season of FIA World Sportscar racing, featuring Group 6 Prototype Sports Cars, Group 4 Sports Cars, and Group 3 Grand Touring Cars. Ferrari 250 GTO - The Ferrari 250 GTO was a GT car produced by Ferrari from 1962 to 1964 for homologation into the FIA’s Group 3 Grand Touring Car category. It was powered by Ferrari’s Tipo 168/62 V12 engine. Question: Which car, produced by Ferrari from 1962 to 1964 for homologation into the FIA’s Group 3 Grand Touring Car category, inspired the Vandenbrink GTO? Step-by-step reasoning: <s>The Ferrari 250 GTO was produced by Ferrari from 1962 to 1964 for homologation into the FIA’s Group 3 Grand Touring Car category.</s> <s>The Vandenbrink GTO was inspired by the Ferrari 250 GTO.</s> <s>Since the question asks for the car that inspired the Vandenbrink GTO, the correct answer is the Ferrari 250 GTO.</s> <answer>Ferrari 250 GTO</answer> DOCUMENTS: Talula - "Talula" is a song by Tori Amos, released as the second single from her 1996 album "Boys For Pele". It reached #22 on the UK Singles Chart and appears in the Jan de Bont film "Twister". Nema aviona za Zagreb - "Nema aviona za Zagreb" ("There Is No Plane to Zagreb") is a 2012 Dutch film by Louis van Gasteren. The film is a retrospective of events in the director’s life from 1964 to 1969, filmed by him in that period and reflected on from his vantage point over 40 years later at the age of 90. It is also Jan de Bont’s first film credit as cinematographer since 1992’s "Basic Instinct". Jan de Bont - Jan de Bont (born 22 October 1943) is a Dutch cinematographer, director, and film producer. He is widely known for directing the 1994 action film "Speed," starring Keanu Reeves and Sandra Bullock, and the disaster film "Twister." 
As a director of photography, he has shot numerous blockbusters and genre films, including "Cujo, Flesh and Blood, Die Hard, The Hunt for Red October," and "Basic Instinct." Question: Who was born first? Jan de Bont or Raoul Walsh? Step-by-step reasoning: <s>Jan de Bont was born on October 22, 1943.</s> <s>Raoul Walsh was born on March 11, 1887.</s> <s>Since 1887 is earlier than 1943, Raoul Walsh was born first.</s> <answer>Raoul Walsh</answer> Figure 4: Fewshot prompt used for the evaluation of IRCoT and FLARE methods. Prompt for Notes Writing Extract relevant information which is not previously extracted from the Wikipedia page provided in markdown format relevant to the given query. You will be provided with the Wikipedia page, query, and the previously extracted content. Do not miss any information. Do not add irrelevant information or anything outside of the provided sources. Provide the answer in the format: <YES/NO>#<Relevant context> . Here are the rules: • If you don’t know how to answer the query - start your answer with NO# • If the text is not related to the query - start your answer with NO# • | https://arxiv.org/abs/2505.16293v1 |
If the content is already extracted - start your answer with NO# • If you can extract relevant information - start your answer with YES# Example answers: •YES#Western philosophy originated in Ancient Greece in the 6th century BCE with the pre-Socratics. •NO#No relevant context. Context: {Context} Previous Context: {PrevContext} Query: {Query} Figure 5: Notes writing prompt for extracting the relevant information. GPT-4 Judge Prompt ===Task=== I need your help in evaluating an answer provided by an LLM against a ground truth answer. Your task is to determine if the ground truth answer is present in the LLM’s response. Please analyze the provided data and make a decision. ===Instructions=== 1. Carefully compare the "Predicted Answer" with the "Ground Truth Answer". 2. Consider the substance of the answers – look for equivalent information or correct answers. Do not focus on exact wording unless the exact wording is crucial to the meaning. 3. Your final decision should be based on whether the meaning and the vital facts of the "Ground Truth Answer" are present in the "Predicted Answer." ===Input Data=== •Question: «question» •Predicted Answer: «LLM_response» •Ground Truth Answer: «ground_truth_answer» ===Output Format=== Provide your final evaluation in the following format: Explanation: (How you made the decision?) Decision: ("TRUE" or "FALSE") Please proceed with the evaluation. Figure 6: GPT-4 prompt for evaluating the correctness of the predicted answer. Quality evaluation prompt You are asked to evaluate the reasoning chain produced in response to a question, particularly focusing on how effectively tools were used throughout the process. The evaluation should be based on the following clearly defined criteria. For each criterion, provide a numerical rating on a scale from 0 to 5, where 5 represents excellent performance and 0 indicates poor or entirely absent performance.
Criterion 1: Efficiency of the Steps Taken Definition: Evaluate the overall efficiency of each step in the reasoning chain, with specific focus on whether the tool calls and reasoning steps helped progress toward the final correct answer. Efficient steps reduce uncertainty, narrow the solution space, or directly contribute to solving the problem. Rating Guide: 5 – Extremely efficient: Every step clearly advances the reasoning; no wasted effort. 4 – Highly efficient: Most steps are purposeful, with only minor inefficiencies. 3 – Moderately efficient: Some steps are valuable, others contribute little. 2 – Minimally efficient: Several steps are misdirected or low-impact. 1 – Poorly efficient: Most steps offer minimal or no progress toward the answer. 0 – Not efficient at all: Steps are irrelevant, aimless, or distracting. Criterion 2: Redundancy of Steps Definition: Assess the reasoning chain for unnecessary repetition or duplication of steps, including redundant tool calls or rephrasing of the same logic without new insight. A low-redundancy chain avoids rework and keeps the progression streamlined. Rating Guide: 5 – No redundancy: Each step is unique and adds distinct value. 4 – Very low redundancy: Only minor repetition, quickly resolved. 3 – Moderate redundancy: Some ideas or tool uses are repeated without added benefit. 2 – Noticeable redundancy: Multiple steps repeat similar content or actions unnecessarily. 1 – High redundancy: Repetition significantly detracts from conciseness. 0 – | https://arxiv.org/abs/2505.16293v1 |
Extremely redundant: Most of the chain rehashes prior reasoning with no new value. Criterion 3: Clarity and Coherence of the Reasoning Chain Definition: Examine how clearly and logically the reasoning chain progresses from the question to the final answer. This includes whether steps are easy to follow, logically connected, and free of ambiguity or excessive complexity. Rating Guide: 5 – Exceptionally clear and coherent: The reasoning is logical, concise, and easy to follow. 4 – Mostly clear: The chain is understandable with minor clarity issues. 3 – Moderately clear: Some transitions or justifications are unclear or weak. 2 – Confusing in parts: Multiple unclear, inconsistent, or disjointed steps. 1 – Difficult to follow: Lacks logical flow or clear structure. 0 – Incomprehensible: The chain cannot be understood or followed logically. First provide your reasoning of your evaluation then structure your responses as a json with the keys "Criterion 1", "Criterion 2", "Criterion 3" and the values as the ratings you provided. Chain: {} Figure 7: Prompt for quality evaluation of reasoning chain.
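The `<YES/NO>#<Relevant context>` convention defined in the notes-writing prompt above implies a simple post-processing step on the extractor’s reply. A minimal sketch of such a parser follows; the helper name `parse_note` is ours, since the paper only specifies the output convention, not an implementation:

```python
def parse_note(response: str):
    """Split a notes-writing reply of the form '<YES/NO>#<context>'.

    Returns (is_relevant, context). Any reply that does not start with
    'YES#' is treated as irrelevant, mirroring the prompt's NO# rules.
    """
    flag, sep, context = response.partition("#")
    is_relevant = sep == "#" and flag.strip().upper() == "YES"
    return is_relevant, context.strip()


# Example replies taken from the prompt's own examples:
print(parse_note("YES#Western philosophy originated in Ancient Greece "
                 "in the 6th century BCE with the pre-Socratics."))
print(parse_note("NO#No relevant context."))
```

Because `str.partition` returns an empty separator when no `#` is present, malformed replies also fall through to the irrelevant case.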
ToDi: Token-wise Distillation via Fine-Grained Divergence Control

Seongryong Jung1,2, Suwan Yoon1, DongGeon Kim1, Hwanhee Lee1*
1Department of Artificial Intelligence, Chung-Ang University, 2Dmtlabs
{jungsr1116, swyoon0312, golddonggun, hwanheelee}@cau.ac.kr

Abstract

Large language models (LLMs) offer impressive performance but are impractical for resource-constrained deployment due to high latency and energy consumption. Knowledge distillation (KD) addresses this by transferring knowledge from a large teacher to a smaller student model. However, conventional KD, notably approaches like Forward KL (FKL) and Reverse KL (RKL), apply uniform divergence loss across the entire vocabulary, neglecting token-level prediction discrepancies. By investigating these representative divergences via gradient analysis, we reveal that FKL boosts underestimated tokens, while RKL suppresses overestimated ones, showing their complementary roles. Based on this observation, we propose Token-wise Distillation (ToDi), a novel method that adaptively combines FKL and RKL per token using a sigmoid-based weighting function derived from the teacher–student probability log-ratio. ToDi dynamically emphasizes the appropriate divergence for each token, enabling precise distribution alignment. We demonstrate that ToDi consistently outperforms recent distillation baselines using uniform or less granular strategies across instruction-following benchmarks. Extensive ablation studies and efficiency analysis further validate ToDi’s effectiveness and practicality.

1 Introduction

Recent advances in large language models (LLMs), driven by scaling up model size, have substantially enhanced their ability to follow user instructions and generate contextually appropriate responses (Brown et al., 2020; Sanh et al., 2021; Wei et al., 2022; Chung et al., 2024).
However, the continued enlargement of model size introduces several challenges, including increased inference latency, high energy consumption, and inefficiency in resource-constrained environments. To address these issues, knowledge distillation (KD; Hinton et al., 2015) has been widely adopted; this approach aims to minimize the performance gap between teacher and student models by transferring knowledge from a high-performing large teacher model to a smaller student model. Recently, various knowledge distillation techniques for enhancing the efficiency of LLMs have been proposed, and research surrounding these methods is actively underway (Zhang et al., 2024b; Feng et al., 2024; Shing et al., 2025).

*Corresponding author.

Figure 1: Token-wise learning signals for KL-based distillation objectives. Conventional methods apply a fixed divergence across the entire vocabulary, while ToDi dynamically blends Forward and Reverse KL per-token based on the teacher–student probability ratio, balancing gradients across all tokens.

Conventional knowledge distillation methods often employ divergences such as
Forward KL (FKL) and Reverse KL (RKL) to minimize the discrepancy between teacher and student distributions (Hinton et al., 2015; Gu et al., 2024). However, as depicted in the example for FKL and RKL in Figure 1, these approaches apply a single divergence uniformly across the entire vocabulary, regardless of how severely the student misestimates each token. This uniform-loss assumption persists in symmetric and hybrid variants of FKL/RKL (Wen et al., 2023; Ko et al., 2024; Agarwal et al., 2024), and even dynamic combinations at the vocabulary-set or time-step level like Adaptive KL (Wu et al., 2025). We hypothesize that such uniform treatment is sub-optimal because different tokens may require different correction signals.

arXiv:2505.16297v1 [cs.CL] 22 May 2025

In this paper, we analyze the limitation of uniform application by investigating token-specific optimal signals through a gradient-based analysis of divergences in existing KD methods (Section 3). This analysis reveals that FKL effectively increases the probability of tokens that the student model underestimates relative to the teacher model, whereas RKL excels at suppressing the probability of tokens that it overestimates, showing their distinct and complementary roles. However, existing methods apply a uniform divergence loss across the entire vocabulary, failing to leverage these complementary signals effectively at the token level. As shown in Figure 1, this uniformity prevents appropriate training signals for individual tokens, particularly when the student significantly over- or underestimates the teacher’s distribution.

Motivated by this insight, we propose a novel distillation method, Token-wise Distillation (ToDi) (Section 4). As illustrated in Figure 1, ToDi dynamically balances the contributions of FKL and RKL based on token-level prediction discrepancies by adaptively combining them per-token using a token-specific weighting function.
This approach directly provides tailored training signals that capture fine-grained differences between the teacher and student distributions, going beyond uniform loss application.

We demonstrate ToDi’s effectiveness through extensive experiments and show that ToDi consistently outperforms recent distillation baselines on various instruction-following benchmarks, achieving superior ROUGE-L scores and higher win rates in GPT-4-based pairwise evaluations. Furthermore, we validate the critical importance of ToDi’s token-wise divergence control. We also show that ToDi maintains stable training and linear time complexity with respect to vocabulary size, highlighting its efficiency and practicality.

The principal contributions of this paper are as follows:

• We analyze and show the complementary roles of FKL and RKL for KD through gradient analysis.
• Based on this analysis, we propose ToDi, a new KD method that adaptively combines FKL and RKL per token according to prediction discrepancies and enables fine-grained distribution alignment.
• We provide theoretical grounding for ToDi and demonstrate its superior performance over existing methods through extensive experiments on instruction-following tasks.

2 Related Work

2.1 Objective Functions of KD

In knowledge distillation (Hinton et al., 2015), the student model is trained to mimic the teacher’s output distribution by minimizing the divergence loss. The FKL induces mode averaging, smoothing a multimodal teacher distribution, while the RKL causes mode collapse, driving the student to focus on a single
mode (Minka et al., 2005; Koller and Friedman, 2009; Chan et al., 2022; Wang et al., 2023a). To counter these extremes, Wen et al. (2023) adopted the symmetric Jensen–Shannon Divergence (JSD), and Agarwal et al. (2024) generalized it to interpolate between FKL and RKL. Skewed KL variants (SKL, SRKL) further mix the student distribution into the teacher’s distribution for stability (Ko et al., 2024), while TAID (Shing et al., 2025) inserts a time-varying intermediate distribution between teacher and student.

Despite these advances, all prior work on applying KD for language models still processes the entire vocabulary distribution at every sequence position and applies a uniform loss across tokens. This coarse treatment misses token-level mismatches between teacher and student, limiting the student’s ability to replicate the teacher’s fine-grained predictive structure. Our proposed method aims to overcome this limitation by applying a token-wise dynamic divergence control, precisely addressing these fine-grained mismatches.

Figure 2: Toy example demonstrating the behavior of FKL and RKL gradients (plots of a teacher distribution $p(x)$, a student distribution $q(x)$, and the gradient magnitudes $|{-p/q}|$ for FKL and $|\log(q/p) + 1|$ for RKL). In regions where $p > q$, FKL provides stronger gradients, while in regions where $q > p$, RKL provides stronger learning signals.

2.2 Dynamic Combination of FKL and RKL

Several studies have explored combining FKL and RKL to take advantage of both methods. Lee et al. (2023) proposed a straightforward additive combination, whereas Amara et al. (2022) introduced BD-KD, which adjusts the weights of FKL and RKL on a per-sample basis via the entropy gap between teacher and student distributions. Wu et al.
(2025) presented AKL, tailored for LLM distillation, which adaptively combines the two divergences based on the observation that, in early training, FKL primarily learns head predictions while RKL focuses on tail predictions. Nevertheless, such approaches still dynamically apply FKL and RKL to the entire vocabulary distribution at every sequence position without assigning dynamic weights to individual tokens. This limitation prevents a fine-grained reflection of token-level prediction differences between teacher and student, thereby hindering the learning of detailed predictive structures. In contrast, our proposed ToDi method dynamically balances FKL and RKL on a per-token basis, capturing fine-grained probability discrepancies and enabling more precise predictive structure learning.

3 Gradient Behavior of FKL and RKL

In this section, we formalize knowledge distillation for autoregressive LLMs and analyze the FKL and RKL objectives from a gradient perspective. By understanding the gradients, we precisely examine how the learning signal for each vocabulary token depends on the relative magnitudes of the teacher probability $p(v_i \mid y_{<t}, x)$ and the student probability $q_\theta(v_i \mid y_{<t}, x)$, providing insight into token-specific optimal signals.

3.1 Preliminaries

An autoregressive LLM generates an output sequence $y = [y_1, \ldots, y_{|y|}]$ conditioned on an input sequence $x$. At each time step $t$, it selects one token from a finite vocabulary $\mathcal{V} = \{v_1, \ldots, v_{|\mathcal{V}|}\}$. KD minimizes the discrepancy between the teacher’s distribution $p(y_t \mid y_{<t}, x)$ and the student’s distribution $q_\theta(y_t \mid y_{<t}, x)$, where $\theta$ denotes
the student parameters and $y_{<t} = [y_1, \ldots, y_{t-1}]$ are the tokens generated before step $t$.

| Case | Forward KL | Reverse KL |
|---|---|---|
| $p > q_\theta$ | ↑ Strong push-up | ≈ Weak push-up |
| $p < q_\theta$ | ≈ Weak pull-down | ↓ Strong pull-down |

Table 1: Complementary training signals of FKL vs. RKL.

During KD, the loss is typically instantiated as either the FKL or the RKL. At time step $t$, the contribution of each divergence for a token $v_i \in \mathcal{V}$ is defined as:

$D^{(t,i)}_{\mathrm{FKL}}(p, q_\theta) = p(v_i \mid y_{<t}, x) \log \dfrac{p(v_i \mid y_{<t}, x)}{q_\theta(v_i \mid y_{<t}, x)}$,  (1)

$D^{(t,i)}_{\mathrm{RKL}}(p, q_\theta) = q_\theta(v_i \mid y_{<t}, x) \log \dfrac{q_\theta(v_i \mid y_{<t}, x)}{p(v_i \mid y_{<t}, x)}$.  (2)

Training Objective. We accumulate the token-level divergences (from Equations 1 and 2) over all time steps and vocabulary entries to obtain the total forward and reverse KL divergence losses:

$\mathcal{L}_{\mathrm{FKL}} = \sum_{t=1}^{|y|} \sum_{i=1}^{|\mathcal{V}|} D^{(t,i)}_{\mathrm{FKL}}(p, q_\theta)$,  (3)

$\mathcal{L}_{\mathrm{RKL}} = \sum_{t=1}^{|y|} \sum_{i=1}^{|\mathcal{V}|} D^{(t,i)}_{\mathrm{RKL}}(p, q_\theta)$.  (4)

3.2 Theoretical Analysis

We theoretically analyze the FKL and RKL training signals. In particular, we examine how the two divergences exert opposite corrective effects depending on the relative magnitudes of the teacher distribution $p(y_t \mid y_{<t}, x)$ and the student distribution $q_\theta(y_t \mid y_{<t}, x)$. The analysis is grounded in the token-level definitions given in Equations 1 and 2.

Figure 3: Illustration of Token-wise Distillation (ToDi). (Left) For each vocabulary token, the contributions of FKL and RKL are dynamically combined using a token-specific weight $\alpha_{t,i}$. (Right) The weight $\alpha_{t,i}$, determined by the teacher–student probability ratio, smoothly increases FKL emphasis when $p > q_\theta$ and RKL emphasis when $q_\theta > p$.

Gradient Form.
From the derivations summarized in Appendix A, the partial derivatives of each divergence with respect to q_θ are:

∂ D_FKL^(t,i)(p, q_θ) / ∂ q_θ(v_i | y_{<t}, x) = − p(v_i | y_{<t}, x) / q_θ(v_i | y_{<t}, x),   (5)

∂ D_RKL^(t,i)(p, q_θ) / ∂ q_θ(v_i | y_{<t}, x) = log [ q_θ(v_i | y_{<t}, x) / p(v_i | y_{<t}, x) ] + 1.   (6)

Difference in Training Signals by Relative Probability. The two gradients can be compared through a single ratio r = p(v_i | y_{<t}, x) / q_θ(v_i | y_{<t}, x):

• r > 1 (the student underestimates). Here, the FKL gradient −r is a negative value whose magnitude exceeds 1, pushing q_θ to increase sharply. The RKL gradient, log(1/r) + 1, turns negative only when r > e, and its magnitude is smaller, producing a relatively weak corrective signal. Thus, for tokens underestimated by the student, FKL provides the dominant "push-up" signal.

• r < 1 (the student overestimates). In this case, the FKL gradient remains a small negative value, whereas the RKL gradient is a positive value greater than 1, providing a strong signal to decrease q_θ. Consequently, when the student overestimates, RKL provides the dominant "pull-down" signal.

In summary, as organized in Table 1, our theoretical analysis reveals that FKL and RKL provide complementary training signals around the boundary r = 1: FKL strongly encourages increasing the student probability (i.e., push-up) for underestimated tokens (p > q_θ), while RKL strongly encourages decreasing the student probability (i.e., pull-down) for overestimated tokens (q_θ > p).

3.3 Empirical Analysis of a Toy Example

To empirically examine how FKL and RKL gradient magnitudes depend on the relative
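As a quick sanity check on Equations 5 and 6, both gradients can be evaluated directly as functions of the ratio r = p/q. This is an illustrative sketch, not code from the paper; the function names are ours:

```python
import math

def grad_fkl(r):
    # Eq. (5) expressed via the ratio r = p/q: dD_FKL/dq = -p/q = -r
    return -r

def grad_rkl(r):
    # Eq. (6) expressed via r: dD_RKL/dq = log(q/p) + 1 = 1 - log(r)
    return 1.0 - math.log(r)

# Underestimated token (r > 1): the FKL "push-up" signal dominates.
g_f_under, g_r_under = grad_fkl(4.0), grad_rkl(4.0)

# Overestimated token (r < 1): the RKL "pull-down" signal dominates.
g_f_over, g_r_over = grad_fkl(0.25), grad_rkl(0.25)
```

Evaluating a few ratios this way reproduces the case analysis above, including the sign flip of the RKL gradient at r = e.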
teacher–student probabilities at each token, we construct a toy example by defining a teacher distribution p(x) and a student distribution q(x). Figure 2 compares the gradient magnitudes according to the relative relationship between p(x) and q(x). The left panel shows where the two distributions intersect, with the regions p(x) > q(x) and q(x) > p(x) shaded separately. The right panel visualizes, for each index, the gradient magnitudes induced by FKL and RKL. Consistent with the theoretical analysis, we observe that in the region where p(x) > q(x), FKL produces substantially larger gradients than RKL, delivering a strong corrective signal for tokens that the student underestimates relative to the teacher. Conversely, in the region where q(x) > p(x), the magnitude of the RKL gradient is greater, indicating a strong signal to suppress overestimation. Consequently, FKL and RKL provide specialized training signals in different scenarios.

4 ToDi

In this section, we introduce Token-wise Distillation (ToDi), which dynamically adjusts the contributions of FKL and RKL based on the token-level probability ratios of the teacher and student distributions.

Objective Functions for ToDi. As shown in the gradient analysis of Section 3, for each vocabulary token v_i, when p(v_i | y_{<t}, x) > q_θ(v_i | y_{<t}, x), the FKL provides a learning signal that effectively increases q_θ; conversely, when q_θ(v_i | y_{<t}, x) > p(v_i | y_{<t}, x), the RKL offers a signal that reduces q_θ. Building on this insight into their complementary roles, we propose a novel distillation method, Token-wise Distillation (ToDi), which dynamically combines FKL and RKL according to the relative magnitudes of the teacher probability p(v_i | y_{<t}, x) and the student probability q_θ(v_i | y_{<t}, x).
Unlike conventional approaches that apply a single loss uniformly across the entire vocabulary, ToDi computes a specific loss for each token v_i at time step t, denoted D_ToDi^(t,i). This token-level loss is a weighted sum of the token's FKL and RKL divergences:

D_ToDi^(t,i)(p, q_θ) = α_{t,i} · D_FKL^(t,i)(p, q_θ) + (1 − α_{t,i}) · D_RKL^(t,i)(p, q_θ),   (7)

where α_{t,i} is a token-specific weight dynamically computed for each token v_i based on the relative teacher and student probabilities. As illustrated in Figure 3 (Left), the weighting function amplifies the contribution of FKL when needed (when p > q_θ) and the contribution of RKL when needed (when q_θ > p).

The overall distillation loss is the sum of these token-level losses over all time steps and vocabulary entries:

L_ToDi = Σ_{t=1}^{|y|} Σ_{i=1}^{|V|} D_ToDi^(t,i)(p, q_θ).   (8)

Weighting Function for ToDi. The core of ToDi's token-wise control lies in the weighting function that determines α_{t,i}. This weight must adjust dynamically according to the relative magnitudes of p(v_i | y_{<t}, x) and q_θ(v_i | y_{<t}, x) to effectively leverage the complementary nature of FKL and RKL. Specifically, the token-specific weight α_{t,i} is defined by a function W of these probabilities:

α_{t,i} = W( p(v_i | y_{<t}, x), q_θ(v_i | y_{<t}, x) ).   (9)

    Function            α_{t,i}(r)  (r = p/q_θ)            β
    Sigmoid             1/(1 + e^{−log r}) = r/(1 + r)     1
    Scaled tanh         (1/2)(1 + tanh(log r))             2
    Jeffreys (fixed)    1/2                                0
    Step function       1[r > 1]                           β → ∞

Table 2: Various weighting functions can be unified under the Generalized ToDi, where each can be
expressed in the form α_{t,i}(r) = σ(β log r) with an appropriate scaling factor β.

The function W should assign a larger value (thus increasing the contribution of FKL) when p(v_i | y_{<t}, x) > q_θ(v_i | y_{<t}, x), so as to boost the student's probability. Conversely, when q_θ(v_i | y_{<t}, x) > p(v_i | y_{<t}, x), a smaller function value (favoring RKL) is appropriate. To satisfy these requirements and enable fine-grained control, W must meet the following four conditions:

• If p(v_i | y_{<t}, x) > q_θ(v_i | y_{<t}, x), then α_{t,i} should be greater than 0.5 to emphasize FKL.

• If q_θ(v_i | y_{<t}, x) > p(v_i | y_{<t}, x), then α_{t,i} should be less than 0.5 to emphasize RKL.

• To allocate more extreme weights when the teacher–student probability gap is larger, α_{t,i} must be a monotonically increasing function of the ratio p(v_i | y_{<t}, x) / q_θ(v_i | y_{<t}, x).

• α_{t,i} must lie within the valid weight range [0, 1].

To satisfy all four conditions, we adopt the sigmoid function for W, defining α_{t,i} as:

α_{t,i} = sg[ σ( log [ p(v_i | y_{<t}, x) / q_θ(v_i | y_{<t}, x) ] ) ].   (10)

Here, σ(·) denotes the sigmoid function, and sg[·] the stop-gradient operator. By applying sg[·], we block gradient flow through its argument, effectively treating the weight α_{t,i} as a fixed value during backpropagation of the loss. As illustrated in Figure 3 (Right), α_{t,i} varies smoothly between 0 and 1 according to the magnitude of p(v_i | y_{<t}, x) / q_θ(v_i | y_{<t}, x), naturally reflecting the teacher–student probability discrepancy. A detailed proof that the sigmoid satisfies all four conditions is provided in Appendix B. Furthermore, we implement the stop-gradient operator sg[·] as a detach operation during training; its effects are discussed in detail in Appendix C.
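As a concrete sketch, the forward pass of the ToDi loss (Equations 7, 8, and 10, with the scaling factor β of the generalized form in Equation 11) can be written in a few lines of numpy. The names below are ours, not from an official implementation, and in an autograd framework the weight would additionally be wrapped in a detach, which a plain forward pass does not need to model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def todi_loss(p, q, beta=1.0):
    """ToDi loss forward pass for one sequence.

    p, q: arrays of shape (T, V) holding teacher / student probabilities.
    beta=1 recovers standard ToDi (Eq. 10); other values give the
    generalized weighting sigma(beta * log(p/q)) of Eq. 11.
    """
    alpha = sigmoid(beta * np.log(p / q))           # token-specific weight
    fkl = p * np.log(p / q)                         # per-token D_FKL (Eq. 1)
    rkl = q * np.log(q / p)                         # per-token D_RKL (Eq. 2)
    per_token = alpha * fkl + (1.0 - alpha) * rkl   # Eq. 7
    return per_token.sum(), alpha                   # Eq. 8

# Two time steps over a 3-token vocabulary (rows sum to 1).
teacher = np.array([[0.6, 0.3, 0.1],
                    [0.2, 0.5, 0.3]])
student = np.array([[0.3, 0.5, 0.2],
                    [0.2, 0.3, 0.5]])
loss, alpha = todi_loss(teacher, student)
```

With β = 1 the weight satisfies the four conditions above: α > 0.5 exactly where p > q_θ, α < 0.5 where q_θ > p, and α ∈ (0, 1); β = 0 fixes α = 0.5, giving an equal Jeffreys-style mix (Table 2).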
    GPT2 1.5B → GPT2 120M
    Method       DollyEval    S-NI         UnNI         SelfInst     VicunaEval   Average
    Teacher      26.66±0.30   27.17±0.33   31.60±0.13   14.42±0.49   16.32±0.41   23.23
    SFT          23.09±0.53   16.44±0.39   18.96±0.08    9.72±0.43   14.81±0.34   16.61
    FKL          24.06±0.43   18.43±0.22   21.42±0.04   11.13±0.34   15.53±0.45   18.12
    RKL          24.22±0.18   18.60±0.10   21.99±0.07   11.42±0.33   15.65±0.51   18.38
    JS           23.77±0.29   17.31±0.17   19.74±0.07   10.08±0.37   15.08±0.32   17.20
    TVD          23.90±0.61   17.89±0.24   20.87±0.12   10.73±0.71   15.20±0.30   17.72
    SKL          24.05±0.31   17.18±0.31   20.43±0.08   10.54±0.55   14.93±0.29   17.42
    SRKL         24.20±0.40   18.02±0.18   21.67±0.09   11.05±0.48   15.07±0.22   18.00
    AKL          24.67±0.29   18.29±0.23   21.46±0.12   10.62±0.68   15.28±0.16   18.07
    ToDi (Ours)  24.81±0.62   19.42±0.18   22.16±0.21   11.30±0.41   15.61±0.34   18.66

    LLaMA2 7B → TinyLLaMA 1.1B
    Teacher      28.88±0.23   30.72±0.36   32.02±0.08   19.89±0.58   18.76±0.59   26.05
    SFT          23.36±0.26   26.19±0.18   26.69±0.08   15.76±1.04   15.88±0.63   21.58
    FKL          25.40±0.50   30.13±0.43   29.47±0.06   18.22±1.12   16.77±0.31   24.00
    RKL          24.11±0.31   32.09±0.37   30.29±0.11   17.97±0.84   16.02±0.73   24.09
    JS           24.41±0.34   28.55±0.33   28.69±0.10   17.31±0.32   16.21±0.52   23.03
    TVD          24.71±0.74   29.23±0.25   29.12±0.05   16.64±0.83   16.19±0.63   23.18
    SKL          25.32±0.54   31.10±0.38   29.89±0.11   17.45±0.69   16.32±0.33   24.01
    SRKL         24.93±0.18   30.52±0.31   30.62±0.15   17.17±0.68   16.41±0.36   23.93
    AKL          25.50±0.53   30.41±0.28   30.55±0.08   17.52±0.57   16.79±0.34   24.15
    ToDi (Ours)  26.26±0.31   31.53±0.22   31.29±0.17   18.14±0.23   16.96±0.23   24.83

Table 3: Across all distillation settings, our proposed ToDi consistently outperforms every baseline in ROUGE-L score. The best
result is shown in bold, and the second best is underlined.

Generalized ToDi. Any function satisfying the four weight conditions introduced above can take many forms. To explore this design space and unify various weighting strategies, we introduce a scaling hyperparameter β ∈ R. By incorporating β into the sigmoid input, we can express a variety of weighting functions in a single unified form. The ToDi weight function α_{t,i} then becomes:

α_{t,i} = sg[ σ( β · log [ p(v_i | y_{<t}, x) / q_θ(v_i | y_{<t}, x) ] ) ].   (11)

As summarized in Table 2, by simply varying β, this unified framework can represent a range of weighting functions, such as the standard sigmoid (β = 1), scaled tanh (β = 2), Jeffreys divergence (Jeffreys, 1946) (β = 0), and an approximation of a step function (β → ∞).

5 Experiments

5.1 Experimental Setup

Training Configuration. We follow the experimental setup of Zhang et al. (2024c) to evaluate ToDi. For training, we use the databricks/dolly-15k dataset, which comprises 11K training samples, 1K validation samples, and 500 test samples. As student models, we employ GPT2-120M (Radford et al., 2019) and TinyLLaMA-1.1B (Zhang et al., 2024a). We train GPT2-120M via full fine-tuning using GPT2-1.5B as the teacher model, whereas we train TinyLLaMA-1.1B with LoRA (Hu et al., 2022) using LLaMA2-7B (Touvron et al., 2023) as the teacher.

Evaluation Protocol. We conduct performance evaluation following the protocol of Gu et al. (2024), using the ROUGE-L metric (Lin, 2004). We assess instruction-following ability across five datasets: DollyEval, S-NI (Wang et al., 2022), UnNI (Honovich et al., 2023), SelfInst (Wang et al., 2023b), and VicunaEval (Zheng et al., 2023). We repeat each evaluation with five different random seeds and report the average scores. Further details of the experimental setup are provided in Appendix D.

Baseline Methods.
We use the following methods as baselines to compare the performance of ToDi:

• SFT: Fine-tuning the student model directly on the dataset without knowledge distillation.

• FKL/RKL (Hinton et al., 2015; Gu et al., 2024): Knowledge distillation using the Forward or Reverse KL divergence.

• JS/TVD (Wen et al., 2023): Symmetric divergences—Jensen–Shannon and Total Variation—minimizing the distance between the teacher and student distributions.

• SKL/SRKL (Ko et al., 2024): Skewed KL and Skewed Reverse KL, which mix the teacher and student distributions at ratio λ; SKL uses λp + (1 − λ)q_θ while SRKL uses (1 − λ)p + λq_θ.

• AKL (Wu et al., 2025): Adaptive KL, which combines FKL and RKL by considering head–tail differences in the distributions.

[Figure 4: GPT-4 pairwise evaluation of TinyLLaMA models trained with various KD methods on 5,000 UnNI examples. Bars show Win/Tie/Lose proportions; p-values at right.]

To evaluate ToDi's performance, we select various divergence-based knowledge distillation methods as baselines and compare their performance based on the choice of divergence.

5.2 Results

Overall Performance. We first evaluate the overall instruction-following performance of ToDi against the baselines using ROUGE-L.
Table 3 presents the performance of the teacher and student models under different teacher–student configurations, compared across various knowledge distillation methods. Our proposed ToDi achieves the highest average score on the five instruction-following tasks for both teacher–student pairs, outperforming all baseline methods and showing that ToDi effectively transfers the teacher's knowledge to the student. ToDi consistently outperforms all single-divergence baselines and even surpasses approaches that use a single, global weight across the entire vocabulary. These results indicate that dynamic, token-level adjustment of divergence weights—tailored to each token's predicted probability discrepancy—yields significant performance gains. Additional experiments on diverse teacher–student configurations are presented in Appendix E.

[Figure 5: Validation ROUGE-L scores per epoch for TinyLLaMA using various KD methods.]

Preference Evaluation via GPT-4. We further evaluate ToDi through a pairwise comparison experiment using GPT-4, assessing the subjective quality of responses generated by models trained with ToDi.
We randomly select 5,000 samples from the UnNI dataset and compare the responses generated by a TinyLLaMA model trained with ToDi to those produced by models trained with alternative divergence objectives; GPT-4 judges which response is superior. As shown in Figure 4, ToDi consistently achieves higher win rates across all comparisons. In most cases, these improvements are statistically significant (p < 0.001), confirming ToDi's superiority over the baselines. For additional details, refer to Appendix F.

5.3 Analysis

Training Stability and Convergence. We analyze the training dynamics of ToDi to assess its stability and convergence behavior. As shown in Figure 5, ToDi maintains a large performance margin over the other methods at every epoch, achieving the highest scores throughout training. In particular, ToDi outperforms all baselines by a wide margin in the first epoch and exhibits a steady upward trajectory during the middle epochs (2–6). In the later stages (6–10), its learning curve remains smooth and converges stably without oscillation. These results indicate that ToDi not only provides a strong training signal as a KD loss function but also ensures reliable convergence.

Computational Efficiency. We compare the computational complexity of ToDi with existing methods to assess its efficiency.
[Figure 6: (Left) Performance comparison of Generalized ToDi with different scaling parameters β ∈ {1, 0, −1} across the five evaluation datasets. The dynamic weighting scheme (β = 1) outperforms the static setting (β = 0), while the reversed weighting (β = −1) shows clear performance degradation on all datasets. (Right) Average ROUGE-L scores on five instruction-following benchmarks for fixed-ratio FKL–RKL mixtures applied uniformly across the entire vocabulary distribution (0.25 FKL + 0.75 RKL: 24.46; 0.50 FKL + 0.50 RKL: 24.54; 0.75 FKL + 0.25 RKL: 24.52) versus ToDi's token-wise weighting strategy (24.83).]

The efficiency of ToDi is evident not only in its performance but also in its computational complexity. For instance, AKL—which dynamically adjusts the weights
of FKL and RKL globally across the entire vocabulary—incurs a time complexity of O(V log V) due to the required sorting operations. In contrast, ToDi performs its computations adaptively on a per-token basis without any sorting during loss computation. As a result, it preserves linear time complexity O(V) with respect to vocabulary size, identical to both FKL and RKL.

Effect of the Generalization Parameter β. To analyze the impact of the scaling parameter β, we compare the three settings β ∈ {1, 0, −1} in Generalized ToDi. β = 1 corresponds to the default ToDi configuration; β = 0 fixes α = 0.5, resulting in an equal combination of FKL and RKL (i.e., Jeffreys divergence); and β = −1 reverses the weighting direction, amplifying FKL when q_θ > p and RKL when p > q_θ. Experimental results with GPT2-120M are shown in Figure 6 (Left). The dynamic weighting scheme (β = 1) outperforms both the static setting (β = 0) and the reversed setting (β = −1), with the reversed setting performing even worse than the static scheme, indicating that ToDi's adaptive weight adjustment contributes to the performance improvements. A more detailed sensitivity study on β is provided in Appendix G.

Token-wise vs. Uniform Divergence Control. Rather than applying a fixed FKL–RKL ratio uniformly across all tokens, ToDi dynamically adjusts this balance on a per-token basis. To validate this effect, we conduct comparative experiments on a TinyLLaMA model using fixed FKL–RKL mixture schemes. As shown in Figure 6 (Right), ToDi consistently achieves higher ROUGE-L scores than all fixed-ratio schemes. This demonstrates that flexible, token-level ratio adjustment, rather than a uniform application across the vocabulary, is the key to the performance improvements.

    Method   GPT2    TinyLLaMA
    AKL      0.477   0.599
    ToDi     0.482   0.610

Table 4: Pearson similarities for AKL and ToDi using the trained GPT-2 and TinyLLaMA models of Section 5, with distributions computed from the databricks/dolly-15k training set.

Coarse vs.
Fine-Grained Weighting. To demonstrate that a student model trained with ToDi learns the teacher distribution more accurately than one trained with AKL, we compare the distributions generated by each student model to the teacher distribution, following Huang et al. (2022). Table 4 summarizes our analysis, reporting the Pearson similarity between the teacher and student model distributions. ToDi achieves higher Pearson similarity than AKL, which—despite adaptively combining forward and reverse KL at each time step—applies a uniform mixing ratio across the entire vocabulary. This indicates that ToDi's dynamic, per-token mixing more accurately captures the teacher distribution.

6 Conclusion

We present ToDi, a novel token-wise distillation method that dynamically balances FKL and RKL based on per-token prediction discrepancies. Our gradient analysis shows that FKL corrects underestimation while RKL suppresses overestimation, and ToDi leverages this by using a sigmoid-based weight per token. Experiments on multiple instruction-following benchmarks demonstrate that ToDi consistently outperforms existing baselines, and GPT-4 pairwise preference evaluations confirm its superiority. Finally, we introduce a unified weighting framework and validate its effectiveness via extensive ablations.

Limitations

ToDi precisely captures token-level prediction discrepancies between the teacher and student models, thereby enabling effective distribution alignment. However, ToDi assumes
that the teacher and student share an identical vocabulary, which limits its direct applicability when the two models employ different vocabularies. Moreover, ToDi requires access to the full token probability distribution of the teacher model, restricting its use to open-source LLMs that expose per-token logits.

Experiments on extremely large-scale models were not conducted due to computational resource constraints. Nevertheless, ToDi consistently outperforms existing methods across a diverse range of models, including GPT2-120M and TinyLLaMA-1.1B, demonstrating its practicality and efficiency.

Acknowledgments

This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) [RS-2021-II211341, Artificial Intelligence Graduate School Program (Chung-Ang University)].

References

Rishabh Agarwal, Nino Vieillard, Yongchao Zhou, Piotr Stanczyk, Sabela Ramos Garea, Matthieu Geist, and Olivier Bachem. 2024. On-policy distillation of language models: Learning from self-generated mistakes. In The Twelfth International Conference on Learning Representations.

Ibtihel Amara, Nazanin Sepahvand, Brett H Meyer, Warren J Gross, and James J Clark. 2022. BD-KD: Balancing the divergences for online knowledge distillation. arXiv preprint arXiv:2212.12965.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, and 12 others. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.

Alan Chan, Hugo Silva, Sungsu Lim, Tadashi Kozuno, A Rupam Mahmood, and Martha White. 2022.
Greedification operators for policy optimization: Investigating forward and reverse KL divergences. Journal of Machine Learning Research, 23(253):1–79.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, and 1 others. 2024. Scaling instruction-finetuned language models. Journal of Machine Learning Research, 25(70):1–53.

Tao Feng, Yicheng Li, Li Chenglin, Hao Chen, Fei Yu, and Yin Zhang. 2024. Teaching small language models reasoning through counterfactual distillation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 5831–5842, Miami, Florida, USA. Association for Computational Linguistics.

Yuxian Gu, Li Dong, Furu Wei, and Minlie Huang. 2024. MiniLLM: Knowledge distillation of large language models. In The Twelfth International Conference on Learning Representations.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.

Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2023. Unnatural instructions: Tuning language models with (almost) no human labor. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14409–14428, Toronto, Canada. Association for Computational Linguistics.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, and 1 others. 2022. LoRA: Low-rank adaptation of large language models. ICLR, 1(2):3.

Tao Huang, Shan You, Fei Wang, Chen Qian, and Chang Xu. 2022. Knowledge distillation from a stronger teacher. In Advances in Neural Information
Processing Systems, volume 35, pages 33716–33727. Curran Associates, Inc.

Harold Jeffreys. 1946. An invariant form for the prior probability in estimation problems. Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences, 186(1007):453–461.

Jongwoo Ko, Sungnyun Kim, Tianyi Chen, and Seyoung Yun. 2024. DistiLLM: Towards streamlined distillation for large language models. In The Forty-first International Conference on Machine Learning. ICML.

Daphne Koller and Nir Friedman. 2009. Probabilistic Graphical Models: Principles and Techniques. MIT Press.

Hyoje Lee, Yeachan Park, Hyun Seo, and Myungjoo Kang. 2023. Self-knowledge distillation via dropout. Computer Vision and Image Understanding, 233:103720.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.

Tom Minka and 1 others. 2005. Divergence measures and message passing.

Team OLMo, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, Dustin Schwenk, Oyvind Tafjord, Taira Anderson, David Atkinson, Faeze Brahman, Christopher Clark, Pradeep Dasigi, Nouha Dziri, and 21 others. 2025. 2 OLMo 2 Furious. Preprint, arXiv:2501.00656.

Qwen Team: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, and 25 others. 2025. Qwen2.5 technical report. Preprint, arXiv:2412.15115.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, and 1 others. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.

Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, and 1 others.
2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207.

Makoto Shing, Kou Misaki, Han Bao, Sho Yokoi, and Takuya Akiba. 2025. TAID: Temporally adaptive interpolated distillation for efficient knowledge transfer in language models. arXiv preprint arXiv:2501.16937.

Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean-Bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Etienne Pot, Ivo Penchev, and 197 others. 2025. Gemma 3 technical report. Preprint, arXiv:2503.19786.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, and 1 others. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Chaoqi Wang, Yibo Jiang, Chenghao Yang, Han Liu, and Yuxin Chen. 2023a. Beyond reverse KL: Generalizing direct preference optimization with diverse divergence constraints. arXiv preprint arXiv:2309.16240.

Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023b. Self-Instruct: Aligning language models with self-generated instructions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13484–13508, Toronto, Canada. Association for Computational Linguistics.

Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun
Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, and 16 others. 2022. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5085–5109, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022. Finetuned language models are zero-shot learners. In International Conference on Learning Representations.

Yuqiao Wen, Zichao Li, Wenyu Du, and Lili Mou. 2023. f-divergence minimization for sequence-level knowledge distillation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10817–10834, Toronto, Canada. Association for Computational Linguistics.

Taiqiang Wu, Chaofan Tao, Jiahao Wang, Runming Yang, Zhe Zhao, and Ngai Wong. 2025. Rethinking Kullback-Leibler divergence in knowledge distillation for large language models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 5737–5755, Abu Dhabi, UAE. Association for Computational Linguistics.

Peiyuan Zhang, Guangtao Zeng, Tianduo Wang, and Wei Lu. 2024a. TinyLlama: An open-source small language model. arXiv preprint arXiv:2401.02385.

Songming Zhang, Xue Zhang, Zengkui Sun, Yufeng Chen, and Jinan Xu. 2024b. Dual-space knowledge distillation for large language models. arXiv preprint arXiv:2406.17328.

Songming Zhang, Xue Zhang, Zengkui Sun, Yufeng Chen, and Jinan Xu. 2024c. Dual-space knowledge distillation for large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 18164–18181, Miami, Florida, USA.
Association for Computational Linguistics.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, and 1 others. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595–46623.

A Gradient Derivations

A.1 Derivation of the FKL Gradient

We consider the forward KL divergence term at time step t and vocabulary token v_i, defined as:

D_FKL^(t,i)(p, q_θ) = p_i log(p_i / q_i),   (12)

where:

p_i := p(v_i | y_{<t}, x),   q_i := q_θ(v_i | y_{<t}, x).   (13)

To compute the gradient with respect to q_i, we differentiate:

∂/∂q_i D_FKL^(t,i)(p, q_θ) = ∂/∂q_i [ p_i log(p_i / q_i) ].   (14)

Since p_i is independent of q_i, we treat it as a constant:

= p_i · ∂/∂q_i (log p_i − log q_i) = −p_i · (1/q_i).   (15)

Thus, the gradient becomes:

∂/∂q_θ(v_i | y_{<t}, x) D_FKL^(t,i)(p, q_θ) = − p(v_i | y_{<t}, x) / q_θ(v_i | y_{<t}, x).   (16)

A.2 Derivation of the RKL Gradient

We now derive the gradient for the reverse KL divergence, defined as:

D_RKL^(t,i)(p, q_θ) = q_i log(q_i / p_i),   (17)

where the same definitions apply:

p_i := p(v_i | y_{<t}, x),   q_i := q_θ(v_i | y_{<t}, x).   (18)

Applying the product rule:

∂/∂q_i D_RKL^(t,i)(p, q_θ) = ∂/∂q_i [ q_i log(q_i / p_i) ] = ∂/∂q_i ( q_i log q_i − q_i log p_i ).   (19)

Since log p_i is constant w.r.t. q_i, the derivative simplifies to:

∂/∂q_i D_RKL^(t,i)(p, q_θ) = (log q_i + 1) − log p_i = log(q_i / p_i) + 1.   (20)

Hence, the final gradient expression is:

∂/∂q_θ(v_i | y_{<t}, x) D_RKL^(t,i)(p, q_θ) = log [ q_θ(v_i | y_{<t}, x) / p(v_i | y_{<t}, x) ] + 1.   (21)

B Proof of Sigmoid Weight-Function Properties

For the ToDi weight function

α_{t,i} = σ( log [ p(v_i | y_{<t}, x) / q_θ(v_i | y_{<t}, x) ] ),   (22)
we prove the following:

• If $p(v_i \mid y_{<t}, x) > q_\theta(v_i \mid y_{<t}, x)$, then $\log \frac{p(v_i \mid y_{<t}, x)}{q_\theta(v_i \mid y_{<t}, x)} > 0 \Rightarrow \alpha_{t,i} > 0.5$, which increases the contribution of FKL.

• If $q_\theta(v_i \mid y_{<t}, x) > p(v_i \mid y_{<t}, x)$, then $\log \frac{p(v_i \mid y_{<t}, x)}{q_\theta(v_i \mid y_{<t}, x)} < 0 \Rightarrow \alpha_{t,i} < 0.5$, which increases the contribution of RKL.

• Let $r = p(v_i \mid y_{<t}, x) / q_\theta(v_i \mid y_{<t}, x)$, so that $\alpha_{t,i} = \sigma(\log r)$. Then

$$\frac{d\alpha_{t,i}}{dr} = \frac{\sigma(\log r)\left(1 - \sigma(\log r)\right)}{r} > 0,$$

implying that $\alpha_{t,i}$ is monotonically increasing in $r$.

• Since $\sigma(z) \in (0, 1)$ for all $z$, it follows that $\alpha_{t,i} \in (0, 1)$.

C Jeffreys-Inspired Weighting with Stop-Gradient

The token-wise weight $\alpha_{t,i}$ in ToDi is inspired by Jeffreys divergence. In this section, we outline this connection and, in particular, show analytically how applying a stop-gradient (detach) to $\alpha_{t,i}$ yields gradients that differ from those of standard Jeffreys divergence.

At time step $t$ for token $v_i \in \mathcal{V}$, the Jeffreys divergence can be written using Equation 1 and Equation 2 as:

$$D^{(t,i)}_{\mathrm{Jeffreys}}(p, q_\theta) = D^{(t,i)}_{\mathrm{FKL}}(p, q_\theta) + D^{(t,i)}_{\mathrm{RKL}}(p, q_\theta) \quad (23)$$

The ToDi weighting function $\alpha_{t,i}$ can then be derived from Jeffreys divergence as:

$$
\begin{aligned}
p_i \log \frac{p_i}{q_i} + q_i \log \frac{q_i}{p_i}
&= p_i \log \frac{p_i}{q_i} - q_i \log \frac{p_i}{q_i} \\
&= (p_i - q_i) \log \frac{p_i}{q_i} \\
&= \frac{p_i^2 - q_i^2}{p_i + q_i} \log \frac{p_i}{q_i} \\
&= \frac{p_i^2}{p_i + q_i} \log \frac{p_i}{q_i} - \frac{q_i^2}{p_i + q_i} \log \frac{p_i}{q_i} \\
&= \frac{p_i^2}{p_i + q_i} \log \frac{p_i}{q_i} + \frac{q_i^2}{p_i + q_i} \log \frac{q_i}{p_i} \\
&= \frac{p_i}{p_i + q_i} \left( p_i \log \frac{p_i}{q_i} \right) + \frac{q_i}{p_i + q_i} \left( q_i \log \frac{q_i}{p_i} \right) \\
&= \sigma\!\left( \log \frac{p_i}{q_i} \right) p_i \log \frac{p_i}{q_i} + \left( 1 - \sigma\!\left( \log \frac{p_i}{q_i} \right) \right) q_i \log \frac{q_i}{p_i}
\end{aligned} \quad (24)
$$

where, for brevity, we denote $p_i := p(v_i \mid y_{<t}, x)$ and $q_i := q_\theta(v_i \mid y_{<t}, x)$; the last step uses the identity $\sigma(\log \frac{p_i}{q_i}) = \frac{p_i}{p_i + q_i}$. In ToDi, $\sigma(\log \frac{p_i}{q_i})$ is detached so that no gradient flows through it. As a result, $\alpha_{t,i}$ acts purely as a constant weight, leading to an optimization behavior that diverges from Jeffreys divergence.
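The closed-form results in Appendices A–C can be sanity-checked numerically. The sketch below (plain Python with illustrative probabilities, not values from the paper) verifies the FKL and RKL gradients of Equations (16) and (21) by finite differences, the sign and range properties of $\alpha_{t,i}$, the identity (24), and the claim that holding $\alpha_{t,i}$ fixed during differentiation (the stop-gradient) yields a gradient different from that of full Jeffreys divergence:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def numeric_grad(f, q, eps=1e-6):
    # Central finite difference with respect to q
    return (f(q + eps) - f(q - eps)) / (2 * eps)

p, q = 0.3, 0.2  # illustrative teacher/student token probabilities

# FKL term (Eq. 12) and its analytic gradient (Eq. 16): -p/q
fkl = lambda qv: p * math.log(p / qv)
assert abs(numeric_grad(fkl, q) - (-p / q)) < 1e-4

# RKL term (Eq. 17) and its analytic gradient (Eq. 21): log(q/p) + 1
rkl = lambda qv: qv * math.log(qv / p)
assert abs(numeric_grad(rkl, q) - (math.log(q / p) + 1)) < 1e-4

# Weight properties (Eq. 22): alpha > 0.5 iff p > q, and alpha in (0, 1)
alpha = lambda pv, qv: sigmoid(math.log(pv / qv))
assert alpha(0.4, 0.1) > 0.5 and alpha(0.1, 0.4) < 0.5
assert 0.0 < alpha(0.9, 0.01) < 1.0

# Identity (24): Jeffreys equals the sigmoid-weighted decomposition,
# because sigma(log(p/q)) = p / (p + q)
jeffreys = lambda qv: p * math.log(p / qv) + qv * math.log(qv / p)
a = alpha(p, q)
weighted = a * p * math.log(p / q) + (1 - a) * q * math.log(q / p)
assert abs(jeffreys(q) - weighted) < 1e-9

# Stop-gradient: with alpha detached (held constant), the gradient is
# alpha*(-p/q) + (1-alpha)*(log(q/p) + 1), which differs from the full
# Jeffreys gradient -p/q + log(q/p) + 1
todi = lambda qv: a * p * math.log(p / qv) + (1 - a) * qv * math.log(qv / p)
todi_grad = a * (-p / q) + (1 - a) * (math.log(q / p) + 1)
assert abs(numeric_grad(todi, q) - todi_grad) < 1e-4
assert abs(todi_grad - numeric_grad(jeffreys, q)) > 0.1
```

All assertions pass, confirming each derivation; the last two show that detaching the weight genuinely changes the optimization signal.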
To clarify this difference, we compare derivatives with respect to $q_\theta(v_i \mid y_{<t}, x)$:

• Jeffreys divergence derivative:

$$\frac{\partial}{\partial q_\theta}\left( p \log \frac{p}{q_\theta} + q_\theta \log \frac{q_\theta}{p} \right) = -\frac{p}{q_\theta} + \log \frac{q_\theta}{p} + 1 \quad (25)$$

• ToDi derivative ($\alpha_{t,i}$ is detached, so treated as constant):

$$\frac{\partial}{\partial q_\theta}\left( \alpha_{t,i} \cdot p \log \frac{p}{q_\theta} + (1 - \alpha_{t,i}) \cdot q_\theta \log \frac{q_\theta}{p} \right) = \alpha_{t,i}\left( -\frac{p}{q_\theta} \right) + (1 - \alpha_{t,i})\left( \log \frac{q_\theta}{p} + 1 \right) \quad (26)$$

Using the detached weight $\alpha_{t,i}$, ToDi increases the weight on $D^{(t,i)}_{\mathrm{FKL}}(p, q_\theta)$ when $p > q_\theta$, elevating the student probability, and increases the weight on $D^{(t,i)}_{\mathrm{RKL}}(p, q_\theta)$ when $q_\theta > p$, suppressing the student probability. Unlike Jeffreys divergence, which applies divergence uniformly across the vocabulary, ToDi adaptively refines divergence intensity at the token level.

Settings            GPT2   TinyLLaMA   LLaMA2
Epoch               20     10          10
Learning Rate       5e-4   1e-3        1e-3
Batch Size          32     32          32
Fine-Tuning Method  Full   LoRA        LoRA
LoRA Rank           -      256         256
LoRA Alpha          -      8           8
LoRA Dropout        -      0.1         0.1

Table 5: Hyperparameter settings for KD.

D Experimental Details

D.1 Training details

Training was conducted based on the setup of Zhang et al. (2024c). For GPT2-1.5B, we employed the publicly released model from Gu et al. (2024), while GPT2-120M was trained for 20 epochs with a learning rate of 5e-4. The TinyLLaMA and LLaMA2 models were trained for 10 epochs with a learning rate of 1e-3. All experiments were carried out on a single RTX A6000 GPU. The training loss combined the KD loss and the cross-entropy loss in equal proportions (0.5:0.5). Detailed hyperparameter settings for each model are summarized in Table 5.

D.2 Evaluation details

All test sets were processed following Gu et al. (2024). The number of samples in each test set is as follows: DollyEval contains 500 examples; S-NI includes 1,694 examples with response lengths exceeding 11 tokens; UnNI comprises 10,000 examples with
response lengths exceeding 11 tokens; SelfInst has 242 examples; and VicunaEval consists of 80 examples. For response generation, we used random seeds {10, 20, 30, 40, 50} and report the average ROUGE-L score across these seeds.

E Experiments on Additional Models

We further evaluated ToDi's performance across diverse teacher–student configurations. As shown in Table 6, ToDi consistently outperforms existing baselines under these configurations. This demonstrates that ToDi can transfer knowledge robustly and effectively across different teacher–student setups.

Methods      OLMo2        Qwen2.5      Gemma3
Teacher      30.24 ±0.48  27.42 ±0.63  30.60 ±0.42
SFT          24.53 ±0.41  24.89 ±0.25  24.12 ±0.37
FKL          26.88 ±0.57  26.71 ±0.56  26.88 ±0.35
RKL          25.98 ±0.46  27.14 ±0.32  28.69 ±0.14
JS           25.39 ±0.59  26.82 ±0.12  25.10 ±0.40
TVD          25.60 ±0.34  26.78 ±0.52  26.06 ±0.21
SKL          25.86 ±0.31  27.04 ±0.17  26.16 ±0.35
SRKL         26.03 ±0.12  26.74 ±0.54  25.90 ±0.59
AKL          25.97 ±0.13  26.66 ±0.22  28.53 ±0.37
ToDi (Ours)  26.94 ±0.41  27.20 ±0.34  29.03 ±0.43

Table 6: ROUGE-L scores on the DollyEval benchmark across diverse distillation settings with varying teacher-student model pairs, including OLMo2-7B → OLMo2-1B (OLMo et al., 2025), Qwen2.5-1.5B → Qwen2.5-0.5B (Qwen et al., 2025), and Gemma3-4B → Gemma3-1B (Team et al., 2025). The best result is shown in bold.

F Details of GPT-4 Evaluation

Pairwise comparison of model responses was performed using the gpt-4o-2024-11-20 API, with response order randomized in the prompt to mitigate position bias. We followed the LLM-as-a-Judge evaluation protocol of Zheng et al. (2023), employing the pairwise comparison prompt shown in Figure 7.

[System]
Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user question displayed below. You should choose the assistant that follows the user's instructions and answers the user's question better. Your evaluation should consider factors such as the helpfulness, relevance, accuracy, depth, creativity, and level of detail of their responses. Begin your evaluation by comparing the two responses and provide a short explanation. Avoid any positional biases and ensure that the order in which the responses were presented does not influence your decision. Do not allow the length of the responses to influence your evaluation. Do not favor certain names of the assistants. Be as objective as possible. After providing your explanation, output your final verdict by strictly following this format: "[[A]]" if assistant A is better, "[[B]]" if assistant B is better, and "[[C]]" for a tie.

[Question]
{}

[The Start of Assistant A's Answer]
{}
[The End of Assistant A's Answer]

[The Start of Assistant B's Answer]
{}
[The End of Assistant B's Answer]

Figure 7: Prompt for GPT-4 Evaluation.

G Sensitivity Study for β

Table 7 reports ROUGE-L scores as a function of the scaling parameter β ∈ {0.6, 0.8, 1.0, 1.2, ∞}. The experiments show that β = 1.0 achieves the highest average score of 18.66.

β     Dolly  S-NI   UnNI   Self   Vicuna  Average
0.6   24.44  18.17  22.44  10.88  16.09   18.40
0.8   24.50  19.15  22.04  10.76  15.74   18.44
1.0   24.81  19.42  22.16  11.30  15.61   18.66
1.2   24.29  18.85  21.86  11.15  15.69   18.37
∞     24.30  18.96  21.89  10.93  15.11   18.24

Table 7: Comparison of ROUGE-L scores of GPT-2 student models under different values of the scaling parameter β.

Two key trends are observed:

• Low-sensitivity regime (β < 1): As β decreases, the sigmoid's slope becomes shallower, causing the weight $\alpha_{t,i}$ to converge toward 0.5. This nearly fixed combination of FKL and RKL reduces responsiveness to token-level prediction discrepancies, degrading training effectiveness. Indeed, at β = 0.6, the average performance drops to 18.40.

• High-sensitivity regime (β → ∞): As β grows large, the sigmoid approaches a step function and the weight $\alpha_{t,i}$ becomes discrete:

$$\alpha_{t,i} \xrightarrow{\beta \to \infty} \mathbb{1}\!\left[ p(v_i \mid y_{<t}, x) > q_\theta(v_i \mid y_{<t}, x) \right].$$

This fully separates the application of FKL and RKL, introducing discontinuities in the learning signal near the boundary $p \approx q_\theta$. Such abrupt transitions undermine training stability, and the average performance declines to 18.24.
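Both regimes can be illustrated directly. The sketch below assumes the scaled weight takes the form $\alpha_{t,i} = \sigma(\beta \cdot \log(p/q))$, an assumption consistent with the slope behavior described above (the exact scaled formula is defined earlier in the paper); all probability values are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def alpha_scaled(p, q, beta):
    # Assumed form: alpha = sigma(beta * log(p / q))
    return sigmoid(beta * math.log(p / q))

p, q = 0.25, 0.2  # mild teacher/student disagreement (p slightly above q)

# Low-sensitivity regime (beta < 1): the sigmoid flattens and alpha -> 0.5,
# a nearly fixed FKL/RKL mix regardless of the discrepancy.
assert abs(alpha_scaled(p, q, 0.1) - 0.5) < 0.01

# beta = 1 recovers the unscaled weight of Eq. (22).
assert abs(alpha_scaled(p, q, 1.0) - sigmoid(math.log(p / q))) < 1e-12

# High-sensitivity regime (beta -> infinity): alpha approaches a step
# function, near 1 whenever p > q and near 0 whenever p < q.
assert alpha_scaled(p, q, 1000.0) > 0.999
assert alpha_scaled(q, p, 1000.0) < 0.001
```

Even a small gap between p and q saturates the weight once β is large, which is exactly the discontinuity near $p \approx q_\theta$ that destabilizes training.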
InferenceDynamics: Efficient Routing Across LLMs through Structured Capability and Knowledge Profiling

Haochen Shi1*, Tianshi Zheng1*, Weiqi Wang1*, Baixuan Xu1, Chunyang Li1, Chunkit Chan1, Tao Fan1,2, Yangqiu Song1, Qiang Yang3
1The Hong Kong University of Science and Technology  2WeBank  3The Hong Kong Polytechnic University
{hshiah, tzhengad, wwangbw}@cse.ust.hk

Abstract

Large Language Model (LLM) routing is a pivotal technique for navigating a diverse landscape of LLMs, aiming to select the best-performing LLMs tailored to the domains of user queries, while managing computational resources. However, current routing approaches often face limitations in scalability when dealing with a large pool of specialized LLMs, or in their adaptability to extending model scope and evolving capability domains. To overcome those challenges, we propose InferenceDynamics, a flexible and scalable multi-dimensional routing framework by modeling the capability and knowledge of models. We operate it on our comprehensive dataset RouteMix, and demonstrate its effectiveness and generalizability in group-level routing using modern benchmarks including MMLU-Pro, GPQA, BigGenBench, and LiveBench, showcasing its ability to identify and leverage top-performing models for given tasks, leading to superior outcomes with efficient resource utilization. The broader adoption of InferenceDynamics can empower users to harness the full specialized potential of the LLM ecosystem, and our code will be made publicly available to encourage further research.

1 Introduction

The rapid proliferation of Large Language Models (LLMs) has unveiled a rich landscape of specialized capabilities, with different models demonstrating unique strengths across a multitude of domains and tasks (Matarazzo and Torlone, 2025; Li et al., 2024a).
This specialization necessitates a sophisticated approach to model selection, where the primary goal is to identify and utilize the LLM best suited to the specific demands of a user's query. LLM routing (Chen et al., 2025) emerges as a critical paradigm to address this, creating mechanisms to strategically dispatch queries to the most capable model from a diverse pool, thereby maximizing performance, relevance, and the quality of outcomes, while also considering factors like inference cost and latency.

*Equal Contribution

[Figure 1: Quantification of Knowledge and Capability of the top 4 models among candidate LLMs (Grok-2, Gemini-1.5-Pro, Qwen2.5-Max, GPT-4o): LLM ability ranks across dimensions including Multilingual, Knowledge Retrieval, Reasoning, Agentic, Instruction Following, Coding, Comprehension, Mathematics, Logic, Linguistics, Psychology, Law, Language, Medicine, and Data Analysis.]

Early explorations in LLM routing often simplified the selection problem, for instance, by framing it as a binary classification task, e.g., choosing between a generalist small model and a powerful large model. Methods like AutoMix (Aggarwal et al., 2024), HybridLLM (Ding et al., 2024), and RouteLLM (Ong et al., 2025) demonstrated the viability of this approach, typically focusing on cost-performance trade-offs. While valuable for two-model scenarios, such binary frameworks face inherent scalability challenges, as selecting the best-performing model from many candidates using only pairwise comparisons becomes computationally costly and inefficient.

arXiv:2505.16303v1 [cs.CL] 22 May 2025

More recent works have advanced the field by leveraging richer model representations to better evaluate and route LLMs based on their specific capabilities. While methods including
RouterDC (Chen et al., 2024), C2MAB-V (Dai et al., 2024), and P2L (Frick et al., 2025) offer more sophisticated mechanisms for capturing model strengths, their primary limitation lies in the significant retraining or recalibration required to effectively support newly introduced LLMs, hindering their agility in a rapidly evolving model landscape. Model-SAT (Zhang et al., 2025) aimed to resolve this weakness through human-defined, model-independent capability decompositions. However, its reliance on predefined capability sets undermined adaptability to new capability dimensions, thereby limiting nuanced performance capture in specialized domains.

To address this gap, we introduce InferenceDynamics, a novel system designed for performant, scalable, and adaptable LLM routing. InferenceDynamics operates by extracting capability requirements and domain-specific knowledge from incoming queries, modeling the corresponding capabilities and knowledge profiles of available LLMs, and then intelligently routing queries to the most suitable models. To demonstrate the effectiveness and generalizability of our approach, we constructed a comprehensive dataset aggregated from 24 diverse benchmarks. We then evaluated our routing algorithm on four challenging out-of-distribution (OOD) benchmarks: MMLU-Pro (Wang et al., 2024b), GPQA (Rein et al., 2023), BigGenBench (Kim et al., 2024), and LiveBench (White et al., 2025). Experimental results show that our routing algorithm achieved the highest average score, surpassing the top-performing single LLM by a substantial margin of 1.28 points under optimal routing conditions. Furthermore, when operating under cost constraints, our algorithm delivered competitive performance comparable to the best single LLM, while utilizing nearly half the budget.
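The paper's actual scoring rule is defined later in the text; purely as an illustrative sketch of the profile-based routing idea described above, one can score each candidate model by matching a query's extracted per-dimension requirement weights against per-model capability profiles and dispatching to the highest scorer. All model names, dimension names, and numbers below are made up for illustration:

```python
# Hypothetical sketch of capability/knowledge-profile routing (NOT the
# paper's actual algorithm). Each model carries a profile of per-dimension
# scores; a query carries requirement weights over the same dimensions.
DIMENSIONS = ["reasoning", "coding", "mathematics", "medicine"]

# Illustrative per-model capability profiles (scores in [0, 1]; made up).
MODEL_PROFILES = {
    "model-a": {"reasoning": 0.9, "coding": 0.6, "mathematics": 0.8, "medicine": 0.4},
    "model-b": {"reasoning": 0.5, "coding": 0.9, "mathematics": 0.6, "medicine": 0.3},
    "model-c": {"reasoning": 0.6, "coding": 0.4, "mathematics": 0.5, "medicine": 0.9},
}

def route(query_requirements, profiles=MODEL_PROFILES):
    """Return the model whose profile best matches the query's
    per-dimension requirement weights (simple weighted sum)."""
    def score(profile):
        return sum(query_requirements.get(d, 0.0) * profile.get(d, 0.0)
                   for d in DIMENSIONS)
    return max(profiles, key=lambda name: score(profiles[name]))

# A math-heavy query lands on the strongest math/reasoning profile.
print(route({"reasoning": 0.5, "mathematics": 0.5}))  # "model-a"
```

Because new models only require adding a profile entry rather than retraining a router, this style of scoring hints at why profile-based approaches scale more gracefully than pairwise-comparison routers; a cost-constrained variant could filter candidates by a per-model price before taking the argmax.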
The contributions of our work are summarized as follows:

• We introduce RouteMix, a comprehensive dataset aggregated from 24 diverse benchmarks, specifically curated for rigorously evaluating the generalization capabilities of LLM routing algorithms.

• We propose InferenceDynamics, an efficient routing algorithm demonstrating generalization capabilities on previously unseen queries.

• Experimental results validate that InferenceDynamics significantly enhances LLM routing, substantially outperforming the leading single model while concurrently reducing computational overhead.

2 Related Works

2.1 Multi-LLM System

A Multi-LLM system (Chen et al., 2025) refers to the architecture that combines LLMs to collaboratively solve tasks more effectively than any single model. The rapid proliferation of diverse LLMs has spurred significant interest in such systems, which are realized through several architectural patterns. LLM ensembling (Jiang et al., 2023; Li et al., 2024b) enhances accuracy or robustness by processing the same input through several models and then aggregating their responses. Cascaded systems (Zhang et al., 2024; Kolawole et al., 2024; Chen et al., 2023) strategically employ a sequence of models, often initiating with smaller, faster LLMs for initial processing or simpler queries and escalating to more powerful, resource-intensive ones only when necessary, thereby optimizing resource use. Furthermore, the development of collaborative LLM agents (Wang et al., 2024a; Xu et al., 2024; Liang et al., 2024) involves multiple LLMs, with distinct roles or access to different tools, interacting to address complex, multi-step problems that demand sophisticated coordination. While these multi-LLM approaches demonstrate considerable advancements, they often necessitate querying multiple models, which can increase computational cost and