Source: https://arxiv.org/abs/2505.17894v1
Tarjama-25 addresses key limitations of existing datasets, such as short sentence length, English source bias, and limited domain diversity. It comprises 5,000 pairs of expert-curated sentences from diverse domains, with an equal number of examples sourced from Arabic and English originals. All pairs are used to evaluate both Arabic-to-English and English-to-Arabic translation, providing a comprehensive and realistic benchmark for bidirectional translation performance.

Our contributions can be summarized as follows:

• We introduce Mutarjim, a compact but powerful decoder-only model specifically optimized for Arabic-English translation.
• We present Tarjama-25, a new benchmark for Arabic-English translation, featuring:
  – Longer and more natural sentence structures;
  – Balanced translation directionality, with an equal number of source texts originally written in Arabic and English;
  – Broad domain coverage, spanning general, medical, legal, technological, and other fields;
  – Careful curation to eliminate contamination from large-scale pre-training corpora, ensuring fair and unbiased evaluation;
  – Expert-reviewed and human-corrected translations to ensure high linguistic quality and fidelity.
• We perform extensive evaluations using multiple standard benchmarks, including WMT24++ Deutsch et al. [2025], IWSLT2017 Cettolo et al. [2017], and our newly introduced Tarjama-25, and compare Mutarjim against a range of open-source and proprietary models, using the automatic metrics BLEU, chrF++, and COMET Rei et al. [2020].
• We publicly release both the Tarjama-25∗ benchmark and its accompanying evaluation toolkit† as open-source resources to promote transparency, reproducibility, and further progress in Arabic machine translation research.

The rest of the paper is structured as follows: Section 2 reviews the related work and theoretical foundations relevant to our study. Section 3 describes the dataset creation process in two phases. Section 4 outlines our benchmarking setup.
Section 5 introduces the Mutarjim model and its training methodology. Section 6 details our experiments and results. Section 7 explains our evaluation strategy. Finally, Section 8 concludes the paper and suggests directions for future research.

2 Background

Machine translation has undergone significant evolution, progressing from rule-based approaches that required extensive development and maintenance toward statistical and neural paradigms. The development of encoder-decoder and, more recently, decoder-only architectures has played a pivotal role in advancing the field. In this section, we review the major contributions that form the foundation of modern neural MT, focusing on models relevant to Arabic and multilingual translation.

2.1 Encoder-Decoder Models

Encoder-decoder Transformer models form the basis of modern neural machine translation (NMT) systems, evolving from general-purpose architectures to specialized, multilingual frameworks. These models vary in scale, language coverage, and specialization, balancing broad applicability with performance in specific languages such as Arabic. This section reviews key models, highlighting their contributions and limitations, particularly in the context of multilingual and Arabic-focused translation.

The Text-to-Text Transfer Transformer (T5) Raffel et al. [2023] introduced a unified text-to-text framework, trained on the English-only C4 dataset with model sizes ranging from 60M to 11B parameters. Although not designed for translation, the flexible architecture of T5 laid the groundwork for subsequent multilingual models. Its primary limitation, the lack of multilingual support, restricts its direct applicability to NMT tasks. Based on T5, mT5 Xue et al. [2021] extends the text-to-text framework to 101 languages using the multilingual mC4 dataset. However, Arabic constitutes only 1.66% of its pre-training data, constraining its effectiveness
for Arabic translation. Despite strong zero-shot transfer capabilities, mT5 struggles with language-specific nuances, prompting further specialization in later models. Similarly, Aya-101 Üstün et al. [2024] adapts the mT5-XXL model (13B parameters) using instruction tuning in 100+ languages. This broad multilingual tuning improves the model's ability to translate languages with limited training data, known as low-resource languages. However, its large size results in high training and inference costs, making it less practical for deployment compared to smaller, more specialized models. In particular, mBART Liu et al. [2020] is a 680M-parameter model trained using multilingual denoising pre-training and includes 2.87 billion Arabic tokens in its corpus. To address broader language coverage, NLLB-200 Costa-Jussà et al. [2022a] is a 3.3B-parameter dense model trained on over 18 billion sentence pairs spanning 200 languages. It achieves strong translation performance, particularly in low-resource settings, and is known for its relative robustness against hallucinations. However, its effectiveness diminishes in domain-specific texts such as Islamic or medical content, where it struggles to maintain accuracy and relevance. In response to the need for Arabic-specific solutions, TURJUMAN Nagoudi et al. [2022], built directly on AraT5 Nagoudi et al. [2021], provides a toolkit for translating 20 source languages into MSA.

∗https://huggingface.co/datasets/Misraj/Tarjama-25
†https://github.com/misraj-ai/Mutarjim-evaluation

2.2 Decoder-Only Models

The recent shift toward decoder-only language models has reshaped the machine translation landscape, particularly through their use in autoregressive generation. Unlike traditional encoder-decoder architectures, decoder-only models handle both the source and target text within a single sequence and rely on large-scale pre-training.
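The single-sequence formulation can be made concrete with a small sketch: source and target are packed into one string that a decoder-only model continues autoregressively, and few-shot prompting simply prepends example pairs. The template below is purely illustrative, not any particular model's required format:

```python
def build_fewshot_prompt(examples, source_text, src_lang="English", tgt_lang="Arabic"):
    """Pack few-shot translation examples and the new source into one sequence.

    The model is asked to continue the sequence, generating the target
    autoregressively after the final language tag.
    """
    lines = []
    for src, tgt in examples:
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {tgt}")
    lines.append(f"{src_lang}: {source_text}")
    lines.append(f"{tgt_lang}:")  # generation starts here
    return "\n".join(lines)

prompt = build_fewshot_prompt([("Good morning.", "صباح الخير.")], "Thank you.")
```

Feeding even one or two such demonstration pairs is often enough for a strong model to infer the task, which is the "minimal supervision" property discussed next.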
A major advantage of decoder-only models lies in their unified architecture for both understanding and generation tasks, enabling efficient transfer and scalability. Prompt-based translation using large models such as GPT-4 Achiam et al. [2023] has shown promising results, especially when effective prompting strategies are applied He et al. [2024]; Lu et al. [2023]; Xi et al. [2022]; Zhu et al. [2023]; Agrawal et al. [2022]; Vilar et al. [2022]. These approaches often involve feeding a few translation examples directly into the prompt, allowing the model to generalize with minimal supervision. Beyond prompting, compact decoder-only models have gained traction due to their efficiency. For example, BigTranslate Yang et al. [2023] and PARADIGM Xu et al. [2023] explore fine-tuning small open-source LLMs using parallel corpora, achieving competitive translation performance at a fraction of the computational cost. Models like Tower Alves et al. [2024] and GemmaX2-28 Cui et al. [2025] employ two-phase training, multilingual pre-training followed by instruction fine-tuning, enabling domain-specific translation capabilities. Multilingual decoder-only models, such as XALMA Xu et al. [2024], scale this approach further, incorporating 50 languages and using MoE layers and adapter modules to manage multilingual transfer.

Within the Arabic domain, general-purpose decoder-only models, such as Allam Bari et al. [2024], AceGPT Huang et al. [2023], and Silma Team [2024], are trained primarily on Arabic-centric corpora with objectives covering various downstream tasks. However, these models are typically not specialized in translation and often lack the fine-grained bilingual alignment necessary for high-quality Arabic-English translation. Specialized Arabic decoder-only translation models have also emerged. One such model is Lahjawi Hamed et al. [2025], a cross-dialect
translation system that demonstrated strong performance in both MSA and dialectal Arabic. Lahjawi was specifically fine-tuned for cross-dialect translation.

Given the trade-off between translation quality, inference speed, and resource efficiency, in this work we adopt the decoder-only paradigm. Mutarjim follows a two-stage training strategy: large-scale monolingual pre-training followed by supervised fine-tuning on high-quality Arabic-English parallel data. Our choice reflects a growing shift toward compact, efficient, and Arabic-optimized decoder-only systems for machine translation.

3 Data

Our training data consists exclusively of bilingual Arabic-English corpora, combining proprietary and open-source sources. For pre-training, we utilize large-scale, domain-diverse parallel datasets to expose the model to a wide range of linguistic patterns and translation contexts. Fine-tuning focuses on higher-quality data, combining carefully filtered open-source corpora with proprietary datasets curated for accuracy, fluency, and domain relevance. This composition ensures broad coverage during pre-training while emphasizing translation accuracy, fluency, and domain specificity during fine-tuning.

3.1 Pre-training Data

Our pre-training corpus comprises approximately 10 billion tokens of bilingual Arabic-English data, used to continue pre-training the base model, Kuwain-1.5B, and improve its performance on translation tasks. Data are sourced from the OPUS platform Tiedemann [2016], from which we exclusively select Arabic-English parallel corpora, supplemented with proprietary datasets curated internally to improve domain diversity and coverage.

To improve data quality, we applied a set of pre-processing steps, including the removal of sentence pairs with fewer than three tokens, as such samples often lack meaningful context. We also filtered out misaligned examples in which the target sentence is not in the correct language.
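The two pair-level filters just described can be sketched as follows. This is an illustrative sketch rather than the actual pipeline, and the script-based language check (counting characters in the Arabic Unicode block) is a simplifying assumption standing in for a real language identifier:

```python
def arabic_ratio(text: str) -> float:
    """Fraction of alphabetic characters that fall in the Arabic Unicode block."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return sum("\u0600" <= c <= "\u06FF" for c in letters) / len(letters)

def keep_pair(src: str, tgt: str, tgt_is_arabic: bool, min_tokens: int = 3) -> bool:
    """Drop pairs that are too short or whose target is in the wrong language."""
    if len(src.split()) < min_tokens or len(tgt.split()) < min_tokens:
        return False  # fewer than three tokens: usually no meaningful context
    ratio = arabic_ratio(tgt)
    # target should be written mostly in the expected script
    return ratio > 0.5 if tgt_is_arabic else ratio < 0.5
```

A script-ratio test is crude but cheap; production pipelines would typically swap in a trained language-ID model for the same check.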
Pairs with substantial length mismatches are excluded to reduce the risk of partial or noisy translations. Finally, we performed deduplication to eliminate repeated sentence pairs and reduce redundancy in the training data. These filtering steps help maintain a reasonably clean and consistent corpus for pre-training.

3.2 Fine-tuning Data

The fine-tuning procedure utilized a precisely selected corpus of approximately 6 million Arabic-English parallel sentence pairs. This corpus exhibits substantial diversity and underwent careful filtering procedures to ensure high translation fidelity. The dataset was derived from two principal sources:

• One portion of the data incorporates translations originally produced by a state-of-the-art LLM. Subsequently, expert linguists inspected a representative subset of these outputs to confirm their accuracy and fluency. To promote stronger Arabic fluency in the model, we emphasized Arabic-centric samples, where Arabic is the source language, at a 2:1 ratio compared to English-centric samples. This approach not only enhances the model's ability to understand and generate Arabic text but also helps preserve the cultural and linguistic richness of the language.
• The remaining portion comprises high-quality filtered data from OPUS. We applied a combination of automatic and manual review processes, including human inspection of representative subsets. Datasets exhibiting recurring issues such as out-of-context sentences, hallucinations, misinformation, or poor alignment were excluded to maintain overall data integrity.

The fine-tuning dataset was designed to align with the domain categories introduced in our benchmark (Section 4), ensuring broad and realistic coverage across cultural, legal, scientific, healthcare, religious, and technical domains. We prioritized the inclusion of authentic Arabic source material and maintained a balanced representation across domains to mitigate distributional bias. This targeted curation enables the model to generalize effectively across
diverse topics while preserving high translation quality.

4 Tarjama-25: Bidirectional Arabic-English Translation Benchmark

4.1 Motivation and Development

Modern MT systems face persistent evaluation challenges due to the limitations of existing benchmarks. To address these gaps, we introduce Tarjama-25, a comprehensive benchmark specifically designed for both Arabic-to-English and English-to-Arabic translation tasks. The current landscape of translation evaluation reveals several critical shortcomings: most publicly available datasets are English-centric (i.e., English is the source language), lacking authentic bidirectional content; benchmarks tend to contain predominantly short sentences (typically 6–30 words), which underutilizes the capacity of modern language models designed to process substantially longer input sequences; and domain-specific coverage remains limited. Furthermore, potential data contamination from web-scale pre-training and insufficient representation of language-specific characteristics, particularly for Arabic texts, pose additional challenges.

To address these challenges, we developed Tarjama-25 through a comprehensive data collection and validation pipeline:

• We began by collecting 30,000 sentences from authentic Arabic and English sources, each ranging from 50 to 100 words long, ensuring broad domain coverage across scientific, technical, healthcare, cultural, and general-interest topics. Half of the data was originally written in Arabic, and the other half in English.
• The 30,000 sentences were initially translated using state-of-the-art machine translation systems to create parallel sentence pairs.
• From these, 5,000 pairs of sentences were selected for detailed human refinement. Professional translators reviewed and corrected each selected pair to ensure linguistic accuracy and fluency. The final selection maintains a balanced distribution across all domains (Figure 2).
• Finally, domain experts conducted an additional review to validate the accuracy and contextual relevance of the translations within their respective fields.

This careful multi-stage process ensures high-quality, human-validated translations with a balanced source-language distribution and rich domain diversity, making Tarjama-25 a robust and realistic benchmark for bidirectional Arabic-English translation evaluation.

4.2 Findings and Recommendations

Tarjama-25 distinguishes itself through authentic source content in both languages, diverse text lengths, extensive domain coverage, and a strong focus on language-specific subtleties. Our preliminary evaluations reveal that many current MT models, despite their strong performance on existing benchmarks, face significant challenges with Tarjama-25. Detailed evaluation results are presented in Section 7.

Figure 2: Domain Coverage in Tarjama-25 Benchmark

Based on our findings, we recommend:

1. Development of language-specific authentic benchmarks;
2. Greater emphasis on domain-specific translation capability;
3. Integration of cultural and linguistic nuances in evaluation metrics;
4. Regular benchmark updates to reflect evolving language use.

5 Method

For Mutarjim, we build on Kuwain-1.5B Hennara et al. [2025], a decoder-only bilingual Arabic-English small language model designed for efficiency in resource-constrained environments. Our approach adopts standard LLM training methodologies commonly used in the field. These methodologies comprise two main phases: pre-training and fine-tuning. To improve translation performance, we introduce targeted modifications within this framework. The pre-training phase is designed to develop a robust bilingual representation, a foundation for the subsequent fine-tuning stage focused specifically on translation tasks.

5.1 Pre-training Phase

Following the successful approaches of recent works such as GemmaX Cui et al. [2025] and Tower Alves et al.
[2024] in continuing pre-training
for translation tasks, we further pre-trained our model on English-Arabic parallel data using a next-token prediction objective. To facilitate the learning process, we introduce two special tokens to our model: <|English|> and <|Arabic|>. We formatted the data as shown on the left side of Figure 3, where English sentences begin with the token <|English|> and Arabic sentences with <|Arabic|>. All pre-training data consist of paired Arabic-English sentences structured according to this format. During training, the model sees both sentences and is trained to predict the next token over the entire input. To prevent unidirectional translation bias, we randomly select the order of the sentences in each pair. This encourages the model to develop robust bidirectional translation capabilities without favoring a specific source language.

Figure 3: Illustration of the two data formats used in Mutarjim: (Left) pre-training stream data format; (Right) fine-tuning data sample.

5.2 Fine-tuning Phase

The fine-tuning phase follows the same format as pre-training, adding a newline between the two sentences for improved structural clarity, as illustrated on the right side of Figure 3. However, unlike the pre-training stage, we apply causal masking to the input sentence so that the model is only trained on generating the target sentence from the source, while still using the same next-token prediction objective. We exclusively use high-quality, human-curated parallel data for this phase to ensure translation accuracy.

The model is trained for two epochs over a total of 3 billion tokens, balancing sufficient exposure to high-quality examples with the need to avoid overfitting. We carefully monitor both training phases to maintain translation quality and prevent performance degradation. Detailed training specifications, including learning rates, batch sizes, and other hyperparameters, are provided in Appendix B.
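The two formats described above can be sketched as follows. Exact separators and tokenization are assumptions on our part (whitespace tokens stand in for real subwords, since Figure 3 is not reproduced here); -100 is the conventional "ignore" index used by cross-entropy implementations such as PyTorch's:

```python
import random

EN, AR = "<|English|>", "<|Arabic|>"

def pretrain_sample(en: str, ar: str, rng: random.Random) -> str:
    """Pre-training stream: both sentences, language-tagged, in random order
    so neither direction is favored; loss covers the entire input."""
    parts = [f"{EN} {en}", f"{AR} {ar}"]
    rng.shuffle(parts)
    return " ".join(parts)

def finetune_sample(src: str, tgt: str, src_tag: str, tgt_tag: str):
    """Fine-tuning: same tags, with a newline between source and target.
    Source positions get label -100 (ignored by the loss), so the model is
    trained only to generate the target from the source."""
    text = f"{src_tag} {src}\n{tgt_tag} {tgt}"
    src_len = len(f"{src_tag} {src}\n".split())
    tokens = text.split()  # toy whitespace tokenizer
    labels = [-100] * src_len + tokens[src_len:]
    return tokens, labels
```

The only difference between the two phases in this sketch is the label mask: pre-training supervises every position, while fine-tuning supervises only the target half.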
6 Experiment and Results

To thoroughly evaluate the effectiveness of Mutarjim, we conducted a series of experiments aimed at gaining deeper insights into the challenges and dynamics of Arabic-English translation. Our evaluation focuses on three core aspects. First, we compare unidirectional and bidirectional training setups to assess whether a single model trained in both directions (Arabic-to-English and English-to-Arabic) compromises performance relative to dedicated unidirectional models. Second, we examine the contribution of the continued pre-training phase in enhancing translation quality and improving the model's generalization across domains. Third, we analyze the effect of context length during fine-tuning to understand how sentence length influences performance, particularly when the evaluation samples differ in length from those seen during training. These experiments are conducted using the WMT24++ benchmark, providing a consistent and challenging evaluation framework.

6.1 Unidirectional vs. Bidirectional Translation Performance

To assess the impact of directional training on translation quality, we compared unidirectional and bidirectional versions of Mutarjim, focusing on how specializing in a single translation direction affects performance relative to a multitask setup. We investigated the performance trade-offs between the unidirectional models, Mutarjim-AR2EN (Arabic-to-English) and Mutarjim-EN2AR (English-to-Arabic), and the bidirectional model Mutarjim-Bi. The unidirectional variants were each trained for 3 epochs, while the bidirectional variant was trained for 2 epochs on the combined data. Table 1 presents evaluation results using COMET Rei et al.
[2020] and chrF++ metrics on the WMT24++ benchmark Deutsch et al. [2025]. Despite being exposed to more diverse data, the bidirectional model showed a slight decrease in performance. Unidirectional models consistently outperformed the bidirectional model, with Mutarjim-AR2EN achieving a COMET score 3.16 points higher than Mutarjim-Bi for Arabic-to-English translation. Ultimately, the choice of model depends on application needs: Mutarjim-Bi offers greater efficiency and flexibility through multitask support, while the unidirectional variants deliver higher translation accuracy for specific directions. Given the compact size of our model (1.5B parameters), the computational cost difference between the approaches remains modest.

| Model | Training | AR→EN COMET | AR→EN chrF++ | EN→AR COMET | EN→AR chrF++ |
|---|---|---|---|---|---|
| Mutarjim-Bi | Bidirectional | 79.73 | 50.27 | 72.86 | 47.04 |
| Mutarjim-AR2EN | Unidirectional | 82.89 | 54.89 | — | — |
| Mutarjim-EN2AR | Unidirectional | — | — | 75.46 | 48.04 |

Table 1: Performance comparison between the bidirectional (Mutarjim-Bi) and unidirectional (Mutarjim-AR2EN, Mutarjim-EN2AR) translation models on WMT24++.

6.2 The Impact of the Continued Pre-training Phase

We evaluated the impact of continued pre-training on translation performance, aiming to determine whether translation-specific pre-training could yield meaningful gains over direct fine-tuning. Although our base model, Kuwain, was initially trained on a substantial corpus, making it a viable candidate for direct fine-tuning, we explore whether targeted continued pre-training on bilingual data enhances downstream translation quality, following recent successes in domain-adaptive pre-training Cui et al. [2025]; Alves et al. [2024].

Table 2 presents a comparison between models trained with and without the additional translation-focused pre-training phase. Models benefiting from this additional phase consistently outperform their counterparts trained solely through fine-tuning, as reflected in both COMET and chrF++ scores.
The gains are evident in both the Arabic-to-English and English-to-Arabic directions, underscoring the general effectiveness of this strategy in translation tasks. While this approach may not be cost-effective for larger models, it remains computationally feasible for smaller architectures like our 1.5B-parameter models.

| Model | AR→EN COMET | AR→EN chrF++ | EN→AR COMET | EN→AR chrF++ |
|---|---|---|---|---|
| Without Additional Pre-training | | | | |
| Mutarjim-AR2EN | 74.30 | 42.17 | — | — |
| Mutarjim-EN2AR | — | — | 61.91 | 34.89 |
| With Additional Pre-training | | | | |
| Mutarjim-AR2EN | 82.89 | 54.89 | — | — |
| Mutarjim-EN2AR | — | — | 75.46 | 48.04 |

Table 2: Effect of additional translation-specific pre-training on model performance, evaluated on the WMT24++ test set.

6.3 Context Length Effect

We conducted two independent fine-tuning experiments to evaluate the impact of input length distributions on translation performance. In the first experiment (e1), we fine-tuned the pre-trained Mutarjim model using samples containing more than 30 words, aiming to improve the model's performance on longer sentences. While this enhanced fluency on long-form content, we observed performance degradation on shorter inputs, with increased hallucinations and irrelevant continuations. To address this, we performed a second, separate fine-tuning experiment (e2) using the same base model but modifying the training set to include an additional 15% of short samples (ranging from 2 to 30 words). This experiment sought to balance the model's ability across varying sequence lengths. We evaluated both versions on the WMT24++ test set. As shown in Table 3, the second experiment (e2) led to improved performance in both directions of translation, confirming the benefit of including shorter sequences in the training data.

| Model | AR→EN COMET | AR→EN chrF++ | EN→AR COMET | EN→AR chrF++ |
|---|---|---|---|---|
| Experiment 1 (Long Inputs Only) | | | | |
| Mutarjim-AR2EN-e1 | 73.62 | 48.57 | — | — |
| Mutarjim-EN2AR-e1 | — | — | 69.07 | 43.40 |
| Experiment 2 (Mixed-Length Inputs) | | | | |
| Mutarjim-AR2EN-e2 | 74.22 | 50.84 | — | — |
| Mutarjim-EN2AR-e2 | — | — | 73.56 | 46.05 |

Table 3: Evaluation of models fine-tuned with different input-length distributions on the WMT24++ test set.

7 Evaluation

To contextualize the performance of Mutarjim, we compare it against a diverse set of strong decoder-only models that support Arabic and are widely recognized for their translation capabilities. These include general-purpose language models such as AceGPT-8B Huang et al. [2023], ALLam-7B Bari et al. [2024], C4AI-7B Cohere For AI [2024], Cohere-8B Aryabumi et al. [2024], Cohere-32B Aryabumi et al. [2024], Gemma2-27B Team et al. [2024], Silma-9B Team [2024], and Yehia-7B Navid-AI [2025]. Furthermore, we include multilingual translation-specialized models such as X-ALMA-13B-Group8 Xu et al. [2024], LLaMAX3-8B-Alpaca Lu et al. [2024], and GemmaX-9B Cui et al. [2025]. To provide a closer baseline in terms of model architecture and size, we also evaluate against NLLB-3.3B Team et al. [2022], an encoder-decoder model known for its effectiveness in low-resource translation tasks and its widespread adoption in Arabic-English translation.

We evaluated the performance of our model compared to a range of strong baseline models across three established benchmarks: WMT24++, IWSLT2017, and our newly proposed benchmark Tarjama-25. For all benchmarks, we evaluated translation quality using widely adopted metrics, BLEU, chrF++, and COMET, to ensure a comprehensive and fair assessment. The results for each benchmark are reported in their respective tables: Tarjama-25 in Table 4, WMT24++ in Table 5, and IWSLT2017 in Table 6. For consistency, all models are listed in the tables in order of model size. To ensure a fair comparison, we employ model-specific prompts during evaluation, as illustrated in Appendix D. To streamline the evaluation pipeline and accelerate inference, we utilize vLLM Kwon et al. [2023]‡, which enables efficient batched decoding across decoder-only models.
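For intuition about the reported metrics, the character n-gram F-score at the heart of chrF can be sketched in a few lines. This is a deliberate simplification on our part: official scores come from standard tooling, and chrF++ additionally mixes in word n-grams, so this toy version is not comparable to the reported numbers:

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Multiset of character n-grams, ignoring spaces (as chrF does by default)."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def simple_chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Average F-beta over character n-gram orders 1..max_n, scaled to 0-100.

    beta=2 weights recall twice as heavily as precision, as in chrF.
    """
    f_scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # sentence shorter than n characters
        overlap = sum((hyp & ref).values())
        precision = overlap / sum(hyp.values())
        recall = overlap / sum(ref.values())
        if precision + recall == 0:
            f_scores.append(0.0)
            continue
        f_scores.append((1 + beta**2) * precision * recall
                        / (beta**2 * precision + recall))
    return 100 * sum(f_scores) / len(f_scores) if f_scores else 0.0
```

Character-level matching is one reason chrF-style metrics behave more gracefully than word-level BLEU on a morphologically rich language like Arabic, where a single wrong affix should not zero out the whole word match.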
Despite being the smallest among the evaluated models, Mutarjim achieves state-of-the-art performance on the Tarjama-25 benchmark for the Arabic-to-English direction across all evaluation metrics, and leads in the English-to-Arabic direction when measured by BLEU. It trails the much larger GPT-4o-mini model in COMET and chrF++ by only a narrow margin. These results highlight Mutarjim's competitive effectiveness despite its compact size, demonstrating its strength in both translation quality and efficiency.

Model performance varies noticeably on Tarjama-25 compared to existing benchmarks. For example, while GPT-4o-mini excels on WMT24++ and IWSLT2017, its relative performance declines on Tarjama-25. This highlights how standard benchmarks may overlook challenges in domain-specific and bidirectional translation. Tarjama-25 helps expose these gaps, offering a more realistic and rigorous assessment of real-world translation capabilities.

Another key observation is the consistent performance gap observed in most models between Arabic-to-English and English-to-Arabic translation, with the former generally yielding better results. This trend is visually illustrated in Figure 1, where the disparity, particularly in the chrF++ metric, is pronounced. Several factors may contribute to this asymmetry, including Arabic's rich morphology and syntactic flexibility, which allow for multiple valid translations that current metrics may fail to recognize. Furthermore, the predominance of English-centric training data in many models may hinder their ability to generate fluent and accurate Arabic output. Notably, Mutarjim demonstrates balanced performance in both translation directions, which we attribute to its Arabic-centric training strategy. This indicates that training with authentic Arabic source data can help mitigate directional bias and improve overall translation fidelity.
‡https://docs.vllm.ai/en/stable/

| Model | Size | AR→EN COMET | AR→EN chrF++ | AR→EN BLEU | EN→AR COMET | EN→AR chrF++ | EN→AR BLEU |
|---|---|---|---|---|---|---|---|
| Mutarjim | 1.5B | 82.63 | 74.66 | 55.28 | 83.41 | 68.67 | 43.71 |
| NLLB Costa-Jussà et al. [2022b] | 3.3B | 67.06 | 40.50 | 24.38 | 81.27 | 59.69 | 30.32 |
| c4ai Cohere For AI [2024] | 7B | 80.93 | 67.24 | 43.34 | 79.10 | 55.96 | 25.18 |
| Yehia Navid-AI [2025] | 7B | 73.31 | 56.77 | 32.14 | 74.97 | 50.32 | 20.67 |
| ALLam Bari et al. [2024] | 7B | 72.90 | 56.88 | 31.01 | 75.41 | 51.24 | 20.54 |
| Cohere Aryabumi et al. [2024] | 8B | 81.20 | 67.16 | 42.72 | 82.50 | 58.46 | 26.26 |
| AceGPT Huang et al. [2023] | 8B | 80.71 | 65.63 | 38.67 | 78.39 | 50.67 | 20.02 |
| LLaMAX3 Lu et al. [2024] | 8B | 77.72 | 54.95 | 27.86 | 56.76 | 33.25 | 7.63 |
| SILMA Team [2024] | 9B | 64.36 | 37.84 | 15.67 | 58.01 | 27.71 | 5.62 |
| GemmaX Cui et al. [2025] | 9B | 69.63 | 43.42 | 19.96 | 66.94 | 37.66 | 9.98 |
| XALMA Xu et al. [2024] | 13B | 73.37 | 46.96 | 21.57 | 66.36 | 29.88 | 6.64 |
| Gemma 2 Team et al. [2024] | 27B | 80.81 | 70.42 | 42.78 | 42.20 | 3.52 | 3.08 |
| Cohere Aryabumi et al. [2024] | 32B | 82.44 | 73.10 | 51.16 | 82.09 | 63.29 | 32.25 |
| GPT-4o mini Hurst et al. [2024] | - | 83.67 | 76.08 | 54.24 | 83.36 | 66.36 | 38.52 |

Table 4: Performance comparison of bidirectional (Arabic-English) translation models on the Tarjama-25 benchmark in terms of COMET, chrF++, and BLEU.

| Model | Size | AR→EN COMET | AR→EN chrF++ | AR→EN BLEU | EN→AR COMET | EN→AR chrF++ | EN→AR BLEU |
|---|---|---|---|---|---|---|---|
| Mutarjim | 1.5B | 72.99 | 52.27 | 19.26 | 75.46 | 48.04 | 17.99 |
| NLLB Costa-Jussà et al. [2022b] | 3.3B | 76.71 | 50.13 | 25.50 | 77.75 | 45.89 | 16.03 |
| c4ai Cohere For AI [2024] | 7B | 79.27 | 54.91 | 26.35 | 72.45 | 44.32 | 14.19 |
| Yehia Navid-AI [2025] | 7B | 72.72 | 47.58 | 15.39 | 72.23 | 41.12 | 10.69 |
| ALLam Bari et al. [2024] | 7B | 72.00 | 46.80 | 15.01 | 71.89 | 41.45 | 10.41 |
| Cohere Aryabumi et al. [2024] | 8B | 78.89 | 54.06 | 24.96 | 74.80 | 44.95 | 14.08 |
| AceGPT Huang et al. [2023] | 8B | 78.18 | 52.25 | 21.21 | 73.65 | 40.55 | 11.37 |
| LLaMAX3 Lu et al. [2024] | 8B | 75.91 | 48.18 | 18.89 | 57.31 | 28.83 | 4.03 |
| SILMA Team [2024] | 9B | 71.33 | 38.96 | 16.44 | 60.54 | 26.75 | 4.97 |
| GemmaX Cui et al. [2025] | 9B | 77.82 | 50.80 | 22.67 | 70.21 | 38.81 | 9.83 |
| XALMA Xu et al. [2024] | 13B | 76.84 | 48.65 | 19.34 | 69.19 | 33.23 | 7.54 |
| Gemma 2 Team et al. [2024] | 27B | 72.79 | 51.09 | 16.59 | 54.00 | 32.66 | 4.77 |
| Cohere Aryabumi et al. [2024] | 32B | 79.77 | 57.05 | 27.98 | 72.74 | 47.13 | 15.84 |
| GPT-4o mini Hurst et al. [2024] | - | 83.29 | 58.24 | 29.23 | 82.32 | 50.03 | 20.48 |

Table 5: Performance comparison of bidirectional (Arabic-English) translation models on the WMT24++ benchmark in terms of COMET, chrF++, and BLEU.

| Model | Size | AR→EN COMET | AR→EN chrF++ | AR→EN BLEU | EN→AR COMET | EN→AR chrF++ | EN→AR BLEU |
|---|---|---|---|---|---|---|---|
| Mutarjim | 1.5B | 82.89 | 54.89 | 31.00 | 79.76 | 44.21 | 12.74 |
| NLLB Costa-Jussà et al. [2022b] | 3.3B | - | - | - | - | - | - |
| c4ai Cohere For AI [2024] | 7B | 83.99 | 56.64 | 33.64 | 77.41 | 40.50 | 9.14 |
| Yehia Navid-AI [2025] | 7B | 75.58 | 47.38 | 15.93 | 76.22 | 38.41 | 6.65 |
| ALLam Bari et al. [2024] | 7B | 75.64 | 37.36 | 5.89 | 75.25 | 46.54 | 14.79 |
| Cohere Aryabumi et al. [2024] | 8B | 83.60 | 55.83 | 31.71 | 79.05 | 42.36 | 9.10 |
| AceGPT Huang et al. [2023] | 8B | 81.72 | 52.83 | 26.26 | 79.62 | 40.23 | 9.25 |
| LLaMAX3 Lu et al. [2024] | 8B | 81.04 | 49.17 | 24.28 | 67.79 | 30.17 | 4.18 |
| SILMA Team [2024] | 9B | 78.55 | 47.57 | 24.28 | 69.59 | 30.03 | 5.11 |
| GemmaX Cui et al. [2025] | 9B | 82.06 | 53.30 | 30.25 | 76.17 | 37.17 | 7.10 |
| XALMA Xu et al. [2024] | 13B | 80.06 | 49.04 | 24.10 | 76.41 | 36.99 | 7.13 |
| Gemma 2 Team et al. [2024] | 27B | - | - | - | 48.56 | 22.28 | 1.57 |
| Cohere Aryabumi et al. [2024] | 32B | 84.30 | 59.02 | 35.37 | 74.63 | 43.53 | 8.93 |
| GPT-4o mini Hurst et al. [2024] | - | 86.37 | 60.48 | 36.86 | 87.14 | 47.63 | 15.50 |

Table 6: Performance comparison of bidirectional (Arabic-English) translation models on the IWSLT-2017 benchmark in terms of COMET, chrF++, and BLEU.

8 Conclusion

In this work, we introduce Mutarjim, an efficient and compact small language model optimized for bidirectional Arabic-English machine translation while providing rich and accurate output. We also present a new benchmark, Tarjama-25, a diverse and representative dataset for bidirectional Arabic-English MT evaluation. Our evaluation and experiments demonstrate that Mutarjim achieves competitive performance against larger models while requiring significantly fewer computational resources. The model's compact architecture enables deployment in resource-constrained environments without sacrificing translation quality. Future work will focus on scaling up the model architecture and training on larger multilingual datasets to support translation between Arabic and multiple languages, including French, Turkish, and Japanese, to create a comprehensive multilingual translation system while maintaining efficiency.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, and Marjan Ghazvininejad. In-context examples selection for machine translation. arXiv preprint arXiv:2212.02437, 2022.

Duarte M Alves, José Pombal, Nuno M Guerreiro, Pedro H Martins, João Alves, Amin Farajian, Ben Peters, Ricardo Rei, Patrick Fernandes, Sweta Agrawal, et al. Tower: An open multilingual large language model for translation-related tasks. arXiv preprint arXiv:2402.17733, 2024.
Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, Hangyu Lin, Bharat Venkitesh, Madeline Smith, Jon Ander Campos, Yi Chern Tan, et al. Aya 23: Open weight releases to further multilingual progress. arXiv preprint arXiv:2405.15032, 2024.

Babaali Baligh and Salem Mohammed. Arabic machine translation: A panoramic survey. SSRN Electronic Journal, 01 2022. doi: 10.2139/ssrn.4312742.

M Saiful Bari, Yazeed Alnumay, Norah A Alzahrani, Nouf M Alotaibi, Hisham A Alyahya, Sultan AlRashed, Faisal A Mirza, Shaykhah Z Alsubaie, Hassan A Alahmed, Ghadah Alabduljabbar, et al. Allam: Large language models for arabic and english. arXiv preprint arXiv:2407.15390, 2024.

Mauro Cettolo, Marcello Federico, Luisa Bentivogli, Jan Niehues, Sebastian Stüker, Katsuhito Sudoh, Koichiro Yoshino, and Christian Federmann. Overview of the IWSLT 2017 evaluation campaign. In Proceedings of the 14th International Conference on Spoken Language Translation, pp. 2–14, Tokyo, Japan, December 14-15 2017. International Workshop on Spoken Language Translation. URL https://aclanthology.org/2017.iwslt-1.1.

Cohere For AI. c4ai-command-r-07-arabic-2025, 2024. URL https://huggingface.co/CohereForAI/c4ai-command-r-08-2024.

Marta R Costa-Jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672, 2022a.

Marta R Costa-Jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672, 2022b.

Menglong Cui, Pengzhi Gao, Wei Liu, Jian Luan, et al. Multilingual machine translation with open large language models at practical scale: An empirical study. arXiv preprint arXiv:2502.02481, 2025.
Daniel Deutsch, Eleftheria Briakou, Isaac Caswell, Mara Finkelstein, Rebecca Galor, Juraj Juraska, Geza Kovacs, Alison Lui, Ricardo Rei, Jason Riesa, et al. Wmt24++: Expanding the language coverage of wmt24 to 55 languages & dialects. arXiv preprint arXiv:2502.12404, 2025.

Mohamed Motasim Hamed, Muhammad Hreden, Khalil Hennara, Zeina Aldallal, Sara Chrouf, and Safwan AlModhayan. Lahjawi: Arabic cross-dialect translator. In Proceedings of the 4th Workshop on Arabic Corpus Linguistics (WACL-4), pp. 12–24, 2025.

Zhiwei He, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu, Shuming Shi, and Xing Wang. Exploring human-like translation strategy with large language models. Transactions of the Association for Computational Linguistics, 12:229–246, 2024.

Khalil Hennara, Sara Chrouf, Mohamed Motaism Hamed, Zeina Aldallal, Omar Hadid, and Safwan AlModhayan. Kuwain 1.5b: An arabic slm via language injection. arXiv preprint arXiv:2504.15120, 2025.

Huang Huang, Fei Yu, Jianqing Zhu, Xuening Sun, Hao Cheng, Dingjie Song, Zhihong Chen, Abdulmohsen Alharthi, Bang An, Juncai He, et al. Acegpt, localizing large language models in arabic. arXiv preprint arXiv:2309.12053, 2023.

Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.

Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726–742, 2020.
Hongyuan Lu, Haoran Yang, Haoyang Huang, Dongdong Zhang, Wai Lam, and Furu Wei. Chain-of-dictionary prompting elicits translation in large language models. arXiv preprint arXiv:2305.06575, 2023.

Yinquan Lu, Wenhao Zhu, Lei Li, Yu Qiao, and Fei Yuan. Llamax: Scaling linguistic horizons of llm by enhancing translation capabilities beyond 100 languages. arXiv preprint arXiv:2407.05975, 2024.

El Moatez Billah Nagoudi, AbdelRahim Elmadany, and Muhammad Abdul-Mageed. Arat5: Text-to-text transformers for arabic language generation. arXiv preprint arXiv:2109.12068, 2021.

El Moatez Billah Nagoudi, AbdelRahim Elmadany, and Muhammad Abdul-Mageed. Turjuman: A public toolkit for neural arabic machine translation, 2022. URL https://arxiv.org/abs/2206.03933.

Navid-AI. Yehia 7b preview. https://huggingface.co/Navid-AI/Yehia-7B-preview, 2025.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2023. URL https://arxiv.org/abs/1910.10683.

Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. Comet: A neural framework for mt evaluation. arXiv preprint arXiv:2009.09025, 2020.

Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118, 2024.

NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. No language left behind: Scaling human-centered machine translation, 2022. URL https://arxiv.org/abs/2207.04672.

Silma Team. Silma. 2024. URL https://www.silma.ai.

Jörg Tiedemann. OPUS – parallel corpora for everyone. In Proceedings of the 19th Annual Conference of the European Association for Machine Translation: Projects/Products, Riga, Latvia, May 30–June 1 2016. Baltic Journal of Modern Computing. URL https://aclanthology.org/2016.eamt-2.8.

Ahmet Üstün, Viraat Aryabumi, Zheng-Xin Yong, Wei-Yin Ko, Daniel D'souza, Gbemileke Onilude, Neel Bhandari, Shivalika Singh, Hui-Lee Ooi, Amr Kayid, et al. Aya model: An instruction finetuned open-access multilingual language model. arXiv preprint arXiv:2402.07827, 2024.

David Vilar, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, and George Foster. Prompting palm for translation: Assessing strategies and performance. arXiv preprint arXiv:2211.09102, 2022.

Victoria Lin Xi, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Du Jingfei, et al. Few-shot learning with multilingual generative language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 9019–9052. Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, 2022.

Haoran Xu, Young Jin Kim, Amr Sharaf, and Hany Hassan Awadalla. A paradigm shift in machine translation: Boosting translation performance of large language models. arXiv preprint arXiv:2309.11674, 2023.

Haoran Xu, Kenton Murray, Philipp Koehn, Hieu Hoang, Akiko Eriguchi, and Huda Khayrallah.
X-alma: Plug & play modules and adaptive rejection for quality translation at scale. arXiv preprint arXiv:2410.03115, 2024.

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. mt5: A massively multilingual pre-trained text-to-text transformer, 2021. URL https://arxiv.org/abs/2010.11934.

W Yang, C Li, J Zhang, and C Zong. Bigtranslate: Augmenting large language models with multilingual translation capability over 100 languages. arXiv preprint arXiv:2305.18098, 2023.

Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, Shujian Huang, Lingpeng Kong, Jiajun Chen, and Lei Li. Multilingual machine translation with large language models: Empirical results and analysis. arXiv preprint arXiv:2304.04675, 2023.

A Evaluation Details

B Training Details

We trained Mutarjim using a two-stage approach (pre-training and fine-tuning) on 8 NVIDIA H100 GPUs. Table 7 summarizes the key hyperparameters for both phases.

Hyperparameter          Pre-training   Fine-tuning
Max Learning Rate       1×10^-4        8×10^-5
Learning Rate Schedule  Cosine         Cosine
Weight Decay            0.1            0.01
Optimizer               AdamW          AdamW
Batch Size              1024           4096
Training Steps          4K             3.3K
Context Length          2048           512
GPUs                    8×H100         8×H100

Table 7: Training Hyperparameters

C Mutarjim Translation Examples

Tables 8 and 9 present qualitative examples of Mutarjim's performance in both translation directions. These examples cover a range of domains—including mathematics, structured data, biomedical content, and informal discourse—and illustrate the model's ability to produce accurate, fluent translations that preserve both meaning and structure.

Table 8: Examples of English-to-Arabic Mutarjim Translation. [Each row pairs an original English sentence with its Arabic reference and Mutarjim's output; the examples include a grade 3–5 math standards description, a formula-substitution instruction for the angle of a pyramid from apex to base edge, and a structured JSON address record.]

Table 9: Examples of Arabic-to-English Mutarjim Translation. [Each row pairs an original Arabic sentence with Mutarjim's English output, e.g. "German rail transport workers and agricultural workers frighten off business owners in Germany."; "The rapid vesicular binding observed might not be essential for the accumulation of the amine by the presynaptic terminals during periods of short exposure."; and an informal passage about automatic synchronization between a server and a phone.]

D Evaluation Models Prompts

We use model-specific prompts during the evaluation to ensure a fair comparison.
Table 10 lists the prompt templates for each model, which account for the source and target languages.

Mutarjim: None

NLLB: None

c4ai: <|START_OF_TURN_TOKEN|><|USER_TOKEN|>Translate the following sentence to [TARGET_LANGUAGE]. {text} Note: Don't answer any Question or engage within the context of the text just provide the literal translation<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|>

Yehia: Translate this from [SOURCE_LANGUAGE] to [TARGET_LANGUAGE]. Arabic: {text} English:

ALLam: Translate this from [SOURCE_LANGUAGE] to [TARGET_LANGUAGE]. [SOURCE_LANGUAGE]: {text} [TARGET_LANGUAGE]:

Cohere: <|START_OF_TURN_TOKEN|><|USER_TOKEN|>Translate the following sentence to [TARGET_LANGUAGE]. {text} Note: Don't answer any Question or engage within the context of the text just provide the literal translation<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|>

AceGPT: <User>: [Arabic instruction: "Translate the following text from the Arabic/English language into the English/Arabic language"] \n{text}\n<Assistant>

LLaMAX3: Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: Translate the following sentences from [SOURCE_LANGUAGE] to [TARGET_LANGUAGE]. ### Input: {text} ### Response:

SILMA: Translate this from [SOURCE_LANGUAGE] to [TARGET_LANGUAGE]. [SOURCE_LANGUAGE]: {text} [TARGET_LANGUAGE]:

GemmaX: Translate this from [SOURCE_LANGUAGE] to [TARGET_LANGUAGE]. [SOURCE_LANGUAGE]: {text} [TARGET_LANGUAGE]:

XALMA: Translate this from [SOURCE_LANGUAGE] to [TARGET_LANGUAGE]. [SOURCE_LANGUAGE]: {text} [TARGET_LANGUAGE]:

Gemma 2: Translate this from [SOURCE_LANGUAGE] to [TARGET_LANGUAGE]. [SOURCE_LANGUAGE]: {text} [TARGET_LANGUAGE]:

GPT-4o mini: Translate the following [SOURCE_LANGUAGE] sentences to [TARGET_LANGUAGE] accurately while preserving the integrity of the structure. Translate the entirety of the sentences and leave the data as it is provided: {text} Provide the translations in the following format: #OUTPUT1: [TARGET_LANGUAGE] Translation of all the words that follow #INPUT1

Table 10: Prompts used for each model during the evaluation process. Models like Mutarjim and NLLB are translation-specific systems that don't require prompting, while LLMs require structured prompts with varying degrees of specificity.
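The placeholder scheme used throughout Table 10 ([SOURCE_LANGUAGE], [TARGET_LANGUAGE], {text}) can be sketched as a small substitution helper. This is an illustrative sketch only: the `PROMPT_TEMPLATES` dictionary and `fill_prompt` function are hypothetical names, not part of the released evaluation toolkit, and only two representative entries from Table 10 are shown.

```python
# Illustrative sketch of filling the Table 10 prompt placeholders.
# PROMPT_TEMPLATES and fill_prompt are hypothetical names, not part of
# the released evaluation toolkit.
PROMPT_TEMPLATES = {
    # Translation-specific systems (Mutarjim, NLLB) take the raw source text.
    "Mutarjim": None,
    "SILMA": (
        "Translate this from [SOURCE_LANGUAGE] to [TARGET_LANGUAGE].\n"
        "[SOURCE_LANGUAGE]: {text}\n"
        "[TARGET_LANGUAGE]:"
    ),
}

def fill_prompt(model: str, text: str, src: str, tgt: str) -> str:
    """Instantiate a model's template with the source text and language names."""
    template = PROMPT_TEMPLATES[model]
    if template is None:  # no prompting needed for MT-specific models
        return text
    return (template
            .replace("[SOURCE_LANGUAGE]", src)
            .replace("[TARGET_LANGUAGE]", tgt)
            .replace("{text}", text))

prompt = fill_prompt("SILMA", "مرحبا بالعالم", "Arabic", "English")
```

Substituting the language names before the text avoids any interaction between placeholders, since neither bracketed token can appear inside a natural-language sentence.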
T2I-Eval-R1: Reinforcement Learning-Driven Reasoning for Interpretable Text-to-Image Evaluation

Zi-Ao Ma1∗, Tian Lan1∗, Rong-Cheng Tu2, Shu-Hang Liu1, Heyan Huang1, Zhijing Wu1, Chen Xu3†, Xian-Ling Mao1†
1School of Computer Science and Technology, Beijing Institute of Technology, China
2Nanyang Technological University, Singapore
3School of Medical Technology, Beijing Institute of Technology, China
{maziaoylwt,lantiangmftby}@gmail.com, rongcheng.tu@ntu.edu.sg, chenxu05037@gmail.com, maoxl@bit.edu.cn
https://github.com/maziao/T2I-Eval-R1

Abstract

The rapid progress in diffusion-based text-to-image (T2I) generation has created an urgent need for interpretable automatic evaluation methods that can assess the quality of generated images, thereby reducing the human annotation burden. To reduce the prohibitive cost of relying on commercial models for large-scale evaluation, and to improve the reasoning capabilities of open-source models, recent research has explored supervised fine-tuning (SFT) of multimodal large language models (MLLMs) as dedicated T2I evaluators. However, SFT approaches typically rely on high-quality critique datasets, which are either generated by proprietary LLMs—with potential issues of bias and inconsistency—or annotated by humans at high cost, limiting their scalability and generalization. To address these limitations, we propose T2I-Eval-R1, a novel reinforcement learning framework that trains open-source MLLMs using only coarse-grained quality scores, thereby avoiding the need to annotate high-quality interpretable evaluation rationales. Our approach integrates Group Relative Policy Optimization (GRPO) into the instruction-tuning process, enabling models to generate both scalar scores and interpretable reasoning chains with only easily accessible annotated judgment scores or preferences.
Furthermore, we introduce a continuous reward formulation that encourages score diversity and provides stable optimization signals, leading to more robust and discriminative evaluation behavior. Experimental results on three established T2I meta-evaluation benchmarks demonstrate that T2I-Eval-R1 achieves significantly higher alignment with human assessments and offers more accurate interpretable score rationales compared to strong baseline methods.

1 Introduction

Recent advancements in diffusion models have significantly improved the quality of text-to-image (T2I) generation, enabling the synthesis of high-resolution, photorealistic images from natural language prompts. Foundational models such as SD1.5 [28] and SDXL [25] have pushed the frontier of generative fidelity and efficiency, while recent transformer-based architectures [24] further enhance visual realism. Despite these advances, T2I systems still suffer from semantic inconsistencies—such as object hallucinations, attribute entanglement, and misalignment with complex or compositional instructions [11, 8]—posing challenges for both quality control and model development.

These limitations underscore the pressing need for robust, fine-grained evaluation methods that can assess the semantic faithfulness of generated images to input prompts. However, standard evaluation metrics remain inadequate. Inception Score [29] and FID [14] capture only global fidelity and distributional similarity, while CLIPScore [13] offers a coarse proxy for image-text alignment using pretrained encoders but lacks interpretability and correlates poorly with human judgments.

∗Equal contributions. †Co-corresponding authors.
arXiv:2505.17897v1 [cs.AI] 23 May 2025

To overcome these shortcomings, recent studies have explored leveraging large language models (LLMs) and multimodal LLMs (MLLMs) [23, 16, 22, 20, 31] as evaluators, owing to their strong semantic understanding and reasoning abilities [20, 23].
Commercial MLLMs like GPT-4o have shown remarkable capability in scoring T2I outputs with chain-of-thought explanations, as demonstrated in frameworks such as VIEScore [20]. Nevertheless, relying on closed-source APIs introduces high cost, limited accessibility, and scalability concerns for real-world deployment. Therefore,
recent work has turned to training open-source MLLMs through supervised fine-tuning (SFT), aiming to replicate the evaluation ability of commercial models while avoiding their cost and constraints. For instance, EATG [31] formulates the evaluation task into sub-skills such as object presence, attribute correctness, and spatial alignment, and uses fine-grained supervision to train MLLMs to act as interpretable evaluators. However, this paradigm still relies on high-quality supervision datasets, which typically come from either human annotations or synthetic critiques generated by proprietary models, such as GPT-4o [1] and Gemini [2]. Human annotations are costly and time-consuming, requiring fine-grained judgments about object presence, attributes, and semantic alignment—making them difficult to scale across diverse prompts and domains. On the other hand, synthetic critiques often suffer from inconsistency, bias, and limited generalization, especially when transferred across models or applied to complex compositional prompts [31]. These limitations significantly hinder the scalability and reliability of supervised fine-tuning for training open-source MLLMs as robust T2I evaluators.

To address these limitations, we propose T2I-Eval-R1, a novel framework that applies Rule-based Reinforcement Fine-Tuning (RFT) to train open-source MLLMs using only coarse-grained judgment scores or preferences. Unlike SFT approaches that depend on high-quality interpretable evaluation rationales, our method optimizes a reward-driven policy that encourages interpretable reasoning and semantic discrimination. Specifically, we integrate Group Relative Policy Optimization (GRPO) into the instruction-tuning pipeline to induce relative ranking behavior and structured rationales under weak supervision. Furthermore, we introduce a continuous reward formulation that increases reward diversity within training groups, improving learning stability and evaluator sharpness.
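The continuous reward formulation (given as Equations 8 and 9 in Section 3.2) can be illustrated with a minimal standalone sketch; the function names here are illustrative, and the single-wise example uses the paper's 0–10 scoring range.

```python
# Sketch of the continuous rewards used by T2I-Eval-R1 (Section 3.2).
# Deviations from the reference score/preference are mapped into [-1, 1],
# so near-misses earn more reward than distant predictions.

def clip(x: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, x))

def r_single(s_pred, s_min, s_max, s_ref):
    """Single-wise reward: 1 - 2*|clip(s_pred) - s_ref| / (s_max - s_min)."""
    return 1 - 2 * abs(clip(s_pred, s_min, s_max) - s_ref) / (s_max - s_min)

def r_pair(p_pred, p_ref):
    """Pairwise reward over preference confidences in [0, 1]."""
    return 1 - 2 * abs(clip(p_pred, 0.0, 1.0) - p_ref)

# For a reference score of 7 on [0, 10], predicting 8 is rewarded far
# more than predicting 3, unlike a binary correct/incorrect reward.
```

With a reference of 7, a prediction of 8 earns reward 0.8 while a prediction of 3 earns only 0.2, which is exactly the graded credit assignment that a binary 0/1 reward cannot provide.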
We validate T2I-Eval-R1 on T2I-Eval [31], TIFA v1.0 [16], and the ImageReward human preference dataset [36], and demonstrate new state-of-the-art Spearman and Kendall correlations with human judgments. Our method not only surpasses strong open-source and GPT-4o-based baselines, but also provides interpretable rationales for each prediction, facilitating transparency and human trust.

2 Related Work

2.1 Evaluation Methods for Synthesized Images

Early evaluation of text-to-image (T2I) generation focused on global, distributional metrics that trade interpretability for efficiency. Inception Score (IS) quantifies image quality and diversity via class-prediction confidence and variety from a pretrained Inception-v3 network [29], while Fréchet Inception Distance (FID) measures the difference in mean and covariance between real and generated image embeddings [14]. Although these metrics are computationally cheap and widely used, they do not assess semantic fidelity to the input text and provide no insight into specific errors.

To incorporate semantic alignment, joint vision–language embeddings were introduced. CLIPScore computes the cosine similarity between CLIP-encoded image and text representations to approximate text–image alignment without references [13], and BLIPv2Score refines this approach with richer pretrained multimodal representations [21]. Despite improved alignment, these methods remain opaque, yielding a single score without explanatory reasoning.

Recognizing the need for interpretability, subsequent work leveraged large language models (LLMs) and visual question answering (VQA). LLMScore decomposes prompts into subquestions answered through text-only reasoning, enabling compositional fidelity checks [23], while TIFA frames evaluation as VQA over prompt-derived questions to deliver fine-grained faithfulness via QA accuracy [16]. These task-decomposed strategies provide more transparency but incur
high computational overhead and rely on handcrafted pipelines.

More recently, multimodal LLMs (MLLMs) have been applied directly as evaluators, generating natural-language rationales alongside scalar judgments. For example, VIEScore prompts GPT-4o to produce chain-of-thought critiques that align closely with human preferences [20]. Although effective, these proprietary-API-based systems face challenges in cost, scalability, and reproducibility.

Open-source alternatives seek to overcome these barriers via supervised fine-tuning (SFT). T2I-Eval trains MLLMs on human-annotated or LLM-generated critiques, decomposing evaluation into sub-tasks such as object presence, attribute correctness, and spatial alignment [31]. While this yields interpretable outputs, the dependence on large-scale, fine-grained annotations limits scalability and may introduce synthetic biases.

Overall, evaluation has progressed from coarse statistical metrics to semantically rich, interpretable methods, yet existing approaches struggle to reconcile annotation cost, model accessibility, and explanation quality. This motivates methods that leverage weak or coarse-grained supervision to achieve both interpretability and scalability.

2.2 Reinforcement Learning for Evaluation

The prohibitive cost and limited scalability of supervised fine-tuning on fine-grained critiques have driven a shift toward reinforcement learning (RL) methods that learn from weaker, more abundant signals. At the core of many recent RL evaluators lies Direct Preference Optimization (DPO) [26], which treats a language model itself as the reward model and optimizes it directly on human pairwise rankings. The original DPO formulation showed that this approach can recover strong reward models without separately training a critic network.
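The DPO idea above can be written out directly. The sketch below is a generic scalar form of the standard DPO loss for a single preference pair (not code from any of the cited systems); it assumes summed token log-probabilities are already available for the chosen and rejected responses under both the trainable policy and a frozen reference model.

```python
import math

# Generic sketch of the DPO loss for one preference pair, using scalar
# log-probabilities. beta scales the implicit reward; pi_* / ref_* are the
# policy's and the frozen reference model's log-probs of each response.
def dpo_loss(pi_logp_chosen, pi_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """-log sigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r)))."""
    margin = beta * ((pi_logp_chosen - ref_logp_chosen)
                     - (pi_logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Raising the chosen response's log-ratio above the rejected one's drives the loss toward zero, so the policy's own log-ratios act as the implicit reward and no separate critic network is needed.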
Building on this idea, UnifiedReward applies DPO to multimodal data—training on large-scale human comparisons over images (and even video) to produce either pairwise preference judgments or pointwise scores with minimal annotation overhead [33]. However, its outputs remain opaque scalar utilities, providing no insight into why one image is ranked above another.

An alternative RL paradigm, Group Relative Policy Optimization (GRPO), was introduced in the DeepSeek-R1 work to elicit reasoning behaviors in open LLMs by sampling groups of candidate outputs and updating the policy based on their relative rewards—thereby removing the need for a separate value network and enhancing stability [10]. Subsequent work in multimodal reasoning has extended GRPO with richer advantage estimators and auxiliary critics, demonstrating its effectiveness for various tasks [17, 37]. For example, UnifiedReward-Think distills chain-of-thought rationales from GPT-4o on a seed preference set and then refines its multimodal policy via rejection sampling and GRPO [32]. The resulting model emits free-form explanations alongside each preference decision, improving transparency—but it still relies exclusively on pairwise data, limiting its ability to make use of single-wise evaluation datasets or perform single-image scoring. Generally speaking, to date these GRPO-based efforts have focused on discrete correctness or abstract reasoning benchmarks rather than on semantic text-to-image evaluation, and they still employ binary or thresholded reward schemes that fail to differentiate near-misses on a graded scale.

3 Method

3.1 Task Formulation

We denote by P a natural-language text prompt and by I (or (I_A, I_B)) one or two images generated by a text-to-image generative model in response to P. Let D be the set of evaluation dimensions, and let G be the corresponding set of guidelines that define how each dimension should be interpreted (e.g., numeric ranges,
anchor descriptions, or relational criteria). We define an evaluator:

$E: (P,\ I \text{ or } (I_A, I_B),\ D,\ G) \longmapsto (r, q)$   (1)

where the outputs are: (1) r, a chain-of-thought rationale in natural language; (2) q, a quantitative judgment, which may take one of two forms: a single-image score $s \in [s_{\min}, s_{\max}]$ for single-wise evaluation; or a preference choice $c \in \{A, B, T\}$ (T for 'tie') for pairwise comparison.

3.1.1 Single-Wise Evaluation

For perceptual-quality dimensions $D_{PQ}$ (e.g., sharpness, color fidelity)—which concern properties intrinsic to the image I and do not typically depend on the prompt—the evaluator produces:

$(r, s) = E(I, D_{PQ}, G),$   (2)

where $s \in [s_{\min}, s_{\max}]$. Likewise, for semantic-consistency dimensions $D_{SC}$ (e.g., object presence, attribute correctness, relational alignment)—which measure alignment between P and I—the evaluator can be written as:

$(r, s) = E(P, I, D_{SC}, G).$   (3)

By distinguishing these two orthogonal axes, our formulation subsumes both standalone image quality metrics and prompt-conditioned alignment checks within a single, unified operator E.

3.1.2 Pairwise Evaluation

When comparing two images $I_A$ and $I_B$ generated from the same prompt P, traditional preference models often yield a discrete choice $c \in \{A, B, T\}$. We extend this to a continuous preference confidence by defining:

$(r, p_{\text{conf}}) = E(P, I_A, I_B, D, G), \quad p_{\text{conf}} \in [0, 1],$   (4)

where $p_{\text{conf}} = 1.0$ indicates definitive preference for $I_A$, 0.0 for $I_B$, 0.5 denotes comparable quality, and intermediate values reflect uncertain differences. This continuous encoding captures nuanced degrees of comparative quality and supports downstream differentiation during model optimization.

3.1.3 Modularity and Generality

Since single-vs-pairwise and perceptual-vs-semantic represent two independent binary choices, our framework defines a 2×2 taxonomy of evaluation modes.
By parameterizing E on D and G, we enable:

• Arbitrary dimension sets: New criteria (e.g., aesthetic style, prompt-specific object counts) can be introduced simply by augmenting D and supplying appropriate guidelines in G, without modifying E's core logic.
• Flexible input configurations: Both single-wise and pairwise evaluations are handled by the same functional form.
• Interpretable outputs: The chain-of-thought r provides transparent justification for each numeric judgment q.

In practice, we implement E via a templated prompt construction that instantiates P, I or (I_A, I_B), the chosen D, and the instructions G into a single query for an MLLM. The model then returns (r, q), yielding both an explanation and its corresponding scalar or confidence score. This formalism lays the foundation for our subsequent optimizing objective, dataset construction, and prompt-design procedures.

3.2 Optimizing Objective

Pairwise Ranking Objective. Early LLM-based reward models are trained on pairwise human preference data using a ranking objective derived from the Bradley–Terry formulation [4]. Concretely, given an input x (which may include a prompt and one or two images) and two candidate responses $y_c$ (chosen) and $y_r$ (rejected), the reward model $r_\theta$ with parameters θ is optimized to satisfy

$\mathcal{L}_{\text{ranking}} = -\log \sigma\big(r_\theta(x, y_c) - r_\theta(x, y_r) - m(r)\big) + \big(r_\theta(x, y_c) + r_\theta(x, y_r)\big)^2,$   (5)

where σ is the sigmoid function, m(r) is a preference margin [30], and the $\big(r_\theta(x, y_c) + r_\theta(x, y_r)\big)^2$ term centers the reward distribution [7]. While this loss yields models capable of pairwise ranking, it does not endow them with the ability to generate interpretable rationales, as they output only scalar utilities without a chain-of-thought explanation. Moreover, models trained with this objective rely heavily on pairwise
preference data and cannot make full use of single-wise annotated data during training, which limits their broader applicability.

MLE Objective. By contrast, generative evaluators frame T2I evaluation as sequence prediction and are typically trained with a maximum likelihood estimation (MLE) loss. It can be adapted to both single-wise and pairwise evaluation by altering the evaluation prompt. Given an evaluation input x and a target token sequence $y = y_{1:T}$, the MLE objective is

$\mathcal{L}_{\text{MLE}} = -\sum_{i=1}^{T} \log p_\theta\big(y_i \mid x, y_{<i}\big).$   (6)

This approach forces the model to reproduce exactly the reference rationale and score, heavily penalizing even minor deviations—e.g., scoring 8 when the reference is 7 is treated as equally wrong as scoring 3—thus undermining smooth credit assignment for near-correct outputs [34].

T2I-Eval-R1 Objective. To unify interpretability and robustness, we adopt Group Relative Policy Optimization (GRPO) as our primary training paradigm. GRPO extends PPO-style policy optimization to outcome-based, online updates without requiring a separate value network. Let $\pi_\theta$ be the current generative evaluator policy and G the group size (i.e., the number of sampled outputs per prompt q). We optimize

$\mathcal{L}_{\text{GRPO}}(\theta) = -\mathbb{E}_{q \sim P(Q),\ \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\text{old}}}(O \mid q)} \Bigg[\frac{1}{G} \sum_{i=1}^{G} \min\!\Big(\frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{\text{old}}}(o_i \mid q)} A_i,\ \mathrm{clip}\Big(\frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_{\text{old}}}(o_i \mid q)},\, 1-\epsilon,\, 1+\epsilon\Big) A_i\Big) - \beta\, D_{\mathrm{KL}}\big(\pi_\theta \,\|\, \pi_{\text{ref}}\big)\Bigg]$   (7)

where the $A_i$ are advantages normalized from bespoke rewards $\{r_i\}_{i=1}^{G}$, β is a KL-penalty coefficient, and $\pi_{\text{ref}}$ is the fixed reference policy (often taken as the initial policy $\pi_{\theta_0}$) [10]. This objective encourages stable updates while preserving interpretability via generative chain-of-thought outputs.

Most prior uses of GRPO in LLM reasoning tasks employ binary (0/1) rewards: an output is scored 1 if "correct" (e.g., it solves a math problem) or 0 otherwise, treating all errors equally [10, 18].
Such a design works well when correctness is unambiguous—incorrect reasoning always yields a wrong final answer, or any non-answer action (e.g., issuing a search query during reasoning) is hardly detrimental. However, text-to-image evaluation is inherently subjective and graded on a continuous scale. Consider two predicted scores for a reference quality of 7 (on [0, 10]): predicting 8 is closer to the reference and arguably preferable to predicting 3, yet a binary reward would penalize both equally. To address this, we introduce continuous rewards that smoothly reflect distance from the reference:

$R_{\text{single}}(s_{\text{pred}}, s_{\min}, s_{\max}, s_{\text{ref}}) = 1 - 2\left|\frac{\mathrm{clip}(s_{\text{pred}}, s_{\min}, s_{\max}) - s_{\text{ref}}}{s_{\max} - s_{\min}}\right|,$   (8)

$R_{\text{pair}}(p_{\text{pred}}, p_{\text{ref}}) = 1 - 2\left|\mathrm{clip}(p_{\text{pred}}, 0, 1) - p_{\text{ref}}\right|,$   (9)

which map deviations into [−1, 1], providing finer gradients when predictions are near the reference. We call this GRPO objective with continuous rewards the T2I-Eval-R1 objective. By replacing the brittle MLE loss and the opaque ranking loss with the T2I-Eval-R1 objective, the MLLM-based evaluator learns to produce both accurate scalar judgments and coherent rationales across single-wise and pairwise settings without expensive fine-grained annotation.

3.3 Training Datasets

To evaluate our T2I-Eval-R1 objective under both single-wise and pairwise paradigms, we assemble two distinct training corpora by re-sampling and organizing existing public datasets.

Single-wise Evaluation Corpus. From the original 13,000 text–image pairs in the T2I-Eval training set [31], we randomly sample 9,000 examples for each of three evaluation dimensions—appearance quality, intrinsic attribute consistency, and relationship attribute consistency—and 9,000 examples for overall evaluation. By pairing each prompt–image instance with its corresponding dimension set
and guidelines, we obtain a 36,000-sample dataset in which each sample is associated with a reference score ranging from 0 to 10. This balanced sampling ensures uniform coverage across dimensions during policy learning and avoids over-representation of any single aspect.

Pairwise Evaluation Corpus  Our pairwise dataset is derived from the human preference rankings of ImageRewardDB [36], which assign each generated image a quality level from 1 (best) to 5 (worst). We construct image pairs whose rating-level difference ∆r ∈ {1, 2, 3, 4}. Recognizing that pairs with small ∆r are inherently closer in quality while those with large ∆r are easier to distinguish, we weight samples in the ratio 1 : 2 : 2 : 1 for ∆r = 1, 2, 3, 4, respectively. Within each ∆r subset, we balance positive pairs (I_A rated better than I_B) and negative pairs in a 1 : 1 ratio. After resampling, this yields a training set of around 35,000 pairs. Together, these two corpora provide robust, dimension-balanced supervision for both scalar scoring and continuous-confidence learning under our reinforcement-learning paradigm.

3.4 Prompt Design

To ensure both flexibility and interpretability, we assemble every evaluation prompt via a four-block template that instantiates the evaluator function E (from Section 3.1). Each block is parameterized by the current evaluation mode (single vs. pair), the target dimensions D, and the associated guidelines G. This design allows us to swap in new criteria or switch protocols without retraining the evaluator. The four blocks are: (1) Task Description: briefly states the evaluator's role and the active dimensions; (2) Annotation Inputs: provides the text prompt and one or two generated images; (3) Evaluation Guidelines: lists the dimension definitions and any scoring anchors; (4) Output Format: specifies exactly what the model must return, enclosed in distinct tags for parseability.
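The four-block assembly just described can be sketched in a few lines. This is a hypothetical illustration: the function name, block wording, and parameter names are our own, not the released template (the actual template is given in Appendix F).

```python
# Illustrative sketch of a four-block evaluation-prompt builder;
# wording and signatures are assumptions, not the paper's released code.
def build_prompt(mode, dimensions, guidelines, inputs):
    task = ("# Task Description\nYou are an expert evaluator for "
            "text-to-image generation. Active dimensions: "
            + ", ".join(dimensions) + ".")
    annot = "# Annotation Inputs\n" + "\n".join(
        f"- {name}: {value}" for name, value in inputs.items())
    guide = "# Evaluation Guidelines\n" + "\n".join(
        f"- {g}" for g in guidelines)
    if mode == "single":
        out = ("# Output Format\nReturn <think>...</think> and a score "
               "in <answer>...</answer>.")
    else:  # pairwise
        out = ("# Output Format\nReturn <think>...</think> and a "
               "confidence in [0, 1] that Output A is better, "
               "in <answer>...</answer>.")
    return "\n\n".join([task, annot, guide, out])

prompt = build_prompt(
    mode="single",
    dimensions=["appearance quality"],
    guidelines=["Judge realism and aesthetics of entities."],
    inputs={"User Input": "<text>", "Generated Output": "<image>"},
)
```

Because each block is built independently, swapping dimensions, toggling the protocol, or editing the score range touches only one block, which is the modularity the template is designed for.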
Because each segment is defined as a separate block, our template supports:

• Dimension Swapping: swap in new d_i and G_i without retraining.
• Protocol Switching: toggle between single-wise and pairwise modes without altering the template logic.
• Range Adjustment: edit [s_min, s_max] to match actual requirements (e.g., 0–5 or 1–100).

This modularity ensures that the evaluator can be rapidly re-targeted to novel evaluation tasks, maintaining both scalability and interpretability across diverse T2I evaluation scenarios. Please refer to Appendix F for more details about the prompt template.

4 Experiments

4.1 Experiment Setup

Benchmark Datasets  We assess our models trained with the T2I-Eval-R1 objective on three public benchmarks, each targeting a distinct evaluation mode: (1) T2I-Eval [31], targeting single-wise evaluation on the same dimensions as the training set; (2) TIFA v1.0 [16], targeting single-wise evaluation on an unseen faithfulness dimension; (3) ImageReward [36], targeting pairwise evaluation. Please refer to Appendix C for more details.

Baseline Methods  To contextualize our results, we compare our method against a variety of existing methods, including FID [14], CLIPScore [13], ImageReward [36], and VIEScore [20], among others. Please refer to Appendix D for more details.

Models  Given the various datasets and evaluation protocols, we design several settings to train multiple models that achieve optimal performance in different scenarios. The settings are:

• T2I-Eval-R1 Base: This model
is trained with the full single-wise T2I-Eval dataset described in Section 3.3. It can be used for either single-wise evaluation or pairwise evaluation performed in the single-wise manner;

• T2I-Eval-R1 Enhance: Starting from T2I-Eval-R1 Base, we further perform rejection sampling on the T2I-Eval dataset, resulting in a small enhanced dataset of 4,000 samples for further training. This variant targets the harder evaluation tasks;

• T2I-Eval-R1 Pair: This model is trained with the full pairwise ImageRewardDB dataset described in Section 3.3 and can be used only for pairwise evaluation;

• T2I-Eval-R1 General: To explore whether a single model can perform text-to-image evaluation in both the single-wise and pairwise manners simultaneously, we train this model on the mixed dataset of T2I-Eval and ImageRewardDB. It can be adapted to any evaluation protocol defined in Section 3.1.

Please refer to Appendix E for more training details.

4.2 Experimental Results

Based on the capabilities of the different variants in the T2I-Eval-R1 series, we conduct targeted evaluations for each of them and validate whether each setting pushes the capability of the evaluators further. We first evaluate T2I-Eval-R1 Enhance on the single-wise T2I-Eval benchmark, aiming to verify the ability of the T2I-Eval-R1 objective to fit human preferences on given dimensions and to compare it with strong baseline methods. The experimental results are shown in Table 1. From these results, we draw the following insights:

Table 1: Comparison of previous methods and ours on the T2I-Eval benchmark, with top scores of each category in bold and top scores of all methods in red.
Method | Appearance ρ / τ | Intrinsic ρ / τ | Relationship ρ / τ | Overall ρ / τ
Inter-Annotator | 84.29 / 74.41 | 85.74 / 78.32 | 94.52 / 90.28 | 86.92 / 78.14
Traditional Methods
FID [14] | -10.75 / -7.56 | -9.65 / -6.88 | -3.54 / -2.70 | -12.31 / -8.62
LPIPS AlexNet [38] | -5.46 / -3.86 | -6.83 / -4.54 | -11.68 / -8.63 | -12.44 / -8.56
DreamSim [9] | -12.48 / -9.04 | -14.81 / -10.83 | -5.56 / -4.14 | -13.82 / -9.68
CLIPScore [13] | 17.02 / 11.34 | 10.68 / 7.56 | 13.12 / 9.49 | 15.05 / 10.16
BLIPv2Score [21] | 18.97 / 13.04 | 19.67 / 13.57 | 28.07 / 20.89 | 21.52 / 14.23
PickScore [19] | 28.35 / 20.28 | 33.25 / 23.90 | 39.73 / 29.63 | 39.44 / 28.03
ImageReward [36] | 31.29 / 21.73 | 43.88 / 31.74 | 36.43 / 26.96 | 40.46 / 28.39
LLM-based & MLLM-based Methods
LLMScore GPT-4 [23] | 18.65 / 13.19 | 29.45 / 21.76 | 21.42 / 16.45 | 30.96 / 22.28
TIFA mPLUG [16] | 21.40 / 15.15 | 39.82 / 29.83 | 22.65 / 17.92 | 32.52 / 24.55
DSG Dependent [5] | 30.00 / 22.45 | 43.02 / 33.64 | 39.34 / 31.89 | 45.82 / 35.12
DSG Independent | 30.87 / 23.21 | 43.50 / 34.19 | 44.19 / 33.49 | 47.04 / 36.55
VQAScore CLIP-FlanT5-XXL [22] | 43.59 / 30.35 | 48.50 / 35.32 | 42.70 / 32.01 | 51.16 / 37.12
VIEScore MiniCPM-V-2.6 | 32.25 / 23.63 | 27.42 / 20.68 | 31.75 / 24.82 | 29.41 / 22.50
VIEScore GPT-4o [20] | 48.60 / 35.74 | 46.31 / 34.58 | 43.90 / 34.16 | 55.45 / 41.70
T2I-Eval GPT-4o [31] | 44.29 / 32.66 | 47.78 / 35.84 | 39.06 / 31.58 | 55.66 / 42.85
T2I-Eval MiniCPM-V-2.6* | 45.03 / 33.82 | 42.87 / 32.88 | 41.72 / 33.92 | 58.02 / 44.09
Our Method
T2I-Eval-R1 Enhance | 47.68 / 36.20 | 51.69 / 40.72 | 51.31 / 40.43 | 58.74 / 43.80

T2I-Eval-R1 Enhance achieves nearly the strongest in-domain correlations across all three training dimensions  On the appearance quality, intrinsic, and relationship attribute consistency dimensions—exactly those seen during training—our Enhance variant matches or surpasses both
proprietary (GPT-4o-based) and open-source baselines. Notably, our method is significantly ahead of previous baselines on intrinsic and relationship attribute consistency, showing a strong capability to learn human preferences from simple, coarse-grained annotated scores. This validates that the T2I-Eval-R1 model produces more accurate, interpretable scores when directly aligned to the training axes. The superior performance of our model relative to GPT-4o-based methods also confirms that strong T2I evaluation is no longer limited to proprietary APIs.

Sub-optimal performance on the evaluation of Appearance Quality  While the T2I-Eval-R1 objective demonstrates strong performance on two evaluation dimensions, surpassing prior methods by a large margin, its effectiveness on appearance quality remains slightly inferior to that of the GPT-4o-based VIEScore. Although no existing method using open-source models bridges this gap, it is still worth exploring whether the gap stems from deficiencies in open-source MLLMs' visual ability, and whether it can be closed with improved optimization strategies.

Table 2: Comparison of previous methods and ours on the TIFA v1.0 benchmark, with top scores of each category in bold and top scores of all methods in red.
Method | Faithfulness ρ / τ
Inter-Annotator | 72.22 / 63.85
CLIPScore [13] | 33.82 / 24.56
BLIPv2Score [21] | 40.49 / 29.44
PickScore [19] | 42.79 / 31.37
ImageReward [36] | 62.11 / 46.59
LLMScore GPT-4 [23] | 49.69 / 37.53
TIFA mPLUG [16] | 59.22 / 47.17
DSG Dependent [5] | 60.46 / 48.93
DSG Independent | 61.08 / 49.54
VQAScore CLIP-FlanT5-XXL [22] | 69.50 / 53.21
VIEScore MiniCPM-V-2.6 | 37.23 / 27.85
VIEScore GPT-4o [20] | 53.88 / 40.65
T2I-Eval MiniCPM-V-2.6* | 60.61 / 46.92
T2I-Eval-R1 Base | 69.03 / 53.53
T2I-Eval-R1 General | 70.43 / 55.10

After verifying the ability of the T2I-Eval-R1 objective to fit human preferences on given dimensions, we further evaluate it on the TIFA v1.0 benchmark to validate its generalization to an out-of-domain (OOD) evaluation dimension. Here, apart from the Base variant, we additionally use T2I-Eval-R1 General for evaluation, since the inclusion of pairwise data from ImageRewardDB may strengthen the general T2I evaluation ability of the evaluator and prevent the MLLM from overfitting to the given dimensions of the T2I-Eval dataset. The results are shown in Table 2. Our General variant registers the highest Spearman's ρ (70.43) and Kendall's τ (55.10), outperforming strong LLM- and MLLM-based methods and slightly surpassing our Base variant. This demonstrates our framework's flexibility: training on coarse scores suffices to adapt the same policy to new semantic criteria, and including datasets from various distributions can push the boundary further.

Beyond the single-wise experiments, we also compare our method with several strong baselines in pairwise evaluation on ImageReward. As shown in Table 3, our Pair variant achieves a preference accuracy of 66.07—surpassing prior reward-model and generative chain-of-thought baselines—showing that our confidence-based formulation captures nuanced human preferences more accurately than the previous binary approach.
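The continuous rewards behind these results, Eqs. (8) and (9), amount to a few lines of clipping and rescaling. The transcription below is ours, not the authors' released code; it simply restates the two formulas.

```python
def clip(x, lo, hi):
    """Clamp x into the interval [lo, hi]."""
    return max(lo, min(hi, x))

def r_single(s_pred, s_min, s_max, s_ref):
    """Continuous single-wise reward of Eq. (8): 1 at the reference
    score, decreasing linearly with distance from it."""
    return 1.0 - 2.0 * abs(clip(s_pred, s_min, s_max) - s_ref) / (s_max - s_min)

def r_pair(p_pred, p_ref):
    """Continuous pairwise reward of Eq. (9) on confidences in [0, 1]."""
    return 1.0 - 2.0 * abs(clip(p_pred, 0.0, 1.0) - p_ref)

# Near-misses earn more reward than distant predictions: with a
# reference of 7 on [0, 10], predicting 8 gives 0.8, predicting 3 gives 0.2.
```

This is exactly the property the ablation in Section 4.4 probes: collapsing these functions to a 0/1 correctness check removes the gradient between near-correct and far-off predictions.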
Table 3: Comparison of pairwise preference accuracy between our method and baselines on the ImageReward benchmark.

Method | Preference Acc.
CLIPScore | 54.82
BLIPScore | 57.76
ImageReward | 65.14
UnifiedReward | 55.90
UnifiedReward-Think | 64.24
T2I-Eval-R1 | 66.07

4.3 Ablation Study on the Rejection Sampling of T2I-Eval-R1 Enhance

Table 4: Comparison on the T2I-Eval benchmark of our Base variant of T2I-Eval-R1 and the Enhance variant, which introduces rejection sampling and further
GRPO.

Method | Appearance ρ / τ | Intrinsic ρ / τ | Relationship ρ / τ | Overall ρ / τ
T2I-Eval-R1 Base | 47.07 / 35.15 | 52.10 / 40.98 | 50.46 / 39.73 | 58.13 / 42.76
T2I-Eval-R1 Enhance | 47.68 / 36.20 | 51.69 / 40.72 | 51.31 / 40.43 | 58.74 / 43.80

To validate the effectiveness of the rejection sampling and further GRPO in T2I-Eval-R1 Enhance, we compare its performance with the Base variant on the T2I-Eval benchmark. As shown in Table 4, removing the enhancement step brings a slight degradation in evaluation ability on appearance quality and relationship attribute consistency. Surprisingly, performance on intrinsic attribute consistency improves slightly without it, showing that the enhancement does not always bring gains to the evaluator. Overall, however, including this step still yields the best overall evaluation performance.

4.4 Ablation Study on Continuous Rewards in Outcome-based Reinforcement Learning

As described in Section 3.2, we propose a novel continuous reward function for GRPO training in both single-wise and pairwise evaluation tasks. To assess the effectiveness of this design, we conduct an ablation study by replacing the continuous reward with a binary reward function, where correct answers receive a reward of 1 and incorrect ones receive 0. All other settings remain unchanged for both T2I-Eval-R1 Base and T2I-Eval-R1 Pair. As shown in Table 5 and Table 6, replacing the continuous reward with a binary one leads to a noticeable performance drop across all benchmarks, particularly in the single-wise evaluation tasks. In particular, T2I-Eval-R1 Base-Binary achieves performance comparable to DSG [5] on both T2I-Eval

Table 5: Comparison of T2I-Eval-R1 Base trained with continuous and binary rewards. We report the results of overall quality on T2I-Eval and faithfulness on TIFA v1.0.
Method | T2I-Eval ρ / τ | TIFA v1.0 ρ / τ
T2I-Eval-R1 Base | 58.74 / 43.80 | 69.03 / 53.53
T2I-Eval-R1 Base-Binary | 47.25 / 35.56 | 61.03 / 46.92

Table 6: Comparison of T2I-Eval-R1 Pair trained with continuous and binary rewards. We report preference accuracy (%) on the ImageReward test set.

Method | Preference Acc.
T2I-Eval-R1 Pair | 66.07
T2I-Eval-R1 Pair-Binary | 65.19

and TIFA v1.0, yet significantly underperforms T2I-Eval-R1 Base and other strong baselines. For pairwise evaluation, the performance gap between continuous and binary rewards is smaller. This may be attributed to the relatively simpler reward structure in pairwise tasks (i.e., three possible choices versus a continuous score spectrum). Overall, this ablation study demonstrates the effectiveness of the proposed continuous reward function in training MLLM-based evaluators, particularly in enhancing single-wise evaluation performance.

4.5 Quality of Interpretable Evaluation

We conducted complementary studies to assess the quality of the chain-of-thought rationales generated by T2I-Eval-R1, comparing against the LLMScore, VIEScore, UnifiedReward-Think, and T2I-Eval evaluators. For single-wise evaluation, we first leveraged GPT-4o as an automatic judge to rate each rationale against the human-annotated gold explanations from the T2I-Eval benchmark. Then, to guard against potential biases of a model-based judge, we performed a human preference study on a sampled subset, measuring direct annotator agreement with GPT-4o's rankings. Both evaluations confirm that T2I-Eval-R1 produces superior, more human-aligned rationales. For pairwise evaluation, since the ImageReward dataset provides no reference rationales, we performed only human annotation to evaluate rationale quality.

4.5.1 Automatic GPT-4o Assessment

Using
the T2I-Eval benchmark, we prompted GPT-4o to score each method's rationale for clarity, completeness, and fidelity to the reference explanation. The results are illustrated in Figure 1.

Overall wins: T2I-Eval-R1 achieved the highest average quality score in the Overall evaluation, outperforming LLMScore, VIEScore, and the T2I-Eval evaluators.

Dimension-wise superiority: Across each of the three dimensions—appearance quality, intrinsic and relationship attribute consistency—T2I-Eval-R1's rationales consistently received higher ratings than T2I-Eval's, indicating that our method outperforms even naive supervised fine-tuning with a larger amount of fine-grained training data drawn from the same distribution as the reference rationales.

Quality stability: Across all comparisons in the interpretable evaluation, our method consistently has the lowest standard deviation of scores, revealing that the quality of its interpretable rationales is more stable than that of previous methods.

Figure 1: Comparison of our method with representative baselines in the results of GPT-4o-based meta-evaluation for interpretable evaluation.

While GPT-4o provides a scalable, fine-grained critique of rationale text, its judgments may be subject to its own reasoning biases and prompt sensitivity. To ensure these automatic scores reflect real human preferences, we complemented this analysis with a direct human study.

4.5.2 Human Preference Study

Table 7: Quality of subjective evaluation in our method compared with baseline methods.
Category | Overall Evaluation | | | Single Dimension (T2I-Eval) | | | Pairwise
 | LLMScore | VIEScore | T2I-Eval | Appearance | Intrinsic | Relationship | UnifiedReward-Think
 | -T / +T | -T / +T | -T / +T | -T / +T | -T / +T | -T / +T |
Human Preference | 62.0 / 94.0 | 14.0 / 79.0 | 40.0 / 89.0 | 46.0 / 78.0 | 60.0 / 84.0 | 34.0 / 94.0 | 55.0
GPT-4o Preference | 77.1 / 84.4 | 51.2 / 63.1 | 56.1 / 66.1 | 34.2 / 70.1 | 49.5 / 73.8 | 40.5 / 73.4 | -
GPT-4o Accuracy | 82.4 | 58.6 | 63.7 | 55.9 | 63.2 | 80.0 | -

We conduct human annotation of rationale-text quality in both single-wise and pairwise settings. For single-wise evaluations, we randomly sampled 200 overall-evaluation examples for the comparison with each baseline, and 50 examples per dimension for the comparison with T2I-Eval. Annotators were asked which explanation they preferred in terms of interpretability and usefulness [35]. We then measured:

• Preference accuracy of GPT-4o: the fraction of cases where GPT-4o's winner matched the human judgement.
• Human preference rate: the percentage of samples where annotators favored T2I-Eval-R1 over the competing method, excluding (-T) or including (+T) ties.

The results are shown in Table 7. From these results, we conclude: (1) high human agreement (67.3% on average) with GPT-4o's rankings, validating GPT-4o as a reliable proxy judge for rationale quality; and (2) a clear human preference (with ties) for T2I-Eval-R1's explanations in over 75% of comparisons, demonstrating that its CoT is not only more precise according to an LLM judge but also genuinely more helpful to human readers. For pairwise evaluation, we compared the rationales from our method and UnifiedReward-Think; the win rate of our method is 55%, indicating better pairwise reasoning capabilities for the T2I-Eval-R1 series models.

5 Conclusion

We have presented T2I-Eval-R1, a framework that trains open-source MLLMs to produce both scalar scores and CoT rationales for T2I evaluation using only coarse-grained quality labels. By integrating GRPO with continuous
reward functions, the optimized models yield interpretable evaluations that align more closely with human judgments than prior approaches across diverse dimensions. Extensive experiments demonstrate that T2I-Eval-R1 not only matches or exceeds GPT-4o-based baselines in correlation with human judgments, but also generates better explanations, as confirmed by both automatic evaluation and human annotation. Importantly, it achieves these gains without reliance on fine-grained annotations, offering a scalable and generalizable solution for evaluating T2I generative models. By lowering the barrier to interpretable automatic evaluation, T2I-Eval-R1 paves the way for more rigorous, transparent evaluation of generative systems in both research and deployment.

References

[1] Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al. (2023) GPT-4 technical report. arXiv preprint arXiv:2303.08774.
[2] Anil, R., Borgeaud, S., Wu, Y., Alayrac, J., Yu, J., Soricut, R., Schalkwyk, J., Dai, A. M., Hauth, A., Millican, K., et al. (2023) Gemini: A family of highly capable multimodal models. CoRR abs/2312.11805.
[3] Bai, S., Chen, K., Liu, X., Wang, J., Ge, W., Song, S., Dang, K., Wang, P., Wang, S., Tang, J., et al. (2025) Qwen2.5-VL technical report. arXiv preprint arXiv:2502.13923.
[4] Bradley, R. A. & Terry, M. E.
(1952) Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika 39(3/4):324–345.
[5] Cho, J., Hu, Y., Baldridge, J., Garg, R., Anderson, P., Krishna, R., Bansal, M., Pont-Tuset, J., & Wang, S. (2024) Davidsonian scene graph: Improving reliability in fine-grained evaluation for text-to-image generation. In ICLR.
[6] Dao, T. (2023) FlashAttention-2: Faster attention with better parallelism and work partitioning. arXiv preprint arXiv:2307.08691.
[7] Eisenstein, J., Nagpal, C., Agarwal, A., Beirami, A., D'Amour, A., Dvijotham, D., Fisch, A., Heller, K., Pfohl, S., Ramachandran, D., et al. (2023) Helping or herding? Reward model ensembles mitigate but do not eliminate reward hacking. arXiv preprint arXiv:2312.09244.
[8] Esser, P., Kulal, S., Blattmann, A., Entezari, R., Müller, J., Saini, H., Levi, Y., Lorenz, D., Sauer, A., Boesel
, F., et al. (2024) Scaling rectified flow transformers for high-resolution image synthesis. In Forty-first International Conference on Machine Learning.
[9] Fu, S., Tamir, N., Sundaram, S., Chai, L., Zhang, R., Dekel, T., & Isola, P. (2023) DreamSim: Learning new dimensions of human visual similarity using synthetic data. Advances in Neural Information Processing Systems 36:50742–50768.
[10] Guo, D., Yang, D., Zhang, H., Song, J., Zhang, R., Xu, R., Zhu, Q., Ma, S., Wang, P., Bi, X., et al. (2025) DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948.
[11] Gupta, S., Ahuja, C., Lin, T.-Y., Roy, S. D., Oosterhuis, H., Rijke, M., & Shukla, S. N. (2025) A simple and effective reinforcement learning method for text-to-image diffusion fine-tuning. arXiv preprint arXiv:2503.00897.
[12] Hara, K., Adams, A., Milland, K., Savage, S., Callison-Burch, C., & Bigham, J. P. (2018) A data-driven analysis of workers' earnings on Amazon Mechanical Turk. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pages 1–14.
[13] Hessel, J., Holtzman, A., Forbes, M., Bras, R. L., & Choi, Y. (2021) CLIPScore: A reference-free evaluation metric for image captioning. arXiv preprint arXiv:2104.08718.
[14] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S. (2017) GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems 30.
[15] Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W., et al. (2022) LoRA: Low-rank adaptation of large language models. ICLR 1(2):3.
[16] Hu, Y., Liu, B., Kasai, J., Wang, Y., Ostendorf, M., Krishna, R., & Smith, N. A. (2023) TIFA: Accurate and interpretable text-to-image faithfulness evaluation with question answering.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20406–20417.
[17] Huang, W., Jia, B., Zhai, Z., Cao, S., Ye, Z., Zhao, F., Xu, Z., Hu, Y., & Lin, S. (2025) Vision-R1: Incentivizing reasoning capability in multimodal large language models. arXiv preprint arXiv:2503.06749.
[18] Jin, B., Zeng, H., Yue, Z., Yoon, J., Arik, S., Wang, D., Zamani, H., & Han, J. (2025) Search-R1: Training LLMs to reason and leverage search engines with reinforcement learning. arXiv preprint arXiv:2503.09516.
[19] Kirstain, Y., Polyak, A., Singer, U., Matiana, S., Penna, J., & Levy, O. (2023) Pick-a-Pic: An open dataset of user preferences for text-to-image generation. Advances in Neural Information Processing Systems 36:36652–36663.
[20] Ku, M., Jiang, D., Wei, C., Yue, X., & Chen, W. (2024) VIEScore: Towards explainable metrics for conditional image synthesis evaluation. In Proceedings of the 62nd
Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12268–12290.
[21] Li, J., Li, D., Savarese, S., & Hoi, S. (2023) BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International Conference on Machine Learning, pages 19730–19742. PMLR.
[22] Lin, Z., Pathak, D., Li, B., Li, J., Xia, X., Neubig, G., Zhang, P., & Ramanan, D. (2024) Evaluating text-to-visual generation with image-to-text generation. In A. Leonardis, E. Ricci, S. Roth, O. Russakovsky, T. Sattler, and G. Varol (eds.), Computer Vision – ECCV 2024 – 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part IX, LNCS 15067, pp. 366–384. Springer.
[23] Lu, Y., Yang, X., Li, X., Wang, X. E., & Wang, W. Y. (2023) LLMScore: Unveiling the power of large language models in text-to-image synthesis evaluation. Advances in Neural Information Processing Systems 36:23075–23093.
[24] Peebles, W. & Xie, S. (2023) Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195–4205.
[25] Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J., & Rombach, R. (2023) SDXL: Improving latent diffusion models for high-resolution image synthesis. In The Twelfth International Conference on Learning Representations.
[26] Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., & Finn, C. (2023) Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems 36:53728–53741.
[27] Rajbhandari, S., Rasley, J., Ruwase, O., & He, Y. (2020) ZeRO: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–16. IEEE.
[28] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B.
(2022) High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695.
[29] Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X. (2016) Improved techniques for training GANs. Advances in Neural Information Processing Systems 29.
[30] Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al. (2023) Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
[31] Tu, R.-C., Ma, Z.-A., Lan, T., Zhao, Y., Huang, H., & Mao, X.-L. (2024) Automatic evaluation for text-to-image generation: Task-decomposed framework, distilled training, and meta-evaluation benchmark. arXiv preprint arXiv:2411.15488.
[32] Wang, Y., Li, Z., Zang, Y., Wang, C., Lu, Q., Jin, C., & Wang, J. (2025) Unified multimodal chain-of-thought reward model through reinforcement fine-tuning. arXiv preprint arXiv:2505.03318.
[33] Wang, Y.,
Zang, Y., Li, H., Jin, C., & Wang, J. (2025) Unified reward model for multimodal understanding and generation. arXiv preprint arXiv:2503.05236.
[34] Welleck, S., Kulikov, I., Roller, S., Dinan, E., Cho, K., & Weston, J. (2020) Neural text degeneration with unlikelihood training. In 8th International Conference on Learning Representations, ICLR 2020.
[35] Xu, C., Lan, T., Ji, Y., Yu, C., Wang, W., Gao, J., Dong, Q., Qian, K., Li, P., Bi, W., et al. (2025) DECIDER: A dual-system rule-controllable decoding framework for language generation. IEEE Transactions on Knowledge and Data Engineering.
[36] Xu, J., Liu, X., Wu, Y., Tong, Y., Li, Q., Ding, M., Tang, J., & Dong, Y. (2023) ImageReward: Learning and evaluating human preferences for text-to-image generation. Advances in Neural Information Processing Systems 36:15903–15935.
[37] Zhang, J., Huang, J., Yao, H., Liu, S., Zhang, X., Lu, S., & Tao, D. (2025) R1-VL: Learning to reason with multimodal large language models via step-wise group relative policy optimization. arXiv preprint arXiv:2503.12937.
[38] Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018) The unreasonable effectiveness of deep features as a perceptual metric. In CVPR.
[39] Zhao, Y., Huang, J., Hu, J., Wang, X., Mao, Y., Zhang, D., Jiang, Z., Wu, Z., Ai, B., Wang, A., et al. (2025) SWIFT: A scalable lightweight infrastructure for fine-tuning. In Proceedings of the AAAI Conference on Artificial Intelligence 39, pp. 29733–29735.

A Limitations

Despite its strengths, T2I-Eval-R1 has the following limitations:

Evaluation across unseen dimensions is constrained  Although we demonstrate generalization on the faithfulness dimension from TIFA v1.0, the lack of diverse, high-quality human-annotated benchmarks for other aspects (e.g., aesthetic style, composition rules, cultural context) limits our ability to comprehensively validate robustness across truly novel evaluation criteria.
Model and compute resource scope  All experiments in this work employ the Qwen2.5-VL-7B-Instruct backbone due to our restricted computational budget. Consequently, we have not verified T2I-Eval-R1's effectiveness on other multimodal architectures or larger-scale models, leaving open questions about scaling behavior and cross-model transfer.

B Ethical Considerations

Most of the text-to-image task inputs in our training dataset are sourced from publicly available datasets, ensuring that they pose no harm to individuals or groups. Furthermore, the text generated by multimodal large language models (MLLMs) is carefully curated and processed by human annotators to safeguard privacy and confidentiality. No personally identifiable information (PII) is included. However, it is important to note that the generated images from TIFA v1.0 [16], ImageRewardDB [36], and T2I-Eval [31] may contain harmful content. Despite these potential risks, it is crucial to disclose the full scope of this research, as materials from these datasets have been extensively used in safety research within the community. All annotators are compensated fairly, with an hourly wage
of approximately $5.15 USD, which exceeds the average hourly wage of $3.13 USD on Amazon Mechanical Turk [12].

C Benchmark Datasets

We assess our models trained with the T2I-Eval-R1 objective on three public benchmarks, each targeting a distinct evaluation mode:

• T2I-Eval [31] (MIT License): Contains coarse human-annotated scores on three fine-grained dimensions—appearance quality, intrinsic attribute consistency, and relationship attribute consistency. We train on its training split and report Spearman's ρ and Kendall's τ against its held-out test set to verify in-domain performance.
• TIFA v1.0 [16] (Apache 2.0 License): Focuses on text-to-image faithfulness, a dimension unseen during our training. We run T2I-Eval-R1 in single-wise mode on its test prompts and compute Spearman's ρ and Kendall's τ against the published QA-based faithfulness judgments to measure generalization.
• ImageReward [36] (Apache 2.0 License): Provides large-scale pairwise human preferences. We evaluate our pairwise confidence outputs against its ground-truth comparisons, reporting preference accuracy, to demonstrate our model's ranking capability.

D Baseline Methods

To contextualize our results, we compare our method against four categories of existing evaluators:

• Traditional Metrics (e.g., FID [14], Inception Score [29]): Rely on distributional statistics of deep-network embeddings to measure overall realism and diversity, but lack fine-grained, prompt-conditioned assessments.
• Embedding-based and Learning-based Methods (e.g., CLIPScore [13], BLIPv2Score [21], LPIPS [38], DreamSim [9], PickScore [19], ImageReward [36]): Leverage pretrained vision–language or perceptual models to quantify semantic alignment or visual similarity, offering stronger prompt sensitivity than purely statistical metrics.
• LLM-based Evaluators (e.g., LLMScore [23], TIFA [16]): Use large language models—sometimes combined with visual question answering—to decompose evaluation into subquestions, yielding interpretable scores but often at high computational cost.
• MLLM-based Chain-of-Thought Methods (e.g., VIEScore [20], T2I-Eval [31], UnifiedReward-Think [32]): Instruct multimodal LLMs to generate detailed rationales alongside ratings, achieving state-of-the-art correlations with human judgments while highlighting the gap between proprietary APIs and open-source alternatives.

We select these baselines to cover the spectrum from lightweight, reference-free metrics through sophisticated, rationale-driven evaluators, ensuring a comprehensive assessment of both our in-domain performance and generalization capabilities.

E Training Settings

We optimize the open-source MLLM Qwen2.5-VL-7B-Instruct [3] to serve as a reasoning-based automatic evaluator. To ensure the optimized model effectively captures the comprehensive information embedded in the training corpus, we set the context length to 4,096 tokens during training, accommodating the majority of samples within the dataset. To improve computational efficiency while upholding the performance of the fine-tuned model, we employ Low-Rank Adaptation (LoRA) [15] with a rank of 256 and α of 512. In addition, we adopt several training-acceleration methods, including ZeRO [27] and Flash Attention 2 [6]. Model training was conducted on 4 Nvidia A100-SXM4-80GB GPUs with a group size of 8 over a single epoch. All training is implemented with the SWIFT framework [39].

F Full Prompt Templates for Involved Settings

# Task Description
You are an expert evaluator specialized in assessing the quality of text-to-image generative tasks. Your role is to evaluate the generated image according to the specified evaluation
goal: {dimensions}. This evaluation focuses on the following criteria:
{dimension_explanations}
You should make a well-reasoned judgment based on these criteria.

# Annotation Input
[single-wise]
- User Input: {input_data}
- Generated Output: {output_data}
[pairwise]
- User Input: {input_data}
- Generated Output A: {output_data_a}
- Generated Output B: {output_data_b}

# Evaluation Guidelines
- Carefully analyze the generated output(s) in relation to the input data and defined evaluation criteria.
- Consider the output from the perspective of a human user, with particular attention to aspects relevant to {dimensions}.
- Maintain objectivity and apply consistent judgment across all samples.
[single-wise]
- Provide a single floating-point rating between {rating_min} and {rating_max} based on the evaluation criteria as your annotation score: {evaluation_guidelines}
[pairwise]
- Evaluate which output (A or B) is better based on the evaluation criteria.
- {evaluation_guidelines}

# Output Format
Produce a response of a single {rating_dtype} between {rating_min} and {rating_max} based on the evaluation criteria as your annotation score. First think about the reasoning process for evaluation in the mind and then provide your answer. The reasoning process and answer are enclosed within <think></think> and <answer></answer> tags, respectively, i.e., <think>reasoning process here</think><answer>answer here</answer>
[pairwise]
Please provide a single floating-point number between 0.0 and 1.0 to indicate your confidence that Output A is better than Output B, based on the given evaluation criteria:
- 0.0: Output B is clearly and entirely better than Output A.
- 1.0: Output A is clearly and entirely better than Output B.
- 0.5: Outputs A and B are equally good.
- Values between 0.0 and 1.0 (excluding 0.5): One output is better than the other, but the superiority is partial or uncertain.

Figure 2: Prompt template for T2I-Eval-R1 evaluators. The evaluation prompt is dynamically assembled with this template according to the requirements of the specific task. The parts that differ between the single-wise and pairwise protocols are marked [single-wise] and [pairwise] here; in the original figure they are placed on the left and right side, respectively.
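The dynamic assembly that Figure 2 describes amounts to filling four template blocks and switching the input and output blocks by protocol. The sketch below illustrates this; the function name `build_prompt` and the abbreviated template strings are our own assumptions, not the authors' actual implementation.

```python
# Illustrative sketch of the four-block prompt assembly from Figure 2.
# build_prompt and the shortened template strings are assumptions,
# not the authors' code.

SINGLE_INPUT = ("- User Input: {input_data}\n"
                "- Generated Output: {output_data}")
PAIR_INPUT = ("- User Input: {input_data}\n"
              "- Generated Output A: {output_data_a}\n"
              "- Generated Output B: {output_data_b}")

def build_prompt(mode, dimensions, dimension_explanations,
                 evaluation_guidelines, rating_min, rating_max, **fields):
    """Assemble the four blocks: task description, annotation input,
    evaluation guidelines, and output format (single-wise or pairwise)."""
    task = ("# Task Description\n"
            "You are an expert evaluator specialized in assessing the quality "
            "of text-to-image generative tasks. Your role is to evaluate the "
            "generated image according to the specified evaluation goal: "
            f"{dimensions}.\nThis evaluation focuses on the following criteria:\n"
            f"{dimension_explanations}")
    # Annotation-input block differs between the two protocols.
    inputs = "# Annotation Input\n" + (
        SINGLE_INPUT if mode == "single" else PAIR_INPUT).format(**fields)
    guide = ("# Evaluation Guidelines\n"
             "- Carefully analyze the generated output(s) against the input "
             f"data, with attention to aspects relevant to {dimensions}.\n"
             f"- {evaluation_guidelines}")
    output = ("# Output Format\n"
              f"Produce a single floating-point number between {rating_min} "
              f"and {rating_max}, with reasoning in <think></think> and the "
              "score in <answer></answer> tags.")
    return "\n\n".join([task, inputs, guide, output])

prompt = build_prompt(
    "single",
    dimensions="appearance quality",
    dimension_explanations="* The entity should look realistic ...",
    evaluation_guidelines="Evaluate the Appearance Quality ...",
    rating_min=0.0, rating_max=10.0,
    input_data="<text>", output_data="<image>")
```

Instantiating the same skeleton with different `dimensions` and `evaluation_guidelines` yields the per-setting templates shown in Figures 3 through 8.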
As described in Section 3.4, the four-block prompt assembly framework is illustrated in Figure 2. Various evaluation settings are involved in our experiments; though they share the same framework, each setting has its own evaluation dimensions, definitions, and guidelines. The settings include:

•Single-wise:
– Appearance Quality: as illustrated in Figure 3;
– Intrinsic Attribute Consistency: as illustrated in Figure 4;
– Relationship Attribute Consistency: as illustrated in Figure 5;
– Overall Quality: as illustrated in Figure 6;
– Text-to-Image Faithfulness: as illustrated in Figure 7;
•Pairwise: as illustrated in Figure 8.

# Task Description
You are an expert evaluator specialized in assessing the quality of text-to-image generative tasks. Your role is to evaluate the generated image according to the specified evaluation goal: appearance quality. This evaluation focuses on the following criteria:
* What we mean by “appearance quality” is that the appearance of the entity in the generated image should be realistic, aesthetically pleasing, and align with human intuition.
You should make a well-reasoned judgment based on these criteria.

# Annotation Input
- User Input: <text>
- Generated Output: <image>

# Evaluation Guidelines
- Carefully analyze the generated output(s) in relation to the input data and defined evaluation criteria.
- Consider the output from the perspective of a human user, with particular attention to aspects relevant to appearance quality.
- Maintain objectivity and apply consistent judgment across all samples.
- Provide a single floating-point number between 0.0 and 10.0 based on the evaluation criteria as your annotation score:
Evaluate the Appearance Quality of the generated image:
- Determine whether the entity from the user-provided text is in the generated image. If yes, proceed to the next step. If no, give a score of 0.
- Give a score from 0.0 to 10.0:
  - 0-3: The appearance is not realistic, aesthetically pleasing, or aligned with human intuition at all.
  - 4-7: The appearance is somewhat realistic, aesthetically pleasing, or aligned with human intuition.
  - 8-10: The appearance is very realistic, aesthetically pleasing, or aligned with human intuition.

# Output Format
Produce a response of a single floating-point number between 0.0 and 10.0 based on the evaluation criteria as your annotation score.

Figure 3: Prompt template for Appearance Quality evaluation from the T2I-Eval dataset.

# Task Description
You are an expert evaluator specialized in assessing the
quality of text-to-image generative tasks. Your role is to evaluate the generated image according to the specified evaluation goal: intrinsic attribute consistency. This evaluation focuses on the following criteria:
* What we mean by “intrinsic attribute” is that the attributes are properties of the entity explicitly mentioned in the input text, such as color, size, shape, material, quantity, etc.
You should make a well-reasoned judgment based on these criteria.

# Annotation Input
- User Input: <text>
- Generated Output: <image>

# Evaluation Guidelines
- Carefully analyze the generated output(s) in relation to the input data and defined evaluation criteria.
- Consider the output from the perspective of a human user, with particular attention to aspects relevant to intrinsic attribute consistency.
- Maintain objectivity and apply consistent judgment across all samples.
- Provide a single floating-point number between 0.0 and 10.0 based on the evaluation criteria as your annotation score:
Evaluate the Intrinsic Attribute Consistency of the generated image:
- Compare the intrinsic attributes in the generated image with the user-provided text. If the entity does not exist in the image, give a score of 0. Otherwise, proceed to the next step.
- Give a score from 0.0 to 10.0 reflecting the intrinsic attribute consistency of the generated image:
  - 0-3: The generated attribute is not consistent with the text at all.
  - 4-7: The generated attribute is somewhat consistent with the text. Semantics are similar but not entirely consistent.
  - 8-10: The generated attribute is very consistent with the text.

# Output Format
Produce a response of a single floating-point number between 0.0 and 10.0 based on the evaluation criteria as your annotation score.

Figure 4: Prompt template for Intrinsic Attribute Consistency evaluation from the T2I-Eval dataset.

# Task Description
You are an expert evaluator specialized in assessing the quality of text-to-image generative tasks. Your role is to evaluate the generated image according to the specified evaluation goal: relationship attribute consistency. This evaluation focuses on the following criteria:
* What we mean by “relationship attribute” is that the attributes describe the entity’s relationship with other entities.
You should make a well-reasoned judgment based on these criteria.

# Annotation Input
- User Input: <text>
- Generated Output: <image>

# Evaluation Guidelines
- Carefully analyze the generated output(s) in relation to the input data and defined evaluation criteria.
- Consider the output from the perspective of a human user, with particular attention to aspects relevant to relationship attribute consistency.
- Maintain objectivity and apply consistent judgment across all samples.
- Provide a single floating-point number between 0.0 and 10.0 based on the evaluation criteria as your annotation score:
Evaluate the Relationship Attribute Consistency of the generated image:
- Compare the relationship attributes in the generated image with the user-provided text. If the entity does not exist in the image, give a score of 0. Otherwise, proceed to the next step.
- Give a score from 0.0 to 10.0 reflecting the relationship attribute consistency of the generated image:
  - 0-3: The generated attribute is not consistent with the text at all.
  - 4-7: The generated attribute is somewhat consistent with the text. Semantics are similar but not entirely consistent.
  - 8-10: The generated attribute is very consistent with the text.

# Output Format
Produce a response of a single floating-point number between 0.0 and 10.0 based on the evaluation criteria as your annotation score.

Figure 5: Prompt template for Relationship Attribute Consistency evaluation from the T2I-Eval dataset.

G Case Study

Here we provide some typical cases to show the quality of the textual rationales generated by our method compared with previous methods. The cases include:

•Appearance Quality Cases: as illustrated in Figure 9;
•Intrinsic Attribute Consistency Cases: as illustrated in Figure 10;
•Relationship Attribute Consistency Cases: as illustrated in Figure 11;
•LLMScore vs. T2I-Eval-R1 Cases: as illustrated in Figure 12;
•VIEScore vs. T2I-Eval-R1 Cases: as illustrated in Figure 13;
•T2I-Eval vs. T2I-Eval-R1 Cases: as illustrated in Figure 14.

# Task Description
You are an expert evaluator specialized in assessing the quality of text-to-image generative tasks. Your role is to evaluate the generated image according to the specified evaluation goal: appearance quality, intrinsic attribute consistency and relationship attribute consistency. This evaluation focuses on the following criteria:
* What we mean by “appearance quality” is that the appearance of the entity in the generated image should be realistic, aesthetically pleasing, and align with human intuition.
* What we mean by “intrinsic attribute” is that the attributes are properties of the entity explicitly mentioned in the input text, such as color, size, shape, material, quantity, etc.
* What we mean by “relationship attribute” is that the attributes describe the entity’s relationship with other entities.
You should make a well-reasoned judgment based on these criteria.
# Annotation Input
- User Input: <text>
- Generated Output: <image>

# Evaluation Guidelines
- Carefully analyze the generated output(s) in relation to the input data and defined evaluation criteria.
- Consider the output from the perspective of a human user, with particular attention to aspects relevant to appearance quality, intrinsic attribute consistency and relationship attribute consistency.
- Maintain objectivity and apply consistent judgment across all samples.
- Provide a single floating-point number between 0.0 and 10.0 based on the evaluation criteria as your annotation score:
Evaluate the Overall Quality of the generated image:
- How good is the generated image for this text (i.e., does it have high appearance, intrinsic attribute, and relationship attribute quality)? How happy would you be if you gave an AI assistant this text and received this image as the result?
  - 0-3: The generated image is not consistent with the text at all.
  - 4-7: The generated image is somewhat consistent with the text. Semantics are similar but not entirely consistent.
  - 8-10: The generated image is very consistent with the text.

# Output Format
Produce a response of a single floating-point number between 0.0 and 10.0 based on the evaluation criteria as your annotation score.

Figure 6: Prompt template for Overall Quality evaluation from the T2I-Eval dataset.

# Task Description
You are an expert evaluator specialized in assessing the quality of text-to-image generative tasks. Your role is to evaluate the generated image according to the specified evaluation goal: text-to-image faithfulness. This evaluation focuses on the following criteria:
* What we mean by “relationship attribute” is that the attributes describe the entity’s relationship with other entities.
You should make a well-reasoned judgment based on these criteria.

# Annotation Input
- User Input: <text>
- Generated Output: <image>

# Evaluation Guidelines
- Carefully analyze the generated output(s) in relation to the input data and defined evaluation criteria.
- Consider the output from the perspective of a human user, with particular attention to aspects relevant to text-to-image faithfulness.
- Maintain objectivity and apply consistent judgment across all samples.
- Provide a single floating-point number between 1.0 and 5.0 based on the evaluation criteria as your annotation score:
On a scale of 1-5, score “does the image match the prompt?”:
- To evaluate the generated image, there are two aspects: image quality and text-image match. Here we only care about text-image match, which is referred to as “faithfulness”.
- There are several kinds of elements in the text: object, attribute, relation, and context. Measure the consistency by counting how many elements are missed/misrepresented in the generated image.
- For some elements, e.g., a “train conductor’s hat”, if you can see there is a hat but not a train conductor’s hat, consider half of the element missed/misrepresented in the generated image.
- Objects are the most important elements. If an object is missing, then consider all related attributes and activities missing.
- When you cannot tell what the object/attribute/activity/context is, consider the element missing (e.g., you can’t tell whether an object is a microwave).
Given the above guideline, suppose the text input contains n elements, and x elements are missed or misrepresented. n and x are both counted by the annotators.
The reference scoring guideline is as follows:
- 5: The image perfectly matches the prompt.
- 4: x ≤ 2 and x ≤ n/3. A few elements are missed/misrepresented.
- 3: min{2, n/3} < x ≤ n/2 elements are missed/misrepresented.
- 2: x > n/2. More than half of the elements are missed/misrepresented.
- 1: None of the major objects are correctly presented in the image.

# Output Format
Produce a response of a single floating-point number between 1.0 and 5.0 based on the evaluation criteria as your annotation score.

Figure 7: Prompt template for Text-to-Image Faithfulness evaluation from the TIFA-v1.0 dataset.

# Task Description
You are an expert evaluator specialized in assessing the quality of text-to-image generative tasks. Your role is to evaluate the generated image according to the specified evaluation goal: image-text alignment, fidelity, and harmlessness. This evaluation focuses on the following criteria:
* What we mean by “image-text alignment” is that the generated image should match the text entered by the user. Some examples of inconsistent image generation are:
  - The subject described in the text does not appear in the generated image; for example, “A cat dressed as Napoleon Bonaparte” generates an image without the “cat”.
  - The object properties generated in the image differ from the text description; for example, generating an image of “a little girl sitting in front of a sewing machine” with many little girls is incorrect.
* What we mean by “fidelity” is that the generated image should be true to the shape and characteristics of the object, and not generated haphazardly. Some examples of low-fidelity images are:
  - “Spider-Man” should only have two legs; generating an image with more legs is incorrect.
  - A “unicorn” should only have one horn; generating an image with multiple horns is incorrect.
  - People eat noodles with utensils instead of grabbing them with their hands; generating an image of someone eating noodles with their hands is incorrect.
* What we mean by “harmlessness” is that the generated images should not cause physical, psychological, or social harm to people; damage or loss of equipment or property; damage to the environment; or harm to institutions or resources necessary for human well-being. Some examples of harmful images are:
  - Images that are pornographic, violent, or prejudicial, or that denigrate specific groups, are harmful.
  - Images that cause psychological discomfort when seen are harmful.
You should make a well-reasoned judgment based on these criteria.

# Annotation Input
- User Input: <text>
- Generated Output A: <image>
- Generated Output B: <image>

# Evaluation Guidelines
- Carefully analyze the generated output(s) in relation to the input data and defined evaluation criteria.
- Consider the output from the perspective of a human user, with particular attention to aspects relevant to image-text alignment, fidelity, and harmlessness.
- Maintain objectivity and apply consistent judgment across all samples.
- Evaluate which output (A or B) is better based on the evaluation criteria.
- Consider how satisfied a user would be if they received these images in response to the input text. Select the image that best meets the criteria (A or B).
# Output Format
Please provide a single floating-point number between 0.0 and 1.0 to indicate your confidence that Output A is better than Output B, based on the given evaluation criteria:
- 0.0: Output B is clearly and entirely better than Output A.
- 1.0: Output A is clearly and entirely better than Output B.
- 0.5: Outputs A and B are equally good.
- Values between 0.0 and 1.0 (excluding 0.5): One output is better than the other, but the superiority is partial or uncertain.

Figure 8: Prompt template for evaluation from the ImageReward dataset.

[Figure 9 content: full rationale texts from T2I-Eval and our T2I-Eval-R1 for two prompts, “banana partly sliced, peanut butter on a spoon, and cookies” and “A yellow hazard sign sitting on the side of a road.”]

Figure 9: Two evaluation cases for appearance quality rationales from T2I-Eval and our T2I-Eval-R1. For the upper case, T2I-Eval fails to recognize the absence of the banana in the image; for the lower case, T2I-Eval fails to recognize the distortion of the hazard sign. Our T2I-Eval-R1 successfully recognizes the problems in both cases.

[Figure 10 content: full rationale texts from T2I-Eval and our T2I-Eval-R1 for two prompts, “A little boy wearing a baseball hat holding a baseball bat.” and “In the picturesque Yosemite National Park, a vibrant emerald Lobster Bisque exudes its enticing aroma.”]

Figure 10: Two evaluation cases for intrinsic attribute consistency rationales from T2I-Eval and our T2I-Eval-R1. For the upper case, T2I-Eval fails to comprehend the concept ‘wear’; for the lower case, T2I-Eval fails to pay attention to the ‘enticing aroma’. Our T2I-Eval-R1 successfully recognizes the problems in both cases.
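The output format in Figure 2 asks the evaluator to wrap its rationale in <think></think> tags and its score in <answer></answer> tags, which is how rationales like those in the case-study figures are produced alongside a parsable number. A minimal parsing sketch; the function name `parse_response` and the clamp-into-range policy are our own assumptions, not the paper's code:

```python
import re

def parse_response(text, rating_min=0.0, rating_max=10.0):
    """Extract the rationale and numeric score from a
    <think>...</think><answer>...</answer> response, clamping the score
    into the allowed rating range. Returns (rationale, score), with
    score=None when no parsable number is found."""
    think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
    rationale = think.group(1).strip() if think else ""
    if not answer:
        return rationale, None
    number = re.search(r"-?\d+(?:\.\d+)?", answer.group(1))
    if not number:
        return rationale, None
    score = min(max(float(number.group()), rating_min), rating_max)
    return rationale, score

rationale, score = parse_response(
    "<think>banana missing</think><answer>2.5</answer>")
```

Clamping guards against out-of-range model outputs; how the actual evaluation toolkit handles malformed responses is not specified in the text.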
[Figure 11 content: full rationale texts from T2I-Eval and our T2I-Eval-R1 for two prompts, “The photo perspective offers a reflection shot of a mechanic, intently troubleshooting an electrical issue.” and “Two strawberries sit on a plate next to a milkshake.”]

Figure 11: Two evaluation cases for relationship attribute consistency rationales from T2I-Eval and our T2I-Eval-R1. For the upper case, T2I-Eval fails to caption the action of the mechanic; for the lower case, T2I-Eval fails to recognize the error in the number of strawberries. Our T2I-Eval-R1 successfully recognizes the problems in both cases.

[Figure 12 content: full rationale texts from LLMScore and our T2I-Eval-R1 for two prompts, “A small plane is taking off from a sandy beach” and “A vendor selling fruit from the back of a truck.”]

Figure 12: Two evaluation cases for overall rationales from LLMScore and our T2I-Eval-R1. For the upper case, LLMScore fails to capture the action of the mechanic; for the lower case, LLMScore ignored the error in the number of strawberries. Our T2I-Eval-R1 avoids the problems in both cases.
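The reference scoring guideline in Figure 7 is effectively a small decision rule over n (the number of elements in the text) and x (the number missed or misrepresented, with halves allowed for partial matches). It can be expressed directly as below; treating the “none of the major objects” case as a separate annotator-supplied flag is our own assumption, since it is not derivable from n and x alone.

```python
def tifa_score(n, x, no_major_objects=False):
    """Sketch of Figure 7's reference scoring guideline: n text elements,
    x of them missed or misrepresented (fractional x allowed for partial
    matches). `no_major_objects` is an annotator-supplied flag for the
    score-1 case."""
    if no_major_objects:
        return 1  # none of the major objects are correctly presented
    if x == 0:
        return 5  # the image perfectly matches the prompt
    if x <= 2 and x <= n / 3:
        return 4  # a few elements are missed/misrepresented
    if min(2, n / 3) < x <= n / 2:
        return 3
    return 2      # x > n/2: more than half missed/misrepresented
```

For example, with n = 6 elements, one missed element yields a 4, while four missed elements fall past the n/2 threshold and yield a 2.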
[Figure 13 case text (extraction residue): paired rationales covering Semantic Consistency, Perceptual Quality, Appearance Quality, Intrinsic Attribute Consistency, and Relationship Attribute Consistency for two prompts: "Several vehicles providing ground transportation are shown in the photo: streetcar, tour bus, classic car and family cars" and "A grouping of bananas hanging in a tree next to grassy area."]

Figure 13: Two evaluation cases for overall rationales from VIEScore and our T2I-Eval-R1. For the upper case, VIEScore fails to detect the absence of the tour bus; for the lower case, VIEScore ignores the absence of the grassy area. Our T2I-Eval-R1 avoids the problems in both cases.
[Figure 14 case text (extraction residue): paired rationales covering Appearance Quality, Intrinsic Attribute Consistency, and Relationship Attribute Consistency for two prompts: "Two giraffes, one looking toward the camera, stand on some grass" and "Two smartwatches are lying in the wetlands."]

Figure 14: Two evaluation cases for overall rationales from T2I-Eval and our T2I-Eval-R1. For the upper case, T2I-Eval ignores the action "looking toward the camera"; for the lower case, T2I-Eval mistakes the meaning of "lying in the wetlands". Our T2I-Eval-R1 avoids the problems in both cases.

https://arxiv.org/abs/2505.17897v1
arXiv:2505.17923v1 [cs.CL] 23 May 2025

Language models can learn implicit multi-hop reasoning, but only if they have lots of training data

Yuekun Yao, Saarland University, ykyao@coli.uni-saarland.de
Yupei Du, Utrecht University, y.du@uu.nl
Dawei Zhu, Saarland University, dzhu@lsv.uni-saarland.de
Michael Hahn∗, Saarland University, mhahn@lst.uni-saarland.de
Alexander Koller∗, Saarland University, koller@coli.uni-saarland.de

Abstract

Implicit reasoning is the ability of a language model to solve multi-hop reasoning tasks in a single forward pass, without chain of thought. We investigate this capability using GPT2-style language models trained from scratch on controlled k-hop reasoning datasets (k = 2, 3, 4). We show that while such models can indeed learn implicit k-hop reasoning, the required training data grows exponentially in k, and the required number of transformer layers grows linearly in k. We offer a theoretical explanation for why this depth growth is necessary. We further find that the data requirement can be mitigated, but not eliminated, through curriculum learning.

1 Introduction

Large language models (Brown et al., 2020; Achiam et al., 2023) have demonstrated strong capabilities in complex reasoning tasks (Jaech et al., 2024; Guo et al., 2025). With chain-of-thought methods (Wei et al., 2023; Nye et al., 2021), language models (LMs) learn to explicitly generate the intermediate steps of the given problem before generating the final answer. However, such methods incur long inference time (Chen et al., 2024b) and require costly annotations (Nye et al., 2021; Zelikman et al., 2022). This raises the question: Can language models learn to reason effectively without explicit chain-of-thought, i.e., through implicit reasoning?

There has been some research exploring the implicit reasoning abilities of language models (Yang et al., 2024a; Biran et al., 2024; Wang et al., 2024).
Such studies design their task in a two-hop question answering format, where the model is assumed to know individual facts like The father of A is B and The teacher of B is C, and is then asked questions like Who is the teacher of the father of A?. Findings from these works suggest that LMs can learn implicit reasoning by combining individual factual knowledge. However, their reasoning tasks are limited to questions that can be solved with two intermediate steps (i.e. 2-hop), leaving aside more difficult k-hop (k > 2) reasoning questions. Hence, it remains unclear whether language models can learn to perform such k-hop reasoning or not.

∗Joint senior authors.

Figure 1: Example illustrating our finding: An LM can be trained to perform implicit k-hop reasoning, but requires a large increase in training data as k grows. [Figure content: train/test examples for the 2-hop task ("Who is the instructor of the teacher of Jennifer? Answer: _"), the 3-hop task ("Who is the instructor of the teacher of the advisor of Jennifer? Answer: _"), and the 4-hop task ("Who is the instructor of the teacher of the advisor of the instructor of Jennifer? Answer: _"), each fed to an LM.]

In this paper, we study the capacity of language models to learn k-hop reasoning tasks, where k = 2, 3, 4. By training a randomly initialized GPT2-style transformer (Vaswani et al., 2017; Radford et al., 2019) on knowledge (e.g. Jennifer's instructor
is Robert) and knowledge-based questions (e.g. Who is the instructor of the instructor of Jennifer?), we study if such language models learn to generalize to questions that require novel combinations of learned facts.

Our study addresses three research questions.

•First, can LMs learn implicit k-hop reasoning, and if so, under what conditions? Our findings suggest that LMs can indeed learn implicit k-hop reasoning, but doing so requires exponentially increasing data budgets as k grows (see Figure 1), primarily due to the explosion in the search space of fact combinations.

•Second, we investigate how models perform k-hop reasoning internally through mechanistic interpretability experiments. Our analysis reveals that models trained with sufficient data systematically derive intermediate hop entities in a layer-wise manner, progressing from shallow layers to deeper layers in a step-by-step fashion, consistent with Biran et al. (2024); Wang et al. (2024). We further show a theoretical lower bound (Theorem 5.1) suggesting that such a mechanism, with the required depth growing with k, may be unavoidable for the transformer architecture.

•Third, motivated by the substantial data requirements for k-hop reasoning, we ask: How can we reduce the data budget for k-hop reasoning? We explore the use of easier (m-hop, m < k) tasks as auxiliary training signals. Our findings show that curriculum learning (Elman, 1993; Bengio et al., 2009), which introduces tasks in a progressively harder order, significantly reduces the required training data, while simply mixing m-hop tasks with k-hop tasks provides only modest gains.

Bringing our findings together, we answer the broader question Can language models learn implicit reasoning? with a "yes, but" response. Language models can solve k-hop reasoning; however, this capability comes at the cost of an exponential increase in training data and at least linear growth in model depth as k increases.
Curriculum learning serves as an effective mitigation strategy to reduce the training data requirement, but the data growth issue still persists.

2 Related work

Implicit reasoning. Many works have shown the power of the explicit reasoning ability of language models (Wei et al., 2023; Saparov and He, 2022; Jaech et al., 2024). However, such powerful models, even after heavy pretraining (Achiam et al., 2023), generally come with negative results on implicit reasoning tasks (Press et al., 2023; Dziri et al., 2023). Relevant studies can mainly be categorized into two groups according to the evaluation task: knowledge-based reasoning (Kassner et al., 2020; Press et al., 2023; Yang et al., 2024b), and mathematical reasoning (Nanda et al., 2023; Stolfo et al., 2023). In this paper, we study the former task, and we show that GPT2-style language models are indeed capable of multi-hop reasoning, at the cost of increased training data requirements.

Most previous work studies knowledge-based reasoning with existing large language models (Yang et al., 2024b; Biran et al., 2024; Press et al., 2023), where language models are assumed to gain single-hop knowledge through pretraining and are evaluated on multi-hop tasks. Our work instead trains language models on synthetic datasets,
which allows us to accurately attribute the model behavior to particular aspects like data and models. Wang et al. (2024) also train a transformer on synthetic datasets to evaluate 2-hop reasoning. By contrast, we investigate this question across increasingly complex tasks (e.g. 2-, 3-, and 4-hop), and we shed light on possible methods that can help in such challenging cases.

Memorization and generalization. To train a language model to fit a training set, the model could either memorize all training instances (i.e. overfit), or develop a generalizable solution that solves the test set. Previous work studies this in terms of the grokking phenomenon (Power et al., 2022; Murty et al., 2023). Their findings suggest that both memorized and generalizable solutions exist as neural circuits in the learning process, and that increasing the training set size encourages the efficient one (i.e. the generalizable solution) through weight decay (Nanda et al., 2023; Varma et al., 2023; Zhu et al., 2024). Compared to these works, our study suggests that the training data size needs to grow exponentially with the task difficulty, which provides a possible explanation for the failure of LLMs on complex implicit reasoning tasks.

3 Dataset

In this section, we introduce the k-hop reasoning dataset we created to train and evaluate LMs. We focus on knowledge-based multi-hop reasoning, where generating the correct answer requires combining multiple known facts. Following previous work (Wang et al., 2024; Allen-Zhu and Li, 2024), we generate datasets according to synthetic knowledge, which allows better control of the task difficulty and attribution of model behaviors.

Figure 2: Example of our training and test dataset. Here, we use the 2-hop task as an example. [Figure content — entity profiles: "Instance 1: Jennifer's instructor is Robert. Jennifer's teacher is William. Jennifer's advisor is Miller. Jennifer's supervisor is Marie…", "Instance: Robert's instructor is Frank. Robert's teacher is Flora. Robert's advisor is Lisa. Robert's supervisor is Joey…"; training questions for the k-hop task: "Who is the instructor of the instructor of Jennifer? Answer: Frank", "Who is the advisor of the instructor of Jennifer? Answer: Lisa"; held-out test question: "Instance 1: Who is the teacher of the instructor of Jennifer? Answer: Flora".]

3.1 Task description

Definitions. The knowledge-based reasoning task includes two main aspects: facts and queries. Following prior definitions (Yang et al., 2024a; Wang et al., 2024), we represent a fact as a triple (e, r, e′), where e is the subject entity, r is a relation, and e′ is the object entity. Each relation r acts as a function mapping a subject to an object: r(e) → e′. A k-hop query corresponds to the composition of k such functions, formalized as r_k(r_{k-1}(. . . r_1(e))). Answering this query requires reasoning over a chain of k facts: (e_1, r_1, e′_1), (e′_1, r_2, e′_2), . . . , (e′_{k-1}, r_k, e′_k). The intermediate entities e′_1, e′_2, . . . , e′_{k-1} are referred to as bridge entities. In a k-hop query, we refer to the components (e′_1, r_1), (e′_2, r_2),
and so on as the 1-hop, 2-hop, and subsequent hops, respectively. We thus call e′_1 the 1-hop entity, and r_1 the 1-hop relation. While prior work has mostly focused on 2-hop queries involving a single bridge entity, we construct datasets for k ∈ {2, 3, 4} to assess models' ability to handle increasingly complex reasoning chains.

Dataset format. We create one dataset for each k ∈ {2, 3, 4} task. Our dataset includes two components: (1) entity profiles encoding known facts, and (2) reasoning questions that query compositions of facts in natural language (see Figure 2).

•An entity profile encodes all possible facts for a particular entity where the entity serves as the subject entity (e.g. Jennifer's instructor is Robert, Jennifer's teacher is William...).

•The prompt for our reasoning question is as simple as "Who is the teacher of the instructor of Jennifer? \n Answer: ", where instructor and teacher refer to relations and Jennifer refers to the queried entity.

Figure 3: Profile sampling process. We always use 5 layers, and hence n = |E|/5 (e.g. 100 for k-hop large). [Figure content: entities E_1^(1) . . . E_n^(1), E_{n+1}^(2) . . . E_{2n}^(2), E_{2n+1}^(3) . . . E_{3n}^(3), . . . arranged in 5 layers and linked by relations such as instructor and teacher; example profiles "Jennifer — instructor: Robert, teacher: Williams, ruler: John, advisor: Miller, supervisor: Marie..." and "Robert — instructor: Frank, teacher: ..., ruler: ..., advisor: ..., supervisor: ...".]

We introduce details for generating profiles and questions in Section 3.2. To ensure the model has access to all entity profiles, the training set includes all possible profiles together with randomly selected reasoning questions, and we use the held-out reasoning questions as the test set.

We construct two dataset variants by varying the number of entities (|E|) and relations (|R|): a larger dataset with |E| = 500, |R| = 20 (denoted k-hop large), and a smaller one with |E| = 250, |R| = 10 (denoted k-hop small).

3.2 Data generation

Profile sampling.
We use the same set of entity profiles across the k = 2, 3, 4 tasks to ensure fair comparison. Figure 3 illustrates the process to generate profiles. We first sample |E| entity names (e.g. Jennifer) from a predefined namespace and group them into K disjoint hierarchical layers, where K is the largest k + 1 value. Since we consider k < 5, K is fixed at 5. Each entity is then linked to |R| randomly selected entities in the upper layer through distinct relations, with relation names reused across layers for generality. This structure guarantees that a composition of k ∈ {2, 3, 4} relations starting from any entity in the bottom layer leads to a well-defined target entity. More details are provided in Appendix C.1.

Profile and question generation. Each fact (e, r, e′) is mapped to a natural language sentence using a simple template (e.g., {subj}'s {relation} is {obj}). Following previous work (Allen-Zhu and Li, 2024), all facts about a given subject entity are concatenated into a single paragraph to form that

                       Training data budget
Dataset                ×1    ×2    ×5   ×10   ×20   ×50  ×100
k-hop small  2-hop   99.8
             3-hop    5.7  12.6  99.9   100
             4-hop    4.3   6.2   6.7   9.2  96.4   100   100
k-hop large  2-hop   99.9
             3-hop    2.5   3.1   4.9  94.6   100
             4-hop    2.0   2.6   3.1   3.7   4.0   6.3   100

Table 1: Accuracy of GPT-2 on k-hop small and k-hop large datasets with different training data
budgets. Empty cells indicate that the data budget exceeds the number of possible questions.

entity's profile. To construct reasoning questions, we sample entities from the bottom layer of the hierarchy (Figure 3) and recursively traverse k relations to identify the correct answer. All valid k-hop queries are generated for each source entity. For example, for 2-hop queries on k-hop large, we can generate up to |R|^2 × |E|/5 = 40000 instances.

4 LMs can learn k-hop reasoning, but at a large data cost

Our first objective is to establish that language models can learn implicit k-hop reasoning, but that this requires the number of training instances (i.e. k-hop reasoning questions) to grow exponentially as k increases. In this section, we empirically demonstrate this by training models on our k-hop datasets with k = 2, 3, 4.

4.1 Experiment setup

Model. We adopt the smallest GPT-2 architecture (Radford et al., 2019) as our model. Following recent studies (Allen-Zhu and Li, 2024), we replace the original positional embeddings in GPT-2 with Rotary Position Embedding (RoPE) (Su et al., 2024). We use the GPT-2 tokenizer (Radford et al., 2019) and extend its vocabulary by adding all possible entity names from our dataset. The training objective is the causal language modeling loss calculated over all tokens in each prompt. In our main experiments, we train the model from scratch by randomly initializing all parameters. Additionally, we conduct experiments using the pretrained GPT-2 and its larger variants (see Appendix B for results).

Training. We set the training steps to 20k for all tasks except 4-hop large, where we extend the training to 40k steps to ensure convergence. We apply a cosine learning rate scheduler with 1k warm-up steps. Each experiment is repeated across three runs using different random seeds, and we report the average performance. Details of hyperparameters for model architecture and training are provided in Appendix D.

Dataset.
We utilize the k-hop small and k-hop large datasets introduced in Section 3 for training and evaluation, considering k = 2, 3, 4. This results in six datasets in total. For the 2-hop task, we generate all possible reasoning questions and randomly sample 50% for the 2-hop large training set and 80% for 2-hop small. All entity profiles are included in the training sets. The test set consists of 3,000 instances randomly selected from the held-out questions, except for 2-hop small, which contains only 1,000 held-out questions. We report the details and statistics of our datasets in Appendix C.1.

For the 3-hop and 4-hop tasks, we find that the same data size as the 2-hop training set results in random-guessing performance. Thus, we progressively increase the training data size by defining the base training budget bg as the number of reasoning questions in the 2-hop training set. We create training sets by scaling bg with ratios from the set {×1, ×2, ×5, ×10, ×20, ×50, ×100}. For each ratio r, we randomly sample r × bg reasoning questions for training. The test set for each k-hop task always includes 3,000 randomly sampled held-out instances, except for 2-hop small.

Evaluation. For each test instance, we provide the language
model with the prompt up to the answer token (e.g., "Who is the instructor of the instructor of Jennifer? \n Answer: ") and evaluate the accuracy of the generated token against the gold answer. Greedy decoding is used for evaluation.

4.2 Results

Language models can learn k-hop reasoning. Table 1 reports the test accuracy of our models under varying training data budgets. Our first observation is that GPT-2 models are capable of achieving 100% accuracy not only on 2-hop tasks but also on the more complex 3-hop and 4-hop tasks, given a sufficiently large training data budget with the same k as the test set. This is a significant finding, as each entity profile appears individually in the training set without any explicit instructions on how to combine them to solve multi-hop tasks. The perfect accuracy suggests that language models can learn the underlying reasoning process based solely on input-output pairs, even without explicit rationales.

Figure 4: Model accuracy on 3-hop large and 4-hop large. 3-hop large can only be solved when the training budget is increased by at least a factor of ×10, while 4-hop large requires ×100. Using the ×20 budget further encourages convergence on 3-hop large compared to the ×10 budget. [Panels: (a) 3-hop large, accuracy over 20k training steps for budgets ×1 to ×20; (b) 4-hop large, accuracy over 40k training steps for budgets ×1 to ×100.]

However, data requirements increase exponentially with k. We further observe that the base training data budget (×1) is insufficient for the model to effectively learn the 3-hop and 4-hop tasks, as evidenced by test accuracy below 10%. As the training data budget increases, model performance improves correspondingly. We define a model as successfully learning the task if it achieves a test accuracy above 80%. On k-hop small datasets, a minimum budget of ×5 is necessary to learn the 3-hop task, whereas the 4-hop task requires a budget of at least ×20.
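The scale of the question space behind these budgets can be made concrete with a quick back-of-the-envelope computation. The sketch below is illustrative (not the authors' code) and uses only the k-hop large parameters stated above: |E| = 500 entities in 5 layers, |R| = 20 relations, and a base budget bg equal to 50% of all 2-hop questions.

```python
# Number of possible k-hop questions for k-hop large, and the fraction of
# that space covered by each training budget.

E, R, LAYERS = 500, 20, 5
sources = E // LAYERS                  # 100 bottom-layer source entities

def num_questions(k):
    return R**k * sources              # |R|^k relation combinations per source

bg = num_questions(2) // 2             # base budget: 50% of 2-hop questions

for k in (2, 3, 4):
    total = num_questions(k)
    for ratio in (1, 10, 100):
        frac = min(ratio * bg, total) / total
        print(f"k={k}, budget x{ratio}: {frac:.1%} of {total:,} possible questions")
```

Under this accounting, even the ×100 budget that eventually solves 4-hop large covers only about 12.5% of the 16 million possible 4-hop questions, so the learned solution cannot be pure memorization of question-answer pairs.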
On k-hop large datasets, the data budget required for the 3-hop task is ×10, and for the 4-hop task it escalates to ×100. These findings suggest that the training data budget grows in an exponential manner as the value of k increases.

We also plot the test accuracy of one training run on k-hop large across training steps in Figure 4 (for k-hop small results see Appendix E.1). The plots show that a larger training budget not only results in higher accuracy but also accelerates model convergence. For instance, in Figure 4a, the ×20 budget reaches 100% accuracy by step 5000, while the ×10 budget only achieves 10% accuracy at the same step. This finding is also consistent with what Wang et al. (2024) reported for 2-hop reasoning tasks. We extend these observations by demonstrating that the data budget becomes even more critical as the complexity of the reasoning task increases.

Figure 5: Case study on 4-hop large. The x-axis denotes the number of possible 1-hop and 2-hop relations, e.g. (1, 1) denotes that the 1-hop and 2-hop relations are fixed across all 4-hop questions. [Plot: minimal data budget against the number of possible 1-hop and 2-hop relations, for configurations (1, 1), (2, 2), (5, 5), (10, 10), (20, 20).]
Fixing the 1-hop and 2-hop relations reduces the required training budget to ×1, while increasing them leads to rapid budget growth.

4.3 Why data-hungry?

The results so far highlight the substantial data requirements for k-hop tasks, but the reason for this remains unclear. Increasing the value of k leads both to an increase in the number of combined facts (i.e., k facts for each entity) and to a corresponding exponential increase in the search space (i.e., |R|^k relation combinations per entity). Our objective here is to disentangle the effects of these two factors and identify the primary source of data inefficiency.

Setup. To investigate this question, we conduct a case study on the 4-hop large dataset, where we vary the number of 1-hop and 2-hop relations while holding the number of relations in the 3-hop and 4-hop positions constant. In the original dataset, each hop position can take one of |R| = 20 possible relations. For this study, we generate new training and test sets by limiting the number of 1-hop and 2-hop relations to values from the set {1, 2, 5, 10, 20}. For each configuration, we train GPT-2 models and determine the minimal data budget required to achieve 80% test accuracy.

Figure 5 presents the results. We observe that when the number of 1-hop and 2-hop relations is restricted to a single relation, the model can successfully learn the 4-hop task using the base data budget. However, as the number of relations increases, the required data budget grows rapidly. This result suggests that the main source of data inefficiency in k-hop reasoning tasks is the exponential growth in the number of relation combinations, rather than the number of individual facts to be combined.

5 LMs reason through layer-wise lookup, incurring the cost of depth

The second objective is to understand the underlying mechanism by which language models solve the k-hop task.
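The layer-wise lookup examined below mirrors the obvious sequential algorithm for evaluating r_k(r_{k-1}(. . . r_1(e))): resolve one bridge entity per hop. A minimal sketch over a toy fact table (the entity and relation names are illustrative, not taken from the released dataset):

```python
# Toy fact table: facts[(subject, relation)] = object.
facts = {
    ("Jennifer", "instructor"): "Robert",
    ("Robert", "teacher"): "Flora",
    ("Flora", "advisor"): "Lisa",
}

def answer_khop(entity, relations):
    """Evaluate r_k(. . . r_1(entity)) by chaining single-fact lookups.

    relations is ordered from the 1-hop relation outward, so the i-th
    lookup yields the i-th bridge entity; the last lookup is the answer."""
    for r in relations:
        entity = facts[(entity, r)]
    return entity

# "Who is the advisor of the teacher of the instructor of Jennifer?"
print(answer_khop("Jennifer", ["instructor", "teacher", "advisor"]))  # -> Lisa
```

Each loop iteration corresponds to one hop; the interpretability experiments that follow ask whether the transformer realizes the same intermediate states, one per layer group, inside a single forward pass.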
We first demonstrate, through empirical evidence (e.g., mechanistic interpretability), that language models solve such tasks by layer-wise lookup of the bridge entities of a k-hop query r_k(r_{k−1}(. . . r_1(e))). Building on this finding, we then establish a theoretical lower bound, showing that the model's depth must grow with k to maintain such a layer-wise lookup mechanism.

5.1 Experiment setup

We design two experiments to investigate the model's internal reasoning process: probing and causal intervention. For both experiments, we select the model trained on 4-hop large with a ×100 budget, as it achieves strong performance.

Probing. We use probing tasks (Belinkov and Glass, 2019; Liu et al., 2019) to assess whether information about intermediate bridge entities is encoded in the hidden representations. In this setup, we freeze the model parameters and train a linear probe classifier on top of the hidden states to predict the correct entity. We train one probe classifier for each hop position, predicting the corresponding bridge entity in the query. The probe is trained across all transformer layers and all tokens in the prompt to identify where and when information about the bridge entities is encoded. We split the 4-hop large test set into 80/20% training and evaluation sets for training
the probe classifiers.

Causal intervention. While probing shows whether information about bridge entities is encoded in the hidden representations, it does not tell us whether the model actually relies on this information to generate the final answer. We further design activation patching (Vig et al., 2020; Meng et al., 2022) experiments to investigate this. The core idea of activation patching is to replace the residual stream (i.e., the output of a residual layer in a transformer block) at a specific layer L_i and prompt token t_j, and measure the resulting change in the output probability of the correct answer. For convenience, we call this residual stream res(L_i, t_j). In this section, we focus on the last token position in the input prompt (i.e., t_j is always the last input token, which is the whitespace <space>), as justified in Section 5.2.

Suppose we are given a k-hop test instance and aim to measure the causal effect of res(L_i, t_j), the residual stream at layer L_i and token t_j. For clarity, we define three types of runs as follows. Clean Run: the original forward pass of the test instance, producing the output probability of the correct answer, P_clean. Corrupted Run: a distinct k-hop instance selected to serve as the source of the patched residual stream. Patched Run: the modified run, where the residual stream res(L_i, t_j) in the clean run is replaced with the corresponding res(L_i, t_j) from the corrupted run, leaving other layers unchanged. The output probability in the patched run is denoted P_patched. The causal effect of the targeted residual stream is defined as P_clean − P_patched, where a larger effect indicates greater reliance on the removed information. We calculate the causal effect for each layer and report the average effect across 3000 held-out instances.

The aim of our intervention experiment is to measure the effect of bridge entity information at different hop positions (e.g., 1-hop, 2-hop).
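The clean/corrupted/patched protocol can be sketched on a toy residual stack (an illustrative stand-in for GPT-2, not the paper's code; the dimensions, seed, two-layer "model", and readout are all hypothetical):

```python
import math
import random

# Minimal sketch of activation patching (Section 5.1). Each "layer" adds an
# update to the residual stream at the last token; patching swaps
# res(L_i, t_last) from a corrupted run into the clean run.
random.seed(0)
DIM, LAYERS, ANSWERS = 6, 2, 5          # all sizes are arbitrary toy choices

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def softmax(z):
    m = max(z)
    e = [math.exp(x - m) for x in z]
    return [x / sum(e) for x in e]

W = [rand_matrix(DIM, DIM) for _ in range(LAYERS)]  # per-layer residual updates
R = rand_matrix(ANSWERS, DIM)                       # readout over answer entities

def run(x, patch=None):
    """Forward pass; patch = (layer_idx, replacement) swaps res(L_i, t_last)."""
    res, streams = x, []
    for i, Wi in enumerate(W):
        res = [r + u for r, u in zip(res, matvec(Wi, res))]
        if patch is not None and patch[0] == i:
            res = patch[1]                          # activation patching
        streams.append(res)
    return softmax(matvec(R, res)), streams

clean = [random.gauss(0, 1) for _ in range(DIM)]    # clean-run input
corrupt = [random.gauss(0, 1) for _ in range(DIM)]  # corrupted-run input
answer = 0                                          # index of the "correct" answer

p_clean, _ = run(clean)
_, c_streams = run(corrupt)
for i in range(LAYERS):                             # causal effect per layer
    p_patched, _ = run(clean, patch=(i, c_streams[i]))
    print(f"layer {i}: P_clean - P_patched = {p_clean[answer] - p_patched[answer]:+.4f}")
```

A useful sanity check for such an implementation: patching the final layer's residual stream reproduces the corrupted run's output distribution exactly.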
Hence, we define four types of corrupted runs for each clean run: C_1-hop, C_2-hop, C_3-hop, and C_4-hop. In a C_i-hop run, we select a corrupted instance whose gold i-hop entity differs from the clean run, while the entities at the other hop positions remain unchanged. This setup allows us to measure the effect of perturbing a specific i-hop entity while keeping the other bridge entities unchanged.

5.2 Results

Bridge entities are encoded in the last token position. Figure 6 presents the probing results across layers and token positions in the input prompt, e.g., Who is the instructor of the teacher of the advisor of the instructor of <Entity>? \n Answer:<space>. We report results only for tokens after the <Entity> token, as preceding tokens cannot contain information about the target bridge entities. Since the vocabulary size of each i-hop entity is 100, a random baseline provides 1% accuracy.

Figure 6: Probing results across tokens in the input prompt. Each token is represented by four columns, corresponding to the 1-hop to 4-hop bridge entities. Note that tokens preceding <Entity> cannot include information about any of the entities, and are thus not shown here. Only the <space> token consistently encodes information about all four bridge entities, indicating that reasoning is concentrated at the last token before answer generation.

Notably, the hidden representation of the last input token encodes information about all necessary bridge entities for predicting the final answer. In contrast, probe classifiers show low accuracy for other token positions, suggesting that the reasoning process likely occurs at the position immediately before generating the final answer. We confirm this by observing zero causal effects on preceding tokens in an additional activation patching experiment (see Appendix F.1).
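The probing protocol described in Section 5.1 amounts to fitting a linear classifier on frozen hidden states. A stdlib-only sketch with synthetic stand-ins for the hidden states and a multiclass perceptron as the linear probe (all sizes and the data-generating process are hypothetical; the paper's probes are trained per layer and token on real activations):

```python
import random

# Linear-probe sketch: predict a "bridge entity" class from frozen hidden
# states (synthetic stand-ins here, not real GPT-2 activations).
random.seed(0)
DIM, CLASSES = 16, 4                 # toy hidden size / number of bridge entities

centers = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(CLASSES)]

def sample(n):
    """Synthetic frozen hidden states: class center plus small noise."""
    data = []
    for _ in range(n):
        y = random.randrange(CLASSES)
        data.append(([c + random.gauss(0, 0.3) for c in centers[y]], y))
    return data

train, test = sample(400), sample(100)

W = [[0.0] * DIM for _ in range(CLASSES)]   # one weight vector per entity class

def predict(x):
    scores = [sum(w * xi for w, xi in zip(W[c], x)) for c in range(CLASSES)]
    return max(range(CLASSES), key=scores.__getitem__)

for _ in range(5):                          # a few perceptron epochs
    for x, y in train:
        p = predict(x)
        if p != y:                          # mistake-driven linear update
            for j in range(DIM):
                W[y][j] += x[j]
                W[p][j] -= x[j]

acc = sum(predict(x) == y for x, y in test) / len(test)
print(f"probe accuracy: {acc:.2f}")
```

When the probed information is linearly decodable, such a probe reaches high accuracy; chance level here is 25%, playing the role of the 1% random baseline in the paper's 100-entity setting.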
We thus focus our causal intervention experiments on this <space> token.

Output prediction relies on bridge entity information. Figure 7 shows the causal effects across layers in our intervention experiment. For each i-hop entity, we identify specific layers that the model relies on to generate the final answer. Moreover, the model organizes the reasoning process in a layer-wise manner, with shallower layers handling lower-hop entities and deeper layers handling higher-hop entities. This layer-wise lookup confirms that the model leverages bridge entity information to perform multi-hop reasoning, which generalizes
prior observations from 2-hop tasks (Biran et al., 2024; Wang et al., 2024).

5.3 Theoretical Analysis

We have found that language models perform the k-hop task by layer-wise lookup. This suggests that transformers may need depth linear in the number of reasoning steps. Here, we discuss how this result relates to the in-principle expressiveness of transformers. Formally, we consider a universe E of entities (e.g., {Jennifer, Frank, . . .}) and a set R of maps r : E → E (e.g., when r = instructor and e = Jennifer, then r(e) denotes the instructor of Jennifer).

Figure 7: Causal interventions reveal a layer-wise lookup mechanism: intervening on the 1-hop entity has the strongest effect in the 1st layer, and little effect in higher layers. Intervening on the 2-hop entity has an effect mainly in the 2nd and 3rd layers; analogously for the 3-hop and 4-hop entities. Overall, these results indicate that entities are looked up layer by layer.

We consider the task of mapping an input string "Who is the r_k of the r_{k−1} . . . r_2 of the r_1 of e? Answer:" (as in Figure 2) to the entity r_k(. . . r_1(e) . . .) ∈ E (e.g., instructor(teacher(Jennifer))). We lower-bound the number of layers needed in the case where the attention pattern does not depend on the query e. We consider this a reasonable special case, as there is no obvious way in which query-dependent attention would help solve the k-hop task. In this case:

Theorem 5.1 (See Appendix A for proof). Consider a causal transformer operating in p bits of precision, with d hidden units, H heads, and L layers. Assume it performs k-hop reasoning over E and R as defined above. Assume further that the attention pattern does not depend on e. If k ≤ |E| − 2, then, for some R,

L ≥ k / (8pdH)    (1)

We note that there are relation sets R for which shortcuts with few layers may exist, but the result shows that a linear number of layers is needed in the worst case.
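The constant in (1) comes from rearranging the counting argument established in Appendix A; the chain below is a restatement of that derivation, not new material:

```latex
% Appendix A shows that Bob can reconstruct r_k \circ \cdots \circ r_1 from
% 4LpdH bits, and exhibits a relation set R with at least (3/2)^k distinct
% k-fold compositions, giving:
4LpdH \;\ge\; \log_2 \bigl|\{\, r_k \circ \cdots \circ r_1 : r_1,\ldots,r_k \in R \,\}\bigr|
      \;\ge\; k \log_2 \tfrac{3}{2} \;>\; \tfrac{k}{2}
\quad\Longrightarrow\quad
L \;\ge\; \frac{k}{8pdH}
```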
This statement expresses a width-depth trade-off: the product of the number of layers, bits of precision, width, and number of heads needs to grow linearly in k. In particular, within a single model (i.e., fixing d, H, and p), the analysis predicts that, as k grows, more and more layers need to be involved in the hop-by-hop retrieval, as we found empirically (Figure 7). We also note that existing results (Chen et al., 2024a) are not applicable to our k-hop task (Appendix A.2).

Figure 8: Model performance on k-hop large datasets with mixed learning and curriculum learning: (a) 3-hop large; (b) 4-hop large. Curriculum learning enables the model to solve the 4-hop task with a ×5 training budget, compared to the ×100 required by both the baseline and mixed learning setups.

6 Curriculum learning mitigates the data requirement, but doesn't solve it

Finally, we study training strategies to improve the data budget issue. Models in Section 4 were trained solely on the k-hop
task, but i-hop (i < k) questions should also be available in realistic setups. By exploiting such easier questions as additional training data for the k-hop task, we demonstrate that curriculum learning significantly mitigates the exponential growth issue, though it does not eliminate the increase of the data budget as k increases.

6.1 Experiment setup

We use the same GPT-2 architecture as in Section 4.1 and compare two strategies: mixed learning and curriculum learning (Bengio et al., 2009).

Mixed learning. We construct the training set by combining reasoning questions from both lower-hop and k-hop tasks. For instance, the 4-hop training set contains a mix of 2-hop, 3-hop, and 4-hop questions, along with all relevant entity profiles. Lower-hop questions are generated using the same entity profiles as the target task. We vary the k-hop training budget using the same scaling factors as in Section 4.1, while keeping the amount of lower-hop data fixed (see Appendix C.2 for dataset details).

Curriculum learning. Training in curriculum learning is split into multiple stages, where each stage progressively introduces harder reasoning tasks. For a k-hop task, training proceeds in k − 1 stages: the first stage uses only 2-hop questions, the second stage includes both 2-hop and 3-hop questions, and so on. We use the same lower-hop data as in the mixed learning setup and ensure that the total number of training steps is equal across both strategies. See Appendix D.2 for training details.

Test set. To avoid shortcut solutions (e.g., where lower-hop queries appear as subcomponents of the k-hop query), we use rejection sampling to generate test sets in which no such overlaps exist. The test set size remains 3000 instances, consistent with previous experiments.

6.2 Results

Figure 8 shows the results for k-hop large; the same pattern holds for k-hop small (see Appendix E.1 for results).
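The stage construction described in Section 6.1 above can be sketched as follows (the `questions_by_hop` layout is a hypothetical data structure, not the released training code):

```python
def curriculum_stages(k, questions_by_hop):
    """Build the k-1 curriculum stages of Section 6.1: stage s (s = 1..k-1)
    trains on all questions whose hop count lies in 2..s+1.
    `questions_by_hop` is a hypothetical dict mapping hop -> list of questions."""
    return [
        [q for hop in range(2, s + 2) for q in questions_by_hop.get(hop, [])]
        for s in range(1, k)
    ]

stages = curriculum_stages(4, {2: ["2-hop Q"], 3: ["3-hop Q"], 4: ["4-hop Q"]})
for i, stage in enumerate(stages, 1):
    print(f"stage {i}: {stage}")
```

For k = 4 this yields three stages: 2-hop only, then 2-hop plus 3-hop, then all hops up to 4, mirroring the progressive schedule above.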
We compare mixed learning, curriculum learning, and the baseline model trained only on the target k-hop dataset from Section 4.1.

Curriculum learning significantly reduces the required data budget. Notably, curriculum learning yields the most significant improvement. For example, perfect accuracy on 4-hop tasks is achieved with only a ×5 budget, compared to ×100 for the baseline. In contrast, simply mixing all available data provides only modest gains. This demonstrates that presenting easier reasoning tasks before harder ones is a highly effective strategy for improving data efficiency.

Curriculum learning builds circuits gradually. We attribute the effectiveness of curriculum learning to a stepwise build-up of circuits: as we show in Appendix F.2, mechanisms retrieving lower-hop entities (e.g., 1-hop) emerge in the early training stages; subsequent stages then build upon these established circuits to learn more complex reasoning tasks. While baseline models have to construct a full circuit for k-hop reasoning at once, curriculum learning enables 1-hop circuits to emerge in shallower layers in the first stage, with later stages developing circuits for 2-hop and 3-hop entities on top of these.

Curriculum learning does not completely solve the data growth issue. Despite the effectiveness of the curriculum learning strategy, it does not completely eliminate the growth of the data budget.
For example, curriculum learning requires a ×2 budget for the 3-hop task and ×5 for the 4-hop task, indicating the challenge of k-hop implicit reasoning for LMs.

7 Conclusion

Our work investigates whether language models can learn implicit multi-hop reasoning. We provide a nuanced answer through controlled k-hop reasoning datasets using GPT-2-style language models. On the one hand, our findings demonstrate that language models can indeed learn k-hop reasoning through sequential lookup of intermediate bridge entities, layer by layer. However, this capability comes at a cost: as k increases, the training data budget grows exponentially, and the model depth must scale linearly. Furthermore, while curriculum learning mitigates the data budget growth, it does not eliminate the growth trend. Together, we present a comprehensive view of the potential and limitations of LMs in implicit reasoning, underscoring the inherent trade-offs between task complexity, data requirements, and model depth.

8 Limitations

We limit our study to implicit reasoning tasks using synthetic datasets generated from predefined templates. Applying the same analysis to realistic datasets is challenging due to the difficulty of collecting complex multi-hop questions (e.g., 4-hop questions) and corresponding facts. Due to computational budget constraints, we also restrict our experiments to k-hop tasks with k < 5. Additionally, our experiments primarily rely on randomly initialized small language models (GPT-2 small). While we also observe that the data budget issues persist for pretrained models (e.g., pretrained GPT-2) and larger models (GPT-2 medium and large with up to 770M parameters), we do not extend our analysis to models with greater parameter sizes.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Zeyuan Allen-Zhu and Yuanzhi Li. 2024. Physics of language models: Part 3.1, knowledge storage and extraction. In International Conference on Machine Learning, pages 1067–1077. PMLR.

Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49–72.

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41–48.

Eden Biran, Daniela Gottesman, Sohee Yang, Mor Geva, and Amir Globerson. 2024. Hopping too late: Exploring the limitations of large language models on multi-hop queries. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 14113–14130, Miami, Florida, USA. Association for Computational Linguistics.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.

Lijie Chen, Binghui Peng, and Hongxun Wu. 2024a. Theoretical limitations of multi-layer transformer. arXiv preprint arXiv:2412.02975.

Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, et al. 2024b. Do not think that much for 2+3=? On the overthinking of o1-like LLMs. arXiv preprint
arXiv:2412.21187.

Yuntian Deng, Yejin Choi, and Stuart Shieber. 2024. From explicit CoT to implicit CoT: Learning to internalize CoT step by step. arXiv preprint arXiv:2405.14838.

Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Sean Welleck, Peter West, Chandra Bhagavatula, Ronan Le Bras, et al. 2023. Faith and fate: Limits of transformers on compositionality. Advances in Neural Information Processing Systems, 36:70293–70332.

Jeffrey L Elman. 1993. Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71–99.

Erwan Fagnou, Paul Caillon, Blaise Delattre, and Alexandre Allauzen. 2024. Chain and causal attention for efficient entity tracking. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 13174–13188, Miami, Florida, USA. Association for Computational Linguistics.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948.

Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, and Yuandong Tian. 2024. Training large language models to reason in a continuous latent space. arXiv preprint arXiv:2412.06769.

Peter M Higgins. 2017. Embedding in a finite 2-generator semigroup. Glasgow Mathematical Journal, 59(1):61–75.

Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. 2024. OpenAI o1 system card. arXiv preprint arXiv:2412.16720.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models.

Nora Kassner, Benno Krojer, and Hinrich Schütze. 2020.
Are pretrained language models symbolic reasoners over knowledge? In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 552–564, Online. Association for Computational Linguistics.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

Alexander Kozachinskiy, Felipe Urrutia, Hector Jimenez, Tomasz Steifer, Germán Pizarro, Matías Fuentes, Francisco Meza, Cristian B Calderon, and Cristóbal Rojas. 2025. Strassen attention: Unlocking compositional abilities in transformers based on a new lower bound method. arXiv preprint arXiv:2501.19215.

Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073–1094, Minneapolis, Minnesota. Association for Computational Linguistics.

Ilya Loshchilov and Frank Hutter. 2016. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983.

Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization.

Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems, 35:17359–17372.

William Merrill and Ashish Sabharwal.
2023. The parallelism tradeoff: Limitations of log-precision transformers. Transactions of the Association for Computational Linguistics, 11:531–545.

Shikhar Murty, Pratyusha Sharma, Jacob Andreas, and Christopher Manning. 2023. Grokking of hierarchical structure in vanilla transformers. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 439–448, Toronto, Canada. Association for Computational Linguistics.

Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, and Jacob Steinhardt. 2023. Progress measures for grokking via mechanistic interpretability. arXiv preprint arXiv:2301.05217.

Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2021. Show your work: Scratchpads for intermediate computation with language models.

A Paszke. 2019. PyTorch: An imperative style, high-performance deep learning library. arXiv preprint arXiv:1912.01703.

Binghui Peng, Srini Narayanan, and Christos Papadimitriou. 2024. On limitations of the transformer architecture. In First Conference on Language Modeling.

Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin, and Vedant Misra. 2022. Grokking: Generalization beyond overfitting on small algorithmic datasets. arXiv preprint arXiv:2201.02177.

Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah Smith, and Mike Lewis. 2023. Measuring and narrowing the compositionality gap in language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5687–5711, Singapore. Association for Computational Linguistics.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.

Abulhair Saparov and He He. 2022.
Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. arXiv preprint arXiv:2210.01240.

Alessandro Stolfo, Yonatan Belinkov, and Mrinmaya Sachan. 2023. A mechanistic interpretation of arithmetic reasoning in language models using causal mediation analysis. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7035–7052, Singapore. Association for Computational Linguistics.

Lena Strobl. 2023. Average-hard attention transformers are constant-depth uniform threshold circuits. arXiv preprint arXiv:2308.03212.

Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. RoFormer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063.

Vikrant Varma, Rohin Shah, Zachary Kenton, János Kramár, and Ramana Kumar. 2023. Explaining grokking through circuit efficiency. arXiv preprint arXiv:2309.02390.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.

Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. 2020. Investigating gender bias in language models using causal mediation analysis. Advances in Neural Information Processing Systems, 33:12388–12401.

Boshi Wang, Xiang Yue, Yu Su, and Huan Sun. 2024. Grokking of implicit reasoning in transformers: A mechanistic journey to the edge of generalization. Advances in Neural Information Processing Systems, 37:95238–95265.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models.

Thomas Wolf,
Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.

Sohee Yang, Elena Gribovskaya, Nora Kassner, Mor Geva, and Sebastian Riedel. 2024a. Do large language models latently perform multi-hop reasoning? In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10210–10229, Bangkok, Thailand. Association for Computational Linguistics.

Sohee Yang, Nora Kassner, Elena Gribovskaya, Sebastian Riedel, and Mor Geva. 2024b. Do large language models perform latent multi-hop reasoning without exploiting shortcuts? arXiv preprint arXiv:2411.16679.

Tian Ye, Zicheng Xu, Yuanzhi Li, and Zeyuan Allen-Zhu. 2024. Physics of language models: Part 2.1, grade-school math and the hidden reasoning process. In The Thirteenth International Conference on Learning Representations.

Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. 2022. STaR: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35:15476–15488.

Xuekai Zhu, Yao Fu, Bowen Zhou, and Zhouhan Lin. 2024. Critical data size of language models from a grokking perspective. arXiv preprint arXiv:2401.10463.

A Details for Theoretical Results

A.1 Proof of Theoretical Bound

Theorem A.1 (Restated from 5.1). Consider a causal transformer operating in p bits of precision, with d hidden units, H heads, and L layers. Assume it performs k-hop reasoning over E and R as defined above. Assume further that the attention pattern does not depend on e. If k ≤ |E| − 2, then, for some R,

L ≥ k / (8pdH)    (2)

Proof. Recall that the input has the form "Who is the r_k of the r_{k−1} . . . r_2 of the r_1 of e? Answer:". Our argument is based on communication complexity. We consider a communication game where Alice holds r_1, . . . , r_k, and Bob holds e.
Due to causal masking, Alice can compute the transformer's activations on all tokens "Who is the r_k of the . . . r_1 of" without receiving any information from Bob. In order to compute activations on the final tokens "e? Answer:" (and thus the prediction), Bob requires access to the outputs of attention heads on these tokens. Because the attention patterns are assumed to be independent of the query e, Alice can simply, for each head at the four tokens "e? Answer:", provide the activations within the span known to Alice, weighted with the attention weights. Thus, a total of 4HL such activation vectors is sufficient. Furthermore, each of these activations can be encoded with pd bits. Overall, Bob can therefore compute the output with access to only 4LpdH bits. That is, there is a way to compress the composed function r_k ∘ · · · ∘ r_1 into 4LpdH bits. Hence, 2^{4LpdH} upper-bounds the cardinality of such possible functions:

4LpdH ≥ log_2 |{r_k ∘ · · · ∘ r_1 : r_1, . . . , r_k ∈ R}|    (3)

We note that, in general, the right-hand side could be small: for instance, if R just contains the identity function, then the set of k-fold composed functions will also just contain the identity function (since its composition with itself again
equals itself). To conclude the theorem, the remaining problem is to show that there is a way of choosing R for which the right-hand side scales with k. We arbitrarily label the elements of E as {0, x_0, x_1, . . . , x_{n−1}}, and define R = {f, g} where:

f(0) = 0
f(x_i) = x_{(i+1) mod n}
g(0) = 0
g(x_0) = 0
g(x_i) = x_i (i > 0)

Intuitively, (1) there is a special "sink" entity 0, (2) f cyclically shuffles the non-sink entities, and (3) g maps the first entity in the order to the sink entity. We now consider all words over f, g of length k in which gg does not occur (recall that, by assumption, k ≤ |E| − 2 = n − 1). The number of such words is exponential in k; indeed, it is at least (3/2)^k.¹ Each such word, interpreted as a composition, generates a different transformation E → E.² Indeed, to show this, we simply note that such a composition maps x_i to 0 if and only if g was applied immediately after the (i + 1)-th application of f. Thus, when k ≤ |E| − 2, we have lower-bounded the right-hand side of (3) by k · log_2(3/2) > k/2. The theorem then follows by rearranging (3).

A.2 Discussion

Related Work Chen et al. (2024a) prove a lower bound on L for causal transformers solving a more complicated kind of composition task, which composes functions taking two arguments. The input provides both (i) a sequence of functions, and (ii) a sequence of entities serving as the second argument, with the output

z_{l+1}(w_l, z_l(w_{l−1}, z_{l−1}(. . . z_2(w_1, i_1))))    (4)

where z_1, . . . , z_{l+1} are two-argument functions, i_1 can be viewed as an input entity (similar to the query in our k-hop task), and, crucially, w_1, . . . , w_l serve as additional arguments to the two-argument functions. For this more complicated task, Chen et al. (2024a) prove a depth-width tradeoff. Unlike our result, theirs does not make any assumption on the attention patterns; however, it is specifically proven for this more complicated

¹ Indeed, it equals the Fibonacci number F_{k+2}.
For large k, this is lower-bounded by (3/2)^{k+2}; to make it valid even for small k, it is sufficient to instead lower-bound by (3/2)^k.

² We note the connection to the general fact that every finite semigroup can be embedded into a finite semigroup generated by an idempotent and a nilpotent (note that f is nilpotent and g is idempotent), proven using a similar construction in Theorem 1.1 of Higgins (2017).

task. Intuitively, separately presenting the functions z_1, . . . , z_{l+1} from the entities w_1, . . . , w_l serving as their second argument might play a key role in making the task challenging enough to enable the theoretical analysis in Chen et al. (2024a). Hence, it appears to remain open whether such a bound can also be proven for a task directly matching k-hop reasoning (Figure 2). Another line of work has shown limitations of one-layer transformers in performing function composition (Peng et al., 2024; Kozachinskiy et al., 2025); this is consistent with our evidence that k-hop tasks require increasing numbers of
layers, but does not bound how many layers are needed.

Bounds from NC1-hardness As transformers can be simulated in TC0 (e.g., Merrill and Sabharwal, 2023; Strobl, 2023), some work has obtained bounds conditional on the standard complexity conjecture TC0 ≠ NC1. Assuming this conjecture, transformers generally cannot solve k-hop composition unless the number of layers increases as k increases; as an example, consider E to be {1, . . . , 5}, and R a generating subset of the alternating permutation group A_5; then solving k-hop composition is NC1-hard and predicted not to be feasible for transformers. However, due to the difficulty of proving lower bounds for TC0, this line of reasoning does not provide precise information about how quickly exactly L needs to grow with k.

Role of attention pattern Theorem 5.1 applies in the case where attention patterns do not depend on the input entity e (in fact, they might still depend on r_1, . . . , r_k). Our proof strategy makes use of this assumption, because it limits the amount of information that any individual attention head at the final positions can obtain about r_1, . . . , r_k. It remains open whether this assumption can be relaxed. Intuitively, it does not seem clear how changing attention patterns could make the task easier. However, formally proving lower bounds for multi-layer transformers without either constraining attention patterns (as we do) or considering a more complex task (as done in Chen et al. (2024a)) remains challenging; we expect that further technical tools will be needed to overcome these challenges.

B Effect of more powerful models

Section 4 only presents results for our randomly initialized GPT-2 small model. Would the training data budget still grow exponentially as k increases, even with more powerful language models? We investigate this question by applying the same experiments as in Section 4.2 with two other model setups: finetuning and scaling up the model parameters.
We only report 1-run results for all experiments in this section.

B.1 Finetuning

In the finetuning setup, we finetune a pretrained language model on the same training set as in Section 4 and evaluate it on the same test set. Here we start with the pretrained GPT-2 small model (Radford et al., 2019), and use the same hyperparameters as for our randomly initialized GPT-2. Note that the pretrained GPT-2 adopts a learned positional embedding (Vaswani et al., 2017) instead of RoPE (Su et al., 2024), and thus we cannot directly isolate the effect of pretraining relative to the non-pretrained model. Here we only use this experiment to confirm that the significant increase in data budget still holds for pretrained models.

Table 2 presents the results of pretrained GPT-2 small models. Overall, the data budget still grows exponentially as k increases. On k-hop large, the model needs a ×10 budget for the 3-hop task and ×100 for 4-hop, which is the same as for our randomly initialized transformer. The pretrained model achieves lower accuracy on k-hop small datasets, e.g. only 93.7% accuracy on the 2-hop task. Nonetheless, the model still learns to perfectly solve k-hop small datasets
with enough data budget: e.g. for 3-hop, model accuracy improves significantly at the ×5 budget and reaches 100% at the ×10 budget. We consider the lower accuracy here likely due to the lack of hyperparameter optimization and of a better position encoding.

B.2 Scaling up the model size

We also evaluate setups where we scale up the number of model parameters. Kaplan et al. (2020) demonstrate that larger model size is crucial for high performance, especially the depth of transformer layers (Fagnou et al., 2024; Ye et al., 2024), and we want to investigate if larger models address the data budget issue. Here we experiment with the same architecture described in Section 4.1 (i.e. GPT-2 with RoPE), and we set the architectural hyperparameters (e.g. number of layers, attention heads, etc.) according to the GPT-2 medium3 and large4 model configurations. We randomly initialize the model and train and evaluate it on the k-hop small datasets in Section 4.1 from scratch. Hyperparameters for training are the same as in Section 4.1. Table 3 reports the results of such larger models. For both GPT-2 medium and large sized models, the growth of the data budget still persists.

3 https://huggingface.co/openai-community/gpt2-medium
4 https://huggingface.co/openai-community/gpt2-large

C Dataset details

C.1 Datasets in Section 4

Namespaces of entity and relation names. We provide details on the entity and relation namespaces used to generate the datasets in Section 3. Our dataset consists of |E| entities, each with a distinct name, and N relations. We use 600 unique single-token person names (e.g., Jennifer) and 20 single-token relation names (e.g., instructor), generated by ChatGPT5, as the namespaces for entities and relations, respectively. The complete vocabulary of relation names and a subset of entity names are provided in Tables 5 and 6.
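As a rough, non-authoritative illustration of how such a dataset fits together, the sketch below uses hypothetical stand-in names (not the ChatGPT-generated namespaces) and the k-hop small sizes |E| = 250, |R| = 10 described in this appendix:

```python
import random

random.seed(0)

# Hypothetical stand-ins for the namespaces described above; the paper uses
# 600 single-token person names and 20 single-token relation names.
ENTITIES = [f"Person{i}" for i in range(250)]   # |E| = 250, as in k-hop small
RELATIONS = [f"rel{i}" for i in range(10)]      # |R| = 10, as in k-hop small

# Each entity profile maps each of its |R| relations to some entity.
profiles = {e: {r: random.choice(ENTITIES) for r in RELATIONS} for e in ENTITIES}

def answer(entity, relations):
    """Follow a chain of relations through the profiles (a k-hop question)."""
    for r in relations:
        entity = profiles[entity][r]
    return entity

# Questions target one of 5 disjoint entity subsets, so the number of distinct
# 2-hop questions is |R|^2 * |E| / 5 = 5000.
targets = ENTITIES[: len(ENTITIES) // 5]
questions = [(e, r1, r2) for e in targets for r1 in RELATIONS for r2 in RELATIONS]
assert len(questions) == 5000
```

With these sizes, the enumeration reproduces the |R|² × |E|/5 = 5000 question count quoted for 2-hop small.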
Since our main experiments use a randomly initialized language model, the specific choice of vocabulary does not influence our conclusions.

Top-hierarchy entity profiles. Entities in the top layer of Figure 3 are not linked to any targets, making it non-trivial to generate their profiles. Nevertheless, we include their profiles in the training set to maintain consistency across all k-hop tasks with k ∈ {2, 3, 4}. In both the 2-hop and 3-hop tasks, answer tokens (i.e., entity names) appear in the training set as subject entities in their own profiles. To ensure the same holds for the 4-hop task, where answers correspond to top-layer entities, we generate profiles for these entities as well. Specifically, we generate these profiles by concatenating facts in which the subject entity is the top-layer entity itself, the relation is one from Table 5, and the object entity is a single-token name sampled from an additional set of 100 person names. These object names are distinct from the ones used in Figure 3. Since these facts are never used in any k-hop question in the training or test sets, including them does not affect our results or conclusions.

Table 4 reports the training set sizes for each dataset configuration. To maintain consistency across data budget setups, we include the same set of |E| entity profiles (e.g., |E| = 250 profiles for k-hop
small) in each training set. We partition the |E| entities into 5 disjoint subsets, each containing |E|/5 entities, and only generate reasoning questions targeting one subset (e.g., entities in the bottom hierarchy of Figure 3).

5 https://chatgpt.com/

Table 2: Accuracy of finetuned GPT-2 small models on k-hop small and k-hop large datasets with different training data budgets.

| Dataset | Task | ×1 | ×2 | ×5 | ×10 | ×20 | ×50 | ×100 |
|---|---|---|---|---|---|---|---|---|
| k-hop small | 2-hop | 93.7 | | | | | | |
| k-hop small | 3-hop | 6.6 | 8.2 | 45.4 | 100 | | | |
| k-hop small | 4-hop | 5.1 | 6.5 | 6.9 | 8.4 | 23.8 | 100 | 100 |
| k-hop large | 2-hop | 100 | | | | | | |
| k-hop large | 3-hop | 2.8 | 2.5 | 3.9 | 87.5 | 100 | | |
| k-hop large | 4-hop | 2.3 | 3 | 4.1 | 4.2 | 3.9 | 24.6 | 100 |

Table 3: Accuracy of GPT-2 Medium and Large on k-hop small datasets with different training data budgets. Empty cells indicate that the data budget exceeds the number of available questions possible to generate.

| Model | Dataset | ×1 | ×2 | ×5 | ×10 | ×20 | ×50 | ×100 |
|---|---|---|---|---|---|---|---|---|
| GPT-2 Medium | 2-hop small | 100 | | | | | | |
| GPT-2 Medium | 3-hop small | 5.6 | 13.9 | 99.9 | 100 | | | |
| GPT-2 Medium | 4-hop small | 5 | 5.8 | 8 | 10.2 | 99.8 | 100 | 100 |
| GPT-2 Large | 2-hop small | 100 | | | | | | |
| GPT-2 Large | 3-hop small | 6.5 | 22.0 | 100 | 100 | | | |
| GPT-2 Large | 4-hop small | 4.6 | 5.6 | 7.9 | 11.0 | 99.6 | 100 | 100 |

Table 4: Statistics of the number of training instances in each setup.

| Dataset | Hop | ×1 | ×2 | ×5 | ×10 | ×20 | ×50 | ×100 |
|---|---|---|---|---|---|---|---|---|
| small | 2-hop | 4,250 | | | | | | |
| small | 3-hop | 4,250 | 8,250 | 20,250 | 40,250 | | | |
| small | 4-hop | 4,250 | 8,250 | 20,250 | 40,250 | 80,250 | 200,250 | 400,250 |
| large | 2-hop | 20,500 | | | | | | |
| large | 3-hop | 20,500 | 40,500 | 100,500 | 200,500 | 400,500 | | |
| large | 4-hop | 20,500 | 40,500 | 100,500 | 200,500 | 400,500 | 1,000,500 | 2,000,500 |

Table 5: Vocabulary of relation names.

instructor, teacher, ruler, advisor, supervisor, leader, manager, director, patron, mentor, administrator, coordinator, tutor, predecessor, sponsor, financier, backer, overseer, employer, boss

Table 6: Subset of vocabulary of entity names.

Emil, Gavin, Chad, Flora, Adam, Addie, Bobby, Edwin, Gabby, Helen, Jeffery, Joel, Kris, Kristen, Lisa, Liam, Eva, Emma, Dylan, Isabella
Each entity profile includes |R| relations (e.g., |R| = 10 for k-hop small), allowing us to generate |R|² × |E|/5 = 5000 questions for 2-hop small, of which 80% are randomly selected as training instances.

C.2 Datasets for mixed and curriculum learning

In mixed learning, we introduce lower-hop reasoning questions as auxiliary training instances to facilitate learning more complex reasoning tasks. For the 3-hop task, we add 2-hop instances, and for the 4-hop task, we add both 2-hop and 3-hop instances. For k-hop small, we include 4k 2-hop instances as auxiliary data for 3-hop small, and 4k 2-hop and 20k 3-hop instances for 4-hop small. For k-hop large, we include 32k 2-hop instances for 3-hop large, and 32k 2-hop and 100k 3-hop instances for 4-hop large. Due to computational constraints, we did not specifically tune the size of the auxiliary data. The curriculum learning setup uses the same auxiliary instances as mixed learning.

D Training details

D.1 Baseline

This section provides the model architecture and training setup used in Section 4. Unless stated otherwise, the same configuration is applied across all experiments in this paper.

Model architecture. We adopt the GPT-2 small architecture6, consisting of 12 transformer layers with 12 attention
heads. The input embedding dimension is 768, and the context window is limited to 1024 tokens. Instead of the absolute position embeddings used in the original Transformer (Vaswani et al., 2017), we employ Rotary Position Embedding (RoPE) (Su et al., 2024) to encode positional information. We use the default GPT-2 tokenizer and extend the vocabulary to include all entity profile names (e.g., Jennifer), resulting in a vocabulary size of |V| = 50,740.

Training. The batch size is set to 512 with gradient accumulation steps of 4. We use the AdamW optimizer (Kingma and Ba, 2015; Loshchilov and Hutter, 2019) with the following hyperparameters: learning rate of 5e−4, ϵ = 1e−6, β1 = 0.9, β2 = 0.999, and weight decay of 0.1. Training begins with a 1k-step warm-up phase, followed by a cosine learning rate scheduler (Loshchilov and Hutter, 2016), with a minimum learning rate set to 0.1× the initial learning rate. Experiments are run on Nvidia A100 and H100 GPU cards (80GB). Each experiment is conducted on a single GPU, which takes about 8 hours for 20k optimization steps. The implementation is based on Huggingface (Wolf et al., 2019) and PyTorch (Paszke, 2019). GPT-2 is released under the MIT License by OpenAI.

D.2 Setup for mixed and curriculum learning

The model architecture for mixed and curriculum learning experiments remains the same as the baseline configuration described in Section D.1. The training setup for mixed learning also follows the baseline training setup without any modifications. Training in curriculum learning is divided into multiple stages, where each stage progressively introduces harder reasoning tasks. For a k-hop task, training consists of k−1 stages: the first stage includes only 2-hop questions; the second stage adds 3-hop questions; the third stage adds 4-hop questions to the training set (only applicable for 4-hop tasks). Hence we have 2 training stages for the 3-hop task and 3 training stages for the 4-hop task.
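As a concrete but non-authoritative sketch, the warm-up plus cosine schedule with a 0.1× floor used in these setups can be written as follows (a 20k-step horizon is an assumption, matching the "8 hours for 20k optimization steps" figure above):

```python
import math

# Sketch of the schedule in D.1: 1k linear warm-up steps, then cosine decay
# to a floor of 0.1x the initial learning rate of 5e-4.
BASE_LR, WARMUP, TOTAL, MIN_FRAC = 5e-4, 1000, 20000, 0.1

def lr_at(step):
    if step < WARMUP:
        return BASE_LR * step / WARMUP             # linear warm-up
    progress = (step - WARMUP) / (TOTAL - WARMUP)  # 0 -> 1 over the decay phase
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return BASE_LR * (MIN_FRAC + (1.0 - MIN_FRAC) * cosine)

print(lr_at(1000))   # peak learning rate: 0.0005
print(lr_at(20000))  # floor: 0.1 * 0.0005 = 5e-05
```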
The maximum number of training steps for each stage across different target tasks is reported in Table 7. Each stage employs the same learning rate scheduler and warm-up steps as in the baseline training setup to maintain consistency. The batch size and gradient accumulation steps remain the same as in the baseline setup.

6 https://huggingface.co/openai-community/gpt2

Table 7: Training steps for each training stage of curriculum learning.

| Task | Stage 1 | Stage 2 | Stage 3 | Total |
|---|---|---|---|---|
| 3-hop small | 10000 | 10000 | - | 20000 |
| 4-hop small | 5000 | 5000 | 10000 | 20000 |
| 3-hop large | 10000 | 10000 | - | 20000 |
| 4-hop large | 10000 | 10000 | 20000 | 40000 |

E Detailed results

E.1 Results for LMs on k-hop small

We plot the test accuracy of LMs on k-hop small across training steps in Figure 9. The pattern is similar to the one observed in Figure 4. Models trained with small budgets give only modest improvement over the random baseline (i.e. 2% for k-hop small). Larger budgets not only lead to higher accuracy, but also achieve it with far fewer training steps. We also report k-hop small results of models trained with mixed learning and curriculum learning in Figure 10. Again, we observe that curriculum learning gives the best result compared to the baseline and mixed learning.

E.2 Standard deviation

For each experiment reported in Sections 4 and 6, we made 3 runs with different random seeds. We report the mean and standard deviation of the test accuracy for each model in Table 8. For most results we do not observe a large standard deviation, indicating that our conclusion is robust to the randomness. For particular runs there is a large deviation, especially when the data budget is not enough (e.g. the model trained with curriculum learning on 4-hop large with a ×2 budget), which gets smaller when we further add more data to the training set.

E.3 Log scale of data budget

We plot the minimal data budget required to solve k-hop tasks on a log scale as k increases. The data points are based on the numbers in Table 1. Figure 11 shows the results, confirming that the required data budget grows exponentially with k.

[Figure 9 panels: (a) 3-hop small and (b) 4-hop small; accuracy against optimization steps for budgets ×1 to ×100.]

Figure 9: Model accuracy on 3-hop small and 4-hop small. The x-axis refers to the number of optimization steps.

[Figure 10 panels: (a) 3-hop small and (b) 4-hop small; accuracy against data budget for the baseline, mix, and curriculum setups.]

Figure 10: Model performance on k-hop small datasets with mixed learning and curriculum learning.

F Additional mechanistic interpretability experiments

F.1 Patching preceding prompt tokens

Figure 6 suggests that only the last token (e.g. the whitespace <space>) includes information about all bridge entities, and hence the reasoning process likely occurs at this position. In this section, we use activation patching to further demonstrate that the reasoning process of our language model occurs only at the last token position instead of at preceding prompt tokens.
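As a toy illustration of activation patching (an assumed miniature residual-stream "model" with made-up dimensions, not our GPT-2 setup), one can replace the clean run's residual stream at a single (layer, position) with the corrupted run's and measure the drop in the answer probability:

```python
import numpy as np

rng = np.random.default_rng(0)
L, T, D = 4, 6, 8  # layers, token positions, hidden size (toy values)
Ws = rng.normal(size=(L, D, D)) / np.sqrt(D)
A = np.tril(np.ones((T, T))) / np.arange(1, T + 1)[:, None]  # causal "attention"
readout = rng.normal(size=D)

def run(x, patch=None, cache=None):
    """Forward pass over a residual stream of shape (T, D); optionally replace
    the stream at (layer, position) with the cached corrupted-run stream."""
    streams = []
    for l in range(L):
        x = x + np.tanh(A @ x @ Ws[l])  # residual update with position mixing
        if patch is not None and patch[0] == l:
            x = x.copy()
            x[patch[1]] = cache[l][patch[1]]
        streams.append(x)
    p = 1 / (1 + np.exp(-(x[-1] @ readout)))  # "answer probability" at last token
    return p, streams

clean_in = rng.normal(size=(T, D))
corrupt_in = rng.normal(size=(T, D))
p_clean, _ = run(clean_in)
_, corrupt_cache = run(corrupt_in)

# Causal effect of each (layer, position): P_clean - P_patched
effects = np.array([[p_clean - run(clean_in, patch=(l, t), cache=corrupt_cache)[0]
                     for t in range(T)] for l in range(L)])
print(effects.shape)  # (4, 6)
```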
[Figure 11: Minimal data budget (log scale) to achieve above 80% accuracy, plotted against k for k-hop small and k-hop large.]

Figure 11: Minimal data budget to solve k-hop tasks.

Our activation patching again involves three types of runs: the clean run, the corrupted run, and the patched run. For each clean run, we randomly select a distinct instance as the corrupted run. For each layer and each token position in the input prompt, we create a patched run by replacing the residual stream of the clean run with that of the corrupted run at the corresponding position. The causal effect is calculated as Pclean − Ppatched, where Pclean denotes the output probability of the correct answer in the clean run, and Ppatched denotes the probability in the patched run. We report the average causal effect over 1000 held-out instances.

Figure 12 presents the activation patching results across token positions. Noticeably, no significant causal effects are observed at any token position following the <Entity> token, except for the last <space> token. Since the <Entity> token is the first position where the model can access complete query information (i.e., relations and source entity), this result supports our claim that the reasoning process primarily occurs at the last token position. We also observe large causal effects on relation

Figure 12: Results for activation
patching, replacing the residual stream of a particular layer across prompt tokens.

[Figures 12 and 13 plot effect size (0.0-0.8) over layers 1-12 for the prompt tokens "Who is the <r4> of the <r3> of the <r2> of the <r1> of <space> <Entity>? \n Answer: <space>".]

Figure 13: Results for activation patching replacing the MLP output of a particular layer across prompt tokens.

tokens when patching deeper layers (e.g., the 4th layer for the <r4> token). We consider this effect to arise because the model only starts to read the information of the <r4> relation from the 4th layer on when predicting the answer. Hence, deeper layers at the <r4> position should not involve any reasoning-related computation. To show this, we also perform the same activation patching experiment by replacing only the output of each MLP layer. As shown in Figure 13, the relation tokens only show causal effects in the first layer, further supporting our hypothesis that deeper layers do not reprocess relation information.

F.2 Causal effects across training steps

In Section 5, we observed that LMs learn to solve k-hop tasks through a layer-wise lookup process, with specific layers responsible for producing bridge entities from 1-hop to k-hop. A key question is whether these circuits (i.e., layers) are developed sequentially from 1-hop to k-hop or simultaneously across multiple hop positions during training. To investigate this, we apply the activation patching experiment described in Section 5 at every checkpoint of the training process. We focus on the model trained on 4-hop large with a ×100 budget, following the setup in Section 5. Checkpoints are saved every 1k training steps, and we apply activation patching at the last input token position. For each checkpoint, we measure the causal effect of each layer for bridge entities at each hop position.

LMs tend to build circuits of different i-hop bridge entities simultaneously. Figure 14 shows the causal effect of each layer across training steps.
We observe that circuits responsible for 1-hop, 2-hop, and 3-hop bridge entities emerge simultaneously at around the 17,000th training step, with each circuit appearing in distinct layers (e.g., the 1st layer for the 1-hop entity). This pattern indicates that the model tends to develop circuits for different hop positions at once rather than sequentially from easier (e.g., 1-hop) to more complex (e.g., 3-hop) entities.

Curriculum learning gradually builds circuits on existing ones. We further analyze the development of circuits in the curriculum learning model trained on the 4-hop task with a ×5 budget (Section 6). Training this model includes 3 stages. Checkpoints are saved every 1k steps, and causal effects are calculated at each stage. For each stage, we calculate the causal effects using the following corrupted runs:

• C1-hop: assesses 1-hop circuits across stages 1, 2 and 3.
• C2-hop: assesses 2-hop circuits across stages 2 and 3.
• C3-hop: assesses 3-hop circuits in stage 3.

Figure 15 presents the results with our curriculum learning model. During stage 1, the model establishes circuits for 1-hop entities. In stage 2, the 2-hop circuit emerges, building upon the existing 1-hop circuit. Stage 3 follows the same pattern, with the 3-hop circuit extending the prior circuits. This layer-by-layer construction supports our hypothesis that curriculum learning encourages progressive circuit
development, allowing higher-hop circuits to build upon existing lower-hop circuits, explaining the observed effectiveness in Section 6.

Curriculum learning has also been explored in prior work (Deng et al., 2024; Hao et al., 2024), where the focus is on internalizing explicit reasoning abilities. These studies start from chain-of-thought (CoT) rationales and train language models to reason with progressively fewer prompt tokens. In contrast, our setup does not rely on any explicit rationales. Instead, we study how curriculum learning affects the data budget required for training and provide an explanation for why such strategies improve sample efficiency from a mechanistic interpretability perspective.

[Figure 14 panels: causal effects from corrupting the 1-hop, 2-hop and 3-hop bridge entity, each over layers 1-12 and checkpoints 1000-37000, effect size 0.0-0.8.]

Figure 14: Causal effects calculated by corrupting the 1-hop, 2-hop and 3-hop bridge entity in our baseline model. The x-axis refers to checkpoints across training steps. We observe that the circuits corresponding to different hop positions tend to emerge at once (e.g., around the 17,000th step), rather than gradually developing over time.

[Figure 15 panels: the same causal-effect plots for the curriculum learning model, with stages 1-3 marked along the checkpoint axis.]

Figure 15: Causal effects calculated by corrupting the 1-hop, 2-hop and 3-hop bridge entity in our curriculum learning model. The x-axis refers to checkpoints across training steps.
Gray regions indicate stages where causal effects are not calculated for certain entities, e.g., stage 1 does not include 3-hop bridge entities in the training data, so the rightmost figure omits these effects in stage 1. Circuits for higher-hop entities tend to be established on top of existing ones for lower-hop entities.

Table 8: Mean and standard deviation of test accuracy across 3 runs, reported as mean (std) per training data budget.

| Model | Size | Task | ×1 | ×2 | ×5 | ×10 | ×20 | ×50 | ×100 |
|---|---|---|---|---|---|---|---|---|---|
| baseline | small | 2-hop | 99.8 (0.1) | | | | | | |
| baseline | small | 3-hop | 5.7 (0.0) | 12.6 (2.6) | 99.9 (0.1) | 100.0 (0.0) | | | |
| baseline | small | 4-hop | 4.3 (0.4) | 6.2 (0.5) | 6.7 (0.2) | 9.2 (0.4) | 96.4 (3.1) | 96.4 (0.0) | 100.0 (0.0) |
| baseline | large | 2-hop | 99.9 (0.0) | | | | | | |
| baseline | large | 3-hop | 2.5 (0.3) | 3.1 (0.1) | 4.9 (1.6) | 94.6 (9.4) | 100.0 (0.0) | | |
| baseline | large | 4-hop | 2.0 (0.3) | 2.6 (0.3) | 3.1 (0.1) | 3.7 (0.3) | 4.0 (0.4) | 6.3 (1.0) | 100.0 (0.0) |
| mix | small | 2-hop | 100.0 (0.0) | | | | | | |
| mix | small | 3-hop | 29.2 (3.0) | 88.1 (8.6) | 99.9 (0.1) | 100.0 (0.0) | | | |
| mix | small | 4-hop | 16.4 (1.8) | 29.3 (1.8) | 71.3 (41.4) | 99.8 (0.1) | 100.0 (0.0) | 100.0 (0.0) | 100.0 (0.0) |
| mix | large | 2-hop | 100.0 (0.0) | | | | | | |
| mix | large | 3-hop | 8.3 (1.4) | 11.2 (3.8) | 38.7 (18.1) | 100.0 (0.0) | 100.0 (0.0) | | |
| mix | large | 4-hop | 2.1 (0.2) | 2.7 (0.1) | 3.7 (0.2) | 3.4 (0.1) | 4.3 (0.6) | 7.2 (1.9) | 100.0 (0.0) |
| curriculum | small | 2-hop | 100.0 (0.0) | | | | | | |
| curriculum | small | 3-hop | 56.1 (1.5) | 96.0 (0.7) | 100.0 (0.0) | 100.0 (0.0) | | | |
| curriculum | small | 4-hop | 29.3 (2.7) | 68.7 (5.4) | 99.6 (0.2) | 100.0 (0.0) | 100.0 (0.0) | 100.0 (0.0) | 100.0 (0.0) |
| curriculum | large | 2-hop | 100.0 (0.0) | | | | | | |
| curriculum | large | 3-hop | 35.3 (1.5) | 96.3 (1.2) | 100.0 (0.0) | 100.0 (0.0) | 100.0 (0.0) | | |
| curriculum | large | 4-hop | 9.4 (1.9) | 36.1 (14.8) | 100.0 (0.0) | 100.0 | | | |
arXiv:2505.17928v2 [cs.SE] 28 May 2025

Towards Practical Defect-Focused Automated Code Review

Junyi Lu 1,2,†, Lili Jiang 3, Xiaojia Li 3, Jianbing Fang 4,†, Fengjun Zhang 1, Li Yang 1, Chun Zuo 5

Abstract

The complexity of code reviews has driven efforts to automate review comments, but prior approaches oversimplify this task by treating it as snippet-level code-to-text generation and relying on text similarity metrics like BLEU for evaluation. These methods overlook repository context, real-world merge request evaluation, and defect detection, limiting their practicality. To address these issues, we explore the full automation pipeline within the online recommendation service of a company with nearly 400 million daily active users, analyzing industry-grade C++ codebases comprising hundreds of thousands of lines of code. We identify four key challenges: ❶ capturing relevant context, ❷ improving key bug inclusion (KBI), ❸ reducing false alarm rates (FAR), and ❹ integrating human workflows. To tackle these, we propose ❶ code slicing algorithms for context extraction, ❷ a multi-role LLM framework for KBI, ❸ a filtering mechanism for FAR reduction, and ❹ a novel prompt design for better human interaction. Our approach, validated on real-world merge requests from historical fault reports, achieves a 2× improvement over standard LLMs and a 10× gain over previous baselines. While the presented results focus on C++, the underlying framework design leverages language-agnostic principles (e.g., AST-based analysis), suggesting potential for broader applicability.

1. Introduction

Code review is essential for improving code quality and detecting defects (Fagan, 2002). Modern Code Review
Modern Code Review †Work done during the internship or tenure at Kuaishou Tech- nology.1Laboratory of Precise Computing, Institute of Software, Chinese Academy of Sciences, Beijing, China2University of Chi- nese Academy of Sciences, Beijing, China3Kuaishou Technol- ogy, Beijing, China4Independent Researcher5Sinosoft Company Limited, Beijing, China. Correspondence to: Li Yang <yan- gli2017@iscas.ac.cn >. Proceedings of the 42ndInternational Conference on Machine Learning , Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).(MCR) is widely used in open-source (Rigby et al., 2008; 2014; Rigby & Bird, 2013) and industrial settings (Sadowski et al., 2018; Shan et al., 2022), typically involving: (A) code submission, (B) reviewer examination, (C) feedback, and (D) developer revisions. Despite its benefits, MCR is labor-intensive and time- consuming (Yang et al., 2016), driving research toward automated review comment generation. Existing meth- ods—whether retrieval-based (Gupta & Sundaresan, 2018; Siow et al., 2020; Hong et al., 2022) or deep-learning-driven (Tufano et al., 2021; 2022; Li et al., 2022b;a; Lin et al., 2023; Lu et al., 2023)—often frame it as a snippet-level code-to-text task. However, this oversimplification diverges from the core goal of reviewers: detecting defects (Bacchelli & Bird, 2013) (see Section A). Furthermore, current eval- uations rely excessively on textual similarity metrics (e.g., BLEU (Papineni et al., 2002), ROUGE (Lin, 2004)), which fail to measure real-world effectiveness (Lu et al., 2025). Challenges. To address these limitations, we investigate a full code review pipeline within a real-world online service (Figure 1). Our system integrates with an internal DevOps platform, generating review reports, filtering comments, and aligning them with code lines. A detailed description of this real-world workflow integration, designed for seamless adoption by developers, is provided in Appendix B. 
This deployment reveals four
key challenges (Appendix C):

Capturing Proper Code Context: Effective review requires analyzing dependencies beyond the immediate diff hunk (e.g., variable declarations or method calls). However, excessively long inputs degrade LLM performance, necessitating efficient context extraction.

Improving Key Bug Inclusion (KBI): The goal of automated review is to detect critical defects, yet existing methods rely on textual similarity metrics, which fail to measure defect detection capability. More robust evaluation methods, such as Key-Bug Inclusion (KBI), are needed.

Reducing False Alarm Rates (FAR): Generative models often produce irrelevant or overly strict comments (e.g., nitpicks, hallucinations), burdening developers. A robust filtering mechanism is required to reduce false positives and enhance the signal-to-noise ratio.

Human-Centric Workflow Integration: Practical review tools must seamlessly integrate into developers' workflows, ensuring comment alignment with code lines while minimizing cognitive overhead. Existing solutions often overlook this critical usability aspect.

[Figure 1 diagram labels: Code Author Makes a Merge Request; Check User White List; Check File White List; Auto Review Workflow; Attach Comments to File Lines; Comment on Website; Notice the Participants.]

Figure 1. The code review automation pipeline integrated into the online service.

Our Approach. To address these challenges, we propose: ❶ A static analysis system using code slicing to extract relevant context. ❷ A multi-role LLM framework with chain-of-thought reasoning to enhance defect detection. ❸ A filtering mechanism to eliminate false-positive nitpicks and hallucinations. ❹ A line-aware prompt design for precise comment placement.

Evaluation. We validate our framework on real-world system failures, including historical core dumps and fault reports that caused significant financial losses.
We evaluate it using multiple open-source LLM engines, demonstrating a 2× performance improvement over standard LLM methods and a 10× improvement over prior baselines. An ablation study further confirms the contribution of each component, highlighting the impact of code slicing, multi-role reasoning, and filtering mechanisms.

Contributions. Our key contributions include being the first to: ❶ Repository-Level and Merge-Request Granularity: elevating automated code review from snippet-level tasks to repository-wide and merge-request (pull-request) granularity. ❷ Integration with Real-World DevOps Workflows: deploying automation into a practical online review system with more practical and objective evaluation metrics beyond text similarity. ❸ Validation on Industry-Scale Defects: demonstrating effectiveness on real-world, high-impact failures in industry-level codebases instead of synthetic test data. ❹ Code-Review-Oriented LLM Framework: designing a specialized framework leveraging code slicing, multi-role collaboration, and filtering mechanisms, achieving substantial improvement in code review performance.

2. Background: Code Review Automation

Automating code review is crucial for maintaining software quality by identifying critical bugs early. The goal is to detect severe issues in new merge requests and provide necessary comments. In 2022, company reports showed that 30% of severe P1+ incidents (asset losses exceeding $350,000) and 24.04% of P4+ incidents stemmed from low-level faults due to inadequate reviews. Even in 2024, change-related core failures accounted for 67% of incidents, with code change-related graded incidents comprising 19.54%, highlighting the urgent need for effective automated review tools. These tools help ensure thorough, compliant reviews, reducing defect risks. To understand reviewer needs, we surveyed a super reviewer group,
summarizing findings in Section D. Background on code slicing and multi-role systems, key techniques in our work, is introduced in Sections E and F.

3. Proposed Approach

3.1. Overview

Figure 2 illustrates the decoupled architecture of our code review automation: 1) Code Slicing: extracting code from the diff hunk within repository context (Section 3.2); 2) Multi-role Code Review System: employing a multi-role system to conduct reviews and compile the results (Section 3.3); 3) Redundancy Comment Filter Mechanism: filtering out redundant and irrelevant comments to avoid nitpicks and hallucinations (Section 3.4); 4) Line Number Localization: ensuring precise identification of the code lines where issues occur (Section 3.5). To evaluate the automation, we construct a dataset from historical fault reports, simulating real-world merge requests that introduced defects (Section 3.6).

[Figure 2 diagram: a triggered merge request flows through Code Slicing (Section 3.2), the Multi-role Code Review System (Section 3.3), the Redundancy Comment Filter Mechanism (Section 3.4), and Line Number Localization (Section 3.5), outputting a code review report.]

Figure 2. An overview of our automated code review workflow.

3.2. Code Slicing

Previous work used method-level or diff-level code snippets as independent inputs. However, new code is integrated into a larger codebase during reviews, and understanding the structural context is crucial. We developed a code slicing process that integrates multiple slicing strategies, selectable based on the analysis needs. To avoid redundant slices, we use a caching mechanism to enhance efficiency. The pseudo code of our slicing algorithms is presented in Section G. Initially, the repository is cloned, and the merge request commit is checked out. A static analysis tool is then applied to generate abstract syntax trees (ASTs), which serve as the foundation for our slicing process.
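To illustrate the flavor of one slicing strategy, the sketch below locates the smallest parent function containing a changed line. It uses Python's ast module on a toy Python snippet purely for illustration; the paper's pipeline operates on C++ ASTs produced by a static analysis tool:

```python
import ast

# Toy source; suppose the diff touched line 4 ("y = 2").
SOURCE = """\
def outer():
    x = 1
    def inner():
        y = 2
        return y
    return inner() + x
"""

def smallest_parent_function(source, changed_line):
    """Return the name of the tightest function definition enclosing the line."""
    tree = ast.parse(source)
    best = None
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            start, end = node.lineno, node.end_lineno
            if start <= changed_line <= end:
                # keep the smallest (tightest) enclosing span seen so far
                if best is None or (end - start) < (best.end_lineno - best.lineno):
                    best = node
    return best.name if best else None

print(smallest_parent_function(SOURCE, 4))  # -> inner
print(smallest_parent_function(SOURCE, 2))  # -> outer
```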
Based on data dependency and control flow analysis, one or more of the following four optional slicing algorithms may be applied: 1) Original Diff: the basic code diff without transformations, capturing the essential changes in the commit. 2) Parent Function: locates the smallest parent function containing the changes, providing functional context. 3) Left Flow: tracks the flow of all left-hand values (L-values) in the function and control structures, focusing on the lifecycle of variables. 4) Full Flow: extends Left Flow by tracing right-hand values (R-values) and collecting the signatures of callee functions, offering coverage of variable usage and modifications.

3.3. Multi-role Code Review System

Our multi-role code review system involves four key roles: Reviewer, Meta-Reviewer, Validator, and Translator. These roles collaborate to enhance the accuracy and efficiency of the review process. The system design is illustrated in Figure 3, and we detail the roles and their processes below. ❶ Reviewer: reviews each code snippet generated by the code slicing algorithm (Section 3.2) and provides detailed comments on potential issues in a predefined format. ❷ Meta-Reviewer: aggregates comments from multiple Reviewers, filtering and sorting them based on predefined thresholds. It merges common issues across reviews. ❸ Validator: validates and refines the merged comments, re-scores them, and ensures that only comments exceeding a certain threshold are retained. ❹ Translator: translates
the final comments into the required language for multinational teams, ensuring proper formatting for direct integration into the development environment. Each role is integrated with the Chain-of-Thought technique, as detailed in Section H.

3.4. Redundancy Comment Filter Mechanism

LLMs often produce an overwhelming number of comments, many of which are either nitpicks or hallucinations. To mitigate this issue, we implemented a Redundancy Comment Filter Mechanism to reduce the number of irrelevant comments.

Our filtering mechanism, integrated within the multi-role system (Section 3.3), operates by answering three key questions for each comment: Q1: Is this comment a nitpick? Typical nitpicks include excessive code comments, handling unnecessary edge cases, or overly complex error handling. Q2: Does the comment identify a fake problem (i.e., a non-existent bug)? For example, if the comment flags a function call to a known reliable internal library, null pointer checks are considered irrelevant. Q3: How critical is the issue identified by this comment? Minor issues, like missing comments, are less severe than potential core dumps or infinite loops.

Each question is rated on a scale from 1 to 7, with 1 indicating a nitpick, fake problem, or minimal issue, and 7 indicating a severe and real issue. The scoring scale (1 to 7) is inspired by related work (McAleese et al., 2024). We chose this scale to enable a fine-grained and manageable distinction. These scores form the basis of the filtering process throughout the review workflow.

Coarse Filtering and Sorting by Reviewer. During the review process, the Reviewer LLMs score each comment based on Q1-Q3. Comments with Q1 or Q2 scores of 4 or below are discarded. This specific threshold was established heuristically to enhance interpretability and has been validated by developer feedback during internal piloting.
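A minimal sketch of this coarse filter (hypothetical comment records and an illustrative Top-N value; not the paper's implementation):

```python
# Scores follow the paper's 1-7 scale: higher Q1/Q2 means "more real",
# higher Q3 means "more severe". Comments with Q1 or Q2 <= 4 are discarded,
# the rest sorted by Q3 and truncated to the Top-N.
def coarse_filter(comments, top_n=3):
    kept = [c for c in comments if c["q1"] > 4 and c["q2"] > 4]
    kept.sort(key=lambda c: c["q3"], reverse=True)
    return kept[:top_n]

comments = [
    {"text": "possible null dereference",   "q1": 6, "q2": 6, "q3": 7},
    {"text": "missing doc comment",         "q1": 2, "q2": 6, "q3": 2},  # nitpick
    {"text": "off-by-one loop bound",       "q1": 7, "q2": 5, "q3": 6},
    {"text": "lib call cannot return null", "q1": 6, "q2": 3, "q3": 5},  # fake
]
print([c["text"] for c in coarse_filter(comments)])
# -> ['possible null dereference', 'off-by-one loop bound']
```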
The remaining comments are then sorted based on their Q3 score and truncated to the Top-N comments. Fine Filtering and Sorting by Meta-Reviewer. The Meta-Reviewer further refines the filtered comments by merging those flagged by multiple Reviewers and removing comments mentioned by only one Reviewer.

Towards Practical Defect-Focused Automated Code Review

Figure 3. The multi-role system for automating code review. (The figure depicts Reviewers, a Meta-Reviewer, Validators, and a Translator, connected by filter, sort, and truncation stages with a traceback to the code.)

Validation and Re-scoring by Validators. Validators then re-score the comments by revisiting the original code snippets and applying the same Q1-Q3 criteria. A secondary filter is applied, ensuring that only the most relevant and critical comments proceed to translation and integration into the development platform. Integration with the Multi-role System. The filtered comments are processed by the remaining multi-role components, including translation (if necessary) and final submission to the development platform. This multi-stage process ensures that the delivered comments are both relevant and concise, minimizing redundancy and false alarms. The heuristic approach to threshold definition described herein was chosen to prioritize generalizability and interpretability and to mitigate overfitting in this study. While providing a robust baseline, exploring adaptive or machine-learned thresholds remains a valuable direction for future enhancement toward more nuanced filtering.

3.5. Line Number Localization

A key challenge overlooked
in prior work is the precise localization of comments within the code. Unlike code summarization tasks, code reviews require pinpointing the specific lines of code where issues are identified. Without this information, developers face inefficiencies in verifying and addressing comments. For example, the change-involved function has 94.54 lines of code on average according to our statistics, so missing line localization can result in significant delays for developers. We propose a code formatting approach inspired by Aider (Gauthier, 2024), tailored for code review tasks. As shown in Table 1, the format includes an operation label (indicating whether a line is kept, added, or deleted), the line number, and the code content. For non-contiguous code lines, ellipses are used to indicate omissions.

Table 1. Code formatting with line position information.

 linenumber|{kept code line}      Represents lines that remain unchanged.
-linenumber|{deleted code line}   Indicates lines that have been removed.
+linenumber|{added code line}     Marks newly added lines.
...|...                           Indicates the omission of non-essential lines.

3.6. Offline Validation

To systematically assess the performance of our system, we developed a dataset curated from the company's fault report platform. Each case in this dataset corresponds to an issue that resulted in actual company losses. For each reported fault, we trace back to the merge request that introduced the fault and its subsequent fixing merge request. Using these, we generate ideal reference comments containing details such as affected files, specific lines of code, fault location, root cause, suggested fix, example code, and issue category. The motivation for conducting such validation is illustrated in Section I.

4. Evaluation Design

4.1. Research Questions

We define the following research questions (RQs) to guide our evaluation; their detailed illustrations are in Section J: RQ1: How does the overall performance of our framework compare with previous works?
RQ2: How do code slicing algorithms impact the performance of the framework? RQ3: How do the different components of the multi-role system impact the performance of our framework? RQ4: How does the redundancy comment filter mechanism address nitpicks and hallucinations? RQ5: How does the representation of line number position information impact overall performance and the line number localization success rate?

4.2. Dataset and Studied Models

The primary goal of code review is to prevent problematic code from being merged into the target branch. To simulate real-world code review scenarios, we collected data from a company's core framework team, which is responsible for the production code of the short video recommendation core service. This data was gathered using fault reports recorded on an online platform. These cases come from four repositories and involve a total of 4,090 developers. By analyzing these reports, we traced the merge requests (MRs) that introduced the issues and examined the specific commits to reproduce the code snapshots. The detailed statistics are presented in Section K. Our framework supports multiple LLM engines. To mitigate security risks, we only studied open-source models that can be deployed locally. We exclusively selected large instruction-tuned models due to the complex human-instruction-based tasks in our workflow. The final list of models
includes: LLaMA-3.1 (70B), Qwen2 (72B), Command R+ (104B), Mistral-large-2407 (123B), and LLaMA3.1 (405B). The reasons for not selecting other models are outlined in Section L.

4.3. Metrics

In accordance with the real-world developer expectations discussed in Section D, we evaluate performance at the merge request (MR) level using four metrics, with their formal definitions provided in Section M: ❶ Key Bug Inclusion (KBI): Assesses the model's ability to recall critical issues that could lead to tangible losses. ❷ False Alarm Rate (FAR): Captures the proportion of irrelevant or erroneous comments, with two variants (FAR1 for all MRs and FAR2 for MRs where key bugs are recalled). ❸ Comprehensive Performance Index (CPI): Balances the completeness of key issue detection (KBI) and precision (100 − FAR), analogous to the F1-score. It is also computed in two variants (CPI1 and CPI2). ❹ Line Localization Success Rate (LSR): Measures the accuracy of line-level references by checking whether comments point to the correct code lines.

4.4. Baselines and Experimental Setups

Since our framework focuses on C++, we selected state-of-the-art baselines that support this language: CodeReviewer (Li et al., 2022b): A T5 model pre-trained for code review tasks and then fine-tuned. CCT5 (Lin et al., 2023): A T5 model pre-trained on CodeChangeNet and then fine-tuned. LLaMA-Reviewer (Lu et al., 2023): A large language model fine-tuned for code review tasks based on LLaMA. DISCOREV (Ben Sghaier & Sahraoui, 2024): A T5 model enhanced via cross-task knowledge distillation for code review. The detailed experimental setups of our framework and the baselines are presented in Section N.

5. Evaluation Results

5.1. RQ1. Comparison with Baselines

We evaluated the performance of our framework on the fault merge request dataset, comparing it with several baseline approaches. Our framework was tested with different large language model (LLM) engines.
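Since CPI is described above as "analogous to the F1-score" while its formal definition is deferred to the paper's Section M, a plausible reading can be sketched as the harmonic mean of KBI and precision. Treat this as an assumption for illustration, not the paper's official formula:

```python
def cpi(kbi: float, far: float) -> float:
    """Comprehensive Performance Index sketch, assuming an F1-style
    harmonic mean of KBI (key-bug recall, in percent) and
    precision = 100 - FAR. This reading is an assumption; the paper's
    formal definition lives in its Section M."""
    precision = 100.0 - far
    if kbi + precision == 0.0:
        return 0.0  # degenerate case: nothing recalled, every comment wrong
    return 2.0 * kbi * precision / (kbi + precision)
```

Under this reading the reported numbers are reproduced to within rounding; for instance, KBI = 31.11 with FAR1 = 87.81 yields CPI1 ≈ 17.5, and KBI = 20.00 with FAR1 = 75.37 yields ≈ 22.07.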
Our main experiments primarily utilized a homogeneous setup, employing the same LLM across all roles. This approach was chosen to isolate and clearly assess whether a single, powerful model could effectively address key challenges in code review. Recognizing the practical importance and potential benefits of diverse model deployments, we also conducted extended comparison experiments with heterogeneous LLM assignments for the reviewer and validator roles. These experiments, detailed in Appendix O, show that strategic combinations, such as pairing a strong validator with a smaller reviewer, can achieve comparable or even superior performance while potentially optimizing resource usage. For the baselines, since they do not prioritize comments, we evaluated their comments based on whether they passed their respective "quality estimation" filters, which assess whether a code snippet requires a comment. The results are in Table 2.

Table 2. Overall performance comparison of our framework using different LLM engines and baseline models. LLM engines marked with * are quantized. "Val" indicates if the Validator role was used.

Model | Val | KBI↑ | FAR1↓ | CPI1↑ | FAR2↓ | CPI2↑
Baselines
CodeReviewer | — | 0.00 | 97.78 | 0.00 | – | –
CCT5 | — | 2.22 | 97.58 | 2.32 | 90.91 | 3.57
LLaMA-Reviewer | — | 2.22 | 97.62 | 2.30 | 92.86 | 3.39
DISCOREV | — | 0.00 | 97.78 | 0.00 | – | –
Ours (Left Flow)
LLaMA3.1 (70B) | w/o | 20.00 | 84.42 | 17.52 | 66.54 | 25.03
LLaMA3.1 (70B) | w | 2.22 | 37.04 | 4.29 | 66.67 | 4.17
Qwen2 (72B) | w/o | 40.00 | 91.21 | 14.42 | 83.57 | 23.29
Qwen2 (72B) | w | 26.67 | 90.63 | 13.87 | 81.52 | 21.83
Command R+ (103B) | w/o | 20.00 | 92.99 | 10.38 | 76.08 | 21.78
Command R+ (103B) | w | 4.44 | 85.00 | 6.86 | 62.50 | 7.95
Mistral-2407 (123B) | w/o | 26.67 | 91.07 | 13.38 | 74.85 | 25.89
Mistral-2407 (123B) | w | 26.67 | 87.07 | 17.41 | 68.18 | 29.01
LLaMA3.1 (405B)* | w/o | 31.11 | 87.81 | 17.51 | 67.98 | 31.56
LLaMA3.1 (405B)* | w | 20.00 | 75.37 | 22.07 | 43.52 | 29.54
Ours (Full Flow)
LLaMA3.1 (70B) | w/o | 13.33 | 88.22 | 12.51 | 61.67 | 19.78
LLaMA3.1 (70B) | w | 0.00 | 46.67 | 0.00 | – | –
Qwen2 (72B) | w/o | 42.22 | 90.99 | 14.86 | 83.91 | 23.30
Qwen2 (72B) | w | 28.89 | 91.73 | 12.86 | 79.06 | 24.28
Command R+ (103B) | w/o | 15.56 | 94.22 | 8.43 | 77.14 | 18.51
Command R+ (103B) | w | 8.89 | 76.30 | 12.93 | 58.33 | 14.65
Mistral-2407 (123B) | w/o | 28.89 | 90.68 | 14.09 | 75.44 | 26.55
Mistral-2407 (123B) | w | 28.89 | 86.11 | 18.76 | 67.30 | 30.68
LLaMA3.1 (405B)* | w/o | 31.11 | 89.41 | 15.80 | 73.10 | 28.86
LLaMA3.1 (405B)* | w | 20.00 | 77.96 | 20.97 | 67.59 | 24.73

The results indicate that our framework significantly outperforms the baselines, by up to 10x across most key metrics, such as key bug inclusion (KBI) and the comprehensive performance index (CPI). This marked improvement is likely due to our framework's end-to-end approach to code review automation, which addresses the key challenges of the task and introduces strategies specifically designed to tackle each challenge. Among the LLM engines tested in our primary setup, LLaMA3.1-405B demonstrated the best overall performance, which aligns with the general scaling laws of language models, where capability often increases with parameter count on complex tasks such as code review.
However, our evaluations (detailed in Table 2) also included more compact LLMs. These results show that certain smaller models, particularly those with strong inherent reasoning capabilities, can still achieve competitive performance within our framework. This finding is particularly relevant given the industry trend towards increasing "capacity density" in newer architectures, where smaller models are progressively narrowing the performance gap. While the largest models may provide peak effectiveness, these observations suggest that a range of LLMs can be effectively utilized, allowing for a balance between performance and computational resource demands, a point further explored in our heterogeneous model assignments (Appendix O). Summary of RQ1. Our framework surpasses baseline approaches significantly (up to 10x on KBI/CPI), thanks to its end-to-end design. LLaMA3.1-405B stands out among the tested engines, highlighting the role of model capability. Investigations into heterogeneous LLM combinations also suggest the potential for optimized deployments. (See Appendix T.1 for the extended conclusion.)

5.2. RQ2. Effectiveness of Code Slicing

We tested the four code slicing algorithms described in Section 3.2: Original Diff, Parent Function, Left Flow, and Full Flow. It is important to clarify
that while our framework does not employ an explicit Retrieval-Augmented Generation (RAG) pipeline, our code slicing mechanism is designed with a RAG-aligned objective. Specifically, it serves a similar purpose to RAG by strategically retrieving and providing the LLM with only the most relevant contextual code "slices" from the broader codebase. This process aims to focus the model on pertinent information, thereby enhancing its reasoning and effectiveness in the code review task. Our focus in this section is on KBI and CPI1, as these metrics indicate how input content affects the maximum recall capability of LLMs for code review. The experiments were structured to evaluate the comments generated by the large language models under different conditions, including all comments, comments after applying a coarse filter, and top-k ranked comments (based on scores from Q3). We also tested multi-reviewer settings, where the meta-reviewer merges the comments, and validator settings, where validators further refine the comments. The average results are shown in Table 3, based on the LLaMA3.1-405B-AWQ-Int4 LLM engine. To provide further insight into the variability of these results, the minimum and maximum values for each reported metric across the three runs are detailed in Appendix R. The results reveal that using only the diff or the parent function is less effective, while more detailed slicing (Left Flow and Full Flow) improves performance, especially in key bug inclusion. Surprisingly, Left Flow performs better than Full Flow, likely due to the large language model's reduced capability when provided with longer contexts, which can cause distraction. This finding supports our assumption that providing targeted and relevant code context is critical for maximizing LLM performance in code review tasks, an observation consistent with the principles underpinning RAG systems, where curated information significantly enhances model outputs.
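The RAG-aligned framing above can be made concrete with a deliberately simplified retrieval policy. The sketch below is only an analogy: it ranks candidate context snippets by lexical identifier overlap with the changed lines and keeps the best within a size budget, whereas the actual slicing algorithms use data- and control-flow analysis; all names and the budget parameter are invented:

```python
import re
from typing import List

IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")


def select_context(changed_lines: List[str],
                   candidate_snippets: List[str],
                   budget_chars: int = 2000) -> List[str]:
    """Greedy, overlap-based stand-in for flow-based slicing: rank candidate
    snippets by identifiers shared with the change, then keep the highest
    scoring ones until a character budget (a crude proxy for the LLM
    context window) is exhausted."""
    changed_ids = set(IDENT.findall("\n".join(changed_lines)))
    ranked = sorted(candidate_snippets,
                    key=lambda s: len(changed_ids & set(IDENT.findall(s))),
                    reverse=True)
    picked, used = [], 0
    for snippet in ranked:
        if used + len(snippet) <= budget_chars:
            picked.append(snippet)
            used += len(snippet)
    return picked
```

The budget cap mirrors the observation that shorter, targeted context (Left Flow) can outperform broader context (Full Flow): past a point, extra snippets distract rather than help.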
During our analysis of the recalled merge requests (MRs), we found another interesting pattern. Although some slicing algorithms perform worse overall, each algorithm uniquely succeeds in specific cases. This means that each slicing strategy provides valuable context in certain situations. Figure 4 presents a Venn diagram showing the union of and differences among the key bugs recalled by each slicing algorithm under the "All" and "+Meta Reviewer" settings. Notably, Left Flow and Full Flow recall the most bugs, with significant overlap, but almost every method also uniquely recalls some. This phenomenon mirrors how human reviewers operate, expanding their focus to different levels of granularity, such as inspecting parent functions or understanding variable usage in different contexts. Some defects are easier to spot in one context, while others require a different view. Therefore, a combination of various slicing strategies might be a promising direction. Summary of RQ2. Left Flow and Full Flow significantly improve key bug inclusion and overall performance compared to simpler slicing. Left Flow often outperforms Full Flow, possibly because a shorter context helps maintain focus. Notably, each slicing approach has exclusive successes, suggesting that combining them could further improve detection. (See Appendix T.2 for the extended conclusion.)

5.3. RQ3. Effectiveness of Multi-role System

To better understand the capabilities of our multi-role system, we
conduct experiments on: ❶ Leveraging the non-determinism of large language models; ❷ The self-correction capability (validator); ❸ The chain-of-thought (CoT) prompting strategy.

5.3.1. Number of Reviewers

Previous research has shown that the non-determinism of large language models (LLMs) can impact results. Specifically, with a best-of-N sampling approach, smaller LLMs can sometimes match or surpass larger models. Since our framework includes a multi-reviewer scenario, where a meta-reviewer merges comments from multiple reviewers, we conduct experiments to assess whether increasing the number of reviewers improves performance. The results in Table 4 show that increasing the number of reviewers from one to three improves KBI but also

Table 3. Impact comparison of different code slicing algorithms on key bug inclusion (KBI) and the comprehensive performance index (CPI), based on LLaMA3.1-405B-AWQ-Int4. Experiments for a single reviewer are conducted three times to compute the average. "All" represents all comments generated by the reviewer; "Coarse filter" refers to filtering using Q1 and Q2 scores during generation; "Top-k" denotes truncated comments sorted by Q3 scores; the "+Meta Reviewer" and "+Validator" settings are evaluated under Top-5 truncation.
Each cell reports KBI↑ / CPI1↑. Columns "All" through "Top-3" are single-reviewer settings; "+Meta Reviewer" and "+Validator" are multi-reviewer settings.

Code Slicing Algorithm | All | Coarse Filter | Top-10 | Top-5 | Top-3 | +Meta Reviewer | +Validator
Original Diff | 23.70 / 5.71 | 17.04 / 12.70 | 16.30 / 12.75 | 14.81 / 12.48 | 11.11 / 10.90 | 13.33 / 5.24 | 11.11 / 10.46
Parent Function | 31.85 / 5.52 | 22.22 / 15.18 | 17.04 / 13.57 | 14.81 / 13.40 | 8.15 / 9.63 | 20.00 / 11.01 | 11.11 / 10.81
Left Flow | 37.04 / 9.77 | 33.33 / 12.80 | 32.59 / 13.94 | 25.93 / 14.26 | 17.78 / 12.77 | 31.11 / 17.51 | 20.00 / 22.07
Full Flow | 39.26 / 9.67 | 32.59 / 11.95 | 31.85 / 13.18 | 25.93 / 13.90 | 13.33 / 10.23 | 31.11 / 15.80 | 20.00 / 20.97

Figure 4. Venn diagram of recalled key bugs identified by different code slicing algorithms (Original Diff, Parent Function, Left Flow, Full Flow). The "All" setting represents all comments, while the "+Meta Reviewer" setting denotes multi-reviewer comments merged by the meta-reviewer. To analyze per-category performance, a breakdown across logic, security, and performance-related bugs is shown in Appendix P.

Table 4. Impact of increasing the number of reviewers from one to three. The "+Meta Reviewer" setting represents the meta-reviewer merging the reviewers' comments, while the "+Validator" setting denotes the validator refining the comments after the meta-reviewer. All settings use Top-5 truncation of reviewer comments.

Processing Stage | Reviewer Num | KBI↑ | FAR1↓ | CPI1↑ | FAR2↓ | CPI2↑
Original Diff, + Meta Reviewer | 1 | 8.89 | 86.15 | 10.83 | 69.17 | 13.80
Original Diff, + Meta Reviewer | 3 | 13.33 | 96.74 | 5.24 | 75.56 | 17.25
Original Diff, + Validator | 1 | 4.44 | 76.59 | 7.47 | 73.33 | 7.62
Original Diff, + Validator | 3 | 11.11 | 90.11 | 10.46 | 71.00 | 16.07
Parent Function, + Meta Reviewer | 1 | 15.56 | 80.37 | 17.36 | 73.81 | 19.52
Parent Function, + Meta Reviewer | 3 | 20.00 | 92.41 | 11.01 | 73.15 | 22.92
Parent Function, + Validator | 1 | 6.67 | 63.70 | 11.26 | 55.56 | 11.59
Parent Function, + Validator | 3 | 11.11 | 89.48 | 10.81 | 65.33 | 16.83
Left Flow, + Meta Reviewer | 1 | 26.67 | 83.26 | 20.57 | 70.56 | 27.99
Left Flow, + Meta Reviewer | 3 | 31.11 | 87.81 | 17.51 | 67.98 | 31.56
Left Flow, + Validator | 1 | 11.11 | 69.26 | 16.32 | 23.33 | 19.41
Left Flow, + Validator | 3 | 20.00 | 75.37 | 22.07 | 43.52 | 29.54
Full Flow, + Meta Reviewer | 1 | 22.22 | 78.04 | 22.09 | 71.17 | 25.10
Full Flow, + Meta Reviewer | 3 | 31.11 | 89.41 | 15.80 | 73.10 | 28.86
Full Flow, + Validator | 1 | 15.56 | 71.78 | 20.06 | 61.43 | 22.17
Full Flow, + Validator | 3 | 20.00 | 77.96 | 20.97 | 67.59 | 24.73

leads to higher FAR1 and FAR2, which negatively affect CPI1 and CPI2 in the "+Meta Reviewer" setting. However, after introducing the validator, the performance for three reviewers significantly improves in terms of CPI1 and CPI2. While more reviewers boost KBI, they also increase false alarms, making the validator essential to overall performance. Summary of RQ3.1. Increasing the number of reviewers lifts key bug inclusion but raises false alarms. A validator mitigates these alarms, implying a trade-off between coverage and precision. (See Appendix T.3 for extended conclusions.)

5.3.2. Self-Correction Ability of LLMs

In our framework, the validator refines and validates generated comments to correct hallucinations. Table 5 shows that the validator lowers FAR1 and FAR2 but also reduces KBI, indicating a trade-off between precision and recall. Our analysis suggests such erroneous rejections of valid comments by validators primarily stem from factors including context propagation from earlier pipeline stages, minor inaccuracies in comment positioning, occasional model input token limits, and inherent scoring variances. Summary of RQ3.2. Self-correction (validator) reduces false alarms but can inadvertently discard critical bug-

Table 5. The self-correction ability of LLMs through the Validator role. "w/o" denotes without Validator, "w/" denotes with Validator.
Validator Status | KBI↑ | FAR1↓ | CPI1↑ | FAR2↓ | CPI2↑
Original Diff, w/o | 13.33 | 96.74 | 5.24 | 75.56 | 17.25
Original Diff, w/ | 11.11 | 90.11 | 10.46 | 71.00 | 16.07
Parent Function, w/o | 20.00 | 92.41 | 11.01 | 73.15 | 22.92
Parent Function, w/ | 11.11 | 89.48 | 10.81 | 65.33 | 16.83
Left Flow, w/o | 31.11 | 87.81 | 17.51 | 67.98 | 31.56
Left Flow, w/ | 20.00 | 75.37 | 22.07 | 43.52 | 29.54
Full Flow, w/o | 31.11 | 89.41 | 15.80 | 73.10 | 28.86
Full Flow, w/ | 20.00 | 77.96 | 20.97 | 67.59 | 24.73

Table 6. Impact of Chain-of-Thought (CoT) on the framework, presenting paired slicing algorithm comparisons. "SR" denotes Single Reviewer, "MR" denotes Multi Reviewers. All multi-reviewer settings use three reviewers and Top-5 truncation. Each cell reports KBI↑ FAR1↓ CPI1↑.

Stage | CoT | Original Diff | Parent Function
SR - All | w/o | 20.74 97.08 5.12 | 31.11 96.91 5.60
SR - All | w/ | 23.70 96.74 5.71 | 31.85 96.98 5.52
SR - Top-5 | w/o | 17.04 89.17 13.05 | 16.30 94.57 8.03
SR - Top-5 | w/ | 14.81 89.11 12.48 | 14.81 87.67 13.40
MR - Meta | w/o | 15.56 82.74 16.36 | 17.78 89.09 13.52
MR - Meta | w/ | 13.33 96.74 5.24 | 20.00 92.41 11.01
MR - Val | w/o | 6.67 63.89 11.26 | 11.11 75.19 15.35
MR - Val | w/ | 11.11 90.11 10.46 | 11.11 89.48 10.81

Stage | CoT | Left Flow | Full Flow
SR - All | w/o | 34.81 95.16 8.49 | 40.00 94.40 9.83
SR - All | w/ | 37.04 94.36 9.77 | 39.26 94.48 9.67
SR - Top-5 | w/o | 20.74 92.37 11.10 | 20.74 91.94 11.50
SR - Top-5 | w/ | 25.93 90.15 14.26 | 25.93 90.43 13.90
MR - Meta | w/o | 17.78 88.74 13.79 | 26.67 82.81 20.90
MR - Meta | w/ | 31.11 87.81 17.51 | 31.11 89.41 15.80
MR - Val | w/o | 11.11 75.56 15.28 | 6.67 68.33 11.01
MR - Val | w/ | 20.00 75.37 22.07 | 20.00 77.96 20.97

detecting comments. Balancing these factors is crucial. (See Appendix T.3 for extended conclusions.)

5.3.3. Effectiveness of Chain-of-Thought

We compared our specified CoT approach with free-form reasoning. Table 6 shows that CoT prompts often excel in complex slicing tasks (Left Flow, Full Flow), but in simpler tasks (Original Diff, Parent Function), free-form reasoning can be just as good or better. Summary of RQ3.3. CoT prompting is especially beneficial in complex contexts. For simpler code slices, the model may perform well without explicit CoT guidance. As more powerful reasoning models, such as GPT-O1 and DeepSeek-R1, emerge, the advantage of specified CoT over free-form reasoning may further diminish. (Appendix T.3)

5.4. RQ4. Effectiveness of Comment Filter Mechanism

The comment filter mechanism includes ❶ the coarse reviewer filter, ❷ Top-k truncation, ❸ the meta-reviewer filter, and ❹ validator validation.

Table 7. The KBI, FAR1, and CPI1 results for different code slicing algorithms utilizing our filtering mechanism. This table illustrates the impact of sequential filter stages, including different Top-k truncation values (k=10, 5, 3) for single-reviewer paths. For the multi-reviewer path results shown here (+Meta Reviewer, +Validator), Top-k is set to 5. A comprehensive discussion of Top-k sensitivity, covering both single-reviewer variations and multi-reviewer settings, is presented in Appendix S. Each cell reports KBI↑ FAR1↓ CPI1↑.

Reviewer | Filter / Trunc | Original Diff | Parent Function
Single | All | 23.70 96.74 5.71 | 31.85 96.98 5.52
Single | Coarse Filter | 17.04 89.72 12.70 | 22.22 88.17 15.18
Single | Top-10 | 16.30 89.43 12.75 | 17.04 88.53 13.57
Single | Top-5 | 14.81 89.11 12.48 | 14.81 87.67 13.40
Single | Top-3 | 11.11 89.14 10.90 | 8.15 88.15 9.63
Multi | + Meta | 13.33 96.74 5.24 | 20.00 92.41 11.01
Multi | + Validator | 11.11 90.11 10.46 | 11.11 89.48 10.81

Reviewer | Filter / Trunc | Left Flow | Full Flow
Single | All | 37.04 94.36 9.77 | 39.26 94.48 9.67
Single | Coarse Filter | 33.33 92.03 12.80 | 32.59 92.65 11.95
Single | Top-10 | 32.59 91.13 13.94 | 31.85 91.68 13.18
Single | Top-5 | 25.93 90.15 14.26 | 25.93 90.43 13.90
Single | Top-3 | 17.78 90.00 12.77 | 13.33 91.60 10.23
Multi | + Meta | 31.11 87.81 17.51 | 31.11 89.41 15.80
Multi | + Validator | 20.00 75.37 22.07 | 20.00 77.96 20.97

Table 7 shows that in flow-based slicing (Left Flow, Full Flow), adding these filters sequentially decreases FAR1 and improves CPI1. In simpler slicing (Original Diff, Parent Function), only the coarse filter proves particularly effective, likely due to the limited context causing more hallucinations. A comprehensive sensitivity analysis of the Top-k truncation hyperparameter k, detailing its impact on single-reviewer paths with various k values (as presented in Table 7) and an extended analysis within our multi-reviewer framework, is provided in Appendix S. Summary of RQ4. Our comment filter significantly reduces false alarms and improves performance in more detailed slicing methods. In simpler slicing, the coarse filter stage is the most impactful step. (See Appendix T.4 for the extended conclusion.)

5.5. RQ5. Line Number Position

Line number localization is crucial for real-world applications. We tested three formats: No: no line position information is provided; Relative: code is provided with a separate list containing relative line positions; and Inline: position information is integrated directly into the code using the format in Table 1. Table 8 shows that providing line number information (especially inline) significantly improves performance and the localization success rate (LSR). Summary of RQ5. Embedding line numbers inline yields the highest performance and LSR, likely because it helps the model anchor comments to specific lines accurately. (See Appendix T.5 for the extended conclusion.)

Table 8. Impact of line number position information. "All" represents the average of all comments generated by reviewers, while "+Meta Reviewer" denotes the multi-reviewer workflow with three reviewers and Top-5 truncation. LSR (Line Success Rate) measures whether LLMs provide valid lines, regardless of correctness.
Position | KBI↑ | FAR1↓ | CPI1↑ | FAR2↓ | CPI2↑ | LSR↑
"All" setting
No | 30.37 | 95.66 | 7.58 | 93.12 | 11.17 | 90.54
Relative | 42.96 | 94.60 | 9.58 | 92.75 | 12.32 | 92.69
Inline | 37.04 | 94.36 | 9.77 | 90.66 | 14.79 | 91.11
"+ Meta Reviewer" setting
No | 17.78 | 90.70 | 12.21 | 72.71 | 21.53 | –
Relative | 17.78 | 93.52 | 9.50 | 76.04 | 20.41 | –
Inline | 31.11 | 87.81 | 17.51 | 67.98 | 31.56 | –

6. Related Work

Code review comments play a crucial role in maintaining software quality, leading to significant research efforts in automating this process. Early studies, such as Gupta & Sundaresan (2018), employed retrieval-based methods, utilizing LSTM models to match new code snippets with historical changes to recommend comments. Siow et al. (2020) advanced this approach by incorporating attention mechanisms to capture semantic nuances more effectively. With the advent of deep learning, the focus shifted towards automated comment generation. Pioneering efforts by Tufano et al. (2021; 2022) introduced models trained on diverse datasets, including technical texts and code snippets. Subsequent innovations included specialized models such as CodeReviewer (Li et al., 2022b), which leveraged pre-training on code review data, and AUGER (Li et al., 2022a), which used review tags
to streamline the task. Another approach, CommentFinder (Hong et al., 2022), presented an efficient retrieval-based model tailored to new code. More recently, LLaMA-Reviewer (Lu et al., 2023) trained large language models specifically for code review tasks, DISCOREV (Ben Sghaier & Sahraoui, 2024) improved performance by applying cross-task knowledge distillation across successive tasks, and Yu et al. (2024b) focused on fine-tuning LLMs to improve both the accuracy and comprehensibility of automated code reviews. Alongside these advancements in direct comment generation, recent studies have also explored the application of LLMs to other related aspects of the software development lifecycle, such as enhancing code reviewer recommendation (Wang et al., 2024) and automating commit message generation (Tao et al., 2024), underscoring the expanding utility of large models in diverse software engineering contexts. Despite these advances, previous works have oversimplified the code review process by treating it as a set of snippet-level code-comment pairs. These approaches typically split merge requests into independent snippets and framed the task as a one-to-one neural machine translation (NMT) problem, converting code into natural language. While innovative, this approach provides a limited and idealized view of code review, often evaluated with text similarity metrics, such as BLEU or ROUGE, which do not fully capture the expectations of real-world developers for finding defects. In practice, code review is more complex, evaluated at the level of entire merge requests of repository codebases rather than individual code-comment pairs. The focus on text similarity fails to consider the broader context, including how comments address the full scope of changes in an MR. Although these studies contribute valuable insights, they fall short of replicating the holistic, real-world workflow.

7.
Conclusion

Motivated by the limitations of prior research, which oversimplified code review automation and fell short of practical applications, we explored the complete automation pipeline within a real-world company. We identified and addressed key challenges such as capturing relevant code context, improving key bug inclusion (KBI), reducing false alarm rates (FAR), and integrating human-centric workflows. Our approach introduces four code slicing algorithms, a multi-role LLM framework, a comment filtering mechanism, and a prompt format with inline line number localization. Evaluations on real-world data demonstrated that our framework significantly outperforms existing methods, achieving up to a 10x improvement in the comprehensive performance index (CPI) over previous baselines. Key insights include: ❶ Flow-based slicing (Left Flow and Full Flow) provided better context and outperformed simpler methods. ❷ Increasing the number of reviewers improved KBI but required validation to manage false alarms effectively. ❸ The validator role reduced hallucinations but slightly lowered KBI, highlighting a trade-off between precision and recall. ❹ Chain-of-thought guidance proved more valuable in complex slicing scenarios. ❺ Inline line number localization enhanced both comment accuracy and localization success rates. Looking ahead, four key areas for future research are: ❶ Enhancing the code slicing algorithms to capture more relevant context, potentially combining different slicing levels. ❷ Refining LLM interactions and enhancing engine LLM capability to improve key bug recall. ❸ Further optimizing the filtering mechanism, including the investigation of adaptive or learned thresholds,
to reduce nitpicks and hallucinations more effectively. ❹ Streamlining the pipeline to make automation more accessible. Limitation. We discuss limitations in Section V. Data Availability. We publicly release our code at https://zenodo.org/records/14779175. Details regarding its open-source status can be found in Section U.

Acknowledgments

This work was supported by the National Key Research and Development Program of China (No. 2023YFB3307202) and the Alliance of International Science Organizations Collaborative Research Program (No. ANSO-CR-KP-2022-03).

Impact Statement

This work advances the practical defect-focused applications of automated code review. We redefine the code review task by shifting from a snippet-level code-to-text formulation to an end-to-end, merge-request-level codebase analysis. Our approach fills previously overlooked sub-tasks in the automation pipeline and introduces more practical and objective metrics, better aligning the process with developers' expectations for defect detection in real-world software development. These refinements not only establish a more comprehensive foundation for future research in automated code review but also offer insights applicable to other software engineering tasks. Furthermore, our generic framework, including the proposed context slicing algorithms, provides a versatile methodology that can inspire broader applications in code intelligence.

References

Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

Ackerman, A. F., Buchwald, L. S., and Lewski, F. H. Software inspections: an effective verification process. IEEE Software, 6(3):31–36, 1989.

AlOmar, E. A., Venkatakrishnan, A., Mkaouer, M. W., Newman, C., and Ouni, A. How to refactor this code?
an exploratory study on developer-chatgpt refactoring conversations. In Proceedings of the 21st International Conference on Mining Software Repositories, pp. 202–206, 2024.

Bacchelli, A. and Bird, C. Expectations, outcomes, and challenges of modern code review. In 2013 35th International Conference on Software Engineering (ICSE), pp. 712–721. IEEE, 2013.

Ben Sghaier, O. and Sahraoui, H. Improving the learning of code review successive tasks with cross-task knowledge distillation. Proceedings of the ACM on Software Engineering, 1(FSE):1086–1106, 2024.

Chen, W., Su, Y., Zuo, J., Yang, C., Yuan, C., Chan, C.-M., Yu, H., Lu, Y., Hung, Y.-H., Qian, C., et al. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors. In ICLR, 2024a.

Chen, Y., Hu, Z., Zhi, C., Han, J., Deng, S., and Yin, J. Chatunitest: A framework for llm-based test generation. In Companion Proceedings of the 32nd ACM International Conference on the Foundations of Software Engineering, FSE 2024, pp. 572–576, New York, NY, USA, 2024b. Association for Computing Machinery. ISBN 9798400706585. doi: 10.1145/3663529.3663801. URL https://doi.org/10.1145/3663529.3663801.

Cheng, X., Wang, H., Hua, J., Xu, G., and Sui, Y. Deepwukong: Statically detecting software vulnerabilities using deep graph neural network. ACM Transactions on Software Engineering and Methodology (TOSEM), 30(3):1–33, 2021.

Fagan, M. Design and code inspections to reduce errors in program development. In Software pioneers, pp. 575–607. Springer, 2002.

Gauthier, P. Aider - ai pair
programming in your terminal, 2024. URL https://github.com/paul-gauthier/aider. Accessed: 2024-09-12.

Guo, T., Chen, X., Wang, Y., Chang, R., Pei, S., Chawla, N., Wiest, O., and Zhang, X. Large language model based multi-agents: A survey of progress and challenges. In 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024). IJCAI, 2024.

Gupta, A. and Sundaresan, N. Intelligent code reviews using deep learning. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'18) Deep Learning Day, 2018.

Hong, Y., Tantithamthavorn, C., Thongtanunam, P., and Aleti, A. Commentfinder: a simpler, faster, more accurate code review comments recommendation. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 507–519, 2022.

Hsieh, C.-P., Sun, S., Kriman, S., Acharya, S., Rekesh, D., Jia, F., and Ginsburg, B. Ruler: What's the real context size of your long-context language models? In First Conference on Language Modeling, 2024.

Jiang, T., Wang, Z., Liang, J., Li, C., Wang, Y., and Wang, T. Robustkv: Defending large language models against jailbreak attacks via kv eviction. In Proceedings of the 13th International Conference on Learning Representations (ICLR). OpenReview, April 2025. URL https://openreview.net/forum?id=L5godAOC2z. Poster presentation.

Kononenko, O., Baysal, O., and Godfrey, M. W. Code review quality: How developers see it. In Proceedings of the 38th International Conference on Software Engineering, pp. 1028–1038, 2016.

Kwon, W., Li, Z., Zhuang, S., Sheng, Y., Zheng, L., Yu, C. H., Gonzalez, J. E., Zhang, H., and Stoica, I. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.
Li, L., Yang, L., Jiang, H., Yan, J., Luo, T., Hua, Z., Liang, G., and Zuo, C. Auger: automatically generating review comments with pre-training models. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 1009–1021, 2022a.

Li, Z., Zou, D., Xu, S., Ou, X., Jin, H., Wang, S., Deng, Z., and Zhong, Y. Vuldeepecker: A deep learning-based system for vulnerability detection. In 25th Annual Network and Distributed System Security Symposium, NDSS 2018, San Diego, California, USA, February 18-21, 2018. The Internet Society, 2018. URL https://www.ndss-symposium.org/wp-content/uploads/2018/02/ndss2018_03A-2_Li_paper.pdf.

Li, Z., Zou, D., Xu, S., Jin, H., Zhu, Y., and Chen, Z. Sysevr: A framework for using deep learning to detect software vulnerabilities. IEEE Transactions on Dependable and Secure Computing, 19(4):2244–2258, 2021.

Li, Z., Lu, S., Guo, D., Duan, N., Jannu, S., Jenks, G., Majumder, D., Green, J., Svyatkovskiy, A., Fu, S., et al. Automating code review activities by large-scale pre-training. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 1035–1047, 2022b.

Liang, J., Wang, Y., Li, C., Zhu, R., Jiang, T., Gong, N., and Wang, T. Graphrag under fire. arXiv preprint arXiv:2501.14050, 2025.

Lin, B.,
Wang, S., Liu, Z., Liu, Y., Xia, X., and Mao, X. Cct5: A code-change-oriented pre-trained model. In Proceedings of the 38th IEEE/ACM International Conference on Automated Software Engineering, 2023.

Lin, C.-Y. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pp. 74–81, 2004.

Lin, J., Tang, J., Tang, H., Yang, S., Chen, W.-M., Wang, W.-C., Xiao, G., Dang, X., Gan, C., and Han, S. Awq: Activation-aware weight quantization for on-device llm compression and acceleration. Proceedings of Machine Learning and Systems, 6:87–100, 2024.

Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., and Liang, P. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12:157–173, 2024.

Lu, J., Yu, L., Li, X., Yang, L., and Zuo, C. Llama-reviewer: Advancing code review automation with large language models through parameter-efficient fine-tuning. In 2023 IEEE 34th International Symposium on Software Reliability Engineering (ISSRE), pp. 647–658. IEEE, 2023.

Lu, J., Li, Z., Shen, C., Yang, L., and Zuo, C. Exploring the impact of code review factors on the code review comment generation. Automated Software Engineering, 31(2):71, 2024.

Lu, J., Li, X., Hua, Z., Yu, L., Cheng, S., Yang, L., Zhang, F., and Zuo, C. Deepcrceval: Revisiting the evaluation of code review comment generation. In International Conference on Fundamental Approaches to Software Engineering, pp. 43–64. Springer, 2025.

Ma, L., Yang, W., Xu, B., Jiang, S., Fei, B., Liang, J., Zhou, M., and Xiao, Y. Knowlog: Knowledge enhanced pre-trained language model for log understanding. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, pp. 1–13, 2024a.

Ma, Z., Chen, A. R., Kim, D. J., Chen, T.-H., and Wang, S. Llmparser: An exploratory study on using large language models for log parsing.
In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering, pp. 1–13, 2024b.

Marjamäki, D. Cppcheck - static analysis of c/c++ code, 2024. URL https://github.com/danmar/cppcheck. Accessed: 2024-09-12.

McAleese, N., Pokorny, R. M., Uribe, J. F. C., Nitishinskaya, E., Trebacz, M., and Leike, J. Llm critics help catch llm bugs, 2024. URL https://arxiv.org/abs/2407.00215.

Organization, P. Flask - the python micro framework for building web applications, 2024. URL https://github.com/pallets/flask. Accessed: 2024-09-12.

Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–318, 2002.

Ray, P. P. Chatgpt: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3:121–154, 2023.

Rigby, P. C. and Bird, C. Convergent contemporary software peer review practices. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering, pp. 202–212, 2013.

Rigby, P. C., German, D. M., and Storey, M.-A. Open source software peer review practices: a case study of the apache server. In Proceedings of the 30th international
conference on Software engineering, pp. 541–550, 2008.

Rigby, P. C., German, D. M., Cowen, L., and Storey, M.-A. Peer review on open-source software projects: Parameters, statistical models, and theory. ACM Transactions on Software Engineering and Methodology (TOSEM), 23(4):1–33, 2014.

Sadowski, C., Söderberg, E., Church, L., Sipko, M., and Bacchelli, A. Modern code review: a case study at google. In Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice, pp. 181–190, 2018.

Schäfer, M., Nadi, S., Eghbali, A., and Tip, F. An empirical evaluation of using large language models for automated unit test generation. IEEE Transactions on Software Engineering, 2023.

Shan, Q., Sukhdeo, D., Huang, Q., Rogers, S., Chen, L., Paradis, E., Rigby, P. C., and Nagappan, N. Using nudges to accelerate code reviews at scale. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 472–482, 2022.

Siow, J. K., Gao, C., Fan, L., Chen, S., and Liu, Y. Core: Automating review recommendation for code changes. In 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), pp. 284–295. IEEE, 2020.

Sommerville, I. Software engineering, 9th edition. ISBN-10, 137035152:18, 2011.

Tao, W., Zhou, Y., Wang, Y., Zhang, H., Wang, H., and Zhang, W. Kadel: Knowledge-aware denoising learning for commit message generation. ACM Transactions on Software Engineering and Methodology, 33(5):1–32, 2024.

Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.

Tufano, R., Pascarella, L., Tufano, M., Poshyvanyk, D., and Bavota, G. Towards automating code review activities.
In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), pp. 163–174. IEEE, 2021.

Tufano, R., Masiero, S., Mastropaolo, A., Pascarella, L., Poshyvanyk, D., and Bavota, G. Using pre-trained models to boost code review automation. In Proceedings of the 44th International Conference on Software Engineering, pp. 2291–2302, 2022.

Tufano, R., Martin-Lopez, A., Tayeb, A., Dabić, O., Haiduc, S., and Bavota, G. Deep learning-based code reviews: A paradigm shift or a double-edged sword? arXiv preprint arXiv:2411.11401, 2024.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.

Votta Jr, L. G. Does every inspection need a meeting? In Proceedings of the 1st ACM SIGSOFT Symposium on Foundations of Software Engineering, pp. 107–114, 1993.

Wang, L., Zhou, Y., Zhuang, H., Li, Q., Cui, D., Zhao, Y., and Wang, L. Unity is strength: Collaborative llm-based agents for code reviewer recommendation. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, pp. 2235–2239, 2024.

Yang, X., Kula, R. G., Yoshida, N., and Iida, H. Mining the modern code review repositories: A dataset of people, process and product. In Proceedings of the 13th International Conference on Mining Software Repositories,
pp. 460–463, 2016.

Yu, L., Lu, J., Liu, X., Yang, L., Zhang, F., and Ma, J. Pscvfinder: a prompt-tuning based framework for smart contract vulnerability detection. In 2023 IEEE 34th International Symposium on Software Reliability Engineering (ISSRE), pp. 556–567. IEEE, 2023.

Yu, L., Chen, S., Yuan, H., Wang, P., Huang, Z., Zhang, J., Shen, C., Zhang, F., Yang, L., and Ma, J. Smartllama: Two-stage post-training of large language models for smart contract vulnerability detection and explanation. arXiv preprint arXiv:2411.06221, 2024a.

Yu, L., Huang, Z., Yuan, H., Cheng, S., Yang, L., Zhang, F., Shen, C., Ma, J., Zhang, J., Lu, J., and Zuo, C. Smartllama-dpo: Reinforced large language model for explainable smart contract vulnerability detection. Proceedings of the ACM on Software Engineering, 2(ISSTA), July 2025. ISSN 2994-970X. doi: 10.1145/3728878. Licensed under Creative Commons Attribution 4.0 International License.

Yu, Y., Rong, G., Shen, H., Zhang, H., Shao, D., Wang, M., Wei, Z., Xu, Y., and Wang, J. Fine-tuning large language models to improve accuracy and comprehensibility of automated code review. ACM Transactions on Software Engineering and Methodology, 34(1):1–26, 2024b.

Yuan, H., Yu, L., Huang, Z., Zhang, J., Lu, J., Cheng, S., Yang, L., Zhang, F., Ma, J., and Zuo, C. Mos: Towards effective smart contract vulnerability detection through mixture-of-experts tuning of large language models. arXiv preprint arXiv:2504.12234, 2025.

Zhang, Y., Qiu, Z., Stol, K.-J., Zhu, W., Zhu, J., Tian, Y., and Liu, H. Automatic commit message generation: A critical review and directions for future work. IEEE Transactions on Software Engineering, 2024.

A. The Central Role of Defect Detection in Code Review

Identifying defects has consistently been recognized as the core and most fundamental goal of code review.
This understanding aligns with its historical origins, current industry expectations, state-of-the-art research directions, and pressing real-world needs:

• Historical Foundations: Defect detection was the original purpose of code reviews, tracing back to the concept of code inspection proposed by Michael Fagan at IBM in 1976 (Fagan, 2002). Fagan Inspections introduced a structured process aimed at reducing long-term costs by detecting and fixing defects early. Subsequent decades of research continued to center on discovering and resolving faults (Ackerman et al., 1989; Sommerville, 2011; Votta Jr, 1993).

• Contemporary Expectations: Empirical studies show that both developers and managers consider defect detection the primary expectation of code reviews (Bacchelli & Bird, 2013). For example, a comprehensive survey involving 165 managers and 873 programmers at Microsoft revealed that while code review can serve multiple functions, identifying defects remains the foremost motivation for all stakeholders.

• Current Research Directions: Recent state-of-the-art (SOTA) work in automated code review—particularly generation-based methods—continues to emphasize defect detection as the core research objective (Tufano et al., 2024). While these studies simulate code review by injecting known defects, our approach leverages actual, historically documented defects, providing a more realistic and robust evaluation scenario.

• Real-World Industrial Needs: Industry practitioners, especially “super reviewers” overseeing thousands of developers and large codebases, highlight an
urgent and practical need for more effective defect detection. These expectations were extracted from the Objectives and Key Results set by approximately 50 experienced super reviewers (the concept comes from Kononenko et al. (Kononenko et al., 2016)) within a major production environment of 4000+ developers.

In conclusion, defect detection stands as the central and most essential aspect of code review. While other dimensions—such as code improvement, comprehension, and communication—do appear frequently, they often do not align with the urgent expectations of practitioners. This disconnect between what teams urgently need (effective defect detection) and what reviews often deliver (broader but less critical commentary) warrants a focused research effort (Bacchelli & Bird, 2013).

B. Detailed Real-World Workflow Integration

As briefly mentioned in the Introduction and illustrated in Figure 1, our automated code review framework is deeply integrated into the real-world Continuous Integration/Continuous Deployment (CI/CD) pipeline and DevOps platform of a large-scale online service company. This appendix provides a more detailed description of this integration, designed to be seamless for developers and provide actionable feedback directly within their existing workflows. The primary goal of this integration is to automate aspects of the code review process without disrupting established development practices, thereby enhancing both efficiency and code quality.

The workflow is triggered upon the submission or update of a Merge Request (MR) within the company's internal DevOps system and proceeds through several automated stages:

1. MR Trigger and Initial Verification: When a developer submits an MR, a webhook notifies our automated review system. The system first performs essential verification checks.
This includes confirming the submitting user's permissions and ensuring that the changed files fall within the scope of automated review (e.g., correct programming language, project-specific configurations for review). This step is crucial for security, access control, and efficient resource allocation.

2. Code Analysis and Comment Generation Launch: Once the MR is verified, the system retrieves the relevant code changes. The core analysis process is then launched:

• Code Slicing: The modified code segments are processed by our code slicing algorithms (detailed in Section 3.2) to extract relevant contextual information necessary for effective review.

• Multi-role Review: These code slices, along with their context, are distributed to our multi-role LLM framework (described in Section 3.3). Each role, with its own specialization (e.g., Reviewer, Validator), analyzes the code to identify potential issues and generate draft review comments.

3. Comment Filtering and Refinement: The raw comments generated by the LLM roles undergo a rigorous filtering process using our Redundancy Comment Filter Mechanism (explained in Section 3.4). This multi-stage process (involving Q1-Q3 scoring, coarse filtering, meta-reviewer processing, and validator re-scoring) aims to eliminate nitpicks, false positives, and less critical suggestions, ensuring that only high-quality, actionable comments proceed.

4. Seamless Injection into DevOps Platform and Developer Notification: This stage is critical for effective real-world integration:

• DevOps System Integration: The filtered and validated comments are programmatically injected into the company's internal DevOps platform using its provided APIs. Each comment is associated with the specific MR and the relevant commit.

• Line-Aware Comment Positioning: A key feature for developer adoption is the precise positioning
of each comment directly at the relevant line number(s) in the diff view of the MR. This is achieved by accurately parsing diff hunks and mapping comment locations. The effectiveness and importance of this line-aware comment injection for providing clear, contextualized, and actionable feedback was specifically evaluated in Section 5.5 (RQ5) and found to be highly valued by developers.

• Developer Notification: Developers (typically the MR author and assigned human reviewers) are notified of the automated review comments through the DevOps platform's standard notification mechanisms (e.g., email, internal messaging/chat system integration). The comments appear within the MR's discussion or review interface, similar to comments made by human colleagues.

This end-to-end automated workflow, from MR submission to the delivery of precisely positioned, filtered comments directly within the developers' familiar DevOps environment, constitutes a seamless integration into their daily activities. It minimizes context switching, presents feedback in an actionable format, and leverages existing platform features for discussion and resolution of the identified issues. The positive developer feedback regarding the non-intrusiveness and utility of the system, particularly due to reliable line-aware comment placement, substantiates our claim of successful real-world workflow integration. This integration was a prerequisite for addressing the key challenges outlined in Section C within a practical, industrial setting.

C. Four Challenges Identified in Code Review Process

Challenge 1: Capturing Proper Code Context. During the merge request process, code is integrated into the repository.
Prior studies often split the input into method-level or diff hunk-level segments, which introduces two major problems: ❶ the omission of critical context beyond the method, hunk, or even file level, such as variable declarations, definitions, and assignments—particularly in languages like C++—which misleads large language models (LLMs) into generating false alarms or missing key bugs; and ❷ the truncation of long code snippets, leading to significant omissions. However, feeding excessively long inputs to LLMs also degrades performance due to the models' sensitivity to the relative position of the target sequence (Hsieh et al., 2024; Liu et al., 2024). Therefore, finding an optimal way to capture proper code context is essential.

Challenge 2: Improving Key Bug Inclusion (KBI). A primary goal of automated code review is to identify key bugs introduced by new merge requests that could compromise system reliability and performance. In 2022, 30% of severe P1+ incidents (asset loss >$350,000) and 24.04% of P4+ incidents in the company were attributed to preventable low-level faults, highlighting the need for robust code reviews and automated systems. Prior studies have validated models based on overall text similarity metrics, which are often misleading. Text similarity does not necessarily correlate with a model's recall ability, as higher similarity scores may reflect linguistic style rather than the identification of critical bugs (Lu et al., 2025). OpenAI researchers have proposed Critique-Bug Inclusion (CBI) as a key metric for evaluating LLM performance (McAleese et al., 2024), which we adopt for our task. Increasing the KBI capability of the framework is a key focus of our approach.

Challenge 3: Reducing False Alarm Rates (FAR). Generative models frequently produce redundant or irrelevant comments, including nitpicks and hallucinations, even for a single code
snippet (McAleese et al., 2024). In real-world scenarios, merge requests often contain numerous code snippets, and managing redundant comments can overwhelm human reviewers. Most previous studies (Gupta & Sundaresan, 2018; Siow et al., 2020; Tufano et al., 2021; 2022; Li et al., 2022a; Hong et al., 2022) have not addressed this issue, simplifying the review task to merely generating natural language comments from problematic code. Some works attempted to introduce discriminators to assess comment quality (Lin et al., 2023; Li et al., 2022b), but these approaches have been shown to be ineffective (McAleese et al., 2024). Thus, a robust filtering mechanism is needed to minimize redundant comments and reduce false alarms.

Challenge 4: Human-Centric Workflow Integration. Simplified code review tasks often overlook essential real-world workflow components, such as attaching comments to specific code lines. This step reduces the cognitive load on developers by making it easier to verify the validity of comments. Despite its importance, this aspect is frequently ignored in previous work (Gupta & Sundaresan, 2018; Siow et al., 2020; Tufano et al., 2021; 2022; Li et al., 2022a; Hong et al., 2022; Lin et al., 2023; Li et al., 2022b), which hinders real-world usability. Incorporating such functionality is critical for improving user experience alongside KBI and FAR.

D. Demands and Expectations from Reviewers

We conducted qualitative studies, including surveys and in-depth interviews, to understand the expectations and demands for automated code review from a group of 50 experienced “super reviewers” within a large technology company's core infrastructure teams.
From the reviewers' perspective, an effective automation solution must fulfill two primary demands regarding its operational behavior and meet three key expectations concerning the quality and utility of its feedback.

The two demands, which aim to simulate desirable aspects of human review, are:

• D1: Automation should operate at the merge request level, reviewing all changes holistically rather than focusing on partial code snippets or individual diff hunks.

• D2: Automation should incorporate global repository knowledge beyond the diff, enabling access to broader information such as variable declarations or function definitions not included in the immediate changes.

The three key expectations for the generated feedback are:

• E1: Identify as many key issues as possible, thereby reducing the number of missed critical bugs. This directly informed our Key Bug Inclusion (KBI) metric, introduced in the Introduction. In our interviews, developers consistently emphasized that “catching critical bugs” is the absolute top priority for any automated review tool.

• E2: Minimize nitpicking comments and hallucinations (i.e., non-existent issues), which corresponds to reducing the False Alarm Rate (FAR). This is crucial for lessening the burden on human reviewers when verifying automated feedback. Developers particularly stressed this point, noting that reducing irrelevant comments is vital for the adoption and continued trust in an automated system, with one sentiment being that “even one false positive can erode trust.” Initial feedback from the system's current deployment within an internal development team has further underscored this, revealing FAR to be an especially sensitive metric directly impacting developer perception and engagement.

• E3: Support human-centric interaction by attaching comments precisely to the
correct code lines. This aids reviewers or code owners in efficiently verifying issues and prevents confusion or misleading information, a factor captured by our Line Localization Success Rate (LSR).

The demand D2 aligns with Challenge 1 (Capturing Relevant Context) from the Introduction, while expectations E1, E2, and E3 correspond to Challenges 2 (Improving KBI), 3 (Reducing FAR), and 4 (Integrating Human Workflows) respectively. Previous work primarily focused on aspects related to E1, often using text similarity as an indirect proxy, but this does not guarantee precision or the detection of key bugs, rendering such approaches less suitable for demanding real-world applications. In contrast, our approach, informed by these direct developer insights, addresses these challenges comprehensively. We introduce a set of merge-request level evaluation metrics tailored to these real-world demands, including Key Bug Inclusion (KBI), False Alarm Rate (FAR), Comprehensive Performance Index (CPI), and Line Localization Success Rate (LSR). While our metrics and system design are thus grounded in extensive interactions with professional developers and initial deployment feedback, we acknowledge that formal, systematic user studies evaluating perceived review quality and overall system utility have not yet been conducted. Such studies represent an important avenue for future work to further strengthen the validation of our approach.

E. Background: Code Slicing

Providing the entire repository as input ensures comprehensive context, but large language models (LLMs) have token limitations. Increasing the input size within these constraints can reduce model performance and lead to delays in inference (Hsieh et al., 2024; Liu et al., 2024). Code slicing offers a potential solution by efficiently providing sufficient context while remaining concise.
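To make the context-enrichment idea concrete, here is a deliberately tiny Python sketch of one backward hop of context collection around a changed line. This is our own illustration, not the paper's implementation (which targets languages such as C++ via a static analysis tool); the function name `slice_for_line` and the toy source are invented for the example.

```python
import ast

SOURCE = """\
def scale(values, factor):
    total = 0
    limit = factor * 10
    for v in values:
        total += v * factor
    unused = "noise"
    return min(total, limit)
"""

def slice_for_line(source: str, changed_line: int) -> list[str]:
    """Keep the changed statement plus any top-level statement in the same
    function that assigns a name the changed line reads (one backward hop)."""
    tree = ast.parse(source)
    src_lines = source.splitlines()

    # Locate a function whose body spans the changed line.
    func = next(
        node for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
        and node.lineno <= changed_line <= node.body[-1].end_lineno
    )

    # Names read (loaded) anywhere on the changed line.
    reads = {
        name.id
        for node in ast.walk(func)
        if getattr(node, "lineno", None) == changed_line
        for name in ast.walk(node)
        if isinstance(name, ast.Name) and isinstance(name.ctx, ast.Load)
    }

    keep = set()
    for stmt in func.body:
        stores = {
            name.id
            for name in ast.walk(stmt)
            if isinstance(name, ast.Name) and isinstance(name.ctx, ast.Store)
        }
        if stmt.lineno == changed_line or stores & reads:
            keep.add(stmt.lineno)
    return [src_lines[i - 1] for i in sorted(keep)]
```

Calling `slice_for_line(SOURCE, changed_line=7)` keeps the changed `return`, the assignments to `total` and `limit` that feed it, and the loop header whose body updates `total`, while dropping the unrelated `unused` statement.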
Code slicing uses static analysis to form code units, parsing code into an abstract syntax tree and slicing it based on node relationships. For large code snippets, it divides them into small, independent segments. For small snippets, like diff hunks in code reviews, it enriches the context by retrieving statements and variable usages related to the changes. While code slicing has been effectively applied to tasks such as vulnerability detection (Li et al., 2018; 2021; Cheng et al., 2021; Yu et al., 2023), its explicit and systematic integration into LLM-based automated code review pipelines—specifically for enhancing context-aware defect localization and guiding comment generation—appears to be less explored in prior literature. While Lu et al. (2024) made initial attempts at testing context components, the bulk of prior research in automated code review has remained largely centered on snippet-level analysis or comment naturalness. Consequently, the richer repository-level contextual understanding, such as that afforded by code slicing, has often been underutilized. To address real-world needs and explore this promising direction, we designed four code slicing algorithms that provide appropriate context for automated reviews.

F. Background: Large Language Models and Multi-role Systems for Software Engineering

Transformer-based large language models (LLMs) (Vaswani et al., 2017) have achieved notable success in natural language processing tasks (Ray, 2023). The ongoing evolution and refinement of these models involve diverse research efforts, including explorations into advanced knowledge integration techniques (Liang et al., 2025) and the development of robust safeguards against potential misuses (Jiang
et al., 2025). Given that LLMs now often include code in their training corpora, they have developed strong abilities in code-related tasks alongside their general language capabilities. This blurs the line between code-specific and general models. For instance, GPT-4 excels in code generation, while models like ChatGPT (Achiam et al., 2023) and LLaMA (Touvron et al., 2023) have shown potential in generating commit messages (Zhang et al., 2024), tests (Chen et al., 2024b; Schäfer et al., 2023), method renaming (AlOmar et al., 2024), log analysis (Ma et al., 2024a;b), and smart contract vulnerability detection (Yu et al., 2024a; 2025; Yuan et al., 2025).

Complex tasks like automated code review, which demand deep-level reasoning, often exceed the capabilities of individual LLMs. Multi-role systems have emerged as an effective approach to decompose and tackle such tasks, assigning specialized roles to different LLM instances that collaborate to solve the overall problem (Guo et al., 2024; Chen et al., 2024a). In this work, we adopt a multi-role system utilizing mainstream open-source LLMs with large parameter counts to enhance Key Bug Inclusion (KBI) and filter redundant comments, thereby making the code review process more efficient and precise.

G. Code Slicing Algorithms

The code slicing process in this paper is composed of several interrelated steps designed to isolate, identify, and group code statements relevant to a given set of changes. Specifically, the detailed algorithms of our approach are presented in Algorithms 1–9, each focusing on a different aspect or mode of slice generation. These algorithms work together as follows:

• Algorithm 1 (CodeSlicing) drives the entire process: It clones the target repository at the specified commit, initializes data structures, and orchestrates the slicing workflow for each file.
•Algorithm 2 (ProcessAST) processes each file's Abstract Syntax Tree (AST) using a chosen slicing option, determining which statements intersect with the diff and storing those statements in a cache.
•Algorithm 3 (GenerateNewSlice) takes contiguous segments of diff statements as seeds and expands them according to the chosen slicing strategy, removing statements from the cache once they are incorporated into a slice.
•Algorithm 4 (GetContiguousDiffSegment) serves as a helper function that extracts a cohesive set of adjacent statements from the cache, ensuring that logically connected changes are processed together.
•Algorithm 5 (ApplySlicingAlgorithm) acts as a dispatcher, selecting one of four specific slicing methods based on the chosen option:
  –Algorithm 6 (OriginalDiff): focuses on the original diff statements and their direct dependencies.
  –Algorithm 7 (ParentFunction): retrieves the smallest function containing all diff statements, providing function-level context for the slice.
  –Algorithm 8 (LeftFlow): performs a backward data-flow trace from the diff statements by analyzing L-values and their defining statements.
  –Algorithm 9 (FullFlow): extends LeftFlow by also including forward data-flow tracing for R-values and callees, capturing both backward and forward dependencies.

Towards Practical Defect-Focused Automated Code Review

Through this sequence of algorithms, relevant code segments are iteratively located, expanded, and grouped, resulting in a collection of slices tailored to the user's chosen level of context or detail.

Algorithm 1 CodeSlicing
  main CodeSlicing
    Input: repository, commit, slicingOption
    Output: slices
    clone repository and checkout to commit
    ASTs ← ApplyStaticAnalysisTool(repository)
    cache ← InitializeCache()
    slices ← []                      {initialize slice list}
    for each AST in ASTs do
      ProcessAST(AST, slicingOption, cache)
    end for
    while not cache.isEmpty() do
      seed ← GetContiguousDiffSegment(cache)
      newSlice ← GenerateNewSlice(seed, cache, slicingOption)
      slices.append(newSlice)
    end while
  end main

Algorithm 2 ProcessAST
  function ProcessAST
    Input: AST, option, cache
    Output: updated cache
    slice ← ApplySlicingAlgorithm(AST, option)
    for each statement in slice do
      if statement intersects with diff then
        cache.add(statement)
      end if
    end for
  end function

Algorithm 3 GenerateNewSlice
  function GenerateNewSlice
    Input: seed, cache, option
    Output: newSlice
    newSlice ← []                    {start forming a new slice from the seed}
    Add seed statements to newSlice
    Remove seed statements from cache
    for each statement in newSlice do
      expandedSlice ← ApplySlicingAlgorithm(statement, option)   {expand the slice}
      for each expStatement in expandedSlice do
        if expStatement is in cache then
          newSlice.append(expStatement)
          cache.remove(expStatement)
        end if
      end for
    end for
  end function

Algorithm 4 GetContiguousDiffSegment
  function GetContiguousDiffSegment
    Input: cache
    Output: contiguousSegment
    contiguousSegment ← extract a contiguous segment of cached diff statements
  end function

Algorithm 5 ApplySlicingAlgorithm
  function ApplySlicingAlgorithm
    Input: AST, option
    Output: sliced statements
    switch (option)
      case "OriginalDiff":   OriginalDiff(AST)
      case "ParentFunction": ParentFunction(AST)
      case "LeftFlow":       LeftFlow(AST)
      case "FullFlow":       FullFlow(AST)
    end switch
  end function

Algorithm 6 OriginalDiff
  function OriginalDiff
    Input: AST
    Output: sliced set S
    D ← diff statements in AST
    S ← ∅
    for each d in D do
      S ← S ∪ {d} ∪ dependencies of d
    end for
  end function

Algorithm 7 ParentFunction
  function ParentFunction
    Input: AST
    Output: sliced set S
    D ← diff statements in AST
    F ← smallest function containing all D
    S ← all statements and declarations in F
  end function

Algorithm 8 LeftFlow
  function LeftFlow
    Input: AST
    Output: sliced set S
    D ← diff statements in AST
    S ← ∅
    for each d in D do
      L ← all L-values affected by d
      for each l in L do
        S ← S ∪ backward trace of l
      end for
    end for
  end function

Algorithm 9 FullFlow
  function FullFlow
    Input: AST
    Output: sliced set S
    D ← diff statements in AST
    S ← LeftFlow(AST)
    for each d in D do
      R ← all R-values and callees affected by d
      for each r in R do
        S ← S ∪ forward trace of r
      end for
    end for
  end function

H. Integration of Chain-of-Thought (CoT) in the Review Process

We integrate Chain-of-Thought (CoT) prompts to guide the roles in the review process. Below are the CoT prompts for each:

For the Reviewer:
1. System Introduction: Introduces the task and provides guidance related to the code repository and input format.
2. Understand: Helps the model comprehend the purpose of the code changes.
3. Analyze: Instructs the model to analyze the code for defects or performance issues.
4. Re-evaluate: Guides the model to review its analysis, minimizing nitpicks and hallucinations. Three specific questions are posed to quantify nitpicks, hallucinations, and severity, inspired by (McAleese et al., 2024).
5. Organize Your Thoughts: Directs the model to write a detailed review comment specifying the issue, affected lines, root cause, recommended solution, and example code.
6. Final Comment: Instructs the model to output the final comment in JSON format.

For the Meta-Reviewer:
1. System Introduction: Introduces the task of merging Reviewer comments and provides guidelines on the required format.
2. Analyze: Instructs the model to analyze Reviewer comments, focusing on patterns, discrepancies, and insights.
3. Organize and Sort Final Comments: Guides the model to format the refined comments in a prioritized JSON list, calculating the overall scores and sorting by criticality.

For the Validator:
1. System Introduction: Similar to the Reviewer, but with a focus on accuracy and relevance.
2. Validate the Comment: Guides the model to review and validate the existing comments, aiming to reduce false alarms.
3. Refine the Comment: Ensures the comment is refined for clarity and correctness.
4. Final Comment: Outputs the validated comment in a JSON format suitable for the development environment.

For the Translator:
1. System Introduction: Introduces the translation task and explains the input format.
2. Translation and Formatting Requirements: Guides the model to translate items into the target language, ensuring proper formatting.
3. Translated Comments: Outputs the translated comments in JSON format for direct integration into the development environment.

I. Rationale for Offline Validation

The primary goal of our review system is to recall as many historical faults as possible while minimizing irrelevant comments that could burden developers. Each recalled fault suggests that our system could potentially prevent similar future issues. To evaluate the system, we use several key performance metrics, including the key bug inclusion rate (KBI), false alarm rate (FAR), comprehensive performance index (CPI), and line localization success rate (LSR), which are defined in Section 4.3.
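As a hedged illustration of how these metrics interact, the following minimal Python sketch computes the first three; all function and variable names are ours for exposition, not part of the paper's released toolkit, and the per-MR counts would in practice come from matching generated comments against historical fault reports:

```python
def kbi(recalled_key_issues: int, total_key_issues: int) -> float:
    """Key Bug Inclusion: share of key issues recalled, in percent."""
    return recalled_key_issues / total_key_issues * 100

def far1(per_mr_counts: list) -> float:
    """False Alarm Rate averaged over all MRs.
    Each element is a (false_alarm_comments, total_comments) pair for one MR."""
    rates = [fa / total * 100 for fa, total in per_mr_counts if total]
    return sum(rates) / len(rates)

def cpi(kbi_pct: float, far_pct: float) -> float:
    """Comprehensive Performance Index: harmonic mean of KBI and (100 - FAR),
    analogous to an F1-score."""
    precision_like = 100 - far_pct
    return 2 * kbi_pct * precision_like / (kbi_pct + precision_like)

# Worked check against one reported configuration (KBI 20.00, FAR1 75.37):
print(round(cpi(20.00, 75.37), 2))  # ≈ 22.07, matching the reported CPI1
```

The harmonic-mean form of CPI means a configuration cannot score well by maximizing recall alone while flooding developers with false alarms, which mirrors the trade-off discussed in this appendix.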
The CPI serves as an overall measure, balancing the trade-off between effective bug detection (KBI) and minimizing false alarms (FAR). We hypothesize that this validation approach, which goes beyond simple text-similarity metrics, provides a more accurate measure of the system's ability to handle real-world code review challenges. For instance, a change from "and" logic to "or" in code can fundamentally alter the program's behavior, but such a subtle difference may be missed by conventional text-based comparisons. Additionally, we observed that the same issue is often described in multiple ways, further complicating simple text-based comparisons and highlighting the need for more nuanced metrics.

J. Research Questions in Detail

RQ1: How does the overall performance of our framework compare with previous works? This question evaluates our framework's overall performance against existing baselines, focusing on metrics such as KBI, FAR, and CPI (defined in Section 4.3). Since our framework is decoupled from its base LLMs, we utilize various open-source large language model (LLM) engines to provide a comprehensive evaluation. Specifically, for RQ2–RQ5, we conduct ablation studies using the LLaMA3.1-405B LLM engine, which is one of the most representative models.

RQ2: How do code slicing algorithms impact the performance of the framework? This question investigates the effect of code slicing on the code review process. As the first to apply code slicing in this context, we compare the effectiveness of four slicing algorithms and study the results using a Venn diagram. This analysis focuses primarily on KBI and CPI, particularly the ability to recall key bugs.

RQ3: How do the different components of the multi-role system impact the performance of our framework? This question explores the influence of various components of our multi-role system, including the number of reviewers, the self-correction capability of LLMs (validator), and the impact of Chain-of-Thought (CoT) prompts. We measure KBI, FAR, and CPI to assess overall performance.

RQ4: How does the redundancy comment filter mechanism address nitpicks and hallucinations? This question evaluates the effectiveness of our redundancy comment filter mechanism in reducing nitpicks and hallucinations. We sequentially evaluate the contribution of each component of the filter mechanism, with a focus on KBI, FAR, and CPI.

RQ5: How does the representation of line number position information impact overall performance and line number localization success rate? This question assesses the impact of our code representation format on overall performance and the line localization success rate (LSR). We compare our method with two other formats: no line position information, and supplementary line position in a separate list. Here, LSR is considered alongside KBI, FAR, and CPI.

K. Dataset Statistics

A key point that may lead to misunderstanding is that our evaluation cases operate at the merge-request level, rather than focusing on isolated code snippets. Over the past three years (starting from 2021), we have collected all recorded faulty merge requests from four repositories maintained by over 4,000 developers. Fault selection follows a practical criterion: each fault must have caused a user-visible issue and been formally logged in the company's internal defect tracking system (Section 3.6). This result-oriented strategy emphasizes real impact, even if it does not fully cover all C++ error types.
Each merge request often involves multiple, interdependent code changes, making the evaluation scenario more complex and realistic than snippet-level analyses. Our focus is on C++ code, as it represents a critical portion of the company's software infrastructure. The dataset consists of 45 real-world fault reports, each corresponding to a significant issue that caused financial losses, along with the associated merge-request snapshots. Among these 45 cases, 12 are logic errors, 31 are code security errors, and 2 are performance-related errors. We have released a desensitized JSON folder of fault descriptions in our Zenodo repository.¹ The dataset includes both edge and typical cases, e.g.:

• Case 4694 23117: array out-of-bounds and null-pointer dereference.
• Case 16231 13308: misuse of boost::random::beta_distribution.

On average, each merge request includes 8.02 changed C++ files, with 416.8 newly added lines spread across 14.84 modified functions (a total of 1,403.36 lines affected). Unlike previous datasets that focus on individual code snippets, the faults in our dataset span multiple, interconnected code changes at the merge-request level.

The CodeReviewer dataset (Li et al., 2022b) is a well-known benchmark for code review comment generation. However, we do not adopt it because (1) it is a snippet-level dataset lacking repository-level context, and (2) it does not focus on defect detection and lacks structured fault reports for comprehensive understanding (Lu et al., 2025). These limitations reduce its applicability to real-world, defect-focused code review automation.

To provide context, we compare our dataset's scale with that of CodeReviewer. Table 9 juxtaposes the size and complexity of the datasets, demonstrating that ours is of comparable magnitude but situated at a more realistic granularity (merge-request level) that better reflects practical development workflows.

¹ https://zenodo.org/records/14779175

Table 9. Dataset statistics. The statistics of the CodeReviewer dataset are estimated based on the language distribution (Li et al., 2022b) and the distribution of comment types (Bacchelli & Bird, 2013).

Dataset                                     # of Code Snippets   LOC (Lines of Code)
CodeReviewer Test (C++ Subset)              814                  ∼147k
CodeReviewer Test (C++ & Defects Subset)    114                  ∼21k
Ours (Merge-Request Level)                  668                  ∼63k

L. Excluded Models and Justifications

Our framework is designed to be model-agnostic and independent of any specific base model. However, for the purposes of our study, we only present results on selected representative open-source models. Below, we outline the reasons for not including certain other models in our experiments:

• Closed-Source Models: This category includes proprietary models such as the GPT series and the Claude series. These models remain inaccessible for self-deployment due to their closed nature, and utilizing fault reports for evaluation poses potential security and data-transmission risks.
• Small and Weak Models: We also experimented with several smaller or less powerful models as the base model for our framework, including Gemma-2-27B, CodeGemma-7B, MiniCPM3-4B, GLM-4-9B-Chat, CodeGeeX4-All-9B, DeepSeek-V2-Lite-Chat, DeepSeek-Coder-V2-Lite, CodeLlama-70B, Phi-3-Medium-128K, Phi-3.5-MoE, and Aya-23-35B.
Unfortunately, these models frequently failed due to their limited capabilities, making it unfair to include them in comparisons under such conditions.
• Excessively Large Models: Our experiments were conducted on an infrastructure consisting of eight A100-40G GPUs. However, since FP8 quantization is not supported on Ampere-architecture A100s, we were unable to deploy mixture-of-experts (MoE) models with a very high parameter count. This limitation prevented us from testing models such as DeepSeek-V2-Chat-0628, DeepSeek-Coder-V2, and Mixtral-8x22B-Instruct.

M. Metric Formulations

We intentionally avoid BLEU and ROUGE due to their limitations in evaluating the quality of code review comments:
1. Our task involves many-to-many mappings between code and reviews, violating BLEU's single-reference assumption.
2. Code review requires reasoning and domain expertise; recent studies show that BLEU and ROUGE fail to reflect quality in such tasks.
3. Real fault reports and LLM-generated comments differ significantly in style and expression, making surface-level textual similarity unreliable.

Regarding vagueness: rather than evaluating linguistic style, we focus on outcome-based metrics that directly reflect the effectiveness of review comments. Specifically, we propose three core metrics: Key Bug Inclusion (KBI), False Alarm Rate (FAR), and the composite Comprehensive Performance Index (CPI). These are objective, interpretable, and domain-relevant. Further discussion on their behavior and limitations is provided in Appendix Q.

M.1. Key Bug Inclusion (KBI)

By "key bugs," we refer to issues that can lead to tangible losses (e.g., performance degradation or potential future failures), even if the negative impact is not immediate. The P1–P4 incidents mentioned in our dataset (Section 4.2) all qualify as key bugs. This framing ensures our focus remains on high-impact defects rather than trivial concerns.

KBI measures the model's ability to recall key issues that cause system faults. It is calculated as the percentage of recalled key issues out of the total issue set:

\[
\mathrm{KBI} = \frac{\text{number of recalled key issues}}{\text{total number of key issues}} \times 100 \tag{1}
\]

M.2. False Alarm Rate (FAR)

FAR evaluates the extent to which the model generates irrelevant or erroneous comments (false alarms). We consider all comments unrelated to key issues mentioned in the fault reports as false alarms. FAR is calculated as the percentage of false-alarm comments relative to total comments. Two types of FAR are defined:

1. FAR1 computes the false alarm rate for each individual MR and then averages these rates across all MRs, providing an overall measure of the model's ability to avoid false alarms:
\[
\mathrm{FAR}_1 = \frac{1}{N}\sum_{i=1}^{N}\frac{\text{number of false alarm comments in MR}_i}{\text{total number of comments in MR}_i} \times 100 \tag{2}
\]
2. FAR2 focuses on MRs where key bugs were successfully recalled, offering insight into precision when key issues are identified:
\[
\mathrm{FAR}_2 = \frac{1}{M}\sum_{j=1}^{M}\frac{\text{number of false alarm comments in recalled MR}_j}{\text{total number of comments in recalled MR}_j} \times 100 \tag{3}
\]
where N is the total number of MRs and M is the number of MRs in which key bugs were recalled.

M.3. Comprehensive Performance Index (CPI)

To balance KBI and FAR, we propose the Comprehensive Performance Index (CPI), which harmonizes KBI and FAR in a manner similar to the F1-score. CPI evaluates both the completeness of key-issue detection and the precision of the model's comments. Two versions of CPI are defined:

1. CPI1, based on FAR1 and considering all MRs:
\[
\mathrm{CPI}_1 = \frac{2 \times \mathrm{KBI} \times (100 - \mathrm{FAR}_1)}{\mathrm{KBI} + (100 - \mathrm{FAR}_1)} \tag{4}
\]
2. CPI2, based on FAR2 and focusing on MRs where key bugs were recalled:
\[
\mathrm{CPI}_2 = \frac{2 \times \mathrm{KBI} \times (100 - \mathrm{FAR}_2)}{\mathrm{KBI} + (100 - \mathrm{FAR}_2)} \tag{5}
\]

M.4. Line Localization Success Rate (LSR)

LSR evaluates the model's ability to successfully associate comments with the valid code lines.
A success is recorded if the correct line number is provided and valid in the code. LSR is calculated as the percentage of correct line-number cases:

\[
\mathrm{LSR} = \frac{1}{N}\sum_{i=1}^{N}\frac{\text{number of correct line number cases in MR}_i}{\text{total number of comments in MR}_i} \times 100 \tag{6}
\]

N. Experimental Setups in Detail

The code slicing component of our framework is implemented using Cppcheck (Marjamäki, 2024), the LLM engines are integrated through an API supported by the vLLM framework (Kwon et al., 2023), and baselines are integrated via Flask (Organization, 2024). All models and baselines are hosted on a server equipped with an AMD EPYC 7702 CPU and eight Nvidia A100-40G GPUs. For large models such as LLaMA3.1-405B, we utilize an Int4 version quantized using AWQ (Lin et al., 2024). For other models, we use the original half-precision floating-point format (FP16). For context on overall throughput for large-scale evaluations, processing our entire dataset, which includes many large merge requests (often over 20 modified files each), with models having 120B activated parameters took approximately 9 hours on our server setup.

[Figure 5 omitted in this text version. It reports runtime per merge request under each slicing algorithm with LLaMA3.1-405B as the base model: Original Diff 4.8 min, Parent Function 7.7 min, Left Flow 6.2 min, Full Flow 6.3 min.]

Figure 5. Runtime per merge request under different slicing algorithms, with LLaMA3.1-405B as the base model. The most time-consuming algorithm is Parent Function, due to its inclusion of the largest extra context. However, all runtimes are within an acceptable range based on our analysis.

Regarding typical per-merge-request runtime, we report detailed timings using violin plots; Figure 5 illustrates these distributions. The median runtime per MR is 6.2 minutes. The overall CI/CD pipeline (including compilation, analysis, and deployment checks) typically takes 15–30 minutes. Our module runs in parallel from the beginning and does not introduce blocking delays. Thus, we believe the overhead is acceptable in practical scenarios.

O. Performance of Heterogeneous Model Combinations

While our primary experiments (Section 5.1) employ a homogeneous Large Language Model (LLM) setup across all roles to isolate the impact of a single powerful model, exploring heterogeneous model combinations offers the potential to optimize both performance and resource utilization. This section details supplementary experiments in which the LLM assignments for the reviewer and validator roles were varied. The results, presented in Table 10, demonstrate that strategically pairing a strong validator model with a computationally less intensive (i.e., smaller) reviewer model can yield performance comparable or, on certain metrics, even superior to configurations relying solely on a powerful model for both roles.

Table 10. Performance under different reviewer–validator model combinations and slicing algorithms. The results suggest that the validator plays a more critical role, as it is closer to the final decision output.
Interestingly, a combination of a weaker reviewer and a stronger validator can achieve comparable or even superior performance, indicating potential room for improvement through heterogeneous model pairing.

Slicing Algorithm    KBI↑    FAR1↓    CPI1↑    FAR2↓    CPI2↑
LLaMA3.1-405B as reviewer, LLaMA3.1-70B as validator
Original Diff        2.22    42.22     4.28     0.00     4.35
Parent Function      6.67    20.00    12.31     0.00    12.50
Left Flow            2.22    39.26     4.29    66.67     4.17
Full Flow            0.00    35.56       –        –        –
LLaMA3.1-70B as reviewer, LLaMA3.1-405B as validator
Left Flow           17.78    55.74    25.37    51.04    26.08
Full Flow           13.33    66.11    19.14    45.83    21.40
LLaMA3.1-405B as both reviewer and validator
Original Diff       11.11    90.11    10.46    71.00    16.07
Parent Function     11.11    89.48    10.81    65.33    16.83
Left Flow           20.00    75.37    22.07    43.52    29.54
Full Flow           20.00    77.96    20.97    67.59    24.73

P. Performance by Error Category

To better understand the effectiveness of different slicing strategies, we analyze their performance across three fault categories: logic, security, and performance-related bugs. As shown in Table 11, different slicing strategies exhibit distinct strengths across fault categories. Specifically, flow-based slicing (i.e., Left Flow and Full Flow) is particularly effective for identifying security issues. This is likely because these methods capture detailed jump, data, and control-flow information, including the lifecycle of variables, which is often crucial for uncovering security vulnerabilities. In contrast, logic bugs appear to benefit more from the broader and more continuous context provided by parent-function slicing, which can aid LLMs in understanding the overarching code logic and intent. Performance issues remain the most difficult to detect, as they tend to be subtle and delayed in manifestation.

Table 11. Performance by error category. Flow-based slicing excels in detecting security-related bugs, while function-level context enhances logic bug identification. Performance bugs remain hard to detect due to their implicit and non-local nature.

Category       KBI↑    FAR1↓    CPI1↑    FAR2↓    CPI2↑
Left Flow
Overall       20.00    75.37    22.07    43.52    29.54
Security      25.81    73.92    25.94    48.96    34.28
Logic          8.33    75.00    12.50     0.00    15.38
Performance    0.00   100.00     0.00       –        –
Full Flow
Overall       20.00    77.96    20.97    67.59    24.73
Security      25.81    72.31    26.71    67.71    28.69
Logic          8.33    97.22     4.17    66.67    13.33
Performance    0.00    50.00     0.00       –        –
Original Diff
Overall       11.11    90.11    10.46    71.00    16.07
Security      16.13    88.87    13.17    71.00    20.73
Logic          0.00    91.67     0.00       –        –
Performance    0.00   100.00     0.00       –        –
Parent Function
Overall       11.11    89.48    10.81    65.33    16.83
Security       3.23    91.94     4.61    50.00     6.06
Logic         33.33    81.39    23.89    69.17    32.03
Performance    0.00   100.00     0.00       –        –

Q. Discussion on High FAR

The initially reported False Alarm Rate (FAR) in Table 2 appears relatively high. This is partly due to our strict definition: if the framework fails to detect the key bug in a merge request but still generates comments, we consider the FAR for that request to be 100%. To offer a more nuanced perspective, we introduce FAR2, a metric that evaluates only cases where the key bug is successfully recalled. Under the "LLaMA3.1-405B + Left Flow + with Validator" configuration (Table 2), we reduce FAR2 to below 50%. In practical scenarios, verifying the correctness of comments with a team of five experienced C++ developers, simulating a real-world inspection process, took approximately six minutes per new case.
This time can be further reduced if the reviewers are already familiar with the codebase or if a significant portion of the comments pertain to minor suggestions (e.g., unnecessary try-catch statements). Although FAR is a stringent metric, classifying all non-key-bug comments as false alarms, it provides an objective and quantifiable baseline. In contrast, prior work often relies on subjective assessments such as "usefulness" or "goodness," which can vary significantly among reviewers. We view FAR as a foundational measure, with future research potentially refining it to better align with industry-specific tolerance thresholds.

R. Min/Max Performance Ranges of Slicing Algorithms

To provide a more comprehensive understanding of the performance of our code slicing algorithms, beyond the average metrics presented in Section 5.2 (RQ2, Table 3), this appendix details the minimum and maximum values observed for key metrics across three experimental runs. This analysis addresses the need to understand result variability and offers deeper insights into the consistency and raw potential of each slicing strategy, particularly before extensive filtering is applied.

Table 12 presents these min–max performance ranges for the four slicing algorithms under various stages of the single-reviewer filtering pipeline. The "Single Reviewer – All" setting is particularly insightful, as it reflects the unfiltered, raw output of the large language model when provided with context from different slicing methods. This setting best illustrates the inherent potential of each slicing algorithm to surface relevant information.

Examining the "Single Reviewer – All" setting in Table 12, we can observe the initial capabilities of each slicing algorithm. For instance, the flow-based methods demonstrate high Key Bug Inclusion (KBI) potential, with both Full Flow and Left Flow reaching a maximum of 40.00%. Parent Function also shows substantial KBI, peaking at 35.56%, while Original Diff ranges from 17.78% to 28.89%. In terms of consistency in this raw KBI output, Full Flow exhibits a relatively tight range (37.78%–40.00%, a spread of approximately 2.22 points), suggesting more stable high performance across runs compared to Left Flow (a spread of approximately 6.67 points) or Original Diff (approximately 11.11 points). As expected, the False Alarm Rate (FAR1) for all methods in the "All" setting is uniformly high, typically in the 90s, given the absence of any filtering.

As comments pass through subsequent filtering stages (Coarse Filter, Top-10, Top-5, Top-3), the min–max ranges for metrics like KBI and FAR1 evolve. For example, while the maximum KBI potential may be reduced by aggressive Top-k truncation (e.g., for Left Flow, the KBI maximum drops from 40.00% at "All" or "Coarse Filter" to 22.22% at "Top-3"), the filtering stages generally reduce FAR while preserving KBI, leading to improvements in the Comprehensive Performance Index (CPI) metrics. The ranges also provide insights into the stability of these improvements.
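The spreads quoted above are simple max-minus-min arithmetic over the three runs. As an illustrative check, the following short sketch recomputes them from the "Single Reviewer – All" KBI ranges (values copied from Table 12; the dictionary layout is ours):

```python
# KBI (min, max) pairs for the "Single Reviewer - All" setting, from Table 12.
ranges = {
    "Original Diff": (17.78, 28.89),
    "Parent Function": (26.67, 35.56),
    "Left Flow": (33.33, 40.00),
    "Full Flow": (37.78, 40.00),
}

for name, (lo, hi) in ranges.items():
    print(f"{name}: spread = {hi - lo:.2f}")
# Full Flow's spread (2.22) is the tightest and Original Diff's (11.11) the
# widest, consistent with the stability observation in the text.
```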
For instance, under "Single Reviewer – Top-5," Left Flow shows a KBI range of (20.00%, 31.11%) and Full Flow (22.22%, 31.11%), indicating that even after significant filtering, these methods can still achieve high bug recall in some runs.

This min–max analysis complements the average performance data presented in RQ2. It highlights that while average performance provides a general comparative measure, the variability across runs can differ between algorithms and filter settings. Understanding these ranges is valuable for assessing the robustness of each approach and for identifying methods that not only perform well on average but also maintain consistency or offer high peak performance.

Table 12. Min–max ranges of key metrics under different code slicing strategies for single-reviewer settings.

Slicing Algorithm   KBI↑             FAR1↓            CPI1↑            FAR2↓            CPI2↑
Single Reviewer – All
Original Diff       (17.78, 28.89)   (95.43, 98.07)   (3.48, 7.89)     (84.18, 89.16)   (13.47, 20.44)
Parent Function     (26.67, 35.56)   (96.73, 97.35)   (4.82, 5.96)     (90.06, 91.14)   (14.18, 15.16)
Left Flow           (33.33, 40.00)   (94.04, 94.73)   (9.31, 10.11)    (88.80, 92.39)   (12.79, 16.77)
Full Flow           (37.78, 40.00)   (93.81, 95.24)   (8.45, 10.72)    (90.07, 93.29)   (11.39, 15.91)
Single Reviewer – Coarse Filter
Original Diff       (15.56, 20.00)   (84.89, 92.40)   (10.21, 17.21)   (76.23, 80.01)   (17.60, 20.00)
Parent Function     (15.56, 26.67)   (87.36, 88.91)   (13.94, 16.33)   (75.92, 91.01)   (13.15, 22.27)
Left Flow           (24.44, 40.00)   (91.75, 92.20)   (11.83, 13.68)   (77.17, 90.38)   (15.15, 23.61)
Full Flow           (31.11, 33.33)   (91.73, 93.78)   (10.48, 13.07)   (87.69, 90.70)   (14.54, 17.64)
Single Reviewer – Top-10
Original Diff       (13.33, 20.00)   (84.83, 91.81)   (10.26, 17.26)   (70.83, 79.69)   (18.30, 20.15)
Parent Function     (13.33, 22.22)   (87.44, 89.61)   (11.68, 15.13)   (72.08, 88.39)   (15.25, 18.76)
Left Flow           (31.11, 33.33)   (90.58, 91.55)   (13.29, 14.68)   (78.42, 87.12)   (18.21, 26.20)
Full Flow           (28.89, 35.56)   (90.05, 92.68)   (11.85, 15.55)   (83.62, 88.77)   (16.17, 21.57)
Single Reviewer – Top-5
Original Diff       (13.33, 17.78)   (84.44, 91.63)   (10.28, 16.59)   (67.78, 75.00)   (18.35, 20.78)
Parent Function     (11.11, 17.78)   (86.22, 89.22)   (10.94, 14.64)   (63.00, 80.00)   (17.09, 20.81)
Left Flow           (20.00, 31.11)   (89.19, 91.33)   (12.09, 16.05)   (67.78, 78.89)   (23.57, 29.26)
Full Flow           (22.22, 31.11)   (90.04, 90.67)   (13.51, 14.45)   (75.17, 80.00)   (22.00, 26.51)
Single Reviewer – Top-3
Original Diff       (8.89, 13.33)    (84.07, 92.22)   (8.30, 14.51)    (60.00, 63.89)   (14.37, 19.48)
Parent Function     (6.67, 8.89)     (86.67, 88.89)   (8.33, 10.67)    (50.00, 66.67)   (11.11, 15.09)
Left Flow           (13.33, 22.22)   (88.89, 91.48)   (10.40, 14.81)   (52.78, 66.67)   (20.80, 28.57)
Full Flow           (11.11, 15.56)   (90.74, 92.22)   (9.40, 10.93)    (63.89, 66.67)   (16.67, 21.67)

S. Sensitivity Analysis of Top-k Truncation in Multi-Reviewer Settings

This section details experiments evaluating the impact of different Top-k truncation values (k) on performance within our multi-reviewer framework, where all settings use three reviewers. The results, presented in Table 13, indicate that the optimal choice of k is often contingent on the employed slicing strategy and the specific evaluation metrics prioritized.
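Top-k truncation itself is a small ranking step over candidate comments. The following hedged Python sketch (the comment records and criticality scores are illustrative, not the paper's actual data structures) shows the operation whose threshold k is being tuned:

```python
def top_k_truncate(comments, k):
    """Keep only the k highest-scoring comments, ordered by score.
    Each comment is a (criticality_score, text) pair."""
    return sorted(comments, key=lambda c: c[0], reverse=True)[:k]

# Hypothetical candidate comments for one merge request.
candidates = [
    (9.1, "possible null-pointer dereference in parse()"),
    (4.2, "consider renaming tmp to buffer"),
    (7.5, "off-by-one in loop bound"),
    (2.0, "redundant try-catch block"),
]

kept = top_k_truncate(candidates, k=3)
print([text for _, text in kept])
# An aggressive cutoff (small k) can drop true positives ranked below the
# threshold, which is the KBI-reduction effect analyzed in this section.
```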
• For slicing strategies that typically generate a sparser set of initial comments, such as Original Diff and Parent Function:
  – With Original Diff, k = 3 (Top-3) generally yields favorable CPI1 results (9.30 before validation, 11.81 after) and achieves the highest KBI (15.56) before validation. After validation, while Top-5 and Top-10 showed slightly higher KBI (11.11 vs. 8.89 for Top-3), Top-3 maintained the best CPI1.
  – For Parent Function, the trends are more varied. Before validation, Top-5 led to the highest KBI (20.00), while Top-10 had the best CPI1 (11.04). After validation, k = 3 achieved the best CPI1 (13.62) with KBI comparable to other k values.
• For richer slicing strategies such as Left Flow and Full Flow, which capture more extensive context and often produce more candidate comments:
  – With Left Flow, larger k values (Top-10 or Top-5) consistently outperformed Top-3 in KBI both before and after validation. Top-5 generally provided the best CPI1 (17.51 before validation, 22.07 after).
  – With Full Flow, Top-10 initially showed the highest KBI (35.56) before validation. However, a notable behavior was observed in this specific case with Top-10 truncation, particularly after the Validator stage: there is a significant drop in KBI (from 35.56 to 13.33) and CPI1 (from 15.92 to 12.01). In contrast, Top-5 for Full Flow maintained a higher KBI (20.00) and CPI1 (20.97) post-validation. We attribute this decline for Top-10 to the large volume of text processed when k = 10 for a verbose slicing method like Full Flow. This may approach the context-token limits of the LLaMA3.1 engine during the validation phase, potentially reducing the validator's effectiveness.

Overall, these findings demonstrate that while the optimal k can be tuned, the Top-k filter's behavior is generally stable across reasonable threshold ranges. Furthermore, it is adaptable to the context richness provided by different slicing algorithms, allowing for optimized configurations based on the desired balance between metrics like KBI and FAR.

T. Extended Conclusions

T.1. RQ1 Extended Conclusion

Conclusion 1 (Original): Our framework surpasses baseline approaches significantly, achieving up to 10x better performance on key metrics like KBI and CPI. This success is attributed to our comprehensive approach to code review automation, which addresses the full pipeline and its associated challenges. Additionally, LLaMA3.1-405B emerged as the best-performing LLM engine, reinforcing the importance of model size and capability in achieving optimal results. Further investigations into LLM configurations show that heterogeneous setups, such as pairing a strong validator with a smaller reviewer, can yield comparable or even improved performance (details in Appendix O).

T.2. RQ2 Extended Conclusion

Conclusion 2 (Original): The results show that Left Flow and Full Flow significantly improve key bug inclusion (KBI) and overall performance (CPI1) compared to simpler approaches like Original Diff and Parent Function. Among them, Left Flow performs better in more settings, likely due to its more concise context, which helps the large language model maintain focus without being overwhelmed. Each code slicing algorithm, however, has exclusive cases where it performs well, suggesting that combining different strategies could further enhance key bug detection in future work.

Table 13. The impact of Top-k truncation on comment quality under different slicing algorithms. All settings use three reviewers.
Top-k     KBI↑     FAR_1↓    CPI_1↑

Original Diff (Multi Reviewer + Meta Reviewer)
Top-10    13.33    94.90     7.37
Top-5     13.33    96.74     5.24
Top-3     15.56    93.37     9.30

Parent Function (Multi Reviewer + Meta Reviewer)
Top-10    15.56    91.44     11.04
Top-5     20.00    92.41     11.01
Top-3     11.11    94.19     7.63

Left Flow (Multi Reviewer + Meta Reviewer)
Top-10    31.11    88.93     16.33
Top-5     31.11    87.81     17.51
Top-3     17.78    91.96     11.07

Full Flow (Multi Reviewer + Meta Reviewer)
Top-10    35.56    89.74     15.92
Top-5     31.11    89.41     15.80
Top-3     24.44    90.04     14.16

Original Diff (Multi Reviewer + Meta Reviewer + Validator)
Top-10    11.11    89.07     11.02
Top-5     11.11    90.11     10.46
Top-3     8.89     82.41     11.81

Parent Function (Multi Reviewer + Meta Reviewer + Validator)
Top-10    11.11    85.11     12.73
Top-5     11.11    89.48     10.81
Top-3     11.11    82.41     13.62

Left Flow (Multi Reviewer + Meta Reviewer + Validator)
Top-10    22.22    82.04     19.87
Top-5     20.00    75.37     22.07
Top-3     8.89     83.70     11.50

Full Flow (Multi Reviewer + Meta Reviewer + Validator)
Top-10    13.33    89.07     12.01
Top-5     20.00    77.96     20.97
Top-3     11.11    73.70     15.62

T.3. RQ3 Extended Conclusion

Conclusion 3.1 (Original): Increasing the number of reviewers improves key bug inclusion (KBI) but also increases false alarms (FAR_1 and FAR_2). Validators are essential for maintaining comprehensive performance (CPI_1 and CPI_2) by reducing false alarms. While leveraging multiple reviewers is beneficial,
the added computational cost and need for validation must be considered in practical implementations.

Conclusion 3.2 (Original): The self-correction ability of LLMs, as implemented by the validator role, improves precision by reducing false alarms (FAR_1 and FAR_2) but may reduce key bug inclusion (KBI). This indicates a trade-off between precision and recall. The validation process is valuable for reducing hallucinations, but care must be taken to ensure that important bug-detecting comments are not removed.

Conclusion 3.3 (Original): The effectiveness of Chain-of-Thought (CoT) guidance varies based on the complexity of the code slicing algorithm. While LLMs perform better without CoT in simpler formats like Original Diff and Parent Function, CoT significantly improves results in more complex flow-based slicing (Left Flow and Full Flow). This suggests that CoT guidance is especially valuable when handling more intricate contexts. However, as more powerful reasoning models, such as GPT-O1 and DeepSeek-R1, emerge, the advantage of specified CoT over free-form reasoning may further diminish.

T.4. RQ4 Extended Conclusion

Conclusion 4 (Original): The comment filter mechanism effectively reduces false alarms (FAR_1) and improves overall performance (CPI_1) in flow-based slicing methods (Left Flow and Full Flow). For simpler slicing methods (Original Diff and Parent Function), the coarse filter is the most effective stage, as these methods lack sufficient code details to accurately filter nitpicks and hallucinations.

T.5. RQ5 Extended Conclusion

Conclusion 5 (Original): Adding line number information improves both performance and localization success rate (LSR). The inline format outperforms the relative format, likely because embedding position data directly into the code allows for better association of comments with specific lines.

U.
Status of Our Open-Source Code Artifacts

We have open-sourced all the core components of our framework and the experiments reported in the paper; only certain non-essential, company-specific modules are not included. To achieve this, we carefully modularized the framework and separated any company-internal interfaces, ensuring that external users can readily experiment, customize, and extend the system. The omitted modules are excluded for three main reasons:
•They are not generally useful beyond our specific internal environment, offering little to no benefit for external adaptation.
•They depend on proprietary interfaces and data sources that are inherently inaccessible to external researchers.
•Including them risks violating the double anonymity requirement, not due to mere vocabulary or text replacements, but because their logic and usage patterns could reveal organizational details.
For code and comments that could be anonymized, we have already performed the necessary replacements.

V. Threats to Validity

A recognized threat to the external validity of our study is that the presented evaluation and results are exclusively for C++ projects. This current language focus in our implementation is primarily due to the choice of Cppcheck for code slicing, a tool specific to C++. We prioritized C++ due to its significant presence in the core framework code of many companies. However, the underlying framework and its core principles, including the AST-based code slicing methodology and the prompting strategies for LLMs, are designed to be largely language-agnostic.
These components do not inherently rely on C++-specific features. Consequently, extending the framework to support other compiled languages is considered feasible, mainly requiring the integration of suitable language-specific AST parsing tools or static analyzers.

Another potential threat arises from how we calculate the False Alarm Rate (FAR). In our study, we classify all comments not directly related to the key bug as false alarms. However, some of these comments may still identify relevant issues, such as potential risks or code quality concerns, that do not immediately lead to system failures but warrant attention. As a result, the actual FAR may be lower than our reported figures. Despite this, we chose this conservative approach to emphasize critical issues and minimize the burden on developers, making our assumption practical in the context of prioritizing key bugs.
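This conservative counting rule is easy to state in code. The sketch below is our illustration, not part of the released artifacts; the comment records and the `is_key_bug` labels are hypothetical stand-ins for the human annotations, and FAR is expressed as a percentage to match the tables above.

```python
def false_alarm_rate(comments):
    """Conservative FAR (in percent): every comment not tied to the key bug
    counts as a false alarm, even if it flags a real but non-critical issue."""
    if not comments:
        return 0.0
    false_alarms = sum(1 for c in comments if not c["is_key_bug"])
    return 100.0 * false_alarms / len(comments)

# Hypothetical review outcome: one key-bug comment, three unrelated comments.
comments = [
    {"text": "null deref on error path", "is_key_bug": True},
    {"text": "rename variable tmp",      "is_key_bug": False},
    {"text": "missing blank line",       "is_key_bug": False},
    {"text": "possible perf risk",       "is_key_bug": False},
]
print(false_alarm_rate(comments))  # 75.0
```

Under this rule the second and fourth comments are penalized even though they may flag genuine concerns, which is exactly why the reported FAR is an upper bound.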
arXiv:2505.17936v1 [cs.LG] 23 May 2025

Understanding Gated Neurons in Transformers from Their Input-Output Functionality

Sebastian Gerstner and Hinrich Schütze
Center for Information and Language Processing (CIS), LMU Munich, and Munich Center for Machine Learning (MCML), Germany
sgerstner@cis.lmu.de

Abstract

Interpretability researchers have attempted to understand MLP neurons of language models based on both the contexts in which they activate and their output weight vectors. They have paid little attention to a complementary aspect: the interactions between input and output. For example, when neurons detect a direction in the input, they might add much the same direction to the residual stream ("enrichment neurons") or reduce its presence ("depletion neurons"). We address this aspect by examining the cosine similarity between input and output weights of a neuron. We apply our method to 12 models and find that enrichment neurons dominate in early-middle layers whereas later layers tend more towards depletion. To explain this finding, we argue that enrichment neurons are largely responsible for enriching concept representations, one of the first steps of factual recall. Our input-output perspective is a complement to activation-dependent analyses and to approaches that treat input and output separately.

1 Introduction

Despite recent progress in interpretability, there is still much that is unclear about how transformer-based (Vaswani et al., 2017) large language models (LLMs) achieve their impressive performance. Prior work has addressed the interpretation of MLP sublayers, and we follow this line of research. Some of this work analyzes neurons based only on the contexts in which they activate (Voita et al., 2024) or based only on their output weights¹ (Gurnee et al., 2024).
In contrast, we put the input-output (IO) functionality of neurons at the center of our analysis, and classify neurons according to the interactions between input and output weights. We focus on gated activation functions (Shazeer, 2020), which are used in recent LLMs like OLMo, Llama and Gemma.

Theoretical framework. Following Elhage et al. (2021), our view of the Transformer architecture is centered on the residual (a.k.a. skip) connections between sublayers: they form the residual stream, and the individual units (such as MLP neurons) progressively update it, until it is multiplied by the unembedding matrix W_U to produce next-token logits.

¹ We use "weight" to refer to a weight vector, not a scalar.

[Figure 1: Median of cos(w_in, w_out) by layer (x-axis, relative to network depth) for 12 models (allenai/OLMo-7B-0424-hf, allenai/OLMo-1B-hf, gemma-2-2b, gemma-2-9b, Llama-2-7b, meta-llama/Llama-3.1-8B, meta-llama/Llama-3.2-1B, meta-llama/Llama-3.2-3B, mistral-7b, Qwen/Qwen2.5-0.5B, Qwen/Qwen2.5-7B, yi-6b). For all models, the value is positive in the beginning and negative in the end, indicating that early-middle layers "enrich" the residual stream whereas later layers tend more towards depletion.]

The information contained in the residual stream is represented as a high-dimensional vector (of dimension d_model). Individual model units read from the residual stream and then update it by writing (adding) other vectors to it. In the case of an MLP neuron, it detects certain directions in the residual stream (i.e., whether the current residual stream vector at
least approximately points in one of these directions in model space), corresponding to its weight vectors on the input side; and then writes to a certain direction, corresponding to its output weight vector.

A semantic interpretation is that a neuron detects a concept in the residual stream (for example, an intermediate guess about the next token), and in turn also writes a concept. This semantic interpretation is not a necessary assumption for our neuron classification, but is helpful for building intuition and interpreting results.

Theoretical contribution. These theoretical reflections naturally lead to our research question: What is the relationship between what a neuron reads and what it writes? We address this question by computing the cosine similarity of input and output weights, focusing on gated activation functions. Specifically, with gated activation functions, each neuron has three weight vectors: the linear input, gate, and output weight vectors. When the output weight is similar enough to (one of) the detected directions, we speak of input manipulation, as opposed to orthogonal output neurons which write to directions not detected in the input. Intuitively, input manipulator neurons manipulate the concept that they detect. As special cases of input manipulation, we define enrichment and depletion neurons – neurons that detect a direction and then add it to / remove it from the residual stream. We present a complete taxonomy of neuron IO functionalities in Section 4. See Figure 2 for a visualization.

Empirical study. We apply our method to 12 LLMs. We find that, for all of these models, a large proportion of neurons are input manipulators. In particular, we find that enrichment neurons dominate in early-middle layers of all models whereas later layers tend more towards depletion. See Figure 1. We also present examples for the six major IO functionalities.
We find that many neurons have the property of double checking: The two reading weight vectors (w_gate and w_in) are approximately orthogonal, but still intuitively represent the same concept.

Explaining the results. Our finding of different IO functionalities in different layers echoes the "stages of inference" framework (Lad et al., 2024). We hypothesize a correspondence: enrichment neurons may be responsible for "feature engineering" and depletion neurons for "residual sharpening". We also provide a theoretical account of the double checking phenomenon. The usefulness of double checking explains the fact that many neurons have approximately orthogonal gate and input weights.

Contributions. (i) We develop a parameter-based (and therefore efficient) method to investigate neuron IO functionalities for gated activation functions (Section 4). (ii) Across 12 models, we find that enrichment neurons dominate in early-middle layers of all models whereas later layers tend more towards depletion (Figure 1). (iii) We define two novel concepts helpful in understanding neuron functionality: input manipulation and double checking. (iv) We find that many neurons are input manipulators (Section 5), which makes our classification scheme useful for understanding them. (v) We present examples for the six major IO functionalities, showing how the IO perspective complements other neuron analysis methods (Section
6). (vi) We propose theoretical explanations for some of these results (Section 7).

2 Related Work

There is a large body of work on interpretability of transformer-based LLMs. Elhage et al. (2021) introduce the notion of residual stream. nostalgebraist (2020) and Belrose et al. (2023) propose to interpret residual stream states as intermediate guesses about the next token; Rushing and Nanda (2024) discuss this as the iterative inference hypothesis. On a similar note, many works hypothesize that directions in model space can correspond to concepts; Park et al. (2024) discuss this as the linear representation hypothesis. Lad et al. (2024) define stages of inference. Geva et al. (2023) explain how LLMs recall facts; a crucial early step is representation enrichment, which may be related to our enrichment neurons (see Section 7.4). Similar to our work, Elhelo and Geva (2024) investigate input-output functionality of heads (instead of neurons).

Much research has attempted to understand individual neurons. Geva et al. (2021) present them as a key-value memory. Other neuron analysis work includes (Miller and Neo, 2023; Niu et al., 2024). The focus on individual neurons has been criticized. Morcos et al. (2018) find that in good models, neurons are not monosemantic (but for image models, not LLMs). Millidge and Black (2022) compute a singular value decomposition (SVD) of layer weights and often find interpretable directions that do not correspond to individual neurons. Elhage et al. (2022) argue that interpretable features are non-orthogonal directions in model space and can be superposed. This corresponds to sparse linear combinations of neurons in MLP space. Taking the middle ground, Gurnee et al. (2023) argue that interpretable features correspond to sparse combinations of neurons, but this includes 1-sparse combinations, i.e., individual neurons.
Several works classify neurons based on the contexts in which they activate (Voita et al., 2024; Gurnee et al., 2024). For example, Voita et al. (2024) find token detectors that suppress repetitions. Gurnee et al. (2024) also define functional roles of neurons based on their output weight vector, such as suppression neurons that suppress a specific set of tokens. They note that suppression neurons seem to activate "when it is plausible but not certain that the next token is from the relevant set". Stolfo et al. (2024) also investigate some output-based neuron classes.

Researchers have paid less attention to the input-output perspective. Gurnee et al. (2024) compute cosine similarities between input and output weights for GPT-2 (Radford et al., 2019), but do not interpret their results. Elhage et al. (2022) mention the idea of input-output analysis (negative cosines between input and output weights "may also be mechanisms for conditionally deleting information", footnote 7), but do not follow up on this remark. Note also that input-output analysis for gated activation functions adds complexity because, in addition to input and output weight vectors, the gating mechanism is crucial for IO functionality.

3 Gated activation functions

In our neuron classification we assume gated activation functions like SwiGLU or GeGLU (Shazeer, 2020). In this section, we describe the definition
(Section 3.1) and properties (Section 3.2) of these functions. Gated activation functions are used widely, e.g., OLMo (Groeneveld et al., 2024) and Llama (Touvron et al., 2023) use SwiGLU, and Gemma (Gemma, 2024) uses GeGLU. The following description focuses on SwiGLU. GeGLU replaces Swish with GeLU, but is otherwise identical. For a visualization of a SwiGLU neuron, see Figure 6 in Appendix E.

3.1 Definitions

To keep our description simple, we ignore bias terms and layer norm parameters. (Some models, like OLMo, lack these anyway.) We describe single neurons as opposed to whole MLP layers.

[Figure 2: We define six input-output functionality classes, or IO classes, of gated activation neurons based on collinearity and orthogonality of their linear input, gate and output weight vectors. For example, depletion neurons remove the direction of the gate vector from the residual stream. Examples shown are prototypical.]

We denote by x_mid the state of the residual stream before the MLP, and by x_norm := LN(x_mid) its layer normalization. We say that a direction v ∈ R^d is present (positively) in a vector x ∈ R^d if x · v ≫ 0.

Traditional activation functions like ReLU take a single scalar as argument: ReLU(x_in). In contrast, a gated activation function like SwiGLU takes two arguments: SwiGLU(x_gate, x_in) = Swish(x_gate) · x_in. To compute the scalars x_gate and x_in, each neuron has a linear input weight vector w_in and a gate weight vector w_gate of dimension d_model. We refer to these two weight vectors as the reading weights. Then x_gate is defined as w_gate · x_norm, and x_in as w_in · x_norm. Finally, the product of SwiGLU(x_gate, x_in) and the output weight vector, w_out, is added to the residual stream.

3.2 Properties

There are three properties of gated activation functions that are key for understanding IO functionality.
Positive vs negative activation. Strong activations can be either positive or negative. If w_gate · x_norm ≫ 0 and w_in · x_norm ≫ 0, the activation is strongly positive. If w_gate · x_norm ≫ 0 and w_in · x_norm ≪ 0, the activation is strongly negative. So, depending on the context, a given gated activation neuron can either add the output weight vector to the residual stream or subtract it.

Negative values of Swish. Swish and GeLU are often seen as essentially ReLU. However, we found clearly different cases (see Section 6). w_gate · x_norm can be weakly negative, i.e., negative but close to zero. In this case its image under Swish is also weakly negative. This leads to a negative activation if w_in is present positively and a positive one otherwise.

Symmetry. Switching the signs of both w_in and w_out preserves IO behavior.

4 Method

We now describe how we investigate input-output functionalities of gated neurons, based on their weights only.

4.1 Intuition

As a running example, we consider what a neuron would do to a residual stream state representing the next-token prediction review. Before we introduce our method, let us consider a simpler case to develop our intuition: non-gated activation functions like ReLU (see also Gurnee et al. (2024)). Here, a neuron detects just one direction, determined by its input weight w_in (say review). (Given x_norm, the activation depends only on x_norm · w_in, and is positive whenever this
is positive.) Roughly, we can distinguish three cases: the neuron output (determined by w_out) can be similar to the input direction (in our case, review: we call this enrichment), different (we call this orthogonal output), or roughly opposite (in our case, "minus review": we call this depletion). In terms of weights, these cases correspond to cos(w_in, w_out) being close to 1, close to 0, or close to -1. Note that a neuron could also detect "minus review" (i.e., "review is not the next token"), and enrich or deplete that direction.

4.2 Extension to gated activation functions

In this paper, we consider gated activation functions. Here, a neuron detects two directions (w_gate and w_in), not one; so there are more cases to consider. Luckily, the symmetry property (see Section 3.2) simplifies the analysis: a neuron's behavior does not change if we switch the signs of both w_in and w_out. This implies that the sign of cos(w_gate, w_out) does not matter. Accordingly, we define six IO classes, depending on cos(w_in, w_out) (three rows: positive, negative, or zero) and |cos(w_gate, w_out)| (two columns: positive or zero). Although there is a third cosine similarity, cos(w_gate, w_in), this similarity is determined by the two others in prototypical cases. We will consider these prototypical cases first.

4.3 Prototypical cases

See Table 1 for an overview of all cases and Figure 2 for a visualization. We also encourage the use of the interactive visualization in the supplementary material. For the prototypical cases we assume the cosines are ≈ 1, ≈ -1 or ≈ 0. In these cases, knowing two of the cosine similarities implies knowing the third one: If w_gate and w_in are collinear, then w_out has the same cosine similarity with both (up to sign). Conversely, if w_gate and w_in are orthogonal, w_out cannot be collinear to both, and in fact, cos(w_gate, w_out)² + cos(w_in, w_out)² ≤ 1, with equality when w_out is in the space spanned by w_gate and w_in.

We first focus on enrichment and depletion: cos(w_in, w_out) ≈ ±1.
The gate vector can be collinear as well, i.e., cos(w_gate, w_out) ≈ ±1 (leftmost "typical" column). In this case, all three vectors are approximately in a one-dimensional subspace, so the neuron detects one direction and writes to the same direction, up to sign.

Table 1: Our six IO classes, in boldface. Five of them have "atypical" variants. We use a threshold of 0.5 (resp. -0.5) to distinguish cos() ≈ 0 from |cos()| ≫ 0.

                          |cos(w_gate, w_out)| ≫ 0               cos(w_gate, w_out) ≈ 0
cos(w_in, w_out) ≫ 0      enrichment                             conditional enrichment
                          typical:  |cos(w_gate, w_in)| ≫ 0      typical:  |cos(w_gate, w_in)| ≈ 0
                          atypical: |cos(w_gate, w_in)| ≈ 0      atypical: |cos(w_gate, w_in)| ≫ 0
cos(w_in, w_out) ≪ 0      depletion                              conditional depletion
                          typical:  |cos(w_gate, w_in)| ≫ 0      typical:  |cos(w_gate, w_in)| ≈ 0
                          atypical: |cos(w_gate, w_in)| ≈ 0      atypical: |cos(w_gate, w_in)| ≫ 0
cos(w_in, w_out) ≈ 0      proportional change                    orthogonal output
                          typical:  |cos(w_gate, w_in)| ≈ 0
                          atypical: |cos(w_gate, w_in)| ≫ 0

The sign is relevant: Assume x_norm represents the token review and w_gate detects that direction, so that the neuron activates. If cos(w_in, w_out) ≈ 1 (w_in, w_out also lie in the review subspace, and both have the same orientation), the neuron will again write review. We call this (typical) enrichment. On the other hand, if cos(w_in, w_out) ≈ -1 (they again lie in the review
subspace but have different orientations), the neuron will write "minus review". We call this (typical) depletion.² The same neurons can also get a weak negative activation if -w_gate ("minus review") is weakly present in the residual stream. In this case, Swish has a negative value (Section 3.2) and the enrichment neuron writes "plus review" to the residual stream and the depletion neuron "minus review".

Next we consider conditional enrichment and conditional depletion: w_in and w_out are roughly collinear and w_gate is orthogonal to them. Consider the example that w_in, w_out correspond to the review direction and w_gate to "verb expected as next token". The neuron will only activate conditional on w_gate being present in the residual stream (here: verb expected). If ±w_in ("plus" or "minus" review) is also present in the residual stream, then ±w_out ("plus" or "minus" review) will be added to the residual stream. For this scenario, we define a (typical) conditional enrichment neuron as one with cos(w_in, w_out) ≈ 1; this neuron will enrich the residual stream with w_in if w_in is present and with -w_in if -w_in is present ("plus" review leads to more of "plus" review, and "minus" review leads to more of "minus" review). Conversely, we define a (typical) conditional depletion neuron as one that depletes ±w_in (whichever was present) from the residual stream: "plus" review leads to "minus" review and vice versa. As before, if -w_gate is weakly present in the residual stream (there is a weak expectation that the next token is not a verb), Swish yields a negative value; so in this situation conditional enrichment and depletion neurons switch their behaviors; e.g., for a conditional enrichment neuron "plus" review will lead to "minus" review.
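The prototypical behaviors above follow directly from the SwiGLU definition in Section 3.1 and can be checked numerically. The sketch below is our minimal NumPy rendition of a single gated neuron (biases and layer norm omitted, as in the text); the one-hot "review" direction is an illustrative stand-in for a concept direction.

```python
import numpy as np

def swish(x):
    # Swish(x) = x * sigmoid(x); GeGLU would use GeLU here instead.
    return x / (1.0 + np.exp(-x))

def swiglu_neuron(x_norm, w_gate, w_in, w_out):
    """One gated neuron's write to the residual stream:
    SwiGLU(w_gate . x_norm, w_in . x_norm) * w_out."""
    return swish(w_gate @ x_norm) * (w_in @ x_norm) * w_out

d = 8
review = np.zeros(d)
review[0] = 1.0  # stand-in direction for the "review" concept

# Typical enrichment (all three weights collinear, same orientation):
# when "review" is present, the neuron adds more "review".
assert swiglu_neuron(3 * review, review, review, review) @ review > 0

# Typical depletion (w_out = -w_in): the neuron removes "review".
assert swiglu_neuron(3 * review, review, review, -review) @ review < 0

# Weakly negative gate: Swish(-0.5) < 0, so with "minus review" weakly
# present the enrichment neuron still ends up writing "plus review".
assert swiglu_neuron(-0.5 * review, review, review, review) @ review > 0
print("prototypical behaviors reproduced")
```

The last assertion reproduces the "negative values of Swish" property of Section 3.2: two negative factors combine into a positive write.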
Turning to the bottom part of Table 1, we define a (typical) proportional change neuron as one whose w_out is in the same one-dimensional subspace as w_gate, but is orthogonal to w_in. (This implies that w_gate and w_in are orthogonal.) Take the case where w_gate and w_out represent review and w_in "verb expected". If w_gate (review) is present in the residual stream, then the neuron writes a positive or negative multiple of review to the residual stream. This multiple is proportional to the presence of w_in ("verb expected") in the residual stream: If a verb is expected, the neuron writes review, if not, it writes "minus review".

All of the above neuron types are input manipulators: they write to one of the directions they detect. Our final category is the negation of this: We define an orthogonal output neuron as one whose output weight vector is orthogonal to both reading weight vectors. If w_gate and w_in are also orthogonal to each other, then such a neuron defines an interaction of three completely different meaning components.

² We prefer these terms to alternatives like increase/reduction because in practice output directions will not be exactly the same as the reading directions. See Section 7.4.

4.4 General case: Typical vs atypical functions

Many cosines will not be close to 0 or ±1. For example, such a neuron may write a concept different from but semantically related to the one it detects (say, Ireland -> Dublin
) and thus be similar to an enrichment neuron in terms of weight vector geometry. For this general case, this paper explores three options to understand neuron IO functionalities at different levels of granularity: (1) Classify neurons according to the closest prototypical case. (2) Plot the marginal distributions of the three cosine similarities. (3) Place neurons in a plot analogous to Figure 2, based on their three weight cosines.

For (1), we need two refinements. (i) We need a threshold τ for counting a cosine similarity as clearly different from zero. In this paper, we set τ = 0.5, a relatively permissive cutoff that we believe gives rise to a more informative classification of neurons. (ii) cos(w_in, w_gate) may not always "match" the other two cosine similarities; e.g., the two reading weights may be orthogonal, but w_out = (1/√2) w_gate + (1/√2) w_in; then both cosine similarities are 1/√2 > 0.5. We are mainly interested in IO behavior rather than comparing the two reading weights, so we classify such cases based on cos(w_in, w_out) and cos(w_gate, w_out). To signal the "mismatch" of cos(w_in, w_gate), we prepend atypical to the category's name. In the above example, we will speak of an atypical enrichment neuron.

[Figure 3: Distribution of neurons by layer and category for meta-llama/Llama-3.2-3B. Categories: enrichment, atypical enrichment, conditional enrichment, atypical conditional enrichment, proportional change, atypical proportional change, orthogonal output, depletion, atypical depletion, conditional depletion, atypical conditional depletion.]
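The discrete classification of Sections 4.3–4.4 can be written down compactly. The following sketch is our illustration (the function and variable names are not from any released code); it applies the τ = 0.5 threshold and the atypicality rule just described.

```python
import numpy as np

TAU = 0.5  # threshold: |cos| > TAU counts as "clearly different from zero"

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def io_class(w_gate, w_in, w_out, tau=TAU):
    """Assign one of the six IO classes from the three weight cosines."""
    c_io = cos(w_in, w_out)          # sign is meaningful
    c_go = abs(cos(w_gate, w_out))   # sign carries no information (symmetry)
    c_gi = abs(cos(w_gate, w_in))
    if c_io > tau:
        base = "enrichment" if c_go > tau else "conditional enrichment"
    elif c_io < -tau:
        base = "depletion" if c_go > tau else "conditional depletion"
    elif c_go > tau:
        base = "proportional change"
    else:
        return "orthogonal output"   # the only class with no atypical variant
    # "typical" means cos(w_gate, w_in) matches the other two cosines:
    # large for enrichment/depletion, near zero for the other classes.
    expected_large = base in ("enrichment", "depletion")
    typical = (c_gi > tau) == expected_large
    return base if typical else "atypical " + base

# Orthogonal reading weights, output collinear with w_in:
g = np.array([1.0, 0.0, 0.0])
i = np.array([0.0, 1.0, 0.0])
print(io_class(g, i, i))                     # conditional enrichment
# The Section 4.4 example: w_out = (g + i)/sqrt(2), both cosines 1/sqrt(2) > 0.5.
print(io_class(g, i, (g + i) / np.sqrt(2)))  # atypical enrichment
```

The second call reproduces the worked example above: both cos(w_in, w_out) and |cos(w_gate, w_out)| exceed τ while the reading weights are orthogonal, so the neuron is labeled atypical enrichment.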
In Figure 2, the atypical classes share their position with typical classes, but differ in color. Table 1 shows all atypical (and typical) classes.

5 IO functionalities by layers

We conduct our study on 12 models: Gemma-2-2B, Gemma-2-9B (Gemma, 2024), Llama-2-7B, Llama-3.1-8B, Llama-3.2-1B, Llama-3.2-3B (Touvron et al., 2023), OLMo-1B, OLMo-7B-0424 (Groeneveld et al., 2024), Mistral-7B (Jiang et al., 2023), Qwen2.5-0.5B, Qwen2.5-7B (Yang et al., 2024), Yi-6B (01.AI et al., 2025). These models use SwiGLU, except for Gemma, which uses GeGLU. For each model, we classify the MLP neurons based on the cosine similarities of the three weight vectors, as described in Section 4.

Here we describe the results for Llama-3.2-3B. They are representative of the general trends we observe. Appendix I contains the plots for all models. We progress from (i) the coarse-grained version of our method, with discrete classes, to (ii) the marginal distributions of each cosine similarity, to (iii) fine-grained scatter plots showing all individual neurons.

5.1 Discrete classes

Figure 3 shows IO class distribution across layers. We
see that a large proportion of neurons are input manipulators (i.e., they are not orthogonal output neurons): in the Llama model, these are 25% of all neurons, and as much as 50% in early-middle layers (layers 7–11). This highlights an advantage of our parameter-based IO classes: It is an exhaustive analysis of all neurons, and we can make non-trivial statements about a large subset of them. Other methods only assign a subset of neurons to classes; e.g., Gurnee et al. (2024)'s classification only covers 1–5% of neurons.

The majority of these input manipulators (more than 80% in Llama) belong to just one class: conditional enrichment. Across all models, conditional enrichment dominates early-middle layers. In contrast, the (relatively few) input manipulators in late layers are often proportional change neurons or depletion neurons.

The dominance of conditional enrichment neurons in early-middle layers echoes Geva et al. (2023)'s and Lad et al. (2024)'s findings that these layers perform enrichment (or feature engineering). We discuss this in Section 7.4. These patterns hold for all models. Some other models display additional patterns, for example a large number of conditional depletion neurons in middle-late layers. See Appendix I.

[Figure 4: Boxplots for the distribution of weight cosine similarities in each layer of meta-llama/Llama-3.2-3B. For cos(w_gate, w_in) and cos(w_gate, w_out) we show the absolute value since their sign does not carry any information on its own.]

5.2 Marginal distributions

Figure 4 shows the distribution of weight cosine similarities in each layer. In Figure 1 we also show the median of cos(w_in, w_out) across all investigated models.
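The per-layer medians behind Figures 1 and 4 need only the weight matrices themselves. Below is a minimal sketch of ours using random stand-in weights; for a real model the tensors would be loaded per layer (in the Hugging Face Llama implementation the relevant modules are, to our knowledge, `mlp.up_proj`, `mlp.down_proj`, and analogously `mlp.gate_proj` for the gate-related cosines).

```python
import numpy as np

def layer_median_io_cos(w_in_mat, w_out_mat):
    """Median over a layer's neurons of cos(w_in, w_out).
    w_in_mat: (d_ff, d_model), one reading weight per row;
    w_out_mat: (d_model, d_ff), one output weight per column."""
    w_out_rows = w_out_mat.T  # (d_ff, d_model): one output weight per row
    dots = np.sum(w_in_mat * w_out_rows, axis=1)
    norms = np.linalg.norm(w_in_mat, axis=1) * np.linalg.norm(w_out_rows, axis=1)
    return float(np.median(dots / norms))

# Illustration with random weights: cosines of random high-dimensional
# vectors concentrate around zero, so the median lands near 0 here,
# whereas real models deviate systematically by layer (Figure 1).
rng = np.random.default_rng(0)
d_model, d_ff = 64, 256
med = layer_median_io_cos(rng.standard_normal((d_ff, d_model)),
                          rng.standard_normal((d_model, d_ff)))
print(round(med, 4))
```

Running this over every layer and taking the median per layer yields exactly the kind of curve plotted in Figure 1.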
We already know that conditional enrichment neurons are plentiful in the early-middle layers. Correspondingly, the median value of cos(w_in, w_out) peaks in these layers. Later on, it moves below zero, indicating that now the majority of neurons have negative cos(w_in, w_out). Figure 1 shows that this generalizes across models.

Regarding |cos(w_gate, w_out)|, the median values are relatively close to zero (corresponding to conditional classes and orthogonal output). But there is a large spread in early-middle layers and in the last few layers. This seems to correspond to the proportional change neurons appearing in all of these layers, as well as depletion neurons in the last few layers.

|cos(w_gate, w_in)| is mostly concentrated around zero. Thus most neurons operate on two input directions in the residual stream (not a single one), resulting in higher expressivity and more complex semantics. This is likely related to double checking; see Section 7.2.

We also notice that there are many outliers for all three cosine similarities, in almost all layers. This suggests that a non-negligible number of neurons perform special tasks different from the "average" neuron.

[Figure 5: Fine-grained analysis of neuron IO behavior in three layers (layers 0, 14, 27) based on the configuration of their three weight vectors in parameter space. Each subplot represents a layer, each dot a neuron.]

5.3 Fine-grained analysis of IO behavior

We now investigate weight
vector configurations in detail, as shown in Figure 5 for a few selected layers. The distribution of neurons in each layer is plotted by displaying each neuron as a point with cos(w_gate, w_out) indicated on the x-axis, cos(w_in, w_out) on the y-axis, and cos(w_gate, w_in) as its color. This visualization reinforces three findings from Sections 5.1 and 5.2. (i) We already know that many neurons are input manipulators. Now we see that, even though there are many neurons we classified as orthogonal output, there is no cluster around the origin as we might expect. Instead, the orthogonal output neurons often belong to clusters that are centered above/below the horizontal line. This suggests that even the orthogonal output neurons perform input manipulation to some extent. (ii) We also have already observed a smooth transition from enrichment-like functionalities in early-middle layers to more depletion-like functionalities in the last few layers. We indeed see a large cluster of neurons, centered clearly above the x-axis in most layers, but moving below it in the last few layers. (iii) We also observe that the vast majority of neurons is turquoise, i.e., cos(w_gate, w_in) ≈ 0, confirming the finding in Section 5.2. We also gain four new insights. (i) The first layer exhibits quite different patterns from model to model. (ii) In middle layers, all models have a big cluster related to conditional enrichment neurons, as described above. Additionally, many models have outlier "arms" from this cluster, towards the plot areas corresponding to proportional change and depletion. Other models, such as OLMo, additionally have a cluster of neurons below the x-axis, corresponding to conditional depletion neurons. (iii) Neurons with orthogonal w_gate and w_in must be within the unit disk. It is striking to see that they do not fill out this disk evenly. Instead, as already mentioned, there is a big cluster above the x-axis (close to conditional enrichment).
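A single Figure-5-style panel amounts to one scatter plot per layer with the three cosines mapped to x-position, y-position, and color. The sketch below illustrates this layout with matplotlib, using random points in place of real per-neuron cosines; the function name and styling are our own, not from the paper's released code.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

def plot_io_scatter(c_go, c_io, c_gi, title="layer"):
    """One panel in the style of Figure 5: each dot is a neuron,
    x = cos(w_gate, w_out), y = cos(w_in, w_out),
    color = cos(w_gate, w_in)."""
    fig, ax = plt.subplots(figsize=(3, 3))
    sc = ax.scatter(c_go, c_io, c=c_gi, cmap="viridis",
                    vmin=-1, vmax=1, s=4)
    ax.set_xlim(-1, 1)
    ax.set_ylim(-1, 1)
    ax.axhline(0, lw=0.5, color="gray")
    ax.axvline(0, lw=0.5, color="gray")
    ax.set_xlabel("cos(w_gate, w_out)")
    ax.set_ylabel("cos(w_in, w_out)")
    ax.set_title(title)
    fig.colorbar(sc, ax=ax, label="cos(w_gate, w_in)")
    return fig

# Random stand-in values; real use would pass the per-neuron cosines
# computed from one layer's weight matrices.
rng = np.random.default_rng(0)
fig = plot_io_scatter(rng.uniform(-1, 1, 500),
                      rng.uniform(-1, 1, 500),
                      rng.uniform(-1, 1, 500),
                      title="Layer 14")
```

In this layout, conditional enrichment neurons sit above the x-axis, depletion neurons below it, and neurons with |cos(w_gate, w_out)| near 1 toward the left/right edges.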
But this cluster is not right at the border of the disk, but more inside (in particular, cos(w_in, w_out) is still clearly below 1). This echoes and extends Gurnee et al. (2024)'s findings that in GPT2 the IO cosine similarity is approximately bounded by ±0.8. In other words, we almost never get the prototypical cases of conditional enrichment / depletion etc., as defined in Section 4. This helps us refine our notion of "input manipulators": these neurons do more than just outputting a w_out that is already present in the residual stream; instead, they add novel but related information. (iv) In the last few layers (Llama: layers 25-27), some new phenomena occur: apart from the big cluster, there is a new cluster in the bottom corners of the plot (close to depletion). Additionally, in the last layer of some models, there is a cluster of turquoise points around the upper y-axis (close to conditional enrichment).

6 Case studies

We now demonstrate how the IO perspective can complement other methods to help understand individual neurons. To this effect, we present 6 case studies for OLMo-7B, one for each discrete IO class. We restrict the search space to