text
string
source
string
LILI YU, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023. LIMA: Less is more for alignment. In Advances in Neural Information Processing Systems , volume 36, pages 55006–55021. Curran Associates, Inc. 12 A Pretrained or Backbone Choice We experimented with several pretrained or base model choices. In particular, we focused on fine- tuning recently introduced encoder models such as ModernBERT (Warner et al., 2024) to decoder- based large language models such as Qwen-2.5 (less than <500M parameters). We fine-tune each backbone on the whole BGE retrieval training sub- set (15 datasets & 1.6M training pairs) for up to 3 training epochs with different hyperparameters to fit the training with 4xA6000 GPUs. We plot the model configurations and training settings in Table 9. Validation results. From Table 9, we observe that encoder models pre-trained such as E5-base or E5- large achieve the highest nDCG@10 scores on four BEIR datasets. These outperform recent backbones such as ModernBERT-base (Warner et al., 2024) or even smaller-sized LLMs such as Qwen-2.5 (0.5B). This anecdotally confirms that the unsu- pervised pre-training stage in E5 pretrained models is useful and necessary for achieving a competitive nDCG@10 score on BEIR. Since fine-tuning E5 (large) is around 2 ×slower than fine-tuning E5 (base), we run our main experiments on E5 (base) due to computational budget constraints. Dataset ∼100K ∼250K ∼400K ∼680K MS MARCO 49,571 145,000 210,000 485823 HOTPOT QA 10,250 30,000 84,516 84516 NQ 6110 30,000 58,568 58,568 FEVER 8017 28,755 28,755 28,755 SCIDOCS RR 12,654 12,654 12,654 12,654 FIQA 5500 5,500 5,500 5,500 ARGUANA 4065 4,065 4,065 4,065 Total Pairs 96,167 255,974 404,058 679,881 Table 8: Training pair distribution across seven datasets for four configurations: 100K, 250K, 400K, and 680K. 
B Leaving-one-dataset-out We provide detailed scores for leaving-one-dataset- out (Figure 4) in Table 10, where we fine-tune E5-base retriever models on: Part (a): no datasets; Part (b): all 15 datasets; Part (c): all 15 datasets but one left-out dataset; Part (d): 7 datasets that cause the most signifi- cant effectiveness drop after being removed; 13 Backbone #Params #Layers Hidden Size Pool LR Batch Size Epoch Time Taken COVID NFC. FiQA SciFact E5-large (unsup.) (Wang et al., 2022b) 330M 24 1024 mean 1e– 5 128 x 8 x 4 3 ∼36 hours 0.712 0.383 0.475 0.747 ModernBERT-base (Warner et al., 2024) 149M 22 768 mean 2e– 4 256 x 8 x 4 3 ∼12 hours 0.560 0.279 0.440 0.602 E5-base (unsup.) (Wang et al., 2022b) 110M 12 768 mean 2e– 5 256 x 8 x 4 3 ∼18 hours 0.731 0.381 0.444 0.728 E5-small (unsup.) (Wang et al., 2022b) 33M 12 384 mean 3e– 5 256 x 8 x 4 3 ∼13 hours 0.667 0.349 0.420 0.698 Qwen-2.5-0.5B (Yang et al., 2024a) 500M 24 896 last 1e– 5 96 x 8 x 4 3 ∼36 hours 0.503 0.356 0.417 0.692 SmolLM2-360M (Allal et al., 2025) 360M 32 960 last 1e– 5 96 x 8 x 4 3 ∼33 hours 0.670 0.336 0.355 0.635 SmolLM2-135M (Allal et al., 2025) 135M 30 576 last 1e– 5 128 x 8
https://arxiv.org/abs/2505.16967v1
x 4 3 ∼24 hours 0.668 0.327 0.304 0.608 Table 9: Model configuration, training settings, and retrieval results (nDCG@10) for backbone models fine-tuned on the BGE-training dataset (1.6M training pairs) and evaluated on four datasets from the BEIR benchmark. The models are sorted according to parameter size; The best score is highlighted as bold , the second best is underlined . COVID denotes the TREC-COVID dataset and NFC. denotes the NFCorpus dataset. Setting Training Pairs TREC-COVID NFCorpus NQ HotpotQA FiQA-2018 ArguAna Touché-2020 DBPedia SCIDOCS FEVER Climate-FEVER SciFact TREC-NEWS Robust04 Avg. 14 Improved Reduced Keep Dataset? (a) Pre-trained (Only) 0 0.610 0.358 0.390 0.524 0.401 0.422 0.169 0.354 0.211 0.634 0.154 0.737 0.441 0.416 0.416 - - - (b) (ALL) Training Pairs 1.60M 0.731 0.381 0.595 0.726 0.444 0.652 0.181 0.437 0.233 0.871 0.370 0.728 0.434 0.477 0.519 - - - w/o ELI5 1.27M 0.772 0.378 0.593 0.728 0.424 0.652 0.213 0.434 0.235 0.868 0.377 0.734 0.469 0.478 0.525 7 5 ✗ w/o FEVER 1.57M 0.748 0.379 0.598 0.725 0.446 0.647 0.175 0.434 0.234 0.787 0.240 0.749 0.423 0.483 0.505 6 5 ✓ w/o HotpotQA 1.51M 0.724 0.381 0.600 0.642 0.449 0.652 0.178 0.425 0.232 0.863 0.358 0.725 0.441 0.489 0.511 4 7 ✓ w/o MS MARCO Document 1.23M 0.742 0.380 0.586 0.726 0.445 0.656 0.175 0.435 0.235 0.866 0.347 0.742 0.458 0.490 0.520 6 5 ✗ w/o Stack Overflow (Dup.) 
1.58M 0.720 0.379 0.593 0.726 0.444 0.650 0.174 0.436 0.235 0.870 0.368 0.729 0.431 0.487 0.517 7 2 ✗ w/o Trivia QA 1.54M 0.729 0.380 0.595 0.730 0.450 0.647 0.174 0.440 0.234 0.870 0.382 0.731 0.443 0.481 0.520 7 3 ✗ w/o NLI 1.60M 0.729 0.380 0.594 0.726 0.445 0.652 0.177 0.437 0.233 0.870 0.368 0.728 0.436 0.477 0.518 1 3 ✗ (c) w/o SQuAD 1.51M 0.709 0.379 0.598 0.723 0.445 0.654 0.181 0.437 0.234 0.872 0.376 0.729 0.439 0.481 0.518 5 3 ✗ w/o ArguAna 1.59M 0.736 0.381 0.598 0.728 0.448 0.434 0.174 0.434 0.234 0.871 0.378 0.731 0.445 0.486 0.506 8 3 ✓ w/o FIQA-2018 1.59M 0.728 0.380 0.596 0.727 0.428 0.658 0.174 0.436 0.235 0.871 0.370 0.729 0.433 0.477 0.517 3 2 ✓ w/o MS MARCO Passage 1.11M 0.699 0.377 0.551 0.730 0.440 0.650 0.162 0.407 0.237 0.869 0.338 0.733 0.431 0.484 0.508 3 10 ✓ w/o NQ 1.54M 0.745 0.381 0.553 0.728 0.451 0.659 0.178 0.435 0.234 0.867 0.369 0.728 0.435 0.472 0.517 5 4 ✓ w/o Quora 1.54M 0.759 0.382 0.599 0.727 0.451 0.653 0.185 0.436 0.234 0.867 0.371 0.729 0.436 0.481 0.522 6 1 ✗ w/o S CIDOCS RR 1.59M 0.733 0.378 0.595 0.727 0.447 0.662 0.178 0.436 0.201 0.868 0.374 0.740 0.434 0.475 0.518 5 4 ✓ w/o STS 1.60M 0.718 0.379 0.596 0.727 0.446 0.652 0.177 0.437 0.234 0.867 0.369 0.729 0.435 0.478 0.517 1 4 ✗ (d) 7 Datasets Pruned ( ✓) 680K 0.781 0.376 0.593 0.728 0.421 0.664 0.242 0.440 0.204 0.875 0.397 0.748 0.467 0.464 0.529 9 5 - Table 10: Retrieval results measuring nDCG@10 on 14 datasets in the BEIR benchmark by fine-tuning E5 (base) by leaving out one training
https://arxiv.org/abs/2505.16967v1
dataset at a time and fine-tuning the rest. Improved denotes E5 (base) with a nDCG@10 better than +1point, Reduced with a nDCG@10 worse than −1point, and NoChange within the ±1point range, compared to part (b) E5 (base) fine-tuned on ALL Training Pairs. Each row in part (c) is fine-tuned on all but one left-out dataset. Part (c) is fine-tuned on the 7 selected datasets. 14 SYSTEM: Given (1) a search question, (2) a relevant ground-truth document, (3) and a set of unrelated documents that may appear in any system’s response to that question. Your task is to evaluate whether any of the unrelated documents are relevant compared to the ground-truth document in answering the question. A document is only considered relevant to the question if it provides sufficient information in answering the question. ##Input You will receive: 1.question : The question that the to-be-judged documents will be evaluated on. 2.ground_truth : A pre-validated document judged as most relevant to the question. This document can answer the question and should be used as a guide for your analysis. 3.documents : A set of unrelated documents which may not be relevant in answering the question. You will first read the question and carefully analyze each unrelated documents provided to you. Read every question and unrelated document carefully as you would when proofreading. ##Criteria Use the following criteria to judge the relevance of each document: -Relevant : A document is considered relevant to the question if it provides sufficient information in answering the question, containing allnecessary parts highlighted in the ground truth. -Not Relevant : The document does not answer the question and does not provide information in entailing parts present in the ground truth. ##Output Follow these detailed steps and output your reasoning for each step wrapped for each respective XML tag below: 1. You should think and provide your reasoning under <thinking> [ . . . 
] </thinking> onwhy andhow if an unrelated document is relevant following the criteria above. 2. Next, for all unrelated documents which are found to be relevant , compare them against the ground truth (<ground_truth> ) document in answering the question under <preference> [ . . . ] </preference> tokens. 3. Finally, output the list of documents which are (1) relevant and (2) prefer better or equal under the XML tag (<better> ) or worse ( <worse> ) than the ground truth ( <ground_truth> ) document for answering the question in <verdict> [ . . . ] </verdict> . Output [ ] if none of the documents are found to be relevant. Follow strictly the format below: <thinking> Evaluate the reasoning individually for all unrelated documents to answer the question Doc (1): output the reasoning here Doc (2): output the reasoning here . . . </thinking> <preference> Compare the ground truth and every relevant document individually to answer the question Doc (1): compare the relevance of Doc (1) with the <ground_truth> document here, which is more preferred? . . . </preference> <verdict> <better> Preferred over or equally as ground truth: [Doc (2) . . . ] </better> ,
https://arxiv.org/abs/2505.16967v1
<worse> Relevant but not preferred over ground truth: [Doc (1) . . . ] </worse> </verdict> ———– <question> {question }</question> <ground_truth> {ground _truth }</ground_truth> <documents> {documents }</documents> Figure 6: Prompt used in RLHN with GPT-4o-mini and GPT-4o for relabeling hard negatives for all BGE training datasets. Certain texts above in the prompt are bolded and tab-aligned to assist with reading. 15 0.760.780.80nDCG@10 TREC-COVID 0.3750.3800.3850.390 NFCorpusNo Filter RLHN Stage 1 (GPT-4o-mini) RLHN Stage 2 (GPT-4o-mini + GPT-4o) 0.570.580.590.60 NQ 0.700.720.74 HotpotQA 0.4400.4450.4500.4550.460nDCG@10 FiQA-2018 0.700.710.72 ArguAna 0.220.240.26 T ouché-2020 0.4350.4400.445 DBPedia 0.24250.24500.24750.25000.2525nDCG@10 SCIDOCS 0.860.870.88 FEVER 0.320.340.360.38 Climate-FEVER 0.7340.7360.7380.7400.742 SciFact 100K 250K 400K 680K0.470.48nDCG@10 TREC-NEWS 100K 250K 400K 680K0.440.460.48 Robust04 100K 250K 400K 680K0.2680.2700.2720.274 Signal-1M 100K 250K 400K 680K0.360.370.380.39 BioASQFigure 7: nDCG@10 scores on all 16 BEIR datasets by fine-tuning E5 (base) retrieval model on a subset of the 100K, 250K, 400K, and 680K training pairs using the “RLHN” technique on both stages: Stages 1 and 2. 100K 250K 400K 680K0.320.340.36nDCG@10 Arxiv 100K 250K 400K 680K0.400.420.44 FinanceNo Filter RLHN Stage 1 (GPT-4o-mini) RLHN Stage 2 (GPT-4o-mini + GPT-4o) 100K 250K 400K 680K0.490.500.510.52 Healthcare 100K 250K 400K 680K0.120.130.14 Law 100K 250K 400K 680K0.450.460.47nDCG@10 News Figure 8: nDCG@10 scores on all 5 AIR-B ENCH datasets by fine-tuning E5 (base) retrieval model on a subset of the 100K, 250K, 400K, and 680K training pairs using the “RLHN” technique on both stages: Stages 1 and 2. 16 Figure 9: A screenshot of the human validation study conducted via Label Studio. First, the human assessor reads the query and the relevant passages. 
Next, the assessor reads a sequence of hard negative passages one by one and evaluates the relevancy with the question, marking their decision in the checkbox as either (1) relevant or (2) non-relevant . 17
https://arxiv.org/abs/2505.16967v1
arXiv:2505.16968v2 [cs.AR] 25 May 2025 CASS: Nvidia to AMD Transpilation with Data, Models, and Benchmark Ahmed Heakl1Sarim Hashmi†1Gustavo Bertolo Stahl†1Seung Hun Eddie Han1 Salman Khan1,2Abdulrahman Mahmoud1∗ 1MBZUAI2Australian National University /githubhttps://github.com/GustavoStahl/CASS ὑ7https://huggingface.co/datasets/MBZUAI/cass Abstract We introduce CASS , the first large-scale dataset and model suite for cross- architecture GPU code transpilation, targeting both source-level (CUDA ↔HIP) and assembly-level (Nvidia SASS ↔AMD RDNA3) translation. The dataset com- prises 70k verified code pairs across host and device, addressing a critical gap in low-level GPU code portability. Leveraging this resource, we train the CASS family of domain-specific language models, achieving 95% source translation accuracy and 37.5% assembly translation accuracy, substantially outperforming commercial baselines such as GPT-4o, Claude, and Hipify. Our generated code matches native performance in over 85% of test cases, preserving runtime and memory behavior. To support rigorous evaluation, we introduce CASS-Bench , a curated benchmark spanning 16 GPU domains with ground-truth execution. All data, models, and evaluation tools are released as open source to foster progress in GPU compiler tooling, binary compatibility, and LLM-guided hardware translation. Dataset and benchmark are on HuggingFace, with code at GitHub. 1 Introduction Graphics Processing Units (GPUs) are foundational to modern machine learning and scientific computing workloads due to their high-throughput parallelism. Nvidia’s Compute Unified Device Architecture (CUDA) [ 1] has become the dominant programming model for GPU acceleration, but its tight coupling to proprietary hardware introduces severe vendor lock-in: CUDA code cannot run on non-Nvidia GPUs due to incompatible instruction set architectures (ISAs) [ 2]. 
As a result, organizations with large CUDA-based codebases face steep engineering costs when migrating to alternative platforms. Meanwhile, AMD GPUs, offering potential favorable performance-per- dollar [ 3,4], are increasingly adopted across both data centers and consumer devices [ 5], creating a growing need to execute legacy CUDA programs on AMD hardware without full rewrites in software [6]. In response, AMD introduced the Heterogeneous-computing Interface for Portability (HIP) [ 7], a C++ GPU API built into the ROCm stack [ 8], designed to mirror CUDA’s functionality while supporting cross-platform development. HIP enables a unified codebase for both Nvidia and AMD GPUs. Tools like HIPIFY [ 9], a static translator, assist migration by converting CUDA-specific constructs into ∗†Equal contribution. Preprint. their HIP equivalents, streamlining adoption of the ROCm stack. However, HIPIFY only operates at the source level and cannot execute precompiled CUDA binaries. Furthermore, it exhibits a high failure rate when converting CUDA programs, highlighting the need for more reliable and lower-level transpilation approaches [10]. Translating GPU assembly across vendors is hindered by divergent ISAs and compilation pipelines. Nvidia employs a proprietary toolchain centered on nvcc , producing PTX and low-level SASS [ 1], while AMD uses GCN/RDNA architectures compiled via the open-source ROCm stack using hipcc [8] (Figure 2 provides a detailed breakdown of the alternative stacks). Bridging this gap at the assembly level is critical for democratizing the hardware computing landscape, transfer of hardware-specific optimizations across vendors, and enabling automation beyond source-level rewrites, especially for legacy CUDA codebases rich in low-level tuning. Our model introduces the first foundation for Nvidia-to-AMD assembly and source translation,
https://arxiv.org/abs/2505.16968v2
focusing on correctness and alignment. While not optimization-aware yet, it paves the way for future systems that preserve and adapt performance-critical patterns across GPU backends. To address the lack of cross-architecture GPU translation datasets, we introduce CASS (CUDA–AMD ASsembly and Source Mapping), a large-scale corpus of 70k semantically aligned CUDA–HIP source pairs and their corresponding host (CPU – x86 ISA) and device (GPU) assemblies for Nvidia (SASS) and AMD (RDNA3) platforms. Each sample comprises functionally equivalent low-level code across vendors, verified through successful compilation and execution, enabling instruction-level analysis across execution boundaries. Unlike generic code corpora like The Stack [ 11], which lack GPU-aligned and compilable content, CASS provides complete source and binary representations across both GPU compute stacks. To construct CASS, we developed a fully open-source pipeline that scrapes, synthesizes, translates (via HIPIFY [ 9]), compiles, and aligns GPU code. We evaluate CASS along two dimensions: (1) instruction coverage, capturing diverse SASS and RDNA3 opcodes; and (2) domain coverage, spanning real-world compute kernels from ML, graphics, and HPC. CASS is the first dataset to enable source- and assembly-level translation research for GPU architectures. To validate the utility of our dataset, we introduce the CASS model family, a suite of domain-specific large language models fine-tuned for both source and assembly-level GPU code translation. These models are trained on our curated corpus and demonstrate significant improvements over SoTA propri- etary systems such as GPT-4o [ 12], Claude-3.7 [ 13], and traditional tools like HIPIFY [ 9]—achieving 95% accuracy in source-level translation and 37.5% in assembly translation. To ensure rigorous evaluation, we further contribute CASS-Bench , the first benchmark tailored to cross-architecture GPU transpilation. 
It spans 16 diverse GPU domains with execution-verified source and assembly pairs, providing a standardized testbed for future work in low-level translation and performance-aware code generation. Our contributions are summarized as follows: •CASS Dataset. We introduce CASS , the first large-scale dataset for GPU transpilation, containing 70k semantically aligned Nvidia ↔AMD pairs at both the source (CUDA ↔ HIP) and assembly levels (SASS ↔RDNA3), covering 16 real-world GPU domains. •CASS-Bench. We contribute the first evaluation benchmark for cross-architecture GPU translation, with 40 curated tasks across 16 domains, including functionally verified outputs and aligned CUDA/HIP source and SASS/RDNA3 assembly. •CASS Models. We release domain-specialized CASS LLMs trained for cross-architecture code translation. Our 7B model achieves 95% source and 37.5% assembly accuracy, outperforming GPT-4o and Claude (0%) on CASS-Bench. Crucially, 85% of translated assemblies preserve execution runtime and memory compared to native, confirming semantic and performance fidelity. •CASS Dataset Pipeline. We designed a scalable pipeline for scraping, synthesizing, transpiling, and compiling CUDA/HIP code into aligned host and device assemblies across Nvidia and AMD GPUs. The rest of the paper is organized as follows: §2 reviews prior work on Nvidia-to-AMD and assembly translation. §3 describes our data collection, conversion, and filtering pipeline. §4 analyzes dataset structure and coverage. §5 outlines model training and evaluation, with results and ablations in §6. Finally, §7 lists limitations and future work, followed by §8 concluding remarks. 2 Table 1: Comparison of Domain/Characteristics across Different Datasets Domain/ ComputeEval
https://arxiv.org/abs/2505.16968v2
Rodinia SHOC Poly Babel Ours Characteristics NVIDIA[19] Bench[20] [21] Bench[22] Stream[23] CUDA (source) ✓ ✓ ✓ ✓ ✓ ✓ SASS (assembly) ✗ ✗ ✗ ✗ ✗ ✓ RDNA3 (assembly) ✗ ✗ ✗ ✗ ✗ ✓ OpenCL (source) ✗ ✓ ✓ ✓ ✓ ✓ 2 Related Works In this section, we describe prior work in GPU translation efforts (§2.1), assembly-level transpilation (§2.2), and related benchmarks (and their shortcomings) in the space (§2.3) 2.1 Translating from Nvidia to AMD The fragmentation of GPU software ecosystems has driven the need for robust CUDA-to-HIP translation tools. HIPIFY [ 14] statically converts CUDA source code into HIP, enabling ROCm compatibility via direct syntax substitution. Operating at a lower abstraction, CuPBoP-AMD [ 15] translates NVVM IR to HIP-compatible LLVM IR using the LLVM toolchain [ 16,17], offering more flexible intermediate-level interoperability. Earlier, GPU Ocelot [ 18] explored dynamic binary translation, recompiling CUDA to AMD/x86 ISAs at runtime. Although innovative, it was limited by poor scalability and high overhead, making it impractical for modern GPU workloads. All these tools have lacked consistent updates to keep up with CUDA advances, suffer from usability issues, and operate only at the source level. More recently, ZLUDA [ 6] introduced a runtime system for executing unmodified CUDA binaries on AMD GPUs without source access by intercepting CUDA APIs and translating PTX/SASS into AMD-compatible code via LLVM. Originally targeting Intel, it now supports AMD RDNA3 through runtime patching. ZLUDA operates at the LLVM IR level rather than the hardware assembly. While a reasonable level in the stack to target, ZLUDA would not be able to benefit from low-level, backend Nvidia optimizations (operating below the PTX level), and is limited to the AMD stacks backend optimizations. 
In our work, we target assembly-to-assembly translation, in an effort to leverage hardware-specific optimizations below the intermediate representation (IR) level, that may be missing altogether in the corresponding AMD codebase. 2.2 Assembly-to-Assembly Translation Translating assembly across ISAs is challenging due to divergent instruction sets and execution models. Recent work employs language models for this task, including CRT [ 24], a lightweight transpiler from x86 assembly (CISC) to ARM (RISC), and Guess & Sketch [ 25], which integrates language models with symbolic reasoning to translate between ARMv8 and RISC-V . These recent successes open the door for assembly-to-assembly translation in the unexplored GPU-to-GPU space. A key contributing factor to their success is the large CPU-centric dataset enabling training from one ISA to another. Given the lack of such a rich dataset in the GPU space, a primary goal of this work is to enable such an exploration and transpilation across GPU vendors, democratizing compute in the critical GPU and ML-acceleration landscape, where Nvidia/CUDA currently dominate the market. 2.3 Datasets and Benchmarks for CUDA and HIP As shown in table 1, existing benchmarks in the GPU space generally focus on runtime performance, do none target the assembly level, and do not have paired/aligned data across Nvidia/AMD codebases. ComputeEval [ 19] includes only CUDA code for hardware evaluation. Rodinia [ 20] and SHOC [ 21] provide heterogeneous benchmarks using CUDA/OpenCL/OpenMP but omit
https://arxiv.org/abs/2505.16968v2
AMD code and assem- bly. PolyBench [ 22] evaluates compilers with CUDA/OpenCL kernels, yet lacks assembly-level or AMD support. BabelStream [ 23] benchmarks HIP/CUDA/OpenCL memory bandwidth but excludes assembly and domain diversity. Hetero-Mark [ 26] targets CPU–GPU workloads where GPU code is minimal. The Stack [ 11,27] dataset nearly 200k CUDA files but no AMD coverage or aligned 3 Qwen2.5- 32B-Coder Generate BatchNorm optimized for MemoryGenerate BatchNorm optimized for MemoryGenerate BatchNorm optimized for MemoryGeneration Prompts CUDA RepositoriesCUDA RepositoriesCUDA RepositoriesScrape Relevant Repositories Stack Data Deduplication CASS Dataset Variable-Augmented Personas BatchNorm optimized for <optimization> Fluid simulation using <method> optimized for <optimizations>Templates ComputeMLVariables Mem. bandwidth Occupancy Shared Memory Throughput Synchronization ...Meshless Spectral Finite Dif f. Level Set ....Scraped HIP Files Scraped HIP Files Scraped HIP Files Synthetic HIP Files Synthetic HIP Files Synthetic HIP Files CUDA Files CUDA Files Scraped CUDA FilesSynthetic CUDASynthetic CUDASynthetic CUDA FilesHipify Filtration Non-CUDA Files Long/Short Files BoilerPlate Files No kernel filesCompile & Match OutputsFigure 1: CASS Pipeline: We collect CUDA code from public repositories and synthesize additional samples via prompt-based LLM generation. After filtering and deduplication, all CUDA files are translated to HIP using HIPIFY , then compiled to extract host and device assembly. Matched outputs form the CASS dataset with aligned source and assembly pairs across Nvidia and AMD stacks. assembly. In contrast, CASS uniquely offers 70k semantically aligned CUDA–HIP source and SASS–RDNA3 assembly pairs across both host and device, enabling instruction-level analysis and forming the first dataset purpose-built for cross-vendor GPU assembly translation. 
To the best of our knowledge, no existing dataset provides paired source- and assembly-level Nvidia- AMD code, hindering effective training and benchmarking. 3 Methods This section outlines the end-to-end methodology behind CASS-Instruct , including data collection, code conversion, and compilation for Nvidia and AMD GPUs. We built the low-level assembly corpus from high-level CUDA code using three strategies: scraping public repositories, generating synthetic samples, and applying targeted code generation frameworks. 3.1 CUDA Code Scraping We leveraged the Stackv2 dataset [ 27] to extract CUDA source files. This dataset, curated from a vast array of public code repositories, offers deduplicated and license-compliant samples, facilitating the assembly of a diverse corpus of GPU-oriented code. To maximize the number of compiled files in the later stage, we used the dataset’s metadata to identify and download the top 200 repositories with the highest number of CUDA files. This repository-level download preserved the original directory structure and relative imports, as shown in Figure 1, and improved compilation success by 23.7% compared to isolated file scraping. After extraction, we applied additional filtering to remove overly long files ( >7k lines), trivially short files ( <10 lines), naive boilerplate samples (e.g., “Hello World”), and files lacking CUDA kernel definitions. This process resulted in a final set of 24k usable samples. 3.2 Synthetic Data Generation We employed a coding-oriented large language model ( Qwen2.5-Coder32B ) to synthesize CUDA kernel code using our variable-augmented persona strategy. The process begins by defining a set of natural language prompt templates with variable placeholders. For example, a template might read: Generate a CUDA kernel for cloth simulation
https://arxiv.org/abs/2505.16968v2
with a {size}X{size} grid. Optimize for {optimization}. To fill these templates, we prepared predefined lists of variable values. For instance, {size} was instantiated with values such as 32, 64, and 128, while {optimization} was sampled from options like "memory bandwidth", "register usage", and "multi-GPU scaling". This allowed us to systemati- cally generate a broad range of prompts, each specifying different values for the placeholders in the templates. Refer to the Appendix A.5 for full details. These prompts were then passed to the LLM, which generated CUDA source files accordingly. While this method introduced some functional inconsistencies that required significant post-generation 4 filtering (syntactic errors, missing definitions, or invalid memory operations), it enabled the creation of rich and diverse CUDA samples. In total, we generated 85k CUDA samples, of which 49.1% compiled successfully, yielding a final set of 46.3k valid files. 3.3 Transpilation and Compilation After collecting CUDA files from the previous stages, we performed deduplication to ensure all samples are unique in our dataset. We then used AMD’s Hipify tool [ 9] to convert CUDA source files by replacing CUDA-specific API calls with HIP equivalents. Files that failed conversion (approx. 43.9%) were discarded. Once CUDA–HIP pairs were available, we compiled them to host and device assemblies using -Oscompilation flag to reduce code size, achieving a 9.3% average token reduction compared to O3. Given the architectural divergence of the two stacks (see figure 2), their compilation pipelines differed substantially, requiring significant effort to engineer and standardize our described workflow. In figure 2, a key distinction between the CUDA and HIP compilation pipelines lies in how they manage host and device assembly separation. In ROCm, the device binary is typically embedded into the host binary during the BitCode-to-assembly transition. 
We modified this behavior by deferring insertion until after host assembly was converted to object code, enabling: (1)independent extraction of pure host and device assemblies, and (2)selective recombination for controlled translation and evaluation. Conversely, Nvidia provides no access to its binary injection process, device and host assemblies remain intertwined, with no official method for extraction or reintegration [ 28]. Since our goal was to support host-to-host and device-to-device transpilation, recombination on the CUDA side was unnecessary. Instead, we developed a regex-based filtering pipeline to disentangle host and device assembly sections during CUDA compilation. After compiling both stacks to SASS and RDNA3, we retained only samples that compiled suc- cessfully on both Nvidia and AMD pipelines, accounting for asymmetric failures. The final dataset includes matched CUDA–HIP source pairs, SASS–RDNA3 device assemblies, and host assemblies. We got 64k samples from these steps. 3.4 OpenCL Pipeline OpenCL stands as an independent pipeline in generating Nvidia to AMD mapping datasets outside of the CUDA/HIP framework. In other words, it alsos compiling down to the assembly level without going through the aforementioned stacks, operating as a single “source" for GPU code deveolpment [ 29]. Approximately 6k OpenCL code snippets were collected from the Stack dataset and compiled down to the device assemblies. On the Nvidia stack, a wrapper C++ function was used to encapsulate the clBuildProgram library provided by OpenCL
https://arxiv.org/abs/2505.16968v2
[ 30] and convert them into PTX, after which the CUDA stack was used to compile them down to assemblies. On the AMD stack, clang was used to directly transpile the OpenCL files into device assemblies whilst forcing it to emit intermediate LLVM during this process [17]. In total, these pipelines produced 70k aligned assembly samples, with the final distribution detailed in Table 2. All compilations were performed on an Nvidia A100 PCIe machine for the CUDA stack (SASS sm85 ISA) and on AMD Radeon RX 7900 XT GPUs (RDNA3 ISA) for the AMD stack. 4 CASS-Instruct and CASS-Bench Datasets The final instruction training dataset ( CASS-Instruct ) comprises 70,694 samples spanning a broad range of domains as seen in Figure 3, with a primary focus on GPU compute and GPU-related data structures. The dataset also includes corresponding CUDA and HIP source code alongside their compiled assembly representations. All samples have been verified to compile successfully and have pairwise source/assembly alignments. 5 nvccnvcc nvccSource (CUDA) BinaryAssembly (SASS) gcc ObjectSource (HIP) cuobjdumpHIPI Assembly (RDNA3)HIPI BitCode Assembly Objecthipcc ExecutableBitCode Object Shared BinaryInjection .incbin Fat Binary ExecutableIR (PTX)IR Assembly Device operations Host operations Text Tool nameLabelsCUDA Compilation Pipeline HIP Compilation Pipeline Figure 2: The Nvidia (left) and AMD (right) stacks illustrate the compilation process for CUDA and HIP. Blue denotes device-side steps; green denotes host-side steps. Nvidia’s stack is opaque; accessing device assembly (SASS) requires first compiling to binary, then using cuobjdump . In contrast, AMD’s process is transparent, allowing direct inspection and modification of device assembly (RDNA3) before host integration. 
ML 6.4%GRAHPICS 5.7%CR YPT O5.1%SIMULA TION4.1% CASS-Instruct COMPUTE 35.2%DA TA STRUCTURE 42.5% Parallel Algorithms Image ProcessingScientific ComputingLinear Algebra Deep LearningMath Physics SimulationData StructuresGraph Memory OperationsBasic Operations Vector OperationsHistogramming Signal ProcessingDynamicsSearch012345Count5 4 4 4 4 4 3 2 2 2 1 1 1 1 1 1 Figure 3: CASS coverage across dataset and benchmark (left) domain distribution of training samples (right) category distribution in CASS-Bench . 4.1 Dataset Analysis CASS reveals pronounced structural divergence between CUDA and HIP at both source and assembly levels, underscoring the inherent complexity of cross-architecture GPU transpilation. We analyze this by looking at the length of the assembly files, their syntactic similarity, and opcode diversity. Length of Assembly Files. Figure 4 (left) shows that AMD device assembly is, on average, twice as long as Nvidia’s in both synthetic and Stack subsets, while Nvidia’s device assembly exceeds HIP device assembly by 50% in the OpenCL set. We found an exponential relationship between source complexity and assembly size, with CUDA producing more verbose outputs than HIP for equivalent code. This highlights the growing difficulty of assembly-level translation as code complexity scales. See appendix A.4.1 for full details. 6 Synthetic Stack OpenCL05001.0k1.5k2.0k2.5k3.0kLines of Code 8641.2k1.5k1.4k905 2.3k 1.2k 1.0k887 734Total: 2.2kTotal: 3.1k Total: 2.1k Total: 1.9k Total: 1.5k Total: 1.0kCUDA Device CUDA Host HIP Device HIP Host(a) OpenCL-DeviceStack-Device Stack-Host Stack-SourceSynthetic-Device Synthetic-Host Synthetic-Source020406080100CHRF Score (b) Figure 4: Comparison of structural and syntactic patterns in CASS: (a) verbosity across subsets and backends; (b) syntactic similarity of translated code. Code Efficiency and Analysis. Assembly accuracy varies across
https://arxiv.org/abs/2505.16968v2
domains: 0% in math, data structures, and graph tasks; 25–50% in linear algebra and memory operations; and up to 100% in physics simulations, highlighting the challenge of preserving low-level semantics. Despite this, the translated code closely matches the original in execution: memory usage deviates by less than ±0.3%, and execution time stays within ±11.8%, with over 85% of samples falling within ±5.6% for both metrics.

Syntax Similarity. As illustrated in Figure 4 (right), the CHRF [31] score indicates that HIP and CUDA assembly exhibit low syntactic similarity for device code and medium similarity for host code, particularly in the OpenCL and Stackv2 subsets. In contrast, the source code translations, especially in the synthetic subset, show high overlap, highlighting that surface-level syntax is better preserved in the source code than in the compiled assembly representations.

Table 2: Dataset composition by source and size

Dataset   | Collected | Final
Synthetic | 85k       | 40k
Stack     | 124k      | 24k
OpenCL    | 6k        | 6k
Total     | –         | 70k

Opcode Diversity. We noticed that tensor operations dominate both CUDA and HIP assembly, especially in device code, with memory-related instructions such as mov and call appearing most frequently (refer to Appendix A.4). Additionally, HIP opcodes like s_mov_b32 and v_add_co_u32 are used extensively, reflecting low-level vector and memory operations unique to AMD's ISA, while Nvidia is dominated by its own variants of common instructions such as movq, call, and jmp, with greater host-side integration (refer to Appendix A.4). Both stacks share common control and memory ops (e.g., mov, test), but HIP provides finer-grained access to GPU internals, revealing deeper visibility into parallelism. The synthetic subset emphasizes memory-oriented instructions, aligning with LLM-driven template optimizations.
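The syntax-similarity comparison above can be approximated with a small character n-gram F-score in the spirit of chrF [31]. The simplified implementation below is our own illustration (fixed beta, uniform averaging over orders), not the exact metric configuration used for Figure 4:

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Count character n-grams after collapsing whitespace runs."""
    s = " ".join(text.split())
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hyp: str, ref: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: average F-beta over character n-gram orders 1..max_n."""
    scores = []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hyp, n), char_ngrams(ref, n)
        if not h or not r:
            continue  # strings shorter than n contribute nothing
        overlap = sum((h & r).values())
        prec, rec = overlap / sum(h.values()), overlap / sum(r.values())
        if prec + rec == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return 100 * sum(scores) / len(scores) if scores else 0.0

# Identical assembly lines score 100; unrelated ISAs score far lower.
print(round(chrf("s_mov_b32 s0, 0", "s_mov_b32 s0, 0"), 1))  # 100.0
```

Applied to aligned CUDA/HIP assembly pairs, a score like this yields the kind of device-versus-host similarity gap reported above.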
Figure 6 further shows t-SNE clusters of opcode embeddings (via BERTCoder), suggesting that despite backend differences, Nvidia and AMD share semantically aligned opcode distributions across device and host levels.

4.2 CASS-Bench

CASS-Bench is a 40-sample evaluation suite spanning 16 GPU-centric domains, each represented by 1–5 curated prompts. For each prompt, we (1) used Claude-3.7 to generate a CUDA implementation; (2) compiled and executed it on Nvidia hardware to obtain reference outputs; and then (3) prompted Claude-3.7 to generate the corresponding AMD code. If outputs mismatched due to compilation errors, formatting differences, or random-generator variance, the AMD code was regenerated. Only samples with manually verified output equivalence were included. All final Nvidia–AMD pairs were processed using our pipeline (Section 3) to extract aligned host and device assembly. Figure 3 (right) shows the category distribution.

Table 3: Performance of different models on our CASS-Bench. Bold cells refer to the best results.

Group | Model                  | Assembly Accuracy (%) | Source-to-Source Accuracy (%)
Tools | ZLUDA [6]              | 2.5                   | 27.5
Tools | Hipify [9]             | –                     | 87.5
LLMs  | GPT-4o [12]            | 0                     | 90.0
LLMs  | Gemini-2.0-Flash [38]  | 0                     | 80.0
LLMs  | Claude-3.7 [13]        | 0                     | 90.0
LLMs  | Qwen2.5-Coder-32B [32] | 25.0                  | 85.0
Ours  | CASS-1.5B              | 12.5                  | 90.0
Ours  | CASS-3B                | 20.0                  | 92.5
Ours  | CASS-7B                | 37.5                  | 95.0

5 Experiments

We evaluate the CASS dataset by instruction-supervised fine-tuning of the Qwen2.5-Coder [32] models at various parameter scales. Two variants are developed: one for assembly translation and another for source translation. We benchmark these models against both proprietary and open-source baselines, including larger-scale systems.

Instruction Supervised Finetuning. To ensure that input samples fit within
the 16K-token context window of the LLM, we normalized CUDA assembly code by removing redundant whitespace and comments, which reduced token count by roughly 15%. No preprocessing was applied to HIP assembly code due to its sensitivity to whitespace changes. We fine-tuned the Qwen2.5-Coder [32] models at the 1.5B, 3B, and 7B parameter scales on 4xA100 GPUs, using a batch size of 4, gradient accumulation of 32 (effective batch size of 512), and a learning rate of 1×10−5. The relatively aggressive learning rate was selected due to the dataset's distributional divergence from the models' pretraining corpus. Training employed DeepSpeed [33] with optimizer state sharding to maximize hardware efficiency, achieving 98% GPU utilization. Additionally, we incorporated the Liger Kernel [34] and the Paged Adam optimizer [35] to accelerate training and manage memory more effectively. We utilized LLaMA-Factory [36] to implement all of these optimizations. All models were trained with a 16K-token context window. At inference time, we applied RoPE [37] extrapolation to support up to 32.7K tokens. Inference was efficient, requiring approximately 56 seconds per 16K-token sample.

Evaluation Protocol. For both source and assembly transpilation, the LLM-generated code (HIP source or host/device assembly) was compiled and executed. The resulting outputs were then compared against the ground truth from CASS-Bench to verify functional correctness.

6 Results

Assembly-to-Assembly Performance. Table 3 reports CASS-Bench results across LLMs and tools. All baselines, including proprietary and large open models, failed with 0% accuracy, except Qwen2.5-Coder-32B, which reached 25%. ZLUDA, a runtime-level system, achieved only 2.5% assembly accuracy despite operating directly on compiled binaries, which can be attributed to its compatibility being limited to RDNA1.
In contrast, our CASS models reached up to 37.5%, highlighting that our dataset imparts essential assembly-level knowledge absent from existing tools and models.

Source-to-Source Performance. To further validate the utility of the dataset, we also evaluated source transpilation performance, as shown in Table 3. This task aligns more closely with some of the pretraining objectives of many proprietary models, as reflected in their relatively strong performance (ranging from 80% to 90%). Nonetheless, even the smallest CASS model (1.5B) significantly outperformed all baselines, achieving 90% accuracy. The 7B variant showed an outstanding state-of-the-art performance of 95% accuracy. Although our CUDA dataset was entirely translated by Hipify and we retained only semantically aligned samples, our model surpassed Hipify by 7.5%.

Table 4: Ablation study on the impact of different data types.

Experiment                | Source Accuracy | Assembly Accuracy | ∆ Impact
Stack subset [27]         | 87.5%           | 17.5%             | –
+ Synthetic               | 95.0%           | 30.0%             | +12.5%
+ OpenCL [29]             | 95.0%           | 32.5%             | +2.5%
+ RoPE Extrapolation [37] | 95.0%           | 37.5%             | +5.0%

[Figure 5 chart: source- and assembly-level accuracy across categories]
Figure 5: Source and assembly-level accuracy across categories.

[Figure 6 chart: t-SNE scatter of opcode embeddings, colored by category (Memory Ops, Tensor Ops, Control Flow, Thread Sync, Kernel Ops, Math Functions, Data Structures, Error Handling, IO Operations, Other), dataset (Synthetic, Stack, OpenCL), and code type (CUDA/HIP device and host)]
Figure 6: t-SNE projection of CUDA and HIP assembly embeddings.

Ablation Study. Table 4 shows that using only Stack data yields 17.5% assembly accuracy. Adding synthetic data improves it by +12.5%, highlighting its
role in learning low-level patterns. OpenCL adds +2.5%, providing complementary coverage, while RoPE extrapolation pushes accuracy to 37.5% by extending context capacity.

7 Limitations and Future Work

Despite achieving state-of-the-art assembly transpilation, current performance is inadequate for production use due to limited accuracy in complex or underrepresented domains. Expanding category diversity is essential to address this. The dataset currently covers only one host/device pair per vendor (RTX 4090 and RX 7900), limiting generalizability across GPU architectures with varying ISAs. Broader architectural representation is needed to support real-world deployment. Finally, dataset size was minimized to fit within 16K-token context windows, excluding many vendor-specific low-level optimizations. Incorporating these will require future models with larger context capacity or more advanced chunking and attention strategies.

8 Conclusion

We present CASS, the first large-scale dataset and model suite for cross-architecture GPU code transpilation, encompassing 70k aligned pairs of source and assembly code for both Nvidia and AMD platforms. Our dataset uniquely bridges both source-to-source (CUDA to HIP) and assembly-to-assembly (SASS to RDNA3) mappings, addressing a critical gap in low-level code portability. To validate its effectiveness, we train the CASS model family, which achieves 95% accuracy in source translation and 37.5% in assembly translation, substantially outperforming both proprietary and open-source baselines. Furthermore, our transpiled code preserves functional behavior: over 85% of samples match native execution in both memory usage and runtime. We also introduce CASS-Bench, a purpose-built evaluation suite spanning 16 GPU-centric domains. All models, data, and benchmarks are released as open-source resources, establishing a foundation for future research in compiler tooling, hardware interoperability, and performance-aware code generation.
References

[1] Mark Harris. An even easier introduction to CUDA. NVIDIA Developer Blog, 2024.
[2] NVIDIA Corporation. Turing Compatibility Guide for CUDA Applications, 2021. Version 11.4.2.
[3] AMD. Gaming GPU benchmarks. https://www.amd.com/en/products/graphics/gaming/gaming-benchmarks.html, 2024. Accessed: 2025-05-15.
[4] The Verge. AMD Radeon RX 9070 XT review: performance that beats the price. https://www.theverge.com/gpu-reviews/624423/amd-radeon-rx-9070-xt-review-benchmarks-price, 2024. Accessed: 2025-05-15.
[5] Financial Times. Nvidia's rivals take aim at its software dominance. 2024. Accessed: 2025-05-14.
[6] Andrzej Janik. ZLUDA: CUDA on non-Nvidia GPUs. https://github.com/vosen/ZLUDA, 2024. Accessed: 2025-04-28.
[7] AMD. HIP: Heterogeneous-computing Interface for Portability. https://github.com/ROCm-Developer-Tools/HIP, 2024. Accessed: 2025-04-30.
[8] Advanced Micro Devices (AMD). AMD ROCm 6: Open software platform for GPU computing. Technical report, Advanced Micro Devices, Inc., 2024.
[9] Advanced Micro Devices, Inc. HIPIFY Documentation, 2025. Accessed: 2025-04-28.
[10] Anwar Hossain Zahid, Ignacio Laguna, and Wei Le. Testing GPU numerics: Finding numerical differences between Nvidia and AMD GPUs. In SC24-W: Workshops of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 547–557. IEEE, 2024.
[11] Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, et al. StarCoder 2 and The Stack v2: The next generation. arXiv preprint arXiv:2402.19173, 2024.
[12] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024.
[13] Anthropic. Claude 3.7 Sonnet and Claude Code, February 2025. Accessed:
2025-05-14.
[14] AMD ROCm Documentation. HIP Porting Guide, 2024. Accessed: 2025-01-29.
[15] Jun Chen, Xule Zhou, and Hyesoon Kim. CuPBoP-AMD: Extending CUDA to AMD platforms. In Proceedings of the SC'23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis, pages 1093–1104, 2023.
[16] Chris Lattner and Vikram Adve. LLVM: A compilation framework for lifelong program analysis and transformation. In Proceedings of the International Symposium on Code Generation and Optimization, pages 75–86, San Jose, CA, USA, 2004. IEEE Computer Society.
[17] The Clang Team. Clang: a C language family frontend for LLVM, 2025. Accessed: 2025-01-29.
[18] Gregory Diamos, Andrew Kerr, and Sudhakar Yalamanchili. GPUOcelot: A dynamic compilation framework for PTX. https://github.com/gtcasl/gpuocelot, 2009. Accessed: 2025-04-28.
[19] NVIDIA. ComputeEval: Evaluating large language models for CUDA code generation. https://github.com/NVIDIA/compute-eval, 2024. Accessed: May 2025.
[20] Shuai Che, Michael Boyer, Jiayuan Meng, David Tarjan, Jeremy W Sheaffer, Sang-Ha Lee, and Kevin Skadron. Rodinia: A benchmark suite for heterogeneous computing. In 2009 IEEE International Symposium on Workload Characterization (IISWC), pages 44–54. IEEE, 2009.
[21] Anthony Danalis, Gabriel Marin, Collin McCurdy, Jeremy S Meredith, Philip C Roth, Kyle Spafford, Vinod Tipparaju, and Jeffrey S Vetter. The Scalable Heterogeneous Computing (SHOC) benchmark suite. In Proceedings of the 3rd Workshop on General-Purpose Computation on Graphics Processing Units, pages 63–74, 2010.
[22] Scott Grauer-Gray, Lifan Xu, Robert Searles, Sudhee Ayalasomayajula, and John Cavazos. Auto-tuning a high-level language targeted to GPU codes. In 2012 Innovative Parallel Computing (InPar), pages 1–10. IEEE, Piscataway, NJ, USA, 2012.
[23] Tom Deakin, James Price, Matt Martineau, and Simon McIntosh-Smith. GPU-STREAM v2.
0: Benchmarking the achievable memory bandwidth of many-core processors across diverse parallel programming models. In High Performance Computing: ISC High Performance 2016 International Workshops, ExaComm, E-MuCoCoS, HPC-IODC, IXPUG, IWOPH, P^3MA, VHPC, WOPSSS, Frankfurt, Germany, June 19–23, 2016, Revised Selected Papers 31, pages 489–507. Springer, 2016.
[24] Ahmed Heakl, Chaimaa Abi, Rania Hossam, and Abdulrahman Mahmoud. From CISC to RISC: language-model guided assembly transpilation. arXiv preprint arXiv:2411.16341, 2024.
[25] Celine Lee, Abdulrahman Mahmoud, Michal Kurek, Simone Campanoni, David Brooks, Stephen Chong, Gu-Yeon Wei, and Alexander M Rush. Guess & Sketch: Language model guided transpilation. arXiv preprint arXiv:2309.14396, 2023.
[26] Yifan Sun, Xiang Gong, Amir Kavyan Ziabari, Leiming Yu, Xiangyu Li, Saoni Mukherjee, Carter McCardwell, Alejandro Villegas, and David Kaeli. Hetero-Mark, a benchmark suite for CPU-GPU collaborative computing. In 2016 IEEE International Symposium on Workload Characterization (IISWC), pages 1–10. IEEE, 2016.
[27] Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, et al. StarCoder 2 and The Stack v2: The next generation. arXiv preprint arXiv:2402.19173, 2024.
[28] NVIDIA Corporation. CUDA Binary Utilities, 2025. https://docs.nvidia.com/cuda/cuda-binary-utilities/index.html.
[29] The Khronos Group. OpenCL guide. https://github.com/KhronosGroup/OpenCL-Guide, 2025. Accessed: 2025-05-14.
[30] Khronos Group. clBuildProgram - OpenCL 3.0 reference pages. https://registry.khronos.org/OpenCL/sdk/3.0/docs/man/html/clBuildProgram.html, 2020. Accessed: 2025-05-14.
[31] Maja Popović. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine
Translation, pages 392–395, 2015.
[32] Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, et al. Qwen2.5-Coder technical report. arXiv preprint arXiv:2409.12186, 2024.
[33] Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3505–3506, 2020.
[34] Pin-Lun Hsu, Yun Dai, Vignesh Kothapalli, Qingquan Song, Shao Tang, Siyu Zhu, Steven Shimizu, Shivam Sahni, Haowen Ning, and Yanning Chen. Liger Kernel: Efficient Triton kernels for LLM training. arXiv preprint arXiv:2410.10989, 2024.
[35] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[36] Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. LlamaFactory: Unified efficient fine-tuning of 100+ language models. arXiv preprint arXiv:2403.13372, 2024.
[37] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. RoFormer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063, 2024.
[38] Demis Hassabis and Koray Kavukcuoglu. Introducing Gemini 2.0: our new AI model for the agentic era, December 2024. Accessed: 2025-05-14.

A Appendix

A.1 Evaluation on ZLUDA

To assess ZLUDA's ability to execute CUDA code on AMD GPUs, we designed a two-track evaluation strategy targeting both source-level and binary-level workflows (the latter being akin to assembly-level translation). In the source-to-source setting, we leveraged access to the original CUDA source files to manually compile them into PTX using nvcc. These PTX files were then ingested by ZLUDA, which translated them into AMD-compatible LLVM IR before lowering them into native executables targeting RDNA3 hardware.
In the assembly-to-assembly setting, we instead compiled the CUDA source into a complete executable and invoked it directly. ZLUDA intercepted the CUDA runtime calls, dynamically translated the embedded PTX or SASS, and executed the resulting code on the AMD backend. This dual strategy allowed us to assess both ZLUDA's static translation capabilities and its runtime interoperability under realistic execution conditions.

A.2 Hardware Used

All experiments were conducted on two distinct machines to generate architecture-specific outputs. For AMD-related compilation and execution, we used a workstation equipped with an Intel i7-14700KF CPU and an AMD Radeon RX 7900 XT GPU. For Nvidia-related outputs, we used a server with an AMD EPYC 9654 CPU and an Nvidia A100 (80GB) GPU. Although the Nvidia system features a high-end data center GPU, all CUDA code was compiled targeting the compute capabilities of a standard consumer-grade GPU (e.g., RTX 4090) to maintain parity with the AMD hardware. Furthermore, to ensure consistency and reproducibility across platforms, all file generation was performed within Docker containers tailored to each architecture.

[Figure 7 chart: scatter plot of source LoC vs. assembly LoC]
Figure 7: Relationship between source and assembly-level LoC in the CASS dataset. Scatter plot comparing source code lines of code (LoC) to the corresponding assembly LoC for both CUDA and HIP backends across the Stackv2 and Synthetic subsets. Trend lines and density contours illustrate that CUDA typically produces more verbose assembly
output than HIP for equivalent source sizes.

[Figure 8 chart: stacked bars of opcode categories per (dataset, code type) pair]
Figure 8: Opcode Category Distribution by Dataset and Code Type. Stacked bar chart showing the distribution of assembly instructions across 10 opcode categories for device and host code in the Synthetic, Stackv2, and OpenCL subsets. Each bar represents a (dataset, code type) pair, illustrating the functional composition of the code across memory, tensor, control flow, synchronization, and other operations.

[Figure 9 word clouds: (a) HIP assembly opcodes; (b) CUDA assembly opcodes]
Figure 9: Most frequent opcodes in HIP and CUDA assembly. Word clouds depicting the most common opcodes in HIP and CUDA assembly files. The size of each opcode reflects its relative frequency in the compiled dataset, highlighting structural and architectural differences between the two backends.

[Figure 10 chart: source and assembly accuracy vs. training steps for the 1.5B, 3B, and 7B models]
Figure 10: Accuracy vs. training steps for source/assembly across CASS model scales (1.5B, 3B, 7B).
[Figure 11 charts: per-file memory usage differences (within roughly ±0.08%, with std dev) and execution time differences (roughly −3.1% to +5.0%, with std dev) between predicted and ground-truth HIP programs]
Figure 11: Comparison of memory usage (left) and execution time (right) between predicted and ground truth HIP programs, measured via compilation and runtime profiling.

A.3 CASS Domain Coverage

To obtain the domain-level breakdown shown in Figure 3, we developed a static analysis pipeline that categorizes each source file based on its content.
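A minimal sketch of such a keyword-matching classifier follows; the keyword sets shown are abbreviated illustrations, not our full curated lists:

```python
# Illustrative keyword sets; the actual curated lists are larger and cover
# seven categories in total.
DOMAIN_KEYWORDS = {
    "machine learning": {"neural", "gradient", "activation"},
    "cryptography": {"hash", "encrypt", "signature"},
    "simulation": {"particle", "timestep", "nbody"},
    "data structure": {"btree", "hash_table", "queue"},
}
DEFAULT = "general compute"

def classify(source: str) -> str:
    """Assign the domain with the highest keyword match count,
    falling back to a default label when nothing matches."""
    text = source.lower()
    counts = {
        domain: sum(text.count(kw) for kw in kws)
        for domain, kws in DOMAIN_KEYWORDS.items()
    }
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else DEFAULT

print(classify("__global__ void relu(float* activation, float* gradient) {}"))
# "machine learning"
```

Per-file labels produced this way are then aggregated over the corpus to yield the domain distribution in Figure 3.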
The classification is performed by matching the file's text against curated sets of domain-specific keywords corresponding to seven high-level categories: general compute, simulation, data structure, machine learning, graphics, cryptography, and scientific computing. Each keyword set includes terms commonly associated with the respective domain; for example, the machine learning category includes terms such as neural, gradient, and activation, while cryptography includes hash, encrypt, and signature. For a given file, the domain with the highest keyword match count is assigned. If no keywords are matched, a default label (e.g., general compute) is applied. After all files are processed, their assignments are aggregated to produce the final domain distribution. This process provides a simple and interpretable way of grouping source files by their functional domain.

A.4 Extra Data Analysis

A.4.1 Length of Assembly Files

As shown in Figure 7, we found an exponential relationship between source complexity and assembly size, with CUDA producing more verbose outputs than HIP for equivalent code. This highlights the growing difficulty of assembly-level translation as code complexity
scales.

A.4.2 Code Efficiency

As shown in Figure 5, assembly accuracy is inconsistent: 0% in math, data structures, and graph tasks, and 25–50% in linear algebra and memory operations. This reflects the challenge of low-level semantic preservation. Only physics simulation achieves 100%, likely due to simpler or repetitive control flows. As shown in Figure 11, the translated code exhibits tight fidelity to the ground truth. Memory usage deviates by less than ±0.3% for all files, with 18 files using more memory (max +0.3%) and 22 using less (min −0.3%). Execution time differences are similarly small: 11 files are slower (max +11.8%), 8 are faster (min −10.0%), and the rest are unchanged. Over 85% of samples fall within ±5.6% across both metrics, confirming that our model preserves both memory and runtime efficiency during assembly translation. Each test was executed 20 times, and the reported values reflect the average across runs to mitigate noise and ensure statistical reliability.

A.4.3 Opcode Diversity

Taking a deeper dive into the low-level instruction representation shown in Figure 9, a few extra insights can be drawn. In the HIP case, many opcodes, such as s_mov_b32, v_add_co_u32, and s_waitcnt, come directly from AMD's GPU instruction set. These reflect fine-grained control over the hardware, including scalar and vector operations and synchronization. On the other hand, the CUDA assembly is mostly made up of x86-64 instructions like movq, call, jmp, and pushq, which are typically used on the CPU. This suggests that the CUDA output includes more host-side code or that GPU instructions are hidden behind a higher level of abstraction. Still, both stacks share common instructions like mov and test, showing that some basic control and memory operations are similar. In general, HIP provides more visibility into what the GPU is doing, while CUDA hides many of those low-level details behind a more unified host-device model.
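The opcode frequencies behind Figure 9 can be approximated by counting the leading mnemonic on each instruction line. The sketch below is simplified (real SASS/RDNA3 parsing needs more care with directives, labels, and operand syntax):

```python
from collections import Counter
import re

def opcode_histogram(asm: str) -> Counter:
    """Count the leading mnemonic of each non-directive, non-label line."""
    counts = Counter()
    for line in asm.splitlines():
        line = line.strip()
        if not line or line.startswith((".", "//", "#")) or line.endswith(":"):
            continue  # skip directives, comments, and labels
        m = re.match(r"[A-Za-z_][\w.]*", line)
        if m:
            counts[m.group(0).lower()] += 1
    return counts

hip_snippet = """
my_kernel:
    s_mov_b32 s0, 0
    v_add_co_u32 v1, vcc, v1, v2
    s_mov_b32 s1, 16
    s_waitcnt vmcnt(0)
"""
print(opcode_histogram(hip_snippet).most_common(2))
# [('s_mov_b32', 2), ('v_add_co_u32', 1)]
```

Aggregating these per-file counters over the corpus yields the relative frequencies visualized as word clouds in Figure 9.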
A.5 Synthetic Generation

To generate large-scale, diverse CUDA programs, we design a multiprocessing Python pipeline that interacts with a locally hosted large language model via a chat-based API. The pipeline leverages a wide array of handcrafted prompt templates, each parameterized with variables such as problem size, optimization target, algorithm type, and architectural features (see Appendix A.5.1). At runtime, these templates are instantiated with randomly sampled values from curated sets covering domains like matrix operations, graph algorithms, scientific computing, machine learning, and sparse computation (see Table 5). Each worker process independently generates prompts, sends them to the model, extracts valid CUDA code from the response, and saves the output in a structured format. Robust fault-tolerance mechanisms, including retry logic, output validation, and file existence checks, ensure resilience to model failures and concurrent access. The system supports parallel generation with controlled API concurrency and automatic resumption from previous checkpoints, enabling scalable and efficient generation of compilable CUDA code samples suitable for downstream benchmarking or training.

Table 5: Representative values for prompt placeholders used in the synthetic code generation.

Placeholder | Example Values
{size} | 64, 1024, 16384
{dimension} | 1, 3, 6
{optimization} | memory coalescing, shared memory usage, warp-level programming
{operation} | sum, histogram, L2 norm
{algorithm} | matrix multiplication, radix sort, BFS
{radius} | 1, 5, 13
{graph_format} | adjacency matrix, CSR, edge list
{md_algorithm} | Verlet integration, leapfrog, Runge-Kutta
{linear_solver} | conjugate gradient, Jacobi, multigrid
{numerical_method} | finite difference, spectral, Crank-Nicolson
{factorization_method} | SVD, LU, eigenvalue decomposition
{conv_layer_count} | 2, 6, 12
{neuron_count} | 64, 512, 2048
{sparse_format} | CSR, ELL, HYB
{nbody_algorithm} | Barnes-Hut, brute force, particle mesh
{filter_type} | Gaussian, Sobel, Gabor
{filter_size} | 3, 7, 15
{resolution} | 720p, 1080p, 4K
{segmentation_algorithm} | watershed, region growing, U-Net
{signal_transform} | FFT, wavelet, Hilbert
{optimization_algorithm} | Adam, simulated annealing, particle swarm
{crypto_algorithm} | AES, RSA, Argon2
{cracking_method} | brute force, dictionary attack, rainbow table
{hash_algorithm} | SHA-256, BLAKE3, Bcrypt
{data_structure} | binary tree, hash table, bloom filter
{collision_strategy} | linear probing, cuckoo hashing, separate chaining

A.5.1 Prompt Templates for Synthetic CUDA Code Generation

Basic Operations
1. Implement a CUDA kernel for {size}D FFT (Fast Fourier Transform). Optimize for {optimization}.
2. Generate a CUDA implementation for {size}D stencil computation with radius {radius}. Optimize for {optimization}.
3. Write a CUDA kernel for parallel reduction to compute the {operation} of an array of size {size}. Focus on {optimization}.
4. Create a CUDA implementation for convolution operation with a {size}x{size} filter. Focus on {optimization} optimization.
5. Generate a CUDA kernel for matrix multiplication of two matrices A and B of size {size}x{size}. Include error handling and optimize for {optimization}.

Graph Algorithms
1. Write a CUDA implementation for graph coloring of a graph with {size} nodes. Focus on {optimization}.
2. Implement a CUDA kernel for community detection in a graph with {size} nodes using the {community_algorithm} algorithm.
3.
Implement a CUDA kernel for graph processing that computes {algorithm} on a graph with {size} nodes. Optimize for {optimization}.
4. Generate a CUDA kernel for finding strongly connected components in a directed graph with {size} nodes. Optimize for {optimization}.
5. Create a CUDA implementation for breadth-first traversal on a graph with {size} nodes stored in {graph_format}. Optimize for {optimization}.

Scientific Computing
1. Write a CUDA implementation for {size}D fluid simulation using {method}. Focus on {optimization}.
2. Create a CUDA kernel for Monte Carlo simulation of {size} paths for option pricing. Focus on {optimization}.
3. Implement a CUDA solver for {size}x{size} sparse linear system using {linear_solver}. Focus on {optimization}.
4. Generate a CUDA implementation for {size}D heat equation solver using {numerical_method}. Optimize for {optimization}.
5. Create a CUDA kernel for molecular dynamics simulation of {size} particles using {md_algorithm}. Optimize for {optimization}.

Machine Learning
1. Generate a CUDA kernel for k-means clustering of {size} data points in {dimension}D space. Optimize for {optimization}.
2. Implement a CUDA kernel for {size}x{size} matrix factorization using {factorization_method}. Optimize for {optimization}.
3. Create a CUDA implementation for computing attention mechanism in a transformer with {size} tokens. Focus on {optimization}.
4. Implement a CUDA kernel for backpropagation in a convolutional neural network with {conv_layer_count} conv layers. Optimize for {optimization}.
5. Write a CUDA implementation for training a neural network with {layer_count} layers and {neuron_count} neurons per layer. Focus on {optimization}.

Sparse Operations
1. Generate a CUDA kernel for sparse FFT computation. Optimize for {optimization}.
2. Implement a CUDA kernel for sparse tensor operations with {size} non-zero elements. Optimize for {optimization}.
3. Write a CUDA implementation for sparse convolution with {size}x{size} filter on sparse input. Focus on {optimization}.
4. Create a CUDA implementation for sparse matrix-matrix multiplication in {sparse_format} format. Focus on {optimization}.
5. Generate a CUDA kernel for sparse matrix-vector multiplication where the matrix has approximately {size} non-zero elements. Optimize for {optimization}.

Simulation
1. Generate a CUDA kernel for cloth simulation with {size}x{size} grid. Optimize for {optimization}.
2. Write a CUDA implementation for raytracing of a scene with {size} objects. Focus on {optimization}.
3. Create a CUDA implementation for {algorithm} of {size} particles in a {dimension}D space. Focus on {optimization}.
4. Create a CUDA implementation for fluid-structure interaction with {size} boundary elements. Focus on {optimization}.
5. Implement a CUDA kernel for N-body simulation of {size} particles using {nbody_algorithm}. Optimize for {optimization}.

Image and Signal Processing
1. Create a CUDA implementation for feature extraction from {size}x{size} images. Focus on {optimization}.
2. Generate a CUDA kernel for image segmentation using {segmentation_algorithm}. Optimize for {optimization}.
3. Write a CUDA implementation for real-time video processing of {resolution} frames. Focus on {optimization}.
4. Implement a CUDA kernel for signal processing with {size}-point {signal_transform}. Optimize for {optimization}.
5. Implement a CUDA kernel for image filtering using {filter_type} filter of size {filter_size}x{filter_size}. Optimize for {optimization}.

Optimization Algorithms
1. Implement a CUDA kernel for simulated annealing with {size} states. Optimize for {optimization}.
2. Generate a CUDA kernel for genetic algorithm with population size {size}. Optimize for {optimization}.
3. Write a CUDA implementation for {optimization_algorithm} with {size} variables. Focus on {optimization}.
4.
Write a CUDA implementation for gradient descent optimization with {size} parameters. Focus on {optimization}.
5. Create a CUDA implementation for particle swarm optimization with {size} particles in {dimension}D space. Focus on {optimization}.

Cryptography and Security
1. Generate a CUDA kernel for homomorphic encryption operations. Optimize for {optimization}.
2. Write a CUDA implementation for secure hashing using {hash_algorithm}. Focus on {optimization}.
3. Generate a CUDA kernel for {crypto_algorithm} encryption/decryption. Optimize for {optimization}.
4. Create a CUDA implementation for blockchain mining with difficulty {size}. Focus on {optimization}.
5. Implement a CUDA kernel for password cracking using {cracking_method}. Optimize for {optimization}.

Data Structures
1. Create a CUDA implementation for priority queue with {size} elements. Focus on {optimization}.
2. Create a CUDA implementation for {data_structure} with {size} elements. Focus on {optimization}.
3. Implement a CUDA kernel for operations on a B-tree with {size} nodes. Optimize for {optimization}.
4. Generate a CUDA kernel for skip list operations with {size} elements. Optimize for {optimization}.
5. Write a CUDA implementation for hash table with {size} buckets using {collision_strategy}. Focus on {optimization}.

A.5.2 Qualitative Comparison with Other LLMs

We highlight several cases where CASS-7B outperforms existing LLMs such as Claude, Qwen-Coder, and GPT-4o in faithfully transpiling CUDA to HIP. For example, in one instance, CASS-7B correctly transpiled the CUDA code while preserving the exact string constants from the original program, including the label CUDA in the output format string. Maintaining these strings is essential for preserving the intended user-facing behavior, particularly in logging or debugging
https://arxiv.org/abs/2505.16968v2
scenarios where clarity and consistency matter. In contrast, Claude, Qwen-Coder, and GPT-4o unnecessarily altered the string to say HIP, despite the output still originating from a CUDA kernel. This substitution introduces a semantic error, as the original string refers to CUDA, not HIP, and should remain unchanged.

CASS-7B:
    printf("tanh(%f) = %f CUDA vs %f (CPU)\n", h_input[idx], h_output[idx], tanh(h_input[idx]));

Claude, Qwen-Coder, GPT-4o:
    printf("tanh(%f) = %f (HIP) vs %f (CPU)\n", h_input[idx], h_output[idx], tanh(h_input[idx]));

In another example, CASS-7B retained the classical CUDA-style kernel launch syntax using triple angle brackets (<<<...>>>), while also ensuring that the generated code remained compilable by correctly including the required HIP header <hip/hip_runtime.h>. This demonstrates a high degree of structural fidelity to the source code, which is especially important for developers familiar with standard CUDA conventions. In contrast, other models such as Claude and Qwen-Coder replaced the launch expression with the HIP-specific macro hipLaunchKernelGGL, which, while functionally valid, deviates from the original representation. More critically, they failed to include the necessary HIP header, rendering the output uncompilable. This example highlights how CASS-7B goes beyond syntactic accuracy to produce code that is both faithful to the original structure and immediately usable in a real compilation setting.

CASS-7B:
    #include <hip/hip_runtime.h>
    #include <iostream>
    ...
    add<<<(N + 255) / 256, 256>>>(d_a, d_b, d_c, N);

Claude, Qwen-Coder:
    #include <iostream>
    ...
    hipLaunchKernelGGL(add, (N + 255) / 256, 256, 0, 0, d_a, d_b, d_c, N);

Lastly, when verifying numerical correctness, CASS-7B preserved the original logging behavior by correctly emitting output to std::cout, as in the source code.
This choice maintains consistency with the original program’s semantics, especially in distinguishing between standard output and error streams, which is important in contexts where output may be redirected or parsed. In contrast, GPT-4o unnecessarily altered the output stream to std::cerr, which, while syntactically valid, changes the runtime behavior of the program. Such a change could lead to unexpected side effects in downstream tools or logging pipelines. This example further demonstrates CASS-7B’s attention to both structural and behavioral fidelity in its translations.

CASS-7B:
    std::cout << "Error at element " << i << ": " << h_output[i] << " vs. expected " << h_reference[i] << std::endl;

GPT-4o:
    std::cerr << "Error at element " << i << ": " << h_output[i] << " vs expected " << h_reference[i] << std::endl;
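The failure modes above (rewritten string literals, a missing HIP header) are mechanical enough to check automatically. The following lint-style sketch is ours, not part of the CASS evaluation pipeline; the function name and heuristics are hypothetical:

```python
import re

def check_transpilation_fidelity(cuda_src: str, hip_src: str) -> list[str]:
    """Flag common fidelity issues when reviewing a CUDA->HIP transpilation."""
    issues = []
    # 1. String literals should survive verbatim (e.g. a log label "CUDA"
    #    must not be rewritten to "HIP" by the transpiler).
    cuda_strings = re.findall(r'"(?:[^"\\]|\\.)*"', cuda_src)
    hip_strings = set(re.findall(r'"(?:[^"\\]|\\.)*"', hip_src))
    for s in cuda_strings:
        if s not in hip_strings:
            issues.append(f"string literal changed or dropped: {s}")
    # 2. HIP sources need the runtime header to compile.
    if "<hip/hip_runtime.h>" not in hip_src:
        issues.append("missing #include <hip/hip_runtime.h>")
    return issues
```

Such checks catch exactly the two error classes discussed here: semantic drift in user-facing strings and outputs that fail to compile.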
arXiv:2505.16972v1 [cs.CL] 22 May 2025

From Tens of Hours to Tens of Thousands: Scaling Back-Translation for Speech Recognition
Tianduo Wang♡*, Lu Xu♢, Wei Lu♡, Shanbo Cheng♢†
♡StatNLP Research Group, Singapore University of Technology and Design
♢ByteDance Seed
{tianduo_wang,luwei}@sutd.edu.sg, {xu.lu1,chengshanbo}@bytedance.com
https://github.com/tianduowang/speech-bt

Abstract
Recent advances in Automatic Speech Recognition (ASR) have been largely fueled by massive speech corpora. However, extending coverage to diverse languages with limited resources remains a formidable challenge. This paper introduces Speech Back-Translation, a scalable pipeline that improves multilingual ASR models by converting large-scale text corpora into synthetic speech via off-the-shelf text-to-speech (TTS) models. We demonstrate that just tens of hours of real transcribed speech can effectively train TTS models to generate synthetic speech at hundreds of times the original volume while maintaining high quality. To evaluate synthetic speech quality, we develop an intelligibility-based assessment framework and establish clear thresholds for when synthetic data benefits ASR training. Using Speech Back-Translation, we generate more than 500,000 hours of synthetic speech in ten languages and continue pre-training Whisper-large-v3, achieving average transcription error reductions of over 30%. These results highlight the scalability and effectiveness of Speech Back-Translation for enhancing multilingual ASR systems.

1 Introduction
Automatic Speech Recognition (ASR) technology has become increasingly important in making digital services accessible across languages and modalities (Baevski et al., 2020; Zhang et al., 2021; Radford et al., 2022).
While recent transformer-based architectures have achieved impressive results for high-resource languages, e.g., English and Chinese, many of the world’s languages still lack sufficient transcribed speech for training robust ASR models (Pratap et al., 2020a; Babu et al., 2021; Chen et al., 2024). This data scarcity creates a significant barrier to developing effective multilingual speech technologies, particularly affecting communities where manual data collection is resource-intensive or logistically challenging (Costa-jussà et al., 2022; Communication et al., 2023; Pratap et al., 2023).

*Work done during internship at ByteDance. †Corresponding author.

A natural way to mitigate the data scarcity issue is to leverage high-quality generative models. Recent work has demonstrated successful applications of these models for data augmentation in computer vision (Fan et al., 2023; Azizi et al., 2023), natural language processing (Gunasekar et al., 2023; Li et al., 2024), and speech recognition (Yang et al., 2024). Despite their demonstrated potential, the role of generative models in overcoming data scarcity presents a paradox. These models themselves typically demand vast amounts of labeled data to attain their remarkable capabilities. For instance, Stable Diffusion (Rombach et al., 2022), a leading text-to-image model frequently used for data augmentation (Tian et al., 2023; Trabucco et al., 2023), was trained on millions of labeled images. This reliance prompts a fundamental question: do synthetic data truly alleviate data scarcity in downstream tasks, or do they simply shift the burden of data collection to the pre-training stage of generative models? Our work investigates whether an off-the-shelf text-to-speech (TTS) model can be trained with limited real transcribed speech data—just tens of hours—to generate synthetic data that enhances multilingual ASR models. To address this challenge, we propose Speech
Back-Translation (see Figure 1), a scalable method that builds large-scale synthetic transcribed speech from text corpora with TTS models. Our results demonstrate that synthetic data, generated by TTS models trained on just tens of hours of labeled audio, can effectively expand small human-labeled datasets to tens of thousands of hours. To assess the quality of this back-translated synthetic dataset, we propose a novel intelligibility-based metric and use it to establish thresholds indicating when synthetic speech reliably enhances ASR performance. Finally, we scale Speech Back-Translation to 500K hours across ten languages and continue pre-training Whisper-large-v3, one of the state-of-the-art multilingual ASR models. As a result, we observe consistent improvements across all languages, achieving an average reduction of over 30% in transcription error rates.

Figure 1: Pipeline of Speech Back-Translation. The main objective is to augment limited training data (≤100 hours) for low-resource languages by synthesizing extensive amounts of speech (>10,000 hours). Starting from a multilingual TTS model pre-trained with high-resource languages, we fine-tune it on a small set of seed data, then generate synthetic speech by conditioning the fine-tuned model on a large textual corpus and diverse audio prompts.

To summarize, our main contributions are listed as follows:
1. We demonstrate that just tens of hours of real transcribed speech can effectively train TTS models to generate tens of thousands of hours of high-quality synthetic speech, achieving a scaling factor of several hundred.
2. We introduce an intelligibility-based evaluation framework for synthetic speech and establish thresholds to determine when synthetic data reliably benefits ASR performance.
3. We build the largest synthetic speech dataset to date—500K hours spanning ten languages—and use it to further pre-train Whisper-large-v3. This yields an average 30% reduction in transcription error rates, highlighting the scalability of our approach.

1 https://hf.co/openai/whisper-large-v3

2 Background
2.1 Back-Translation
Back-translation is a data augmentation technique originally used in machine translation to expand training data (Sennrich et al., 2016a; Edunov et al., 2018). In a typical setup, a model trained to translate from the target language back into the source language (i.e., a “reverse” model) is used to generate synthetic source sentences from real target-language data. These newly created source-target pairs can then be used to train a forward translation model, effectively increasing its exposure to a broader range of textual content. For speech recognition, back-translation offers a mechanism to supplement scarce or imbalanced datasets by leveraging an abundance of target-side text. Here, the “reverse” model is typically a text-to-speech model that generates synthetic speech from textual corpora. Integrating this synthetic speech with existing training data allows the model to handle a wider range of speech variability, enhancing recognition performance despite resource constraints.

2.2 Zero-shot Text-to-Speech Model
Zero-shot Text-to-Speech (TTS) models (Wang et al., 2023; Casanova et al., 2024) represent a milestone in speech synthesis, enabling the generation
of high-quality speech for previously unseen speakers without additional fine-tuning. These models typically contain the following components:
• Audio Tokenizer: Encodes raw acoustic inputs (e.g., mel-spectrograms) into discrete audio tokens, forming the basis for synthesis.
• Speaker Embeddings: Contain speaker-specific acoustic features, which are normally extracted from audio clips, enabling zero-shot adaptation to new voices.
• Decoder-only Transformer: Processes speaker embeddings alongside textual tokens to generate sequences of audio tokens. The Transformer model is trained in an auto-regressive manner.
• Vocoder: Converts the generated audio tokens into waveform audio, producing the final synthesized output.
The synergy of these components allows zero-shot TTS models to generalize effectively to speakers not encountered during training, maintaining high voice similarity and naturalness.

3 Approach: Speech Back-Translation
In this section, we introduce the proposed Speech Back-Translation (see Figure 1). We first detail how we extend existing TTS models to support new low-resource languages with fine-tuning (Section 3.1). We then describe how we generate a large-scale synthetic speech dataset (Section 3.2).

3.1 Fine-tuning with Low-resource Languages
Obtaining high-quality transcribed speech for low-resource languages poses a significant challenge for multilingual ASR training. To address this, we extend existing multilingual TTS models—originally trained on high-resource languages—to new, low-resource languages via targeted fine-tuning with limited data.

Vocabulary Expansion Before fine-tuning, we expand the vocabulary of pre-trained TTS models to accommodate words not encountered during the initial training phase. We employ the Byte-Pair Encoding algorithm (Sennrich et al., 2016b) on textual data from the target language, appending the newly derived subwords to the model’s original vocabulary.
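The vocabulary-expansion step can be sketched as follows. This is a toy illustration with a whitespace pre-tokenizer and a plain-list vocabulary; the TTS model's actual tokenizer internals are not specified in the paper:

```python
from collections import Counter

def learn_bpe_subwords(corpus: str, num_merges: int) -> set:
    """Learn BPE merges over a whitespace-tokenized corpus; return the new subwords."""
    # Represent each word as a tuple of symbols, weighted by frequency.
    vocab = Counter()
    for word in corpus.split():
        vocab[tuple(word)] += 1
    subwords = set()
    for _ in range(num_merges):
        # Count adjacent symbol pairs across the corpus.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merged = a + b
        subwords.add(merged)
        # Apply the merge everywhere, left to right.
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
                    out.append(merged)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return subwords

def expand_vocabulary(base_vocab: list, target_text: str, num_merges: int = 100) -> list:
    """Append newly derived subwords, preserving the original vocabulary order."""
    new_units = sorted(learn_bpe_subwords(target_text, num_merges) - set(base_vocab))
    return list(base_vocab) + new_units
```

The key property mirrored here is that existing entries keep their positions (and hence their embedding rows), while only genuinely new subwords are appended at the end.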
This approach preserves the integrity of the existing vocabulary while enabling effective representation of new linguistic units.

Limited Data Fine-tuning Given the scarcity of transcribed speech data, we adopt a conservative fine-tuning strategy: we freeze modules responsible for low-level acoustic representations, such as the audio tokenizer and vocoder, while selectively fine-tuning only the transformer part of the TTS model. This ensures stability in fundamental acoustic modeling while effectively adapting linguistic and prosodic mappings to the target language. During fine-tuning, each pair of audio and transcript data is processed by first extracting a speaker embedding e from the audio clip. Then, we tokenize both the transcript and audio clip, concatenating the S text tokens x = [x_1, ..., x_S] and T audio tokens y = [y_1, ..., y_T] into z = [z_1, ..., z_{S+T}]. The training objective minimizes the negative log-likelihood of the sequence z conditioned on the speaker embedding e:

\mathcal{L} = -\sum_{t=1}^{S+T} \log p\left(z_t \mid z_1, \ldots, z_{t-1}, \mathbf{e}\right)    (1)

Quality Estimation Evaluating the performance of fine-tuned models is essential before deploying them for large-scale synthetic data generation. Intelligibility—commonly measured as the Word Error Rate (WER) using a robust ASR system—has emerged as the standard metric for assessing synthetic speech quality (Wang et al., 2023; Casanova et al., 2024). Yet this conventional method has two drawbacks: (1) the judge ASR introduces its own errors, particularly in low-resource languages; and (2) absolute WER values are not comparable across languages. To alleviate these issues,
we propose a novel metric called Normalized Intelligibility, leveraging ASR performance on natural speech as a reference baseline. We use the Fleurs dataset (Conneau et al., 2022), which provides high-quality audio-transcript pairs across 102 languages, and Whisper-large-v3 as our judge ASR system. By synthesizing speech using transcripts from Fleurs, we measure two WER scores for each language: WER on synthetic speech (WER_s) and WER on real speech (WER_r). Normalized Intelligibility (Norm_I) is defined as:

\mathrm{Norm\_I} = \exp\left(\frac{\mathrm{WER}_r - \mathrm{WER}_s}{\mathrm{WER}_r}\right)    (2)

This formulation offers several advantages: (1) it normalizes ASR performance across languages using real speech as a baseline, (2) it enables meaningful cross-language comparisons, and (3) it produces intuitive scores bounded between 0 and e, where higher values reflect better synthetic speech quality relative to natural speech.

3.2 Generating Large-scale Synthetic Speech
Zero-shot TTS converts text into audio by conditioning on two indispensable inputs: (i) an audio prompt that specifies the target voice style and (ii) a text sentence that supplies the textual content. Both inputs must therefore be covered at scale and with maximal diversity.
• Audio Prompts: We curate around 1 million short audio clips spanning diverse speakers and recording conditions. After strict de-duplication to remove near-identical voices, every retained clip can serve as a style prompt that the TTS model imitates. Details of data sources and filtering are provided in Appendix C.
• Text Corpus: To maximize linguistic variety, we sample sentences across various domains, following the data-mixing practices of recent open-source LLMs (Touvron et al., 2023a; Wei et al., 2023). Construction and statistics of the corpus appear in Appendix D.

Figure 2: XTTS inference speed (audio tokens per second) measured on a single NVIDIA V100-32GB GPU; the combined optimizations increase inference speed by 32.5x. “DS” refers to DeepSpeed-Inference while “Batch” refers to batch inference. For batch inference, we set batch size to be 16.

Inference Speed-up A key challenge in employing TTS models for large-scale dataset creation is their inference speed. We address this bottleneck using two complementary optimization techniques:
• DeepSpeed-Inference (Aminabadi et al., 2022): Involves fused CUDA kernel integration and optimized kernel scheduling, significantly enhancing inference throughput.
• Batch Inference: We group multiple sentences with similar lengths using a single audio prompt, then apply tailored attention masks to enable simultaneous generation of multiple utterances in one forward pass.
We evaluate the effectiveness of these techniques using XTTS (Casanova et al., 2024) on a single NVIDIA V100 GPU. As demonstrated in Figure 2, we observe that these optimizations yield a more than 30× speed-up, making large-scale speech synthesis feasible for our experiments. More details can be found in Appendix A.

4 Experimental Setup
4.1 ASR Backbone Models
Our experiments leverage Whisper models (Radford et al., 2022), a family of multilingual ASR models pre-trained on 680,000 hours of labeled speech data, as the backbone. The models are available in five sizes: Tiny (39M), Base (74M), Small (244M), Medium
(769M), and Large (1.5B). Further training details are provided in Appendix E.

4.2 Zero-shot TTS Models
We employ two state-of-the-art zero-shot TTS models in our experiments: XTTS (Casanova et al., 2024) and ChatTTS (2noise, 2024). XTTS supports 16 languages, covering a range of language families and resource levels, while ChatTTS only supports Chinese and English. More details about these two models can be found in Appendix B.

Model (WER↓)     vi    cs    hu
Whisper-medium   25.4  22.5  27.8
+ Real-only      22.8  15.6  16.9
+ Speech BT      19.0  10.3  13.2
Whisper-large    24.5  19.9  23.8
+ Real-only      19.9  12.5  13.9
+ Speech BT      16.0   9.1  11.1

Table 1: WER results for low-resource languages on Common Voice. The “Real-only” rows indicate models trained only on tens of hours of real audio, while the “Speech BT” rows present performance achieved when expanding training data to 10K hours using our method.

4.3 Languages
Our experiments span ten languages across diverse language families and resource levels. Following Whisper’s training data distribution, we categorize them based on the relative resource availability:
• High (≥10K hours): English (en), Chinese (zh), French (fr), German (de), Spanish (es)
• Mid (1K~10K hours): Dutch (nl), Italian (it)
• Low (≤1K hours): Vietnamese (vi), Czech (cs), Hungarian (hu)
Of these languages, XTTS supports all except Vietnamese. To enable Vietnamese support, we fine-tune XTTS with 100 hours of transcribed speech sampled from viVoice, a high-quality dataset derived from YouTube.

4.4 Datasets
Most of our experiments use Common Voice data (Ardila et al., 2019), chosen for its high quality and broad language coverage, and it also serves as the primary training corpus for XTTS. To assess generalization, we additionally evaluate our ASR models on Voxpopuli (Wang et al., 2021) and Multilingual LibriSpeech (Pratap et al., 2020b).
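The Normalized Intelligibility metric of Equation (2) is simple to compute once the judge ASR's two WER scores are in hand; a minimal sketch:

```python
import math

def normalized_intelligibility(wer_real: float, wer_synthetic: float) -> float:
    """Norm_I = exp((WER_r - WER_s) / WER_r), bounded in (0, e].

    wer_real: judge-ASR word error rate on real (natural) speech.
    wer_synthetic: judge-ASR word error rate on synthetic speech
        generated from the same transcripts.
    """
    return math.exp((wer_real - wer_synthetic) / wer_real)

# Synthetic speech exactly as intelligible as real speech scores 1.0;
# perfect synthetic speech (WER_s = 0) approaches e; much worse
# synthetic speech drives the score toward 0.
```

Because each score is anchored to the judge ASR's error rate on natural speech in the same language, the values remain comparable across languages even when the judge's absolute WERs differ widely.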
5 Results
We begin by demonstrating the effectiveness of our approach in scaling limited real training data to tens of thousands of hours using synthetic speech (Section 5.1). We then evaluate the models’ multilingual performance and examine their generalization to out-of-domain data (Section 5.2). Next, we analyze the relationship between TTS quality and ASR performance using our fine-tuned TTS model (Section 5.3), and explore strategies for optimally leveraging limited in-domain real data (Section 5.4). Finally, we scale the synthetic corpus to 500K hours and compare our results with prior work (Section 5.5).

2 https://hf.co/datasets/capleaf/viVoice

                 Common Voice (In-Domain)   Voxpopuli (Out-of-Domain)
Model            High  Mid   Low   Avg. ∆   High  Mid   Low   Avg. ∆
Whisper-medium   11.5  10.6  25.2  -        11.3  21.8  23.4  -
+ Real-only       9.0   8.0  17.6  -4.0     11.0  20.9  19.9  -1.4
+ Speech BT       8.5   6.1  11.1  -6.6     10.0  19.4  13.3  -4.1
Whisper-large    10.5   9.1  21.9  -        11.4  20.3  18.1  -
+ Real-only       8.7   7.2  15.4  -5.0     10.7  19.3  16.2  -1.2
+ Speech BT       6.6   5.2  10.7  -6.3      9.5  17.7  12.5  -3.3

Table 2: Comparison of Whisper models’ WER across in-domain and out-of-domain data. Adding 3,800 hours of Common Voice data (Real-only) provides strong in-domain gains but limited out-of-domain improvements, whereas scaling synthetic Speech BT data to 160,000 hours achieves robust gains across both domains.

Figure 3: Comparison of dataset sizes across seven languages (log-scale y-axis). Languages are categorized by resource availability in the Whisper dataset: (a) high-resource (French, German, Spanish), (b) mid and low-resource groups (Dutch, Italian, Czech, Hungarian). Legend: Original Whisper Dataset, Our Synthetic Dataset, Common Voice Dataset.

5.1 From Tens of Hours to Tens of Thousands
We first assess the effectiveness of our approach by expanding the amount of training data for three low-resource languages—Vietnamese (vi), Czech (cs), and Hungarian (hu)—from mere tens of hours to ten thousand hours. As a baseline, we sample real audio in amounts matching the data originally used for TTS training (100 hours for vi, 50 hours for cs, and 60 hours for hu)3. Table 1 compares these “Real-only” models against models enhanced with our “Speech BT” method for both Whisper-medium and Whisper-large. Consistently across all three languages, Speech BT provides substantial gains in WER, underscoring the effectiveness of augmenting limited real speech with large-scale synthetic data.

5.2 Multilingual Performance and Out-of-Domain Generalization
To evaluate the effectiveness and scalability of our approach in a multilingual setting, we generated 160,000 hours of synthetic speech spanning seven languages at varying resource levels: French, German, and Spanish (high-resource); Dutch and Italian (mid-resource); Czech and Hungarian (low-resource). As a baseline, we also collected 3,800 hours of transcribed speech from Common Voice as the training data. Figure 3 compares our synthetic dataset with the original Whisper dataset and Common Voice.
Our synthetic dataset provides substantially more training hours than the original Whisper dataset for each language: a 3-fold increase for high-resource languages, a 10-fold increase for mid-resource languages, and a 40-fold increase for low-resource languages. While both the Whisper training data and Common Voice exhibit substantial resource imbalance across languages (with high-resource languages having significantly more data than mid and low-resource ones), our Speech BT dataset maintains a more uniform distribution. This balanced allocation across language resources enables more equitable training, addressing a key limitation of naturally collected datasets.

3 Training data for vi comes from viVoice, whereas data for cs and hu are sampled from Common Voice.

Figure 4: Whisper’s performance improves consistently with larger models and more training data. We train five sizes of Whisper models with up to 160,000 hours of data and conduct evaluation on Common Voice 16. We report averaged WER across seven languages.

Out-of-Domain Generalization Table 2 shows a detailed comparison of model performance in both in-domain (Common Voice) and out-of-domain (Voxpopuli) scenarios. Training with only real transcribed speech from Common Voice (Real-only) yields clear in-domain improvements for both Whisper-medium and Whisper-large (4.0% and 5.0% average WER reduction, respectively), but the generalization to out-of-domain data is limited (just 1.4% and 1.2% average reduction). In contrast, supplementing
real data with Speech BT significantly enhances both in-domain (6.6% for Whisper-medium, 6.3% for Whisper-large) and out-of-domain performance (4.1% and 3.3%, respectively). This clearly demonstrates that our synthetic data not only improves model robustness within-domain but also enhances generalization capabilities across diverse domains.

Scalability with Model and Data Size To further assess scalability, we train five Whisper model variants—tiny, base, small, medium, and large—using the same data mentioned above. Figure 4 presents the averaged WER across all seven languages for each model size at increasing scales of training data up to 160,000 hours. The results show two clear trends. First, adding more training data consistently lowers WER across all model sizes. Second, larger models achieve substantially lower WER at each data scale. These scaling trends suggest that our Speech BT approach effectively improves multilingual ASR performance across different model and data scales.

Figure 5: Impact of training data quantity (20 to 100 hours) and epochs on Vietnamese TTS quality. The purple dashed line shows the WER of natural speech from Fleurs.

5.3 TTS Quality vs ASR Performance
We now investigate extending TTS support to a new language, Vietnamese, using limited amounts of transcribed speech data. To explore how the quantity of training data impacts TTS model performance, we sampled datasets at increments of {20, 40, 60, 80, 100} hours and trained each for up to 10 epochs. The results, shown in Figure 5, clearly indicate that performance consistently improves as the amount of training data and the number of epochs increase. Specifically, the model trained on the 100-hour dataset reaches a WER of 10% in the end, which closely approaches the baseline WER for natural speech.
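Since every quality number in this section reduces to a word error rate, a minimal reference implementation may help make the metric concrete. This is the standard word-level Levenshtein formulation, not the paper's exact normalizer-equipped scorer:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why the paper's cross-language comparisons normalize against WER on real speech rather than comparing raw values.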
Next, we analyze the relationship between TTS model quality and downstream ASR performance. We selected several checkpoints from the fine-tuned TTS models, varying by the amount of training data and the number of epochs. For each checkpoint, we generated 100 hours of synthetic speech and subsequently used it to train Whisper-medium. We then measured the resulting changes in WER (denoted as ∆WER, where negative values indicate improvement) on the Common Voice dataset. The correlation between each checkpoint’s normalized intelligibility score (see Equation 2) and ASR performance is illustrated in Figure 6.

Figure 6: Relationship between TTS quality and ASR performance. Higher TTS intelligibility correlates with greater ASR improvement.

Our analysis reveals a strong correlation between TTS intelligibility scores and ASR performance improvements. Notably, we identified a critical intelligibility threshold around 0.01, serving as a clear inflection point. Below this threshold, TTS-generated speech leads to increased WER, degrading ASR performance by up to 2 points. Conversely, once the threshold is surpassed, synthetic speech consistently enhances ASR accuracy, with greater intelligibility corresponding to more pronounced reductions in WER. This underscores the
importance of achieving a minimum TTS quality level for effective ASR data augmentation. Additionally, the volume of TTS training data significantly influences the ability to surpass this intelligibility threshold. Models trained on larger datasets generally achieve higher intelligibility scores and yield greater ASR performance gains. However, we observe diminishing returns as normalized intelligibility approaches 1.0, where WER reductions stabilize around 3 percentage points. This finding suggests that while adequate training data is essential to cross the quality threshold, further improvements in ASR performance may plateau beyond a certain point.

5.4 Effective Utilization of Limited In-Domain Transcribed Audio
The experiments in Section 5.3 are essentially conducted under out-of-domain conditions, as the TTS models are trained on the viVoice dataset but the final ASR performance is evaluated on the Common Voice dataset. Notably, we identified only about three hours of transcribed audio in Common Voice Vietnamese available for training. This prompts an important research question: how can we effectively leverage such a small but valuable amount of in-domain data? We propose three methods for effectively utilizing the in-domain data to enhance model performance:
• Approach 1: Pre-train Whisper on large-scale synthetic data followed by supervised fine-tuning using the limited in-domain data.
• Approach 2: Prompt the fine-tuned Vietnamese TTS model with in-domain audio clips for speech synthesis.
• Approach 3: Further fine-tune the Vietnamese TTS model with in-domain data before synthesizing speech.

Model                         WER↓
Whisper-medium                25.4
+ in-domain fine-tune         21.6
Approach 1
+ synthetic data pre-train    21.2
  + in-domain fine-tune       20.4
Approach 2
+ synthetic data pre-train    20.1
Approach 3
+ synthetic data pre-train    18.6

Table 3: Vietnamese WER performance on Common Voice using different approaches for leveraging limited in-domain data.
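As a quick arithmetic check, the relative improvements discussed in this subsection follow directly from the Table 3 WER values:

```python
def relative_improvement(baseline_wer: float, new_wer: float) -> float:
    """Relative WER reduction, in percent, over a baseline."""
    return 100.0 * (baseline_wer - new_wer) / baseline_wer

# From Table 3: Approach 3 (WER 18.6) vs. the Whisper-medium baseline
# (WER 25.4) gives (25.4 - 18.6) / 25.4 = 26.8%, i.e. roughly a 27%
# relative improvement.
```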
For all three approaches, we utilize fine-tuned XTTS checkpoints trained on 100 hours of transcribed speech and generate 1,000 hours of synthetic speech for pre-training Whisper-medium. Table 3 summarizes the resulting WERs. As a baseline, we first fine-tune Whisper-medium on three hours of in-domain data, which alone reduces WER from 25.4 to 21.6 and demonstrates the effectiveness of even limited domain adaptation. By combining synthetic speech pre-training on 1,000 hours and subsequent in-domain fine-tuning (Approach 1), we obtain a further WER reduction to 20.4. However, the most pronounced improvements arise from leveraging in-domain data within the TTS pipeline itself (Approaches 2 and 3). Simply prompting a fine-tuned XTTS model with in-domain audio achieves a WER of 20.1, already outperforming the synthetic pre-training baseline. Adding a dedicated fine-tuning stage for XTTS on the three hours of Common Voice audio (Approach 3) yields the best overall WER of 18.6—a 27.0% relative improvement over the 25.4 baseline. This underscores the value of adapting both the TTS and ASR models to the target domain, especially for low-resource languages like Vietnamese. In summary, leveraging a small in-domain dataset—through synthetic pre-training, language-specific fine-tuning, and TTS-based in-domain adaptation—proves highly effective for improving ASR performance under real-world low-resource conditions.

                                   Common Voice      Voxpopuli         MLS
Model                   Size      High  Mid   Low   High  Mid   Low   High  Mid
SeamlessM4T-medium      1.2B      13.3  12.8  24.4  10.7  20.0  12.6   8.0  13.0
Whisper-large-v2        1.5B      11.4   8.1  19.9   9.8  19.5  16.3   6.3  11.5
Whisper-large-v3        1.5B      10.1   5.9  15.6  12.6  28.6  14.4   5.3  10.2
+ Real-only (15K Hrs)   -          8.6   4.9  12.5   7.9  17.1  10.6   5.0   9.4
+ Speech-BT (500K Hrs)  -          7.8   4.3   8.3   7.6  16.2   8.0   4.4   7.6

Table 4: Multilingual ASR performance on various benchmarks. Results are averaged for each language resource category. Word Error Rate (WER) is reported for all languages except Chinese, which is measured with Character Error Rate (CER). All results are normalized with the Whisper Normalizer (Radford et al., 2022).

5.5 Scaling to 500,000 Hours
Building on insights from our previous analysis, we now push the limits of multilingual ASR training with Speech Back-Translation. Starting from the baseline approach in Section 5.2, we implement several key enhancements:

Training Data Expansion We expand coverage to ten languages by incorporating three additional ones—English, Chinese, and Vietnamese. We also extend the amount of real speech: in addition to Common Voice (Ardila et al., 2019), we include real transcribed speech from Multilingual LibriSpeech (Pratap et al., 2020b), Voxpopuli (Wang et al., 2021), and viVoice, bringing the total amount of real data to 15,000 hours. Most significantly, we scale our synthetic speech dataset to 500,000 hours—a volume more than thirty times larger than the real data. The statistics of the training data are illustrated in Appendix G.

Backbone Model and Baselines We adopt Whisper-large-v3, one of the state-of-the-art multilingual ASR models with 1.5B parameters, as our backbone model. For comparison, we include two ASR models with similar sizes—SeamlessM4T-medium (Communication et al., 2023) and Whisper-large-v2 (Radford et al., 2022)—as our baselines for their competitive performance and wide language coverage.
Results We evaluate both our models and the baseline models on three benchmarks, Common Voice, Voxpopuli, and Multilingual LibriSpeech (MLS), and present the results in Table 4, reporting averaged results for each language category. The results show a clear performance trajectory: training Whisper-large-v3 with 15K hours of real audio consistently improves performance across all benchmarks, while augmenting with 500K hours of Speech-BT data yields further substantial gains, achieving state-of-the-art results across all language categories. On average across all benchmarks, our full model achieves a 30% error rate reduction over the base Whisper-large-v3. Breaking this down by language group, high-resource and mid-resource languages achieve 26% and 30% improvements respectively, while low-resource languages achieve a remarkable 46% improvement. These findings indicate that augmenting real data with our synthetic Speech-BT data contributes significantly to advancing multilingual ASR systems, with particular benefits for traditionally underserved language communities. Detailed per-language results can be found in Appendix H.

6 Conclusion

This work introduced Speech Back-Translation, a scalable approach to addressing the persistent challenge of data scarcity in multilingual ASR. Our method demonstrates that TTS models trained on merely tens of hours of transcribed speech can generate hundreds of times more synthetic data of sufficient
quality to significantly improve ASR performance. Our large-scale implementation across ten languages with 500,000 hours of synthetic speech yielded an average 30% reduction in Whisper-large-v3's transcription error rates, confirming the effectiveness and scalability of our approach. Speech Back-Translation challenges the need for massive human-labeled datasets by effectively scaling limited data, making advanced speech recognition more accessible across diverse languages. Future work could extend our approach to extremely low-resource languages, refine language-specific metrics, and combine it with other augmentation techniques.

Limitations

While our approach demonstrates significant improvements in multilingual ASR performance, several limitations should be noted.

First, the synthetic speech data generated by TTS models may not fully capture the acoustic complexity of real-world environments, particularly scenarios with background noise, multiple speakers, or variable recording conditions. This limitation could impact model robustness when deployed in settings with poor signal-to-noise ratios or challenging acoustic environments.

Second, although we introduce an intelligibility-based metric for assessing synthetic speech quality, this assessment framework may not comprehensively capture all aspects of speech that could influence ASR training effectiveness. Future work could explore additional quality metrics that consider factors such as prosody and emotional expression.

Third, our experimental validation is primarily based on two TTS models (XTTS and ChatTTS), which may not represent the full spectrum of TTS capabilities and limitations. A more comprehensive evaluation across a broader range of TTS systems could provide additional insights into the generalizability of our approach and identify potential TTS-specific biases or artifacts.
Lastly, while we demonstrate the scalability of our method by generating 500,000 hours of synthetic speech, our language coverage remains limited to ten languages, nine of which are already supported by existing TTS models. Further research is needed to validate our approach's effectiveness in other low-resource languages, particularly those with distinct phonological characteristics or limited linguistic resources.

Acknowledgments

This research/project is supported by the National Research Foundation, Singapore under its National Large Language Models Funding Initiative (AISG Award No: AISG-NMLP-2024-005), and the Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme (MOE AcRF Tier 2 Award No.: MOE-T2EP20122-0011). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the National Research Foundation, Singapore, or the Ministry of Education, Singapore.

References

2noise. 2024. ChatTTS. https://github.com/2noise/ChatTTS.

Reza Yazdani Aminabadi, Samyam Rajbhandari, Minjia Zhang, Ammar Ahmad Awan, Cheng Li, Du Li, Elton Zheng, Jeff Rasley, Shaden Smith, Olatunji Ruwase, and Yuxiong He. 2022. DeepSpeed-Inference: Enabling efficient inference of transformer models at unprecedented scale. In SC22: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–15.

Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, Michael Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M. Tyers, and Gregor Weber. 2019. Common Voice: A massively-multilingual speech corpus. ArXiv, 1912.06670.

Shekoofeh Azizi, Simon Kornblith, Chitwan Saharia, Mohammad Norouzi, and David J. Fleet. 2023. Synthetic data from diffusion models improves ImageNet classification.
ArXiv, 2304.08466.

Matthew Baas and Herman Kamper. 2021. Voice conversion can improve ASR in very low-resource settings. In Proceedings of Interspeech.

Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Miguel Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2021. XLS-R: Self-supervised cross-lingual speech representation learning at scale. In Proceedings of Interspeech.

Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. ArXiv, 2006.11477.

Ye Bai, Jingping Chen, Jitong Chen, Wei Chen, Zhuo Chen, Chen Ding, Linhao Dong, Qianqian Dong, Yujiao Du, Kepan Gao, Lu Gao, Yi Guo, Minglun Han, Ting Han, Wenchao Hu, Xinying Hu, Yuxiang Hu, Deyu Hua, Lu Huang, Ming Huang, Youjia Huang, Jishuo Jin, Fanliu Kong, Zongwei Lan, et al. 2024. Seed-ASR: Understanding diverse speech and contexts with LLM-based speech recognition. ArXiv, 2407.04675.

Loïc Barrault, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation.

Martijn Bartelds, Nay San, Bradley McDonnell, Dan Jurafsky, and Martijn B. Wieling. 2023. Making more of little data: Improving low-resource automatic speech recognition using data augmentation. In Proceedings of ACL.

Edresson Casanova, Kelly Davis, Eren Gölge, Görkem Göknar, Iulian Gulea, Logan Hart, Aya Aljafari, Joshua Meyer, Reuben Morais, Samuel Olayemi, and Julian Weber. 2024. XTTS: A massively multilingual zero-shot text-to-speech model. ArXiv, 2406.04904.
William Chen, Wangyou Zhang, Yifan Peng, Xinjian Li, Jinchuan Tian, Jiatong Shi, Xuankai Chang, Soumi Maiti, Karen Livescu, and Shinji Watanabe. 2024. Towards robust speech representation learning for thousands of languages. ArXiv, 2407.00837.

Shanbo Cheng, Zhichao Huang, Tom Ko, Hang Li, Ningxin Peng, Lu Xu, and Qini Zhang. 2024. Towards achieving human parity on end-to-end simultaneous speech translation via LLM agent. ArXiv, 2407.21646.

Seamless Communication, Loïc Barrault, Yu-An Chung, Mariano Coria Meglioli, David Dale, Ning Dong, Mark Duppenthaler, Paul-Ambroise Duquenne, Brian Ellis, Hady ElSahar, Justin Haaheim, John Hoffman, Min-Jae Hwang, Hirofumi Inaguma, Christopher Klaiber, Ilia Kulikov, Pengwei Li, Daniel Licht, et al. 2023. Seamless: Multilingual expressive and streaming speech translation. ArXiv, 2312.05187.

Alexis Conneau, Min Ma, Simran Khanuja, Yu Zhang, Vera Axelrod, Siddharth Dalmia, Jason Riesa, Clara Rivera, and Ankur Bapna. 2022. FLEURS: Few-shot learning evaluation of universal representations of speech. In 2022 IEEE Spoken Language Technology Workshop (SLT), pages 798–805.

Marta Ruiz Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, et al. 2022. No language left behind: Scaling human-centered machine translation. ArXiv, 2207.04672.

Frederico Santos de Oliveira, Edresson Casanova, Arnaldo Candido Júnior, Anderson da Silva Soares, and Arlindo R. Galvão Filho. 2023. CML-TTS: A multilingual dataset for speech synthesis in low-resource languages. ArXiv, 2306.10097.

Sergey Edunov, Myle Ott, Michael
Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of EMNLP.

Lijie Fan, Kaifeng Chen, Dilip Krishnan, Dina Katabi, Phillip Isola, and Yonglong Tian. 2023. Scaling laws of synthetic images for model training ... for now. In Proceedings of CVPR.

Heting Gao, Kaizhi Qian, Junrui Ni, Chuang Gan, Mark A. Hasegawa-Johnson, Shiyu Chang, and Yang Zhang. 2024. Speech self-supervised learning using diffusion model synthetic data. In Proceedings of ICML.

Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allison Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, S. Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and Yuan-Fang Li. 2023. Textbooks are all you need. ArXiv, 2306.11644.

Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Proceedings of NeurIPS.

Haorui He, Zengqiang Shang, Chaoren Wang, Xuyuan Li, Yicheng Gu, Hua Hua, Liwei Liu, Chen Yang, Jiaqi Li, Peiyang Shi, Yuancheng Wang, Kai Chen, Pengyuan Zhang, and Zhizheng Wu. 2024. Emilia: An extensive, multilingual, and diverse speech dataset for large-scale speech generation. ArXiv, 2407.05361.

Cong Duy Vu Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back-translation for neural machine translation. In Proceedings of NMT@ACL.

Zhichao Huang, Rong Ye, Tom Ko, Qianqian Dong, Shanbo Cheng, Mingxuan Wang, and Hang Li. 2023. Speech translation with large language models: An industrial practice. ArXiv, 2312.13585.

Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Machine Translation Summit.

Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020. HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis. ArXiv, 2010.05646.
Guillaume Lample, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In Proceedings of ICLR.

Haoran Li, Qingxiu Dong, Zhengyang Tang, Chaojun Wang, Xingxing Zhang, Haoyang Huang, Shaohan Huang, Xiaolong Huang, Zeqiang Huang, Dongdong Zhang, Yuxian Gu, Xin Cheng, Xun Wang, Si-Qing Chen, Li Dong, et al. 2024. Synthetic data (almost) from scratch: Generalized instruction tuning for language models. ArXiv, 2402.13064.

Xingyuan Pan, Luyang Huang, Liyan Kang, Zhicheng Liu, Yu Lu, and Shanbo Cheng. 2024. G-DIG: Towards gradient-based diverse and high-quality instruction data selection for machine translation. In Proceedings of ACL.

Vineel Pratap, Anuroop Sriram, Paden Tomasello, Awni Y. Hannun, Vitaliy Liptchinsky, Gabriel Synnaeve, and Ronan Collobert. 2020a. Massively multilingual ASR: 50 languages, 1 model, 1 billion parameters. ArXiv, 2007.03001.

Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Mamdouh Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2023. Scaling speech technology to 1,000+ languages. ArXiv, 2305.13516.

Vineel Pratap, Qiantong Xu, Anuroop Sriram, Gabriel Synnaeve, and Ronan Collobert. 2020b. MLS: A large-scale multilingual dataset for speech research. ArXiv, 2012.03411.

Krishna C. Puvvada, Piotr Żelasko, He Huang, Oleksii Hrinchuk, Nithin Rao Koluguri, Kunal Dhawan,
Somshubra Majumdar, Elena Rastorgueva, Zhehuai Chen, Vitaly Lavrukhin, et al. 2024. Less is more: Accurate speech recognition & translation without web-scale data. ArXiv, 2406.19674.

Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022. Robust speech recognition via large-scale weak supervision. ArXiv, 2212.04356.

Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2019. ZeRO: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis.

Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of CVPR.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of ACL.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of ACL.

Hubert Siuzdak. 2023. Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis. ArXiv, 2306.00814.

Jenthe Thienpondt and Kris Demuynck. 2023. ECAPA2: A hybrid neural network architecture and training strategy for robust speaker embeddings. In 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 1–8.

Yonglong Tian, Lijie Fan, Phillip Isola, Huiwen Chang, and Dilip Krishnan. 2023. StableRep: Synthetic images from text-to-image models make strong visual representation learners. ArXiv, 2306.00984.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. LLaMA: Open and efficient foundation language models. ArXiv, 2302.13971.
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Cantón Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. ArXiv, 2307.09288.

Brandon Trabucco, Kyle Doherty, Max Gurinas, and Ruslan Salakhutdinov. 2023. Effective data augmentation with diffusion models. ArXiv, 2302.07944.

Changhan Wang, Morgane Rivière, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Miguel Pino, and Emmanuel Dupoux. 2021. VoxPopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. ArXiv, 2101.00390.

Chengyi Wang, Sanyuan Chen, Yu Wu, Zi-Hua Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, Lei He, Sheng Zhao, and Furu Wei. 2023. Neural codec language models are zero-shot text to speech synthesizers. ArXiv, 2301.02111.

Tianduo Wang, Shichen Li, and Wei Lu. 2024. Self-training with direct preference optimization improves chain-of-thought reasoning. In Proceedings of ACL.

Tianduo Wang and Wei Lu. 2022. Differentiable data augmentation for contrastive sentence representation learning. In Proceedings of EMNLP.

Tianduo Wang and Wei Lu. 2023. Learning multi-step reasoning by solving arithmetic tasks. In Proceedings of ACL.

Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li,
Liu Yang, Xilin Luo, Xue Gang Wu, Lunan Liu, Wenjun Cheng, et al. 2023. Skywork: A more open bilingual foundation model. ArXiv, 2310.19341.

Guanrou Yang, Fan Yu, Ziyang Ma, Zhihao Du, Zhifu Gao, Shiliang Zhang, and Xie Chen. 2024. Enhancing low-resource ASR through versatile TTS: Bridging the data gap. ArXiv, 2410.16726.

Heiga Zen, Viet Dang, Robert A. J. Clark, Yu Zhang, Ron J. Weiss, Ye Jia, Z. Chen, and Yonghui Wu. 2019. LibriTTS: A corpus derived from LibriSpeech for text-to-speech. In Proceedings of Interspeech.

Binbin Zhang, Hang Lv, Pengcheng Guo, Qijie Shao, Chao Yang, Lei Xie, Xin Xu, Hui Bu, Xiaoyu Chen, Chenchen Zeng, Di Wu, and Zhendong Peng. 2022. WenetSpeech: A 10000+ hours multi-domain Mandarin corpus for speech recognition. In Proceedings of ICASSP.

Yu Zhang, Daniel S. Park, Wei Han, James Qin, Anmol Gulati, Joel Shor, Aren Jansen, Yuanzhong Xu, Yanping Huang, Shibo Wang, Zongwei Zhou, Bo Li, Min Ma, William Chan, Jiahui Yu, Yongqiang Wang, Liangliang Cao, Khe Chai Sim, Bhuvana Ramabhadran, Tara N. Sainath, Françoise Beaufays, Zhifeng Chen, Quoc V. Le, Chung-Cheng Chiu, Ruoming Pang, and Yonghui Wu. 2021. BigSSL: Exploring the frontier of large-scale semi-supervised learning for automatic speech recognition. IEEE Journal of Selected Topics in Signal Processing, 16:1519–1532.

A Inference Optimization Details

We accelerate inference by integrating DeepSpeed-Inference (Aminabadi et al., 2022) into the TTS pipeline. DeepSpeed's deep fusion merges multiple tiny CUDA launches into a single, highly optimized kernel that combines element-wise operations, matrix multiplications, transpositions, and reductions. Merging these operations reduces kernel-invocation overhead and off-chip memory traffic, translating into noticeably lower latency and higher throughput. We compound these gains with batch inference. Input sentences are grouped by language and length, then paired with a single audio prompt that supplies the target voice.
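The grouping step can be sketched as follows; the function name, bucket width, and batch size are illustrative choices, not values from the paper:

```python
from collections import defaultdict

def make_batches(sentences, batch_size=16, bucket_width=20):
    """Group (language, text) pairs by language and length bucket so each
    batch holds similarly sized inputs in a single language, limiting
    padding waste during batched TTS inference."""
    buckets = defaultdict(list)
    for lang, text in sentences:
        # Bucket by approximate character length.
        buckets[(lang, len(text) // bucket_width)].append(text)
    batches = []
    for (lang, _), texts in sorted(buckets.items()):
        for i in range(0, len(texts), batch_size):
            batches.append((lang, texts[i:i + batch_size]))
    return batches
```

Each returned batch can then be synthesized in one forward pass against a shared audio prompt.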
Custom attention masks mark prompt–text boundaries, allowing the TTS model to synthesize multiple utterances concurrently. This batching strategy reduces redundant computation and GPU idle time, dramatically improving overall inference efficiency.

B XTTS vs. ChatTTS

In this section, we present a comparative analysis of XTTS and ChatTTS for generating synthetic audio in Chinese and English. Table 5 summarizes the architectural details of both models. As XTTS's training data mainly comes from Common Voice, we treat Common Voice 16 as the in-domain dataset and Fleurs as the out-of-domain dataset for evaluation.

Performance Comparison We synthesize speech from 100K Chinese and English sentences using both models and train Whisper-medium to assess the effectiveness of these synthetic datasets. As shown in Figure 7 (a) and (b), XTTS outperforms ChatTTS on in-domain Chinese data, whereas ChatTTS excels on out-of-domain Chinese data. For English, XTTS achieves a WER of 4.0%, surpassing ChatTTS's 4.4%. These trends highlight each model's distinct strengths in handling language-specific characteristics.

TTS Quality Comparison To understand the performance difference, we assess TTS quality with our proposed normalized intelligibility metric for both models. As shown in Figure 7, XTTS achieves superior intelligibility in English
Figure 7: Comparison of Whisper-medium ASR performance on in-domain (CV16) and out-of-domain (Fleurs) test sets, as well as TTS quality, when training with synthetic Chinese and English speech generated by XTTS and ChatTTS.

(0.96 vs. 0.74), while ChatTTS excels in Chinese (0.87 vs. 0.59). Nevertheless, XTTS performs better on in-domain Chinese data, suggesting that while ChatTTS produces more intelligible speech in general, XTTS is more effective within the specific domain represented by Common Voice 16. This discrepancy may be attributed to domain-matched acoustic patterns and speaking styles that XTTS models more accurately. Meanwhile, on out-of-domain data (Fleurs), ChatTTS's superior general intelligibility dominates, leading to stronger performance. In English, XTTS demonstrates higher intelligibility and more robust ASR results compared to ChatTTS. Overall, these findings underscore how a TTS model's domain alignment and language-specific strengths can influence synthetic data quality and downstream ASR performance.

C Audio Prompt Details

We collect a diverse set of audio clips from various sources to serve as audio prompts for our TTS models. To prevent redundancy in voice characteristics, we extract speaker embeddings from each reference clip using the ECAPA2 speaker encoder (Thienpondt and Demuynck, 2023) and remove duplicates by comparing their cosine similarity, applying a threshold of 0.8. Table 6 summarizes the sources of these audio clips.

D Textual Data Details

Our textual corpus is sourced from a wide range of domains. Since some sources include sequences that are too long for TTS synthesis, we first segment the text using a sentencizer.
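The cosine-similarity de-duplication of audio prompts in Appendix C can be sketched as follows; `embed` stands in for the ECAPA2 encoder, which is not reproduced here:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def deduplicate(clips, embed, threshold=0.8):
    """Greedily keep a clip only if its speaker embedding is not too
    similar (cosine > threshold) to any previously kept clip."""
    kept, kept_embs = [], []
    for clip in clips:
        e = embed(clip)
        if all(cosine(e, k) <= threshold for k in kept_embs):
            kept.append(clip)
            kept_embs.append(e)
    return kept
```

Note this greedy scan is O(n²) in the number of kept clips; at the scale of nearly one million prompts, an approximate nearest-neighbor index would be the practical choice.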
We then filter out sentences that are either too short, too long, or contain an excessive number of non-alphabetic characters. To reduce redundancy, we perform sentence-level de-duplication. A detailed breakdown of our corpus sources is provided below.

Model     Transformer (Layers / Width / Heads)   Vocabulary (Text / Audio)   Vocoder           Parameters   Lang
XTTS      30 / 1,024 / 16                        6,681 / 1,024               HiFi-GAN (2020)   467M         16
ChatTTS   20 / 768 / 12                          21,178 / 626                Vocos (2023)      280M         2

Table 5: Architecture details of XTTS and ChatTTS.

Dataset                              Num. Clips
Emilia (He et al., 2024)             560K
CommonVoice (Ardila et al., 2019)    230K
WenetSpeech (Zhang et al., 2022)     102K
CML-TTS (de Oliveira et al., 2023)   92K
LibriTTS (Zen et al., 2019)          10K
Total                                994K

Table 6: Audio prompt distribution. The audio clips used for voice cloning come from various sources.

Wikipedia Wikipedia, a collaborative online encyclopedia containing millions of articles, serves as a valuable source of high-quality natural text and has therefore been widely used for training language models (Touvron et al., 2023a,b).

WMT (Barrault et al., 2019) We also collected textual data from the training split of the WMT19 translation task, a widely used training data source in machine translation research.

Books Our Books dataset is sourced primarily from Project Gutenberg, a digital library of public-domain literature. Book-level de-duplication is performed to ensure the quality and uniqueness of the corpus.

Europarl (Koehn, 2005) Europarl is a parallel corpus created for training machine translation systems, containing aligned text in European languages extracted from European Parliament proceedings. We utilize the 8th version of the dataset.

SkyPile (Wei et al., 2023) SkyPile is a large-scale Chinese dataset containing approximately 150 billion tokens, curated specifically for pre-training large language models. The corpus is compiled from diverse Chinese web pages across the public internet and undergoes rigorous quality control, including thorough document-level de-duplication and content filtering.

E Training Details

Whisper We train Whisper using AdamW (β1 = 0.9, β2 = 0.98, ε = 1e-8) with a weight decay of 0.01. We use a constant learning rate of 7e-6 after 5% warm-up steps. To optimize distributed training, we leverage DeepSpeed ZeRO-2 (Rajbhandari et al., 2019). Additionally, we concatenate short audio clips, up to Whisper's 30-second input limit, to improve efficiency. Unless otherwise specified, our batch size is 128. In the experiments presented in Section 5.2, we increase it to 768, while in the Section 5.5 experiment, we further increase it to 1,024. For evaluation, we generate transcripts with greedy decoding.

XTTS Before fine-tuning, we expand the model's text vocabulary with 2,000 additional Vietnamese tokens obtained by running the Byte-Pair Encoding algorithm over Vietnamese textual data. We use the AdamW optimizer (β1 = 0.9, β2 = 0.96, ε = 1e-8) with weight decay 0.01 and a learning rate of 5e-6. The batch size is set to 32.

F Related Work

F.1 Synthetic Data for Multilingual ASR

Recently, we have witnessed the application of synthetic data in various domains and modalities, e.g., contrastive representation learning (Wang and Lu, 2022; Tian et al., 2023) and math reasoning (Wang and Lu, 2023; Wang et al., 2024). Our work focuses on improving multilingual ASR models using synthetic audio generated by zero-shot TTS models, with particular emphasis on low-resource languages. This research builds upon previous efforts that address data scarcity through synthetic data generation. Bartelds et al.
(2023) demonstrated that both self-training and TTS-generated data can effectively overcome data availability limitations in resource-scarce languages. Their work specifically examined four languages (Gronings, West-Frisian, Besemah, and Nasa), showing significant improvements in ASR performance. Baas and Kamper (2021) explored voice conversion (VC) models for data augmentation in low-resource languages. Their key finding was that a VC system trained on a well-resourced language like English can generate effective training data for previously unseen low-resource languages. More recently, Gao et al. (2024) proposed using diffusion models to generate high-quality synthetic audio for self-supervised pre-training. The authors suggest that diffusion models are particularly adept at capturing complex speech structures from real audio, making the synthetic data especially valuable for self-supervised learning tasks.

F.2 Text-Based Back-Translation

Back-translation (Sennrich et al., 2016a; Edunov et al., 2018) was originally proposed in machine translation (Sennrich et al., 2016b; Pan et al., 2024) to augment limited parallel training corpora with the large amounts of available monolingual textual data. It translates target-language data into the source language, generating additional synthetic parallel data that boosts overall translation quality (Sennrich et al., 2016a). This method capitalizes on monolingual text resources, which are more abundant than parallel corpora, thereby increasing model robustness and
reducing overfitting. Subsequent work has explored variants of back-translation such as iterative back-translation, filtering synthetic data by quality, and domain adaptation strategies (Edunov et al., 2018; Hoang et al., 2018). In addition, dual-learning frameworks have incorporated back-translation and forward-translation jointly for unsupervised and semi-supervised machine translation scenarios (He et al., 2016; Lample et al., 2018). These developments underscore the broader impact of synthetic data in enhancing model performance, even where labeled data are sparse.

F.3 Speech Translation

Beyond text-based machine translation, speech translation deals with converting audio signals in one language to either text or audio in another language, frequently via cascaded automatic speech recognition and machine translation modules or through end-to-end systems (Cheng et al., 2024; Huang et al., 2023). One persistent challenge in this domain, especially for lower-resource languages, is the scarcity of paired audio-transcript data. A widely used approach to address this limitation is to create pseudo-labeled data by transcribing existing audio and then translating the resulting transcripts (Communication et al., 2023; Puvvada et al., 2024). A natural future direction is that our Speech Back-Translation approach could be extended to speech translation tasks by synthesizing speech from existing parallel corpora.

Language     Real (Hrs)   Synthetic (Hrs)
English      3,951        75,159
French       2,486        94,822
German       3,706        90,782
Spanish      1,674        47,745
Chinese      204          37,910
Dutch        1,525        41,095
Italian      839          38,069
Czech        119          33,312
Hungarian    156          33,492
Vietnamese   104          13,444
Total        14,864       505,830

Table 7: Statistics of the training data in our 500K-hour experiment.

G 500K-Hour Training Data Statistics

The detailed statistics of the training data used in our 500K-hour scaling-up experiments are presented in Table 7.
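As a quick check of the claim that the synthetic data is more than thirty times larger than the real data, the per-language hours from Table 7 can be summed directly:

```python
# Per-language training hours as listed in Table 7.
real_hours = {"English": 3951, "French": 2486, "German": 3706, "Spanish": 1674,
              "Chinese": 204, "Dutch": 1525, "Italian": 839, "Czech": 119,
              "Hungarian": 156, "Vietnamese": 104}
synthetic_hours = {"English": 75159, "French": 94822, "German": 90782,
                   "Spanish": 47745, "Chinese": 37910, "Dutch": 41095,
                   "Italian": 38069, "Czech": 33312, "Hungarian": 33492,
                   "Vietnamese": 13444}

total_real = sum(real_hours.values())
total_synthetic = sum(synthetic_hours.values())   # 505,830 hours
ratio = total_synthetic / total_real              # roughly 34x
```

The synthetic-to-real ratio also varies widely by language (e.g., Vietnamese has over a hundred times more synthetic than real hours), which is where the low-resource gains concentrate.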
H Additional 500K-Hour Scaling Results

In this section, we show detailed results for each language from Section 5.5. The results for Multilingual LibriSpeech (MLS), Voxpopuli, and Common Voice 16 are presented in Table 8, Table 9, and Table 10, respectively. Additionally, we compare against state-of-the-art multilingual ASR models: SeedASR (Bai et al., 2024), SeamlessM4T (Communication et al., 2023), Canary (Puvvada et al., 2024), and Whisper-large and Whisper-large-v2 (Radford et al., 2022).

Model                    Size    High                       Mid
                                 en    fr    de    es       nl    it
SeedASR                  -       4.1   5.1   -     3.8      -     -
SeamlessM4T-medium       1.2B    9.8   7.9   8.9   5.4      13.6  12.3
Canary                   1.0B    5.1   4.4   4.7   3.4      -     -
Whisper-large            1.5B    7.2   8.8   7.4   5.3      11.1  14.1
Whisper-large-v2         1.5B    6.8   7.4   6.4   4.6      10.0  12.9
Whisper-large-v3         1.5B    5.3   5.6   6.0   4.0      10.4  9.9
+ Real-only (15K Hrs)    -       5.5   5.1   5.7   3.5      10.2  8.5
+ Speech-BT (500K Hrs)   -       5.2   4.3   4.9   3.0      8.5   6.7

Table 8: Performance comparison across languages on Multilingual LibriSpeech (MLS).

Model                    Size    High                       Mid           Low
                                 en    fr    de    es       nl    it      cs    hu
SeamlessM4T-medium       1.2B    8.2   11.8  14.0  8.8      17.2  22.8    11.0  14.1
Canary                   1.0B    6.0   9.2   10.7  7.0      -     -       -     -
Whisper-large            1.5B    8.1   10.5  15.2  8.5      17.6  22.9    17.7  18.4
Whisper-large-v2         1.5B    7.9   10.4  13.1  7.9      15.8  23.2    14.3  18.3
Whisper-large-v3         1.5B    9.7   10.4  19.7  10.6     24.9  32.3    12.4  16.3
+ Real-only (15K Hrs)    -       6.0   8.9   9.6   7.0      12.5  21.7    9.5   11.7
+ Speech-BT (500K Hrs)   -       5.6   8.4   9.0   7.4      12.5  19.8    7.6   8.3

Table 9: Performance comparison across languages on Voxpopuli.

Model                    Size    High                             Mid           Low
                                 en    fr    de    es    zh       nl    it      cs    hu    vi
SeamlessM4T-medium       1.2B    11.3  14.5  12.1  9.8   18.7     15.2  10.4    14.4  34.8  24.1
Canary                   1.0B    8.6   6.9   5.1   4.4   -        -     -       -     -     -
Whisper-large            1.5B    12.2  15.0  8.9   7.6   17.3     8.1   10.1    19.9  23.8  24.5
Whisper-large-v2         1.5B    11.7  13.7  7.8   6.9   16.9     6.9   9.3     16.5  20.3  22.8
Whisper-large-v3         1.5B    10.7  11.8  6.5   5.5   16.1     4.9   6.9     10.9  15.3  20.5
+ Real-only (15K Hrs)    -       9.7   8.7   5.9   4.4   14.3     4.3   5.5     9.2   11.4  16.9
+ Speech-BT (500K Hrs)   -       8.8   7.3   5.0   4.2   13.6     3.7   4.9     5.2   6.0   13.6

Table 10: Performance comparison across languages on Common Voice 16.
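The low-resource gains discussed in Section 5.5 can be spot-checked from Table 10. For example, the relative Common Voice 16 error-rate reductions of the full Speech-BT model over the Whisper-large-v3 baseline:

```python
# Common Voice 16 error rates for the low-resource languages (Table 10):
# Whisper-large-v3 baseline vs. + Speech-BT (500K Hrs).
baseline = {"cs": 10.9, "hu": 15.3, "vi": 20.5}
speech_bt = {"cs": 5.2, "hu": 6.0, "vi": 13.6}

def relative_reduction(before: float, after: float) -> float:
    """Fractional error-rate reduction, e.g. 0.5 means the error halved."""
    return (before - after) / before

for lang in baseline:
    print(lang, f"{relative_reduction(baseline[lang], speech_bt[lang]):.1%}")
```

Czech and Hungarian more than halve their error rates, consistent with the particularly large low-resource improvements reported in the main results.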
arXiv:2505.16973v2 [cs.CL] 23 May 2025

VERIFASTSCORE: Speeding up long-form factuality evaluation

Rishanth Rajendhran1, Amir Zadeh2, Matthew Sarte2, Chuan Li2, Mohit Iyyer1
1University of Maryland  2Lambda Labs
{rishanth, miyyer}@umd.edu

Abstract

Metrics like FACTSCORE and VERISCORE that evaluate long-form factuality operate by decomposing an input response into atomic claims and then individually verifying each claim. While effective and interpretable, these methods incur numerous LLM calls and can take upwards of 100 seconds to evaluate a single response, limiting their practicality in large-scale evaluation and training scenarios. To address this, we propose VERIFASTSCORE, which leverages synthetic data to fine-tune Llama3.1 8B for simultaneously extracting and verifying all verifiable claims within a given text based on evidence from Google Search. We show that this task cannot be solved via few-shot prompting with closed LLMs due to its complexity: the model receives ~4K tokens of evidence on average and needs to concurrently decompose claims, judge their verifiability, and verify them against noisy evidence. However, our fine-tuned VERIFASTSCORE model demonstrates strong correlation with the original VERISCORE pipeline at both the example level (r = 0.80) and system level (r = 0.94) while achieving an overall speedup of 6.6× (9.9× excluding evidence retrieval) over VERISCORE. To facilitate future factuality research, we publicly release our VERIFASTSCORE model and synthetic datasets.1

1 Introduction

Modern methods to evaluate the factuality of long-form generation, such as FACTSCORE (Min et al., 2023), SAFE (Wei et al., 2024), and VERISCORE (Song et al., 2024), rely on a pipelined approach that uses LLMs to decompose an input text into "atomic" claims and then verify each claim against retrieved evidence documents.
While this pipeline results in interpretable outputs that correlate strongly with human annotations of factuality, it is also slow (Liu et al., 2025), requiring many calls to large models to evaluate a text, which limits its usability in both evaluation and training (e.g., as a reward model in RLHF).

1 Code and data available at https://github.com/RishanthRajendhran/VeriFastScore.

In this paper, we propose a significantly faster alternative: a single-pass factuality evaluator that simultaneously performs claim decomposition and verification over a long-form input. Our method, VERIFASTSCORE, is trained to jointly extract all verifiable claims from a response and assess their factuality against retrieved evidence. To train VERIFASTSCORE, we collate sentence-level evidence with claim-level outputs produced by VERISCORE, a high-quality general-domain factuality evaluation metric, into a synthetic dataset used to fine-tune Llama3.1 8B Instruct (Grattafiori et al., 2024). Each training example contains a response to be evaluated as well as a corresponding evidence context that consists of web search results retrieved by querying individual sentences in the response. VERIFASTSCORE is then asked to output a list of verifiable claims from the response, each labeled as Supported or Unsupported based on the evidence (see Figure 1). This task is non-trivial: the model must internally decontextualize the response, identify atomic claims, resolve coreferences, and assess support against an evidence context spanning up to 4K tokens of potentially noisy information. Standard few-shot prompting of powerful closed models like GPT-4o struggles with this complexity, achieving a
Pearson correlation of only 0.33 with VERISCORE despite high API costs. Despite these challenges, VERIFASTSCORE matches the quality of VERISCORE closely while offering substantial improvements in both cost and runtime efficiency: it achieves a Pearson correlation of 0.80 (p < 0.001) with VERISCORE's scores, a 9.9× modeling speedup, and a 6.64× overall speedup in wall-clock time, reducing both modeling and retrieval latency while maintaining the metric's interpretability. Human evaluation corroborates the high quality of outputs from VERIFASTSCORE. We release VERIFASTSCORE and our synthetic training dataset to support efficient, interpretable factuality evaluation and facilitate its use in alignment settings.

[Figure 1 example, abridged: for the prompt "Write a biography of Ted Thompson", both pipelines extract claims such as "Ted Thompson was the general manager of the Green Bay Packers from 2005 to 2017" and label each claim Supported or Unsupported; unlike previous work, VERIFASTSCORE combines claim decomposition and verification into a single model call, significantly speeding up the factuality evaluation pipeline.]

Figure 1: (left) The VERISCORE pipeline (Song et al., 2024) involves a separate call to a trained model (or LLM API) to decompose every sentence of the long-form response. More model calls are issued for each decomposed claim to verify them against claim-level evidence retrieved from Google Search. In contrast, our VERIFASTSCORE method retrieves sentence-level evidence from Google Search, combines it together with the response into a single long-context prompt, and outputs a list of claims and labels with just a single call to a trained model.

2 Building a faster factuality metric

In this section, we first describe the limitations of existing factuality metrics like VERISCORE, whose separate decomposition, retrieval, and verification stages require
several calls to LLMs to complete for a single response. These limitations motivate our design and development of VERIFASTSCORE, which combines all steps in the existing factuality evaluation pipeline into a single model call. To enable such a fast evaluation, we generate synthetic data from VERISCORE and use it to fine-tune a language model on response-level decomposition and verification.

2.1 The "decompose and verify" pipeline for factuality evaluation

Recently-proposed metrics for long-form factuality evaluation such as FACTSCORE and SAFE are implemented by the following multi-stage pipeline:

1. Claim extraction: decompose the long-form model response into "atomic" (i.e., short) claims
2. Evidence retrieval: collect evidence for every claim individually by retrieving relevant documents from a datastore or search engine
3. Claim verification: mark each atomic claim as either supported or unsupported by the evidence collected for that claim

In FACTSCORE and VERISCORE, the first and third stages are implemented by either open-weight or closed LLMs (e.g., GPT-4), while SAFE's decomposition and verification stages are implemented by prompting closed models from OpenAI or Claude.

VERISCORE, our starting point: Unlike FACTSCORE (Min et al., 2023) and SAFE (Wei et al., 2024), which attempt to verify all atomic claims in a model's response using external evidence, the decomposition stage of VERISCORE focuses solely on verifiable claims, defined as:

Verifiable claims describe a single event or state3 with all necessary modifiers (e.g., spatial, temporal, or relative clauses) that help denote entities or events in the real world. They should be verifiable against reliable external sources (e.g., Wikipedia). They should exclude personal experiences, subjective opinions, hypotheticals, suggestions, advice, or instructions.
(Maienborn, 2003, 2019)

3 Event: change of state, for example, "Jensen Huang founded NVIDIA in 1993 in California, U.S." State: for example, "Westborough is a town in Worcester, MA."

Prompt Source    Split   # Responses   Avg. Claims/Resp.   Avg. Tokens (Response / Evidence / Facts)   Supported / Unsupported (%)
VeriScore data   Train   4082          16                  298 / 4381 / 271                            68 / 32
VeriScore data   Test    1815          16                  301 / 4466 / 278                            68 / 32
Tulu3 Personas   Train   4860          20                  366 / 3755 / 334                            59 / 41
Tulu3 Personas   Test    4083          20                  370 / 3714 / 329                            59 / 41

Table 1: Overview of the datasets used. For each data source and split (train/test), we report the total number of model responses, the average number of verifiable claims per response, average token lengths2 of model responses, consolidated evidence, and the consolidated list of verifiable claims extracted by VERISCORE from model responses, and the percentage of claims labeled by VERISCORE as supported and unsupported by retrieved evidence. Labels are more balanced in the dataset collected during this work using prompts from the Tulu3 Personas dataset.

Like SAFE, VERISCORE also relies on Google Search to retrieve evidence in the second stage of the pipeline. We use VERISCORE as a starting point due to its increased domain generality compared to prior long-form factuality metrics.

2.2 Issues with VeriScore

VERISCORE evaluates factuality using a multi-stage pipeline that proceeds sentence-by-sentence through a model response. For each sentence, it extracts verifiable claims using a claim extraction model, where the full response is included in the input to ensure proper decontextualization. Each extracted claim is then used as
a query to retrieve evidence via Google Search. Finally, a claim verification model checks the factuality of each claim against the retrieved evidence. While this careful decomposition and step-wise verification ensures precision, it incurs significant computational cost. On average, a single response of 14 sentences leads to 23 extracted claims (see Table 6). Each response thus results in:

• 14 calls to the claim extraction model (one per sentence),
• 23 Google searches (one per claim),
• 23 calls to the claim verification model.

This totals roughly 60 model or API calls per response. Processing a single response takes approximately 100 seconds, making the method impractical for real-time or large-scale evaluation (Liu et al., 2025; Song et al., 2024).

2.3 VERIFASTSCORE's Integrated Approach

Given advances in instruction following and long-context reasoning in modern LLMs, we investigate whether it is possible to perform claim decomposition and verification in tandem using a single model pass. A central challenge in such an approach is the order of operations: traditional pipelines perform decomposition before retrieval in order to resolve coreferences, identify verifiability, and construct precise search queries. By bypassing decomposition, we must ensure that evidence retrieval remains effective despite potential ambiguity or context dependence in the original response.

Evidence Retrieval Without Claim Decomposition: To retrieve evidence without prior knowledge of claims, we treat each sentence in the model response as a search query. We use the SERPER API,4 as in VERISCORE, to obtain the top 10 search results per sentence. Snippets from all queries are concatenated to form a consolidated evidence context. This method leverages redundancy in web search and the semantic breadth of full-sentence queries to compensate for the lack of explicitly formulated claims.
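The sentence-level retrieval step can be sketched as follows. This is a minimal sketch, not the released implementation: `search_top10` is a hypothetical stand-in for the SERPER API call, and the regex sentence splitter is a simplification.

```python
import re

def search_top10(query: str) -> list[str]:
    """Hypothetical stand-in for a SERPER/Google Search call that would
    return snippets from the top-10 results for the query."""
    return [f"snippet for: {query}"]

def consolidate_evidence(response: str) -> str:
    """Issue one web-search query per sentence of the response and
    concatenate all returned snippets into a single evidence context."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", response) if s.strip()]
    snippets = []
    for sentence in sentences:
        snippets.extend(search_top10(sentence))
    return "\n".join(snippets)
```

In the real pipeline each sentence would contribute up to ten snippets, so the consolidated context can grow to several thousand tokens, which is why the model is fine-tuned for long-context verification.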
Simultaneous Claim Decomposition and Verification: With the consolidated evidence and the full model response as inputs, VERIFASTSCORE performs both claim extraction and verification in a single forward pass. This removes the need for intermediate steps and per-claim processing, offering a significantly more efficient pipeline.

Despite the lack of decomposition at the retrieval stage, we find that VERIFASTSCORE maintains strong performance. On average, each response contains ~23 verifiable claims and is associated with ~186 evidence snippets, totaling over 4K tokens. These are processed together, with VERIFASTSCORE identifying and verifying each claim directly against the evidence context. As shown in Table 9 and Table 5, VERIFASTSCORE achieves high factual precision and recall with respect to VERISCORE, while reducing inference time per response by over 6×.

4 serper.dev

Score calculation: To score a model's response, we adapt the metrics used in VERISCORE. Factual precision of a model response r is defined as:

P(r) = S(r) / |C|

where C is the set of all verifiable claims in r and S(r) denotes the number of verifiable claims in r that are supported by the consolidated evidence E. Furthermore, to prevent models from gaming the evaluation, it is important to also consider the number of verifiable claims produced by the model in its response. Thus, we also compute factual recall as follows:

R(r) = min(1, |C| / K)

where K is the median number of verifiable claims across model responses. We then compute F1@K(r):

F1@K(r) = 2 P(r) R(r) / (P(r) + R(r))  if S(r) > 0,  and  0  if S(r) = 0

The final score for a model M across a dataset X is:

VERIFASTSCORE = (1 / |X|) Σ_{x∈X} F1@K(M_x)    (1)

2.4 Generating Synthetic Training Data for VERIFASTSCORE

Data Sources and Motivation: To train VERIFASTSCORE, we began with data collected for human evaluation during the development of VERISCORE (see VeriScore data in Table 1), consisting of prompts sourced from LongFact (Wei et al., 2024), AskHistorian, ELI5 (Xu et al., 2023), ShareGPT (Chiang et al., 2023), FreshQA (Vu et al., 2023), etc. Early experiments with subsets of this data of different sizes ranging between 0.5K and 5K revealed a clear trend of improved model performance with increasing training data size. Motivated by this, we aimed to scale up training via a synthetic dataset generated using the VERISCORE pipeline.

2 Token lengths are estimated using the Llama 3.1 Instruct tokenizer.

Section    Content (see Table 8 for full text)
SYSTEM     You are trying to verify factuality by extracting atomic, verifiable claims. . . Output: <fact i>: {Supported, Unsupported}. If none, output "No verifiable claim."
RESPONSE   . . . In the summer of 1215, a pivotal moment in the course of Western political and legal history unfolded in England. . . .
EVIDENCE   The Magna Carta is the charter of English liberties granted by King John on June 15, 1215, . . .
FACTS      1. The Magna Carta is considered a pivotal moment in Western political and legal history: Supported 2. . . .

Table 2: Heavily-truncated example of the prompt format used to fine-tune VERIFASTSCORE for response-level factuality evaluation. The model receives a SYSTEM prompt, RESPONSE, and EVIDENCE, and it is asked to generate a list of FACTS that consists of verifiable claims and their labels.
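The scoring formulas can be sketched in a few lines, assuming each response has already been reduced to a pair of counts (number of supported claims, total number of verifiable claims); the pairing is an assumption of this sketch, not part of the released code.

```python
import statistics

def f1_at_k(num_supported: int, num_claims: int, k: float) -> float:
    """F1@K for one response: precision over extracted claims,
    recall capped at 1 using the median claim count K."""
    if num_supported == 0:
        return 0.0
    precision = num_supported / num_claims   # P(r) = S(r) / |C|
    recall = min(1.0, num_claims / k)        # R(r) = min(1, |C| / K)
    return 2 * precision * recall / (precision + recall)

def verifastscore(responses: list[tuple[int, int]]) -> float:
    """Mean F1@K over a dataset of (supported, total) claim counts;
    K is the median number of verifiable claims per response (Eq. 1)."""
    k = statistics.median(total for _, total in responses)
    return sum(f1_at_k(s, c, k) for s, c in responses) / len(responses)
```

For example, a response with 10 of 20 claims supported and K = 20 has precision 0.5 and recall 1, giving F1@K = 2/3.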
Specifically, we targeted prompts likely to elicit factual claims, enabling us to generate high-quality supervision for claim decomposition and verification.

Prompt Selection and Dataset Choice: We selected prompts from the Tulu3 Personas dataset (Lambert et al., 2025), which contains diverse and high-quality instructions across domains intended to test a model's instruction-following ability and adaptation to user intent. This dataset was chosen because it aligns well with our goal of generating model responses that contain factual, verifiable content. We used GPT-4o-mini to identify a subset of prompts likely to elicit factual claims.5 A manual inspection of 200 filtered prompts showed that over 90% did indeed lead to factually dense responses.

Data Collection Pipeline: From the filtered prompts, we generated responses using a selection of LLMs: GPT-4o (OpenAI et al., 2024), Llama3.1 8B, Gemma2 7B (Team et al., 2024), Mistral-v2 7B (Jiang et al., 2023), and Qwen2 7B (Yang et al., 2024). We collected a total of about 9K prompt-response pairs, ensuring diversity across model architectures and outputs. Each response was processed through the VERISCORE pipeline to decompose it into verifiable claims and assign verification labels based on evidence retrieved using those claims.

5 More details in A.4.

Evidence Granularity   Model                 Precision↑  Recall↑  Correct↑  Incorrect↓  Missing↓  Correlation↑
claim-level            GPT-4o few-shot ICL   0.30        0.31     18.1      7.2         73.2      0.28
claim-level            VERIFASTSCORE         0.87        0.90     71.6      15.5        12.3      0.87
sentence-level         GPT-4o few-shot ICL   0.38        0.34     19.3      8.2         71.2      0.33
sentence-level         VERIFASTSCORE         0.83        0.86     66.0      17.6        15.0      0.80

Table 3: Performance metrics across test splits: VERIFASTSCORE produces claims and verification labels that are largely consistent with VERISCORE, achieving a strong correlation of 0.80 even with sentence-level evidence, while significantly outperforming the GPT-4o few-shot prompting baseline. Evidence Granularity denotes whether evidence was retrieved using full sentences or claims: claim-level includes VeriScore data and Tulu3 Personas data, while sentence-level includes Tulu3 Personas data only. Precision and Recall measure overlap between claims identified by VERIFASTSCORE and VERISCORE. Under Claim Accuracy (%), Correct, Incorrect (Label), and Missing (Claim) capture agreement, label mismatches, and omissions relative to VERISCORE. A small fraction of instances were marked Erroneous (not shown) due to ambiguities in automatic evaluation (see A.5). Correlation reports Pearson r between factuality scores (see Eq. 1) assigned by VERIFASTSCORE and VERISCORE. For all Pearson's correlation scores reported in this paper, p < 0.001, unless stated otherwise.

Discrepancy between evidence used during training vs. test-time: A key difference between VERISCORE and VERIFASTSCORE is the granularity at which evidence for claim verification is retrieved. As illustrated in Figure 1, VERISCORE first extracts verifiable claims from the model response and then uses them as search queries to retrieve evidence for claim verification. This is in contrast to the setting in VERIFASTSCORE, where evidence for claim verification is collected even before claim decomposition. This is achieved by using the sentences in the model response to collect search results, which are then consolidated together for verification.
To train and evaluate the VERIFASTSCORE model, we opted to keep the claim-level evidence collected by VERISCORE during the synthetic data generation pipeline outlined earlier for two reasons:

• it allowed us to reuse the unmodified and validated VERISCORE pipeline to ensure high-quality data, and
• it promotes robustness by training the model to verify claims against arbitrary but relevant evidence sets.

Additionally, we also collected sentence-level evidence for the Tulu3-personas subset of our training and test data, in line with the true setting of VERIFASTSCORE. We evaluate the model's ability to generalize to test-time evidence conditions in Section 3.

Evidence Overlap and Label Robustness: To assess whether our training setup, consisting largely of claim-level evidence, is compatible with test-time inference, we analyzed the overlap between evidence gathered using these two different approaches. We found that 17% of claim-level retrieved URLs are also present in the sentence-level evidence, and about 60% of claims have at least one matching URL in the sentence-level evidence.6

Empirical proof of soundness: Despite this modest overlap, we observed a strong correlation between verification labels and model-level factuality scores produced by VERIFASTSCORE and VERISCORE (discussed in Section 3), suggesting that the model generalizes well across evidence types. This empirical agreement indicates that the discrepancy in evidence provenance has limited practical impact, and supports the validity of using claim-level supervision for training.

2.5 Evaluation
Metrics: Since we motivate VERIFASTSCORE as a speedy replacement for the multi-stage pipeline of VERISCORE, we want its outputs to be closely aligned with outputs from VERISCORE. We evaluate VERIFASTSCORE using the following main metrics:

• Claim Precision: the proportion of claims produced by VERIFASTSCORE that are also produced by VERISCORE,
• Claim Recall: the proportion of claims produced by VERISCORE that are also produced by VERIFASTSCORE,
• Claim Accuracy:
  – Correct: percentage of claims produced by both VERIFASTSCORE and VERISCORE with the same verification label,
  – Incorrect Label: percentage of claims produced by both VERIFASTSCORE and VERISCORE with differing verification labels,
  – Missing Claim: percentage of claims produced by VERISCORE but not by VERIFASTSCORE (analogous to claim recall),
• Pearson's Correlation: correlation coefficient between factuality scores of model responses produced by VERIFASTSCORE and by VERISCORE.

6 Note that we only count exact URL matches; semantic overlap is likely higher.

Automatic Evaluation: During early experiments, we found that exact match metrics (comparing model-extracted claims and verification labels directly to reference outputs) were often too brittle. Minor surface-level differences (e.g., paraphrasing, slight reordering, or omission of non-essential modifiers) led to false negatives, thereby underestimating the model's actual performance. For example, VERIFASTSCORE achieved an exact match claim accuracy of 23.7% on the test set, versus 71.6% with paraphrase-aware evaluation.7

While exact match metrics were retained during training for their efficiency and stability, we adopted a more nuanced automatic evaluation for the test set. Specifically, we used GPT-4o-mini as a judge to compare the model outputs against gold reference annotations and assess semantic overlap. This helped capture more accurate performance trends.
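The claim-level metrics can be sketched as below, using exact string matching between claims as a simplification of the paraphrase-aware GPT-4o-mini judge described in the paper; the dict-based representation is an assumption of this sketch.

```python
def claim_metrics(fast: dict[str, str], veri: dict[str, str]) -> dict[str, float]:
    """fast / veri map each extracted claim to its 'Supported' or
    'Unsupported' label; veri (VeriScore output) is the reference."""
    shared = set(fast) & set(veri)
    same_label = sum(fast[c] == veri[c] for c in shared)
    n_ref = len(veri) or 1
    return {
        "claim_precision": len(shared) / len(fast) if fast else 0.0,
        "claim_recall": len(shared) / n_ref,
        "correct": same_label / n_ref,                      # same claim, same label
        "incorrect_label": (len(shared) - same_label) / n_ref,  # same claim, label flip
        "missing": (len(veri) - len(shared)) / n_ref,       # reference claims not recovered
    }
```

Note that `correct + incorrect_label + missing` always sums to 1 over the reference claims, mirroring the breakdown reported in Table 3.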
The evaluation setup and prompts are detailed in Appendix A.5.

7 We re-report Table 9 with exact match metrics in the Appendix.

2.6 Training the VERIFASTSCORE Model

We train VERIFASTSCORE using a two-stage fine-tuning procedure on synthetic data generated via the VERISCORE pipeline. The base model is Llama3.1 8B Instruct, trained first on claim-level evidence and then on a mixture of claim- and sentence-level evidence to better match the test-time setting. This mixed-evidence setup improves robustness and mitigates the observed performance drop when relying solely on sentence-level evidence (see Table 10).

All training examples consist of a model-generated response and retrieved evidence, with supervision derived from VERISCORE's annotated claim lists and verification labels. Details of the multi-stage training setup are provided in Appendix A.1.

3 Results

In this section, we describe the test set performance and computational cost of the trained VERIFASTSCORE model and compare it with VERISCORE.

VeriScore:      GPT-4o 60.1, GPT-4o-mini 56.7, Claude 3.7 55.4, o3-mini 49.1, Gemini2 Flash 47.3, Mistral 7B 46.0, Mixtral 8×7B 43.8, Llama3 8B 42.5, Qwen 7B Distill 36.7, Qwen 1.5B 36.4, Olmo 7B 33.4
VeriFastScore:  GPT-4o 57.0, Claude 3.7 53.4, o3-mini 52.5, GPT-4o-mini 51.0, Gemini2 Flash 50.5, Llama3 8B 42.5, Mixtral 8×7B 40.5, Mistral 7B 39.4, Qwen 7B Distill 34.8, Olmo 7B 29.1, Qwen 1.5B 27.5

Figure 2: Model rankings produced by VERISCORE and VERIFASTSCORE using a
random subset of 100 prompts from the test split of Tulu3-Personas data. The rankings produced by both models are fairly consistent with each other despite some differences in order. There is a clear separation between open-weight and closed-weight models.

Example (truncated): Response: . . . The "chimney" and "riverbank" serve as powerful metaphors — the chimney suggesting an unreachable goal while the riverbank signifies his current position . . .
VERIFASTSCORE: (Unsupported) The main character in "Araby" imagines climbing a chimney to reach his desire.
VERIFASTSCORE: (Unsupported) The "riverbank" in "Araby" signifies the main character's current position.
VERISCORE: (Unsupported) The chimney symbolizes an unreachable goal.
VERISCORE: (Unsupported) The riverbank symbolizes the protagonist's current position.
Remarks: This example illustrates the benefit of response-level extraction in VERIFASTSCORE over the sentence-focused VERISCORE; VERIFASTSCORE better de-contextualises each claim.

Example (truncated): Response: . . . Navigating a rainforest can pose several challenges . . . unpredictable weather which can lead to sudden rainstorms or flooding . . .
VERIFASTSCORE: (Supported) Unpredictable weather in a rainforest can lead to sudden rainstorms.
VERISCORE: (Unsupported) Navigating a rainforest can lead to sudden rainstorms.
Remarks: In this example, VERIFASTSCORE exhibits precise semantic alignment between the extracted claim and the model response, while VERISCORE doesn't.

Example (truncated): Response: . . . JSON-formatted letter quoting Admiral Chester W. Nimitz: "Courage is not the absence of fear . . . " . . .
VERIFASTSCORE: (Unsupported) Admiral Chester W. Nimitz said, "Courage is not the absence of fear, but rather the judgment that something else is more important than fear."
VERISCORE: No verifiable claim.
Remarks: VERIFASTSCORE remains robust even when facts are buried in noisy, non-standard contexts (here, a JSON letter).

Example (truncated): Response: . . . The fast-paced editing, over-the-top physical comedy, and clever sound effects have become hallmarks of animation, influencing everything from anime to Pixar films . . .
VERIFASTSCORE: (Unsupported) Over-the-top physical comedy is a hallmark of animation.
VERISCORE: (Supported) Classic cartoons influenced the use of over-the-top physical comedy in animation.
Remarks: Spurious inference via clause-conflation: VERIFASTSCORE sometimes glues fragments into an unsupported cause-effect claim.

Table 4: Qualitative analysis: VERIFASTSCORE exhibits generalization and robustness at test time due to diverse training, but still inherits some undesirable behavior from VERISCORE.

3.1 Main results

High correlation between outputs from VERIFASTSCORE and those from VERISCORE: In Table 9, we observe that VERIFASTSCORE outputs have both high factual precision and high factual recall with outputs from VERISCORE as ground truth, irrespective of the provenance of the evidence. There is a drop in claim accuracy when using sentence-level evidence, which manifests itself as an increase in incorrect verification labels and in missing claims. The higher percentage of missing claims can possibly be mitigated by more training on sentence-level evidence.

GPT-4o Baseline: For a baseline, we prompt GPT-4o with three few-shot demonstrations. Despite the high API cost (roughly $300 for our evaluation), GPT-4o is about 50% less accurate than VERIFASTSCORE, which can even be run locally. A closer inspection revealed that GPT-4o failed to extract any verifiable claim from
about 29% of the instances. These results underscore the complexities involved in simultaneous claim extraction and verification.

3.2 Speedup

Average time spent (s)   Evidence retrieval   Modeling
VERISCORE                21.4                 83.0
VERIFASTSCORE            13.5                 9.5
Speedup                  1.9×                 9.9×

Table 5: Execution time: VERIFASTSCORE spends significantly less time than VERISCORE in the modeling stages, i.e., the extraction and verification stages. Overall, VERIFASTSCORE is 6.6× faster than VERISCORE.8

We measured the execution time for both the VERIFASTSCORE and VERISCORE pipelines. By retrieving evidence at a coarser granularity, VERIFASTSCORE achieves a 39% reduction in API costs and cuts retrieval time by almost half. Additionally, VERISCORE takes significantly longer due to its sequential claim extraction and verification processes. VERIFASTSCORE is approximately 10 times faster in extracting and verifying claims without sacrificing performance (see Table 9). Overall, VERIFASTSCORE achieves a 6.64× speedup compared to VERISCORE. Longer model responses further increase the speedup, as VERISCORE's sentence-by-sentence processing is particularly slow.

8 Estimated from 200 instances.

3.3 Ranking factuality of models with VERIFASTSCORE

We randomly sample 100 prompts from the Tulu3-personas test set to obtain responses from 12 different LLMs:

• Closed-weight: GPT-4o, GPT-4o-mini, o3-mini (OpenAI, 2025), Claude-3.7-Sonnet (Anthropic, 2025), Gemini-2.0-Flash (Google, 2025)
• Open-weight: Llama3.1 8B, Mistral v0.3 7B, Mixtral v0.1 8x7B, Qwen2.5 1.5B (Qwen et al., 2025), Deepseek-R1-distill Qwen 7B (DeepSeek-AI et al., 2025), Olmo2 7B (OLMo et al., 2025)

As shown in Figure 2, factuality scores produced by VERIFASTSCORE correlate strongly with those from VERISCORE (Pearson r = 0.94, p = 1.1e−5), indicating alignment despite differing approaches.
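The system-level agreement reported above is a plain Pearson correlation between the per-model scores assigned by the two metrics; a minimal, dependency-free sketch:

```python
import math

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length score lists
    (e.g., per-model factuality scores from VeriScore and VeriFastScore)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)
```

With only 11 or 12 models per ranking, a high r (here 0.94) indicates that the two metrics order systems almost identically, even when individual scores differ.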
Given that VERIFASTSCORE operates over a single model pass and avoids per-claim retrieval, it delivers these rankings with significantly reduced latency and cost (see Tables 5 and 6).

4 Qualitative analysis

Comparative inspection of claims extracted by VERIFASTSCORE and VERISCORE shows four recurring missteps. First, the model sometimes stitches together unrelated clauses into unwarranted cause-effect claims. Second, it extracts statements using vague or anaphoric phrases that lack the context needed for standalone verification. Third, evidence retrieval using Google Search can surface both supporting and contradicting passages for the same claim, causing label flips. And fourth, both approaches occasionally extract unverifiable claims from the model response.

Human evaluation: A manual review of 401 claims across 21 test instances found that VERIFASTSCORE reliably grounded its outputs in the response text, with no hallucinated claims. Verification accuracy was high, though some false positives arose from noisy evidence. Common failure modes included missed claims (in 29% of examples) and unverifiable outputs due to missing context. Full details are in Appendix A.3.

5 Related work

Factuality evaluation via decomposition and verification: FACTSCORE computes factual precision over atomic claims (Min et al., 2023), while SAFE uses GPT-4 for claim-wise verification (Wei et al., 2024). VERISCORE extracts only verifiable claims and uses Google Search to verify them (Song et al., 2024). Other systems like FACTOOL (Chern et al., 2023), RARR (Gao et al., 2023), and COVE (Elazar et al., 2021) operate similarly, albeit with document-level or retrieval-augmented reasoning.

Decomposition Quality and Claim Granularity: Wanner et al. (2024) introduce DECOMPSCORE to quantify
decomposition consistency. Chen et al. (2023) and Yue et al. (2024) report failures tied to vague or anaphoric claims. Hu et al. (2025) catalog common decomposition errors and tradeoffs. VERISCORE explicitly avoids unverifiable content like opinions or suggestions (Song et al., 2024).

End-to-End and Lightweight Approaches: MINICHECK avoids decomposition entirely, using a small model to classify sentence-level factuality (Tang et al., 2024). LLM-OASIS introduces a synthetic benchmark for scalable factuality evaluation (Scirè et al., 2025), while FACTCHECK-GPT trains an end-to-end verifier integrating retrieval and judgment (Wang et al., 2024).

6 Conclusion

We present VERIFASTSCORE, a model that unifies claim decomposition and verification into a single model pass. Trained on synthetic data derived from VERISCORE, our method processes long-form outputs with consolidated evidence and produces fine-grained factuality judgments.

Despite the complexity of the task, VERIFASTSCORE closely matches the accuracy of VERISCORE (Pearson r = 0.80), while achieving a 6.6× speedup in evaluation time. It also outperforms strong few-shot prompting baselines such as GPT-4o, and provides interpretable, claim-level outputs suitable for evaluation and alignment.

We release the model and data to support scalable, efficient factuality assessment and to facilitate further research on long-form factuality and reward modeling.

7 Limitations

Latency remains a bottleneck. While VERIFASTSCORE offers substantial speedups over multi-stage pipelines like VERISCORE, it still requires full-sequence generation conditioned on long evidence and response contexts. This makes it less suited for real-time or low-latency feedback settings such as RLHF. Although we explored a scoring-head variant (Appendix A.2) to reduce inference cost, it struggled to match the accuracy and reliability of the generative approach.

Reliance on synthetic supervision.
VERIFASTSCORE is trained on synthetic labels produced by VERISCORE, which, while effective, are not free of noise or bias. Errors in claim decomposition, incomplete evidence, or incorrect verifiability judgments in the teacher outputs may propagate into the learned model, potentially reinforcing systemic weaknesses in downstream predictions.

Retrieval noise and specificity. Our retrieval strategy issues a Google search query for each sentence in the model response, aggregating the results into a shared evidence context. While this aligns with the single-pass structure of VERIFASTSCORE, it can introduce irrelevant or insufficient evidence, especially for vague or underspecified sentences. More adaptive or agentic retrieval, allowing the model to detect failures and issue improved follow-up queries, may yield higher evidence quality.

Lack of explicit rationales. VERIFASTSCORE produces fine-grained factuality labels for claims but does not provide natural language justifications. Although rationale generation adds computational cost, it could improve interpretability and trustworthiness, especially in sensitive domains. Training with rationales, even if only used during finetuning (e.g., as in Qwen3's "reasoning mode" (Yang et al., 2025)), may offer benefits without incurring runtime overhead at inference time.

Noise in evaluation signals. We use GPT-4o-mini to assess label correctness and semantic equivalence, which enables scalable evaluation but introduces potential noise, particularly for nuanced paraphrase detection. This may affect metric estimates such as precision and recall, and obscure more subtle performance differences across models.
https://arxiv.org/abs/2505.16973v2
Language and configuration limitations. Our experiments focus primarily on English-language outputs. Although some non-English generations were present, we did not conduct language-specific analysis. Additionally, we only minimally modified prompts from VERISCORE and did not exhaustively tune training configurations. Future work could explore multilingual robustness and systematically investigate the effects of prompt and hyperparameter choices.

Ethics Statement

This work involves training and evaluating factuality metrics for long-form language generation. All model outputs analyzed in this study were generated by publicly available language models. The human evaluation was conducted by the first author and involved no sensitive or private user data. No personally identifiable information (PII) was collected, and no crowdworkers or third-party annotators were employed. The synthetic data used to train our model was generated automatically using existing metrics and publicly available datasets. We will release our model and training data upon publication to support reproducibility and future research.

Acknowledgments

We extend our special gratitude to Yixiao Song for sharing data and details about VERISCORE with us. We also thank members of the NGRAM lab at UMD for their feedback. This project was partially supported by awards IIS-2046248, IIS-2312949, and IIS-2202506 from the National Science Foundation (NSF).

References

Anthropic. 2025. Claude 3.7 sonnet system card.

Shiqi Chen, Yiran Zhao, Jinghan Zhang, I-Chun Chern, Siyang Gao, Pengfei Liu, and Junxian He. 2023. Felm: Benchmarking factuality evaluation of large language models. Preprint, arXiv:2310.00741.

I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, and Pengfei Liu. 2023.
Factool: Factuality detection in generative ai – a tool augmented framework for multi-task and multi-domain scenarios. Preprint, arXiv:2307.13528.

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality.

DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, and Chengda Lu... 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. Preprint, arXiv:2501.12948.

Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models. Preprint, arXiv:2102.01017.

Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2023. Rarr: Researching and revising what language models say, using language models. Preprint, arXiv:2210.08726.

Google. 2025. Gemini 2.0 flash. https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-0-flash. Accessed: 2025-05-16.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, and Anthony Hartshorn... 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783.

Qisheng Hu,
Quanyu Long, and Wenya Wang. 2025. Decomposition dilemmas: Does claim decomposition boost or burden fact-checking performance? Preprint, arXiv:2411.02400.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. Preprint, arXiv:2310.06825.

Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V. Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, Yuling Gu, Saumya Malik, Victoria Graf, Jena D. Hwang, Jiangjiang Yang, Ronan Le Bras, Oyvind Tafjord, Chris Wilhelm, Luca Soldaini, Noah A. Smith, Yizhong Wang, Pradeep Dasigi, and Hannaneh Hajishirzi. 2025. Tulu 3: Pushing frontiers in open language model post-training. Preprint, arXiv:2411.15124.

Siyi Liu, Kishaloy Halder, Zheng Qi, Wei Xiao, Nikolaos Pappas, Phu Mon Htut, Neha Anna John, Yassine Benajiba, and Dan Roth. 2025. Towards long context hallucination detection. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 7827–7835, Albuquerque, New Mexico. Association for Computational Linguistics.

Claudia Maienborn. 2003. Event-internal modifiers: Semantic underspecification and conceptual interpretation, pages 475–510. De Gruyter Mouton, Berlin, Boston.

Claudia Maienborn. 2019. Events and States. In The Oxford Handbook of Event Structure. Oxford University Press.

Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023. Factscore: Fine-grained atomic evaluation of factual precision in long form text generation. Preprint, arXiv:2305.14251.
Team OLMo, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, Dustin Schwenk, Oyvind Tafjord, Taira Anderson, David Atkinson, Faeze Brahman, Christopher Clark, Pradeep Dasigi, Nouha Dziri, Michal Guerquin, Hamish Ivison, Pang Wei Koh, Jiacheng Liu, Saumya Malik, William Merrill, Lester James V. Miranda, Jacob Morrison, Tyler Murray, Crystal Nam, Valentina Pyatkin, Aman Rangapur, Michael Schmitz, Sam Skjonsberg, David Wadden, Christopher Wilhelm, Michael Wilson, Luke Zettlemoyer, Ali Farhadi, Noah A. Smith, and Hannaneh Hajishirzi. 2025. 2 OLMo 2 Furious. Preprint, arXiv:2501.00656.

OpenAI, Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Mądry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, Alex Paino, and Alex Renzin... 2024. Gpt-4o system card. Preprint, arXiv:2410.21276.

OpenAI. 2025. Openai o3-mini system card. https://cdn.openai.com/o3-mini-system-card-feb10.pdf. Accessed: 2025-05-16.

Qwen, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan
Qiu. 2025. Qwen2.5 technical report. Preprint, arXiv:2412.15115.

Alessandro Scirè, Andrei Stefan Bejgu, Simone Tedeschi, Karim Ghonim, Federico Martelli, and Roberto Navigli. 2025. Truth or mirage? Towards end-to-end factuality evaluation with llm-oasis. Preprint, arXiv:2411.19655.

Yixiao Song, Yekyung Kim, and Mohit Iyyer. 2024. Veriscore: Evaluating the factuality of verifiable claims in long-form text generation. Preprint, arXiv:2406.19276.

Liyan Tang, Philippe Laban, and Greg Durrett. 2024. Minicheck: Efficient fact-checking of llms on grounding documents. Preprint, arXiv:2404.10774.

Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, and Léonard Hussenot... 2024. Gemma 2: Improving open language models at a practical size. Preprint, arXiv:2408.00118.

Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, and Thang Luong. 2023. Freshllms: Refreshing large language models with search engine augmentation. Preprint, arXiv:2310.03214.

Yuxia Wang, Revanth Gangi Reddy, Zain Muhammad Mujahid, Arnav Arora, Aleksandr Rubashevskii, Jiahui Geng, Osama Mohammed Afzal, Liangming Pan, Nadav Borenstein, Aditya Pillai, Isabelle Augenstein, Iryna Gurevych, and Preslav Nakov. 2024. Factcheck-bench: Fine-grained evaluation benchmark for automatic fact-checkers. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 14199–14230, Miami, Florida, USA. Association for Computational Linguistics.

Miriam Wanner, Seth Ebner, Zhengping Jiang, Mark Dredze, and Benjamin Van Durme. 2024. A closer look at claim decomposition. In Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024), pages 153–175, Mexico City, Mexico. Association for Computational Linguistics.
Jerry Wei, Chengrun Yang, Xinying Song, Yifeng Lu, Nathan Hu, Jie Huang, Dustin Tran, Daiyi Peng, Ruibo Liu, Da Huang, Cosmo Du, and Quoc V. Le. 2024. Long-form factuality in large language models. Preprint, arXiv:2403.18802.

Fangyuan Xu, Yixiao Song, Mohit Iyyer, and Eunsol Choi. 2023. A critical evaluation of evaluations for long-form question answering. Preprint, arXiv:2305.18201.

An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jing Zhou, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, Le Yu, Lianghao Deng, Mei Li, Mingfeng Xue, Mingze Li, Pei Zhang, Peng Wang, Qin Zhu, Rui Men, Ruize Gao, Shixuan Liu, Shuang Luo, Tianhao Li, Tianyi Tang, Wenbiao Yin, Xingzhang Ren, Xinyu Wang, Xinyu Zhang, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yinger Zhang, Yu Wan, Yuqiong Liu, Zekun Wang, Zeyu Cui, Zhenru Zhang, Zhipeng Zhou, and Zihan Qiu. 2025. Qwen3 technical report. Preprint, arXiv:2505.09388.

An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jianxin Yang, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng
Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Xuejing Liu, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zhifang Guo, and Zhihao Fan. 2024. Qwen2 technical report. Preprint, arXiv:2407.10671.

Zhenrui Yue, Huimin Zeng, Lanyu Shang, Yifan Liu, Yang Zhang, and Dong Wang. 2024. Retrieval augmented fact verification by synthesizing contrastive arguments. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10331–10343, Bangkok, Thailand. Association for Computational Linguistics.

A Appendix

A.1 Training VERIFASTSCORE

We follow a two-stage pipeline to train our VERIFASTSCORE model.

• In the first stage, we finetune Llama3.1 8B on the combined train split of VeriScore data and Tulu3 Personas data using claim-level evidence.

• In the second stage, we further finetune the best model from stage 1 on the same dataset but with a mixture (60%:40%) of claim-level and sentence-level evidence.

Motivation for multi-stage training: In our synthetic data, ground-truth claim verification labels were obtained from VERISCORE using claim-level evidence. To ensure label correctness and the model’s reliance on the evidence provided in the input when generating output labels, the first stage trains the VERIFASTSCORE model using consolidated claim-level evidence.

Drop in accuracy with sentence-level evidence: As seen in Table 9, there was a ∼20% drop in claim accuracy when using sentence-level evidence, which is our true test setting, instead of claim-level evidence. This is possibly due to differences in the quality of retrieved search results and the length of consolidated evidence depending on the granularity at which evidence was collected.
To mitigate this, the model was further trained on a mixture of claim- and sentence-level evidence in the second stage of training.

Compute details: We finetune our models for 10 epochs on 4 A100 GPUs for about 1–2 days. We evaluate the model after every epoch on our validation split and save the best model based on claim accuracy. Test evaluations of the model were performed on a single GH200 GPU node.

A.2 VERIFASTSCORER model

To take cost-efficiency one step further, we trained a model with a scoring head to directly produce the factuality score given the model response and consolidated sentence-level evidence, without going through the whole process of claim decomposition and verification. We envision such a model being used as a reward model in RLHF setups to train models to produce more factual responses.

Model architecture: We experimented with starting from the base model as well as from the best model from our multi-stage training pipeline. We explored two different architectures for the scoring head:

• A linear output layer on top of the last hidden state of the last token in the input prompt

• A linear projection layer on top of a learnable attention layer over the last hidden states of all tokens in the input prompt, followed by a linear output layer
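The two head variants above can be sketched as follows, a minimal pure-Python illustration assuming hidden states are available as lists of floats; the weight names are illustrative, and the projection layer of the second variant is folded into a single output layer for brevity:

```python
import math

def last_token_head(hidden, w_out, b_out):
    # Variant 1: linear output layer on the last token's hidden state.
    # hidden: list of token vectors; w_out: weight vector; b_out: scalar bias.
    return sum(h * w for h, w in zip(hidden[-1], w_out)) + b_out

def attention_pool_head(hidden, w_attn, w_out, b_out):
    # Variant 2 (simplified): learnable attention over all token states,
    # then a linear output layer on the pooled vector.
    scores = [sum(h * w for h, w in zip(vec, w_attn)) for vec in hidden]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]              # softmax attention weights
    d = len(hidden[0])
    pooled = [sum(weights[i] * hidden[i][j] for i in range(len(hidden)))
              for j in range(d)]                 # attention-weighted average
    return sum(p * w for p, w in zip(pooled, w_out)) + b_out
```

In practice these heads would sit on top of the transformer’s final hidden states and be trained with a regression loss against the target factuality score.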
To avoid overfitting, we added a dropout layer in both of these configurations and ablated by freezing the language model parameters. We also experimented with a multi-task learning setup, with losses computed from both the scoring head output and the decomposition and verification outputs from the language modeling head.

Abysmal generalization: Our best model achieved 75% score accuracy on the train split, 55% on the validation split, and 45% on the test split. Due to poor correlation with ground-truth scores, this avenue was abandoned. We leave it for future work to explore more complex scoring head architectures and training configurations.

                Retrieval time (%)   Average SERPER queries   Cost estimate ($)
VERISCORE             20.5                     23                 $0.01725
VERIFASTSCORE         58.8                     14                 $0.0105

Table 6: Cost estimates: VERIFASTSCORE makes fewer search queries than VERISCORE, thereby reducing cost.

A.3 Human Evaluation

To better understand VERIFASTSCORE’s behavior, the first author conducted a small-scale human evaluation of 401 (response, evidence, claim, label) tuples, drawn from 21 test instances in the Tulu3 Personas dataset. The evaluation focused on three core aspects of VERIFASTSCORE’s output: claim extraction, claim verifiability, and verification accuracy.

Claim grounding and hallucination: All extracted claims were grounded in the original model response, with no instances of hallucinated or fabricated claims. This is a crucial property for a factuality metric, particularly in downstream use cases like hallucination detection and RLHF training, where spurious claim hallucination could mislead supervision.

Extraction coverage and omissions: VERIFASTSCORE was generally effective at identifying claims, especially in responses rich with factual content. However, it showed reduced reliability in responses that interweave factual and non-factual content, or contain few factual claims overall.
Claim omission was observed in approximately 29% of evaluated instances, though two-thirds of these cases involved fewer than four missing claims. Notably, in 50% of omission cases, VERIFASTSCORE extracted claims only from the final few sentences of the response, missing earlier factual content, which suggests a positional bias or early truncation.

Unverifiable and ill-formed claims: Roughly 10% of the extracted claims were judged to be unverifiable due to missing key details (e.g., named entities, dates) in the original model response. In rare cases, VERIFASTSCORE also extracted claims from inherently unverifiable sentences, such as advice or vague generalizations. Ill-formed claims, i.e., those missing critical contextual elements, were rare (around 1%), but they did alter the meaning of the original sentence in notable ways.

9 Estimated using the tier priced at $0.75/1000 queries.

Verification accuracy: VERIFASTSCORE was largely accurate in labeling claims against the provided evidence, with an estimated verification error rate of 8.5%. Among these errors, 76% were false positives: instances where VERIFASTSCORE incorrectly marked a claim as supported. These errors often arose when partial matches or topical overlaps misled the model, particularly in cases where evidence spanned multiple unrelated contexts or referenced similar entities in different time periods or locations.

Evidence quality and noise: Finally, we observed that the retrieved evidence was often noisy, especially in cases where the original response contained few or vague factual claims. This
is likely due to the sentence-level retrieval strategy used by VERIFASTSCORE, which may issue underspecified queries and produce loosely related results. These limitations in evidence quality can contribute both to omissions in extraction and errors in verification.

A.4 Automatic Filtering of Tulu3 prompts

Table 7: Automatic Filtering: Prompt used for automatic filtering of Tulu3-personas prompts with GPT-4o-mini.

You are a helpful assistant. Given a prompt, you are to judge whether a response to the prompt would contain fine-grained factually verifiable claims. Each of these fine-grained facts should be verifiable against reliable external world knowledge (e.g., via Wikipedia). Any story, personal experiences, hypotheticals (e.g., "would be" or subjunctive), subjective statements (e.g., opinions), suggestions, advice, instructions, and other such content is not considered factually verifiable. Biographical, historical, scientific, and other such texts are not personal experiences or stories. Here are some examples of responses and some fine-grained factually verifiable claims present in those responses:

Text: The sweet potato or sweetpotato (Ipomoea batatas) is a dicotyledonous plant that belongs to the bindweed or morning glory family, Convolvulaceae. Its large, starchy, sweet-tasting tuberous roots are used as a root vegetable. The young shoots and leaves are sometimes eaten as greens.

Sentence to be focused on: Its large, starchy, sweet-tasting tuberous roots are used as a root vegetable.

Facts:
• Sweet potatoes’ roots are large.
• Sweet potatoes’ roots are starchy.
• Sweet potatoes’ roots are sweet-tasting.
• Sweet potatoes’ roots are tuberous.
• Sweet potatoes’ roots are used as a root vegetable.

Text: After the success of the David in 1504, Michelangelo’s work consisted almost entirely of vast projects.
He was attracted to these ambitious tasks while at the same time rejecting the use of assistants, so that most of these projects were impractical and remained unfinished.

Sentence to be focused on: After the success of the David in 1504, Michelangelo’s work consisted almost entirely of vast projects.

Facts:
• Michelangelo achieved the success of the David in 1504.
• After 1504, Michelangelo’s work consisted almost entirely of vast projects.

Text: After the success of the David in 1504, Michelangelo’s work consisted almost entirely of vast projects. He was attracted to these ambitious tasks while at the same time rejecting the use of assistants, so that most of these projects were impractical and remained unfinished. In 1504 he agreed to paint a huge fresco for the Sala del Gran Consiglio of the Florence city hall to form a pair with another just begun by Leonardo da Vinci. Both murals recorded military victories by the city (Michelangelo’s was the Battle of Cascina), but each also gave testimony to the special skills of the city’s much vaunted artists.

Sentence to be focused on: In 1504 he agreed to paint a huge fresco for the Sala del Gran Consiglio of the Florence city hall to form a pair with another just begun by Leonardo da Vinci.

Facts:
• In 1504, Michelangelo agreed to paint a huge fresco for the Sala del Gran Consiglio of the Florence city hall.
• Around 1504, Leonardo da Vinci just began with a mural for
the Florence city hall.

Text: After the success of the David in 1504, Michelangelo’s work consisted almost entirely of vast projects. He was attracted to these ambitious tasks while at the same time rejecting the use of assistants, so that most of these projects were impractical and remained unfinished. In 1504 he agreed to paint a huge fresco for the Sala del Gran Consiglio of the Florence city hall to form a pair with another just begun by Leonardo da Vinci. Both murals recorded military victories by the city (Michelangelo’s was the Battle of Cascina), but each also gave testimony to the special skills of the city’s much vaunted artists. Leonardo’s design shows galloping horses, Michelangelo’s active nudes—soldiers stop swimming and climb out of a river to answer an alarm.

Sentence to be focused on: Both murals recorded military victories by the city (Michelangelo’s was the Battle of Cascina), but each also gave testimony to the special skills of the city’s much vaunted artists.

Facts:
• Michelangelo’s murals for the Florence city hall recorded military victories by the city.
• Leonardo da Vinci’s murals for the Florence city hall recorded military victories by the city.
• Michelangelo’s mural for the Florence city hall was the Battle of Cascina.

Text: I (27f) and my fiance "Leo" (27m) decided to let my FSIL "Maya" (32f) stay at our house because she needed space from her husband due to some relationship struggles they’re having. Leo and I had gotten wedding cake samples from an expensive bakery specializing in wedding cakes. We planned to test them along with Maya after we finished up some other wedding plans yesterday. However, when I came home from work to see Leo yelling at Maya, the box the samples came in wide open on the living room table, and Maya arguing with him. I asked what was happening, and Leo angrily told me that while we were both at work, Maya had some friends over and they ended up eating almost all of our cake samples.
Sentence to be focused on: However, when I came home from work to see Leo yelling at Maya, the box the samples came in wide open on the living room table, and Maya arguing with him.

Facts: No verifiable claim.

Text: I was a catholic school kid, educated by nuns and somehow on a spring day in 1972, I was called down to the principal’s office by Sister Mary Roberts, who informed me that I had gained admission to Stuyvesant High School. I was excited to be freshman in one of New York City’s elite public schools but soon came to realize that my catholic school education did not provide the groundwork for abstract concepts like science and algebra. My parochial education in Science at St. Joseph’s was essentially “God made it, what else do you need to know?”

Sentence to be focused on: I was excited to be freshman in one of New York City’s elite public schools but soon came to realize that my catholic school education did
not provide the groundwork for abstract concepts like science and algebra.

Facts:
• Stuyvesant High School is in New York City.
• Stuyvesant High School is an elite high school.
• Stuyvesant High School is a public school.
• In 1972, St. Joseph’s catholic school education did not provide the groundwork for abstract concepts like science and algebra.

Text: Ācariya Mun related the story of a dhutanga monk (ascetic monk) who inadvertently went to stay in a forest located next to a charnel ground. He arrived on foot at a certain village late one afternoon and, being unfamiliar with the area, asked the villagers where he could find a wooded area suitable for meditation. They pointed to a tract of forest, claiming it was suitable, but neglected to tell him that it was situated right on the edge of a charnel ground. They then guided him to the forest, where he passed the first night peacefully. On the following day he saw the villagers pass by carrying a corpse, which they soon cremated only a short distance from where he was staying.

Sentence to be focused on: They then guided him to the forest, where he passed the first night peacefully.

Facts: No verifiable claim.

Input format:
### Prompt
<prompt here>

Output format:
### Thoughts
<your thoughts here>
### Judgement
Yes/No

Table 8: VERIFASTSCORE prompt format: Sample input and output prompt

You are trying to verify how factual a response is by extracting fine-grained, verifiable claims. Each claim must describe one single event or one single state (for example, “Nvidia was founded in 1993 in Sunnyvale, California, U.S.”) in one sentence with at most one embedded clause. Each fact should be understandable on its own and require no additional context. This means that all entities must be referred to by name but not by pronoun. Use the name of entities rather than definite noun phrases (e.g., “the teacher”) whenever possible.
If a definite noun phrase is used, be sure to add modifiers (e.g., an embedded clause or a prepositional phrase). Each fact must be situated within relevant temporal and location details whenever needed. All necessary specific details, including entities, dates, and locations, must be explicitly named, and verify here means that every detail of a claim is directly confirmed by the provided evidence. The verification process involves cross-checking each detail against the evidence; a detail is considered verified if it is clearly confirmed by the evidence. Avoid extracting stories, personal experiences, hypotheticals (e.g., those using “would be” or the subjunctive mood), subjective opinions, suggestions, advice, instructions, or similarly non-factual content; however, biographical, historical, scientific, and similar texts are acceptable. Also, ignore any listed references.

For each extracted claim, classify it as follows:

Supported: Every detail of the claim (including entities, dates, and locations) is directly confirmed by the provided evidence with no contradictions.

Unsupported: One or more details of the claim are either missing from or contradicted by the provided evidence, even though the claim remains verifiable using external sources.

You do not need to justify what you extract.

Output format:
<fact 1>: <your
judgment of fact 1>
<fact 2>: <your judgment of fact 2>
...
<fact n>: <your judgment of fact n>

If no verifiable claim can be extracted, simply output "No verifiable claim."

### Response:
Historical Context and Significance: In the summer of 1215, a pivotal moment in the course of Western political and legal history unfolded in England. Amidst mounting discontent between King John and his barons, a document was drafted in Runnymede, a meadow along the River Thames. This document, known as the Magna Carta, was the first significant attempt to limit the power of a monarch by law, and it marked a crucial step towards modern representative democracy.

The Heart of the Matter: The Magna Carta’s Provisions: The Magna Carta, officially known as the "Great Charter", was a compact between King John and his barons, sealing a series of agreements that aimed to secure political rights and liberties. Some of the most influential clauses protected church rights, granted trial by jury, limited the amount of unjust taxes, and established the principle that no free man could be imprisoned or exiled without a fair trial. These provisions laid the foundation for the rule of law, constitutional government, and individual rights that are widely upheld today.

Lasting Impact and Legacy: The Magna Carta has left an indelible mark on the annals of history and continues to serve as a symbol of liberty and the rule of law in the modern world. Although it has undergone numerous reissues and modifications throughout the centuries, the original 1215 charter remains a significant historical artifact. It has been celebrated as a precursor to the Bill of Rights in the United States, the Universal Declaration of Human Rights, and many other international charters and constitutions. More than eight centuries after its inception, the Magna Carta’s enduring legacy continues to inspire efforts towards justice, freedom, and governance worldwide.
### Evidence

The Magna Carta is the charter of English liberties granted by King John on June 15, 1215, under threat of civil war. The Magna Carta (or Great Charter) was written in Latin and was effectively the first written constitution in European history. This significant moment, the first time a ruling monarch had been forcibly persuaded to renounce a great deal of his authority, took place at Runnymede, a ... It promised the protection of church rights, protection for the barons from illegal imprisonment, access to swift and impartial justice, and limitations on ... Eight hundred years ago today, King John of England sealed the Magna Carta, a groundbreaking legal document that served as the foundation for ... The Magna Carta, officially granted by King John of England on 15 June 1215, stands as one of the most influential and pivotal documents in ... On June 15, 1215, in a field at Runnymede, King John affixed his seal to Magna Carta. Confronted by 40 rebellious barons, he consented to their ... The Department of Defense undertook preparation ofa history ofthe 11. September 2001 attack on the Pentagon at the suggestion
of Brig. Gen. John S. Brown, ... hours This course examines the causes and consequences of globalization. Issues are examined from a changing historical context of economy, politics, and ... ... along the south bank of the Thames River, in a meadow called Runnymede. King John affixed his seal to the Magna Carta on June 15, 1215. The document then ... On June 15, 1215, John met the barons at Runnymede on the Thames and set his seal to the Articles of the Barons, which after minor revision was ... The result was the Magna Carta, which was signed on June 15, 1215, at Runnymede, a meadow by the River Thames. Key Provisions of the Magna Carta. While the ... They accrued at Runnymede, a meadow alongside the banks of the River Thames, on a fateful day in June. They demanded that King John renowned their rights and ... Magna Carta originated in 1215 as a peace treaty between King John and a group of rebellious barons · The original document was written in Latin ... A deal was brokered between King John and the barons who agreed to meet on 10th June 1215 at Runnymede, a large flat expanse of meadow near Egham, ideally ... In May the barons took London, and withdrew their homage and fealty. In the middle of June, 1215, on a meadow, Runnymede, along the River Thames the rebellious ... McKechnie’s introduction to Magna Carta: A Commentary on the Great Charter of King John, with an Historical Introduction, by William Sharp McKechnie. The story culminates at Runnymede, a meadow on the banks of the River Thames. ... by King John in 1215, was a document of unprecedented ... its origins in the ... The Magna Carta is the charter of English liberties granted by King John on June 15, 1215, under threat of civil war. Magna Carta was issued in June 1215 and was the first document to put into writing the principle that the king and his government was not above the law.
In the early 17th century, Magna Carta became increasingly important as a political document in arguments over the authority of the English monarchy. June 15, 2015 marks the 800th anniversary of the Magna Carta, a pioneering legal document that served as the foundation for American democracy and individual ... The document, often referred to simply as Magna Carta, did indeed innumerate several various rights and liberties, as well as limitations on the King’s powers ... Magna Carta is significant because it is a statement of law that applied to the kings as well as to his subjects. The Magna Carta was the first step of the English Monarchy giving up some power and allowing the Judicial Branches have more power. In theory it ... The Magna Carta established that the king and all individuals were subject to the law, codified the principles of due process and trial by jury. Magna Carta marked an important step, in the process by which England became a nation; but
that step was neither the first nor yet the final one.2. In ... The Magna Carta, Latin for “Great Charter,” is one of the most famous documents in history. It was originally issued in 1215 during the reign of ... A law made by the king in one national assembly might be repealed by the king in another; whereas the Great Charter was intended by the barons to be ... MCMXIV . "vo. Page 7. MAGNA CARTA. A COMMENTARY ON THE GREAT. CHARTER OF KING JOHN. WITH AN. HISTORICAL INTRODUCTION. BY . WILLIAM SHARP McKECHNIE ... Great Charter ... Magna Carta: A Commentary on the Great Charter of King John, with an Historical Introduction, by William Sharp McKechnie (Glasgow: Maclehose, 1914). Copyright. Full text of "Magna carta : a commentary on the Great Charter of King John with an historical introduction" ... Great Charter was intended by the barons to be ... a Commentary on the Great Charter of King John (2nd edn., Glasgow, 1914). ... have all their rights and liberties according to the great charter of England’. During his final year of public service, Coke forced a reconsideration of the Great Charter’s overall meaning and promise. Would the liberties of the. But minding and mining the Magna Carta became a fun task, not just to learn about the Great Charter, but to glean from his notes or his citations the major ... called "the statute r :lied the Great Charter of the Liberties of England." The simple fact that the language of Maps Carta bestowed its benefits on "free ... Indeed, it was named the ’Great Charter’, several years after the initial 1215 settlement, not in recognition of its importance, but because it ... It promised the protection of church rights, protection from illegal imprisonment, access to swift justice, and, most importantly, limitations on taxation and ... Of enduring importance to people appealing to the charter over the last 800 years are the famous clauses 39 and 40: “No free man shall be seized, imprisoned, ... 
The two most famous clauses; establishing the right of all to be judged by their equals, and outlawing imprisonment of free men without a trial, were clauses 39 ... Among these are the principle of no taxation without representation and the right to a fair trial under law. Clause 12 prevented kings from imposing taxes ’without common counsel’. The principle – that taxation must be by consent – became fixed in English politics. The English church shall be free and shall have its rights intact and its liberties uninfringed upon. And thus we will that it be observed. ... No free man shall be seized or imprisoned, or stripped of his rights or possessions, or outlawed or exiled, or deprived of his standing in any other way, nor. + (39) No free man shall be seized or imprisoned, or stripped of his rights or possessions, or outlawed or exiled, or deprived of his standing in any way, nor ... These guaranteed the Church the freedom to
handle its business without interference from the throne; that no free man could be imprisoned or outlawed except by ... One of the most important, and often quoted, provisions stated that "no freeman shall be seized, imprisoned, dispossessed, outlawed, or exiled, or in any way ... The Federalist # 78 states further that, if any law passed by Congress conflicts with the Con- stitution, "the Constitution ought to be preferred to the statute, ... The Bill of Rights is a founding documents written by James Madison. It makes up the first ten amendments to the Constitution including freedom of speech ... Article III of the Constitution establishes the federal judiciary. Article III, Section I states that "The judicial Power of the United States, shall be vested ... The Bill of Rights became a document that defends not only majorities of the people against an overreaching federal government but also minorities against ... "[A] bill of rights is what the people are entitled to against every government on earth, general or particular, and what no just government ... The higher law, reciprocity and mutuality of obligations, written charters of rights, the right to be consulted on policy and to grant or refuse one’s consent, ... The. Supreme Court upheld the law, ruling that Congress has the power to enact laws that directly affect the acts of individuals, thereby making ... The Universal Declaration of Human Rights is generally agreed to be the foundation of international human rights law. Adopted in 1948, the UDHR has inspired ... Article III of the Constitution establishes and empowers the judicial branch of the national government. Today, the rule of law is often linked to efforts to promote protection of human rights worldwide. Magna Carta was the result of the Angevin king’s disastrous foreign policy and overzealous financial administration. 
Magna Carta’s clauses provided the basis for important principles in English law developed in the fourteenth through to the seventeenth century. Magna Carta was written by a group of 13th-century barons to protect their rights and property against a tyrannical king. Though the king never meant to keep his promises, Magna Carta survived. Down through the centuries, it has been a symbol of opposition to arbitrary government. The Magna Carta is the charter of English liberties granted by King John on June 15, 1215, under threat of civil war. The Magna Carta (or Great Charter) was written in Latin and was effectively the first written constitution in European history. The Magna Carta, the medieval English historic legal document that is seen as the origin of many modern-day legal rights and constitutional principles. It is a story that runs 800 years forward and is still unfolding. It is the story of our rule of law tradition and of how our American system of government is ... exploration of how historical events have left an indelible mark on present-day national political systems, employing a comparative analysis of the British ... None of the original 1215 Magna Carta is currently in force since it
has been repealed; however, four clauses of the original charter are enshrined in the 1297 ... On the back of this charter issued in the name of King Æthelbald of Mercia in 736, it is still possible to see the impressions from where the document was ... The king and the rebel barons negotiated a peace settlement in June 1215. The king agreed to accept the terms of Magna Carta, which is dated 15 June 1215. David Carpenter considers the huge significance of the 13th-century document that asserted a fundamental principle – the rule of law. That any copies of the 1215 Magna Carta survive is even more remarkable given the number of times it was reissued—most copyholders would ... Magna Carta, charter of English liberties granted by King John on June 15, 1215, under threat of civil war and reissued, with alterations, in 1216, 1217, and ... Such an evaluation will not prove that everything commonly said in praise of the 1215 Charter is true. Some of it is myth. But it is not all myth. Many of the ... Representing a would-be peace treaty between the king and rebellious nobles, the 1215 Charter did not survive its year of issue. Pope Innocent ... The Magna Carta is a document written in 1215 that limited the power of the English king. It remains important today as a symbol of protest against ... This version is arguably just as significant as the 1215 charter in that it remained materially unchanged through various reissues and confirmations until in ... The UDHR is widely recognized as having inspired, and paved the way for, the adoption of more than seventy human rights treaties, applied today on a permanent ... It directly inspired the development of international human rights law, and was the first step in the formulation of the International Bill of Human Rights, ... The Universal Declaration of Human Rights, which was adopted by the UN General Assembly on 10 December 1948, was the result of the experience of the Second ... 
These three documents, known collectively as the Charters of Freedom, have secured the rights of the American people for more than two and a quarter centuries. The Magna Carta established the rule of law and the idea that all citizens, including those in power, should be fairly and equally ruled by the law. This exhibit celebrates the leadership of Eleanor Roosevelt in writing the Universal Declaration of Human Rights as we mark the 70 th anniversary of its ... The Universal Declaration of Human Rights has been the centrepiece of the modern international law of human rights for more than sixty years. If anything ... Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, ... Known today as the Cyrus Cylinder, this ancient record has now been recognized as the world’s first charter of human rights. It is translated into all six ... The Universal Declaration of Human Rights describes human rights of all people around
the world. D. The Bill of Rights to the U.S. Constitution ... In 1215, when King John confirmed Magna Carta with his seal, he was acknowledging the now firmly embedded concept that no man–not even the king ... As we celebrate 800 years of Magna Carta, it’s timely to reflect on the bloodshed and tyranny behind its creation and the reasons for its ... June 15, 2015 marks the 800th anniversary of the Magna Carta, a pioneering legal document that served as the foundation for American democracy and individual ... The story of Magna Carta began in 1215, but it continues eight centuries forward and is still unfolding. It is the story of modern constitutional government and ... Magna Carta established a number of very important political principles that framed and informed our discussions about human rights in the centuries that ... The Magna Carta is the charter of English liberties granted by King John on June 15, 1215, under threat of civil war. A Magna Carta moment. After eight centuries the revered document of liberty still grips the political imagination, says David Hayes in London. Magna Carta was not made law; it was more of a working document setting out how the country would be run.
### Facts
The Magna Carta serves as a symbol of the rule of law.: Supported
The Magna Carta is celebrated as a precursor to the Bill of Rights in the United States.: Supported
The Magna Carta is officially known as the "Great Charter".: Unsupported
The Magna Carta’s legacy continues to inspire efforts towards freedom worldwide.: Unsupported
The Magna Carta’s legacy continues to inspire efforts towards justice worldwide.: Supported
The provisions of the Magna Carta laid the foundation for constitutional government.: Supported
Individual rights are widely upheld today.: Unsupported
The Magna Carta has influenced many international charters and constitutions.: Supported
The Magna Carta protected church rights.: Supported
The Magna Carta established the principle that no free man could be imprisoned or exiled without a fair trial.: Supported
The provisions of the Magna Carta laid the foundation for individual rights.: Supported
The Magna Carta was the first significant attempt to limit the power of a monarch by law.: Supported
The Magna Carta has left an indelible mark on the annals of history.: Supported
The Magna Carta’s legacy continues to inspire efforts towards governance worldwide.: Supported
The provisions of the Magna Carta laid the foundation for the rule of law.: Supported
The Magna Carta is celebrated as a precursor to the Universal Declaration of Human Rights.: Supported
The drafting of the Magna Carta was amidst mounting discontent between King John and his barons.: Supported
The Magna Carta was a compact between King John and his barons.: Supported
Constitutional government is widely upheld today.: Unsupported
The Magna Carta is considered a pivotal moment in Western political and legal history.: Supported
The Magna Carta has influenced the Universal Declaration of Human Rights.: Supported
The Magna Carta took place in England.: Supported
The Magna Carta aimed to secure political rights and liberties.: Supported
The Magna Carta granted trial by jury.: Supported
The Magna Carta was incepted more than eight centuries ago.: Unsupported
The drafting of the Magna Carta took place in the summer of 1215.: Unsupported
The Magna Carta was drafted in the summer of 1215.: Unsupported
The Magna Carta serves as a symbol of liberty.: Supported
The Magna Carta has influenced the Bill of Rights in the United States.: Supported
The rule of law is widely upheld today.: Unsupported
The original 1215 charter of the Magna Carta remains a significant historical artifact.: Unsupported
The Magna Carta was drafted in Runnymede, a meadow along the River Thames.: Supported
The Magna Carta marked a crucial step towards modern representative democracy.: Supported
The Magna Carta limited the amount of unjust taxes.: Unsupported
The Magna Carta is celebrated as a precursor to many international charters and constitutions.: Supported
The original 1215 charter of the Magna Carta has undergone numerous reissues and modifications throughout the centuries.: Supported

A.5 Automatic Evaluation
To evaluate the factuality labels produced by VERIFASTSCORE, we perform automatic evaluation using GPT-4o-mini as an entailment judge. Given an extracted claim and a set of reference claims (from VERISCORE), the model is prompted to determine whether the claim can be inferred from the reference set. If supported, the model also identifies a minimal subset of reference claims that together justify the extracted claim. This setup ensures that factual consistency is assessed in a targeted, evidence-grounded manner while minimizing overreliance on superficial textual overlap. For scoring purposes, we use the original VERISCORE verification labels assigned to the retrieved subset of justifying reference claims.
If any of the claims in this minimal set are labeled as Unsupported by VERISCORE, the extracted claim is marked as Unsupported for evaluation. This setup enables us to scale the evaluation of thousands of (claim, evidence) pairs with approximate but consistent supervision.

| Data Source, Evidence Granularity | Model | Precision ↑ | Recall ↑ | Correct ↑ | Incorrect ↓ | Missing ↓ | Correlation ↑ |
|---|---|---|---|---|---|---|---|
| VERISCORE & Tulu3 Personas, claim-level | GPT-4o few-shot ICL | 0.10 | 0.04 | 3.54 | 0.82 | 95.64 | 0.28 |
| | VERIFASTSCORE - Stage-1 | 0.06 | 0.29 | 24.64 | 4.53 | 70.84 | 0.85 |
| | VERIFASTSCORE | 0.05 | 0.28 | 23.69 | 4.35 | 71.96 | 0.87 |
| Tulu3 Personas, sentence-level | GPT-4o few-shot ICL | 0.15 | 0.03 | 2.61 | 0.57 | 96.82 | 0.33 |
| | VERIFASTSCORE - Stage-1 | 0.06 | 0.19 | 15.33 | 3.43 | 81.24 | 0.75 |
| | VERIFASTSCORE | 0.05 | 0.23 | 18.62 | 4.18 | 77.26 | 0.80 |

Table 9: Exact match metrics for different test splits. VERIFASTSCORE-Stage-1 is the best model from Stage 1 training with claim-level evidence. Evidence Granularity indicates whether the provided evidence was collected using claims or sentences from the model responses as search queries. Precision measures how many claims produced by VERIFASTSCORE are also produced by VERISCORE. Recall measures how many claims produced by VERISCORE are also produced by VERIFASTSCORE. Under Claim Accuracy (%), Correct measures the accuracy of the verification label produced by VERIFASTSCORE, with the label produced by VERISCORE as the ground truth. Incorrect (Label) measures how many claims are produced by both VERIFASTSCORE and VERISCORE but with differing labels. Missing (Claim) measures
how many claims are produced by VERISCORE but not by VERIFASTSCORE. Correlation b/w factuality scores is the Pearson correlation between the factuality scores computed by VERIFASTSCORE and VERISCORE for model responses.

| Data Source, Evidence Granularity | Model | Precision ↑ | Recall ↑ | Correct ↑ | Incorrect ↓ | Missing ↓ | Correlation ↑ |
|---|---|---|---|---|---|---|---|
| VeriScore data & Tulu3 Personas, claim-level | GPT-4o few-shot ICL | 0.30 | 0.31 | 18.1 | 7.2 | 73.2 | 0.28 |
| | VERIFASTSCORE - Stage-1 | 0.88 | 0.87 | 63.0 | 21.3 | 15.1 | 0.85 |
| | VERIFASTSCORE | 0.87 | 0.90 | 71.6 | 15.5 | 12.3 | 0.87 |
| Tulu3 Personas, sentence-level | GPT-4o few-shot ICL | 0.38 | 0.34 | 19.3 | 8.2 | 71.2 | 0.33 |
| | VERIFASTSCORE - Stage-1 | 0.72 | 0.69 | 50.8 | 20.8 | 27.7 | 0.75 |
| | VERIFASTSCORE | 0.83 | 0.86 | 66.0 | 17.6 | 15.0 | 0.80 |

Table 10: Automatic performance metrics across test splits. VERIFASTSCORE-Stage-1 refers to the best model trained with claim-level evidence. Evidence Granularity denotes whether evidence was retrieved using full sentences or claims. Precision and Recall measure overlap between claims identified by VERIFASTSCORE and VERISCORE. Under Claim Accuracy (%), Correct, Incorrect (Label), and Missing (Claim) capture agreement, label mismatches, and omissions relative to VERISCORE. A small fraction of instances were marked Erroneous (not shown) due to ambiguities in automatic evaluation (see A.5). Correlation reports Pearson r between factuality scores (see Eq. 1) assigned by VERIFASTSCORE and VERISCORE. For all Pearson correlation scores reported in this paper, p < 0.001 unless stated otherwise.

Table 11: Automatic Evaluation: prompt used for automatic evaluation with GPT-4o-mini. Given an extracted claim and a list of reference claims, the task is to judge whether the extracted claim can be inferred from the provided list of claims. If so, the model must also extract a minimal subset of claims from the provided list that entails the extracted claim.
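The scoring rule described in A.5 — propagating VERISCORE's labels through the judge's minimal justifying subset — reduces to a few lines. A minimal sketch; the function name and data layout are ours, not taken from the released code:

```python
def score_extracted_claim(justifying_refs, veriscore_labels):
    """Label an extracted claim from the VERISCORE labels of the minimal
    justifying subset of reference claims selected by the judge.

    justifying_refs: indices of the reference claims the judge returned
    veriscore_labels: "Supported"/"Unsupported", one per reference claim
    """
    if not justifying_refs:
        # The judge found no justifying facts, so nothing in the
        # evidence verifies the claim.
        return "Unsupported"
    # The claim inherits "Unsupported" if any justifying reference
    # claim was itself labeled Unsupported by VERISCORE.
    if any(veriscore_labels[i] == "Unsupported" for i in justifying_refs):
        return "Unsupported"
    return "Supported"
```

Applied over all (claim, evidence) pairs, this yields the approximate but consistent supervision described above.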
You are a helpful assistant tasked with judging the factuality of a statement given some evidence by retrieving a minimal set of facts in the evidence that is necessary and sufficient to justify your judgement of the factuality of the statement. A fact is justifying your judgement if and only if the fact needs to be true for your judgement to be true. Below are the definitions of the two categories:

Supported: A fact is supported by the evidence if everything in the fact is supported and nothing is contradicted by the evidence. Evidence may contain other information not fully related to the fact.
Unsupported: A fact is unsupported by the evidence if something in the fact is either contradicted by some information in the evidence or cannot be verified from the evidence.

Input format:
### Evidence
1. <fact 1 here>
2. <fact 2 here>
...
n. <fact n here>

### Statement
<statement here>

Response format:
### Thoughts
<your thoughts here>

### Justifying Facts
<justifying fact number 1>, <justifying fact number 2>, ..., <justifying fact number k>

### Judgement
Supported / Unsupported

Guidelines:
Only mention fact numbers as a comma separated list under justifying facts and not the entire text of the claims. Do not include facts that are not strictly required to be true to justify your judgement of the factuality of the statement. If there are no justifying facts, return None.

### Claims
1. Colleen Hoover’s Without Merit book is a work by Colleen Hoover.
2. Several titles by Colleen Hoover have become New York Times bestsellers.
3. Colleen Hoover’s Heart Bones book is a work by Colleen Hoover.
4. Colleen Hoover’s Slammed series includes 3 books.
5. Colleen Hoover has published 24 books as of February 2023.
6. Colleen Hoover’s books include series.
7. Colleen Hoover’s All Your Perfects book is a work by Colleen Hoover.
8. Colleen Hoover’s Reminders of Him book is a work by Colleen Hoover.
9. Colleen Hoover’s Hopeless series includes 2 books.
10. Colleen Hoover’s books have gained immense popularity in recent years.
11. Colleen Hoover is a prolific writer in the New Adult genre.
12. Colleen Hoover’s Layla book is a work by Colleen Hoover.
13. Colleen Hoover’s It Starts with Us book is a sequel to It Ends with Us.
14. Colleen Hoover’s It Ends with Us book is a work by Colleen Hoover.
15. Colleen Hoover is a prolific writer in the romance genre.
16. Colleen Hoover’s books include standalone novels.
17. Colleen Hoover’s books include novellas.

### Statement
Colleen Hoover has published a book titled “Hopeless” as part of the “Hopeless” series.
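The input block shown in the prompt (numbered evidence list plus statement) is mechanical to assemble. A small sketch, with a helper name of our own choosing:

```python
def build_judge_input(reference_claims, statement):
    """Render the '### Evidence' / '### Statement' input block expected
    by the GPT-4o-mini entailment judge, numbering claims from 1."""
    evidence = "\n".join(
        f"{i}. {claim}" for i, claim in enumerate(reference_claims, 1)
    )
    return f"### Evidence\n{evidence}\n\n### Statement\n{statement}"
```

Parsing the judge's response then reduces to reading the comma-separated numbers under "### Justifying Facts" and the final "### Judgement" line.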
SWE-Dev: Evaluating and Training Autonomous Feature-Driven Software Development
Yaxin Du1∗, Yuzhu Cai2∗, Yifan Zhou3, Cheng Wang1, Yu Qian1, Xianghe Pang1, Qian Liu4, Yue Hu5, Siheng Chen1
1Shanghai Jiao Tong University, 2Beijing University of Aeronautics and Astronautics, 3Soochow University, 4TikTok, 5University of Michigan

Abstract
Large Language Models (LLMs) have shown strong capability in diverse software engineering tasks, e.g., code completion, bug fixing, and document generation. However, feature-driven development (FDD), a highly prevalent real-world task that involves developing new functionalities for large, existing codebases, remains underexplored. We therefore introduce SWE-Dev, the first large-scale dataset (with 14,000 training and 500 test samples) designed to evaluate and train autonomous coding systems on real-world feature development tasks. To ensure verifiable and diverse training, SWE-Dev uniquely provides all instances with a runnable environment and developer-authored executable unit tests. This collection not only provides high-quality data for Supervised Fine-Tuning (SFT), but also enables Reinforcement Learning (RL) by delivering accurate reward signals from executable unit tests. Our extensive evaluations on SWE-Dev, covering 17 chatbot LLMs, 10 reasoning models, and 10 Multi-Agent Systems (MAS), reveal that FDD is a profoundly challenging frontier for current AI (e.g., Claude-3.7-Sonnet achieves only 22.45% Pass@3 on the hard test split). Crucially, we demonstrate that SWE-Dev serves as an effective platform for model improvement: fine-tuning on the training set enables a 7B model to perform comparably to GPT-4o on the hard split, underscoring the value of its high-quality training data. Code is available at https://github.com/justLittleWhite/SWE-Dev.
1 Introduction

Figure 1: Overview of SWE-Dev, a software development dataset providing feature development tasks with a feature description and codebase as input and test cases for evaluation. It is uniquely grounded in real-world repositories and paired with executable test suites, enabling reliable, functionally verifiable supervision. SWE-Dev is evaluated on 37 autonomous coding systems and supports advanced training paradigms like SFT, RL, and multi-agent training.

∗Equal contribution. Correspondence to: Siheng Chen <sihengc@sjtu.edu.cn>
Preprint. Under review. arXiv:2505.16975v1 [cs.SE] 22 May 2025

Large Language Models (LLMs) are rapidly transforming autonomous programming, with capabilities extending from generating isolated code snippets to complex interactions within entire repositories [1, 2]. As LLMs increasingly engage at this repository scale, rigorously evaluating their proficiency in handling complex coding systems becomes paramount for guiding their advancement.
Current prominent benchmarks, while valuable, still struggle to judge how well LLMs perform in realistic, end-to-end development settings (Table 1). For example, SWE-Bench [ 3] measures only localized bug fixes described by GitHub issues, and RepoBench [ 4] evaluates the completion of a few unrelated functions within a repository. However, they overlook the core tasks of developing and integrating significant new functionalities, which truly define how real-world codebases evolve. The task of developing and integrating new functionalities
is formally defined as feature-driven development (FDD) [5, 6], which accounts for about 40% of all coding tasks in development efforts [7, 8]. FDD involves the end-to-end creation of new features, from interpreting requirements in large, existing codebases to generating functionally correct and integrated code (see Figure 1). FDD is how most modern software, from large applications to essential libraries, primarily evolves and delivers continuous value [9, 10]. Consequently, mastering FDD is a critical step toward more comprehensive and genuinely autonomous programming capabilities in coding systems. Recognizing the central role of FDD and the limitations of current evaluation benchmarks, we introduce a feature-driven SoftWarE DEVelopment dataset, SWE-Dev, the first large-scale dataset designed to evaluate and train autonomous AI systems on real-world FDD tasks. It comprises 14,000 training and 500 test instances derived from over 1,000 open-source projects, and is distinguished by three key characteristics: (1) Realistic scale and complexity: SWE-Dev requires substantial code modifications (avg. 190 LOC across 3 files), challenging models with the cross-file dependencies, large contexts, and significant implementation scope characteristic of real-world feature development. (2) Robust and grounded evaluation: Each SWE-Dev sample is grounded in a real open-source repository, guided by a well-defined project requirement description (PRD), and evaluated using executable test cases to ensure the functional correctness of the proposed implementation. This design ensures alignment between task objectives and evaluation, enabling robust assessment and model supervision.
(3) Verifiable training set with executable test suites: Uniquely, all 14,000 training instances are paired with runnable environments and executable unit tests, providing crucial execution-based feedback that enables effective Supervised Fine-Tuning (SFT) validation, Reinforcement Learning (RL) with accurate rewards, and Multi-Agent System (MAS) training (see Table 1). Our extensive experiments on SWE-Dev reveal several critical insights. First, repository-level feature development is challenging: even top-tier models like Claude-3.7-Sonnet [11] and GPT-4o [12] solve only 22.45% of hard samples and 68.70% of easy samples at Pass@3. Second, MASs generally outperform single-agent baselines, but only by modest margins. Interestingly, simple general-purpose multi-agent methods (e.g., Self-Refine [13], Reflexion [14]) often outperform more complex code-specific agents, while requiring fewer model calls and lower cost. Third, task-specific training yields substantial gains across all training methods: after fine-tuning, a 7B model is comparable to GPT-4o on the hard subset. These findings point to several promising directions for future research. First, the difficulty of FDD for LLMs necessitates enhancing LLMs’ core reasoning and long-context capabilities for software development. Second, current MAS designs often suffer from unnecessary communication overhead and limited coordination efficiency. Future work should explore lightweight agent architectures and better context-sharing mechanisms for repository-level development. Lastly, our initial experiments with RL and role-based multi-agent training show that training can be beneficial, but headroom remains. Future work could investigate multi-agent training and long-context RL with SWE-Dev. Our contributions are as follows: •We introduce SWE-Dev, the first real-world dataset for autonomous feature-driven software development.
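Pass@3 in these results is assumed to follow the standard unbiased Pass@k estimator (Chen et al., 2021); under that assumption, it can be computed from n sampled solutions per task, of which c pass all unit tests:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased Pass@k: probability that at least one of k solutions
    drawn without replacement from n samples (c of which pass every
    unit test) is correct."""
    if n - c < k:
        # Every possible size-k draw contains a passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

Averaging `pass_at_k(n, c, 3)` over all test instances gives the dataset-level Pass@3 figure.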
The dataset includes both training and test splits, each with runnable environments and test cases, enabling a wide range of evaluation and training. •Our evaluations on SWE-Dev offer novel insights into
the proficiency and deficiencies of various coding systems (chatbot LLMs, reasoning LLMs, and MAS) on complex FDD tasks. •We demonstrate that SWE-Dev enables and validates diverse training paradigms (SFT, RL, and MAS training), establishing its utility for advancing training-based adaptation.

Table 1: Comparison of SWE-Dev with existing repository-level benchmarks. We compare task type (FC: Function Completion, PG: Project Generation, LC: Line Completion, IS: Issue Solving), use of real repositories, availability of a training set, whether train and test splits ship with test cases, number of samples (total and with test cases), average PRD length in tokens, and average lines of code (LOC).

| Benchmark | Task | Real Repo | Train Set | w. Testcases (Train) | w. Testcases (Test) | # Samples (Total) | # Samples (w. Testcases) | Avg. PRD Tokens | Avg. LOC |
|---|---|---|---|---|---|---|---|---|---|
| ComplexCodeEval [15] | FC | ✓ | ✗ | ✗ | ✗ | 7k | 0 | 134.2 | 38.21 |
| CoderEval [16] | FC | ✓ | ✗ | ✗ | ✓ | 234 | 234 | 119.26 | 20.64 |
| DevEval [17] | FC | ✓ | ✗ | ✗ | ✓ | 2k | 2k | 91.5 | 112 |
| rSDE-Bench [18] | PG | ✗ | ✗ | ✗ | ✓ | 53 | 53 | 1553 | 157.88 |
| M2rc-Eval [19] | LC | ✓ | ✓ | ✗ | ✗ | 59k | 0 | 0 | 1 |
| RepoBench [4] | FC | ✓ | ✗ | ✗ | ✓ | 23k | 0 | 0 | 89 |
| SWE-Bench [3] | IS | ✓ | ✓ | ✗ | ✓ | 19k | 2k | 195.1 | 32.8 |
| SWE-Dev | FDD | ✓ | ✓ | ✓ | ✓ | 14.5k | 14.5k | 1845.4 | 190 |

2 Related Work

2.1 Coding benchmarks

LLMs show significant potential in coding tasks, driving the need for robust benchmarks. Early benchmarks such as HumanEval [20], MBPP [21], APPS [22], and CodeContests [23] primarily focus on isolated, function-level tasks. These benchmarks test for correctness in constrained settings: short snippets, well-specified inputs, and short expected outputs. While useful for early-stage capability testing, such tasks fall short of reflecting the complex, multi-file dependencies and long contexts of real-world software development. To address this, repository-level benchmarks emerged, such as SWE-Bench [3] (issue fixing), RepoBench [4], and M2RC-Eval [19] (code completion/understanding). Despite this progress, they often face two main issues: (1) The scope of required code generation or modification remains limited (e.g., avg.
32.8 LOC in SWE-Bench, 38.21 LOC in ComplexCodeEval [15]), inadequately simulating large-scale feature development or refactoring. (2) Weak or inconsistent evaluation protocols: several benchmarks [15, 19, 4] rely heavily on proxy metrics such as code similarity or static heuristics, which often fail to reflect functional correctness. This compromises both the robustness of evaluation and the comparability of results across models [24, 25]. SWE-Dev directly tackles these limitations by providing large-scale repository-level feature development tasks with executable unit tests. Its tasks involve substantial modifications, addressing shortcomings in both code scope and trainable environments, thereby significantly increasing task complexity and realism.

2.2 Code LLMs Training

Training LLMs for coding tasks typically involves three stages: pre-training, supervised fine-tuning (SFT), and reinforcement learning (RL). Pre-trained models such as StarCoder [24] and Phi [26] leverage massive code corpora to learn syntax and general programming patterns. To improve instruction following and task completion, many works adopt SFT: Code Alpaca [27] employs self-instruct generation, and WizardCoder [28] leverages Evol-Instruct [29] to synthesize more complex instructions. However, SFT fundamentally lacks exploration: it teaches models to imitate ground-truth outputs rather than to reason or build autonomously [30]. Beyond SFT, RL frameworks such as CodeRL [31] utilize test-case-based feedback to optimize model behavior. While promising, both SFT and RL
https://arxiv.org/abs/2505.16975v1
approaches have largely focused on function-level tasks, limiting their applicability to more complex development scenarios. To address this, SWE-Gym [32] explores extending training to repository-scale tasks using multi-agent systems. However, due to the lack of an executable training set in SWE-Bench, SWE-Gym resorts to constructing a separate dataset of 2,438 tasks, ultimately yielding only 500 trajectory samples for training. In contrast, our proposed SWE-Dev provides a large-scale repository-level training set with runnable environments and unit-test-based supervision. It supports SFT, RL, and multi-agent training with executable feedback, enabling realistic and scalable development of code generation systems.

Figure 2: Overview of SWE-Dev dataset construction. Step 1: We collect real-world repositories with passing test files in Dockerized environments; Step 2: we trace test executions to construct function-level call trees linking test cases to invoked source code; and Step 3: we mask core functions while generating refined PRDs to create tasks. Each sample includes an incomplete repository, a natural language requirement, and executable test cases, enabling realistic, verifiable feature development.

3 SWE-Dev
SWE-Dev is the first dataset designed to train and evaluate autonomous coding systems on feature-driven software development tasks. Each instance requires the model to implement a new capability within an existing codebase, based on a natural language requirement and evaluated through real-world unit tests.
This section describes the construction of the dataset (§ 3.1), its core features (§ 3.2), and key statistics (§ 3.3).

3.1 Dataset Construction
Our dataset construction leverages a straightforward principle: test files in real-world repositories can serve both as a source of feature requirements and as a means of verifying correct implementation. In PyPI packages, developers create high-quality test files to ensure that specific modules or features function reliably across updates. For example, in numpy, test_random.py validates random array generation, aligning closely with the feature it tests. These test files provide executable, feature-specific validation, making them ideal for defining and evaluating development tasks. Using these developer-authored tests as ground truth, we gain two advantages. First, they provide executable, functionality-level feedback for model evaluation. Second, by tracing the test cases back to their associated implementation code, we can identify and mask the relevant source code, forming the basis of an incomplete development task. These traced functions also guide the generation of precise task descriptions. Based on this process, we divide our construction into three stages: (1) collecting repositories and test files and building Docker environments, (2) generating test-to-code call trees via dynamic tracing, and (3) creating the final task by masking the relevant source code and producing the feature specification.
Step 1: Test File Collection To support realistic feature-level tasks and test-based evaluation, we begin by collecting real-world repositories that reflect common development scenarios. Specifically, we select 8,000 popular PyPI packages based on star counts. However, not
all repositories are suitable: many lack usable tests or require complex installation. Therefore, we applied a strict filtering process focused on test-suite executability. Repositories were retained only if they met two criteria: (1) they include an identifiable test suite (e.g., using pytest or unittest), and (2) their test files could be executed successfully within the package's Docker environment, with all tests passing. This ensures the resulting tasks are grounded in verifiable, runnable functionality. After filtering, we obtain 1,086 validated repositories (as of December 12, 2024) and 9,339 executable test files.
Step 2: Call Tree Generation To locate the specific functions and methods involved in implementing a feature, we capture the runtime interactions between test cases and their corresponding source code through dynamic analysis. This process has two main parts: (1) Dynamic analysis: We execute each test file using pytest inside a containerized Docker environment and apply Python's built-in trace module to record all triggered functions in the source code. This yields multiple linear call chains recording the sequence of invoked source functions. (2) Call tree ensemble: We aggregate the call chains into a hierarchical call tree, where nodes represent functions and edges capture dependency relationships. The call tree is rooted at the test function, followed by the triggered source functions. The depth and width of the tree reflect the complexity of the feature logic, including nested structures and cross-file dependencies. These trees provide a precise mapping from test behavior to implementation code, enabling us to localize relevant functions and to systematically control task difficulty later.

Figure 3: Distribution of SWE-Dev training and test samples across 6 major PyPI application domains.

Table 2: Basic statistics of SWE-Dev, including task specification length, repository scale, ground truth implementation size, and evaluation test coverage for both train and test splits.

Category  Metric              Test (Easy)  Test (Hard)  Train
Size      # Samples           250          250          14000
          # Total repos       39           39           1033
Task      # Avg. tokens       1499         2148         1833
Codebase  # Avg. files        71.31 (test combined)     64.40
          # Avg. LOC          21308 (test combined)     20206
GT Code   # Avg. LOC          109.1        172.4        199.92
          # Avg. funcs        4.75         6.97         6.03
Tests     # Avg. test lines   134.8        123.9        90.9
          # Avg. testcases    6.62         4.29         5.92

Step 3: Task Generation Once we have localized the implementation logic using call trees, we convert it into a feature development task by (1) masking the relevant code and (2) generating a natural language requirement for the feature. These two components constitute a typical development scenario in which a feature is functionally defined but not yet implemented. To achieve this, we perform the following: (1) Target function masking: We use structural properties of the call tree (e.g., depth and node count) to select function nodes that represent the core logic under test. The corresponding implementation code is removed from the repository,
leaving a functional gap to fill. (2) Project Requirement Document (PRD) generation: We construct the feature description in the PRD by using GPT-4o to synthesize a high-level module description from the test file and by augmenting the masked function's docstring with implementation-level details. These two elements are combined into the PRD, which serves as the task prompt. See the example in Fig. 9 and the prompt in Appendix H.

3.2 Dataset Features
Controlled Complexity via Call Tree: Leveraging call-tree analysis, SWE-Dev enables systematic adjustment of task difficulty by varying the dependency depth used for task generation. This uniquely supports rigorous assessment of model capabilities across varying complexities; see the discussion in § 5.
Reliable Test-Based Evaluation: Assessment uses the original, developer-authored unit tests, validated for correct execution in a controlled environment. This execution-based pass/fail verification provides an objective, reproducible, and functionally accurate measure of generated code, directly reflecting real-world correctness criteria.
Executable Training Support: SWE-Dev includes runnable environments and test cases for every sample, enabling training paradigms such as SFT and RL with execution-based feedback.

3.3 Statistics
Table 2 summarizes the key statistics of SWE-Dev, which consists of 14,000 training and 500 test samples. The test set is manually curated and split into two difficulty levels: easy and hard (250 instances each).
Each dataset instance comprises four components: (1) the task, specified by a PRD, with its token count reflecting instruction length; (2) the codebase, referring to the non-test portion of the repository, where we report the number of files and lines of code (LOC); (3) the ground truth (GT) code to be implemented, measured by its LOC and number of target functions; and (4) the test suite, evaluated via the number of test cases and total test LOC per sample. Figure 3 shows the distribution of training and test samples across six major PyPI application domains, demonstrating the diversity of software categories represented in the dataset. More statistics are in Appendix A.

Figure 4: Comparison of Pass@3 scores for 17 chatbot and 10 reasoning LLMs on SWE-Dev across the Easy and Hard splits. SWE-Dev poses substantial challenges and effectively distinguishes model capabilities under both difficulty levels. See Appendix B for full results.

4 Experiment
In this section, we empirically evaluate the effectiveness of various coding systems and training paradigms on SWE-Dev. We first compare the performance of single LLMs (§ 4.1) and MAS (§ 4.1.2) on FDD tasks. Then we discuss the effectiveness of different training approaches, including SFT (§ 4.2.1), RL (§ 4.2.2), and multi-agent training (§ 4.2.3).
Setup. We employ Pass@k as the evaluation metric on SWE-Dev [20].
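Pass@k is typically computed with the unbiased estimator introduced alongside HumanEval [20]: generate n samples per task, count the c that pass all test cases, and average the per-task estimate. A minimal sketch:

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of P(at least one of k samples passes),
    given n generations per task of which c pass all test cases."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)


def benchmark_pass_at_k(results, k):
    """Average the per-task estimate over (n, c) pairs."""
    return sum(pass_at_k(n, c, k) for n, c in results) / len(results)
```

For Pass@3 as reported here, each task would contribute pass_at_k(n, c, 3) with n >= 3 generations.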
For the inference code context: SWE-Dev requires both the PRD and the codebase as inputs, but the codebases are long (averaging over 20K lines of code, see Table 2), exceeding typical LLM context windows. Thus, in all the experiments below, we provide only the relevant code context (i.e., the specific files involved in the task) rather than the entire codebase.

4.1 Testing Results
This section presents the performance of 17 chatbot LLMs, 10 reasoning LLMs, and 10 multi-agent systems on SWE-Dev, under the single-LLM and multi-agent settings.
Full details of the evaluated methods are provided in Appendix F.1.

4.1.1 Single LLM Inference
SWE-Dev presents substantial challenges for current LLMs, revealing a clear gap between existing coding capabilities and real-world software engineering demands.

Figure 5: Comparison of benchmarks across various model sizes. SWE-Dev shows clear performance scaling with model size, while HumanEval [20] fails to distinguish between models. ComplexCodeEval [15] using CodeBLEU [33] exhibits high variance, making it less stable for evaluation.

Table 3: Comparison of general and code-specific MAS on SWE-Dev driven by GPT-4o-mini. Bold highlights the best performance; underlined indicates results worse than the single-agent baseline. Most MAS methods outperform the single agent, and simpler general MASs are more effective and efficient than complex coding-specific MASs.

Method                 Easy: Pass@1  Calls  Price($)  Hard: Pass@1  Calls  Price($)
Single                 34.47         1.00   0.75      11.09         1.00   0.97
Reflexion [14]         39.77         2.12   0.83      13.32         2.18   1.35
Self Refine [34]       40.02         5.00   5.78      20.03         5.00   5.80
Self Consistency [35]  37.62         6.00   4.30      18.55         6.00   7.08
LLM Debate [36]        38.48         7.00   5.95      14.56         7.00   9.35
MAD [37]               31.50         7.00   2.48      15.31         7.00   3.40
Agentverse [38]        38.67         4.52   1.40      13.42         4.83   2.90
EvoMAC [18]            34.59         7.98   3.20      13.60         8.30   4.65
MetaGPT                29.56         9.69   2.20      9.25          10.37  4.95
MapCoder [39]          24.55         21.01  6.05      5.87          23.41  10.55
ChatDev [40]           35.13         26.61  3.53      11.70         30.87  6.10

Figure 4 reports the Pass@3 performance of chatbot and reasoning LLMs on SWE-Dev. We observe that: (1) LLMs perform better on the easy split than on the hard split. (2) Performance generally scales with model size, especially for LLMs within the same family, aligning with our understanding of LLM capabilities.
(3) Even the best-performing LLM (Claude-3.7-Sonnet [11]) achieves just over 20% on the Hard split. This still falls short of strong performance, indicating that current models are not yet fully capable of handling tasks that approximate the complexity of real-world development scenarios.
Reasoning models generally underperform their base counterparts, with Claude-3.7-Sonnet being an exception. While Claude with thinking outperforms its base variant, most reasoning models yield worse results. This suggests that current reasoning strategies do not consistently translate into gains for complex, repository-level generation tasks. We further explain this in Appendix C.2.
SWE-Dev provides stable and discriminative evaluation of model capabilities. Figure 5 compares the performance of the Qwen2.5 [41] family on SWE-Dev, HumanEval [20], and ComplexCodeEval [15] across three runs. We use Pass@1 for SWE-Dev and HumanEval, and CodeBLEU [33] for ComplexCodeEval. The lines represent the average performance, and the shaded regions show the variance. We observe that: (1) SWE-Dev yields low-variance performance and consistent scaling with model size, demonstrating its stability and reliability in evaluating model capabilities. (2) In contrast, HumanEval, despite being stable, is too simple to differentiate models meaningfully. (3) Meanwhile, ComplexCodeEval shows high variance due to its reliance on a similarity-based metric, CodeBLEU, which limits its reliability for evaluating complex generation tasks.

4.1.2 Multi-Agent Inference
Table 3 compares the performance, number of calls, and total costs of various MAS against the single-agent baseline
driven by GPT-4o-mini. Details of the MAS are given in Appendix F.1. Key observations are:
MAS generally outperforms single-agent baselines on complex tasks. While the single-agent approach achieves only 11.09% Pass@1 on hard tasks, Self Refine [34] and EvoMAC [18] improve performance to 20.03% and 13.60%, respectively. These results highlight the advantage of multi-agent collaboration in solving complex, reasoning-intensive problems.
Simpler multi-agent strategies offer strong performance-efficiency trade-offs. Methods such as Self Refine strike an effective balance between performance and cost. On the easy subset, Self Refine achieves the highest Pass@1 of 40.02% using only 5 calls. In contrast, more complex systems like ChatDev, despite making over 26 calls, fall behind in performance (35.13%), indicating that additional agent complexity does not necessarily lead to better results.
Human-designed, workflow-heavy MAS often introduce unnecessary overhead. Systems with manually defined roles and interaction protocols, such as ChatDev and MapCoder, tend to be less effective. On hard tasks, ChatDev requires over 30 calls yet achieves only 11.7%, while MapCoder performs even worse, at 5.87% despite 23.41 calls. These results suggest that handcrafted workflows may introduce redundant operations without improving code generation quality.

Table 4: Comparison of zero-shot and SFT performance (Pass@1) on SWE-Dev using Qwen2.5 models. Results are reported on both Easy and Hard test splits across model sizes from 0.5B to 7B. The ∆ columns indicate relative improvement after SFT. Fine-tuning yields consistent gains.
Model  Zero-shot Easy  Zero-shot Hard  SFT Easy  ∆(%)   SFT Hard  ∆(%)
0.5B   6.39            1.00            12.12     +90    4.37      +337
1.5B   8.05            1.23            18.20     +126   7.64      +521
3B     15.93           5.27            27.53     +73    12.46     +136
7B     25.74           6.68            36.90     +43    18.89     +183

Figure 6: Training-data scaling of SFT Qwen2.5-7B-Instruct on SWE-Dev. As data size increases, performance improves steadily under SFT.

Our results highlight MAS's potential for complex tasks on SWE-Dev but reveal a gap between simple and complex MAS, indicating that scalable, efficient MAS remain a challenge. Future work could focus on balancing collaboration benefits with resource costs and on mitigating error amplification from LLM hallucinations across agent interactions.

4.2 Training Results
In this section, we evaluate SWE-Dev's support for different training methods, including SFT and RL. Additionally, we present preliminary results from our exploration of multi-agent training, offering an initial assessment of MAS-based learning. For detailed training setups, refer to Appendix F.2.

4.2.1 Single LLM SFT
We conducted experiments on Qwen2.5-Instruct models of various sizes (0.5B, 1.5B, 3B, and 7B) to assess the impact of SFT on performance on SWE-Dev. The experimental setting is given in Appendix F.3.
Training significantly improves performance across model sizes. SFT leads to substantial performance improvements across all model sizes, especially for harder tasks. As shown in Table 4, the 7B model achieves a Pass@1 of 36.90% on the easy task set after fine-tuning, up from 25.74% in the zero-shot setting. On the hard task set, Pass@1 increases from 6.68% to 18.89%, demonstrating the clear benefits of training in enhancing model performance.
SWE-Dev effectively supports the scaling law of training. Figure 6 illustrates the scaling law
of training using Qwen2.5-7B-Instruct. In this experiment, we measured model performance across varying amounts of fine-tuning data, specifically tracking changes in Pass@1 for both easy and hard tasks. As shown in the figure, performance improves steadily as the amount of fine-tuning data increases, with larger improvements observed for harder tasks. In summary, our results underscore the importance of fine-tuning in improving performance on SWE-Dev. The scaling behavior observed here further supports the idea that SWE-Dev is a valuable dataset for studying the effects of model size and training data on task performance.

4.2.2 Single LLM RL
SWE-Dev provides precise test cases that enable accurate rewards for coding tasks, supporting both online and offline RL. In this section, we explore the impact of RL on Qwen2.5-7B-Instruct using SWE-Dev. Considering the computational cost of RL, we limit the experiments in this section to 2k training samples. For the full training setup, refer to Appendix F.4.

Table 5: Performance comparison of Qwen2.5-7B-Instruct as the base model, SFT-tuned, and RL-tuned models on SWE-Dev.

Method            Pass@1 Easy  Pass@1 Hard  Pass@3 Easy  Pass@3 Hard
vanilla           25.74        6.68         33.35        7.73
SFT               27.09        9.77         34.49        13.63
PPO (online RL)   28.30        12.25        32.69        14.33
DPO (offline RL)  25.89        10.36        31.32        14.66

Table 6: Comparison of multi-agent role-wise training, the base MAS, and a single LLM's performance on Qwen2.5-7B-Instruct. ∆ indicates the relative improvement over the base MAS system. Partial fine-tuning of either agent also leads to consistent gains, demonstrating the effectiveness of role-specific supervision enabled by SWE-Dev.

        Org   Coder  Easy   ∆(%)    Hard   ∆(%)
Single  -     -      25.74  -       6.68   -
MAS     base  base   26.64  -       7.39   -
MAS     FT    base   30.04  +12.76  12.36  +67.25
MAS     base  FT     31.42  +17.94  11.49  +55.48
MAS     FT    FT     31.65  +18.80  12.70  +71.85

Figure 7: EvoMAC performance trajectory under ground-truth test case supervision on SWE-Dev with Qwen2.5-7B-Instruct. EvoMAC iteratively improves across reasoning rounds, guided by executable test feedback.

Both online and offline RL improve performance, especially on hard tasks. Table 5 shows that both PPO [42] and DPO [43] significantly improve Pass@1 performance, especially on the Hard split. Furthermore, PPO outperforms SFT on the same training data. These findings highlight the advantages of RL training.
RL boosts one-shot success but not multi-sample gains. While RL fine-tuning yields improvements in Pass@1, it has minimal impact on Pass@3. Specifically, PPO achieves a Pass@1 of 28.30% on easy tasks, a noticeable increase over the base model's 25.74%, but its Pass@3 remains lower than that of SFT training, and even below the original model's performance. These results suggest that RL can be beneficial in refining Pass@1, particularly for more complex tasks, by increasing the model's efficiency at generating correct answers in fewer attempts. However, this efficiency comes at the cost of reduced exploration, which aligns with findings from prior work [44]. Therefore, while RL improves performance, significant headroom remains, and more advanced methods or further training are needed to achieve improvements across tasks.

4.2.3 Multi-Agent Training
MAS has shown promising results on SWE-Dev, and we further investigate the
training process of MAS on this dataset. As depicted in Fig. 7, the ground-truth test case supervision in SWE-Dev enables EvoMAC [18] to improve its performance across multiple rounds of reasoning. This iterative refinement process motivates us to explore EvoMAC as the MAS for training on SWE-Dev. We apply rejection sampling to enhance agent performance via role-wise training.
Trajectory Collection. We use Qwen2.5-7B-Instruct to collect training data for the MAS, following these steps: (1) EvoMAC iterative reasoning: EvoMAC performs multiple reasoning rounds, benefiting from ground-truth test case supervision to progressively improve its performance. (2) Rejection sampling: At each iteration, we apply rejection sampling based on the training samples' test cases to select high-quality trajectories that improve over the previous round, ensuring the retention of beneficial data. (3) Role-wise training: The selected trajectories are used to train the two agents (organizer and coder) in EvoMAC role-wise, allowing each agent to specialize in its task for better overall performance.
Training Effectiveness. Table 6 presents the performance of different training configurations in terms of Pass@1. We see that: (i) fine-tuning both the organizer and coder agents yields the highest performance, with Pass@1 of 31.65% on easy tasks and 12.70% on hard tasks, outperforming all other configurations; (ii) when only one agent is fine-tuned, we also see improvements over the baseline. These findings highlight the effectiveness of role-wise training for MAS.

5 Dataset Analysis
We analyze SWE-Dev's task complexity, evaluation setup, and PRD quality to demonstrate its uniqueness and reliability.
Analysis of Task Difficulty and Call Tree Characteristics. We analyze how task difficulty in SWE-Dev correlates with call tree complexity.
As introduced in § 3.1, a call tree reflects the dynamic function invocation structure of a task, where nodes represent functions and edges denote their call relationships. We use two metrics: depth, indicating the maximum call nesting, and node count, representing the total number of distinct functions involved in the task. Fig. 8a shows that GPT-4o's performance declines as depth and node count increase, revealing a strong correlation between structural complexity and task difficulty. This suggests that deeper and broader call structures introduce more functional requirements and interdependencies, making tasks more challenging.

Figure 8: Analysis of SWE-Dev benchmark characteristics. (a) Compares GPT-4o's performance across tasks grouped by call tree depth and node count, showing that greater structural complexity correlates with lower accuracy. (b) Compares several evaluation metrics; Pass@3 shows the clearest differentiation across model scales. (c) Compares human ratings of original vs. refined PRDs on 100 samples along 3 dimensions, revealing SWE-Dev's high PRD quality.

Evaluation Method Precision. SWE-Dev uses execution-based evaluation with test cases, enabling precise performance signals. We compare four metrics: Exact Match (EM) [19], Exact Sequence (ES) [19], CodeBLEU [33], and Pass@3, using Qwen2.5 models and
GPT-4o. As Fig. 8b shows, Pass@3 best reflects capability scaling, separating models by size and quality. In contrast, EM, ES, and CodeBLEU show minimal variance, failing to distinguish models. This demonstrates that SWE-Dev's test-case-based evaluation provides a more robust and realistic signal of model performance, better reflecting the functional correctness required in real-world software development.
PRD Quality. SWE-Dev includes a PRD for each task to simulate realistic developer-facing requirements; these are primarily derived from the original docstrings found within the repository source code. While many functions in open-source code include docstrings, we found that these are often incomplete, lacking clear descriptions of behavior, parameters, or edge cases. To improve instruction clarity without fabricating content, we lightly refine existing docstrings using GPT-4o, grounded in the related file and surrounding context. To evaluate instruction quality, we conducted a human assessment on 100 sampled tasks. Two experienced engineers rated the original and refined PRDs along Actionability, Completeness, and Clarity (the annotation instructions are in Appendix C.1). As shown in Fig. 8c, refined PRDs consistently scored higher across all dimensions. This supports SWE-Dev's goal of providing realistic, well-scoped requirements for reliable model evaluation.

6 Conclusion
In this work, we introduced SWE-Dev, the first dataset for evaluating and training autonomous coding systems on feature-driven development tasks. SWE-Dev consists of 14,000 training and 500 test instances, each equipped with runnable environments and developer-authored executable unit tests, providing essential execution-based feedback for advanced training paradigms such as SFT, RL, and multi-agent learning. Our experiments show that FDD is profoundly challenging for current autonomous coding systems. We also validate that training on SWE-Dev can yield encouraging performance gains.
These findings validate SWE-Dev as a critical platform for identifying current limitations and fostering breakthroughs in AI-driven software development. We hope the release of SWE-Dev spurs innovation in long-context reasoning, agent orchestration, and execution-aware training towards genuinely autonomous software engineering.

Limitation
SWE-Dev currently focuses on Python and basic training strategies, leaving open future directions in multilingual support and more advanced training approaches.

References
[1] GitHub. GitHub Copilot. https://github.com/features/copilot, 2021. Accessed: May 10, 2025.
[2] Cursor. Cursor - The AI-first Code Editor. https://www.cursor.com/, 2023. Accessed: May 10, 2025.
[3] Carlos E Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik R Narasimhan. Swe-bench: Can language models resolve real-world github issues? In The Twelfth International Conference on Learning Representations.
[4] Tianyang Liu, Canwen Xu, and Julian McAuley. Repobench: Benchmarking repository-level code auto-completion systems. In The Twelfth International Conference on Learning Representations.
[5] Peter Coad, Eric Lefebvre, and Jeff De Luca. Java Modeling in Color with UML: Enterprise Components and Process. Prentice Hall PTR, 1999.
[6] Stephen R Palmer and John M Felsing. A Practical Guide to Feature-Driven Development. Prentice Hall PTR, 2002.
[7] Luan Xavier, Marco D'Ambros, André Hora, Romain Robbes, and Marco Tulio Valente. Characterizing and comparing the distribution of software evolution tasks in industrial and open source projects. In 2017 IEEE 24th International Conference on Software Analysis, Evolution and Reengineering (SANER), pages 130–140. IEEE, 2017.
[8] Harness Team. Getting started with feature driven development: A comprehensive guide. April 2025. Accessed:
May 10, 2025. This guide discusses FDD principles including planning by feature and aligning with business requirements, which are conducive to automated assessment via acceptance criteria.
[9] Lily Kazerouni, April Mitchell, John Smith, and Emily Johnson. Large language models for software engineering: A systematic literature review. Journal of Systems and Software, 211:111999, 2024.
[10] Chunqiu Steven Xia and Lingming Zhang. Automated program repair in the era of large pre-trained language models. In Proceedings of the 45th International Conference on Software Engineering (ICSE '23), pages 1106–1118. IEEE Press, 2023.
[11] Anthropic. Claude 3.7 Sonnet. https://www.anthropic.com/news/claude-3-7-sonnet, 2025. Accessed: May 10, 2025.
[12] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024.
[13] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems, 36:46534–46594, 2023.
[14] Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36:8634–8652, 2023.
[15] Jia Feng, Jiachen Liu, Cuiyun Gao, Chun Yong Chong, Chaozheng Wang, Shan Gao, and Xin Xia. Complexcodeeval: A benchmark for evaluating large code models on more complex code. In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, pages 1895–1906, 2024.
[16] Hao Yu, Bo Shen, Dezhi Ran, Jiaxin Zhang, Qi Zhang, Yuchi Ma, Guangtai Liang, Ying Li, Qianxiang Wang, and Tao Xie. Codereval: A benchmark of pragmatic code generation with generative pre-trained models.
In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, pages 1–12, 2024.
[17] Jia Li, Ge Li, Yunfei Zhao, Yongmin Li, Huanyu Liu, Hao Zhu, Lecheng Wang, Kaibo Liu, Zheng Fang, Lanshen Wang, et al. Deveval: A manually-annotated code generation benchmark aligned with real-world code repositories. CoRR, 2024.
[18] Yue Hu, Yuzhu Cai, Yaxin Du, Xinyu Zhu, Xiangrui Liu, Zijie Yu, Yuchen Hou, Shuo Tang, and Siheng Chen. Self-evolving multi-agent collaboration networks for software development. 2025.
[19] Jiaheng Liu, Ken Deng, Congnan Liu, Jian Yang, Shukai Liu, He Zhu, Peng Zhao, Linzheng Chai, Yanan Wu, Ke Jin, et al. M2rc-eval: Massively multilingual repository-level code completion evaluation. arXiv preprint arXiv:2410.21157, 2024.
[20] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[21] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
[22] Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. Measuring coding challenge competence with apps. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
[23] Yujia Li, David Choi, Junyoung
https://arxiv.org/abs/2505.16975v1
Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with AlphaCode. Science, 378(6624):1092–1097, 2022.
[24] Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, et al. StarCoder 2 and The Stack v2: The next generation. arXiv preprint arXiv:2402.19173, 2024.
[25] Junjie Huang, Chenglong Wang, Jipeng Zhang, Cong Yan, Haotian Cui, Jeevana Priya Inala, Colin B Clement, Nan Duan, and Jianfeng Gao. Execution-based evaluation for data science code generation models. CoRR, 2022.
[26] Marah Abdin, Jyoti Aneja, Harkirat Behl, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, Michael Harrison, Russell J Hewett, Mojan Javaheripi, Piero Kauffmann, et al. Phi-4 technical report. arXiv preprint arXiv:2412.08905, 2024.
[27] Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. How far can camels go? Exploring the state of instruction tuning on open resources. Advances in Neural Information Processing Systems, 36:74764–74786, 2023.
[28] Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. WizardCoder: Empowering code large language models with Evol-Instruct. In The Twelfth International Conference on Learning Representations.
[29] Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Qingwei Lin, and Daxin Jiang. WizardLM: Empowering large pre-trained language models to follow complex instructions. In The Twelfth International Conference on Learning Representations, 2024.
[30] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
[31] Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven Chu Hong Hoi. CodeRL: Mastering code generation through pretrained models and deep reinforcement learning. Advances in Neural Information Processing Systems, 35:21314–21328, 2022.
[32] Jiayi Pan, Xingyao Wang, Graham Neubig, Navdeep Jaitly, Heng Ji, Alane Suhr, and Yizhe Zhang. Training software engineering agents and verifiers with SWE-Gym. arXiv preprint arXiv:2412.21139, 2024.
[33] Shuo Ren, Daya Guo, Shuai Lu, Long Zhou, Shujie Liu, Duyu Tang, Neel Sundaresan, Ming Zhou, Ambrosio Blanco, and Shuai Ma. CodeBLEU: A method for automatic evaluation of code synthesis, 2020.
[34] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback, 2023.
[35] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models, 2023.
[36] Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate, 2023.
[37] Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Shuming Shi, and Zhaopeng Tu. Encouraging divergent thinking in large language models through
multi-agent debate, 2024.
[38] Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, and Jie Zhou. AgentVerse: Facilitating multi-agent collaboration and exploring emergent behaviors, 2023.
[39] Md. Ashraful Islam, Mohammed Eunus Ali, and Md Rizwan Parvez. MapCoder: Multi-agent code generation for competitive problem solving, 2024.
[40] Chen Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, Jiahao Li, Cheng Yang, Weize Chen, Yusheng Su, Xin Cong, Juyuan Xu, Dahai Li, Zhiyuan Liu, and Maosong Sun. ChatDev: Communicative agents for software development, 2024.
[41] Qwen Team: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2025.
[42] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[43] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741, 2023.
[44] Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Yang Yue, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in LLMs beyond the base model?, 2025.
[45] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y.
Wu, and Daya Guo. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models, 2024.
[46] Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, Haibin Lin, Zhiqi Lin, Bole Ma, Guangming Sheng, Yuxuan Tong, Chi Zhang, Mofan Zhang, Wang Zhang, Hang Zhu, Jinhua Zhu, Jiaze Chen, Jiangjie Chen, Chengyi Wang, Hongli Yu, Weinan Dai, Yuxuan Song, Xiangpeng Wei, Hao Zhou, Jingjing Liu, Wei-Ying Ma, Ya-Qin Zhang, Lin Yan, Mu Qiao, Yonghui Wu, and Mingxuan Wang. DAPO: An open-source LLM reinforcement learning system at scale, 2025.
[47] Jian Hu, Jason Klein Liu, and Wei Shen. REINFORCE++: An efficient RLHF algorithm with robustness to both prompt and reward models, 2025.
[48] Rui Ye, Shuo Tang, Rui Ge, Yaxin Du, Zhenfei Yin, Siheng Chen, and Jing Shao. MAS-GPT: Training LLMs to build LLM-based multi-agent systems, 2025.
[49] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41–48, 2009.
[50] Qwen Team: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin
Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2025.
[51] Qwen. Qwen3: Think deeper, act faster. https://qwenlm.github.io/blog/qwen3/, 2025. Accessed: May 10, 2025.
[52] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, et al. The Llama 3 herd of models, 2024.
[53] Marah Abdin, Jyoti Aneja, Harkirat Behl, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, Michael Harrison, Russell J. Hewett, Mojan Javaheripi, Piero Kauffmann, James R. Lee, Yin Tat Lee, Yuanzhi Li, Weishung Liu, Caio C. T. Mendes, Anh Nguyen, Eric Price, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Xin Wang, Rachel Ward, Yue Wu, Dingli Yu, Cyril Zhang, and Yi Zhang. Phi-4 technical report, 2024.
[54] DeepSeek-AI: Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, et al. DeepSeek-V3 technical report, 2025.
[55] DeepSeek-AI: Qihao Zhu, Daya Guo, Zhihong Shao, Dejian Yang, Peiyi Wang, Runxin Xu, Y. Wu, Yukun Li, Huazuo Gao, Shirong Ma, Wangding Zeng, Xiao Bi, Zihui Gu, Hanwei Xu, Damai Dai, Kai Dong, Liyue Zhang, Yishi Piao, Zhibin Gou, Zhenda Xie, Zhewen Hao, Bingxuan Wang, Junxiao Song, Deli Chen, Xin Xie, Kang Guan, Yuxiang You, Aixin Liu, Qiushi Du, Wenjun Gao, Xuan Lu, Qinyu Chen, Yaohui Wang, Chengqi Deng, Jiashi Li, Chenggang Zhao, Chong Ruan, Fuli Luo, and Wenfeng Liang.
DeepSeek-Coder-V2: Breaking the barrier of closed-source models in code intelligence, 2024.
[56] Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, Kai Dang, Yang Fan, Yichang Zhang, An Yang, Rui Men, Fei Huang, Bo Zheng, Yibo Miao, Shanghaoran Quan, Yunlong Feng, Xingzhang Ren, Xuancheng Ren, Jingren Zhou, and Junyang Lin. Qwen2.5-Coder technical report, 2024.
[57] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[58] OpenAI: Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, Alex Iftimie, Alex Karpenko, Alex Tachard Passos, Alexander Neitz, Alexander Prokofiev, Alexander Wei, et al. OpenAI o1 system card, 2024.
[59] xAI. Grok 3 Beta: The Age of Reasoning Agents. https://x.ai/news/grok-3, 2025. Accessed: May 10, 2025.
[60] OpenAI. GPT-4o mini: Advancing cost-efficient intelligence. https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence, 2024. Accessed: May 10, 2025.
[61] Sirui Hong, Mingchen Zhuge, Jiaqi Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin,
Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, and Jürgen Schmidhuber. MetaGPT: Meta programming for a multi-agent collaborative framework, 2024.
[62] Yifei Zhou, Andrea Zanette, Jiayi Pan, Sergey Levine, and Aviral Kumar. ArCHer: Training language model agents via hierarchical multi-turn RL, 2024.
[63] Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, and Sergey Levine. Offline RL for natural language generation with implicit language Q-learning, 2023.

A Dataset

A.1 Dataset Information

Figure 9 illustrates a typical task instance in SWE-Dev, detailing the entire development workflow. The process begins with the Project Requirement Description (PRD), which provides instructions and specifies the features to be implemented. The method under evaluation then generates code to complete the features described in the PRD, which is subsequently verified against the test suite to produce pass/fail results used to compute the pass rate. Additionally, the ground truth implementation for each PRD is included for reference. The tasks in SWE-Dev simulate real-world software development cycles within a repository context. For detailed information about each data field included in SWE-Dev tasks, please refer to Table 7.

[Figure 9 shows a worked example from the num2words package: a PRD with folder structure, per-file task descriptions, and function signatures with docstrings; ground truth and generated implementations of num2words and Num2Word_Base.to_currency; and the resulting pytest reports (e.g., 4 passed versus 3 failed, 3 passed).]

Figure 9: Example of a SWE-Dev sample. Each sample includes a Project Requirement Description (PRD) with folder structure, module-level task description, and masked docstrings, together with the corresponding ground truth implementation. Generated code is evaluated by the test function execution results. This structure supports realistic, testable feature development in a repository context.

Table 7: Description of each field of an SWE-Dev task instance object.
- PRD: Description of the project development requirements for this task.
- file_code: Incomplete code contents of the core files involved in the task.
- test_code: Content of the test code used to verify the task's functionality.
- dir_path: Root directory path of the project corresponding to this task instance.
- package_name: Name of the software package or module to which this task instance belongs.
- sample_name: Unique identifier or name for this task instance or sample
within the benchmark.
- src_dir: Relative directory path where the source code files for the project or task are located.
- test_dir: Relative directory path where the test code files for the project or task are located.
- test_file: Relative path of the unit test file used for executing tests.
- GT_file_code: Ground truth source code for the file to complete.
- GT_src_dict: Ground truth source dictionary, mapping file names/paths to their expected correct code content.
- dependency_dict: Dictionary describing the dependencies required by the current task (e.g., internal modules) and their relationships.
- call_tree: Function call tree or call graph of the code, representing the relationships between function calls.

A.2 Dataset Distribution

We present the distribution statistics of the training and test sets in SWE-Dev. Each sample includes a Project Requirement Document (PRD), which describes the feature to be implemented. The average PRD length is 1,845.4 tokens. On average, each sample includes at least 5 unit tests for functional evaluation, spans 3 source files, and requires the implementation of approximately 6 functions.

Figure 10: Dataset distribution of PRD tokens, number of test cases, number of files to complete, and number of functions per sample ((a) test set distribution; (b) train set distribution).

A.3 Dataset Diversity

We assess the diversity of SWE-Dev from two perspectives: sample-level diversity (Figure 11 and Figure 12) and package-level diversity (Figure 23).

Sample Diversity via t-SNE. To visualize the diversity of feature requirements, we perform t-SNE on PRD embeddings generated using OpenAI's text-embedding-ada-002 model.² We use 500 test samples and randomly sample 2,000 training samples. Each point represents a PRD, and the color denotes its corresponding package. The resulting distribution reveals rich semantic variation across tasks, even within the same package, highlighting the dataset's diversity in both content and functionality.
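The embedding-plus-t-SNE pipeline described above can be sketched end to end. This is a minimal sketch assuming scikit-learn is available; the random matrix stands in for real text-embedding-ada-002 embeddings (1536-dimensional), which would normally come from the OpenAI embeddings API.

```python
# Project high-dimensional PRD embeddings to 2-D with t-SNE for plotting.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Placeholder for real PRD embeddings: 50 PRDs x 1536 dims (ada-002 size).
prd_embeddings = rng.normal(size=(50, 1536))

coords = TSNE(
    n_components=2,   # 2-D for scatter plotting
    perplexity=5,     # must be smaller than the number of samples
    init="random",
    random_state=0,
).fit_transform(prd_embeddings)

print(coords.shape)  # one (x, y) point per PRD
```

Each row of `coords` would then be scatter-plotted, colored by its source package, as in Figures 11 and 12.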
Package Category Diversity. To analyze the functional diversity of the dataset, we classify packages into high-level categories based on their primary domain (e.g., web development, data science, utilities). The classification is performed using GPT-4o-mini with the prompt provided below (see Figure 23). The resulting distribution confirms that SWE-Dev spans a broad spectrum of software domains.

Figure 11: t-SNE visualization of PRDs in the test set.

Figure 12: t-SNE visualization of PRDs in the train set.

² https://platform.openai.com/docs/guides/embeddings

B Inference Results

To assess the capabilities and limitations of current LLMs on realistic feature-driven development (FDD) tasks, we conduct comprehensive inference-time evaluations on SWE-Dev. We study both single-agent and multi-agent systems, measuring their performance under consistent execution-based evaluation.

B.1 Single-Agent LLM Performance

We evaluate 27 state-of-the-art LLMs, including general-purpose chatbot models (e.g., GPT-4o, Claude 3.7) and reasoning models, shown in Table 9. Models are assessed using Pass@1 and Pass@3 on SWE-Dev's test set. To contextualize benchmark difficulty, we also compare results on HumanEval [20] and ComplexCodeEval [15], using Pass@3 and CodeBLEU respectively. Our findings show that SWE-Dev poses significantly greater challenges than existing benchmarks, with leading models achieving under 25% Pass@3 on hard tasks (Table 8).

Table 8: Evaluation of model performance across benchmarks. This table compares general-purpose and reasoning-focused LLMs on SWE-Dev (Pass@1 and Pass@3 for Easy and Hard splits), ComplexCodeEval (CodeBLEU),
and HumanEval (Pass@1 and Pass@3).

Model | SWE-Dev p@1 Easy | SWE-Dev p@1 Hard | SWE-Dev p@3 Easy | SWE-Dev p@3 Hard | CodeBLEU | HumanEval p@1 | HumanEval p@3
Chatbot models:
Qwen2.5-1.5B-Instruct | 8.05% | 1.23% | 10.76% | 2.22% | 29.72% | 57.20% | 69.76%
Qwen2.5-3B-Instruct | 15.93% | 5.27% | 21.99% | 7.47% | 12.27% | 62.68% | 75.00%
Qwen2.5-7B-Instruct | 25.74% | 6.68% | 33.35% | 7.73% | 20.00% | 82.68% | 87.13%
Llama-3.1-8B-Instruct | 26.43% | 7.94% | 33.01% | 10.24% | 20.18% | 68.20% | 77.87%
Qwen3-8B | 34.04% | 12.09% | 39.26% | 13.33% | 17.47% | 86.34% | 89.09%
Qwen2.5-Coder-14B-Instruct | 39.51% | 14.82% | 52.49% | 18.44% | 35.52% | 90.48% | 92.93%
Qwen2.5-14B-Instruct | 38.08% | 13.16% | 46.32% | 15.89% | 19.90% | 82.56% | 87.87%
DeepSeek-Coder-V2-Lite-Instruct | 21.53% | 8.19% | 29.68% | 11.33% | 26.63% | 80.98% | 85.18%
Qwen3-30B-A3B | 35.84% | 12.76% | 39.45% | 15.20% | 14.84% | 89.27% | 90.30%
Phi-4 | 21.99% | 5.57% | 27.89% | 8.56% | 33.85% | 86.46% | 90.01%
Qwen2.5-32B-Instruct | 43.64% | 10.15% | 51.24% | 11.69% | 19.76% | 88.90% | 92.44%
Qwen2.5-72B-Instruct | 49.01% | 10.62% | 57.20% | 12.33% | 22.15% | 83.66% | 86.46%
Llama-3.3-70B-Instruct | 33.84% | 12.85% | 39.57% | 14.95% | 21.29% | 84.51% | 88.54%
DeepSeek-V3 | 41.95% | 16.22% | 56.79% | 21.62% | 28.32% | 90.36% | 92.92%
GPT-4o | 54.37% | 19.13% | 68.70% | 21.91% | 33.38% | 88.41% | 92.93%
GPT-4o-mini | 34.47% | 11.09% | 41.94% | 13.84% | 25.00% | 85.97% | 89.00%
Claude-3.7-Sonnet | 53.09% | 19.74% | 56.35% | 24.25% | 29.63% | 93.66% | 95.36%
Reasoning models:
Claude-3.7-Sonnet-thinking | 49.47% | 22.51% | 56.58% | 29.28% | 29.80% | 91.22% | 97.62%
DeepSeek-R1-distill-Qwen2.5-7B | 6.30% | 1.29% | 10.30% | 1.95% | 21.05% | 86.10% | 93.29%
Qwen3-8B-thinking | 19.47% | 6.36% | 25.91% | 9.22% | 20.98% | 89.63% | 91.89%
Qwen3-30B-A3B-thinking | 23.63% | 8.30% | 31.00% | 11.60% | 25.00% | 93.04% | 99.57%
DeepSeek-R1-distill-Qwen2.5-32B | 24.25% | 9.79% | 40.53% | 19.04% | 27.98% | 95.17% | 97.87%
DeepSeek-R1-distill-Llama-70B | 32.73% | 8.19% | 45.72% | 11.33% | 25.95% | 96.95% | 98.53%
DeepSeek-R1-671B | 28.55% | 12.84% | 37.62% | 17.72% | 34.47% | 98.65% | 100%
QwQ-32B-Preview | 4.50% | 0.70% | 8.90% | 1.22% | 24.78% | 82.31% | 97.01%
grok-3-beta | 53.63% | 18.97% | 59.08% | 22.26% | 27.96% | 87.15% | 89.99%
o1 | 36.36% | 11.09% | 43.77% | 14.27% | 33.63% | 97.43% | 98.78%
o3 | 51.21% | 21.86% | 59.05% | 28.98% | 26.53% | 98.04% | 98.78%

Table 9: Information of evaluated LLMs.

Model | Size | Release Date | Open
Qwen/Qwen2.5-Coder-14B-Instruct | 14B | 2024-11-12 | ✓
deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct | 16B | 2024-06-17 | ✓
microsoft/phi-4 | 14B | 2024-12-12 | ✓
Qwen/Qwen2.5-1.5B-Instruct | 1.5B | 2024-09-19 | ✓
Qwen/Qwen2.5-3B-Instruct | 3B | 2024-09-19 | ✓
Qwen/Qwen2.5-7B-Instruct | 7B | 2024-09-19 | ✓
meta-llama/Llama-3.1-8B-Instruct | 8B | 2024-07-23 | ✓
Qwen/Qwen3-8B | 8B | 2025-04-29 | ✓
Qwen/Qwen2.5-14B-Instruct | 14B | 2024-09-19 | ✓
Qwen/Qwen3-30B-A3B | 30B | 2025-04-29 | ✓
Qwen/Qwen2.5-32B-Instruct | 32B | 2024-09-19 | ✓
Qwen/Qwen2.5-72B-Instruct | 72B | 2024-09-19 | ✓
meta-llama/Llama-3.3-70B-Instruct | 70B | 2024-12-06 | ✓
deepseek-ai/DeepSeek-V3 | – | 2024-12-26 | ✓
gpt-4o (OpenAI) | – | 2024-05-13 | ×
gpt-4o-mini (OpenAI) | – | 2024-07-18 | ×
gemini-2.5-pro (Google) | – | 2025-03-20 | ×
claude-3.7-sonnet (Anthropic) | – | 2025-02-25 | ×
claude-3.7-sonnet-thinking (Anthropic) | – | 2025-02-24 | ×
grok-3-beta (xAI) | – | 2025-02-19 | ×
o1 (OpenAI) | – | 2024-12-05 | ×
o3 (OpenAI) | – | 2025-04-16 | ×
deepseek-ai/DeepSeek-R1-Distill-Qwen-7B | 7B | 2025-01-20 | ✓
deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | 32B | 2025-01-20 | ✓
Qwen/QwQ-32B-Preview | 32B | 2025-03-06 | ✓
deepseek-ai/DeepSeek-R1-Distill-Llama-70B | 70B | 2025-01-20 | ✓
deepseek-ai/DeepSeek-R1 | – | 2025-01-20 | ✓

[Figure 13 depicts the structures of the evaluated multi-agent systems: AgentVerse, MAD, CoT, EvoMAC, LLM Debate, ChatDev, MapCoder, MetaGPT, Reflexion, and Self-Consistency.]

Figure 13: Multi-agent system workflow visualization.
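Pass@1 and Pass@3 in the tables above are commonly computed with the unbiased Pass@k estimator introduced alongside HumanEval [20]; this sketch assumes that convention (the paper does not spell out its estimator). Given n samples per task of which c pass, pass@k = 1 − C(n−c, k)/C(n, k).

```python
# Unbiased pass@k estimator (Chen et al. [20]): probability that at least
# one of k samples drawn without replacement from n generations, c of which
# pass, is a passing sample.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 3 generations per task, 1 of which passes:
print(pass_at_k(3, 1, 1))  # 1/3: a single draw passes one time in three
print(pass_at_k(3, 1, 3))  # 1.0: drawing all 3 always hits the passing one
```

Per-task values are then averaged over the benchmark to obtain the reported percentages.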
B.2 Multi-Agent System Performance

We evaluate 10 multi-agent systems (MAS), including both general-purpose MAS (e.g., AgentVerse, LLM Debate) and code-specific designs (e.g., EvoMAC, MapCoder). As detailed in Table 3, we compare each MAS against a single-agent baseline on execution success (Pass@1), total API call count, and cost-efficiency. Results show that while MAS can outperform single agents on complex tasks, simple strategies (e.g., Self-Refine) often strike a better balance between performance and resource usage than workflow-heavy systems like ChatDev. The MAS structures are visualized in Figure 13.

C Analysis
C.1 Analysis of PRD Quality

Our Project Requirement Descriptions (PRDs) are primarily derived from the original docstrings found within the repository source code. To enhance the quality and utility of these PRDs, we employed GPT-4o to refine and improve the original docstrings. To objectively assess this improvement, we recruited two domain experts to evaluate both the original and GPT-4o-enhanced docstrings across 100 randomly selected samples from SWE-Dev.

[Figure 14 is a scatter plot of instruction following rate (%) against Pass@1 (%) for chatbot and reasoning models, including GPT-4o, Claude-3.7-Sonnet and its thinking variant, DeepSeek-V3, DeepSeek-R1, Llama-3.3-70B-Instruct, Qwen2.5-32B-Instruct, and the DeepSeek-R1 distills.]

Figure 14: Comparison of reasoning and chatbot LLMs' IFR and performance on SWE-Dev. Reasoning models tend to outperform chatbots when they fully follow instructions, though their overall IFR is lower.

The experts rated each docstring on three critical dimensions (Clarity, Completeness, and Actionability) using a 0 to 5 scale, where higher scores indicate superior quality. The human evaluation guideline is shown in Figure 24. Participants were fully informed about the evaluation process and the nature of the task. The assessment involved only reviewing documentation and posed no ethical or privacy risks, adhering strictly to ethical standards for research involving human subjects. This evaluation provides a rigorous measure of how GPT-4o-refined docstrings enhance PRD quality in SWE-Dev.

C.2 Explanation of the Underperformance of Reasoning Models

Instruction Following Rate (IFR). Previous experiments have shown that reasoning models perform poorly on SWE-Dev. To investigate the reasons behind this, we analyzed the instruction-following ability of these models.
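The instruction-following analysis averages, over tasks, the fraction of PRD-required files that actually appear in the generated code. A minimal sketch of that computation (the file names are hypothetical):

```python
# Average, over tasks, of |generated ∩ required| / |required|.
def instruction_following_rate(generated_files, required_files):
    """Both arguments are lists of per-task sets of file paths."""
    assert len(generated_files) == len(required_files)
    per_task = [len(g & t) / len(t)
                for g, t in zip(generated_files, required_files)]
    return sum(per_task) / len(per_task)

# Hypothetical example: two tasks, the second missing one required file.
gen = [{"a.py", "b.py"}, {"a.py"}]
req = [{"a.py", "b.py"}, {"a.py", "c.py"}]
print(instruction_following_rate(gen, req))  # (1.0 + 0.5) / 2 = 0.75
```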
We measured, for each model's generated code, the percentage of code files that meet the PRD requirements as the instruction following rate (IFR). The metric is formally defined as:

$$\mathrm{IFR} = \frac{1}{n} \sum_{i=1}^{n} \frac{|G_i \cap T_i|}{|T_i|}$$

where $n$ denotes the total number of tasks, $G_i$ represents the set of files generated by the model for task $i$, and $T_i$ denotes the set of ground truth files required by the PRD for task $i$.

To further explore this, we compared reasoning models with their chatbot counterparts by evaluating their instruction following rate. Specifically, in Figure 14, the x-axis represents the instruction following rate, and the y-axis shows the performance of both reasoning models and their chatbot counterparts on tasks where their instruction following rate is 100%. As shown in the figure, we observe that: (i) Reasoning models generally have a lower instruction-following rate than their chatbot versions, which explains why they underperform when handling multiple tasks simultaneously. Reasoning models tend to struggle with SWE-Dev tasks that involve performing several steps in a single call, resulting in poorer performance overall. (ii) However, on tasks where both reasoning models and their chatbot versions have an instruction-following rate of 100%, reasoning models typically outperform the chatbots. This indicates the potential of reasoning models when they can fully adhere to instructions. (iii) Claude 3.7-Sonnet is an exception to this trend, as both its reasoning and chatbot versions exhibit similar instruction-following rates and performance, which contributes to Claude's superior results.

C.3 Error Analysis

Figure 15 presents the distribution of failure types for both single-agent and multi-agent systems on SWE-Dev. We sample 500 samples for error analysis and categorize errors
into five types: Incomplete, Logic, Syntax, Parameter, and Others (see the error classification prompt in Figure 21). Across both agent types, the most prevalent error is the Incomplete Error, where models fail to implement PRD-required functions, indicating persistent challenges in task decomposition and execution coverage. For single-agent models, Logic Errors are the second most common, followed by Parameter Errors and Syntax Errors. Interestingly, GPT-4o and Claude-3.7 show relatively fewer Syntax Errors, suggesting better adherence to Python syntax, while smaller models like GPT-4o-mini show a higher incidence of both Syntax and Parameter issues, reflecting their limited reasoning capacity and weaker control over function signatures.

In contrast, multi-agent systems exhibit a different pattern. While they reduce Incomplete Errors to some extent, they often incur more Logic or Syntax Errors, especially in methods like MAD and Self-Consistency, suggesting that while agents may cover more PRD content, coordination breakdowns or hallucinated reasoning steps can introduce new failure modes. Overall, the analysis highlights the need for improved function selection, robust reasoning alignment, and stronger control over generation structure, especially in collaborative multi-agent settings.

[Figure 15 shows per-model error counts, split into (a) single-agent systems (GPT-4o-mini, Qwen-2.5-32B, Llama-3.3-70B, Claude-3.7-thinking, GPT-4o) and (b) multi-agent systems (MAD, LLM Debate, Self-Consistency, CoT, Self-Refine), each broken down by Incomplete, Logic, Syntax, Parameter, and Other errors.]

Figure 15: Failure case distribution of single- and multi-agent systems.

C.4 Limitation and Future Work

Language Scope.
SWE-Dev currently targets Python, which, while widely used, does not reflect the full diversity of real-world programming languages. A natural extension is to support other major languages such as Java, JavaScript, and C++, enabling broader evaluation and enhancing generality.

Training Exploration. Our training experiments focus on standard techniques (SFT, RL, and role-wise MAS training), which yield modest gains. Future work could explore stronger RL [45–47], dynamic agent coordination [48], and curriculum learning [49]. Notably, SWE-Dev offers fine-grained complexity signals via call trees that can guide complexity-aware training.

C.5 Broader impacts

SWE-Dev is the first dataset tailored for autonomous feature-driven software development, addressing the gap between current automated coding and real-world software engineering demands. By providing large-scale, realistic tasks based on real repositories with executable tests, it enables rigorous and reliable evaluation of automated AI coding systems. SWE-Dev promotes the creation of more capable methods for complex software, driving innovation that can lower development costs and enhance software quality industry-wide.

D Detailed Benchmark Construction

D.1 Call tree generation

To accurately localize the implementation logic associated with each test case, we construct a call tree that captures the dynamic execution path from the test to the relevant source functions. This tree serves as the foundation for identifying the core feature logic and determining task complexity.

[Figure 16 sketches the call tree rooted at test_az.py: the test functions test_cardinal, test_year, and test_ordinal_num fan out to Num2Word_AZ methods such as to_cardinal, to_year, to_cardinal_num_last_vowel, and int_to_word.]

Figure 16: Example of a generated call tree for test_az.py.

Figure 16 shows a generated call tree for the file test_az.py, which contains multiple test functions such as test_cardinal
, test_year, and test_ordinal_num. Each test function serves as a root for its own call path, triggering downstream functions like Num2Word_AZ.to_cardinal and Num2Word_AZ.int_to_word. This tree structure reveals the multi-level and cross-functional logic activated during test execution, illustrating how test files connect to multiple feature implementations across the codebase. We use the call tree in two key ways:
1. To select target functions for masking during task generation, enabling controllable task complexity.
2. To trace which source files and logic a model must understand to solve the task, supporting fine-grained evaluation and curriculum learning.

D.2 Docstring Augmentation Prompt

To ensure high-quality task specifications, we augment original function-level docstrings using GPT-4o. Figure 22 shows the prompt we use to generate concise, informative docstrings conditioned on the full code context.

E Extended Related Work

E.1 Multi-agent system

For complex SE tasks that strain the context handling of single agents, Multi-Agent Systems (MAS) built from collaborating LLMs are an emerging research avenue. Existing frameworks like MetaGPT, ChatDev, and AgentVerse often rely on predefined agent roles and fixed interaction protocols. While effective on specific tasks, their hand-crafted structure limits generalization. Recent research explores trainable MAS, aiming for agents that dynamically adapt their organization or communication strategies. However, empirical studies of such adaptive MAS are largely constrained by benchmark complexity; evaluations are often confined to small-scale or synthetic tasks due to the lack of benchmarks providing complex interaction scenarios and reliable execution feedback. SWE-Dev's scale, complexity, and provision of executable feedback (via unit tests) establish it as the first testbed capable of supporting the training and evaluation of dynamic MAS on realistic, multi-file feature development scenarios.
F Experiment Settings
F.1 Inference
LLMs. We evaluate 17 chatbot LLMs of different model sizes, including Qwen2.5-Instruct models 1.5B/3B/7B/14B/32B/72B [50], Qwen3 models 8B/30B-A3B [51], Llama 3.1-8B/3.3-70B-Instruct [52], Phi 4 [53], Claude-3.7-Sonnet [11], Deepseek-V3 [54], GPT-4o [12], Deepseek-Coder-V2-Lite-Instruct [55], and Qwen2.5-Coder-14B-Instruct [56]. Additionally, we extend the evaluation to reasoning models, including Deepseek-R1-distill models (Qwen2.5 7B/32B, Llama-70B) [57], Qwen3 8B/30B-A3B (thinking) [51], QwQ-32B-Preview, Deepseek-R1 [57], OpenAI-o1 [58], Claude-3.7-Sonnet-thinking [11], and Grok-3-Beta [59].
Multi-Agent Systems. To provide a more comprehensive evaluation of SWE-Dev, we expand our study to include multi-agent systems (MAS) built on LLMs. Prior research has demonstrated that MAS can enhance performance on tasks requiring multi-step reasoning and coordination. In our experiments, all MAS are implemented using GPT-4o-mini [60] as the underlying model to ensure consistency across methods. We evaluate coordination-based MAS such as LLM Debate [36], Self Refine [34], Multi-Agent Debate (MAD) [37], and Self Consistency [35] that feature relatively simple agent interaction strategies. We further include structured, workflow-oriented MAS designed for code generation, including Agentverse [38], MetaGPT [61], ChatDev [40], MapCoder [39], and EvoMAC [18].
F.2 Training
F.3 Single-Agent Supervised Fine-tuning
We fine-tune the model using LoRA, applying low-rank adaptations (rank r = 16, scaling α = 16, dropout = 0.05) to the query, key, value, and output projection matrices of each attention sublayer. Training is performed with
a learning rate of 6×10⁻⁴ and a batch size of 32 sequences per gradient step, for up to 4 epochs. Checkpoints are saved every 50 steps, and the best model is selected based on validation loss over a held-out set of 100 examples. Fine-tuning is initialized from Qwen2.5-7B-Instruct and completed within 20 hours using 8 NVIDIA A100 GPUs. We leverage DeepSpeed Ulysses and Flash Attention to support efficient training with long input contexts.
F.4 Single-Agent Reinforcement Learning
For reinforcement learning (RL) training, we sampled 2k instances from SWE-Dev to balance computational feasibility and the ability to capture RL benefits. Specifically, we used Proximal Policy Optimization (PPO) [42] and Direct Preference Optimization (DPO) [43] to train Qwen2.5-7B-Instruct on 8 NVIDIA A100 GPUs.
PPO: Training was conducted with a batch size of 256 for 5 epochs, with a learning rate of 1×10⁻⁶ for the actor and 1×10⁻⁵ for the critic, saving checkpoints every 10 steps. We used a maximum prompt length of 8192 tokens and a micro-batch size of 32. The reward for PPO is calculated from the pass rate of the test cases.
DPO: For DPO training, we applied LoRA with a rank of 64, scaling factor α = 128, and dropout set to 0. The preference loss function (pref_loss) was set to sigmoid, which is commonly used in DPO for preference-based optimization. Training was performed for 5 epochs, using a batch size of 8 and a learning rate of 1×10⁻⁵.
For a fair comparison with SFT in §4.2.2, we used the same 2k training samples for both SFT and RL. The details for SFT training are outlined in Appendix F.3. These methods allow us to assess the impact of RL on model performance using the SWE-Dev dataset while maintaining efficient training.
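The sigmoid preference loss used in DPO is the standard objective −log σ(β · [(policy log-ratio of chosen over rejected) − (the same log-ratio under the reference model)]). A minimal numeric sketch of that formula (the β value and the log-probability inputs below are illustrative, not the paper's actual training values):

```python
import math

def dpo_sigmoid_loss(policy_chosen_logp, policy_rejected_logp,
                     ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO with the sigmoid preference loss:
    -log sigmoid(beta * (policy_logratio - reference_logratio))."""
    policy_logratio = policy_chosen_logp - policy_rejected_logp
    ref_logratio = ref_chosen_logp - ref_rejected_logp
    logits = beta * (policy_logratio - ref_logratio)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When the policy prefers the chosen response no more than the reference
# does, the loss sits at -log(0.5) = log 2; pushing the chosen response's
# likelihood up relative to the reference drives the loss below log 2.
print(dpo_sigmoid_loss(-1.0, -2.0, -1.0, -2.0))
print(dpo_sigmoid_loss(-1.0, -3.0, -1.0, -2.0))
```

The loss depends only on the gap between the policy and reference log-ratios, which is why LoRA adapters over a frozen base model (as used here) fit it naturally: the frozen base doubles as the reference.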
F.5 Multi-Agent Supervised Fine-tuning
In our multi-agent fine-tuning experiments, we utilize a simplified version of EvoMAC [18], retaining only two core roles: Organizer and Coder; see Figure 17. The fine-tuning process follows an iterative workflow. Initially, the Organizer processes the Project Requirement Description (PRD) and breaks it down into clearly defined subtasks or instructions. Subsequently, the Coder generates corresponding code implementations for these subtasks. The generated code is then evaluated using the provided ground truth (GT) test cases. Feedback from these evaluations informs subsequent iterations, enabling iterative refinement of both the task decomposition by the Organizer and the code generation by the Coder. Fig. 19 and Fig. 20 respectively show the prompts used for the Coder and Organizer.
Figure 17: Overview of Multi-agent Fine-tuning in EvoMAC. This framework leverages both textual and gradient-based supervision to improve multi-agent collaboration. During inference, an organizer assigns roles and coordinates a team of coders to generate code, which is then validated using ground truth test cases. Successful execution trajectories are collected and used to fine-tune both the organizer and coders individually via supervised learning, enabling role-specific optimization for complex software development tasks.
Rejection Sampling Procedure. To effectively leverage the feedback from GT test cases, we employ rejection
sampling—a method widely adopted in reinforcement learning and language model fine-tuning [62, 63]. The detailed procedure is as follows:
1. Iterative Reasoning with EvoMAC: For each training instance, EvoMAC executes multiple rounds of reasoning. In each iteration, the generated code from the Coder is tested against the GT test cases to compute its performance.
2. Selection of High-quality Trajectories: Trajectories that show improved performance over previous iterations (as indicated by an increased pass rate on GT test cases) are selectively retained. Conversely, trajectories that do not demonstrate progress or degrade in performance are discarded. This ensures that only beneficial and constructive data is used for fine-tuning.
3. Role-wise Fine-tuning: The retained high-quality trajectories are utilized to separately fine-tune the Organizer and the Coder. Specifically, the Organizer is trained to better structure and decompose tasks from PRDs, while the Coder is refined to enhance code generation capabilities for defined subtasks. This role-specific fine-tuning promotes specialization and improves overall performance.
As shown in §4.2.3, through this simplified EvoMAC and structured rejection sampling approach, our multi-agent fine-tuning effectively enhances the capabilities of each agent, contributing to significant performance gains on SWE-Dev.
Single LLM inference prompt
# AIM: You need to assist me with a Python package feature development task. I will provide a Product Requirements Document (PRD) that details the functionality and lists the empty functions that need to be implemented across different file paths. I will also provide the complete "Code Context" of all files mentioned in the PRD. Your task is to implement ONLY the empty functions described in the PRD while preserving ALL OTHER CODE in the provided files exactly as is.
This is absolutely critical - you must keep all imports, class definitions, functions, comments, and other code that is not explicitly mentioned in the PRD for implementation. When implementing the functions:
1. Carefully identify which functions from the PRD need implementation. Implement them based on the docstrings and specifications in the PRD
2. Do not add any new "import" statements unless absolutely necessary
3. Do not modify ANY existing code structure, only implement the empty functions
For each file mentioned in the PRD, you MUST output the COMPLETE file code with your implementations inserted. Your output format must follow this exact pattern:
# OUTPUT FORMAT FOR EACH FILE:
@ [relative path/filename]
```python
[COMPLETE file code including ALL original code plus your implementations]
```
@ [relative path/filename]
```python
[COMPLETE file code including ALL original code plus your implementations]
```
IMPORTANT: Make sure your output includes EVERY function, class, import statement, and comment from the original code context. The only difference should be that the empty functions specified in the PRD are now implemented.
# PRD: {PRD}
# Code Context: {code_snippet}
Figure 18: Single LLM Inference Prompt.
G Licensing
All codebases and data used in this work are sourced from publicly available GitHub repositories. We have ensured compliance with the corresponding licenses of these repositories, respecting all terms of use and attribution requirements.
H Prompts
This section includes all prompts used in the generation, evaluation, and analysis process.
EvoMAC
Coding Agent Prompt
You are Programmer. we are both working at ChatDev. We share a common interest in collaborating to successfully complete a task assigned by a new customer. You can write/create computer software or applications by providing a specific programming language to the computer. You have extensive computing and coding experience in many varieties of programming languages and platforms, such as Python, Java, C, C++, HTML, CSS, JavaScript, XML, SQL, PHP, etc. Here is a new customer's task: {task}. To complete the task, you must write a response that appropriately solves the requested instruction based on your expertise and customer's needs. According to the new user's task, you should concentrate on accomplishing the following subtask and pay no heed to any other requirements within the task. Subtask: {subtask}. Programming Language: python, Codes: {codes}
Figure 19: Role Prompt for EvoMAC Coding Agent.
EvoMAC Organizing Agent Prompt
As the Leader of a coding team, you should analyze and break down the problem into several specific subtasks and assign a clear goal to each subtask. Ensure each subtask is extremely detailed, with a clear description and goal, including the corresponding PRD statement where necessary. The workflow should be divided into minimal, executable subtasks, each involving one method implementation. The target_code should only contain the relative paths and function names for the specific code that is required for that subtask. Each subtask should be assigned a unique task_id, and the description should reflect the exact requirements of the PRD corresponding to that method or task. The target_code should be precise, containing only the specific Python code (relative path and method/function name) that corresponds to the subtask's scope.
The format should strictly follow the JSON structure below:
```json
[
  {
    "task_id": "1",
    "description": "Task Description",
    "target_code": [
      "relative_python_path:function_name",
      "relative_python_path:class_name.method_name"
    ]
  }
]
```
Use the backticks for your workflow only.
Note:
(1) Each subtask should be self-contained and represent one method's implementation.
(2) The `description` should be based on specific statements from the PRD, and it must explain what the subtask is aiming to achieve.
(3) The `target_code` should only reference the code paths and function names for the methods to be implemented for the subtask.
(4) The number of subtasks should not exceed 5. Some tasks might combine multiple smaller functions if needed to fit within the limit.
(5) Each subtask is handled independently by different agents, so the description should be thorough, ensuring clarity even without the full context of the PRD.
Figure 20: Role Prompt for EvoMAC Organizing Agent.
Error Classification Prompt
You are an error classification expert. Based on the provided PRD, LLM-generated code, and error message, your task is to analyze and categorize the primary issue.
1. Analyze the root cause of the problem using the PRD, the code, and the error message.
2. If multiple issues exist, return only the most severe and primary one (return exactly one ProblemType).
3. Return the result in strict JSON format with the following structure:
{
  "ProblemType": {
    "MainCategory": "Main error category",
    "SubCategory": "Specific sub-category of the issue",
    "Reasoning": {
      "SymptomAnalysis":
"Observed abnormal behavior (in Chinese)",
      "RootCause": "Attribution analysis combining PRD and code (in Chinese)",
      "ErrorMechanism": "Technical explanation of how the error occurs (in Chinese)"
    }
  }
}
Below is the data provided to you:
PRD: {prd}
Generated Code: {results}
Error Message: {input_text}
Please ensure your response strictly follows the JSON format above. The allowed values for MainCategory are limited to the following five options - read them carefully and choose the most appropriate one:
1. Logic Error: Logical errors such as assertion failures or failure to meet PRD requirements.
2. Syntax Error: Syntax issues such as unexpected tokens, indentation problems, etc.
3. Parameter Error: The function required by the PRD is present, but input/output parameters are incorrect or missing.
4. Incomplete Error: Some required functions are entirely missing as per the PRD. Make sure to distinguish between a truly missing function and one that exists but contains logic or syntax errors.
5. Others: Any other issues that do not fit the above categories.
You must carefully select the MainCategory to ensure accuracy. Do not return any MainCategory that is not listed above. Do not return an empty MainCategory.
Figure 21: Prompt Template for Error Type Classification.
Docstring Augmentation Prompt
# Context: The following Python code is provided for reference. It includes functions, classes, and other elements that provide context for the function or class below. Additionally, any constants or variables defined outside functions/classes are considered as part of the context and should be explained if used.
# Full Code:
```python
{full_code}
```
# Code for {name}:
```python
{code_snippet}
```
# Docstring: Please generate a concise and clear docstring for the above {name} based on the full code context. Ensure the docstring briefly explains the {name}'s purpose, parameters, return values, and any relevant dependencies or interactions with other parts of the code.
If there are any constants or variables used within the {name}, explain their role and significance, including where they are defined and how they interact with the function or class.
For functions: describe the input parameters, expected output, and any important side effects in a few sentences. Also, explain any constants used inside the function (if applicable).
For class.methods: describe the input parameters, expected output, and any important side effects in a few sentences. Also, explain any constants used inside the function (if applicable).
For classes: describe the main attributes and methods, along with the general purpose of the class in a brief summary. Mention any constants used in the class and explain their purpose and how they interact with class methods and attributes.
Keep the docstring focused, avoiding unnecessary details or repetition.
# Output format
Your response should strictly follow the format below, without any other text or comments.
\"\"\"
docstring
\"\"\"
Figure 22: Docstring Augmentation Prompt in Task Generation.
Categories for Classifying Packages
You are a Python expert. Given the name of a PyPI package, classify it into ONE category from the list below based on its MOST central and primary purpose.
Categories:
1. Web & Network Automation
Packages that support automation of web browsing,
API communication, and network protocols.
Criteria: Enables browser control, HTTP requests, network operations, or web server handling.
2. Data Processing & Integration
Packages that extract, parse, or convert structured/unstructured data formats.
Criteria: Handles parsing or converting text, JSON, YAML, XML, or dates.
3. Security & Access Control
Packages that focus on authentication, authorization, or access control mechanisms.
Criteria: Implements rules, policies, or authentication methods.
4. Command-Line & Developer Tools
Packages that assist in building CLI tools, test frameworks, or code quality analysis.
Criteria: Aimed at improving the development experience, command-line interfaces, or code quality.
5. Cloud & Data Storage
Packages interacting with cloud services, databases, or data storage solutions.
Criteria: Provides interfaces or tools to access, manage, or validate remote data or cloud resources.
6. Data Science & Visualization
Packages used for scientific computing, visualization, or statistical evaluation.
Criteria: Supports data analysis, visualization, or scientific research.
7. Others
Packages that do not clearly belong in the other categories or are too general/specialized.
Criteria: Doesn't strongly align with the definitions above or serves a unique/niche purpose.
Please output only the category number (only one category), no explanation unless asked. Choose the single best fit.
Package name: {package_name}
You must strictly follow the format below, only a number no other text:
1
Figure 23: Prompt Template for Classifying Packages.
Human Evaluation Guideline for PRD Quality
Each docstring is evaluated independently along the following three dimensions, using a 0-5 scale (0 = very poor, 5 = excellent):
Clarity - How easy the docstring is to understand for a competent software engineer. Consider language clarity, readability, and absence of ambiguity.
Completeness - Whether the docstring provides all necessary information to understand the function's behavior. Consider whether inputs, outputs, parameters, and important logic are described.
Actionability - How effectively the docstring guides actual implementation. Consider whether a developer could use the docstring alone to reasonably implement the function.
Rating Scale:
5: Excellent - No issues; highly clear, complete, and actionable.
4: Good - Minor improvements possible.
3: Fair - Understandable but lacking in one area.
2: Poor - Vague or missing key information.
1: Very Poor - Hard to follow or largely unhelpful.
0: Unusable - Cannot inform implementation at all.
If the original docstring is missing or boilerplate-only, please rate accordingly. Docstrings are to be rated individually without direct comparison.
Figure 24: Human Evaluation Guideline for PRD Quality.
arXiv:2505.16983v1 [cs.CL] 22 May 2025
LLM as Effective Streaming Processor: Bridging Streaming-Batch Mismatches with Group Position Encoding
Junlong Tong1,2, Jinlan Fu3, Zixuan Lin4, Yingqi Fan2, Anhao Zhao2, Hui Su5, Xiaoyu Shen2*
1Shanghai Jiao Tong University, 2Ningbo Key Laboratory of Spatial Intelligence and Digital Derivative, Institute of Digital Twin, EIT, 3National University of Singapore, 4University of Science and Technology of China, 5Meituan Inc.
jl-tong@sjtu.edu.cn xyshen@eitech.edu.cn
Abstract
Large Language Models (LLMs) are primarily designed for batch processing. Existing methods for adapting LLMs to streaming rely either on expensive re-encoding or specialized architectures with limited scalability. This work identifies three key mismatches in adapting batch-oriented LLMs to streaming: (1) input-attention, (2) output-attention, and (3) position-ID mismatches. While it is commonly assumed that the latter two mismatches require frequent re-encoding, our analysis reveals that only the input-attention mismatch significantly impacts performance, indicating that re-encoding outputs is largely unnecessary. To better understand this discrepancy with the common assumption, we provide the first comprehensive analysis of the impact of position encoding on LLMs in streaming, showing that preserving relative positions within source and target contexts is more critical than maintaining absolute order. Motivated by the above analysis, we introduce a group position encoding paradigm built on batch architectures to enhance consistency between streaming and batch modes. Extensive experiments on cross-lingual and cross-modal tasks demonstrate that our method outperforms existing approaches. Our method requires no architectural modifications and exhibits strong generalization in both streaming and batch modes. The code is available at https://github.com/EIT-NLP/StreamingLLM.
1 Introduction
Large language models (LLMs) have revolutionized a multitude of tasks (Zhang et al., 2023b; Liu et al., 2024; Chu et al., 2023; Kojima et al., 2022; Kocmi and Federmann, 2023). However, research on LLMs has largely focused on batch-processing, where the entire input is processed at once (Zhao et al., 2023). In contrast, human cognition operates incrementally, interpreting information as it arrives—a capability essential for real-time decision-making, interactive dialogue, and other latency-sensitive applications (Gonzalez et al., 2003; Altmann and Mirković, 2009). Bridging this gap between batch-oriented LLMs and streaming-aware processing is vital for unlocking their potential in dynamic, real-world scenarios.
*Corresponding author
[Figure 1 omitted: schematic comparing the two paradigms, showing token arrival order, position IDs, attendable attention, and generation/re-encoding paths.]
Figure 1: Two streaming paradigms of LLMs: (a) Batch-streaming simulates batch-processing, while interleaved-streaming encodes streaming data in arrival order. (a-1) Input-Attention Mismatch: Whether the source can attend to the target. (a-2) Out-Attention Mismatch: Whether the target can attend to the latest source. (a-3) Position-ID Mismatch: Whether the position IDs reflect the true token order. (b) Batch-streaming relies on re-encoding to mimic batch-processing, differing only in accessible information range.
A naive strategy to adapt LLMs
https://arxiv.org/abs/2505.16983v1
for streaming involves iteratively re-encoding both new inputs and prior outputs with each incoming data segment (Agostinelli et al., 2024; Wang et al., 2024; Guo et al., 2024b; Koshkin et al., 2024), as illustrated in Figure 1(b). While this batch-streaming paradigm preserves compatibility with batch-processing architectures, it introduces prohibitive computational costs. Existing efforts to optimize LLMs for streaming data typically fall into two categories: (1) directly encoding streaming data in arrival order (Du et al., 2024; Yang et al., 2024), an interleaved-streaming paradigm, which introduces structural mismatches with the batch-processing setups used in pre-training and degrades performance; (2) designing entirely new architectures tailored to the streaming mode (Guo et al., 2024a; Tsunoo et al., 2024; Chen et al., 2024), which is costly, lacks scalability, and fails to fully leverage pre-trained LLM capabilities. Furthermore, existing methods lack rigorous analysis of the fundamental discrepancies between batch and streaming processing modes.
This work tackles these limitations by identifying three key mismatches in adapting batch-oriented LLMs to streaming, as shown in Figure 1:
• Input-Attention Mismatch: Batch-streaming confines input tokens to attending only to prior inputs, whereas interleaved-streaming permits attention to previously decoded outputs.
• Out-Attention Mismatch: Batch-streaming allows decoded output tokens to attend to all received input tokens via KV cache re-encoding, while the interleaved-streaming mode limits each output token's attention to the subset of inputs available at decoding time.
• Position-ID Mismatch: Batch-streaming relies on position re-encoding, assigning contiguous position IDs to inputs followed by outputs, whereas interleaved-streaming processing alternates between inputs and outputs incrementally, resulting in discontinuous position IDs that disrupt sequential coherence.
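The position-ID mismatch can be made concrete with a toy stream. A minimal sketch (the x*/y* naming convention used to tag source and target tokens is purely illustrative):

```python
def interleaved_position_ids(stream):
    """Interleaved-streaming: tokens receive contiguous IDs in arrival
    order, so source and target position IDs end up interleaved."""
    return {tok: i for i, tok in enumerate(stream)}

def reencoded_position_ids(stream):
    """Batch-streaming re-encoding: all source tokens are numbered first,
    then all target tokens, matching the contiguous layout the model
    saw during batch pre-training."""
    src = [t for t in stream if t.startswith("x")]
    tgt = [t for t in stream if t.startswith("y")]
    return {tok: i for i, tok in enumerate(src + tgt)}

# Source and target tokens arrive interleaved: x1 y1 x2 y2 x3.
stream = ["x1", "y1", "x2", "y2", "x3"]
print(interleaved_position_ids(stream))  # x2 sits at position 2
print(reencoded_position_ids(stream))    # x2 sits at position 1, next to x1
```

Under the interleaved scheme the gap between x1 and x2 is 2 rather than 1, so relative positional differences no longer mirror the within-source order; re-encoding restores contiguity, at the cost of recomputing embeddings for every new segment.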
Building on the identification of these mismatches, we systematically studied their effects on LLM performance. Our analysis revealed that the input-attention mismatch does affect streaming model performance. In contrast, the out-attention and position-ID mismatches have negligible effects. A common assumption is that streaming models require re-encoding of previously generated content to mitigate token position inconsistencies arising from the incremental nature of the streaming setting (Raffel et al., 2024; Guo et al., 2024a; He et al., 2024), as shown in Figure 1(b). However, our empirical findings do not support this hypothesis. Instead, we observe that re-encoding the output is not necessary¹. This discrepancy with the common assumption raises a fundamental question: How does position encoding impact LLMs in streaming scenarios? And how should we design appropriate position encoding for streaming LLMs?
Existing research on positional encoding in LLMs has largely focused on static scenarios (Likhomanenko et al., 2021; Haviv et al., 2022; Kazemnejad et al., 2024), while its role in streaming scenarios remains underexplored. We conducted a more in-depth analysis to further explore the impact of position encoding on streaming models. Experimental results reveal that the absolute positional order of tokens has a negligible effect on model performance in streaming tasks. However, maintaining the internal relative order within the source and target sequences is significantly more important. Based on these findings, we propose a grouped position encoding streaming paradigm built
on batch architectures (group-streaming), which groups input and output position IDs to enable more consistent processing with the batch model. This strategy is not only computationally efficient but also generalizable across different tasks and model architectures. We validated its effectiveness on cross-lingual (machine translation) and cross-modal (automatic speech recognition) tasks, demonstrating that it significantly outperforms existing solutions, including LLMs with more complex streaming-optimized architectures.
The main contributions of this study can be summarized as: (1) We systematically analyze the mismatches between batch and streaming processing in LLMs, providing deep insights into key factors affecting their adaptation to streaming. Contrary to mainstream assumptions, our experiments reveal that position disorder is not the primary factor affecting LLM streaming performance. (2) We conduct the first comprehensive study on the impact of position encoding in streaming scenarios, demonstrating that absolute positional order is unnecessary, while maintaining relative order within source and target contexts is more critical. (3) We introduce a group streaming paradigm for streaming LLMs. This method imposes no architectural constraints on batch-processing LLMs, allowing seamless application to any pre-trained LLM while ensuring high scalability and adaptability to various real-world streaming tasks.
¹We clarify that re-encoding the target tokens is solely for refining the generation of the latest token without altering previously generated content.
2 Streaming-Batch Mismatches
LLMs are pre-trained in a batch-processing paradigm, where the entire input sequence X = [x_1, ..., x_n] is processed simultaneously to generate the output sequence Y = [y_1, ..., y_m]. This paradigm assumes full input availability, allowing both self-attention and cross-attention mechanisms to operate over complete sequences.
In contrast, streaming tasks require incremental processing, where inputs and outputs arrive and are processed in an interleaved manner over time. At any time step t, the model only has access to a partial input sequence X_t = [x_1, ..., x_t] and generates a corresponding partial output sequence Y_{t'} = [y_1, ..., y_{t'}]. This shift from batch to streaming introduces three key mismatches:
Input-Attention Mismatch In batch-streaming mode, self-attention enforces a strict ordering, where each input token x_i can only attend to prior inputs X_{<i}. This is typically expressed as:
    h_i = SelfAttention(x_i, X_{<i}),    (1)
where h_i is the hidden representation of x_i. However, in interleaved-streaming mode, as outputs are generated incrementally, previously decoded outputs Y_{<t'} become available and are included in the attention context:
    h_i^{interleaved} = SelfAttention(x_i, X_{<i} ∪ Y_{<t'}).    (2)
This disrupts the model's pre-trained assumptions, as input tokens in batch mode never attend to outputs, potentially leading to degraded performance.
Out-Attention Mismatch In batch-streaming mode, each generated output token y_k can attend to all input tokens X by KV cache re-encoding:
    h_k = CrossAttention(y_k, X),  k ≤ j,    (3)
where y_j is the latest generated output token. However, in interleaved-streaming mode, output tokens can only attend to the subset of inputs X_{≤t} received up to the current step:
    h_j^{interleaved} = CrossAttention(y_j, X_{≤t}).    (4)
This temporal constraint means that the hidden representation of each decoded token is computed based only on the partial input sequence available at
the time, which may lead to inconsistencies compared to batch-mode processing.
Position-ID Mismatch In batch-streaming, tokens receive contiguous position IDs by position re-encoding, so that for an input sequence X_t and output sequence Y_{t'}, we have:
    p(x_i) = i,  p(y_j) = t + j,    (5)
ensuring that the relative positional differences, p(t_j) − p(t_i), accurately reflect the true token order and guide the positional embedding function g(p(t)) to generate coherent embeddings. For interleaved-streaming, however, inputs and outputs are interleaved (e.g., x_1, y_1, x_2, y_2, ...) while still being assigned continuous IDs from 1 to n+m. This misrepresents the true temporal gaps between tokens; the relative differences p(t_j) − p(t_i) no longer mirror the actual sequence structure.
3 Impact Analysis of Mismatches
Applying batch-trained LLMs to streaming mode introduces structural mismatches. Existing research has not systematically analyzed the nature of these mismatches between streaming and batch-processing. We employ a stepwise ablation approach to systematically isolate each mismatch and assess its impact on streaming task performance.
Setup This section analyzes the impact of the three mismatches using the streaming text translation task with the wait-k reading & writing policy (Ma et al., 2019). All experiments are conducted on the IWSLT-17 dataset (Cettolo et al., 2017), covering two cross-lingual translation tasks: En-Fr and En-De. We use the Gemma2-2B-Instruct model (Team et al., 2024) and the Phi3-Mini-Instruct model (Abdin et al., 2024) with 3.8B parameters for all experiments, evaluating model performance using BLEU scores (Post, 2018).
Effects of Input-Attention Mismatch The interleaved-streaming mode, which exhibits all three mismatches, serves as our baseline for comparison. The batch-streaming mode eliminates the input-attention mismatch by preventing source tokens from attending to generated target tokens.
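The wait-k policy used in this setup reads k source tokens before emitting the first target token, then alternates reads and writes. A minimal sketch of the resulting schedule (the function name and action labels are illustrative, not from the paper):

```python
def wait_k_schedule(src_len, tgt_len, k):
    """wait-k policy (Ma et al., 2019): wait for k source tokens, then
    alternate READ/WRITE; once the source is exhausted, keep writing."""
    actions, read, written = [], 0, 0
    while written < tgt_len:
        # Before emitting target token (written + 1), the policy wants
        # k + written source tokens, capped at the source length.
        if read < min(k + written, src_len):
            actions.append("READ")   # consume the next source token
            read += 1
        else:
            actions.append("WRITE")  # emit the next target token
            written += 1
    return actions

# wait-1 on a 3-token source and target alternates strictly: R W R W R W
print(wait_k_schedule(3, 3, 1))
```

Larger k front-loads more reads, trading latency for a longer visible source prefix at each write; this is the knob varied in the columns of Table 1.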
Building on this, we apply the same positional encoding as interleaved-streaming within the batch- streaming framework. Notably, without KV cache and position embedding re-encoding, the batch- streaming approach still retains both out-attention mismatch and position-ID mismatch. Table 1 shows that eliminating the input- attention mismatch improves BLEU scores across Dataset ModeGemma2-2B-Instruct (wait-k) 1 3 5 7 Max. Imp. En-FrInterleaved-streaming 30.93±0.08 37.67±0.11 39.12±0.09 39.65±0.07 Batch-streaming (No re.) 33.13±0.09↑2.2039.29±0.06↑1.6240.66±0.10↑1.5440.82±0.09↑1.17↑2.20 Batch-streaming (Pos re.) 33.19±0.07↑0.0639.43±0.13↑0.1440.78±0.08↑0.1240.89±0.07↑0.07↑0.14 Batch-streaming (All re.) 33.47±0.10↑0.2839.62±0.08↑0.1940.91±0.11↑0.1341.01±0.09↑0.12↑0.28 En-DeInterleaved-streaming 20.44±0.06 26.86±0.10 29.13±0.08 29.90±0.07 Batch-streaming (No re.) 21.97±0.04↑1.5328.30±0.07↑1.4430.52±0.06↑1.3931.36±0.05↑1.46↑1.53 Batch-streaming (Pos re.) 22.06±0.03↑0.0928.38±0.05↑0.0830.63±0.04↑0.1131.45±0.05↑0.09↑0.11 Batch-streaming (All re.) 22.25±0.05↑0.1928.61±0.06↑0.2330.77±0.07↑0.1431.56±0.06↑0.11↑0.23 Dataset ModePhi3-Mini-Instruct (wait-k) 1 3 5 7 Max. Imp. En-FrInterleaved-streaming 29.03±0.10 36.54±0.14 38.42±0.13 39.27±0.09 Batch-streaming (No re.) 30.96±0.10↑1.9338.42±0.08↑1.8839.80±0.07↑1.4240.93±0.11↑1.66↑1.93 Batch-streaming (Pos re.) 31.08±0.06↑0.1238.51±0.08↑0.0939.87±0.12↑0.0740.96±0.05↑0.03↑0.12 Batch-streaming (All re.) 31.21±0.09↑0.2038.67±0.13↑0.1639.98±0.11↑0.1141.05±0.07↑0.09↑0.20 En-DeInterleaved-streaming 20.74±0.05 27.46±0.14 29.56±0.10 30.67±0.06 Batch-streaming (No re.) 22.21±0.08↑1.4728.85±0.11↑1.3930.88±0.05↑1.3231.92±0.07↑1.25↑1.47 Batch-streaming (Pos re.) 22.28±0.06↑0.0728.87±0.09↑0.0230.91±0.11↑0.0331.95±0.13↑0.03↑0.07 Batch-streaming (All re.) 22.45±0.07↑0.1728.98±0.07↑0.1131.01±0.07↑0.1032.03±0.07↑0.08↑0.17 Table 1: The BLEU performance variations reflect the stepwise elimination of mismatches between batch processing and streaming. 
Interleaved-streaming represents the presence of all three mismatches. Batch-streaming (No re.) corresponds to batch-streaming with interleaved position encoding, where the input-attention mismatch is eliminated. Batch-streaming (Pos re.) further removes the position-ID mismatch through position re-encoding. Finally, Batch-streaming (All re.) eliminates the out-attention mismatch by re-encoding the KV cache.

different wait-k strategies, with a maximum increase of 2.20 on the En-Fr translation task and 1.53 on the En-De translation task. This indicates that processing streaming data in an interleaved-streaming manner with a batch-pretrained model leads
https://arxiv.org/abs/2505.16983v1
to performance degradation.

Effects of Position-ID Mismatch Re-encoding can address the remaining two mismatches. We further decompose re-encoding into two components: KV cache re-encoding and position embedding re-encoding. The former enables target tokens to attend to the most recently available tokens, thereby resolving the out-attention mismatch. The latter corrects the position-ID mismatch by adjusting position embeddings to align with the streaming paradigm. Expanding on this, the batch-streaming paradigm with position embedding re-encoding further resolves the position-ID mismatch while still retaining the out-attention mismatch.

Table 1 shows that position embedding re-encoding does not lead to significant performance improvements, with a maximum gain of only 0.14 on the En-Fr and En-De translation tasks. This suggests that position-ID mismatch is not a primary factor affecting streaming task performance, challenging previous claims regarding the role of positional encoding in streaming models.

Effects of Out-Attention Mismatch KV cache re-encoding can address the remaining out-attention mismatch. Building on the previous setting, incorporating both KV cache and position embedding re-encoding into the batch-streaming paradigm eliminates all mismatches, making it closely resemble the batch-processing setting.

Table 1 shows that re-encoding previously generated tokens also does not significantly improve model performance. Although re-encoding allows target tokens to attend to the most recent input context, the inherent constraints of streaming tasks prevent already generated outputs from being modified. As a result, re-encoding does not alter the fundamentally partial-information nature of streaming tasks; instead, its primary effect is to correct the generation path of subsequent tokens. However, the experimental results indicate that this correction is not a decisive factor in performance improvement.
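The three batch-streaming variants in Table 1 differ only in what is recomputed when a new source token arrives. A minimal schematic (our own simplification, not the paper's code; it assumes a wait-1 schedule when reporting interleaved IDs, and uses 1-indexed positions):

```python
def ablation_state(mode, t, m):
    """Cache behaviour of the three batch-streaming variants after
    reading t source tokens and emitting m target tokens.
    Returns (target_position_ids, num_kv_entries_recomputed)."""
    if mode == "no_re":
        # Targets keep the interleaved IDs they were first assigned
        # (under wait-1, y_j sits at position 2j of the interleaved stream).
        return [2 * j for j in range(1, m + 1)], 0
    pos = [t + j for j in range(1, m + 1)]  # position re-encoding, Eq. (5)
    if mode == "pos_re":
        return pos, 0  # fix positions only; keep the stale KV cache
    return pos, m      # "all_re": rebuild every target KV entry as well
```

For example, with t = 5 and m = 2, "pos_re" yields target positions [6, 7] with no KV recomputation, while "all_re" yields the same positions but rebuilds both cached target entries.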
Our experiments demonstrate that the input-attention mismatch significantly impacts streaming translation performance, highlighting the performance gains of using a batch-processing architecture for streaming tasks.² On the other hand, contrary to existing studies (Raffel et al., 2024; Guo et al., 2024a; He et al., 2024), position-ID mismatch is not the primary reason for re-encoding, and interleaved positional encoding achieves performance comparable to continuous position encoding in batch processing. To investigate the discrepancy between our findings and this common assumption, we conduct a comprehensive analysis of how position encoding impacts LLMs in streaming scenarios.

Model                      Position setting     En-Fr (wait-k)                 En-De (wait-k)
                                                k=1    k=3    k=5    k=7      k=1    k=3    k=5    k=7
Gemma2-2B-Instruct         Remove all pos.      27.11  34.98  37.54  38.02    19.01  25.93  27.71  28.87
                           Remove source pos.   28.35  36.12  38.42  39.03    19.63  26.82  28.08  29.36
                           Remove target pos.   29.14  36.83  39.01  39.62    19.91  27.01  28.59  29.51
                           Retain all pos.      33.23  39.39  40.76  40.92    22.35  28.88  30.84  31.47
Phi3-Mini-Instruct (3.8B)  Remove all pos.      26.73  34.85  37.31  37.92    18.86  25.87  27.79  29.01
                           Remove source pos.   27.98  35.96  38.17  38.95    19.47  26.78  28.19  29.54
                           Remove target pos.   28.84  36.58  39.04  39.46    19.83  26.95  28.64  29.78
                           Retain all pos.      30.96  38.45  39.89  40.57    22.21  28.86  30.92  31.94

Table 2: Effect of source and target position removal on streaming LLM performance. We simulate position removal by assigning a constant position ID of 0 to all tokens instead of removing the positional embeddings.

4 Impact Analysis of Position Encoding

The above
analysis suggests that positional mismatches do not significantly impact the performance of streaming tasks. To further elucidate this phenomenon, this section provides a detailed investigation into the impact of positional encoding on the performance of LLMs in streaming scenarios.

4.1 Is Position Encoding Necessary for Streaming Tasks?

Building upon the experimental setup from the previous section, we further investigate the necessity of positional encoding in streaming tasks by separately removing global positional encoding and target-side positional encoding. Table 2 presents the BLEU scores on the En-Fr and En-De streaming translation tasks after removing position encodings at different locations. We simulate position removal by assigning a constant position ID of 0 to all tokens instead of removing the positional encoding module. For the position-retaining setting, we apply interleaved positional encoding as illustrated in the previous section. The table reveals that removing positional information from either the source or target side results in a clear performance degradation, with the maximum drop exceeding 10%. In contrast, when both source and target positional information are removed, the model still maintains roughly 80% of its BLEU score compared to the fully position-retaining setting.

This finding aligns with previous studies suggesting that LLMs can still learn certain positional information even without explicit positional encoding (Haviv et al., 2022). However, it is important to emphasize that positional encoding remains relevant for streaming tasks, particularly on the target side. Notably, the absence of target-side positional encoding leads to a measurable performance decline, highlighting its role in maintaining effective token generation in streaming scenarios.

² We provide the detailed training process for the different settings in Appendix B.3.

4.2 Group Position Encoding Is an Option for Streaming Tasks
Given that positional encoding is necessary and interleaved positional encoding has minimal impact on streaming task performance, one might question whether streaming problems can be modeled using interleaved positional encoding and the batch-streaming mode. However, this is not an optimal choice, as interleaved positional encoding lacks direct generalizability to batch processing. In real-world scenarios, the target sequence is not available in advance, making it impossible to predefine source positions. This limitation hinders the generalization of streaming models to offline settings. To address this issue, we propose a group position encoding built on the batch-streaming framework for streaming LLMs, as shown in Figure 2,

Model                      Wait-k   En-Fr (Target start id ϕ)                En-De (Target start id ϕ)
                                    0      0.5    128    256    512    ∆     0      0.5    128    256    512    ∆
Gemma2-2B-Instruct         5        40.76  40.76  40.70  40.57  40.68  0.19  30.84  30.84  30.90  30.80  30.95  0.15
                           7        40.92  40.92  40.85  40.91  40.92  0.07  31.47  31.47  31.44  31.57  31.67  0.23
                           9        40.91  40.91  40.90  40.88  40.97  0.09  31.73  31.73  31.87  31.91  31.88  0.18
                           11       41.10  41.10  41.14  40.96  41.05  0.18  31.95  31.95  31.98  31.95  31.89  0.09
Phi3-Mini-Instruct (3.8B)  5        39.89  39.89  39.91  40.06  39.87  0.19  30.92  30.92  30.76  30.81  30.86  0.16
                           7        40.57  40.57  40.53  40.72  40.71  0.19  31.94  31.94  31.78  31.84  31.78  0.16
                           9        41.31  41.31  41.24  41.35  41.44  0.20  32.18  32.18  32.10  32.21  32.09  0.12
(Table 3, continued)       Wait-k   En-Fr (Target start id ϕ)                En-De (Target start id ϕ)
                                    0      0.5    128    256    512    ∆     0      0.5    128    256    512    ∆
Phi3-Mini-Instruct (3.8B)  11       41.92  41.92  42.03  41.94  41.93  0.11  32.26  32.26  32.23  32.33  32.28  0.10
LLaMA3.1-8B-Instruct       5        40.11  40.11  40.10  39.93  39.92  0.19  30.33  30.33  30.21  30.37  30.34  0.16
                           7        40.30  40.30  40.32  40.35  40.31  0.03  31.23  31.23  31.18  31.16  31.25  0.09
                           9        40.15  40.15  40.32  40.34  40.35  0.20  31.80  31.80  31.83  31.76  31.89  0.13
                           11       40.53  40.53  40.47  40.58  40.63  0.16  32.04  32.04  31.98  32.07  32.08  0.10

Table 3: Performance comparison of different models with various wait-k policies and target start IDs. ∆ represents the range of variation in BLEU scores when the start ID of the target token takes different values. We use bold to indicate the smallest variation and underline the largest.

[Figure 2 omitted: (left) positional grouping of source and target tokens; (right) the training attention mask.]
Figure 2: Framework of our Group-streaming LLMs. (Left) Positional grouping of source and target tokens in the streaming LLM, avoiding re-encoding. The group start ID ϕ is a hyperparameter. (Right) The attention mask matrix during training ensures that target tokens can only attend to locally available inputs.

where source and target tokens are independently assigned positional encodings, ensuring monotonic continuity only within each group. Specifically, in our proposed approach, the source position encoding remains consistent with the batch-processing mode, starting from 0, while the target positions begin from a predefined starting value ϕ. This approach makes it feasible to prefill source position encodings even without target information, and it naturally extends to batch processing.
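A minimal sketch of the group position-ID assignment (function name and 0-indexing are ours):

```python
def group_position_ids(num_src, num_tgt, phi=0):
    """Group position encoding (sketch): source and target tokens get
    independent, monotonic position IDs.  Source IDs start at 0,
    matching batch processing, so they can be prefilled before any
    target token exists; target IDs start at the fixed offset phi."""
    return list(range(num_src)), [phi + j for j in range(num_tgt)]

# phi = 0: source and target positions fully overlap.
print(group_position_ids(4, 2, phi=0))    # ([0, 1, 2, 3], [0, 1])
# A larger phi separates the two groups without touching source IDs.
print(group_position_ids(4, 2, phi=128))  # ([0, 1, 2, 3], [128, 129])
```

Because the IDs within each group never depend on the other group, appending a new source token never invalidates existing positions, which is why no re-encoding is needed.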
In fact, interleaved position encoding can be viewed as a special case of group position encoding, with the distinction that the interleaved mode uses non-uniform positional intervals.

4.3 What Is the Impact of the Group Position Offset on Model Performance?

This section provides a detailed discussion of the impact of the target position offset ϕ.

Setup We evaluate the impact of grouped positional encoding on text translation and automatic speech recognition (ASR) tasks. For text translation, we use the IWSLT-17 dataset, focusing on the En-Fr and En-De translation tasks, with models including Gemma2-2B-Instruct (Team et al., 2024), Phi3-Mini-Instruct (Abdin et al., 2024), and LLaMA3.1-8B-Instruct (Dubey et al., 2024). For ASR, we use the LibriSpeech dataset (Panayotov et al., 2015), with Phi3 as the selected model. Translation performance is assessed using BLEU scores, while ASR performance is evaluated with WER (Radford et al., 2023). Detailed experimental settings and hyperparameters are provided in the appendix.

Results The results for the streaming text translation task in Table 3 and the streaming ASR task in Table 4 indicate that varying the initial offset of the target-side group position encoding within a reasonable range does not significantly affect the performance of LLMs in streaming scenarios. This suggests that the model is highly robust to the choice of the initial group position offset. Specifically, when the offset is set to 0, the source and target positions fully overlap, whereas an offset of 0.5 results in complete separation. Despite this contrast, both
settings yield comparable performance, suggesting that positional overlap has limited impact on the effectiveness of group position encoding.

Wait-k   Speech-Text (Target start id ϕ)
         0      256    512    1024   2048   ∆
1        6.02   6.05   6.04   6.07   6.17   0.15
3        4.12   4.10   4.09   4.08   4.19   0.11
5        3.52   3.58   3.55   3.59   3.61   0.09
7        3.33   3.33   3.38   3.41   3.45   0.12

Table 4: Performance of Phi3 with various wait-k policies and target start IDs. ∆ represents the range of variation in WER scores when the start ID of the target token takes different values.

4.4 Why Does Group Position Encoding Work?

RoPE encodes relative position information via rotation matrices R applied to each token's query and key: q_n^r = R(n) q_n and k_n^r = R(n) k_n, where n denotes the position ID. The dot-product attention score can then be written as

Attn(n, cache) = Σ_i (q_n^r)^T k_i^r = Σ_i q_n^T R^T(n) R(i) k_i = Σ_{i=0}^{S+n} q_n^T R(n − i) k_i,

where S is the token length of the source input and q_n is the query of a target token. The relative position can be written as ∆ = n − i = ϕ + j − i, where j denotes the index of the target token and ϕ represents the position offset between the first token of the target and that of the source. We split the above dot-product attention into two parts, the target-to-target and target-to-source computations:

Attn(n, cache) = Σ_{i=0}^{j} q_n^T R(j − i) k_i + Σ_{i=0}^{S} q_n^T R(ϕ + j − i) k_i.   (6)

The relative position j − i in the first, target-to-target term remains consistent across both RoPE and group position encoding. For the cross-segment attention in the target-to-source term, the difference in relative position between RoPE and group position encoding is determined by the position offset ϕ. In original RoPE, ϕ equals the length of the source sequence and varies with input length, whereas in group position encoding, ϕ is predefined as a fixed constant. LLMs are capable of easily learning and internalizing the semantics of this relative offset through fine-tuning.
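The relative-position argument above can be checked numerically with a single 2D rotation, i.e., one RoPE frequency pair (a toy sketch under our own naming, not the paper's implementation):

```python
import math

def rot(theta, v):
    """Rotate a 2D vector v by angle theta (one RoPE frequency pair)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

# A query at target position n = phi + j and a key at source position i:
q, k = (0.3, -1.2), (0.8, 0.5)
phi, j, i = 128, 4, 7
score = dot(rot((phi + j) * 0.01, q), rot(i * 0.01, k))

# The score depends only on the relative offset (phi + j - i): shifting
# both positions by the same amount leaves it unchanged.
shifted = dot(rot((phi + j + 50) * 0.01, q), rot((i + 50) * 0.01, k))
assert abs(score - shifted) < 1e-9
```

This is exactly why a fixed ϕ behaves like a constant position shift that the model can learn once, rather than a quantity that must track the source length.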
Once the model has correctly understood the meaning of ϕ as a position shift, it can accurately capture and assign position relationships across segments, without requiring explicit differentiation between source and target token IDs.³

LLMs can learn the position offset ϕ through simple fine-tuning, so typical values of ϕ do not significantly impact performance. However, when ϕ becomes extremely large, it may lead to discrepancies with the model's pretraining distribution due to the limited context length used during pretraining. Therefore, a reasonable range for ϕ should ensure that the maximum relative distance between the last target token and the first source token remains within the model's pretraining context length.⁴

We recommend using a relatively small ϕ, ideally below the input sentence length, to keep relative position gaps closer to the pretraining distribution, which may facilitate faster convergence and better performance. Notably, when ϕ = 0, the target starting token is positioned closer to the source starting token and farther from the source ending token. This configuration better reflects the sequential input-arrival pattern in streaming scenarios, leading to more stable learning dynamics and better model alignment.

4.5 Visualization of Streaming Attention

Taking text translation as an example, we visualize the extent to which each target token attends to past
source information during inference. Notably, we normalize the attention weights column-wise (i.e., across each source token) to the range [0, 1]. This normalization offers two key benefits: (1) it mitigates the influence of tokens with inherently large absolute attention values and highlights the relative importance of the attention distribution, making attention strength more interpretable; and (2) it provides a clearer view of how each source token distributes its attention across different target tokens.

As shown in Figure 3, under the batch setting, source tokens distribute their attention uniformly across all target tokens, reflecting a globally constrained behavior. In other words, each target token tends to attend equally to the same source token. In contrast, with group position encoding, source tokens tend to assign more attention to target tokens with similar positional indices. That is, source tokens are less likely to attend to future target tokens. This observation supports our earlier finding that re-encoding previously generated target tokens offers limited performance gain in streaming tasks under group position encoding.

³ The detailed analysis can be found in Appendix D.
⁴ We provide additional experiments to demonstrate the potential edge in Appendix D.3.

[Figure 3 omitted: attention heatmaps for (a) batch-processing, (b) batch-processing with group position encoding, and (c) group-streaming with the wait-5 policy.]
Figure 3: An example of the attention distribution of target tokens, where the attention values of each target token are normalized to emphasize the relative focus. The sample is from the IWSLT-17 En-Fr dataset.⁵

Moreover, the results in Figure 3 indicate that employing group position encoding in the batch-processing setting shifts the target tokens' attention to the source context along the diagonal direction when the offset ϕ = 0.
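The column-wise normalization described above can be sketched as follows; we assume min-max scaling per source-token column, which the paper does not spell out, so treat the exact formula as our assumption:

```python
def normalize_columns(attn):
    """Normalize an attention matrix column-wise to [0, 1].

    attn[t][s] is the attention weight of target token t on source
    token s; each source-token column is rescaled independently
    (min-max, our assumption) so the relative focus across target
    tokens becomes comparable between columns."""
    n_tgt, n_src = len(attn), len(attn[0])
    out = [[0.0] * n_src for _ in range(n_tgt)]
    for s in range(n_src):
        col = [attn[t][s] for t in range(n_tgt)]
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0  # guard against constant columns
        for t in range(n_tgt):
            out[t][s] = (attn[t][s] - lo) / span
    return out
```

After this rescaling, a value near 1 in column s marks the target token on which source token s concentrates most of its (relative) attention, which is what the diagonal pattern in Figure 3 visualizes.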
This adjustment encourages target tokens to focus more on the currently available input, making the model's behavior more aligned with the requirements of streaming tasks.

5 Discussion

Why LLMs? We apply the proposed group-streaming approach to mainstream large language models and compare its performance against other decoder-only streaming models to highlight its advantages. To demonstrate the effectiveness of our method, we evaluate it on the En-Fr and En-De translation tasks from the IWSLT-17 dataset, as well as the ASR task from the LibriSpeech dataset. The baselines for text translation include SimulMask (Raffel et al., 2024) and DST (Guo et al., 2024a), while the baselines for ASR include CAAT (Liu et al., 2021) and Wav2Vec-S (Fu et al., 2024).

⁵ Note that the attention values have been normalized. The values do not represent the actual magnitude of attention.

As shown in Figure 4, the vertical axis represents task-specific performance metrics (BLEU for translation and WER for ASR), while the horizontal axis indicates the model's average latency (AL and LAAL), measured by the number of waited words in translation and the waiting time in ASR. The results show that group-streaming LLMs consistently outperform specialized decoder-only baselines, typically achieving higher accuracy under the same latency conditions.

Generalization We extend our group position encoding to batch processing. The first bar in Figure 5 represents the model that is