and text prompt finetuning. This structured progression enables the model to first learn robust tactile-visual representations, then align them with textual descriptions, and finally optimize response generation, enhancing VTV-LLM's capability for cross-modal understanding and tactile reasoning. In the following, we describe each of these stages in detail.

VTV Enhancement. Existing multi-modal LLMs predominantly process natural images via unmodified Vision Transformer (ViT) encoders [47]. However, our research addresses visuo-tactile inputs, which exhibit fundamentally different characteristics from natural images, thus necessitating specialized fine-tuning to extract meaningful representations. Furthermore, the temporal nature of our video data introduces challenges not present in static images. Unlike images, videos possess an inherent time dimension characterized by temporal redundancy and inter-frame correlations, requiring robust video representation methodologies. While VideoMAE [27, 28] offers a powerful masked video autoencoder with an asymmetric encoder-decoder architecture utilizing tube masking, this approach assumes minimal motion across large frame regions. This assumption proves problematic for visuo-tactile videos, which typically exhibit significant motion patterns. Direct application of tube masking to such inputs risks substantial information leakage, wherein the model can trivially reconstruct masked segments using visible tokens from temporally adjacent frames, which is a critical concern in masked video pre-training.

To address these limitations, we propose a novel training pipeline specifically designed for visuo-tactile video representation, as illustrated in Fig. 3. Given the visuo-tactile video sequence $V = \{I_t \in \mathbb{R}^{H \times W \times C}\}_{t=0}^{T}$, where each frame $I_t$ encodes both visual and tactile information at timestamp $t$ with spatial dimensions $H \times W$ and $C$ channels, we propose selecting the middle frame as the keyframe. This selection is motivated by empirical observations that the middle frame typically exhibits the maximum contact surface area, facilitating more robust optical flow warping in subsequent processing stages. For keyframe mask initialization, conventional binarization approaches [49] significantly degrade the spatial continuity of object surfaces, compromising the fidelity of the reconstructed tactile information. Therefore, we introduce a Gaussian mixture model [50] to obtain the keyframe mask. For the keyframe $I_k$, we formulate a probabilistic mask using localized Gaussian functions. We select a set of $N = \lceil \alpha \cdot HW / \beta^2 \rceil$ sampling points $\{p_i\}_{i=1}^{N}$ distributed across the frame, where $\alpha \in (0,1)$ controls density and $\beta$ is the sampling grid size. Each point $p_i$ generates a Gaussian kernel $G_i(x, y) = \exp\!\left(-\frac{(x - p_{i,x})^2 + (y - p_{i,y})^2}{2\lambda^2}\right)$ with scale parameter $\lambda$. The final keyframe mask is defined as $M'_k = \min\!\left(1, \sum_{i=1}^{N} G_i\right)$, creating a continuous-valued mask that preserves spatial structure while enabling controlled sparsity for subsequent processing.

Additionally, we employ dense motion estimation across the visuo-tactile video $V$ using the RAFT architecture [51]. We compute bidirectional optical flow fields between consecutive frames to capture the continuous deformation patterns throughout the interaction process. For each adjacent frame pair, we define the forward flow field $O_{t \to t+1} = \mathrm{RAFT}(I_t, I_{t+1})$. Each flow field $O_{t \to t+1} \in \mathbb{R}^{H \times W \times 2}$ encodes pixel-wise displacement vectors $(u_{x,y}, v_{x,y})$ for every spatial location $(x, y)$, mapping positions from frame $I_t$ to their corresponding locations in frame $I_{t+1}$.
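To make the keyframe mask construction concrete, a minimal NumPy sketch is given below. The uniform random placement of the sampling points and the default parameter values are illustrative assumptions; given the definition of N, the points could equally be drawn on a regular grid of cell size β.

```python
import numpy as np

def keyframe_mask(H, W, alpha=0.5, beta=16, lam=8.0, seed=0):
    """Continuous keyframe mask M'_k from a mixture of local Gaussians.

    N = ceil(alpha * H * W / beta**2) kernels are centred on sampled points;
    their sum is clipped at 1 to give a soft, spatially smooth mask.
    """
    rng = np.random.default_rng(seed)
    N = int(np.ceil(alpha * H * W / beta ** 2))
    centers = rng.uniform([0, 0], [H, W], size=(N, 2))   # sampled points p_i (row, col)
    ys, xs = np.mgrid[0:H, 0:W]
    mask = np.zeros((H, W), dtype=np.float32)
    for cy, cx in centers:
        mask += np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * lam ** 2))
    return np.minimum(1.0, mask)                         # M'_k = min(1, sum_i G_i)
```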
The complete set of optical flows $\Phi$ for the sequence is formulated as:
$$\Phi = \bigcup_{t=0}^{k-1} \{O_{t \to t+1}\} \;\cup\; \bigcup_{t=k+1}^{T} \{O_{t \to t-1}\}. \quad (4)$$
This bidirectional flow representation tracks visuo-tactile features throughout the interaction, supporting warping operations and masked frame generation. We apply spatial normalization before
flow computation to ensure scale invariance across different sequences. After that, we utilize backward warping [52, 53] to generate a temporally consistent masking map based on the keyframe and mask the corresponding video frames. The masked visuo-tactile frames $V_m = \{M_t\}_{t=0}^{T}$ are fed into the VTV encoder-decoder architecture for reconstruction using the mean squared error loss [27, 28]. We also incorporate an attribute classifier to predict tactile attributes (hardness, protrusion, elasticity, and friction) using the cross-entropy loss [54]. Our total loss function combines both the reconstruction loss and the attribute classification loss, enabling simultaneous optimization of visuo-tactile reconstruction quality and tactile attribute classification accuracy.

VTV-Text Alignment. In the VTV-text alignment stage, we focus on establishing cross-modal alignment between video and language representations. With the pretrained VTV Encoder from stage 1, we introduce both V-Projector and T-Projector modules while keeping the Large Language Model frozen. This stage leverages our initially constructed VTV150K dataset to bridge the representational gap between visual and textual modalities. The V-Projector maps video embeddings from the VTV Encoder into the language model's embedding space, while the T-Projector processes corresponding text prompt representations. By training these projection modules exclusively while freezing other components, we establish foundational cross-modal understanding, enabling the model to associate visual content with appropriate textual descriptions. This alignment is critical for downstream video understanding and description tasks, as it creates a shared semantic space between the video frames and natural language.

Text Prompt Finetuning. In the text prompt finetuning stage, we enhance the model's capacity to respond accurately to textual prompts about video content by implementing supervised fine-tuning across multiple components. The V-Projector and T-Projector are jointly fine-tuned along with the LLM. Unlike previous stages where the LLM remained frozen, this stage employs parameter-efficient techniques [37, 36] to fine-tune the language model using 10,000 newly generated question-answer pairs. These pairs are created using the same template generation approach as our VTV150K dataset, featuring diverse video understanding tasks. By generating new data rather than reusing subsets, we significantly increase training diversity and model robustness. This end-to-end optimization enables the model to generate more coherent, accurate, and contextually relevant responses to text prompts about video content. The supervised nature of this phase significantly improves the model's ability to comprehend complex video scenes and produce natural language descriptions that align with human expectations. This final stage integrates the previously aligned representations into a cohesive multi-modal understanding system, culminating in enhanced video-language capabilities.

4 Experiments

4.1 Setup

Our experiments utilize the proposed VTV150K dataset for both training and evaluation protocols. The training process follows our three-stage paradigm: Stage 1 employs multi-sensor visuo-tactile videos with their corresponding attribute annotations for representation learning. For Stages 2 and 3, we utilize two independently generated sets of 10,000 question-answer pairs to prevent data leakage between stages.
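As a concrete reference for the Stage 1 objective used in this setup (masked visuo-tactile reconstruction combined with tactile attribute classification, as described above), a minimal PyTorch-style sketch follows. The tensor layout, the equal loss weighting w_cls, and the per-attribute averaging are illustrative assumptions; the paper only states that the two losses are combined.

```python
import torch.nn.functional as F

def stage1_loss(pred_frames, target_frames, attr_logits, attr_labels, w_cls=1.0):
    """Joint Stage 1 objective: masked-frame reconstruction + tactile attribute
    classification (hardness, protrusion, elasticity, friction).

    pred_frames/target_frames: [B, T, C, H, W] tensors over masked regions.
    attr_logits: dict of [B, num_classes] tensors; attr_labels: dict of [B] tensors.
    """
    loss_rec = F.mse_loss(pred_frames, target_frames)            # VideoMAE-style MSE
    loss_cls = sum(F.cross_entropy(attr_logits[k], attr_labels[k])
                   for k in attr_logits) / len(attr_logits)      # average over the four attributes
    return loss_rec + w_cls * loss_cls
```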
To evaluate model performance, we create a separate test set comprising 600 question-answer pairs for novel objects not present in the training data, ensuring comprehensive coverage across various tactile reasoning tasks. Our LLM backbone is based on Qwen 2.5 [4, 5], and we experiment with three model variants (3B, 7B,
and 14B parameters). All experiments are conducted on 4 NVIDIA RTX 6000 Ada GPUs. Additional implementation details and hyperparameter configurations are provided in the Supplementary Material.

[Figure 4: Several task examples from the proposed VTV150K along with predictions from VTV-LLM. Panels illustrate surface optimality identification, object sensation correlation, and tactile scenario analysis on three example videos (circuit board, kiwi, iron ruler).]

Table 1: Performance comparison of VTV-LLM-7B against seven state-of-the-art methods on the VTV150K dataset. The evaluation covers different tasks, with results reported in percentages (%); boldface indicates the best performance.

Models | Hardness | Protrusion | Elasticity | Friction | Combined | SFD | SOI | OSC | TSA | Average
GPT-4o [55] | 34.7 | 32.6 | 32.6 | 18.7 | 2.1 | 40.9 | 38.4 | 16.6 | 36.0 | 28.0
Gemini-2.5-Pro-Exp [56] | 36.2 | 34.7 | 39.1 | 21.0 | 4.3 | 42.6 | 29.4 | 18.5 | 40.0 | 29.5
LLaVA-OneVision-7B [57] | 27.5 | 32.6 | 26.0 | 20.2 | 0.7 | 40.9 | 28.2 | 11.7 | 30.0 | 24.2
LLaVA-Video-Qwen2-7B [58] | 30.4 | 29.7 | 28.9 | 18.1 | 2.1 | 33.6 | 29.4 | 17.2 | 36.0 | 25.0
InternVL2.5-VL-8B [59] | 18.1 | 23.9 | 21.0 | 13.7 | 0.0 | 24.5 | 17.9 | 11.1 | 24.0 | 17.1
VideoLLaMA3-7B [41] | 15.2 | 21.7 | 14.4 | 10.8 | 0.0 | 11.4 | 12.8 | 7.4 | 20.0 | 12.6
Qwen2.5-VL-7B [60] | 25.3 | 28.9 | 17.3 | 15.9 | 1.4 | 22.9 | 28.2 | 16.0 | 30.0 | 20.6
VTV-LLM-7B (Ours) | 73.9 | 75.0 | 67.3 | 56.5 | 35.6 | 71.3 | 57.6 | 43.2 | 64.0 | 60.4

4.2 Results

To verify the effectiveness of our VTV-LLM, we compare it against two strong proprietary models, GPT-4o [55] and Gemini-2.5-Pro-Exp [56], as well as five open-source video-based VLMs: LLaVA-OneVision-7B [57], LLaVA-Video-Qwen2-7B [58], InternVL2.5-VL-8B [59], VideoLLaMA3-7B [41], and Qwen2.5-VL-7B [60]. Since most of the video-based VLMs have around 7B parameters, we use only the VTV-LLM-7B model for a fair comparison. To guarantee the robustness of the experimental results, we report the average results over three runs with different random seeds. Our first experiment focuses on tactile feature assessment, which evaluates the model's ability to perceive and describe physical sensory attributes of objects in visuo-tactile videos. As illustrated in Fig. 1(d), when presented with a visuo-tactile
video and a question prompt, VTV-LLM generates descriptions of the four key tactile attributes. The results presented in Tab. 1 demonstrate that our method consistently outperforms all baseline models across both individual attribute and combined attribute settings. The performance gap is particularly notable in the combined attribute setting, which we attribute to our three-stage training paradigm that effectively bridges the domain gap between tactile perception and natural language understanding. In addition, we conduct high-level tactile reasoning experiments, including surface feature distinction (SFD), surface optimality identification (SOI), object sensation correlation (OSC), and tactile scenario analysis (TSA). SFD involves comparing tactile qualities between objects to determine relative differences, SOI entails analyzing multiple surfaces to determine which exhibits the highest degree of a particular quality, OSC aims at relating tactile perceptual information to the identity of a particular real-world object, and TSA addresses applying haptic knowledge to real-world situations that require physical reasoning. It is worth noting that the TSA task is not included in the training set. The qualitative results presented in Fig. 1(d) and Fig. 4 demonstrate that VTV-LLM can generate reasonable outputs. The quantitative experimental results in Tab. 1 further confirm that VTV-LLM achieves superior performance across these complex reasoning tasks, highlighting its potential for embodied interaction.

[Figure 5: Performance comparison of VTV-LLM with different parameter sizes.]

Table 2: Ablation study on VTV encoder settings using the VTV-LLM-7B model.

Settings | SFD | SOI | OSC | TSA | Average
VideoMAE (w/o train) | 37.5 | 29.7 | 8.5 | 16.0 | 22.9
VideoMAE (w/ train) | 52.4 | 46.1 | 28.3 | 38.0 | 41.2
Ours (w/o cls) | 62.2 | 48.7 | 40.1 | 55.0 | 51.5
Ours | 71.3 | 57.6 | 43.2 | 64.0 | 59.0

Table 3: Ablation study on three-stage training paradigm settings using the VTV-LLM-7B model.

Settings | SFD | SOI | OSC | TSA | Average
w/o stage 2 | 58.1 | 50.0 | 35.2 | 60.0 | 50.8
w/o stage 3 | 50.8 | 42.3 | 29.0 | 52.0 | 43.5
Same dataset | 61.4 | 53.8 | 33.9 | 58.0 | 51.7
Ours | 71.3 | 57.6 | 43.2 | 64.0 | 59.0

4.3 Ablation Studies

LLM Backbone. To examine the effect of model scale on visuo-tactile understanding, we compare different parameter sizes of our LLM backbone. Fig. 5 shows performance results for VTV-LLM using three Qwen 2.5 variants (3B, 7B, and 14B parameters). We observe consistent performance improvements with increasing model size. This improvement is most significant for complex reasoning tasks like TSA, indicating that larger models better integrate cross-modal information. However, larger models also require substantially more computation time during inference.

VTV Encoder. We conduct an ablation study on our VTV encoder design, as shown in Tab. 2. Baseline VideoMAE [27, 28] without training achieves only 22.9% average performance, while training with our VTV150K dataset improves it to 41.2%. Our method without the attribute classifier reaches 51.5%, showing the effectiveness of our optical flow-guided masking strategy. The full method with the attribute classifier further improves to 59.0%, confirming that joint reconstruction and attribute classification significantly enhances tactile understanding.

Three-Stage Training Paradigm. Tab. 3 validates our three-stage training paradigm through ablation studies. Removing stage 2 (VTV-text alignment) drops average performance to 50.8%, while omitting stage 3 (text prompt finetuning) causes a steeper decline to 43.5%.
Using identical datasets across stages also underperforms
at 51.7%, confirming that independent datasets for each stage significantly improve model robustness. 5 Conclusion In this work, we presented VTV-LLM, the first multi-modal large language model for universal visuo- tactile video understanding. We contributed VTV150K, a comprehensive dataset of visuo-tactile videos across multiple sensors, and developed a novel three-stage training paradigm that effectively bridges the gap between tactile perception and natural language. Experimental results demonstrate 9 that VTV-LLM consistently outperforms state-of-the-art methods across various tactile reasoning tasks, establishing a foundation for more intuitive human-machine interaction in embodied domains. References [1]Qiang Li, Oliver Kroemer, Zhe Su, Filipe Fernandes Veiga, Mohsen Kaboli, and Helge Joachim Ritter. A review of tactile information: Perception and action through touch. IEEE Transactions on Robotics , 36(6):1619–1634, 2020. [2]Müge Cavdan, Katja Doerschner, and Knut Drewing. Task and material properties interac- tively affect softness explorations along different dimensions. IEEE Transactions on Haptics , 14(3):603–614, 2021. [3]Mudassir Ibrahim Awan, Waseem Hassan, and Seokhee Jeon. Predicting perceptual haptic attributes of textured surface from tactile data based on deep cnn-lstm network. In Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology , pages 1–9, 2023. [4]Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609 , 2023. [5]An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2. 5 technical report. arXiv preprint arXiv:2412.15115 , 2024. [6]Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High- resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 10684–10695, 2022. [7]Boyuan Chen, Zhuo Xu, Sean Kirmani, Brain Ichter, Dorsa Sadigh, Leonidas Guibas, and Fei Xia. Spatialvlm: Endowing vision-language models with spatial reasoning capabilities. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 14455–14465, 2024. [8]Yifan Xie, Jingge Wang, Tao Feng, Fei Ma, and Yang Li. Ccis-diff: A generative model with sta- ble diffusion prior for controlled colonoscopy image synthesis. arXiv preprint arXiv:2411.12198 , 2024. [9]Shoujie Li, Zihan Wang, Changsheng Wu, Xiang Li, Shan Luo, Bin Fang, Fuchun Sun, Xiao- Ping Zhang, and Wenbo Ding. When vision meets touch: A contemporary review for visuotactile sensors from the signal processing perspective. IEEE Journal of Selected Topics in Signal Processing , 2024. [10] Wenzhen Yuan, Siyuan Dong, and Edward H Adelson. Gelsight: High-resolution robot tactile sensors for estimating geometry and force. Sensors , 17(12):2762, 2017. [11] Mike Lambeta, Po-Wei Chou, Stephen Tian, Brian Yang, Benjamin Maloon, Victoria Rose Most, Dave Stroud, Raymond Santos, Ahmad Byagowi, Gregg Kammerer, et al. Digit: A novel design for a low-cost compact high-resolution tactile sensor with application to in-hand manipulation. IEEE Robotics and Automation Letters , 5(3):3838–3845, 2020. [12] Lunwei Zhang, Yue Wang, and Yao Jiang. Tac3d: A novel vision-based tactile sensor for measuring forces distribution and estimating friction coefficient distribution. 
arXiv preprint arXiv:2202.06211 , 2022. [13] Ruoxuan Feng, Jiangyu Hu, Wenke Xia, Ao Shen, Yuhao Sun, Bin Fang, Di Hu,
et al. Anytouch: Learning unified static-dynamic representation across multiple visuo-tactile sensors. In The Thirteenth International Conference on Learning Representations , 2025. [14] Jialiang Zhao, Yuxiang Ma, Lirui Wang, and Edward Adelson. Transferable tactile transformers for representation learning across diverse sensors and tasks. In 8th Annual Conference on Robot Learning , 2024. [15] Samson Yu, Kelvin Lin, Anxing Xiao, Jiafei Duan, and Harold Soh. Octopi: Object property reasoning with large tactile-language models. arXiv preprint arXiv:2405.02794 , 2024. 10 [16] Letian Fu, Gaurav Datta, Huang Huang, William Chung-Ho Panitch, Jaimyn Drake, Joseph Ortiz, Mustafa Mukadam, Mike Lambeta, Roberto Calandra, and Ken Goldberg. A touch, vision, and language dataset for multimodal alignment. In International Conference on Machine Learning , pages 14080–14101. PMLR, 2024. [17] Fengyu Yang, Chao Feng, Ziyang Chen, Hyoungseob Park, Daniel Wang, Yiming Dou, Ziyao Zeng, Xien Chen, Rit Gangopadhyay, Andrew Owens, et al. Binding touch to everything: Learn- ing unified multimodal tactile representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 26340–26353, 2024. [18] Carolina Higuera, Akash Sharma, Chaithanya Krishna Bodduluri, Taosha Fan, Patrick Lancaster, Mrinal Kalakrishnan, Michael Kaess, Byron Boots, Mike Lambeta, Tingfan Wu, et al. Sparsh: Self-supervised touch representations for vision-based tactile sensing. In 8th Annual Conference on Robot Learning , 2024. [19] Neha Sunil, Shaoxiong Wang, Yu She, Edward Adelson, and Alberto Rodriguez Garcia. Visuo- tactile affordances for cloth manipulation with local control. In Conference on Robot Learning , pages 1596–1606. PMLR, 2023. [20] Ying Zheng, Lei Yao, Yuejiao Su, Yi Zhang, Yi Wang, Sicheng Zhao, Yiyi Zhang, and Lap-Pui Chau. A survey of embodied learning for object-centric robotic manipulation. arXiv preprint arXiv:2408.11537 , 2024. [21] Fengyu Yang, Chenyang Ma, Jiacheng Zhang, Jing Zhu, Wenzhen Yuan, and Andrew Owens. Touch and go: learning from human-collected vision and touch. In Proceedings of the 36th International Conference on Neural Information Processing Systems , pages 8081–8103, 2022. [22] Ruohan Gao, Yen-Yu Chang, Shivani Mall, Li Fei-Fei, and Jiajun Wu. Objectfolder: A dataset of objects with implicit visual, auditory, and tactile representations. In Conference on Robot Learning , 2021. [23] Ruohan Gao, Zilin Si, Yen-Yu Chang, Samuel Clarke, Jeannette Bohg, Li Fei-Fei, Wenzhen Yuan, and Jiajun Wu. Objectfolder 2.0: A multisensory object dataset for sim2real transfer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 10598–10608, 2022. [24] Hao Li, Yizhi Zhang, Junzhe Zhu, Shaoxiong Wang, Michelle A Lee, Huazhe Xu, Edward Adelson, Li Fei-Fei, Ruohan Gao, and Jiajun Wu. See, hear, and feel: Smart sensory fusion for robotic manipulation. In Conference on Robot Learning , pages 1368–1378. PMLR, 2023. [25] Wenzhen Yuan, Yuchen Mo, Shaoxiong Wang, and Edward H Adelson. Active clothing material perception using tactile sensing and deep learning. In 2018 IEEE International Conference on Robotics and Automation (ICRA) , pages 4842–4849. IEEE, 2018. [26] Justin Kerr, Huang Huang, Albert Wilcox, Ryan Hoque, Jeffrey Ichnowski, Roberto Calandra, and Ken Goldberg. Self-supervised visuo-tactile pretraining to locate and follow garment features. arXiv preprint arXiv:2209.13042 , 2022. [27] Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. 
Videomae: Masked autoencoders are data-efficient learners
for self-supervised video pre-training. Advances in neural information processing systems , 35:10078–10093, 2022. [28] Limin Wang, Bingkun Huang, Zhiyu Zhao, Zhan Tong, Yinan He, Yi Wang, Yali Wang, and Yu Qiao. Videomae v2: Scaling video masked autoencoders with dual masking. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 14549–14560, 2023. [29] Mohammadreza Salehi, Michael Dorkenwald, Fida Mohammad Thoker, Efstratios Gavves, Cees GM Snoek, and Yuki M Asano. Sigma: Sinkhorn-guided masked video modeling. In European Conference on Computer Vision , pages 293–312. Springer, 2024. [30] Bingkun Huang, Zhiyu Zhao, Guozhen Zhang, Yu Qiao, and Limin Wang. Mgmae: Motion guided masking for video masked autoencoding. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 13493–13504, 2023. 11 [31] Yunze Liu, Peiran Wu, Cheng Liang, Junxiao Shen, Limin Wang, and Li Yi. Videomap: Toward scalable mamba-based video autoregressive pretraining. arXiv preprint arXiv:2503.12332 , 2025. [32] Gensheng Pei, Tao Chen, Xiruo Jiang, Huafeng Liu, Zeren Sun, and Yazhou Yao. Videomac: Video masked autoencoders meet convnets. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 22733–22743, 2024. [33] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in hugging face. Advances in Neural Information Processing Systems , 36:38154–38180, 2023. [34] Dídac Surís, Sachit Menon, and Carl V ondrick. Vipergpt: Visual inference via python execution for reasoning. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 11888–11898, 2023. [35] Rui Yang, Lin Song, Yanwei Li, Sijie Zhao, Yixiao Ge, Xiu Li, and Ying Shan. Gpt4tools: Teaching large language model to use tools via self-instruction. Advances in Neural Information Processing Systems , 36:71995–72007, 2023. [36] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. ICLR , 1(2):3, 2022. [37] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhanc- ing vision-language understanding with advanced large language models. In ICLR , 2024. [38] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems , 36:34892–34916, 2023. [39] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 26296–26306, 2024. [40] Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 12585–12602, 2024. [41] Boqiang Zhang, Kehan Li, Zesen Cheng, Zhiqiang Hu, Yuqian Yuan, Guanzheng Chen, Sicong Leng, Yuming Jiang, Hang Zhang, Xin Li, et al. Videollama 3: Frontier multimodal foundation models for image and video understanding. arXiv preprint arXiv:2501.13106 , 2025. [42] Qingkai Fang, Shoutao Guo, Yan Zhou, Zhengrui Ma, Shaolei Zhang, and Yang Feng. Llama-omni: Seamless speech interaction with large language models. arXiv preprint arXiv:2409.06666 , 2024. [43] Songming Liu, Lingxuan
Wu, Bangguo Li, Hengkai Tan, Huayu Chen, Zhengyi Wang, Ke Xu, Hang Su, and Jun Zhu. Rdt-1b: a diffusion foundation model for bimanual manipulation. arXiv preprint arXiv:2410.07864 , 2024. [44] Joshua Jones, Oier Mees, Carmelo Sferrazza, Kyle Stachowicz, Pieter Abbeel, and Sergey Levine. Beyond sight: Finetuning generalist robot policies with heterogeneous sensors via language grounding. arXiv preprint arXiv:2501.04693 , 2025. [45] Samanta Rodriguez, Yiming Dou, Miquel Oller, Andrew Owens, and Nima Fazeli. Touch2touch: Cross-modal tactile generation for object manipulation. arXiv preprint arXiv:2409.08269 , 2024. [46] Sky McKinley and Megan Levine. Cubic spline interpolation. College of the Redwoods , 45(1):1049–1060, 1998. [47] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, G Heigold, S Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations , 2020. 12 [48] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415 , 2016. [49] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 16000–16009, 2022. [50] Douglas A Reynolds et al. Gaussian mixture models. Encyclopedia of biometrics , 741(659- 663):3, 2009. [51] Zachary Teed and Jia Deng. Raft: Recurrent all-pairs field transforms for optical flow. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16 , pages 402–419. Springer, 2020. [52] Joi Shimizu, Heming Sun, and Jiro Katto. Forward and backward warping for optical flow-based frame interpolation. In 2022 International Conference on Artificial Intelligence in Information and Communication (ICAIIC) , pages 082–086. IEEE, 2022. [53] Sung In Cho and Suk-Ju Kang. Extrapolation-based video retargeting with backward warping using an image-to-warping vector generation network. IEEE Signal Processing Letters , 27:446– 450, 2020. [54] Anqi Mao, Mehryar Mohri, and Yutao Zhong. Cross-entropy loss functions: Theoretical analysis and applications. In International conference on Machine learning , pages 23803–23828. PMLR, 2023. [55] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276 , 2024. [56] Google DeepMind. gemini-2.5-pro-preview-05-06. https://ai.google.dev/gemini-api/ docs/models#gemini-2.5-pro-preview-05-06 , 2025. [57] Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326 , 2024. [58] Yuanhan Zhang, Jinming Wu, Wei Li, Bo Li, Zejun Ma, Ziwei Liu, and Chunyuan Li. Video instruction tuning with synthetic data. arXiv preprint arXiv:2410.02713 , 2024. [59] Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shen- glong Ye, Hao Tian, Zhaoyang Liu, et al. Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling. arXiv preprint arXiv:2412.05271 , 2024. [60] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2. 
5-vl technical report. arXiv preprint arXiv:2502.13923, 2025.
arXiv:2505.22571v1 [cs.CL] 28 May 2025

Agent-UniRAG: A Trainable Open-Source LLM Agent Framework for Unified Retrieval-Augmented Generation Systems

Hoang Pham and Khac-Hoai Nam Bui*
Viettel Artificial Intelligence and Data Services Center, Viettel Group, Vietnam
{hoangpv4, nambkh}@viettel.com.vn
*Corresponding author

Abstract

This paper presents a novel approach for unified retrieval-augmented generation (RAG) systems using the recently emerging large language model (LLM) agent concept. Specifically, Agent LLM, which utilizes LLMs as fundamental controllers, has become a promising approach to enable the interpretability of RAG tasks, especially for complex reasoning question-answering systems (e.g., multi-hop queries). Nonetheless, previous works mainly focus on solving RAG systems with either single-hop or multi-hop approaches separately, which limits the application of those approaches to real-world settings. In this study, we propose a trainable agent framework called Agent-UniRAG for unified retrieval-augmented LLM systems, which enhances the effectiveness and interpretability of RAG systems. The main idea is to design an LLM agent framework that solves RAG tasks step-by-step based on the complexity of the inputs, handling single-hop and multi-hop queries simultaneously in an end-to-end manner. Furthermore, we introduce SynAgent-RAG, a synthetic dataset to enable the proposed agent framework for small open-source LLMs (e.g., Llama-3-8B). The results show comparable performances with closed-source and larger open-source LLMs across various RAG benchmarks. Our source code and dataset are publicly available for further exploitation.

1 Introduction

Incorporating non-parametric knowledge into large language models (LLMs) through additional retrieval modules has emerged as a promising approach to enhance both accuracy and the timeliness of information (Borgeaud et al., 2022; Izacard et al., 2023). This issue has led to the rapid development of various retrieval-augmented LLM paradigms designed to provide correct answers to user queries.

[Figure 1: Conceptual analysis of previous works and Agent-UniRAG: (a) Modular approach handles query types separately; (b) Adaptive approach uses a classifier to determine query types before executing them separately; (c) Agent-UniRAG processes all query types within a unified system using the Agent LLM concept.]

Accordingly, these modern paradigms address either single-hop queries, which can be answered within a single document (i.e., Naive RAG), or complex multi-hop queries, which require the integration and synthesis of information from multiple documents (i.e., Advanced RAG) (Fan et al., 2024). Nonetheless, existing approaches suffer from several significant limitations, including a lack of explainability and traceability. Accordingly, an emerging research issue in this regard is that current methods either inefficiently handle simple queries with unnecessary computational overhead or fail to address complex multi-step queries (Tang and Yang, 2024) (Figure 1 (a)).
To address this issue, a potential method is to add a classification module that determines the complexity of input queries and selects the appropriate RAG model to respond (Jeong et al., 2024) (Figure 1 (b)). However,
this approach is only suitable when the types of queries are predefined (in specific domains or custom benchmark datasets), which might lack flex- ibility and scalability in terms of various real-world applications. Recently, LLM Agent, by leverag- ing LLMs to execute complex tasks, emerged as a promising approach to enable the interpretabil- ity and reasoning capability for LLM (Zhao et al., 2024). Specifically, LLM is regarded as the pri- mary controller, integrating with essential compo- nents such as planning, memory, and action exe- cute operations necessary to complex tasks(Wang et al., 2024a). Based on this emerging conceptual technology, this study raises a research question: Can the LLM agent enable the interpretability and reasoning capability of RAG systems in a unified manner? Figure 1 (c) illustrates our proposed approach, which is designed to enhance the interpretability and effectiveness of LLMs in RAG tasks, com- pared with previous approaches in this research field. Specifically, we leverage the emerging con- cept of LLM agents, employing LLMs as central controllers to unify RAG tasks. Our unified agent is capable of handling queries that require rea- soning processes (including both single-hop and multi-hop queries simultaneously) through self- guided instructions and interaction with the exter- nal knowledge base. Furthermore, most current LLM agent frameworks rely on closed-source mod- els with very large weight sizes (e.g., GPT-4 (Ope- nAI, 2024)), which limits their reproducibility and controllability. Our primary focus, therefore, is on enabling trainable open-source LLM agents. In this regard, we also introduce a synthetic dataset named SynAgent-RAG to train these open-source LLM-based agents for the unified RAG system. In summary, the main contributions of this study are three-fold as follows: (i)We propose a unified RAG system using the concept of the LLM agent, which can han- dle queries that require reasoning processes (e.g. single-hop and multi-hop queries) by self-guided instructions and interaction with the external knowl- edge base to derive a response to the input queries. To the best of our knowledge, this paper is the first study to execute the unified RAG system in an end-to-end manner. (ii)We process and introduce the SynAgent-RAG dataset, which obtains 16,987 synthetic samples to enable small open-source modern LLMs (e.g., Llama-3-8B) to adapt the proposed Agent-UniRAG approach via instruction finetuning. Accordingly,this contribution is important to achieve the de- sired flexibility and scalability since most emerging LLM Agent technologies only work well with very large LLMs as the backbone. (iii)We evaluate the proposed approach on vari- ous RAG benchmarks, including the test set of our proposed SynAgent-RAG dataset. The experimen- tal results show that our approach outperforms pre- vious approaches. Furthermore, with small LLMs (e.g., Llama-3-8B) instruction-finetuned on the pro- posed dataset, we can achieve competitive perfor- mances compared to closed-source (e.g., GPT-4) and larger open-source agent LLMs (e.g., Llama-3- 70B). 2 Literature Reviews 2.1 Retrieval-Augmented LLM The evolution of RAG in the era of LLMs can be di- vided into three categories, including Naive RAG, Advanced RAG, and Modular RAG (Gao et al., 2023). Naive RAG and Advanced RAG are typical Retrieve-Read paradigms (Ma et al., 2023), which focus on finding
the answers in a single document (i.e., single-hop queries (Ram et al., 2023)). Meanwhile, the recently emerging Modular RAG has been introduced to go beyond the two aforementioned RAG paradigms, requiring iterative access to both LLMs and retrievers multiple times (i.e., multi-hop queries (Trivedi et al., 2023)). Specifically, dynamically selecting the suitable strategy (i.e., single-hop or multi-hop) for unified RAG tasks has become an emerging research issue in this field (Jeong et al., 2024).

2.2 LLM Agent Framework

The concept of LLM agents involves LLM applications that can execute complex tasks, in which LLMs serve as controllers of the flow of operations needed to complete a task or user request (Wang et al., 2024a). Accordingly, an LLM agent framework consists of four core components: User Request, Agent, Planning, and Memory. HuggingGPT (Shen et al., 2023) was introduced as one of the first comprehensive LLM-powered agent frameworks, which uses LLMs (i.e., ChatGPT) and the ML community (i.e., Hugging Face) to process inputs from different modalities. Subsequently, Yin et al. (2023) introduced LUMOS, an agent framework for trainable open-source LLMs. Specifically, the framework designs a modular architecture with a planning module to learn subgoals and a grounding module trained to translate subgoals into actions, using tools in the execution module. Inspired by previous works, in this study we present a trainable open-source LLM-based agent framework for unified RAG tasks, which focuses on leveraging the interpretable ability of the LLM to determine the next action for solving RAG tasks.

3 Methodology

This section introduces the design of Agent-UniRAG. Following the typical pipeline of an LLM agent framework, Agent-UniRAG runs in a loop and includes four main components: the Planning Module, the Tool Using Module, the Working Memory Module, and the Reflector Module, as shown in Figure 2.

[Figure 2: Overall design of Agent-UniRAG.]

3.1 Planning Module

Leveraging the reasoning capabilities of modern LLMs, this module is designed to systematically determine the necessary actions required to address a user's request (Input Query) at each step of the process. Specifically, the agent decides between two primary actions at each decision point:

• Action: Search – This action is triggered when the agent needs to acquire additional external knowledge to progress toward solving the problem.
• Action: Final Answer – This action is taken when the agent has accumulated sufficient information to confidently provide a response to the query.

To implement this decision-making process, Agent-UniRAG utilizes the ReAct mechanism (Yao et al., 2023), which allows the agent to iteratively reflect on and refine its execution plan. The mechanism guides the agent through a structured sequence of steps: Thought, Action, and Evidence Feedback. By continuously evaluating and integrating those steps, the agent is capable of addressing complex tasks with great precision.

3.2 Search Tool

At
each stage where external knowledge is required (Action: Search), the agent interacts with the Knowledge Base through the Search Tool by formulating a search query generated by the Planning Module. The purpose of querying external knowledge is to ground the reasoning process in reliable and up-to-date information beyond the agent's internal knowledge. This ensures that the agent's responses are accurate and contextually relevant, especially for tasks requiring current or specialized domain knowledge. The retrieved external evidence supports the resolution of the input query, functioning as a document retrieval task.

3.3 Reflector Module

Documents retrieved from external knowledge bases often include irrelevant or extraneous information, especially when the knowledge base cannot adequately satisfy the query. Incorporating such unfiltered data into LLMs can introduce noise, degrade performance, or even mislead the model. Inspired by Shinn et al. (2024), to mitigate this issue we designed a module called the Evidence Reflector, which operates after the Search Tool and provides evidence feedback to the LLM. The Evidence Reflector filters out irrelevant content and refines the retrieved information, delivering more focused and relevant insights back to the agent. If no suitable evidence is found, it responds with "No information found." This feedback is critical in guiding the model's subsequent actions, ensuring the decision-making process remains both accurate and efficient. The agent can then better locate and leverage relevant information, thereby improving both the quality and precision of its responses.

[Figure 3: Overview of the proposed SynAgent-RAG dataset.]

3.4 Working Memory Module

The Working Memory module functions as a prompt memory, designed to store the input query and internal logs, including previous thoughts, actions generated by the LLM, and the extracted evidence obtained through the LLM's interactions with tools at each step. This memory is processed by the LLM to inform and guide subsequent actions. Furthermore, the Working Memory module ensures the system's transparency and explainability by recording the reasoning process, including
agent triggers a ’Final answer’ ac- tion to aggregate all collected evidence and provide the final answer. 4 SynAgent-RAG Dataset While the framework is fully compatible with big- ger LLMs (e.g., GPT-4), deploying it with smaller LLMs necessitates an additional training process to maintain stability across each step. To address this challenge, we introduce SynAgent-RAG, a syn- thetic dataset designed for Agent-UniRAG. This is achieved through a distillation approach (Semnani et al., 2023), where GPT-4 serves as the teacher model to generate data, and smaller models (e.g., LLama 3) are distilled versions. The primary objec- tive of SynAgent-RAG is to empower the smaller LLM agent with the capability to reason, analyze, and synthesize information drawn from an external knowledge base before delivering a well-reasoned response to complex queries. The construction ofSynAgent-RAG follows the process illustrated in Figure 3. 4.1 Dataset Construction Process 4.1.1 Knowledge Base To construct an effective knowledge base for build- ing the dataset that demands thoroughness, reliabil- ity, and up-to-date information across a wide range of fields, we utilized Wikipedia’s Vital Articles Level 51. These articles represent a curated collec- tion that encompasses essential topics for a compre- hensive understanding of human knowledge. Prior to constructing the dataset, we carefully divided the articles into two separate sets: one for training and one for testing, to ensure a balanced evaluation of the model’s performance. 4.1.2 Question Generation To effectively generate questions that require mul- tiple inference steps to arrive at a final answer, it is crucial to group related passages from source ar- ticles. We hypothesize that these related passages are interconnected through hyperlinks within each Wikipedia article. For each article, we randomly select a passage from the core content of the article as the main passage mi. Then from passage mi, to enhance the scalability of this process, we leverage GPT-4 to determine which hyperlinks are most rel- evant to the content of the main passage, following the prompt template (see Figure 5). This process identifies up to 5 supporting articles with associated hyperlinks. Consequently, we obtain a set of main- supporting passage pairs Ds={(mi,si)}n i=1. Given the obtained set Ds, we construct both sin- gle and multi-hop questions qsthat adhere to spe- 1https://en.wikipedia.org/wiki/Wikipedia: Vital_articles/Level/5 cific criteria following previous works in the field. Single-hop questions are designed to be straightfor- ward, and answerable solely based on the informa- tion contained within the main passage mi. In con- trast, multi-hop questions necessitate information from multiple passages within the pair {(mi,si)}, requiring several inferential steps to derive the final answer. Furthermore, when employing GPT-4 with specified prompt templates (see Figure 6 and 7) the questions and long-form reference answers gener- ated exhibit a high level of reasoning and analysis capability. 4.1.3 Solution Annotation The solution annotation resulting from the plan- ning and action decision of the teacher model to solve complex tasks is the key to effectively dis- tilling the strong reasoning capabilities of student models. In this process, we generate solution an- notations for questions that include a series of steps: Thought ,Action , and Evidence Feedback . Starting with
the original question $q_i$, at each step $t$ GPT-4 is required to perform two tasks replicating the real-world RAG scenario in which external knowledge must be retrieved: (i) provide a short rationale on how to utilize the Search Tool to address the question (Thought $r_i^t$) and formulate a detailed search query (Action $a_i^t$) to retrieve the necessary information; (ii) using the search query $a_i^t$ and the relevant sources $\{(m_i, s_i)\}$ for the question $q_i$, extract the most concise information from those sources and synthesize it as Evidence Feedback $e_i^t$. The result set at step $t$, comprising $\{r_i^t, a_i^t, e_i^t\}$, is concatenated with the question and prior steps in the order $q_i, r_i^1, a_i^1, e_i^1, \ldots, r_i^t, a_i^t, e_i^t$ and used as the input for the agent to determine the plan and actions in the subsequent step $t+1$. The process continues until the agent concludes with the statement "I have the final answer", indicating that sufficient evidence has been gathered. At this point, denoted as step $T$, the final answer is also provided. Finally, the solution annotation for the question $q_i$ includes the thoughts $r_i = \{r_i^1, \ldots, r_i^T\}$, search queries $a_i = \{a_i^1, \ldots, a_i^{T-1}\}$, evidence feedbacks $e_i = \{e_i^1, \ldots, e_i^{T-1}\}$, and the final answer. Details of the prompts for this process are given in Figures 8 and 9.

4.1.4 Annotation Verification

Since the data are generated by an LLM, there are instances where the entire process may fail to provide the final answer. To address this, we implement both human and technical checks to ensure the scalability and reliability of the process. Additionally, we introduce an instruction eliminator, referred to as the Verification Module, to filter out failed annotations. We observe and hypothesize that if the process can produce a final answer similar to the reference answer, then the annotation quality is considered high. Using a specified prompt template, GPT-4 is tasked with generating a brief rationale and then assigning an integer score ranging from 0 to 5, indicating the degree of similarity between the predicted answer and the reference answer and the relevancy to the input query. By employing the Solution Verification Module to filter annotations, we ensure the quality of the dataset by retaining only those annotations that achieve a score of 4 or 5.

4.2 Dataset Analysis

After the annotation generation process, our dataset comprises 16,987 annotated training samples and 1,197 testing samples. Figure 4 shows the distribution of question types and indicates that our dataset largely consists of 'how' questions, confirming our initial goal of constructing a dataset that enhances the agent's ability to reason and synthesize information.

[Figure 4: Question type distribution on the training set — how 37.1%, what 24.9%, which 19.9%, who 6.9%, where 5.1%, other 6.2%.]

In real-world applications, RAG systems typically handle relatively simple queries. To account for this, we deliberately incorporated a higher proportion of queries requiring minimal search and fewer pieces of supporting evidence. On average, each training annotation in our dataset
necessitates two supporting passages. This design ensures that our dataset reflects practical demands while ac- commodating varying complexities of user queries. Moreover, to the best of our knowledge, our dataset is the first dataset to integrate Chain of Thought (COT) reasoning, offering enhanced guidance for the agent to interact with external knowledge. 5 Experiments 5.1 Experimental Setup 5.1.1 Datasets and Evaluation Metrics We evaluate the unified RAG model using both single-hop and multi-hop datasets. Specifically, we employ six benchmark datasets: three single- hop (SQuAD (Rajpurkar et al., 2016), Natural Questions (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017)) and three multi-hop (MusiQue (Trivedi et al., 2022), HotpotQA (Yang et al., 2018), 2WikiMultiHopQA (Ho et al., 2020)). To compare our model against recent state-of-the-art RAG sys- tems on these datasets, which feature short-form answers, we utilize F1, Exact Match (EM), and Ac- curacy (Acc) as evaluation metrics. The F1 score measures the overlap of words between the pre- dicted and ground truth answers, EM checks for exact matches, and Acc verifies whether the pre- dicted answer includes the ground truth. To adapt to the short-form answer setup, we use GPT-4 to extract concise answers from the detailed responses generated by the agent, as illustrated in Figure 10. With each dataset, we benchmark on 500 samples per dataset processed by (Jeong et al., 2024) and (Trivedi et al., 2023). Additionally, we evaluate performance on the SynAgent-RAG test set, comparing small open- source LLMs with larger models utilized as the backbone for the Agent-UniRAG framework. We employ ROUGE-L and BLEU metrics to assess long-form answers. ROUGE-L, based on the longest common subsequence (LCS), measures similarity, while BLEU calculates n-gram preci- sion, incorporating a brevity penalty to account for fluency and accuracy. Given the distinct response styles of each model, a comprehensive evaluation requires assessing their ability to exhibit analytical skills and produce logically coherent long-form re- sponses. To this end, we also use GPT-Score, an LLM-based evaluator, which prompts an LLM to compare the generated answer with the reference and input queries. GPT-Score specifically evalu- ates the semantic alignment between the predicted and reference answers, thereby providing a more nuanced assessment of model performance. 5.1.2 Retrieval System and Corpus Setup For the experiments on the short-form answer datasets, to ensure a fair comparison with the methodologies employed by (Jeong et al., 2024), we utilize the BM25 retriever as the baseline re-trieval model across all corpus. In addition to the BM25 baseline retrieval model, we also experi- ment with adding the Multilingual E5 Large model (Wang et al., 2024b) as the dense reranker after the sparse BM25 retrieval step to observe the ef- fect of better retrieving results can lead to better agent performance. For the external corpus, we index different sources for different dataset types. Specifically, for single-hop datasets, follow (Jeong et al., 2024) we use the Wikipedia corpus prepro- cessed by (Karpukhin et al., 2020), while for multi- hop datasets, we use the corpus preprocessed by (Trivedi et al., 2023). For the experiment on the test set of our SynAgent-RAG dataset, instead of
indexing documents into a corpus, we focus on measuring the model's reasoning capability under optimal retrieval conditions. Here, we leave out the performance of retrieval systems and assume that the retrieved documents are correct and relevant to the original question by directly returning the reference documents as the results of the retrieval phase.

5.1.3 Models

In this study, we compare our approach, Agent-UniRAG, against several retrieval-augmented LLM strategies, including Self-RAG (Asai et al., 2023), which adaptively retrieves passages on demand and generates and reflects on retrieved passages; Adaptive-RAG (Jeong et al., 2024), which dynamically adjusts retrieval based on question complexity; and IRCoT (Trivedi et al., 2023), a state-of-the-art method leveraging iterative retriever-LLM interaction through Chain-of-Thought in-context reasoning. The baseline models in these methods utilize GPT-3.5-Turbo, which is larger than our backbone, Llama-3-Instruct. To further assess the effectiveness of our framework, we conducted an ablation study on the multi-hop datasets. First, we removed the Reflector Module to assess whether directly using the retrieved knowledge, which may include noise, as evidence feedback for the agent leads to degraded performance. Second, we evaluated the effect of bypassing the gradual retrieval process by removing the Planning Module. In this scenario, the LLM was tasked with generating all necessary queries first, subsequently using the retrieved information to directly answer the input query. This setup helps understand the importance of iterative information retrieval in enhancing the agent's decision-making accuracy.

Table 1: Results on different single-hop benchmark datasets. * results are taken from Jeong et al. (2024), with GPT-3.5 as the backbone LLM for the previous approaches. Bold text indicates the best results. Each dataset reports EM / F1 / Acc.

Model | Max Search | Top K / BiEncoder | SQuAD | Natural Questions | TriviaQA
Self-RAG* | No limit | – | 1.6 / 11.9 / 20.8 | 39.2 / 47.1 / 42.4 | 14.6 / 33.7 / 60.2
IRCoT* | No limit | – | 17.4 / 31.5 / 26.2 | 35.6 / 49.7 / 57.8 | 54.8 / 67.1 / 68.0
Adaptive-RAG* | No limit | – | 18.0 / 33.8 / 29.2 | 32.4 / 46.8 / 54.8 | 55.2 / 66.5 / 65.8
Agent-UniRAG | 1 | 8 / No | 23.8 / 34.5 / 49.6 | 43.4 / 51.6 / 61.2 | 57.6 / 65.8 / 71.2
Agent-UniRAG | 1 | 12 / No | 26.6 / 38.1 / 48.6 | 45.2 / 53.9 / 61.2 | 57.0 / 66.2 / 69.0
Agent-UniRAG | No limit | 8 / No | 26.4 / 38.6 / 42.2 | 45.8 / 55.3 / 57.6 | 58.6 / 66.7 / 70.0
Agent-UniRAG | No limit | 12 / No | 28.2 / 40.8 / 42.2 | 48.0 / 57.3 / 58.8 | 57.4 / 67.2 / 67.4
Agent-UniRAG | No limit | 12 / Yes | 32.8 / 46.9 / 42.8 | 59.2 / 68.6 / 64.6 | 63.6 / 72.5 / 71.0

Table 2: Results on different multi-hop benchmark datasets. * results are taken from Jeong et al. (2024), with GPT-3.5 as the backbone LLM for the previous approaches. Bold text indicates the best results. Each dataset reports EM / F1 / Acc.

Model | Max Search | Top K / BiEncoder | MuSiQue | HotpotQA | 2WikiMultiHopQA
Self-RAG* | No limit | – | 1.2 / 8.2 / 11.8 | 5.6 / 17.8 / 30.6 | 3.0 / 19.1 / 39.0
IRCoT* | No limit | – | 23.0 / 32.5 / 31.6 | 45.8 / 58.3 / 52.2 | 52.2 / 66.0 / 62.4
Adaptive-RAG* | No limit | – | 21.8 / 32.6 / 29.6 | 40.4 / 52.5 / 47.0 | 46.6 / 60.0 / 56.8
Agent-UniRAG | No limit | 8 / No | 26.4 / 35.2 / 27.8 | 47.6 / 56.2 / 48.8 | 60.2 / 66.7 / 61.8
Agent-UniRAG | No limit | 12 / No | 26.2 / 35.3 / 28.2 | 48.6 / 58.2 / 50.6 | 59.8 / 66.6 / 61.8
Agent-UniRAG | No limit | 12 / Yes | 30.4 / 39.8 / 32.2 | 50.2 / 59.9 / 52.4 | 58.4 / 64.9 / 60.6
w/o Evidence Reflector | No limit | 12 / No | 20.2 / 29.9 / 21.4 | 49.4 / 59.9 / 52.2 | 51.2 / 57.94 / 53.2
w/o Planning | 1 | 12 / No | 10.2 / 15.5 / 11.4 | 37.4 / 43.2 / 37.6 | 36.8 / 43.5 / 37.6

5.1.4 Training Configurations

Agent-UniRAG uses the instruction version of Meta-Llama-3-8B (https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the backbone open-source LLM and is instruction-finetuned on the proposed SynAgent-RAG dataset. The fine-tuning process spanned 10 hours on a single DGX node with 8 A100 GPUs, each equipped with 40GB of VRAM. The learning rate was set to 2e-5, and the global batch size was 256. The model was trained for 2 epochs using the AdamW optimizer (Loshchilov and Hutter, 2017).

5.1.5 Training Prompt Template

We distill Agent-UniRAG in a multi-task setting by fine-tuning on three subtasks following the proposed framework to guide its planning, its action decisions, and the filtering of evidence feedback. Annotations are organized in conversational formats to facilitate interaction between components, which include:

Conversation planning module annotation: As illustrated in Figure 12, we start by using the user role to provide the question $q$ in the prompt. The planning module then appends the first thought $r^1$ and the initial search query $a^1$ as the first response supervision. For subsequent turns, we act as the user and provide the extracted evidence $e^{t-1}$ of the last search query $a^{t-1}$ to the planning module. The response supervision dictates whether the planning should terminate with the thought "I have the final answer."; if not, the response should include a new thought $r^t$ along with a new search query $a^t$.

Conversation final answer annotation: Instead of letting the LLM generate the final answer in the planning module as in the data generation process, we add more control to the pipeline by separating the provision of the final answer into its own subtask. In that way, we collect the gathered evidence $\{e^1, \ldots, e^{T-1}\}$, provide it together with the question as the user prompt, and treat the final answer as the response (depicted in Figure 14).

Conversation Evidence Reflector annotation: As shown in Figure 13, we provide the search query $a^t$ and the relevant source containing the main-supporting passage pair $\{m, s\}$ in the user turn. The extracted evidence $e^t$ serves as the response to the user's prompt.

Since SynAgent-RAG annotations are conversational, we structure them as $\{x_1, y_1, \ldots, x_i, y_i, \ldots, x_n, y_n\}$, where $x_i$ is the $i$-th user prompt and $y_i$ indicates its response. During training, we input each entire multi-turn annotation into the model, computing the loss solely on the tokens of the responses $Y = \{y_1, \ldots, y_i, \ldots, y_n\}$ and applying a binary mask on the user prompt tokens to prevent computing the loss on them. The final loss function is
$$\mathcal{L} = -\sum_{j} \log p_{\pi}(t_j \mid t_{<j}) \times \mathbb{1}(t_j \in Y) \quad (1)$$
where $t_j$ denotes the $j$-th input token and $\mathbb{1}(\cdot)$ is a Boolean indicator function.

5.2 Main Results

We present a detailed performance comparison of the proposed
5.2 Main Results

We present a detailed performance comparison of the proposed approach with previous methods, as shown in Table 1 for the single-hop datasets and Table 2 for the multi-hop datasets. Notably, Agent-UniRAG, which leverages a small open-source LLM as its backbone, demonstrates competitive performance relative to recent state-of-the-art models that utilize significantly larger LLMs. A key strength of our model is its ability to handle diverse query types uniformly and simultaneously. Beyond this, we make three specific observations.

Agent-UniRAG can effectively interact with the external knowledge base. We observe that increasing the search limit and the number of top-K retrieved documents leads to performance improvements. Specifically, with top-K retrieval set to 12 and the integration of a dense encoder module for reranking, our proposed Agent-UniRAG substantially outperforms previous methods, achieving state-of-the-art results on the majority of benchmark RAG datasets in this research field. Additionally, in the single-hop settings, when the maximum search limit is increased from 1 to 'No limit', we observe a further increase in performance, highlighting the LLM agent's capability to interact with evidence feedback and then reason about and refine search queries to gather better retrieval results.

The importance of the designed modules in the pipeline. In the multi-hop reasoning setting, the ablation studies (Table 2) show that removing the Evidence Reflector module results in noticeable performance degradation, particularly on more complex datasets such as MuSiQue (Trivedi et al., 2022), underscoring the critical role of the Evidence Reflector in providing concise and relevant evidence feedback that helps the agent make better subsequent decisions. We also removed the Planning module, which serves as the central component of the pipeline. Its removal led to an even larger decline in performance metrics, illustrating its pivotal role in orchestrating the agent's multi-step reasoning process and the necessity of iterative information retrieval.

| Agent LLM | Rouge-L | BLEU | GPT-Score | Step |
| Llama-3-70B-Inst | 0.36 | 0.12 | 3.62 | 5.68 |
| GPT-4-Turbo | 0.35 | 0.13 | 4.35 | 2.27 |
| Agent-UniRAG | 0.36 | 0.15 | 4.19 | 2.08 |

Table 3: Agent-UniRAG compared with Llama-3-70B-Inst and GPT-4-Turbo on the SynAgent-RAG test set.

Effectiveness of the SynAgent-RAG dataset in distilling reasoning capability. Table 3 presents results on the test set of the SynAgent-RAG dataset. The analysis makes it evident that traditional metrics such as Rouge-L and BLEU, which focus on lexical overlap, are insufficient for evaluating reasoning and accuracy in long-form answers. In contrast, GPT-Score, which leverages LLMs for semantic evaluation, provides a more accurate assessment. Our proposed Agent-UniRAG model, finetuned on the SynAgent-RAG training set, demonstrates strong performance under GPT-Score, achieving results comparable to significantly larger models such as Llama-3-70B-Inst and GPT-4. Notably, Agent-UniRAG reaches this level of performance while issuing fewer external search queries, highlighting its computational efficiency relative to the heavier resources typically required by larger LLMs and its suitability as an efficient solution for generating accurate long-form answers. This result also underscores the effectiveness of the SynAgent-RAG dataset in distilling reasoning capabilities from a larger LLM (GPT-4) into a more compact framework.
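For concreteness, the sketch below shows one way a GPT-Score of the kind reported in Table 3 could be obtained from an LLM judge following the rubric of Figure 11. It is a hypothetical illustration: the OpenAI client, the judge model name, and the JSON parsing are our assumptions, not the authors' exact evaluation setup.

```python
# Hypothetical LLM-as-judge sketch for a 0-5 GPT-Score (see rubric in Figure 11).
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "You will be given a predicted answer, a question and a reference answer. "
    "Score the predicted answer from 0 to 5 and respond only as JSON: "
    '{"rationale": "...", "score": "..."}'
)

def gpt_score(question: str, reference: str, prediction: str) -> int:
    prompt = (
        f"{RUBRIC}\n\n### Question: {question}\n"
        f"### Reference Answer: {reference}\n### Predicted Answer: {prediction}"
    )
    resp = client.chat.completions.create(
        model="gpt-4-turbo",  # assumption: any capable judge model
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    # Assumes the judge returns well-formed JSON as instructed.
    return int(json.loads(resp.choices[0].message.content)["score"])
```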
6 Conclusion

Most previous works on LLM RAG tasks either handle single-hop
queries with unnecessary computational complexity or fail to address complex multi-hop queries. This study, inspired by emerging LLM agent technologies, presents a novel approach for unified RAG systems with enhanced effectiveness and interpretability. Furthermore, we introduce SynAgent-RAG, a synthetic dataset that enables training open-source LLM agents for the unified RAG task. Compared with previous works using larger LLMs, the experiments show promising results for Agent-UniRAG with a small backbone (i.e., Llama-3-8B).

Limitations

Agent-UniRAG, the unified Retrieval-Augmented Generation (RAG) model, has shown promising results in handling different types of queries, including single-hop and multi-hop queries. Notably, the approach can be applied to small open-source language models. However, real-world inputs often go beyond single-hop or multi-hop queries that require access to external knowledge bases; they may also include non-RAG tasks such as creative writing or function calling. Therefore, a crucial direction for future research is to extend the proposed approach to handle broader types of queries, covering both RAG and non-RAG tasks. Additionally, like other LLM agent architectures, Agent-UniRAG requires multiple calls to the language model to generate a final response, which introduces a computational challenge during inference. Consequently, optimizing LLM inference for agent architectures is another critical aspect of our future work.

References

Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2023. Self-rag: Learning to retrieve, generate, and critique through self-reflection. CoRR, abs/2310.11511.

Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 2206–2240. PMLR.

Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing Li. 2024. A survey on RAG meeting LLMs: Towards retrieval-augmented large language models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2024, Barcelona, Spain, August 25-29, 2024, pages 6491–6501. ACM.

Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Qianyu Guo, Meng Wang, and Haofen Wang. 2023. Retrieval-augmented generation for large language models: A survey. CoRR, abs/2312.10997.

Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing a multi-hop QA dataset for comprehensive evaluation of reasoning steps. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 6609–6625. International Committee on Computational Linguistics.

Gautier Izacard, Patrick S. H.
Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2023. Atlas: Few-shot learning with retrieval augmented language models. J. Mach. Learn. Res., 24:251:1–251:43.

Soyeong
Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, and Jong Park. 2024. Adaptive-rag: Learning to adapt retrieval-augmented large language models through question complexity. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), NAACL 2024, Mexico City, Mexico, June 16-21, 2024, pages 7036–7050. Association for Computational Linguistics.

Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30-August 4, Volume 1: Long Papers, pages 1601–1611. Association for Computational Linguistics.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. CoRR, abs/2004.04906.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Trans. Assoc. Comput. Linguistics, 7:452–466.

Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. In International Conference on Learning Representations.

Xinbei Ma, Yeyun Gong, Pengcheng He, Hai Zhao, and Nan Duan. 2023. Query rewriting in retrieval-augmented large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5303–5315, Singapore. Association for Computational Linguistics.

OpenAI. 2024. Gpt-4 technical report. Preprint, arXiv:2303.08774.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383–2392. The Association for Computational Linguistics.

Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models. Transactions of the Association for Computational Linguistics, 11:1316–1331.

Sina J. Semnani, Violet Z. Yao, Heidi C. Zhang, and Monica S. Lam. 2023. Wikichat: Stopping the hallucination of large language model chatbots by few-shot grounding on Wikipedia. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 2387–2413. Association for Computational Linguistics.

Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugginggpt: Solving AI tasks with ChatGPT and its friends in Hugging Face. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023.

Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2024. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36.

Yixuan Tang and Yi Yang. 2024. Multihop-rag: Benchmarking retrieval-augmented generation for multi-hop queries. CoRR, abs/2401.15391.
Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. Musique: Multi-hop questions via single-hop question composition. Trans. Assoc. Comput. Linguistics, 10:539–554.

Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2023. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive
multi-step questions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10014–10037, Toronto, Canada. Association for Computational Linguistics.

Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, and Jirong Wen. 2024a. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6).

Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, and Furu Wei. 2024b. Multilingual E5 text embeddings: A technical report. Preprint, arXiv:2402.05672.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31-November 4, 2018, pages 2369–2380. Association for Computational Linguistics.

Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. 2023. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.

Da Yin, Faeze Brahman, Abhilasha Ravichander, Khyathi Chandu, Kai-Wei Chang, Yejin Choi, and Bill Yuchen Lin. 2023. Agent Lumos: Unified and modular training for open-source language agents. arXiv preprint arXiv:2311.05657.

Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, and Gao Huang. 2024. Expel: LLM agents are experiential learners. In Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2024, February 20-27, 2024, Vancouver, Canada, pages 19632–19642. AAAI Press.

A Appendix

B Illustrations of Agent-UniRAG Samples

Tables 4 and 5 illustrate our data with examples, including the question, thought process, actions taken, evidence gathered, and the final long-form answer.

C Prompt Templates

For better reproducibility, we present all prompt templates in the appendix. Below is a quick reference list outlining the prompt templates and their usages:

• Figure 5: Find related sections from articles linked to the main article.
• Figure 6: Generate a multi-hop question from a cluster of related sections.
• Figure 7: Generate a single-hop question from a section.

Question: What highway was renamed in honor of Tim Russert?
Agent-UniRAG Output:
### Thought: The first step is to use the search_engine tool to find which highway was renamed in honor of Tim Russert.
### Action - Search Input: highway renamed in honor of Tim Russert
### Evidence: On July 23, 2008, U.S. Route 20A leading to the Buffalo Bills' Ralph Wilson Stadium in Orchard Park, New York was renamed the 'Timothy J. Russert Highway' in honor of Tim Russert.
### Thought: I have the final answer
### Action - Final Answer: U.S. Route 20A was renamed the 'Timothy J. Russert Highway' in honor of Tim Russert.
Table 4: Example of Agent-UniRAG output with a single-hop query.

Question: How do Jerry Falwell's beliefs about the Antichrist as a specific person contrast with Martin Wight's interpretation of the Antichrist concept after World War II?
Agent-UniRAG Output:
### Thought: Understand Jerry Falwell's beliefs about the Antichrist as a specific person.
### Action - Search
Input: Jerry Falwell beliefs about the Antichrist
### Evidence: In 1999, Jerry Falwell professed that the Antichrist would likely emerge within a decade, with the prediction that this figure would be Jewish, which led to allegations of anti-Semitism and an ensuing apology from Falwell. He believed the Antichrist and Christ would share many attributes.
### Thought: Explore Martin Wight's interpretation of the Antichrist concept after World War II to contrast with Falwell's views.
### Action - Search Input: Martin Wight interpretation of the Antichrist concept after World War II.
### Evidence: Post-World War II, Christian and political theorist Martin Wight interpreted the Antichrist not as an individual, but as a symbol of 'demonic concentrations of power' recurring over time.
### Thought: I have the final answer
### Action - Final Answer: Jerry Falwell's beliefs about the Antichrist focused on the idea that the Antichrist would be a specific individual, with Falwell predicting in 1999 that this person would likely emerge within a decade and possess similarities to Christ. In contrast, Martin Wight's post-World War II interpretation of the Antichrist was symbolic, referring to it as 'demonic concentrations of power' that recur throughout history, rather than an individual figure.

Table 5: Example of Agent-UniRAG output with a multi-hop query.

• Figure 8: Extract related evidence for a search query from a list of source content.
• Figure 9: Generate a solution annotation for a question.
• Figure 10: Extract a short-form answer from a long-form answer.
• Figure 11: Take the GPT-Score of an annotated answer against the reference answer.
• Figure 12: Training prompt template for the agent to reason and use tools.
• Figure 13: Training prompt template for the agent to extract related evidence for a query from sources of content.
• Figure 14: Training prompt template for the agent to provide the final answer for the question from gathered evidence.

All prompts are zero-shot, except for the prompt in Figure 10, which uses few-shot demonstrations to better guide the LLM to perform the task. These prompts were chosen because they perform effectively in practice.

### Given a source of a Wikipedia article containing [[hyperlinks]] and a list of section titles from the linked articles, identify section titles that are most relevant to supporting the main topic.
### Wikipedia article: { "article_title": "actual article title", "section_title": "actual section title", "content": "actual content" }
### Hyperlinks with Section Titles: [ { 'entity': '[[actual entity in Wikipedia article marked in double square brackets]]', 'article_title': 'linked article title', 'article_sections': ['list of linked article section titles'] }, ... ]
### Notes: 1) You can select ONLY one section title per article 2) Response must be in the following JSON format: [ { "rationale": "condensed and short reason why you select the section", "article_title": "article_title", "section_title": "article_section_title" }, ... ] 3) You can select up to {{k}} most proper section titles

Figure 5: Prompt template for GPT-4 to find related section content from articles.

### Clarification: 1) A multi-hop question is a question that requires multiple inferential leaps or accessing several pieces of information from different sources to arrive at a final
answer . 2) You will be given sources of articles, your job is to generate a multi-hop question and then provide the answer for the question based on the provided sources. ### Sources [ { "article_title": "actual article title", "section_title": "actual section title", "content": "actual content" },... ] ### Notes: 1) The question cannot be answered by relying on any single article alone but instead requires the solver to gradually gather and search for pieces of evidence within ALL the provided sources then understand and link information to take the next action, and finally give back the answer . 2) Make sure the question flows logically and is unambiguous. 3) The information in the answer MUST be derived from the sources 3) Response in the following JSON format: {"question": "your question", "answer": "correct answer for the question"} 4) Do not mention the source of information in the question or the answer . Figure 6: Prompt template for GPT4 to generate multi-hop questions. ### You will be given a source of article. Your job is to create a relevant question to the source and then provide the answer for the question based on the provided source : ### Source: { "article_title": "actual article title", "section_title": "actual section title", "content": "actual content" } ### The question MUST satisfy the following conditions: 1) The question must related to the content of the source. 2) Make sure the question is simple enough and unambiguous. 3) The question requires synthesizing information from the source to answer . 3) The information in the answer MUST be derived from the sources 5) Do NOT mention the source in the question or in the answer . ### Response MUST be in the following JSON format: { "question": "Y our question here", "answer": "the detailed answer to the question" }Figure 7: Prompt template for GPT4 to generate single-hop questions. ### Task: Synthesize a condensed text evidence from given sources to support a search query . ### Sources: [ { "article_title": "actual article title", "section_title": "actual section title", "content": "actual content" },... ] ### Search Query: {{actual search query}} ### Selection Guidelines: 1. Clarity: Evidence must be clear , concise. 2. Conciseness: Evidence must be presented in a succinct manner , condensed and AVOIDING unnecessary details. 3. Relevance: Evidence must directly correspond and relevant to the search query . 4. Source Integrity: Only use information from the provided sources, AVOIDING generated or unnecessary information. 5. If multiple part of a source is relevant to the search query , combine them into one element in the response list. ### Response MUST be in a JSON list as below: [ { "evidence": "condensed text supporting the search query from a source", "source_id": "an identifier of the source text" } ] If no evidence is found, respond the following json: [ { "evidence": "No supporting evidence found.", "source_id": null } ] Figure 8: Prompt template for GPT4 to extract evidence from a list of sources You will be utilizing the following tool to assist with answering questions: { "tool_name": "search_engine", "tool_description": "This tool can search
an external knowledge base to find text evidence for the provided search query .", "tool_note": "Y ou can not search for multiple information at once. Please provide one clear and detailed query for a small piece of evidence you want at a time.", "tool_input": [ { "param_name": "search_query", "param_type": "string", "param_description": "A detailed search query to search within the knowledge base to get some pieces of evidence" } ] } Your task is to solve a provided question by using the tool. Follow these steps: 1) Reasoning step by step how you will use the tool to solve the question. 2) You can only use one tool each time then you get the response and continue. 3) Provided that you DO NOT have any initial knowledge about the information mentioned in the question and DO NOT generate facts or evidence yourself. 4) Provide a CLEAR and CONCISE answer . 5. Format responses to utilize the search_engine tool as follows: ### Thought: A short and condensed rationale for using the search_engine tool. (one sentence) ### Search Input: Format the search query input for the search_engine tool as a JSON object, correctly representing input parameters. ### Observation: the text evidence after searching that will be given to you. (will be [W AITING] when you are waiting for the evidence response, also note that the observation sometimes can be partially related or not related to the search query , you need to reason and continue to use the tool until you have a response for the user . DO NOT generate observation yourself.) When you have a response for the user , or if you do not need to use a tool, use the format: ### Thought: I have the final answer ### Final Answer: your condensed answer to the main Let's begin with the question: {{actual question}}Figure 9: Prompt template for GPT4 to reason, use tools and provide the final answer for a question ### Task: You are doing the Extractive Question Answering task. You will be given a question and a reference answer . Your task is to extract exactly a list of text spans inside the reference answer that can serve as the short answers for the question. ### Question: {{actual question}} ### Reference Answer: {{actual answer}} ### Notes: 1) Your response MUST be in JSON format {"short_extracted_answers": ["extracted answer 1", ...]} 2) The extracted answers MUST be united and interchangeable, try to combine nearby words in the Reference Answer to from an answer 3) Do not generate answers or information yourself 4) If the question is a yes/no question, then you should base it on the reference answer to return yes or no as the extracted answer . 5) If you can not extract the answer or the answer is not provided in the Reference Answer , then respond: {"short_extracted_answers": null} ### Examples: Question: What percentage of French publishing houses were in Paris in the 1970s? Reference Answer: In the 1970s, 80 percent of French-language publishing houses were located in Paris. Response: {"short_extracted_answers": ["80 percent", "80"]} Question: When did Claridge's company
liquidate? Reference Answer: Claridge's company liquidated on November 10, 1917. Response: {"list_of_short_extracted_answers": ["November 10, 1917", "10 November 1917", "November 1917", "1917"] Question: Do both Icehouse pieces and El Grande come from the same game? Reference Answer: No, Icehouse pieces and El Grande do not come from the same game. Icehouse pieces are from the game system of the same name, invented by Andrew Looney and John Cooper , while El Grande is a German-style board game designed by Wolfgang Kramer and Richard Ulrich. Response: {"list_of_short_extracted_answers": ["No"]}Figure 10: Prompt template to extract short answer from long answer ### Task: You are a powerful and accurate assistant in checking the quality of a predicted answer . You will be given a predicted answer , a question and a reference answer . ### Here are some criteria for you to grade the predicted answer: 1) The score MUST be an integer range from 0 to 5. 2) The content of the predicted answer should be relevant and focus on the question. 3) Any missing or excess information in the predicted answer compared to the reference answer will be penalized in the final score. 4) If the question is a question that requires the analysis of information, then you should reinforce the above criteria. ### Question: {{question}} ### Reference Answer: {{reference answer}} ### Predicted Answer: {{predicted answer}} ### Note: Your response MUST be in the following JSON format and do NOT generate unnecessary details beyond the JSON object { "rationale": "Y our brief rationale for how you scored the predicted answer", "score": "the score of the predicted answer" } Figure 11: Prompt template for GPT4 to compare and score the predicted answer and the reference answer. ### Task: You are a problem solver . Given a question, your task is to solve the question by gradually gathering information with search_engine tool and provide the final answer to the question. ### Question: {actual question} ### Thought: {agent thougth 1} ### Action: {agent action 1} ### Search Input: {agent search input} ### Observation: {observation provided by the environment} .... ### Thought: I have the final answer . ### Action: final_answer Figure 12: Training prompt template for Agent-UniRAG to reason and use tools. Loss is computed only on the red part as the GPT turns in the conversation setup. ### Task: Extract the relevant information from the sources to support the following query . ### Sources: [ { "source_id": "an integer start from 0", "content": "related content to the query" } ] ### Query: {agent query} [ { "source_id": "related source id", "evidence": "evidence from source id" }, ... ]Figure 13: Training prompt template for Agent-UniRAG extract related evidence for a query from sources of content. Loss is computed only on the red part as the GPT turns in the conversation setup. ### Task: Given the question and a list of evidence, your task is to provide the final answer to the question based on the information within the evidence. ### Evidence: {concatenated text of gathered evidence} ### Question: {initial question} {final answer} Figure 14: Training prompt template for
the agent to provide the final answer to the question from gathered evidence.

Fusion Steering: Prompt-Specific Activation Control

Waldemar Chang, Johns Hopkins University, wchang58@jh.edu
Dr. Alhassan Yasin, Johns Hopkins University, ayasin1@jhu.edu

Abstract

We present Fusion Steering, an activation steering methodology that improves factual accuracy in large language models (LLMs) for question-answering (QA) tasks. This approach introduces flexible steering configurations, including full-layer steering and segmented steering. Unlike traditional methods constrained to single-layer or fixed-layer operations, Fusion Steering employs dynamic injection of prompt-specific activation deltas across all transformer layers. These activation deltas are derived from reference completions that combine the ground-truth answer with a model-generated explanation to facilitate semantically enriched, example-specific steering. The injection weights are optimized per prompt using Optuna, targeting a joint objective that balances token overlap (factual alignment) and perplexity (a fluency proxy). Evaluation employs a composite score integrating token overlap and LLM-graded quality, encompassing factual accuracy, coherence, and relevance. Empirical results on 260 SimpleQA prompts (selected from 500 where the baseline failed) showcase the efficacy of segmented steering. Using Gemma-2-2B-IT with 8-bit quantization, segmented steering achieves an accuracy of 25.4% (outputs scoring ≥ 0.6), outperforming the baseline at 3.5% and full-layer steering at 16.2%. Under the stricter SimpleQA rubric, segmented steering boosts fully correct responses from 0.0% to 13.1%. These findings highlight the strengths of segmented, dynamic intervention strategies and the promise of per-prompt, full-network activation control. Fusion Steering is also amenable to sparse representations, such as Neuronpedia or sparse crosscoders, suggesting a promising direction for interpretable and scalable activation-level control in LLMs.

1 Introduction

Large language models (LLMs) have exhibited exceptional capabilities across a broad spectrum of natural language processing tasks, from fundamental text classification to intricate reasoning and question answering (QA) [1, 2, 3]. Nevertheless, achieving precise control over their outputs remains a formidable challenge. Recent advances in interpretability research underscore the complexity of comprehending or modulating LLM behavior at a holistic level, particularly due to long-range reasoning dependencies, abstract conceptual structures, and intertwined feature representations [4]. This highlights the increasing importance of localized, prompt-specific control methodologies that integrate multi-layer interventions to mitigate cross-layer superposition and polysemantic feature entanglement [5, 6]. Prior investigations into steering neural network activations have predominantly centered on tasks involving straightforward semantic or syntactic adjustments, such as antonym generation, sentiment modulation, or toxicity suppression, where steering vectors are typically well-defined and exhibit linear characteristics within activation spaces [7, 8, 9]. Although effective in simpler contexts, these steering methods remain largely unexplored for more sophisticated, semantically rich tasks like factual question answering, which necessitate highly precise and context-aware outputs. All code and materials are available at https://github.com/MechInterpreter/Fusion_Steering.

Preprint. Under review.
In this work, we introduce a novel steering approach specifically designed for factual QA tasks. Our method distinguishes itself from previous approaches in four key aspects: First , we introduce the concept of enriched reference activations , which capture neural activation vectors when the model processes a fully enriched prompt explicitly containing both the correct ground-truth answer and its explanation. These vectors encode detailed semantic information directly tied to factual accuracy and explanatory coherence. This approach differs from prior
methods that derive latent steering vectors from generic prompts or contrastive pairs, without integrating explanatory context as a conditioning input [7, 9, 10, 11, 12]. Second , our approach employs a synergistic combination of interpolated activation patching and additive steering, executed concurrently across alltransformer layers. This comprehensive multilayer intervention strategy contrasts with prior methods, which typically applied steering to individual layers in isolation [ 7,9,13,14], rather than concurrently across the whole network. Furthermore, unlike previous methods, we integrate an interpolated variant of activation patching with the directional modulation introduced by additive steering, ensuring a more refined and context-sensitive control mechanism. Third , we introduce prompt- and layer-specific tuning by optimizing two hyperparameters per instance: the fusion weight , which integrates the model’s original activations with the reference activations, and the steering strength , which scales the magnitude of the injected activation shift. These parameters are independently tuned across three distinct layer groups— early (layers 0–8), middle (layers 9–17), and late(layers 18–25), using 0-based indexing—enabling fine-grained, context- sensitive control over how factual guidance is incorporated throughout the model. Previous approaches to activation steering generally rely on a fixed intervention vector at a single layer or across multiple layers, often employing globally tuned intensity parameters that do not adapt to specific prompts or network regions [ 7,9,15,16]. In contrast, our approach performs localized, per-prompt calibration at multiple functional depths of the model, offering more precise control over the factual alignment process. Fourth , while previous steering approaches have targeted specific behaviors or attributes with clearly defined latent directions [ 7,8,9,17], our method is explicitly designed to enhance factual accuracy in open-ended question answering scenarios, where alignment cannot be captured by a single contrastive axis. This presents a more challenging control objective: the semantics of factual questions disperse across multiple, input-dependent dimensions. Effective control therefore requires semantically grounded steering capable of guiding the model toward discrete, knowledge-intensive answers. By distributing semantically meaningful reference activations throughout the network and dynamically adapting both their intensity and integration per prompt and layer group, our method enables holistic and precise control, explicitly guiding the model toward accurate, contextually grounded answers from the question alone. We empirically validate our approach on the SimpleQA benchmark [ 18], demonstrating that our method significantly improves factual accuracy compared to the base model’s responses. 2 Methodology and experimental setup Note: All model generations in this work, whether from Gemma-2-2B-IT during inference or either Mistral Small 3.1 instance used for filtering and grading are performed under greedy decoding with temperature = 0.0 . This ensures complete determinism throughout the pipeline and facilitates precise, reproducible evaluation across all experimental stages. 
2.1 Initial inference and subset construction

We first ran the instruction-tuned baseline model (Gemma-2-2B-IT) [19] on 500 prompts from the SimpleQA test split and used a rubric-based factuality classifier built on the Mistral Small 3.1 model [20] to grade each response as CORRECT, INCORRECT, or NOT_ATTEMPTED (see Appendix A). From these, we selected the 260 examples labeled INCORRECT, excluding both correct and non-attempted responses, for steering experimentation. We focus on this 260-prompt
subset both due to computational constraints and because it represents the most challenging cases: each was labeled INCORRECT by the LLM grader, making it ideal for stress-testing the effectiveness of steering interventions.

To confirm that our experimentation subset covers a representative range of topics, we compared the topic distribution of the full 500-prompt sample to the 260 INCORRECT examples. As shown in Figure 1, the distributions are broadly aligned across major categories such as Science and Technology, Politics, and Art, indicating that the incorrect subset retains the diversity of the larger sample. Furthermore, the complete SimpleQA dataset spans 4,326 prompts in 10 topic categories, with the largest shares belonging to Science and Technology (19.8%), Politics (16.4%), and Art (12.7%). This general distribution, shown in Figure 1 and documented in the official release [18] (https://openai.com/index/introducing-simpleqa/), validates that our 500-prompt sample, and by extension the 260-prompt subset, is well stratified and representative of the broader benchmark.

Figure 1: Topic distribution comparison across all subsets. Proportions remain consistent despite sample size differences, indicating strong stratification.

2.2 Capturing reference activations

We capture reference activations from Gemma-2-2B-IT using an enriched prompt that combines the question, the ground-truth answer, and an explanation generated by the model after being fed the question and the ground truth:

Question: [prompt] Answer and Explanation: [ground truth + explanation]

This entire input sequence is tokenized and fed to the model. We then extract neural activations across all transformer layers, but only from token positions corresponding to the ground-truth answer and explanation. These activations are averaged at each layer to produce a single robust reference activation vector per layer. Formally, let h_l ∈ R^d denote the activation vector for layer l, calculated as:

h_l = (1/|T|) Σ_{t∈T} a_{l,t}    (1)

where a_{l,t} denotes the activation at layer l for token position t, and T denotes the set of token positions aligned to the ground-truth answer and explanation within the prompt.

2.3 Multi-layer patching and steering

At inference time, when presented only with the question (excluding the ground-truth answer and explanation), we inject the previously captured reference activations across transformer layers using an additive steering mechanism. This steering dynamically nudges the internal representations toward the mean representation seen during reference activation capture. Formally, let h_l ∈ R^d denote the reference activation vector at layer l, and let μ_l ∈ R^d be the mean of the activations at layer l across all token positions during inference. We compute a global shift vector:

Δ_l = γ (h_l − μ_l)    (2)

This shift is added uniformly to each token's activation in the layer:

a^steered_{l,t} = a^original_{l,t} + Δ_l    (3)

Finally, we blend the steered and original activations using an interpolation parameter α ∈ [0, 1], which we refer to as the fusion weight:

a^final_{l,t} = α · a^steered_{l,t} + (1 − α) · a^original_{l,t}    (4)

Here, γ modulates the intensity of the directional shift, and α controls the extent of blending (the fusion step in our Fusion Steering method) between steered and original activations. We explore two distinct implementations of this mechanism. In full-layer steering, reference activations are injected uniformly across all transformer layers using a single set of α and γ parameters.
In contrast, segmented steering divides the model into early, middle, and late layer groups, each steered using separately optimized α and γ values.
This allows for finer-grained control over how different representational stages of the model are influenced. We aim to understand the trade-offs between globally uniform steering and more targeted, segment-aware modulation of internal activations.

2.4 Prompt-specific parameter optimization

To ensure optimal parameter selection tailored to each input question, we employ Optuna-based hyperparameter optimization [21]. For each prompt-steering configuration, we ran five Optuna trials and selected the configuration with the highest composite score. Parameters are tuned separately for each prompt using a composite objective that captures both factual accuracy and linguistic fluency:

Optimization Metric = λ · Token Overlap − (1 − λ) · Perplexity_normed    (5)

Here, λ ∈ [0, 1] governs the trade-off between factual alignment and fluency. The token overlap score captures n-gram-level agreement with the reference answer, while perplexity, measured via GPT-2 [22], estimates generation fluency. Since lower perplexity indicates higher fluency, its normalized value is subtracted so that maximizing the objective favors both alignment and coherence. We fix λ = 0.7 to moderately prioritize factual alignment over fluency. While we did not perform an exhaustive sweep of λ, this setting reflects a deliberate trade-off: token overlap serves as a reliable optimization signal because SimpleQA ground truths consist primarily of concise factual phrases. The setting was chosen heuristically as a stable midpoint between full token-overlap weighting (λ = 1.0) and equal balance (λ = 0.5), based on pilot trials. More importantly, this design enables efficient per-prompt tuning, as both token overlap and perplexity can be computed rapidly.

During optimization, the search space for both the fusion weight α and the steering strength γ is bounded to [0.01, 1.0], ensuring sufficient expressiveness while avoiding degenerate extremes; this range was found to offer stable convergence during pilot experiments. In the full-layer steering setup, the same α and γ values are applied uniformly across all transformer layers for a given prompt. In contrast, segmented steering partitions the model into early, middle, and late layer groups (layers 0–8, 9–17, and 18–25 respectively, using 0-based indexing), with each group steered using its own optimized parameters. This finer-grained design allows more precise control over different stages of the model's internal representation. While we apply this partitioning to Gemma-2-2B-IT, our choice is informed by prior work on LLaMA-style architectures showing that adjacent layers often share functional roles and exhibit high intra-group activation similarity [23]. These findings suggest that transformer layers naturally cluster into coherent groups, supporting layer-wise modulation at group granularity even across different decoder-only architectures.
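To make Eqs. (1)–(5) concrete, the sketch below shows one way to capture per-layer reference activations with forward hooks, apply the additive shift and fusion of Eqs. (2)–(4), and tune (α, γ) per prompt with Optuna. It is a minimal illustration under stated assumptions, not the released implementation: we assume a HuggingFace-style decoder whose blocks are exposed as model.model.layers, and the helpers capture_reference, steering_hooks, token_overlap, and normed_perplexity are our own illustrative names (the perplexity term is left as a placeholder where the paper uses normalized GPT-2 perplexity).

```python
# Minimal sketch of Fusion Steering (Eqs. 1-5); assumptions noted in the text above.
import optuna
import torch

def capture_reference(model, enriched_ids, answer_positions):
    """Eq. (1): per-layer mean activation over answer/explanation token positions."""
    refs, hooks = {}, []
    def make_hook(idx):
        def hook(module, inputs, output):
            h = output[0] if isinstance(output, tuple) else output   # (1, T, d)
            refs[idx] = h[0, answer_positions, :].mean(dim=0)        # h_l
        return hook
    for i, layer in enumerate(model.model.layers):
        hooks.append(layer.register_forward_hook(make_hook(i)))
    with torch.no_grad():
        model(input_ids=enriched_ids)
    for h in hooks:
        h.remove()
    return refs

def steering_hooks(model, refs, alphas, gammas, group_of):
    """Eqs. (2)-(4): shift toward the reference mean, then fuse with the original."""
    hooks = []
    def make_hook(idx):
        a, g = alphas[group_of[idx]], gammas[group_of[idx]]
        def hook(module, inputs, output):
            h = output[0] if isinstance(output, tuple) else output
            mu = h.mean(dim=1, keepdim=True)            # mean over token positions
            delta = g * (refs[idx] - mu)                # Eq. (2)
            fused = a * (h + delta) + (1 - a) * h       # Eqs. (3)-(4)
            return (fused,) + output[1:] if isinstance(output, tuple) else fused
        return hook
    for i, layer in enumerate(model.model.layers):
        hooks.append(layer.register_forward_hook(make_hook(i)))
    return hooks

def token_overlap(pred, ref):
    p, r = set(pred.lower().split()), set(ref.lower().split())
    return len(p & r) / max(len(r), 1)

def normed_perplexity(text):
    return 0.0  # placeholder; the paper normalizes GPT-2 perplexity here

def objective(trial, model, tokenizer, question_ids, refs, group_of, reference_text):
    """Eq. (5) with lambda = 0.7, searched per prompt over (alpha, gamma) per group."""
    groups = set(group_of.values())
    alphas = {g: trial.suggest_float(f"alpha_{g}", 0.01, 1.0) for g in groups}
    gammas = {g: trial.suggest_float(f"gamma_{g}", 0.01, 1.0) for g in groups}
    hooks = steering_hooks(model, refs, alphas, gammas, group_of)
    try:
        out = model.generate(question_ids, max_new_tokens=100, do_sample=False)
        text = tokenizer.decode(out[0], skip_special_tokens=True)
    finally:
        for h in hooks:
            h.remove()
    lam = 0.7
    return lam * token_overlap(text, reference_text) - (1 - lam) * normed_perplexity(text)

# Usage (per prompt): study = optuna.create_study(direction="maximize")
# study.optimize(lambda t: objective(t, model, tok, q_ids, refs, group_of, ref_text), n_trials=5)
```

Using separate (alpha, gamma) pairs per layer group corresponds to segmented steering; collapsing group_of to a single group recovers the full-layer variant.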
2.5 Deterministic inference via greedy decoding

To ensure consistency and eliminate variance due to sampling, all generations, whether baseline, full-layer steering, or segmented steering, are produced using greedy decoding. At each timestep, the model deterministically selects the highest-probability token by setting temperature = 0.0; other sampling parameters (e.g., top-p) are left at default values and have no effect under this setting. This inference mode guarantees a single, reproducible output for each prompt-steering configuration pair, allowing for precise attribution of differences in factuality or fluency to the effects of steering rather than stochastic variation. It also simplifies hyperparameter optimization by ensuring stable and reproducible metrics, such as token overlap and perplexity, for evaluating candidate configurations. While greedy decoding may reduce output diversity, our
primary focus is on factual correctness and reliability, both of which benefit from deterministic evaluation. Future work may extend this setup to sample-based decoding to assess the robustness and adaptability of activation-level steering under uncertainty.

2.6 Evaluation metrics

2.6.1 Composite metric (token overlap and 5-point LLM evaluation)

To comprehensively quantify the effectiveness of our steering methods, we compute a combined evaluation score that averages token overlap with a rubric-based LLM grade:

Evaluation Metric = 0.5 · Token Overlap + 0.5 · (LLM Grade / 5)    (6)

The LLM grade (scaled 1–5) is generated by Mistral Small 3.1 [20], using a scoring rubric that evaluates the correctness, completeness, and relevance of generated responses (Appendix B). A numerical 1–5 rubric is used instead of letter grades to enable direct normalization and smooth integration into our continuous evaluation metric. Token overlap provides a granular, n-gram-level alignment check against concise ground-truth answers, serving as a lightweight but precise correctness proxy. By combining these complementary perspectives, our evaluation metric captures both surface-level lexical fidelity and deeper semantic accuracy, enabling a more graded analysis of factual alignment even when predictions fall short of exact correctness.

2.6.2 SimpleQA rubric grading

To measure improvement relative to the original failure cases, we also apply the stricter SimpleQA rubric (Appendix A) using the initial instance of Mistral Small 3.1, which was used to identify the 260 INCORRECT baseline completions. Unlike the 1–5 rubric, which enables finer-grained differentiation in output quality, including partial correctness and relevance, the SimpleQA rubric enforces a categorical distinction between fully correct, incorrect, and unattempted answers. While the composite metric captures incremental improvements in factual alignment and response quality, the SimpleQA rubric serves as a high-precision filter for absolute correctness. This combination allows us to assess steering impact both in terms of subtle semantic gains and strict correctness, offering complementary perspectives on factual performance.

3 Results

3.1 Quantitative evaluation

We evaluate steering effectiveness on 260 prompts where the baseline model initially failed (260 INCORRECT completions). A response is considered accurate if it achieves a composite score ≥ 0.6, balancing token overlap and LLM-graded quality based on the 5-point rubric (Appendix B). Segmented steering achieves 25.4% accuracy (66/260), compared to 16.2% for full-layer steering and just 3.5% for the baseline, representing a 7.3× improvement for segmented steering over the baseline. Distribution analysis (Figure 3a) shows that segmented steering yields a higher median and variance, indicating more robust and widespread gains across prompts.

Figure 2: Evaluation across three accuracy metrics and the grade distribution: (a, top left) accuracy based on the combined factual and quality score (≥ 0.6); (b, top right) accuracy based on token-level overlap (≥ 0.5); (c, bottom left) accuracy based on rubric-assigned LLM scores (≥ 4); (d, bottom right) full distribution of LLM grades (1–5) assigned to model outputs. Segmented steering consistently outperforms all baselines across evaluation dimensions.

For token overlap ≥ 0.5, segmented steering again leads with 43.5% accuracy (113/260), outperforming full-layer (37.3%) and baseline (20.4%).
It also attains the highest mean (0.38), median (0.23), and variance (0.42) of
overlap scores (Figure 3b), highlighting its consistency across diverse QA topics. Notably, the SimpleQA benchmark spans a wide range of domains like science, politics, art, and history, suggesting that these gains generalize across semantically distinct question types.

We further validate factual correctness using the stricter SimpleQA LLM grading scheme (Appendix A) that categorizes responses as CORRECT, INCORRECT, or NOT_ATTEMPTED. Segmented steering achieves 13.1% CORRECT (34/260), full-layer achieves 9.2% (24/260), and the baseline produces zero CORRECT completions (Figure 4). This discrete evaluation confirms that steering not only boosts composite scores but also enables fully correct generations under a more conservative rubric.

To unpack the more nuanced rubric-based LLM scores (Appendix B), we analyze the full distribution of 1–5 quality ratings (Figure 2d). Surprisingly, both steering variants yield a higher number of completions graded 1 than the baseline. This reflects a shift in failure mode: while the baseline often generates fluent but factually irrelevant responses (typically graded 2), the steered variants push the model closer to the target answer, sometimes at the expense of fluency or syntactic coherence. This dynamic illustrates a key trade-off in low-temperature decoding: steering increases alignment pressure but, without fluency-preserving mechanisms, may also degrade coherence. This is particularly evident in completions where the model's syntax breaks down as it attempts to emulate the ground-truth sequence (see Table 2). Despite this, both full-layer and segmented steering significantly increase the number of completions rated 3, 4, or 5, bins the baseline almost never reaches. This redistribution highlights a qualitative shift: instead of defaulting to safe but incorrect outputs, the steered generations aim for factual correctness, raising the ceiling of answer quality even as they widen the distribution of outcomes.

3.2 Qualitative analysis

To better illustrate how Fusion Steering impacts factual accuracy and response quality, we provide representative examples comparing baseline outputs and steered generations in Table 1. These samples highlight typical improvements, particularly in factual correctness and coherence. All prompt completions, including baseline, full-layer, and segmented outputs across all 260 examples, are available for review in the supplementary CSV files, ensuring transparency and reproducibility.

The qualitative example in Table 1 illustrates several key dynamics enabled by Fusion Steering. The baseline model selects an incorrect answer (University of London) and provides a plausible but factually inaccurate justification. Notably, the baseline begins by hallucinating a multiple-choice format, listing four plausible universities before selecting one, despite the prompt not requesting such structure. None of the listed options include the correct answer (Harvard University), indicating that the model's retrieval or grounding mechanisms fail to access the relevant fact at all. Full-layer steering attempts a correction, shifting to a different plausible university (MIT) and enriching the explanation with thematic relevance to El Anatsui's artistic work. However, the selected institution remains incorrect, suggesting that full-layer activation shifts may overgeneralize or retrieve semantically adjacent but factually incorrect entities.
In contrast, segmented steering successfully grounds the generation in the correct answer (Harvard University) and retains coherence and justification quality. This supports the notion that segmented steering enables more precise factual targeting, likely due to its fine-grained control over
the model's internal representations. Although both steered outputs improve contextual grounding and explanation detail relative to the baseline, segmented steering demonstrates superior factual alignment.

Table 1: Comparison of baseline, full-layer, and segmented steering outputs for a factual question. All outputs are shown verbatim and truncated for space.

Prompt 61 (Ground Truth: Harvard University): Which university gave El Anatsui an honorary doctorate in 2016?
Baseline Output: A. University of Oxford B. University of Cambridge C. University of Edinburgh D. University of London Answer: D. University of London El Anatsui is a Ghanaian artist known for his large-scale installations made from discarded bottle caps. He received an honorary doctorate...
Full-layer Steering Output: El Anatsui, a renowned Ghanaian artist known for his innovative use of discarded materials, received an honorary doctorate from Massachusetts Institute of Technology (MIT) in 2016. His work, which often incorporates metal, textiles, and other found objects...
Segmented Steering Output: El Anatsui, a renowned Ghanaian artist, received an honorary doctorate from Harvard University in 2016. His innovative use of recycled materials, particularly metal, in creating large-scale installations...

This example illustrates how activation-level interventions, especially when guided by layer-specific hyperparameter tuning, can more precisely steer model behavior toward truth-consistent completions. By tailoring steering strength and integration depth across different transformer layers, the model is better able to align with factual targets. At the same time, the example reveals trade-offs between factual precision and semantic drift, highlighting the value of dynamic, context-sensitive control over internal representations.

The example shown below in Table 2 (Prompt 38) highlights the nuanced interaction between steering-driven factual alignment and surface-level fluency. The baseline model and full-layer steering both provide fluent yet incorrect responses, confidently stating "Episode 4" rather than the ground-truth "Episode 7." By contrast, segmented steering attempts to align closely with the correct factual information, repeatedly generating variations of the correct token ("7"). However, this aggressive push toward the correct lexical content leads to a severe degradation in coherence, resulting in a response scored as the lowest grade (1) by the LLM rubric.

Table 2: Comparison of outputs for a factual recall question with ground-truth answer "7." Baseline and full-layer responses are fluent but factually incorrect, while segmented steering approximates the correct answer but fails in surface coherence.

Prompt 38 (Ground Truth: 7): In Season 3 of the American version of "The Circle," in which episode did Vince enter the game?
Baseline and Full-layer Steering Outputs: The answer is Episode 4. Vince, a popular player in Season 3, entered the game in Episode 4. Here's why: The Circle is known for its strategic gameplay and twists...
Segmented Steering Output: ...............................7...7..7.7.7.7.7...

This fluency breakdown may be partly attributable to the combination of greedy decoding and the nature of token-position expectations during activation extraction. Specifically, activations were extracted not only from ground-truth answer tokens but also from extensive explanation tokens, implicitly conditioning the model to fill in a correspondingly long explanation during generation.
Coupled with greedy decoding and a relatively large token-generation budget (max_new_tokens=100), the model becomes locked into repetitively
generating the token most strongly associated with correctness ("7"), despite a lack of coherent context. This case vividly illustrates the seemingly counterintuitive pattern discussed earlier: segmented steering methods frequently yield higher token overlap with the ground truth, indicative of improved factual alignment, yet may simultaneously receive lower rubric scores due to compromised fluency and readability. Such examples explain the widened distribution of LLM rubric scores observed in Figure 2d, where segmented steering shifts outputs away from mid-range scores toward the extremes, achieving higher factual accuracy in some instances but at the cost of coherence in others. The phenomenon underscores the inherent trade-off steering methods navigate between factual correctness and surface-level quality.

Note: Statistical significance testing was not performed due to the deterministic nature of our evaluation setup. All completions were generated via greedy decoding, and steering parameters were tuned per prompt, eliminating stochastic variation across runs.

4 Computational resources

All experiments, including per-prompt hyperparameter tuning, activation extraction, and steering inference, were conducted on a single NVIDIA T4 GPU, representative of commonly available, resource-constrained environments. To reduce memory consumption and accelerate computation, we employed 8-bit quantization [24] via the HuggingFace bitsandbytes integration, enabling over 2,600 generations across the tuning and evaluation phases without exceeding hardware limitations. To further ensure computational efficiency, all hyperparameter optimization was performed using Optuna [21], leveraging efficient search algorithms to minimize optimization overhead.

This setup highlights the lightweight and scalable nature of Fusion Steering: our method operates entirely at inference time, requires no model fine-tuning or retraining, and performs effectively on modest hardware. Despite limited resources, we observe substantial gains in factual accuracy, demonstrating that internal activation-level intervention can be both efficient and impactful. The approach is fully reproducible, with open-source code and minimal system requirements, making it well-suited for adoption in resource-limited research contexts and rapid prototyping workflows.

5 Limitations and future directions

While Fusion Steering demonstrates promising improvements in factual accuracy, several limitations must be acknowledged. First, the method currently operates under greedy decoding (temperature = 0.0), which, while ensuring deterministic outputs for controlled evaluation, may exacerbate brittleness and fluency degradation in edge cases. Second, our steering signals are derived from reference activations that include both ground-truth answers and explanations; this conditioning may implicitly encourage overly long or unnatural completions when explanation tokens dominate the activation space.

However, in practical deployments, such enriched reference prompts are often unavailable or incomplete. The requirement of precise ground-truth answers and detailed explanations limits the scalability of Fusion Steering in low-resource or real-time environments where curated supervision is rare. This reliance also poses challenges for domains involving open-ended or multi-hop reasoning, where ground truths may be ambiguous or unstructured.
Additionally, the method is evaluated only on a subset of 260 incorrect prompts from the SimpleQA benchmark, limiting generalizability to broader QA or open-domain generation settings. The optimization process is prompt-specific, requiring per-instance hyperparameter tuning, which poses computational challenges in high-throughput or streaming scenarios. We also do not explore alternate decoding strategies (e.g., beam search, sampling) or assess robustness to adversarial or noisy activations. These limitations motivate future
work on plug-and-play steering mechanisms that do not require full reference completions. One promising direction is to integrate Fusion Steering with emerging neuron-level semantic representations, such as those produced by sparse autoencoders (e.g., Neuronpedia) [25] or crosscoder-based alignment methods [6], which identify high-level concepts within specific units. Rather than injecting raw activations derived from entire answer–explanation sequences, future systems could extract interpretable features directly from the input prompt, then activate neuron subsets that correspond to relevant semantic regions of activation space. If the geometry of concept space becomes reliably mapped, where discrete features are tied to identifiable neuron clusters, then prompts could be enriched by embedding structured encodings of these features alongside the input. This would shift steering from recall-based intervention to a feature-driven, concept-conditioned mechanism, enabling scalable and interpretable factual alignment even in open-domain or under-specified generation settings.

Recent work provides encouraging theoretical and empirical foundations for this direction. The “Geometry of Concepts” study [26] demonstrates that sparse autoencoders produce structured representations, where related concepts cluster spatially in activation space, forming modular and hierarchical geometric patterns such as parallelograms and trapezoids. Similarly, Anthropic’s Scaling Monosemanticity project [5] shows that sparse autoencoders can extract interpretable features from large language models, with semantically similar concepts clustering in localized neuron groupings across modalities and languages. Sparse crosscoders [6] extend this approach by analyzing features distributed across layers, revealing shared structures and potentially simplifying circuit analysis through cross-layer superposition. These advances suggest that future variants of Fusion Steering could selectively activate semantically relevant feature neighborhoods instead of full reference traces, enabling more interpretable and concept-conditioned factual intervention, with potential trade-offs between transparency and computational overhead.

6 Conclusion

We introduced Fusion Steering, a novel activation-level intervention technique designed to improve factual accuracy in large language models for open-ended question answering. Our method combines enriched reference activations, multi-layer interpolated activation patching and additive steering, and prompt- and layer-specific hyperparameter tuning to guide model behavior at inference time. Unlike prior approaches targeting tasks with clearly defined linear directions, Fusion Steering is tailored for discrete, knowledge-intensive QA, where steering must guide the model toward specific factual entities rather than general stylistic shifts. It enables semantically grounded guidance without modifying model weights or requiring fine-tuning. Empirical results on 260 failed SimpleQA prompts show that segmented steering outperforms both baseline and full-layer methods, achieving over 7-fold accuracy gains under the composite metric, and an absolute improvement from 0.0% to 13.1% under the SimpleQA rubric. These results suggest that prompt-specific, interpretable activation interventions offer a promising direction for enhancing factual alignment in language models.
Future work may explore how Fusion Steering integrates with neuron-level interpretability frameworks to enable even more targeted, concept-conditioned interventions.

References

[1] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, et al. Language models are few-shot learners, 2020.
[2] OpenAI et al. GPT-4 technical report, 2024.
[3] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, et al. LLaMA: Open and efficient foundation language models, 2023.
[4] Jack Lindsey, Wes Gurnee, Emmanuel Ameisen, Brian Chen, Adam Pearce, et al. On the biology of a large language model. Transformer Circuits Thread, 2025.
[5] Transformer Circuits Team. Scaling monosemanticity: Extracting interpretable features from Claude 3 Sonnet, 2024.
[6] Transformer Circuits Team. Sparse crosscoders: Simplifying circuit analysis with cross-layer superposition, 2024.
[7] Alexander Matt Turner, Lisa Thiergart, Gavin Leech, David Udell, Juan J. Vazquez, Ulisse Mini, and Monte MacDiarmid. Steering language models with activation engineering, 2024.
[8] Joris Postmus and Steven Abreu. Steering large language models using conceptors: Improving addition-based activation engineering, 2025.
[9] Nina Panickssery, Nick Gabrieli, Julian Schulz, Meg Tong, Evan Hubinger, and Alexander Matt Turner. Steering Llama 2 via contrastive activation addition, 2024.
[10] Nishant Subramani, Nivedita Suresh, and Matthew E. Peters. Extracting latent steering vectors from pretrained language models, 2022.
[11] Kai Konen, Sophie Jentzsch, Diaoulé Diallo, Peer Schütt, Oliver Bensch, Roxanne El Baff, Dominik Opitz, and Tobias Hecking. Style vectors for steering generative large language models, 2024.
[12] Niklas Stoehr, Kevin Du, Vésteinn Snæbjarnarson, Robert West, Ryan Cotterell, and Aaron Schein. Activation scaling for steering and interpreting language models, 2024.
[13] Stefan Heimersheim and Neel Nanda. How to use and interpret activation patching, 2024.
[14] Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, et al. Representation engineering: A top-down approach to AI transparency, 2025.
[15] Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. Inference-time intervention: Eliciting truthful answers from a language model, 2024.
[16] Samuel Soo, Chen Guang, Wesley Teng, Chandrasekaran Balaganesh, Tan Guoxian, and Yan Ming. Interpretable steering of large language models with feature guided activation additions, 2025.
[17] Thomas Marshall, Adam Scherlis, and Nora Belrose. Refusal in LLMs is an affine function, 2025.
[18] Jason Wei, Nguyen Karina, Hyung Won Chung, Yunxin Joy Jiao, Spencer Papay, Amelia Glaese, John Schulman, and William Fedus. Measuring short-form factuality in large language models, 2024.
[19] Gemma Team et al. Gemma 2: Improving open language models at a practical size, 2024.
[20] Mistral AI. Mistral Small 3.1, 2024. https://mistral.ai/news/mistral-small-3-1
[21] Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework, 2019.
[22] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. Technical report, OpenAI, 2019.
[23] Qi Sun, Marc Pickett, Aakash Kumar Nain, and Llion Jones. Transformer layers as painters, 2025.
[24] Markus Nagel, Marios Fournarakis, Rana Ali Amjad, Yelysei Bondarenko, Mart van Baalen, and Tijmen Blankevoort. A white paper on neural network quantization, 2021.
[25] Gonçalo Paulo, Alex Mallen, Caden Juang, and Nora Belrose. Automatically interpreting millions of features in large language models, 2024.
[26] Yuxiao Li, Eric J. Michaud, David D. Baek, Joshua Engels, Xiaoqing Sun, and Max Tegmark. The geometry of concepts: Sparse autoencoder feature structure. Entropy, 27(4):344, March 2025.

Appendix

A Factuality grading prompt template

We used a rubric-based prompt with the Mistral-small model to classify predicted answers as CORRECT, INCORRECT, or NOT_ATTEMPTED.
Below is the full grading prompt used for evaluation:

You are a factuality grader. Given a question, a gold target (correct answer), and a predicted answer, assign one
of the following grades:
- "CORRECT" (fully contains correct information, no contradictions)
- "INCORRECT" (contains incorrect, misleading, or contradictory information)
- "NOT_ATTEMPTED" (fails to answer or provides irrelevant information)

### Example Grades:
CORRECT:
- Question: "Who discovered gravity?"
- Gold target: "Isaac Newton"
- Predicted: "Isaac Newton"
- Predicted: "Newton discovered gravity."
- Predicted: "Most believe Isaac Newton discovered gravity."
INCORRECT:
- Question: "Who discovered gravity?"
- Gold target: "Isaac Newton"
- Predicted: "Albert Einstein"
- Predicted: "Newton, Einstein, and Galileo discovered gravity." (Contradicts correct answer)
- Predicted: "Gravity was first theorized in 1905." (Incorrect)
NOT_ATTEMPTED:
- Question: "Who discovered gravity?"
- Gold target: "Isaac Newton"
- Predicted: "I don’t know."
- Predicted: "It depends on interpretation."
- Predicted: "Without more context, I cannot say."

### Rules:
- Ignore capitalization, punctuation, and grammar differences.
- Allow hedging (e.g., "I think" or "probably") as long as the correct answer is included.
- If a predicted answer partially matches but is missing key details, grade as INCORRECT.
- If a number is the correct answer, it must be accurate to the last significant figure.
- Do not punish minor typos in names if the intent is clear.

### Now, grade this:
Question: {question}
Gold target: {target}
Predicted answer: {predicted_answer}
Reply with:
A: CORRECT
B: INCORRECT
C: NOT_ATTEMPTED
Only return "A", "B", or "C" with no extra text.

B LLM-based quality scoring rubric

To evaluate the quality of model outputs after steering, we used a second rubric-driven prompt with the Mistral-small model. This rubric assigns a 1–5 score based on correctness, completeness, and relevance, and also records hallucination flags and short justifications. The full prompt used is shown below:

You are a strict but fair evaluator. Evaluate the generated answer for correctness, relevance, and completeness.
Question: {question}
Generated Answer: {generated_text}
Ground Truth Answer: {ground_truth}
Give a score from 1 to 5 based on the following:
- 5 = Fully correct, complete, and relevant.
- 4 = Mostly correct, with minor inaccuracies or missing details.
- 3 = Partially correct, has significant omissions or minor errors.
- 2 = Mostly incorrect or irrelevant, with small fragments that match.
- 1 = Totally wrong, irrelevant, or hallucinated.
Also specify:
- Hallucination (Y/N): Did the answer include fabricated information not found in the ground truth?
- Notes: Brief justification.

C Score distribution plots

Figure 3: Score distributions for (a) the combined metric and (b) token overlap. Boxplots illustrate variation in steering effectiveness across prompts.

D SimpleQA rubric grading accuracy

Figure 4: Percentage of completions labeled as CORRECT under the SimpleQA rubric grading. Bar heights represent the proportion of examples (out of 260) graded as fully correct by an LLM-based evaluator. The baseline yielded no correct responses, while full-layer and segmented steering show substantial gains.
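As an illustration of how the rubric templates above could be applied programmatically, the following hypothetical sketch fills the Appendix A factuality template and maps the returned letter to a grade. It assumes an OpenAI-compatible endpoint serving a Mistral-small model; the base URL, API key, model identifier, and helper function are placeholder stand-ins, not the evaluation harness actually used in this work.

```python
# Hypothetical grading harness; the full rubric text from Appendix A would be
# substituted where the ellipsis appears.
from openai import OpenAI

GRADER_TEMPLATE = """You are a factuality grader. Given a question, a gold target
(correct answer), and a predicted answer, assign one of the following grades:
[... full rubric and examples as in Appendix A ...]
### Now, grade this:
Question: {question}
Gold target: {target}
Predicted answer: {predicted_answer}
Only return "A", "B", or "C" with no extra text."""

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder endpoint

def grade(question: str, target: str, predicted_answer: str) -> str:
    """Return CORRECT, INCORRECT, or NOT_ATTEMPTED for a single prediction."""
    prompt = GRADER_TEMPLATE.format(
        question=question, target=target, predicted_answer=predicted_answer
    )
    response = client.chat.completions.create(
        model="mistral-small",  # placeholder model identifier
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,        # deterministic grading
        max_tokens=1,
    )
    letter = (response.choices[0].message.content or "").strip()[:1].upper()
    return {"A": "CORRECT", "B": "INCORRECT", "C": "NOT_ATTEMPTED"}.get(letter, "NOT_ATTEMPTED")

print(grade("Who discovered gravity?", "Isaac Newton", "Newton discovered gravity."))
```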
Tell me Habibi, is it Real or Fake?

Kartik Kuckreja,λ Parul Gupta,ξ Injy Hamed,λ Thamar Solorio,λ Muhammad Haris Khan,λ Abhinav Dhallξ
λMBZUAI ξMonash University
{kartik.kuckreja,injy.hamed,thamar.solorio,muhammad.haris}@mbzuai.ac.ae
{parul,abhinav.dhall}@monash.edu

Abstract

Deepfake generation methods are evolving fast, making fake media harder to detect and raising serious societal concerns. Most deepfake detection and dataset creation research focuses on monolingual content, often overlooking the challenges of multilingual and code-switched speech, where multiple languages are mixed within the same discourse. Code-switching, especially between Arabic and English, is common in the Arab world and is widely used in digital communication. This linguistic mixing poses extra challenges for deepfake detection, as it can confuse models trained mostly on monolingual data. To address this, we introduce ArEnAV, the first large-scale Arabic-English audio-visual deepfake dataset featuring intra-utterance code-switching, dialectal variation, and monolingual Arabic content. It contains 387k videos and over 765 hours of real and fake videos. Our dataset is generated using a novel pipeline integrating four Text-To-Speech and two lip-sync models, enabling comprehensive analysis of multilingual multimodal deepfake detection. We benchmark our dataset against existing monolingual and multilingual datasets, state-of-the-art deepfake detection models, and a human evaluation, highlighting its potential to advance deepfake research. The dataset can be accessed here.

1 Introduction

Deepfake technologies, involving the artificial generation and manipulation of audio-visual content, have rapidly advanced, significantly complicating the task of distinguishing real media from synthetic creations. The potential misuse of deepfakes for misinformation, defamation, or impersonation presents profound societal risks, driving substantial research into their detection. Although initial deepfake research primarily focused on manipulating individual modalities, audio-only (Todisco et al. [2019]) or video-only (Jiang et al. [2020], Kwon et al. [2021], Li et al. [2020]), recent developments increasingly consider joint manipulation of audio and visual streams for more realistic synthesis. A significant gap remains in existing deepfake datasets (Table 1), which largely overlook multilingual scenarios, particularly code-switching (CSW), despite its global prevalence among bilingual speakers. In the Arab world, CSW is a prominent feature of daily communication, serving not only as a linguistic tool but also as a marker of cultural identity and social context (Hamed et al. [2025]). Arabic speakers frequently alternate between Arabic and English within the same sentence, for example an Arabic sentence that embeds the English phrase “deepfake detection” (The topic of deepfake detection is very important). Moreover, with Arabic being a diglossic language, speakers also switch between Modern Standard Arabic (MSA) and regional dialects (Mubarak et al. [2021]). Recent studies provide compelling evidence of how common CSW is in Arabic-English contexts. The ZAEBUC-Spoken corpus (Hamed et al.
[2024]) reveals that approximately 19% of all non-annotation-only utterances exhibit CSW, having an average of 44% English words. The corpus also highlights the presence of CSW between Arabic variants. Similarly, the ArzEn corpus (Hamed et al. [2020]) demonstrates a high frequency of CSW, where 63% of utterances involve CSW with approximately 19% of words being English. These findings highlight the extent to which Arabic-English code-switching is not merely an incidental phenomenon, but a widespread communicative strategy that deeply influences spoken discourse. However, despite its ubiquity, deepfake detection systems remain largely ill-equipped to handle such language alternation, focusing predominantly on monolingual data. Addressing this oversight, our work seeks to bridge this critical gap by introducing the first Arabic-English CSW audio-visual deepfake dataset, thus advancing the field toward more relevant detection systems.

Figure 1: a) We show the data generation pipeline for the ArEnAV dataset. In a) input videos are analysed for audio, face, and text extraction. Using few-shot prompts with GPT-4.1-mini, CSW-based spoken text manipulation is performed. This is followed by speech and face enactment generation. b-d) The plots show the data splits and CSW distribution. Here is an example of CSW input and manipulated text with translations in parentheses: an Arabic carrier phrase with the English word “hope” (“We create hope.”) is changed to the same phrase with “fun” (“We create fun.”).

Our core contributions are as follows:
• Introduction of ArEnAV, the first large-scale Arabic-English audio-visual deepfake dataset featuring intra-utterance code-switching and dialectal variation, including both bilingual and diglossic switching across Modern Standard Arabic, Egyptian, Levantine, and Gulf dialects, addressing a critical gap in multilingual deepfake research.
• A novel data generation pipeline tailored to English and Arabic (MSA and dialect-rich content), integrating four TTS (Text-to-Speech) and two lip-sync models.
• A comprehensive analysis contrasting our dataset against existing monolingual and multilingual datasets, existing state-of-the-art (SOTA) deepfake detection models, and a detailed user study, underscoring its unique difficulty in detection by models and humans alike.

2 Related Work

Early deepfake research was predominantly monolingual and modality-specific. Initial significant contributions included video manipulation techniques such as FaceSwap and Face2Face as introduced by Thies et al. [2020], which led to seminal datasets like FaceForensics++ (Rössler et al. [2019]) and the DeepFake Detection Challenge (DFDC) (Dolhansky et al. [2020a]). These datasets primarily provided facial manipulations within single-language contexts, focusing largely on visual realism.
Parallel to video deepfake advancements, audio deepfakes evolved rapidly, driven by progress in text-to-speech (TTS) synthesis, voice conversion, and generative audio models such as Tacotron (Wang et al. [2017]). Datasets like ASVspoof (Wang et al. [2020]) and WaveFake (Frank and Schönherr [2021]) contributed significantly by providing benchmarks to evaluate audio manipulation detection methods, albeit still largely restricted to English. In recent years, research has expanded towards joint audio-visual deepfake manipulations. Datasets such as FakeAVCeleb (Khalid et al. [2022]) showcased realistic lip-synced speech synthesis in tandem with facial manipulations. AV-Deepfake1M (Cai et al. [2024]) further advanced this domain by automating transcript alterations to create nuanced, localized audio-visual deepfakes, highlighting the necessity of detecting temporally and contextually subtle manipulations.

Recently, there has been increased focus on multilingual audio deepfakes. These efforts have revealed key limitations in generalizing detection models across languages and proposed new resources to address these challenges. Marek et al. [2024] conducted a comprehensive study on cross-lingual deepfake detection, showing that models trained on one language often fail to generalize effectively to others, underscoring the role of language-specific phonetic and prosodic features in model performance. Complementing these dataset-focused efforts, Phukan et al. [2024] explored the utility of multilingual pre-trained models (PTMs) for detection tasks, finding that such models capture language-agnostic representations that improve cross-lingual robustness.

Multilingual audio-visual datasets emerged even more recently to address the global dimension of deepfake threats. The PolyGlotFake dataset (Hou et al. [2024]) contains audio-visual deepfakes across seven languages. Although the dataset covers a wide range of languages, the amount of real data it contains is comparatively small. Nonetheless, these multilingual datasets remain limited to either monolingual or monomodal scenarios within each single instance, ignoring the prevalent reality of intra-utterance language switching. Our work directly responds to this critical gap. Unlike previous studies, we not only incorporate multilingual content but also explicitly generate intra-utterance code-switched audio-visual deepfakes.

Table 1: Details for publicly available deepfake datasets in chronologically ascending order. Cla: Binary classification, SL: Spatial localization, TL: Temporal localization, FS: Face swapping, RE: Face reenactment, TTS: Text-to-speech, VC: Voice conversion.
Dataset | Year | Tasks | Manip. Modality | Method | #Total | Multilingual | Code Switching
Google DFD (Nick and Andrew [2019]) | 2019 | Cla | V | FS | 3,431 | ✗ | ✗
DFDC (Dolhansky et al. [2020b]) | 2020 | Cla | AV | FS | 128,154 | ✗ | ✗
DeeperForensics (Jiang et al. [2020]) | 2020 | Cla | V | FS | 60,000 | ✗ | ✗
Celeb-DF (Li et al. [2020]) | 2020 | Cla | V | FS | 6,229 | ✗ | ✗
KoDF (Kwon et al. [2021]) | 2021 | Cla | V | FS/RE | 237,942 | ✗ | ✗
FakeAVCeleb (Khalid et al. [2022]) | 2021 | Cla | AV | RE | 25,500+ | ✗ | ✗
ForgeryNet (He et al. [2021]) | 2021 | SL/TL/Cla | V | Random FS/RE | 221,247 | ✗ | ✗
ASVSpoof2021DF (Liu et al. [2023]) | 2021 | Cla | A | TTS/VC | 593,253 | ✗ | ✗
LAV-DF (Cai et al. [2022]) | 2022 | TL/Cla | AV | Content-driven RE/TTS | 136,304 | ✗ | ✗
DF-Platter (Narayan et al. [2023]) | 2023 | Cla | V | FS | 265,756 | ✗ | ✗
AV-1M (Cai et al. [2023a]) | 2023 | TL/Cla | AV | Content-driven RE/TTS | 1,146,760 | ✗ | ✗
PolyGlotFake (Hou et al. [2024]) | 2024 | Cla | AV | RE/TTS/VC | 15,238 | ✓ | ✗
Illusion (Thakral et al. [2025]) | 2025 | Cla | AV | FS/RE/TTS | 1,376,371 | ✓ | ✗
ArEnAV (Ours) | 2025 | Cla/TL | AV | Content-driven RE/TTS/VC | 387,072 | ✓ | ✓
Figure 2: Dataset distribution for
i) Train, ii) Val and iii) Test split. The outer layer shows the split between various combinations of Text-to-Speech and Lip-Sync models used for audio-visual manipulation. The middle layer shows the distribution of language in the original transcript, which is either Ar (Arabic) or CSW (Code-Switched English-Arabic). The inner layer shows the distribution of different operations applied to the original transcripts, "meaning only", "dialect + meaning", and "meaning + translation" (for fine-grained detail about what they entail, refer to Table 12).

We leverage SOTA TTS and lip-sync methodologies adapted for multilingual use, resulting in realistic, diverse, and challenging benchmarks.

3 ArEnAV Dataset

ArEnAV is a large-scale audio-visual deepfake dataset specifically focused on Arabic–English CSW. Comprising approximately 765 hours of video data sourced from 8,809 unique YouTube videos, ArEnAV establishes itself as the first and most extensive benchmark for multilingual deepfake detection (see Table 1 for dataset comparison). The dataset is constructed to preserve the original identity and environmental context of the source videos while systematically manipulating the semantic content to introduce Arabic-English CSW. Following the taxonomy proposed by Cai et al. [2024], ArEnAV includes three manipulation strategies:
Fake Audio & Fake Video: Both audio and visual content are synthetically generated, simulating complete audiovisual deepfakes.
Fake Audio & Real Video: The audio track is manipulated to introduce anti-semantic and CSW content while maintaining the original visual content.
Real Audio & Fake Video: The original audio is retained, while facial movements and lip synchronization are altered to create visually deceptive content.

3.1 Data Collection

We use the YouTube video links from VisPer’s Arabic Train subset (Narayan et al. [2024]). Further, we first run a scene change detection model to split the video into clips, and then we use Yolo-v5 to obtain the faces in each frame as well as track them across frames. Afterwards, we generate the ground truth transcripts using a Whisper-V2 model introduced by Radford et al. [2022a], further fine-tuned on Arabic-English code-switching data, with the default output language set to Arabic. Following the transcripts, we apply Forced Alignment between the audio and text, using a multilingual wav2vec2 model (Baevski et al. [2020]) supporting both Arabic and English. This provides us with word-level timestamps for code-switched Arabic and English data.

3.2 Data Generation Pipeline

The data generation pipeline roughly consists of three stages: transcript manipulation, audio generation, and video generation. First, we obtain an accurate transcription of each source video and apply controlled modifications to the text. Next, we synthesize new audio for the altered transcript while preserving the speaker’s voice characteristics. Finally, we render a lip-synced video that matches the new audio, producing a realistically manipulated video clip. We detail each stage as follows.

3.2.1 Transcript Manipulation

We leverage GPT-4.1-mini (OpenAI [2025]) to perform content-driven modifications of our multilingual transcripts. We define eight distinct transcript change modes that span both code-switched and Arabic-only contexts, allowing fine-grained control over how the transcript is altered. These modes include three main operations: first, meaning only, which only involves changing the meaning of the word and keeping
the language as it is, second, meaning + dialect , which involves changing the meaning of the word and changing its language to another Arabic variant (either MSA or any dialect), and lastly, meaning + translation , which asks the model to change the meaning of the word, and then translate it to English. Table 2 summarizes the eight modification modes with their intended effect. By categorizing edits in this way, we ensure a controlled and diverse set of manipulations ranging from subtle word substitutions to introducing or removing CSW instances. Due to the effectiveness Table 2: Transcript manipulation rules in ArEnA V for Arabic (AR) and English (EN) words. #Original Transcript Original Word Inserted Word Operation 1 CSW EN EN Change meaning only (keep English) 2 CSW AR AR Change meaning only (keep Arabic variant) 3 CSW AR AR Change meaning + change Arabic variant 4 CSW AR AR/EN multi-op; When 2-3 ops →edit 1 EN and 1-2 AR words 5 Arabic AR AR Change meaning only (keep Arabic variant) 6 Arabic AR AR Change meaning + change Arabic variant 7 Arabic AR EN Change meaning + change language to English 8 Arabic AR AR/EN multi-op; Apply all operations on Arabic words 4 Table 3: a) Entailment distribution over i) All change modes, ii) Dialect + meaning, iii) Meaning only, and iv) Meaning + translation. b) Perplexity Evaluation distribution among dataset splits, showing perplexity calculated using i) Jais-3B, an Arabic-English LLM, and ii) Qwen-2.5-7B. Reference in both shows the perplexity calculated on an Arabic-English CSW text dataset(SDAIANCAI [2025]). (a) Entailment distribution. Subset 0–0.2 0.2–0.4 0.4–0.6 0.6–0.8 0.8–1.0 Total 32011 4160 3952 3476 4785 Dialect + Meaning 12133 2154 2126 1891 2473 Meaning Only 19107 1765 1463 967 882 Meaning + Translation 771 241 363 618 1430(b) Perplexity Evaluation distribution. Model Train Val Test Reference Real Fake Real Fake Real Fake Jais-3B 1207.26 1211.39 997.39 1001.45 1146.39 1144.14 324.1 Qwen-7B 116.58 117.44 124.66 125.07 117.45 117.58 77.44 of few-shot prompting, we prompt GPT-4.1-mini with 15 examples, explaining various kinds of transitions and possible changes. Examples of original and augmented transcripts achieved by these manipulation rules are shown in Appendix B. We provide the prompt in Appendix C. We report text manipulations distributions as follows: replacement (94.6%), insertion (5.1%) and deletion (0.3%). Please note that this distribution was randomly chosen based on input text statements by GPT. Transcript Quality: To quantify the impact of our LLM-based manipulations, we employ two complementary metrics: Bidirectional Entailment Quality Mean : the average of Real →Fake and Fake →Real NLI entailment scores (1.0 = full semantic entailment; 0.0 = direct contradiction) and Perplexity : how well a language model predicts a transcript (lower = more fluent/natural). Table 3a shows the distribution of entailment quality means over different types of perturbations (meaning only, dialect + meaning, meaning + translation). In every subset, a large fraction of samples lies below the 0.5 threshold, and many even in the contradiction zone, demonstrating that our pipeline reliably injects semantic change regardless of language or dialect. Table 3 reports average perplexities on
real versus fake transcripts under two open-source LLMs; Jais-3B (Sengupta et al. [2023]) and Qwen-2.5-7B (Qwen et al. [2025]), across the data splits. The minimal difference in perplexity shows that our fake transcripts remain fluent and natural, despite major changes in meaning. This balance between altered content and surface-level fluency is essential for generating effective audio-visual deepfakes 3.2.2 Audio Generation The next step involves generating a synthetic audio track that precisely follows the edited transcript while maintaining the voice characteristics of the original on-screen speaker. Initially, we segment the audio into clean speech and background noise using a Denoiser (Defossez et al. [2020]). Conventional zero-shot voice cloning systems, such as YourTTS (Casanova et al. [2023]) exhibit strong performance in English but struggle with Arabic phonetics and cross-lingual synthesis. To address this, we employ four targeted cloning strategies: a) XTTS-v2 (Casanova et al. [2024]) : A multilingual, zero-shot TTS model natively supporting Arabic, English, and code-switching. b) XTTS-v2 (Casanova et al. [2024]) + OpenVoice-v2 (Qin et al. [2024]) : When a reference voice sample is available, we achieve higher fidelity by generating the utterance with XTTS-v2 and performing speaker conversion via OpenV oice-v2. c) Fairseq Arabic TTS (Ott et al. [2019]) + OpenVoice-v2 (Qin et al. [2024]) : For fully Arabic sentences, we generate audio with the Fairseq Arabic TTS from the MIMS initiative, followed by speaker conversion using OpenV oice-v2. d) GPT-TTS (OpenAI [2023]) + OpenVoice- v2 (Qin et al. [2024]) : GPT-TTS supports 29 languages with nine preset voices; we randomly select one voice, generate the sentence, and then convert the audio to the target speaker’s voice with OpenV oice-v2. The audio-generation flow depends on the edit type. For insert orreplace operations, we regenerate the complete sentence and validate the generated audio using Whisper-Turbo (Radford et al. [2022b]), retaining only samples that exactly match the intended transcript. This step ensures intelligibility and accurate timestamp alignment for splicing the segment into the original audio. If validation fails, we discard the sample. For a delete operation, we remove speech segments entirely, preserving only 5 background noise. Finally, after each edit, we normalise the loudness of the manipulated segment relative to the original audio and recombine it with the extracted environment noise. Audio Quality: Table 4: Audio quality comparison across different datasets. Dataset Language SECS ↑SNR(dB) ↑FAD↓ FakeA VCeleb English 0.543 2.16 6.598 LA V-DF English 0.984 7.83 0.306 A V-Deepfake1M English 0.991 9.39 0.088 ArEnA V Arabic, English 0.990 7.65 0.140Table 4 presents the comparison of audio quality for ArEnA V based on speaker similarity, signal quality and distribution realism with existing audio-visual deepfake datasets. We report speaker encoder co- sine similarity (SECS), Signal-to-Noise (SNR) and Fréchet audio distance (FAD) for recent Audio-Visual datasets. SECS measures the speaker’s voice similar- ity between a generated clip and the real reference (range [−1,1], higher is better), while FAD evaluates the distributional distance between the generated audio and real audio (lower is better). The metrics combined indicate that ArEnA V has high-quality audio samples. Table 5: (a) Visual quality comparison across different datasets and (b) Distribution
of human-reported reasons for labellng a video as fake in the user study. (a) Visual quality metrics. Dataset PSNR(dB) ↑SSIM ↑FID↓ FF++ 24.40 0.812 1.06 DFDC — — 5.69 FakeA VCeleb 29.82 0.919 2.29 LA V-DF 33.06 0.898 1.92 A V-Deepfake1M 39.49 0.977 0.49 ArEnA V 37.70 0.971 0.68(b) Distribution of top reasons. Reason Percentage (%) Unintelligible speech (weird audio) 36.5 Video/audio mismatch (lip sync is off) 25.1 Audio sounds artificial 24.7 Video looks artificial 8.7 Code-switching is unnatural 3.0 Incoherent sentence 1.9 3.3 Visual Manipulation For video generation, after extensive experimentation, we chose two diffusion-based lip-sync ap- proaches: Diff2Lip (Mukhopadhyay et al. [2023]) and LatentSync (Li et al. [2025]). Both of these models perform high quality zero-shot lip-sync and are open-sourced. Using the new generated audio and original video’s frames we generate the fake frames. For replace andinsert word operations, we generate the fake frames for the new word, and for delete word operations, we generate a face with closed lips i.e. without audio. Visual Quality: To evaluate visual quality, we use three standard metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Fréchet Inception Distance (FID). Table 5 presents PSNR, SSIM, and FID results for the ArEnA V dataset. PSNR and SSIM measure pixel-level and structural similarity, respectively, between fake and original frames (higher is better) (ArEnA V lies nearby A V-1M). FID assesses realism by comparing the distributions of fake and real frames in a learned image feature space (lower is better) (ArEnA V slightly more than A V-1M). These scores highlight that ArEnA V attains higher / comparable visual quality compared to other deepfake datasets. Real Table 6: Data distribution in ArEnA V and comparison with other multilingual datasets. Subset#Unique Videos#Real Videos#Fake Videos#Non-English Clips#CSW Videos#Arabic VideosArabic Variants PolyGlotFake Hou et al. [2024] 766 766 14,472 11,941 0 1,403 NA Illusion Thakral et al. [2025] – 141,440 1,234,931 4,385 0 – NA ArEnA V-Train 6,117 67,600 202,800 270,400 69,544 200,856 Egyptian, ArEnA V-Validation 876 9,560 28,680 38,240 10,416 27,824 MSA, ArEnA V-Test 1,816 19,608 58,824 78,432 19,832 58,600 Levantine, Gulf ArEnA V (total) 8,809 96,768 290,304 387,072 99,792 287,280 - 6 Perturbations: To mimic real-life videos scenarios better, we add localized perturbations to both the real and the fake videos. We apply 15 different visual filters (eg: salt-pepper noise and camera shaking) and 10 different audio manipulations (eg: time-stretching, random loudness and pitch). For each video, we randomly sample one to three instances for visual perturbations and one to two instances for audio perturbations. Perturbation details are mentioned in Appendix A. 3.4 User Study Table 7: User study results show that the deepfake detection and localization in multilingual CSW videos is non-trivial for human observers. Method Acc. AP@0.1 AP@0.5 AR@1 ArEnA V 60.00 8.35 0.79 1.38To investigate whether humans can identify deepfakes in ArEnA V , we conducted a user study with 19 participants, out of which, 15 are native Arabic speakers, and 4 have basic knowledge of Arabic. We randomly sampled 20 videos, with either 0 or 1 manipulation. Instructions for User Study: Each participant was asked
to 1) watch the video, and 2) answer 3 questions, i) Is the video real of fake, ii) If it is fake, localize where they think the fake region is, iii) Whether the given video contains Arabic-English code-switching or not, iv) Give a reason for labelling the video (if they have) as a deepfake. The results in Table 7 reaffirm our hypothesis that identifying audiovisual deepfakes in multilingual (specially CSW) and multimodal settings is a non-trivial task, as even humans achieve only 60% accuracy, while it is even harder to localize the deepfakes, with AP@0.5 at 0.79. Further, Table 5b shows the primary reasons why people classified the videos as fake. We report that 85% of the users fail to identify deepfakes when the manipulation happens in the English word, in the CSW video, which can be attributed to a higher quality of voice cloning in English as well as the natural change in tone when a person code-switches, which makes it harder to detect. Further, localization is very tough due to the very high quality of lip-sync with diffusion models, as shown in Table 5b, where the video being the reason for fake classification is only 8.7%. 3.5 Dataset Statistics Table 6 compares ArEnA V with other multilingual deepfake detection datasets. Existing multilingual datasets like PolyGlotFake (Hou et al. [2024]) and Illusion (Thakral et al. [2025]) have significantly smaller multilingual content, containing limited Arabic data (1,400 Arabic videos in PolyGlotFake and minimal in Illusion across 26 languages). ArEnA V includes 387k videos sourced from 8,809 unique YouTube videos, totaling over 765 hours. Videos average approximately 7.7 seconds each, with train, validation, and test splits created via multilabel stratified sampling in a 7:1:2 ratio, ensuring no overlap. Computational Cost: We spent around 50 GPU hours to generate the real transcript using Whisper- Large-V2 (Radford et al. [2022b]), 200 dollars worth of OpenAI credits, to generate fake transcripts and Text-to-Speech model, TTS-1 (OpenAI [2023]), and 650 GPU hours for data generation. Overall, we needed 800 GPU hours to generate AvEnA V with NVIDIA RTX- 6000 GPUs. 4 Benchmark and Metrics We organize the data into train ,validation , and testsplit. We use multilabel stratified sampling to divide the data in equal proportions based on the method type, the change mode, and the ground truth language. We also show evaluation on two subsets, subset V , which excludes videos with audio-only manipulation, and subset A , which excludes videos with visual-only manipulations. We evaluate models on two tasks, temporal localization anddetection of audio-visual deepfakes. We use average precision (AP) and average recall (AR) metrics as prior works (He et al. [2021], Cai et al. [2022, 2023a]) for temporal localization. For the task of deepfake detection, we use standard evaluation protocol (Rossler et al. [2019], Dolhansky et al. [2020b], Cai et al. [2023a]) to report video-level accuracy (Acc.) and area under the curve (AUC). Implementation Details: We benchmark temporal detection using SOTA models: Meso4, MesoIn- ception4, Xception, BA-TFD, and BA-TFD+. BA-TFD and BA-TFD+ (Cai et al. [2023b]) are evaluated in their original configurations, both in
a zero-shot setting (pre-trained on A V-1M; Cai et al. [2023a]) and after fine-tuning on our dataset. For image-based classifiers (Meso4, MesoInception4; 7 Afchar et al. [2018] and Xception; Chollet [2017]), we aggregate frame-level predictions to segments following Cai et al. [2023a]. For benchmarking deepfake detection , image-based models (Meso4, MesoInception4, and Xception) are trained on video frames with corresponding labels, and predic- tions are aggregated to video-level using max voting, as suggested by Cai et al. [2023a]. Additionally, we assess zero-shot performance of LLM-based models, VideoLLaMA2 and VideoLLaMA2.1-A V (Zhang et al. [2023]), prompting them to produce a confidence score indicating the likelihood of a video being a deepfake. We include an audio-only baseline, XLSR-Mamba (Xiao and Das [2025]), the best open-source audio deepfake detection model on Speech DF Arena (Face [2025]), evaluating it both in zero-shot mode (pre-trained on ASVSPoof-2019; Wang et al. [2020]) and after training with video-level labels from our dataset. BA-TFD and BA-TFD+ (Cai et al. [2022]) are also evaluated using segmentation proposals treated as frame-level predictions and aggregated by max-voting, both pre-trained on A V-1M and fine-tuned on our dataset. 5 Results and Analysis Audio-Visual Temporal Deepfake Localization. The results for temporal localization are shown in Table 8. SOTA methods show significantly lower performance on ArEnA V as compared to other localization datasets (refer to Table 10). BA-TFD and BA-TFD+, pretrained on A V-1M, show a drop in performance of more than 35% for AP@0.5 threshold, compared to evaluation on A V-1M. The image-based models, Meso4 and MesoInception4, also provide low performance, which can be attributed to the use of diffusion-based lip-sync models, which have been overlooked in previous data generation pipelines (Cai et al. [2023a,b]). Through this benchmark, we claim that the highly realistic multimodal multilingual code-switched fake content in ArEnA V will open an avenue for further research on temporal multilingual deepfake localization methods. Audio-Visual Deepfake Detection. The detection results are in Table 9. Image based models, that have access to video-level labels only, perform considerably worse, except XLSR-Mamba, which is designed to be trained on video-level labels for audio-deepfake detection. The best performing model is BA-TFD, pretrained on A V-1M and then further fine-tuned on our dataset, with AUC Score of 82% on the full subset. We also evaluate models on subsets V and A, as described in the implementation details. The audio-only model, XLSR-Mamba, performs better in the Audio-only subset A , while the image-only models perform better on Subset V for frame-level labels, compared to the fullset . XLSR-Mamba performs relatively worst when the audio is code-switched, compared to only Arabic. Cross-Dataset Comparison for Deepfake Localization. Table 10 shows the performance of BA- TFD and BA-TFD+ on LA VDF, A V-1M and ArEnA V datasets. Both models perform significantly worse on ArEnA V , highlighting the poor generalizability in multilingual and code-switching settings. Table 8: Temporal localization results on the test set of ArEnA V . SetMethod Mod. AP@0.5 AP@0.75 AP@0.9 AP@0.95 AR@50 AR@30 AR@20 AR@10 AR@5Full datasetMeso4 V 0.02 0.01 0.00 0.00 0.09 0.09 0.09 0.09 0.09 MesoInception V 0.56
0.18 0.04 0.01 4.11 4.11 4.11 4.11 4.08 Xception V 22.50 10.26 2.29 0.58 19.13 19.13 19.13 19.13 19.13 BA-TFD (ZS) A V 0.17 0.01 0.00 0.00 9.72 5.20 3.07 1.46 0.73 BA-TFD+ (ZS) A V 0.11 0.00 0.00 0.00 5.77 2.95 2.09 0.87 0.37 BA-TFD A V 2.42 0.55 0.01 0.00 22.30 10.31 3.41 2.54 1.67 BA-TFD+ A V 3.74 1.10 0.06 0.01 30.75 9.42 4.55 3.05 1.83Set VMeso4 V 0.02 0.01 0.00 0.00 0.10 0.10 0.10 0.10 0.10 MesoInception V 0.83 0.27 0.05 0.01 5.56 5.56 5.56 5.56 5.53 Xception V 32.76 14.48 3.30 0.81 27.78 27.78 27.78 27.78 27.78 BA-TFD (ZS) A V 0.12 0.00 0.00 0.00 8.44 4.34 2.44 1.13 0.49 BA-TFD+ (ZS) A V 0.07 0.00 0.00 0.00 4.69 2.39 1.65 0.69 0.29 BA-TFD A V 3.65 0.25 0.01 0.00 25.31 9.03 3.64 2.34 1.64 BA-TFD+ A V 5.65 1.89 0.08 0.02 31.09 13.21 5.91 3.05 2.05Set AMeso4 V 0.02 0.01 0.00 0.00 0.08 0.08 0.08 0.08 0.08 MesoInception V 0.38 0.09 0.01 0.00 3.25 3.25 3.25 3.25 3.22 Xception V 14.72 3.92 0.29 0.09 11.78 11.78 11.78 11.78 11.78 BA-TFD (ZS) A V 0.23 0.01 0.00 0.00 12.14 6.46 3.85 1.83 0.95 BA-TFD+ (ZS) A V 0.14 0.01 0.00 0.00 7.32 3.79 2.69 1.13 0.48 BA-TFD A V 3.21 0.60 0.02 0.00 24.45 9.26 4.15 2.61 1.93 BA-TFD+ A V 4.35 1.10 0.10 0.00 28.35 11.23 4.85 3.11 2.00 8 Table 9: Deepfake detection results on the test set of ArEnA V . Label Access Pretraining Data Methods Mod. Fullset Subset V Subset A For Training AUC Acc. AUC Acc. AUC Acc. Zero-Shot ASVSpoof-19 XLSR-Mamba A 39.19 52.77 52.73 40.68 52.50 42.59 - Internet Scale Video-LLaMA (7B) V 51.48 26.29 51.47 34.21 51.43 34.18 - Internet Scale Video-LLaMA (7B) A V 48.79 59.29 48.71 55.37 48.86 55.26 - A V-1M BA-TFD A V 61.73 26.00 66.42 34.07 59.36 33.97 - A V-1M BA-TFD+ A V 60.96 25.84 64.49 34.28 59.44 33.80 Video Level ArEnA V XLSR-Mamba A 73.00 61.00 57.47 66.16 86.33 78.00 - ArEnA V Meso4 V 49.30 75.00 49.15 66.67 49.30 66.67 - ArEnA V MesoInception4 V 50.34 46.23 50.28 47.48 50.35 47.67 - ArEnA V Xception V 50.05 75.00 49.90 66.67 50.32 66.67 Frame level ArEnA V Meso4 V 49.55 26.60 49.60 34.40 49.53 34.36 - ArEnA V MesoInception4 V 51.14 41.25 50.77 51.84 45.28 44.09 - ArEnA V Xception V 74.21 52.09 85.36 67.22 68.59 51.70 - A V-1M & ArEnA V BA-TFD A V 75.91 44.31 77.64 58.29 72.21 45.21 - A V-1M & ArEnA V BA-TFD+ A V 79.97 27.44 84.20 36.47 72.89 34.56 BA-TFD and BA-TFD+ fail to generalize effectively, as the pretrained audio and video encoders struggle with out-of-distribution data encountered in both modalities of ArEnA V . 6 Conclusion This paper presents ArEnA V , a large multilingual and the first code-switching audio-visual dataset for temporal deepfake localization and detection. The comprehensive benchmark of the dataset utilizing SOTA deepfake detection and localization methods indicates a significant drop in performance compared to previous monolingual datasets Cai et al.
[2023a,b], indicating that the proposed dataset is an important asset for building the next-generation of multilingual deepfake localization methods. As future work, we will evaluate LLM-based detectors after fine-tuning them on the dataset. Limitations . Similar to other deepfake datasets, ArEnA V exhibits a misbalance in terms of the number of fake and real videos. Due to the limited performance of current SOTA Active-V oice-Detection (Whisper v2) models on Arabic (compared to English), the data generation pipeline can result in a few noisy transcripts. Due to limited instruction following in code-switching scenarios, LLMs might not produce the desired results, as visible in Table 3a "Meaning + Translation Scenario". Compared to other subsets, Chat-GPT often fails to follow both instructions, making real and fake transcripts too similar and not always changing their meaning. Also, the dataset is currently limited to two languages only, where we hope to motivate further research in this direction. Broader Impact. ArEnA V’s diverse and realistic English-Arabic fake videos will support the devel- opment of more robust audio-visual deepfake detection and localization models, better equipped to handle code-switched speech and real-world multilingual scenarios. Ethics Statement. We acknowledge that ArEnA V may raise ethical concerns such as the potential misuse of facial videos of celebrities, and even the data generation pipeline could have a potential negative impact. Misuse could include the creation of deepfake videos or other forms of exploitation. To avoid such issues, we have taken several measures such as distributing the data with a proper end-user license agreement, where we will impose certain restrictions on the usage of the data. Furthermore, the user study follows the university IRB guidelines. Table 10: Temporal localization results on ArEnAv, A V-1M and LA V-DF. The low performance on ArEnA V demonstrates the data complexity in CSW settings. Method Dataset AP@0.5 AP@0.75 AP@0.95 AR@50 AR@20 AR@10 BA-TFDLA V-DF 79.15 38.57 0.24 64.18 60.89 58.51 A V-1M 37.37 6.34 0.02 45.55 35.95 30.66 ArEnA V 2.42 0.55 0.01 22.30 3.41 2.54 BA-TFD+LA V-DF 96.30 84.96 4.44 80.48 79.40 78.75 A V-1M 44.42 13.64 0.03 48.86 40.37 34.67 ArEnA V 3.74 1.10 0.04 30.75 4.55 3.05 9 References Darius Afchar, Vincent Nozick, Junichi Yamagishi, and Isao Echizen. MesoNet: a Compact Facial Video Forgery Detection Network. In 2018 IEEE International Workshop on Information Forensics and Security (WIFS) , pages 1–7, December 2018. ISSN: 2157-4774. Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations, 2020. URL https://arxiv.org/abs/ 2006.11477 . Zhixi Cai, Kalin Stefanov, Abhinav Dhall, and Munawar Hayat. Do You Really Mean That? Content Driven Audio-Visual Deepfake Dataset and Multimodal Method for Temporal Forgery Localization. In2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA) , pages 1–10, Sydney, Australia, November 2022. Zhixi Cai, Shreya Ghosh, Aman Pankaj Adatia, Munawar Hayat, Abhinav Dhall, and Kalin Stefanov. A V-Deepfake1M: A Large-Scale LLM-Driven Audio-Visual Deepfake Dataset, November 2023a. arXiv:2311.15308 [cs]. Zhixi Cai, Shreya Ghosh, Abhinav Dhall, Tom Gedeon, Kalin Stefanov, and Munawar Hayat. Glitch in the matrix: A large scale benchmark for content driven audio–visual forgery detection
and localization. Computer Vision and Image Understanding , 236:103818, November 2023b. ISSN 1077-3142. Zhixi Cai, Abhinav Dhall, Shreya Ghosh, Munawar Hayat, Dimitrios Kollias, Kalin Stefanov, and Usman Tariq. 1m-deepfakes detection challenge, 2024. URL https://arxiv.org/abs/2409. 06991 . Edresson Casanova, Julian Weber, Christopher Shulby, Arnaldo Candido Junior, Eren Gölge, and Moacir Antonelli Ponti. Yourtts: Towards zero-shot multi-speaker tts and zero-shot voice conver- sion for everyone, 2023. URL https://arxiv.org/abs/2112.02418 . Edresson Casanova, Kelly Davis, Eren Gölge, Görkem Göknar, Iulian Gulea, Logan Hart, Aya Aljafari, Joshua Meyer, Reuben Morais, Samuel Olayemi, and Julian Weber. Xtts: a massively multilingual zero-shot text-to-speech model, 2024. URL https://arxiv.org/abs/2406.04904 . Francois Chollet. Xception: Deep Learning With Depthwise Separable Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pages 1251–1258, 2017. Alexandre Defossez, Gabriel Synnaeve, and Yossi Adi. Real time speech enhancement in the waveform domain. In Interspeech , 2020. Brian Dolhansky, Joanna Bitton, Ben Pflaum, Jikuo Lu, Russ Howes, Menglin Wang, and Cris- tian Canton Ferrer. The deepfake detection challenge (dfdc) dataset, 2020a. URL https: //arxiv.org/abs/2006.07397 . Brian Dolhansky, Joanna Bitton, Ben Pflaum, Jikuo Lu, Russ Howes, Menglin Wang, and Cris- tian Canton Ferrer. The DeepFake Detection Challenge (DFDC) Dataset, October 2020b. arXiv: 2006.07397 [cs]. Hugging Face. Speech df arena - speech-arena-2025. https://huggingface.co/spaces/ Speech-Arena-2025/Speech-DF-Arena , 2025. Accessed: 2025-05-13. Joel Frank and Lea Schönherr. Wavefake: A data set to facilitate audio deepfake detection, 2021. URL https://arxiv.org/abs/2111.02813 . Injy Hamed, Ngoc Thang Vu, and Slim Abdennadher. ArzEn: A speech corpus for code-switched Egyptian Arabic-English. In Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Twelfth Language Resources and Evaluation Conference , pages 4237–4246, Marseille, France, May 2020. European Language Resources Association. ISBN 979-10-95546-34-4. URL https: //aclanthology.org/2020.lrec-1.523/ . 10 Injy Hamed, Fadhl Eryani, David Palfreyman, and Nizar Habash. ZAEBUC-Spoken: A multilingual multidialectal Arabic-English speech corpus. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) , pages 17770–17782, 2024. Injy Hamed, Caroline Sabty, Slim Abdennadher, Ngoc Thang Vu, Thamar Solorio, and Nizar Habash. A survey of code-switched Arabic NLP: Progress, challenges, and future directions. In Proceedings of the 31st International Conference on Computational Linguistics , pages 4561–4585, 2025. Yinan He, Bei Gan, Siyu Chen, Yichun Zhou, Guojun Yin, Luchuan Song, Lu Sheng, Jing Shao, and Ziwei Liu. ForgeryNet: A Versatile Benchmark for Comprehensive Forgery Analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 4360–4369, 2021. Yang Hou, Haitao Fu, Chuankai Chen, Zida Li, Haoyu Zhang, and Jianjun Zhao. Polyglotfake: A novel multilingual and multimodal deepfake dataset, 2024. URL https://arxiv.org/abs/ 2405.08838 . Liming Jiang, Ren Li, Wayne Wu, Chen Qian, and Chen Change Loy. 
DeeperForensics-1.0: A Large-Scale Dataset for Real-World Face Forgery Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 2889–2898, 2020. Hasam Khalid, Shahroz Tariq, Minha Kim, and Simon S. Woo. Fakeavceleb: A novel audio-video multimodal deepfake dataset, 2022. URL https://arxiv.org/abs/2108.05080 . Patrick Kwon, Jaeseong
A Real Perturbations

Table 11: List of video and audio perturbation types with descriptions.

Video Perturbations
- Gaussian Blur: Applies Gaussian smoothing to simulate out-of-focus capture.
- Salt and Pepper Noise: Random white and black pixel noise, mimicking sensor errors.
- Low Bitrate Compression: Blocky, artifact-heavy images due to compression.
- Gaussian Noise: Electronic sensor noise typical in low-light conditions.
- Poisson Noise (Shot Noise): Noise from photon-limited imaging environments.
- Speckle Noise: Multiplicative noise creating granular interference effects.
- Color Quantization: Banding effects from limited color palettes.
- Random Brightness: Simulates variations in exposure and lighting.
- Motion Blur: Imitates camera or object motion during capture.
- Rolling Shutter: Distortion effects due to CMOS sensor movements.
- Camera Shake: Minor frame shifts from handheld camera vibrations.
- Lens Distortion: Optical distortions like barrel or pincushion effects.
- Vignetting: Darkening of image edges typical of certain lenses.
- Exposure Variation: Adjusts brightness and contrast, simulating exposure issues.
- Chromatic Aberration: Color channel shifts causing fringing effects.

Audio Perturbations
- Compression Artifacts: Quality loss from low bitrate compression.
- Pitch/Loudness Distortion: Gain or frequency alterations simulating recording issues.
- White Noise: Constant background electronic interference noise.
- Time Stretch: Audio speed adjustments without pitch change.
- Reverberation: Echo and reverb modeling room acoustics.
- Ambient Noise: Background environmental sounds added.
- Clipping: Distortion from exceeding audio amplitude limits.
- Frequency Filter: Filtering effects simulating transmission equipment variations.
- Doppler Effect: Pitch modulation due to relative motion.
- Interference: Static-like bursts mimicking external disturbances.
- Room Impulse Response: Complex echo patterns modeling specific environments.
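To make the video perturbations above concrete, the sketch below applies two of them (Gaussian blur and additive Gaussian noise) to a single frame with OpenCV and NumPy. This is a minimal illustration under assumed parameter values, not the paper's augmentation pipeline; the file names are placeholders.

```python
import cv2
import numpy as np

def gaussian_blur(frame: np.ndarray, kernel_size: int = 7) -> np.ndarray:
    # Simulate out-of-focus capture with Gaussian smoothing.
    return cv2.GaussianBlur(frame, (kernel_size, kernel_size), 0)

def gaussian_noise(frame: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    # Simulate electronic sensor noise typical of low-light conditions.
    noise = np.random.normal(0.0, sigma, frame.shape)
    return np.clip(frame.astype(np.float64) + noise, 0, 255).astype(np.uint8)

frame = cv2.imread("frame.png")  # any BGR frame extracted from a video
perturbed = gaussian_noise(gaussian_blur(frame))
cv2.imwrite("frame_perturbed.png", perturbed)
```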
B Augmentation Examples

In Table 12, we provide examples of augmentations achieved through the manipulation rules previously outlined in Section 3.2.1.

Table 12: Examples of augmentations achieved through the different transcript manipulation rules, showing the original (orig) and augmented (aug) transcriptions.

1. CSW transcription; original word EN, inserted word EN; operation: change meaning only (keep English).
   Edit: Telephone → Radio
   [orig] The lady got busy on the telephone.
   [aug] The lady got busy with the radio.

2. CSW transcription; original word AR, inserted word AR; operation: change meaning only (keep Arabic variant).
   Edit: "popular" (MSA) → "limited" (MSA)
   [orig] Mirroring has become a popular tool.
   [aug] Mirroring has become a limited tool.

3. CSW transcription; original word AR, inserted word AR; operation: change meaning and change Arabic variant.
   Edit: "I thank" (MSA) → "I hate" (Dialectal Arabic)
   [orig] I thank all the present sponsors.
   [aug] I hate all the present sponsors.

4. Arabic transcription; original word AR, inserted word AR; operation: change meaning only (keep Arabic variant).
   Edit: "happy" (MSA) → "sad" (MSA)
   [orig] And this person will be content and happy.
   [aug] And this person will be content and sad.

5. Arabic transcription; original word AR, inserted word AR; operation: change meaning and change Arabic variant.
   Edit: "essential" (MSA) → "trivial" (Dialectal Arabic)
   [orig] She was involved in a core and essential task.
   [aug] She was involved in a core and non-essential task.

6. Arabic transcription; inserted word EN; operation: change meaning and change language to English.
   Edit: "people" (Arabic) → friends (English)
   [orig] I go meet people.
   [aug] I go meet friends.
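The manipulation rules behind Table 12 reduce to word-level insert, delete, and replace operations on a transcript. The sketch below shows one way the replace operation could be applied; it is an illustrative helper, not the paper's implementation, and the example transcript and words are hypothetical.

```python
def replace_word(transcript: str, original: str, inserted: str) -> str:
    """Replace a single word to flip the transcript's meaning (the 'Edit' rows of Table 12)."""
    tokens = transcript.split()
    # Replace only the first occurrence so exactly one token is edited.
    for i, token in enumerate(tokens):
        if token == original:
            tokens[i] = inserted
            break
    return " ".join(tokens)

# Hypothetical code-switched transcript: keep the English word, change its meaning.
orig = "The lady got busy on the Telephone"
aug = replace_word(orig, "Telephone", "Radio")
print(aug)  # -> "The lady got busy on the Radio"
```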
C Prompt for Text Perturbation

Prompt for Fake Transcript Generation.

###SYSTEM MESSAGE###
You are a controlled text-perturbation bot. Here is the transcript of an audio. Please use the provided operations to modify the transcript to change its sentiment. The operation can be one of `delete`, `insert` and `replace`. Please priority modify adjectives and adverbs.

-------------------CHANGE-MODES------------------
• meaning_only
  - Change the *meaning* of one word.
  - Keep the same language/script and dialect.
• dialect_only
  - Swap a word for a dialectal equivalent of *identical meaning*.
  - Example: <syArT> → <‘rbyT> (Gulf dialect, same meaning).
• dialect_plus_meaning
  - Change *both* dialect *and* meaning in a single word.
  - Example: <jmyl> (msa, 'nice') → <wH$> (Egyptian, 'awful').
• meaning_plus_translation
  - In Arabic-only sentences, pick a word that is **commonly code-switched to English** in everyday speech (e.g., <mwbayl>, <syArT>, <Antrnt>).
  - Translate that word to English and change the meaning simultaneously. Example: <syArT> ('car') → bike.

-------------------CSW MULTI-OP LOGIC-------------------
If language == 'csw':
  num = 1 → edit exactly one token matching target_token_script.
  num = 2 → edit 1 English + 1 Arabic token.
  num = 3 → edit 1 English + 2 Arabic tokens.

-------------------OTHER RULES-------------------
• Only modify tokens that are *commonly code-switched* in real speech (brand names, technology, everyday nouns, etc.).
• Each operation targets ONE word (delete / insert / replace).
• Number of operations for INSERT, DELETE and REPLACE should be equal across the data.
• If sentiment can be changed with INSERT or DELETE, prefer it over REPLACE.
• When dialect shifts, include original_dialect and new_dialect.
• Never alter tense or add restricted content.
• Return **only** a JSON object that matches the schema.

Figure 3: System prompt for text-perturbation bot
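The prompt asks the bot to return only a JSON object matching a schema that is not reproduced here. Assuming the object carries a list of word-level operations, a consumer might apply them to the transcript as sketched below; all field names are hypothetical.

```python
import json

def apply_operations(transcript: str, operations: list[dict]) -> str:
    """Apply word-level edits returned by the perturbation bot.

    The operation format is an assumption: the paper only states that the bot
    returns a JSON object matching a schema, which is not shown in the figure.
    """
    tokens = transcript.split()
    for op in operations:
        kind = op["operation"]  # 'delete' | 'insert' | 'replace'
        if kind == "delete":
            tokens = [t for t in tokens if t != op["word"]]
        elif kind == "insert":
            tokens.insert(op.get("position", len(tokens)), op["word"])
        elif kind == "replace":
            tokens = [op["new_word"] if t == op["word"] else t for t in tokens]
    return " ".join(tokens)

response = '{"operations": [{"operation": "replace", "word": "happy", "new_word": "sad"}]}'
print(apply_operations("this person will be content and happy",
                       json.loads(response)["operations"]))
```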
GitGoodBench: A Novel Benchmark For Evaluating Agentic Performance On Git

Tobias Lindenbauer1,2*, Egor Bogomolov1, Yaroslav Zharov1
1JetBrains Research
2School of Computation, Information and Technology, Technical University of Munich
Correspondence: tobias.lindenbauer@jetbrains.com
*Work done during an internship at JetBrains

Abstract

Benchmarks for Software Engineering (SE) AI agents, most notably SWE-bench, have catalyzed progress in the programming capabilities of AI agents. However, they overlook critical developer workflows such as Version Control System (VCS) operations. To address this issue, we present GitGoodBench (https://github.com/JetBrains-Research/git-good-bench), a novel benchmark for evaluating AI agent performance on Version Control System (VCS) tasks. GitGoodBench covers three core Git scenarios extracted from permissive open-source Python, Java, and Kotlin repositories. Our benchmark provides three datasets: a comprehensive evaluation suite (900 samples), a rapid prototyping version (120 samples), and a training corpus (17,469 samples). We establish baseline performance on the prototyping version of our benchmark using GPT-4o equipped with custom tools, achieving a 21.11% solve rate overall. We expect GitGoodBench to serve as a crucial stepping stone toward truly comprehensive SE agents that go beyond mere programming.

1 Introduction

While the rapid scaling of Large Language Models (LLMs) initially led to promising results across various tasks, the improvements gained from scaling models further are slowing down. Compared to GPT-3 (Brown et al., 2020), GPT-3.5 achieves an approximately 60% improvement (OpenAI et al., 2024a) on MMLU (Hendrycks et al., 2021). The improvement from GPT-3.5 to GPT-4, however, is just approximately 23% (OpenAI et al., 2024a). Scaling test-time compute rather than just models has emerged as an alternative for further improving performance, leading to the rise of AI agents (Yao et al., 2023; Shinn et al., 2023; Wang et al., 2024). AI agents equip LLMs with external tools (Schick et al., 2023) and employ sophisticated planning and reasoning strategies such as ReAct (Yao et al., 2023) or Reflexion (Shinn et al., 2023) to dynamically adjust in uncertain environments.

Software Engineering (SE) emerged as a pivotal application domain due to the availability of high-quality data in open-source repositories and because the creation and maintenance of software underpins innovation and economic impact across virtually every sector. SWE-bench (Jimenez et al., 2024) is the industry-standard benchmark for evaluating an agent's programming proficiency by testing its ability to fix bugs in real-world software. This spurred the rapid development of AI agents for programming by major players in the tech tooling ecosystem (Cursor, 2024; Basu et al., 2024; Zakonov, 2025; Microsoft, 2025; Anthropic, 2025).

Version Control Systems (VCSs), such as Git, are ubiquitous in SE (Cortés Ríos et al., 2022) and play a pivotal role in building software in distributed teams. It is thus natural to use Git as a medium of collaboration between AI agents and human engineers. While LLM providers are advertising the Git capabilities of their systems (Anthropic, 2025), there currently exists no benchmark for evaluating an AI agent's capacity of interacting with Git in an end-to-end manner.
Furthermore, typical Git tasks such as Interactive Rebase (IR) are time-consuming and distinct from raw code generation. IR requires reasoning over the Git history and an in-depth understanding of dependencies between the commits constituting the history.

To stimulate innovation in the direction of comprehensive, end-to-end SE AI agents that go beyond mere programming, we introduce a novel benchmark for the popular VCS Git. This comprises a training corpus for collecting agentic trajectories and two evaluation sets (lite and full). The benchmark supports Merge Conflict Resolution (MCR), Interactive Rebase (IR), and the Iterative Committing of Changes (ICC) (Figure 1). We scrape all data from permissive, open-source Python, Java, or Kotlin GitHub repositories. Furthermore, we provide a baseline implementation using GPT-4o (OpenAI et al., 2024b) with custom tools, achieving a 21.11% solve rate.

Figure 1: The three Git scenarios supported by GitGoodBench. Each scenario benchmarks a typical Git use-case and unique aspect of version control. (a) Merge Conflict Resolution: the agent must reproduce the ground-truth merge commit given a set of conflicts. (b) Interactive Rebase: the agent generates an alternative history based on existing commits. (c) Iterative Committing of Changes: the agent generates an alternative history based on a disorganized set of changes; we only use the original commit history for evaluation.

2 Related Work

Several benchmarks, such as SWE-bench (Jimenez et al., 2024) or the Konwinski prize (Konwinski et al., 2024), evaluate agentic systems on complex, multi-turn SE tasks sourced from real-world GitHub issues. While the environment allows Git usage, the evaluation focuses solely on whether the agent resolves the bug rather than how it leverages the VCS. In contrast, our benchmark explicitly measures an agent's proficiency with Git tasks. This allows future research to thoroughly examine and refine VCS-focused strategies in SE agents and tailor agents to VCS tasks specifically.

While previous works on automating or evaluating MCR (Svyatkovskiy et al., 2022; Shen et al., 2023; Boll et al., 2024; Pan et al., 2021) and commit message generation or completion (Jiang et al., 2017; Hal et al., 2019; Eliseeva et al., 2023) exist, they exclusively cater to specific VCS subtasks. In contrast, our benchmark is the first to encapsulate multiple subtasks, such as commit message generation, reasoning across commits, and rebase plan generation, into a single benchmarking scenario. This uniquely positions GitGoodBench for evaluating and training AI agents with expertise in VCS tasks in end-to-end settings.

3 GitGoodBench Datasets

We provide GitGoodBench (900 samples) and GitGoodBench Lite (120 samples) for evaluation in comprehensive and rapid-prototyping settings, respectively. The research community recently started investigating SE agents powered by fine-tuned Small Language Models (SLMs) (Pan et al., 2024; Jain et al., 2025; Yang et al., 2025). We believe that trained, on-device sized agents are an exciting research direction. While we do not train such a model in this work, with GitGoodBench Train (17,469 samples) we release a dataset split dedicated to collecting trajectories for training Git agents.
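The three splits are published as dataset cards on HuggingFace (see Appendix A). The sketch below shows how they might be loaded with the datasets library; the repository identifiers and split names are hypothetical placeholders, not the official ones from the dataset cards.

```python
from datasets import load_dataset

# Hypothetical dataset identifiers and splits; consult the official
# GitGoodBench dataset cards on HuggingFace for the actual names.
lite = load_dataset("JetBrains-Research/GitGoodBench-Lite", split="test")
full = load_dataset("JetBrains-Research/GitGoodBench", split="test")
train = load_dataset("JetBrains-Research/GitGoodBench-Train", split="train")

# Each sample carries repository metadata plus a scenario payload (see Table 7).
print(lite[0]["sample_type"], lite[0]["difficulty"])
```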
Table 1: Filters for selecting repositories and scenarios to include in our benchmark.

(a) Repository metadata filters we use for selecting the initial repositories we consider in the benchmark creation. We consider the following licenses permissive: MIT, Apache 2.0, BSD 3-Clause "New" or "Revised", BSD 2-Clause "Simplified".

Filter | Reason
Repository size ≤ 400 MB | Avoid Git LFS heavy repositories
Repository stars ≥ 1000 | Heuristic for repository quality
Repository is not archived | Heuristic for repository quality
Repository is not forked | Avoid duplication
Last commit within a month of May 31st, 2024 | Avoid stale repositories
Repository has permissive license | Ensure legal compliance
Repository ≥ 5 branches | Heuristic for merge conflict scenarios
Repository ≥ 5 contributors | Heuristic for merge conflict scenarios

(b) Scenario level filters for selecting scenarios to include in our benchmark.

Filter | Reason
No merge commit in File-Commit Chain (FCC) | Cleanly separate scenario types
No merge conflict in unsupported file type | Only support Python, Java, or Kotlin
All merge scenarios contain conflict | Merges without a conflict are trivial
Merge scenarios have exactly two parents | Avoid dilution by complex and rare merge types
Number of merge conflicts ≤ 8 | Ensure the agent can theoretically solve the scenario
Length of FCC ≤ 6 | Ensure the agent can theoretically solve the scenario
FCC file is modified, not added | Otherwise we get a single hunk when resetting

3.1 Supported Scenarios

Our benchmark covers the following three types of Git scenarios:

Merge Conflict Resolution: The agent must resolve all merge conflicts by reproducing the ground truth resolutions (Figure 1a).

Interactive Rebase: In this scenario (Figure 1b) the agent must reason across commits and their contents to determine the optimal ordering of commits, thereby improving the Git history. This includes commit consolidation or modification and commit message refinement.

Iterative Committing of Changes: This scenario type (Figure 1c) is the inverse of IR. Instead of optimizing existing commits, the agent must generate a reasonable Git history from a large, disorganized set of changes.

With these scenario types we cover non-trivial Git functionalities central to common Git workflows (Cortés Ríos et al., 2022). Moreover, we explicitly cover functionality currently only implemented interactively in Git (e.g., git rebase -i or git add -p). Agents are highly applicable for such iterative tasks that depend on environment observations. However, interacting with such functionality is challenging for agentic systems because these functions do not provide immediate feedback and instead wait for user input. This introduces friction into the typical plan-act-observe loop of AI agents, due to delayed feedback not easily captured by usual pipelines.

3.2 Dataset Creation

We collect repository metadata from repositories with permissive licenses using SEART (Dabic et al., 2021) and the metadata filters defined in Table 1a. The scenarios for IR and ICC are represented by the same samples in our dataset (i.e., with one sample, we can evaluate both IR and ICC). We call these samples File-Commit Chain (FCC) samples; they refer to chains of commits in Git histories in which we observe consecutive modifications of a single file. We use this as a heuristic for identifying Git histories that may be improved through reordering or consolidating commits.
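A File-Commit Chain can be detected directly from a repository's history by walking commits in order and tracking runs of consecutive commits that touch the same file. The sketch below illustrates that idea with GitPython; it is a simplified reading of the mining step described above, not the authors' pipeline, and the minimum run length is an assumed parameter.

```python
from collections import defaultdict
from git import Repo  # GitPython

def mine_fccs(repo_path: str, branch: str = "main", min_length: int = 2):
    """Find runs of consecutive commits that all modify the same file."""
    repo = Repo(repo_path)
    commits = list(repo.iter_commits(branch))[::-1]  # oldest -> newest
    runs = defaultdict(list)   # file path -> commit hashes of the current run
    chains = []
    for commit in commits:
        touched = set(commit.stats.files)
        for path in list(runs):
            if path not in touched:
                # Run broken by this commit: keep it if it is long enough.
                run = runs.pop(path)
                if len(run) >= min_length:
                    chains.append((path, run))
        for path in touched:
            runs[path].append(commit.hexsha)
    # Flush runs that extend to the newest commit.
    chains.extend((p, r) for p, r in runs.items() if len(r) >= min_length)
    return chains
```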
These samples target the use-case of (1) cleaning up the local Git history before pushing new commits to the remote (e.g., git rebase -i HEAD~5), and (2) constructing a clean Git history given a set of changes, for the IR and ICC scenarios, respectively. To tailor these samples toward evaluating an aspect of Git distinct from MCR, we remove merge commits from FCCs. This allows us to evaluate the system's understanding of the rebase-todo and of relationships between commits. We then mine the Git history of these repositories for merge and FCC samples and apply our scenario-level filters (Table 1b) to obtain 6,917 merge samples and 11,572 FCC samples. To ensure a diverse benchmark, especially concerning represented repositories, we partition our data into strata based on the following features before sampling to construct our benchmark.

File-Commit Chain Samples: For these samples, we use the project size (in lines of code) and the repository name for stratification.

Merge Conflict Resolution Samples: In addition to the above, we stratify on the difficulty of these samples. We define MCR difficulty based on the number of conflicts and their distribution across files. To determine conflicts, we run git show --remerge-diff <merge-commit> and identify conflicts through Git merge conflict markers. We consider scenarios with a single conflict "easy" because no reasoning across diffs is necessary, those with multiple conflicts in a single file "medium" because reasoning across diffs in the context of a single file is required, and all others, for which the agent must reason across multiple diffs and files, as "hard".

To construct the held-out test sets, we sample 120 scenarios for GitGoodBench Lite and 900 for GitGoodBench. We stratify the sampling for scenario type and Programming Language (PL). The remaining samples yield GitGoodBench Train. All three datasets are mutually disjoint. For further details, see Appendix A.

3.3 Metrics

We present the results of our baseline in terms of success and solve rate (both expressed as percentages). The success rate refers to scenarios for which our system did not cause an error (e.g., because a patch cannot be applied in MCR). Below, we define the solve rate for each scenario:

File-Commit Chain Samples: For FCC scenarios we prompt an LLM to judge the agent-generated and ground truth Git histories using the LLM-as-a-Judge (Zheng et al., 2023) approach. We opt for this approach instead of Exact-Match (EM), because there is no clear, deterministic way to define what constitutes a superior Git history. Following Zheng et al. (2023), we judge each pair of Git histories twice while switching the positions of the histories in the same prompt template to account for position bias. We prompt the judge to base its decision on (1) the quality of the commit messages considering the contents of the commit, (2) the cohesion of changes within the commits, (3) a logical progression of changes across commits, and (4) the size of commits. If the judge chooses the agent-generated over the ground truth Git history in both cases, we count a sample as solved. For details on the prompt, see Appendix B.4.
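Following the description above, the judgment is run twice with the two histories swapped, and a sample counts as solved only if the agent-generated history wins both passes. A minimal sketch of that control flow is given below; judge_once stands in for a call to the judge LLM with the Appendix B.4 prompt and is not the authors' implementation.

```python
def judge_once(history_1: str, history_2: str) -> str:
    """Ask the judge LLM which history is better; returns 'HISTORY-1' or 'HISTORY-2'.

    Placeholder: in practice this sends the Appendix B.4 prompt to an LLM and
    parses the 'evaluation_result' field of the JSON response.
    """
    raise NotImplementedError

def is_solved(agent_history: str, ground_truth_history: str) -> bool:
    # Pass 1: agent-generated history shown first.
    first = judge_once(agent_history, ground_truth_history) == "HISTORY-1"
    # Pass 2: positions swapped to mitigate position bias.
    second = judge_once(ground_truth_history, agent_history) == "HISTORY-2"
    # Solved only if the agent-generated history is preferred in both orderings.
    return first and second
```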
Scenario | Success Rate | Solve Rate
IR | 93.33 | 26.67
ICC | 93.33 | 23.33
MCR | 76.67 | 13.33
Total | 88 | 21.11
Table 2: Success and solve rates (%) by scenario type, rounded to two decimal places. We observe the high complexity of the proposed benchmark, even given the strong baseline model and custom environment tools.

Difficulty Level | Success Rate | Solve Rate
Easy | 80.64 | 22.58
Medium | 84.62 | 7.69
Hard | 62.5 | 0
Table 3: Success and solve rates (%) by difficulty for MCR samples, rounded to two decimal places. GitGoodBench Lite contains 31 (≈52%) easy, 13 (≈22%) medium, and 16 (≈27%) hard samples.

Merge Conflict Resolution Samples: Because an exact ground truth solution is available, we use EM between the ground truth solution and the agent's solution for evaluating MCR.

4 Environment

As a baseline, we evaluate GPT-4o (OpenAI et al., 2024b) on GitGoodBench Lite and the tasks defined in Section 4.1 using the metrics in Section 3.3. While we do not use an agentic reasoning framework (Yao et al., 2023; Shinn et al., 2023; Wang et al., 2024), we do equip the LLM with one possible set of custom tools (Section 4.2).

4.1 Provided Context

Interactive Rebase: In the initial context, we provide all changes in all commits participating in the IR, few-shot function-calling examples, and an explanation of valid commands for the rebase-todo file. We initiate the IR covering all commits in the FCC before launching the agent.

Iterative Committing of Changes: We provide all Git-generated hunks that the agent must process, in addition to few-shot function-calling examples in the initial context. After each commit, we automatically show the agent the updated list of remaining hunks. We limit the agent's selection of hunks to hunks originating from the file for which we mined the FCC and commit all other changes in a single commit after the agent terminates.

Merge Conflict Resolution: The initial context includes the temporal ordering of the commits being merged, the names of all files with conflicts, and all merge conflicts the agent must resolve, as well as few-shot function-calling examples.

4.2 Provided Tools

Initially we experimented with minimalistic tooling, simply giving the LLM terminal access in a sandbox environment. However, preliminary results indicated that the system is unable to make any meaningful progress in this setup; we acknowledge that a Git Model-Context Protocol (MCP) may address this issue, but as the focus of our work is a benchmark, we do not further investigate this. In particular, the system struggled with interactive Git functionality (Section 3.1). Because of this we opt for the strong scaffolding detailed below.

Interactive Rebase: We implement tools for viewing the contents of commits and interacting with the rebase-todo list, a file that specifies how Git should carry out the IR.

Iterative Committing of Changes: With our tooling for this scenario type, the agent selects any number of Git-generated hunks to group into a single commit.

Merge Conflict Resolution: To foster coherent, conflict-spanning resolutions, we provide tools for viewing individual merge conflicts, complete files, or the overall difference between the commits being merged. Our tooling limits the agent to resolving conflicts sequentially; it may only specify a patch for resolving the current conflict.
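Section 4.2 describes the conflict-resolution tools only informally. The sketch below shows one way such a sequential tool interface could be shaped; the method names mirror the tool names used in the Appendix B prompts, but the class and its behavior are assumptions rather than the released implementation.

```python
from dataclasses import dataclass, field

@dataclass
class MergeConflictSession:
    """Walks a fixed list of merge conflicts and records one resolution per conflict."""
    conflicts: list                       # conflict texts, in resolution order
    resolutions: list = field(default_factory=list)

    def view_current_merge_conflict_with(self, context_window_size: int, reason: str) -> str:
        # Return the current conflict; a real implementation would also include
        # `context_window_size` lines of surrounding file content.
        return self.conflicts[len(self.resolutions)]

    def resolve_current_merge_conflict_with(self, content: str, reason: str) -> None:
        # The agent may only resolve the current conflict, enforcing sequential order.
        self.resolutions.append(content)

    @property
    def done(self) -> bool:
        return len(self.resolutions) == len(self.conflicts)
```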
5 Baseline Results

In Table 2, we see that our baseline implementation succeeds in 88% and solves 21.11% of scenarios in GitGoodBench Lite overall (we release the raw evaluation data with our repository). Even with significant scaffolding support, the LLM is unable to solve the majority of tasks in our benchmark. This highlights the need to explicitly consider Git use-cases when engineering and training SE agents.

For both IR and ICC scenarios our system achieves higher success and solve rates than for MCR scenarios (Table 2). We partially attribute this to the stricter scaffolding for these two scenarios. In MCR scenarios the agent must generate code that can be applied at the location of the conflict to solve the conflict. Especially in scenarios which require the agent to make globally consistent conflict resolution choices (i.e., medium and hard samples in Table 3), the system's performance rapidly deteriorates. In FCC-based scenarios, the agent must simply select a set of hunks to commit for ICC scenarios or modify the rebase-todo file through a tool for IR scenarios. This indicates that the failure rate of agentic systems interacting with Git increases as the level of technical abstraction from Git decreases. We do, however, note that some amount of this performance degradation may also be due to the stricter EM evaluation metric used for MCR scenarios. Regarding the difficulty heuristic for MCR, we note that it accurately captures a sample's complexity with respect to the solve rate: easy samples have a ≈3 times higher solve rate than hard samples. Furthermore, the scenarios based on FCC samples (IR and ICC) result in similar success and solve rates. This indicates that our LLM-as-a-Judge evaluation methodology is consistent in assessing similar Git histories and is thus a suitable choice. Our difficulty heuristic for IR and ICC scenarios did not correlate with the observed difficulty; for details see Appendix A.2.3.

6 Conclusions

GitGoodBench is a novel benchmark for training and evaluating AI agents on the Git scenarios MCR, IR, and ICC. Our baseline implementation demonstrates capabilities in resolving merge conflicts and improving Git histories when equipping GPT-4o (OpenAI et al., 2024b) with tools for interacting with Git, achieving an overall solve rate of 21.11% on GitGoodBench Lite. The poor overall performance and the observed performance degradation for MCR across difficulty levels highlight the need to explicitly consider Git when designing SE agents. Just as we construct agents for SE with repository-level reasoning and code generation in mind, we should consider the agents' understanding of Git artifacts and capacity to use Git functionality. We hope our benchmark spurs innovation in this direction.

7 Limitations

Our baseline implementation has several constraints that present opportunities for improvement. The MCR tooling cannot modify Git-generated hunk boundaries, limiting flexibility when these hunks are too coarse. For ICC, expanding beyond a single-file focus would
allow more accu- rate handling of multi-file changes. Furthermore, enabling commit content modification during IR would allow handling more complex IRscenar- ios, including ones during which a merge conflict occurs. Additionally, for FCC samples our eval- uation methodology may introduce bias, as it is LLM -based. We suggest that future work evaluat- ing agents on GitGoodBench use an ensemble of LLM s for judging trajectories to mitigate bias and subjectivity of the evaluation. Finally, we did not investigate how a Git implementation of the novel Model-Context Protocol ( MCP ) (Anthropic, 2024) affects an agent’s ability to solve Git tasks. Regarding the dataset itself, while we made ef- forts to ensure diversity, certain limitations remain. While our difficulty heuristic for MCR showed promising results, a FCC difficulty heuristic based onFCC purity (Appendix A.2.3) didn’t correlate with empiric performance. Due to this, the distribu- tion of FCC samples may be skewed with respect to their difficulty in our benchmark. While our three scenario types cover core Git functionality, our benchmark does not yet include important Git diagnostic workflows such as git bisect . Incor- porating bisect scenarios would enable evaluation of an AI agents’ ability to systematically locate commits introducing bugs, a capability that could significantly enhance automated debugging and re- gression analysis in SEAI agents. Furthermore, as our benchmark is static, we may need to update our benchmark with more diverse and complex sce- narios to counteract benchmark saturation and data leakage. Acknowledgements We thank Yury Khudyakov, Alexandra Eliseeva, Maria Tigina, and Abhiram Bellur for the valuable discussions and advice during this project. References Anthropic. 2024. Introducing the model context proto- col. Accessed on May 20, 2025. Anthropic. 2025. Claude 3.7 sonnet and claude code. Accessed on February 27, 2025. Shrestha Basu, Mallick, and Kathy Korevec. 2024. The next chapter of the gemini era for developers. Ac- cessed on February 27, 2025. Alexander Boll, Yael Van Dok, Manuel Ohrndorf, Alexander Schultheiß, and Timo Kehrer. 2024. To- wards Semi-Automated Merge Conflict Resolution:Is It Easier Than We Expected? In Proceedings of the 28th International Conference on Evaluation and Assessment in Software Engineering , EASE ’24, pages 282–292. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-V oss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Ma- teusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. In Ad- vances in Neural Information Processing Systems , volume 33, pages 1877–1901. Curran Associates, Inc. Julio César Cortés Ríos, Suzanne M. Embury, and Sukru Eraslan. 2022. A unifying framework for the sys- tematic analysis of Git workflows. Information and Software Technology , 145(C). Cursor. 2024. New composer ui, agent, commit mes- sages. Accessed on February 27, 2025. Ozren Dabic, Emad Aghajani, and Gabriele Bavota. 2021. Sampling projects in github for MSR stud- ies. In 18th IEEE/ACM International Conference on Mining Software Repositories, MSR 2021
, pages 560–564. Aleksandra Eliseeva, Yaroslav Sokolov, Egor Bogo- molov, Yaroslav Golubev, Danny Dig, and Timofey Bryksin. 2023. From Commit Message Generation to History-Aware Commit Message Completion. In 2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE) . S. R. P. van Hal, M. Post, and K. Wendel. 2019. Generat- ing Commit Messages from Git Diffs. arXiv preprint . ArXiv:1911.11690 [cs.SE]. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language under- standing. In International Conference on Learning Representations . Naman Jain, Jaskirat Singh, Manish Shetty, Liang Zheng, Koushik Sen, and Ion Stoica. 2025. R2E- Gym: Procedural Environments and Hybrid Veri- fiers for Scaling Open-Weights SWE Agents. arXiv preprint . Siyuan Jiang, Ameer Armaly, and Collin McMillan. 2017. Automatically generating commit messages from diffs using neural machine translation. In 2017 32nd IEEE/ACM International Conference on Auto- mated Software Engineering (ASE) , pages 135–146. Carlos E Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik R Narasimhan. 2024. SWE-bench: Can language mod- els resolve real-world github issues? In The Twelfth International Conference on Learning Representa- tions . Andy Konwinski, Christopher Rytting, Justin Fiedlerand Alex Shaw, Sohier Dane, Walter Reade, and Maggie Demkin. 2024. Konwinski prize. https: //kaggle.com/competitions/konwinski-prize . Kaggle. Microsoft. 2025. Introducing github copilot agent mode for vscode. Accessed on February 27, 2025. OpenAI et al. 2024a. GPT-4 Technical Report. arXiv preprint . ArXiv:2303.08774 [cs.CL]. OpenAI et al. 2024b. Openai gpt-4o system card. Ac- cessed on March 6, 2025. Jiayi Pan, Xingyao Wang, Graham Neubig, Navdeep Jaitly, Heng Ji, Alane Suhr, and Yizhe Zhang. 2024. Training Software Engineering Agents and Verifiers with SWE-Gym. arXiv preprint . ArXiv:2412.21139 [cs]. Rangeet Pan, Vu Le, Nachiappan Nagappan, Sumit Gul- wani, Shuvendu Lahiri, and Mike Kaufman. 2021. Can Program Synthesis be Used to Learn Merge Conflict Resolutions? An Empirical Analysis. In Proceedings of the 43rd International Conference on Software Engineering , pages 785–796. Timo Schick, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettle- moyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. In Thirty-seventh Conference on Neural Information Processing Systems . Chaochao Shen, Wenhua Yang, Minxue Pan, and Yu Zhou. 2023. Git Merge Conflict Resolution Lever- aging Strategy Classification and LLM. In 2023 IEEE 23rd International Conference on Software Quality, Reliability, and Security (QRS) , pages 228– 239. Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Re- flexion: language agents with verbal reinforcement learning. In Advances in Neural Information Pro- cessing Systems , volume 36, pages 8634–8652. Alexey Svyatkovskiy, Sarah Fakhoury, Negar Ghorbani, Todd Mytkowicz, Elizabeth Dinella, Christian Bird, Jinu Jang, Neel Sundaresan, and Shuvendu K. Lahiri. 2022. Program merge conflict resolution via neural transformers. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Sym- posium on the Foundations of Software Engineering , ESEC/FSE 2022, pages 822–833. Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, and Heng Ji. 2024. 
Executable Code Actions Elicit Better LLM Agents. In Proceedings of
the 41st International Conference on Machine Learning , pages 50208–50232.John Yang, Kilian Leret, Carlos E. Jimenez, Alexan- der Wettig, Kabir Khandpur, Yanzhe Zhang, Binyuan Hui, Ofir Press, Ludwig Schmidt, and Diyi Yang. 2025. SWE-smith: Scaling Data for Software Engi- neering Agents. arXiv preprint . ArXiv:2504.21798 [cs]. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. 2023. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations . Andrew Zakonov. 2025. Meet junie, your coding agent by jetbrains. Accessed on February 27, 2025. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. InAdvances in Neural Information Processing Sys- tems, volume 36, pages 46595–46623. A Dataset Details In this section we provide further details about the diversity of our datasets with respect to represented repositories, and MCR difficulty. For GitGood- Bench Train we also provide information on the distribution across programming languages, for all other datasets this distribution is fixed to ensure diversity (see Section 3.2). Please also refer to our dataset cards on HuggingFace: GitGoodBench Lite, GitGoodBench, GitGoodBench Train. In Table 5 we provide statistics on the diversity of our datasets with respect to the repositories rep- resented. Notably, there is a heavy skew toward Python and to a lesser extent Java. However, this is in line with our expectations and the popularity of the programming languages we consider in our datasets. Table 6 provides further information re- garding the distribution of MCR difficulties across our datasets. We note that the difficulty of MCR is overall relatively well-distributed with a spike in difficulty on GitGoodBench. Despite stratifying on difficulty, these spikes can occur because we also stratify on other features such as the programming language. A.1 Sample Data Table 7 shows the complete structure of a data point in our dataset. The detailed contents of thescenario field vary depending on the sample_type and are presented in Appendix A.2. Scenario Type Easy Medium Hard Success Rate Easy Medium Hard Solve Rate IR 100 86.36 95.52 93.33 13.33 31.82 30.43 26.67 ICC 100 90.91 91.3 93.33 20 27.27 21.74 23.33 MCR 80.64 84.62 62.5 76.76 22.58 7.69 0 13.33 Total 90.16 87.72 85.48 88 19.67 24.56 19.35 21.11 Table 4: Success and solve rates (%) by scenario type and difficulty, rounded to two decimal places. GitGoodBench Lite contains 31 easy, 13 medium, 16 hard MCR samples and 15 easy, 22 medium, and 23 hard FCC samples. Statistic GitGoodBench Lite GitGoodBench GitGoodBench Train Total Repositories 100 479 816 Mean Samples Per Repo 1.20 1.87 21.40 Standard Deviation 0.79 2.80 48.80 Minimum 1 1 1 25th Percentile 1 1 2 Median 1 1 6 75th Percentile 1 2 18 Maximum 8 46 644 Table 5: The diversity of our datasets with respect to unique repositories from which we mined our samples. Our datasets consist of 816 (525 Python, 284 Java, and 79 Kotlin) unique repositories overall. A.2
The Scenario Field

In this section we provide further details regarding the contents of the scenario field for the two sample types in our datasets.

A.2.1 Contents For FCC Samples

In Table 8 we show the structure of the scenario field for FCC samples. Furthermore, Table 11 provides an exemplary FCC datapoint's scenario field contents. The scenario contains information regarding the source of the sample (e.g., the branch from which it was mined), the length of the FCC, and its starting and end commits.

A.2.2 Contents For Merge Samples

In Table 9 we detail the structure of the scenario field for MCR samples. Table 10 shows a representative example of an MCR scenario field from our GitGoodBench Lite. The scenario field contains the metadata based on which we compute the difficulty of this sample. In this case, the sample is hard, because there are multiple conflicts across multiple files. Furthermore, the sample contains the merge commit that serves as ground truth. We use the parent commits of this merge commit to generate a merge conflict that is resolved in the merge commit.

A.2.3 File-Commit Chain (FCC) Difficulty Heuristic

For FCC scenarios we define their difficulty through the purity of the FCC:

d_{FCC}(p) = \begin{cases} \text{pure}, & 0.5 \le p < 0.75 \\ \text{mixed}, & 0.75 \le p < 1 \\ \text{noisy}, & \text{otherwise} \end{cases}

where p refers to the ratio of changes within the file for which we mined an FCC to the overall changes in an FCC. We consider any line with a Git change prefix (+ or -) a change. We expected this to capture the difficulty of IR and ICC scenarios, because it captures how distributed the changes the agent has to reason with are across files, the intuition being that it is easier to coherently generate commits and a plan for rebasing when the reasoning spans fewer files. While easy FCC scenarios exhibit the maximum success rate of 100%, they have a lower solve rate than medium and hard scenarios (Table 4). One possible explanation could be that we are simply considering the ratio of changes and not the overall number of changes. A large overall number of changes forces the agent to reason across a much larger context window than a smaller number, yet in the purity-based difficulty heuristic we investigated, both are assigned the same difficulty.

Difficulty | GitGoodBench Lite | GitGoodBench | GitGoodBench Train
Easy | 51.67 | 41.33 | 51.65
Medium | 21.67 | 24.44 | 18.39
Hard | 26.67 | 34.22 | 29.97
Table 6: Difficulty distribution (in %) across GitGoodBench datasets.

Field | Value | Description
id | mockito_mockito_merge_0002 | Unique identifier
name | mockito/mockito | Repository name (owner/repository)
default_branch | main | Primary repository branch
license | MIT License | Repository license
stargazers | 14,617 | GitHub stars count
created_at | 2012-10-13T08:27:12 | Repository creation date
topics | java;java-library;mock;... | Repository topics/tags
programming_language | java | Primary language
scenario | <scenario-details> | Scenario-specific data (see Tables 10 and 11)
sample_type | merge | Type of code sample
project_size | medium | Estimated project size
difficulty | easy | Complexity level
Table 7: Structure of a sample data point from our dataset. Each entry contains metadata about the repository, along with scenario-specific information that varies based on the sample type. The topics field is truncated for brevity.

Field
Description file The relative path of the file this sample refers to. branch The branch name from which this FCC originates. times_seen_consecutively The number of times this particular file was modified in succession. purity ∈[0; 1] . Ratio between changes in the file and the total changes in all files of a FCC scenario. newest_commit The commit hash corresponding to the newest or last commit in this FCC. oldest_commit The commit hash corresponding to the oldest or first commit in this FCC. contains_non_pl_files A boolean indicating whether any commit in this sample includes changes to files with types not covered by the supported PLs. Table 8: Contents For FCC Samples. Table 11 shows a representative example of the scenario field from our dataset. Due to apurity of0.68, we consider this sample to be of medium difficulty. We define the purity-based difficulty we investigated in more detail in Appendix A.2.3. B Prompts In this section we will provide the prompts used by our system for the individual scenarios and the LLM-as-a-Judge evaluation. For any missing de- tails please refer to our repository.B.1 Merge Conflict Resolution (MCR) Scenarios In Figures 2 to 4 we provide the prompt our system uses for MCR scenarios. We show information on (1) the temporal ordering of the merge parent com- mits, (2) which conflicts occur ( git show output) and (3) detailed instructions for resolving conflicts. Furthermore, we provide examples for the tools we provide in various conflict resolution contexts. Field Description merge_commit_hash The ground truth merge commit in which the conflicts are resolved. parents List of parent commit hashes of the merge commit. number_of_files_with_merge_conflict The overall number of distinct files in which a merge conflict occurs. total_number_of_merge_conflicts Total number of distinct merge conflicts across all files. files_in_merge_conflict Relative paths of the files that contain merge conflicts. Table 9: Contents For Merge Conflict Resolution (MCR) Samples. Field Value merge_commit_hash baa37f65fdff5b780a50d5b5c6bf8bc3ade43815 parents [d758810c59a9134f437d60f73a82036749688ccb, 5dcd493c67ff863c69c1214f0892a80e4951087e] number_of_files_with_merge_conflict 2 total_number_of_merge_conflicts 2 files_in_merge_conflict [cogs/gpt_3_commands_and_converser.py, models/openai_model.py] Table 10: A sample Merge Conflict Resolution ( MCR ) scenario field from GitGoodBench Lite. Each entry contains metadata about a specific merge conflict instance, including commit identifiers and statistics about the conflicting files. B.2 Interactive Rebase (IR) Scenarios In Figures 5 to 7 we provide the prompt our sys- tem uses for IRscenarios. We provide information on the commits participating in the rebase ( git show output) to save agent turns spent reading the commit information. Then we provide detailed instructions for performing an interactive rebase. Finally, we provide examples for the tools we pro- vide and the JSON schema the agent must use to interact with the rebase-todo file. B.3 Iterative Committing of Changes (ICC) Scenarios In Figures 8 and 9 we provide the prompt our sys- tem uses for ICC scenarios. First, we provide de- tailed instructions for chunking changes into log- ically cohesive commits that incrementally build toward the final patch. Next, we show the contents of the hunks the agent can select to save agent turns spent reading the commit information. Finally, we provide examples for the tools that the agent can use
in these scenarios. B.4 LLM-as-a-Judge Evaluation In Figures 10 and 11 we provide the prompt our system uses when evaluating the Git histories gen- erated by the agent in FCC samples. First, we pro- vide detailed instructions regarding the dimensions based on which the LLM should assess the qualityof a history. Next, we show the model one exam- ple response for each evaluation case. By doing so, we help the model follow the response schema. We also specify the response schema directly in the model configuration. Finally, we present the ground truth and agent-generated Git history. We use the same prompt for both evaluation runs when re-evaluating to mitigate the position bias. Field Value file composer/models/huggingface.py branch origin/vincent-mlflow-logger-verbose times_seen_consecutively 3 purity 0.68 newest_commit c24b29f19c4c131a3ea7098dd8b8a5edde344819 oldest_commit c1ff80900f46d4e36feb4b326689fe14fc41cbc6 Table 11: A sample File-Commit Chain ( FCC) scenario field from GitGoodBench Lite. This example records a file’s modification pattern across multiple commits, including branch information and a purity metric defined in Appendix A.2.3 and Section 3.1. Merge Conflict Resolution (MCR) Prompt - Part 1 You are a staff software engineer with expertise in {programming_language} and git. You are helping a junior team member who has initiated a merge that resulted in one or more merge conflicts in one or more files. Your task is to help your junior colleague with resolving all {total_amount_of_merge_conflicts} merge conflicts. The semantic meaning and temporal relationship of the two sides of the merge conflicts are as follows for ALL merge conflicts you will encounter: {commit_temporal_ordering} The following files have merge conflicts: {files_with_conflicts} Below are all merge conflicts that need to be resolved, delimited by <CONFLICT-i> tags where i is the 0-based index: {all_merge_conflicts} Figure 2: Our MCR prompt. MCR Prompt - Part 2 Instructions: - Start with resolving the conflict at index 0 (CONFLICT-0) and proceed in ascending order through the conflicts. CONFLICT-0 is the current conflict that needs to be resolved. - Consider the context around the merge conflicts, of the overall diffs and files in which the conflicts occur. - Resolve the conflicts in a cohesive manner. For example, if you remove a function in a conflict, make sure that you also remove any invocations of that function in any other conflicts. - If you are just choosing one of the two sides, without changing any of the actual content, make sure to also reproduce the whitespaces exactly. - If the merge conflict occurs due to a NOP (e.g. one side of the conflict is empty, the other is a commented code block) favor resolving the conflict to the most maintainable and concise way. Avoid dead code. - Make sure to consider the implications your previous resolutions have on the remaining resolutions, especially when resolving multiple conflicts in a single file. - If you find simple bugs, such as typos, copy and paste errors in variable assignments or parameters, feel free to help your junior developer fix these. Do not perform complex refactorings or attempt to change code drastically. Make as few changes to the side that you are accepting as possible. - Consider the context of the
temporal relationship of the branches that are being merged and the intent of the junior developer, with respect to which side of the conflict contains the local and which the incoming changes. The intent of the developer is to merge the incoming changes into the local changes. You must only use the following tools and follow their specification exactly and always provide a reason for calling a tool. All tools other than the ones listed below are INVALID and you MUST NOT use them under any circumstances. Valid tools: - view_current_merge_conflict_with - view_merge_conflict_at - resolve_current_merge_conflict_with - view_diff_for - view_file_at: You must not use this command more than once per file as it is costly. Below follow some examples detailing the usage of the above tools: view_current_merge_conflict_with(context_window_size=15, reason= 'to get a more comprehensive overview of the local context around the current merge conflict ') view_current_merge_conflict_with(context_window_size=0, reason= 'to view only the current merge conflict without any local context ') view_current_merge_conflict_with(context_window_size=5, reason= 'to view only the current merge conflict with some local context ') view_merge_conflict_at(conflict_index=1, context_window_size=5, reason= 'To ensure that the resolution for CONFLICT-0 is cohesive with CONFLICT-1 ') view_merge_conflict_at(conflict_index=1, context_window_size=10, reason= 'To remind myself of the changes and context around CONFLICT-3 so that I can decide whether to delete the import for ShoppingClient in the current conflict ') view_diff_for(relative_path_from_project_root= 'src/app/io/FileParser.java ', reason= 'view the full diff between the local and incoming changes for the file at path ') view_diff_for(relative_path_from_project_root= 'src/app/api/quantative_methods/ regression.python ', reason= 'understand how to resolve the current conflict such that the resolution is cohesive and makes sense in the context of the overall changes ') view_file_at(relative_path_from_project_root= 'src/tests/ test_data_transformations.py ', reason= 'understand the full context of the merge conflict, because I think I might have found a small bug, but I need more context to make sure it is one before applying a minor fix as part of the conflict resolution. ') view_file_at(relative_path_from_project_root= 'src/app/utils.py ', reason= 'to check whether there are other commented out code blocks ') Figure 3: Our MCR prompt continued. MCR Prompt - Part 3 view_file_at(relative_path_from_project_root= 'src/app/Authenticator.java ', reason= 'to see how the changes I made so far fit into the file and to ensure I resolve the current conflicts such that it is cohesive with these previous resolutions ') resolve_current_merge_conflict_with(content= 'from app.api.auth import PremiumUser\\n ', reason= 'The premium user class is a new authentication class that is being used in the incoming changes and thus is most likely part of what the junior developer wants to have access to ') resolve_current_merge_conflict_with(content= 'bool debug = conf.shouldDebug; \\n bool enableCaching = conf.enableCaching;\\n bool shouldRetry = conf.shouldRetry;\\n ', reason= 'both of these configuration flags are being used in the local changes, also I fixed a copy-paste bug and now the enableCaching flag is correctly initialized to conf.enableCaching. The shouldRetry flag is an incoming change that conflicts with what the developer introduced, I will thus keep all three flags. 
') Key Requirements: - Once the conflicts are resolved you are done and must terminate. - Your decisions must be cohesive across merge conflicts. - Make sure that all your lines end with a
newline character to avoid introducing accidental changes. - You must always fill all parameters of the provided tools. This includes the "reason" parameter. Figure 4: Our MCR prompt continued. Interactive Rebase (IR) Prompt - Part 1 You are a staff software engineer with expertise in {programming_language} and Git. You are helping a junior team member who has been committing all day without pushing their commits to the remote. Help them perform an interactive rebase to clean up their local tree. The rebase has already been initiated for you and is currently paused so that you can inspect the commits participating in the rebase and edit the rebase todo list. The commits involved in the rebase are listed below. When referring to them in function calls, use the commit index "i" to refer to <COMMIT-i>. Avoid viewing all commits again, they are already presented below. The commits are delimited by the <COMMIT-i> and </COMMIT-i> tags: {participating_commits} Instructions: Consider the changes in the commits and make adjustments if necessary such that the local tree: - contains logically cohesive commits - all commits have meaningful, descriptive commit messages that follow a unified format - does not contain commits with duplicate commit messages - follows best practices for maintainable code You must only use the following tools and follow their specification exactly. Always provide a reason for calling a tool. List of valid tools for this scenario: - view_rebase_todo: View current rebase todo list - execute_rebase: Execute the rebase with the current rebase todo list, thereby all rebase-todo-list-items are processed in an ascending order Figure 5: Our IR prompt. IR Prompt - Part 2 - show_changes_in: If you want to spend more time thinking about some of the presented commits, use this tool to inspect the changes introduced by commit with index i Below are some examples of how to use this function: show_changes_in(commit_index=4, reason= 'to inspect the changes in COMMIT-4 ') show_changes_in(commit_index=0, reason= 'to understand how the changes in COMMIT-0 relate to its commit message ') - update_rebase_todo_list: Update the rebase todo list, reordering items or adjusting the commands to perform on commits. Each item in the list that you must pass to update_rebase_todo_list must be a string that complies with the rebase-todo-list-item JSON schema specified below: {{ "type": "json", "schemaName": "rebase-todo-list-item", "schema": {{ "type": "object", "properties": {{ "commit_index": {{"type": "integer"}}, "command": {{"enum": ["pick", "drop", "fixup", "fixup -c", "squash", "reword"]}}, "commit_msg": {{"type": "string"}}, }} }}, "required": ["operations"], "additionalProperties": False }} Below are some examples of how to use this function: Note: Positioning the rebase todo item with index 2 at the first position in the list, will swap it to the topmost position in the rebase todo list update_rebase_todo_list(rebase_todo_list_items=[ '{{"commit_index": 2, "command": "pick"}} ', '{{"commit_index": 1, "command": "reword", "commit_msg": "FIX: Explicitly handle division by zero edge case"}} ', '{{"commit_index": 0, "command": "fixup"}} ', '{{"commit_index": 3, "command": "pick"}} ', '{{"commit_index": 4, "command": "drop"}} ' ], reason= 'to remove an unnecessary, noise, experimental commit, improve the commit message of COMMIT-1 and consolidate the changes in COMMIT-0 and COMMIT-1 ') Note: Example for a different sample, you must
ensure to always have exactly one item per commit. update_rebase_todo_list(rebase_todo_list_items=[ '{{"commit_index": 0, "command": "pick"}} ', '{{"commit_index": 2, "command": "squash", "commit_msg": "ADD: Define interfaces and test cases for ShoppingBasketService"}} ', '{{"commit_index": 1, "command": "pick"}} ' ], reason= 'to reorder the local tree, yielding more coherent and logical increments of changes in the local tree and to consolidate the changes in COMMIT-0 and COMMIT-2 ') Only the following commands are allowed for the rebase todo list items. Make sure to only provide the required fields for each command, all fields other than the required fields are invalid: - pick: Use this commit as is. Required fields: ["commit_index", "command"] - drop: Remove this commit. Required fields: ["commit_index", "command"] - fixup: Meld this commit into previous commit, reducing the total amount of commits by 1. Only keep the previous commit 's log message. Required fields: ["commit_index", "command"] - fixup -C: Meld this commit into previous commit, reducing the total amount of commits by 1. Only keep this commit 's log message. Required fields: ["commit_index", "command"] - squash: Meld this commit into previous commit, reducing the total amount of commitsby 1. Commit message of resulting commit must be specified. Required fields: ["commit_index", "command", "commit_msg"] - reword: Use commit, but edit commit message. Commit message must be specified. Required fields: ["commit_index", "command", "commit_msg"] Figure 6: Our IR prompt. IR Prompt - Part 3 Key Requirements: - You must not simply pick all commits without modifying anything in the rebase todo list. Do your best to improve the local tree however you see fit. - Avoid squashing all commits into a single commit, consider for which commits this would improve the resulting commit history. - Try to consolidate the total size of the local tree such that the resulting tree has length k<{times_seen_consecutively} - You must always fill all parameters of the provided tools. This includes the "reason" parameter. Figure 7: Our IR prompt. Iterative Committing of Changes (ICC) Prompt - Part 1 You are a staff software engineer with expertise in {programming_language} and Git. You are helping a junior team member who has been working all day without creating a commit to iteratively create commits and introduce their changes into the repository in a maintainable way. Help them to select hunks such that you can create multiple, small, but logically cohesive commits that are structurally sound, and follow best practices for maintainable code. Instructions: - Review the remaining hunks of code and help the junior engineer select the appropriate hunks for each commit. - Ensure that you select as many hunks as you need to ensure structural integrity, ie avoid breaking changes by, for example, removing a variable definition or initialization in one commit, but removing the usages of the variable in another commit. - Identify the ids of the hunks that you should pass by the number following "HUNK-" in the list of remaining hunks below. For HUNK-8, the id you need to pass, if you want to select this hunk, would be 8. - Each commit should be focused, small, and logically cohesive. - Provide a
clear and concise commit message for each commit following the format provided in the example usages. Key Requirements: - Avoid apply all changes in a single commit unless you are absolutely sure this will yield the best possible git history. - You must always fill all parameters of the provided tools. This includes the "reason" parameter. Process all of the following {number_of_remaining_hunks} hunks: {remaining_hunks} Task: Pass a list of hunks to include in the commit and a descriptive commit message to the provided tool. You must only use the following tools and follow their specification exactly and always provide a reason for calling a tool. All tools other than the ones listed below are INVALID and you MUST NOT use them under any circumstances. Valid tools: - commit_changes_in - commit_remaining_changes Figure 8: Our ICC prompt. Iterative Committing of Changes (ICC) Prompt - Part 2 Example usages: commit_changes_in(selected_hunks=[1,3], commit_message="FIX: Handle edge case of uninitialized object",reason="to group the fixing of uninitialized objects together") commit_changes_in(selected_hunks=[4], commit_message="ADD: Introduced new enum class CarConfiguration", reason="to isolate the addition of the new enum class") commit_changes_in(selected_hunks=[2,5], commit_message="REFACTOR: Migrate car configurator to CarConfiguration enum", reason="The remaining changes both deal with migrating the existing implementation to the enum introduced in the previous commits. This way the commits build on each other in a logical progression and the migration takes place once we ensure that the class we migrate to is already present, thus avoiding breaking changes.") Once you have received a signal that you are done, you must always call the tool in the example below to terminate: commit_remaining_changes(commit_message="UPDATE: Implement data streaming feature", reason="because all hunks were processed and I must now terminate") Figure 9: Our ICC prompt continued. LLM-as-a-Judge Evaluation Prompt - Part 1 Please act as an impartial judge and evaluate the quality of the two git histories that are displayed below. Your evaluation should consider the following aspects: - The quality of the commit messages with respect to consistency, conciseness, duplication and correctness with respect to the content of the commit. - The logical cohesion of the changes present within the commits. Changes in a commit should have high logical cohesion. - The logical progression and common thread between the commits and especially the order in which the commits are presented. - The size of the commits. Commits should be as small as possible without breaking the system (e.g. changing a method signature in a non-backwards compatible way without also changing all uses of the method in the same commit). Your job is to evaluate which git history is of higher quality. Avoid any position biases and ensure that the order in which the responses were presented does not influence your decision. Do not allow the length of the responses to influence your evaluation. Be as objective as possible. You must adhere to the response format demonstrated in example responses below: {{ 'evaluation_result ':'HISTORY-1 ', 'evaluation_reason ':'The first git history has more descriptive commit and non-duplicate messages that align much more accurately with the content of the commits. ' }} {{ 'evaluation_result ':'HISTORY-2 ', 'evaluation_reason ':'The commits in git
history 2 are more concise and introduce logically coherent changes. The changes are introduces in such a way that they are unlikely to break the system as the commits are self- contained with respect to the part of the system that they affect and correctly propagate changes throughout the system. Thus I chose history 2 despite it having poorer quality commit messages. ' }} Figure 10: Our LLM-as-a-Judge evaluation prompt. We use the same prompt for both evaluation runs, we simply swap the positions of the histories that are evaluated in the prompt. LLM-as-a-Judge Evaluation Prompt- Part 2 {{ 'evaluation_result ':'TIE', 'evaluation_reason ':'Both histories introduces changes that are logically coherent and have similar commit messages. None of the two histories have fundamental issues, such as duplicate commit messages or changes that obviously would break the system if they were introduced as presented. As I am unsure, I am declaring a tie. ' }} <HISTORY-1> {history_1} </HISTORY-1> <HISTORY-2> {history_2} </HISTORY-2> Figure 11: Our LLM-as-a-Judge evaluation prompt continued.
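To make the position-swapped judging concrete, below is a minimal sketch of how the two judge verdicts could be combined. The aggregation rule (agreement wins, otherwise a tie), the helper names, and the use of `ast.literal_eval` (the example replies are single-quoted rather than strict JSON) are assumptions for illustration, not the authors' implementation.

```python
import ast

def normalize(verdict: str, swapped: bool) -> str:
    """Map a raw verdict onto the ordering used in the first run."""
    if verdict == "TIE" or not swapped:
        return verdict
    # In the swapped run, HISTORY-1 denotes what was HISTORY-2 in the first run.
    return "HISTORY-1" if verdict == "HISTORY-2" else "HISTORY-2"

def aggregate(run1_reply: str, run2_reply: str) -> str:
    """Combine the original-order and swapped-order judge replies."""
    v1 = normalize(ast.literal_eval(run1_reply)["evaluation_result"], swapped=False)
    v2 = normalize(ast.literal_eval(run2_reply)["evaluation_result"], swapped=True)
    return v1 if v1 == v2 else "TIE"  # assumed rule: disagreement counts as a tie

# Example: both runs prefer the same underlying history despite the swap.
print(aggregate(
    "{'evaluation_result': 'HISTORY-1', 'evaluation_reason': '...'}",
    "{'evaluation_result': 'HISTORY-2', 'evaluation_reason': '...'}",
))  # -> HISTORY-1
```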
arXiv:2505.22591v1 [cs.CL] 28 May 2025Self-Error-Instruct: Generalizing from Errors for LLMs Mathematical Reasoning Erxin Yu1, Jing Li1,2*, Ming Liao1, Qi Zhu3, Boyang Xue4, Minghui Xu3, Baojun Wang3,Lanqing Hong3,Fei Mi3,Lifeng Shang3 1Department of Computing, The Hong Kong Polytechnic University 2Research Centre for Data Science & Artificial Intelligence 3Huawei Noah’s Ark Lab,4The Chinese University of Hong Kong erxin.yu@connect.polyu.hk ,jing-amelia.li@polyu.edu.hk Abstract Although large language models demonstrate strong performance across various domains, they still struggle with numerous bad cases in mathematical reasoning. Previous approaches to learning from errors synthesize training data by solely extrapolating from isolated bad cases, thereby failing to generalize the extensive pat- terns inherent within these cases. This paper presents Self-Error-Instruct (SEI), a framework that addresses these model weaknesses and syn- thesizes more generalized targeted training data. Specifically, we explore a target model on two mathematical datasets, GSM8K and MATH, to pinpoint bad cases. Then, we generate er- ror keyphrases for these cases based on the instructor model’s (GPT-4o) analysis and iden- tify error types by clustering these keyphrases. Next, we sample a few bad cases during each generation for each identified error type and in- put them into the instructor model, which syn- thesizes additional training data using a self- instruct approach. This new data is refined through a one-shot learning process to ensure that only the most effective examples are kept. Finally, we use these curated data to fine-tune the target model, iteratively repeating the pro- cess to enhance performance. We apply our framework to various models and observe im- provements in their reasoning abilities across both in-domain and out-of-domain mathemat- ics datasets. These results demonstrate the ef- fectiveness of self-error instruction in improv- ing LLMs’ mathematical reasoning through er- ror generalization. 1 Introduction Large language models (LLMs) (Brown et al., 2020; Ouyang et al., 2022; Jiang et al., 2023; Team, 2024) have demonstrated remarkable capa- bilities across various domains, particularly after instruction-based fine-tuning. Yet, LLMs are still facing substantial challenges in complex reasoning *Corresponding authortasks, particularly in mathematical reasoning. They continue to encounter numerous bad cases, often committing errors that compromise their reliability. Previous work has taken advantage of these errors to improve model performance. Mistake- tuning and self-rethinking (Tong et al., 2024b) leverage the historical errors of LLMs to enhance their performance during both the fine-tuning and inference stages. LLMs like ChatGPT (Ouyang et al., 2022) are utilized to synthesize training datasets based on the bad cases from smaller mod- els (Ying et al., 2024; Tong et al., 2024a). LLMs are also employed to optimize the reasoning steps of smaller models (An et al., 2024), generating corrective data to train these models. However, current methods predominantly syn- thesize training data from individual bad cases. While this can somewhat enhance model perfor- mance, the data often suffers from a lack of gen- eralization because it is too reliant on specific in- stances, which limits its ability to cover a wider array of error patterns. 
To overcome this limitation, we introduce the Self-Error-Instruct (SEI) framework, which aims to generalize training data based on error types instead of focusing solely on individual cases. For example, in Figure 1, the left subfigure displays various error types of Qwen2.5-Math. We enhanced its mathematical reasoning by generalizing the data according to these error types, which is depicted in the right subfigure. To the best of our knowledge, we are the first to explore data synthesis and selection for LLMs to generalize from errors based on error types in math reasoning.
Figure 1: The left table shows some error types of Qwen2.5-Math-7B on the MATH and GSM8K training sets (including Problem Understanding, Variable and Setup, Mixed Calculation and Setup, Arithmetic, Algebraic, Probability and Counting, Logical and Reasoning, and Geometric Errors), while the right presents the results after training on data generalized from error categories.
Specifically, we begin by assessing the target model to identify bad cases. An instructor model is first used to pinpoint errors from these bad cases and generate relevant keyphrases, then cluster these keyphrases into distinct error types. We select a few samples from each error type as prompts for the instructor model in a self-instruct manner to synthesize new data. We further apply a one-shot learning-based refinement to the new data to verify its effectiveness in rectifying the target model's deficiencies while maintaining the target model's current success, only keeping the data that works. This refinement process is iteratively repeated to improve the model's performance.
We employ LLaMA3-8B-Instruct, Qwen2.5-Math-7B, and Mathstral-7B-v0.1 as the target models to identify bad cases within the training datasets, GSM8K and MATH. We conduct comprehensive evaluations using both in-domain and out-of-domain testing. For in-domain tests, we use test sets from GSM8K and MATH. For out-of-domain tests, we utilize four additional mathematical reasoning datasets: TAL, GaoKao, SAT, and College. Experimental results show that training the target models with our synthesized data significantly improves performance on both in-domain and out-of-domain test sets. Specifically, LLaMA3 and Mathstral achieve average improvements of 1.72% and 0.98%, respectively, while Qwen2.5 shows a more significant gain of 24.94%. Additionally, our one-shot learning-based data selection method is highly effective, outperforming both random selection and LESS (Xia et al., 2024), a recently proposed gradient-based data selection method. It also surpasses the performance of models trained on the full dataset. This demonstrates that our approach can accurately identify high-quality training data to enhance model performance. Our experiments further highlight the importance of resolving bad cases in the one-shot learning selection process and maintaining the model's correctness on the original good cases. Finally, we analyze the fix rate of bad cases at each iteration, examine the impact of generalized data volume on model performance, and compare two training strategies: iterative training with data synthesized in each round versus training from scratch with all synthesized data.
In summary, our contributions are as follows:
• We improve data generalization by organizing mathematical reasoning data according to error types instead of individual bad cases.
• We propose the Self-Error-Instruct framework, which analyzes bad cases through keyphrases
ex- traction and clustering, then performs data general- ization for each cluster. •Experiments show that our method efficiently generalizes data based on error types, enhancing mathematical reasoning skills and validating the effectiveness of our data selection strategy. 2 Related Work 2.1 Mathematical Reasoning With the rapid advancement of large language mod- els, they have shown remarkable capabilities across a wide range of NLP tasks, as demonstrated by models like ChatGPT (Ouyang et al., 2022), Claude (Anthropic, 2024), and Gemini (Team, 2024). How- ever, mathematical reasoning remains a significant challenge for these models. To address this issue, many models, such as OpenAI o1 (OpenAI, 2024), Qwen-2.5-Math (Yang et al., 2024), and DeepSeek- Math (Shao et al., 2024), have undergone special- ized training for mathematical tasks. Researchers have explored various strategies to enhance perfor- mance in this area, including prompting, pretrain- ing, and fine-tuning. Among these techniques, some focus specifi- cally on learning from errors to enhance model performance. LEMA (An et al., 2024) leveraged GPT-4 (OpenAI, 2024a) to correct the model’s erro- neous reasoning paths and used the refined reason- ing paths to fine-tune the model. Self-rethinking and mistake tuning (Tong et al., 2024b) analyze the causes of model errors to improve reasoning perfor- mance. The former uses an iterative process to help the model avoid repeating past mistakes, while the latter fine-tunes the model by incorporating correct and erroneous reasoning examples. LLM2LLM (Tong et al., 2024a) generates new synthetic data based on error cases to improve model performance iteratively. Learning from error and learning from error by contrast (Ying et al., 2024) are two strate- gies designed to improve the performance of target models. The former generates targeted training data by analyzing erroneous responses, while the latter by contrasting correct and incorrect responses. In contrast to these approaches, which focus solely on individual bad cases, our method generalizes data based on error types. This allows for more sys- tematic coverage of diverse issues, enhances data diversity, and improves generalization ability. 2.2 Data Selection Data selection plays a crucial role in instruction tuning, as it helps identify high-quality data, en- hancing model performance and generalization while minimizing noise to optimize training. LIMA (Zhou et al., 2023) achieved exceptional perfor- mance by selecting 1,000 high-quality question- answer pairs for instruction tuning, delivering re- sults comparable to those obtained through large- scale instruction tuning and reinforcement learn- ing. Instruction-following difficulty (Li et al., 2024a) was proposed to evaluate the difficulty of following instructions for each sample. LESS (Xia et al., 2024) identified training data most similar to the validation set based on gradient features. NUGGETS (Li et al., 2024b) assessed the impact of candidate instructions on a predefined task set’s perplexity using one-shot learning, comparing the score differences between zero-shot and one-shot learning as a reference for data selection. Building on NUGGETS, we designed a one-shot learning data selection method tailored for mathematical rea- soning. This method selects data based on whether the generated data can address the target model’s bad cases while preserving its good cases. 3 Our Self-Error-Instruct Framework Our framework1aims to enhance
the mathematical reasoning ability of the target model $M_{target}$ by identifying its weaknesses, referred to as bad cases, on an existing mathematical training dataset $D_{train}$. These bad cases are analyzed to guide the synthesis of targeted training data that directly addresses the model's specific shortcomings. By progressively training on this tailored data, the mathematical capabilities of $M_{target}$ are effectively improved. As shown in Figure 2, our process consists of four key steps: 1) Bad Case Extraction (Section 3.1), which identifies the incorrect cases where the target model $M_{target}$ fails on the existing mathematical reasoning dataset $D_{train}$. 2) Self Error Instruct (Section 3.2) generates targeted data for $M_{target}$ by first identifying error keyphrases, then clustering similar errors, and finally synthesizing data specifically tailored to address the identified error types. 3) Data Selection (Section 3.3) filters and selects high-quality data from the generated dataset, ensuring that only the most relevant and effective examples are used for training. 4) Iterative Training (Section 3.4) uses the selected data to retrain $M_{target}$, iterating this process to continuously refine and enhance the model's performance, thereby improving its mathematical reasoning capabilities with each cycle.
(Footnote 1: Our code is available at https://github.com/ErxinYu/SEI .)
Figure 2: An overview of our Self-Error-Instruct framework. It consists of four key steps: (1) Bad case extraction identifies failure cases from the target model. (2) Self-error-instruct generates error keyphrases, clustering, and synthesizes data for each error type. (3) One-shot learning data selection retains only high-quality and effective examples for training. (4) Iterative training refines the target model by fine-tuning it with the curated data and repeating the process to further improve performance.
3.1 Bad Case Extraction
For each problem with its correct reasoning path $(q_i, r_i)$ in the training dataset $D_{train}$, we use $M_{target}$ to generate a reasoning path. During this process, we identify and collect the bad case $(q_i, r_i, \hat{r}_i)$ into the error dataset $D_{error}$, where the answers derived from the reasoning paths differ, i.e., $Ans(\hat{r}_i) \neq Ans(r_i)$, where $Ans(\cdot)$ is the function that extracts the answer from a given reasoning path. Thus, the error dataset is defined as:
$D_{error} = \{(q_i, r_i, \hat{r}_i) \mid Ans(\hat{r}_i) \neq Ans(r_i)\}.$ (1)
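To make the bad-case extraction in Eq. (1) concrete, here is a minimal sketch. The callables `generate` (the target model's decoding) and `extract_answer` (the $Ans(\cdot)$ function) are hypothetical placeholders for illustration, not the authors' released code.

```python
from typing import Callable, Iterable, List, Tuple

def extract_bad_cases(
    train_set: Iterable[Tuple[str, str]],      # pairs (q_i, r_i) from D_train
    generate: Callable[[str], str],            # hypothetical: target-model decoding
    extract_answer: Callable[[str], str],      # hypothetical: Ans(.) from Eq. (1)
) -> List[Tuple[str, str, str]]:
    """Collect bad cases (q_i, r_i, r_hat_i) where Ans(r_hat_i) != Ans(r_i)."""
    d_error = []
    for q_i, r_i in train_set:
        r_hat_i = generate(q_i)                # the model's own reasoning path
        if extract_answer(r_hat_i) != extract_answer(r_i):
            d_error.append((q_i, r_i, r_hat_i))
    return d_error
```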
3.2 Self Error Instruct
In this phase, for each bad case in $D_{error}$, we leverage the $M_{instructor}$ model to perform error analysis by examining the reasoning paths and generating an error keyphrase that captures the nature of the mistake. These error keyphrases are then clustered into distinct groups based on similarity. For each error type, targeted data synthesis generates new training samples specifically designed to address model weaknesses. This process produces the curated dataset $D_{SEI}$, containing diverse and error-specific training samples to enhance the target model's reasoning ability.
Error Keyphrase Generation. During this stage, we address each bad case $(q_i, r_i, \hat{r}_i)$ in the dataset $D_{error}$ using the $M_{instructor}$ model for detailed error analysis. This process generates an error keyphrase $e_i$, which captures the specific nature of the error. To achieve this, we employ a structured function $Extract[\cdot]$ with a keyphrase extraction prompt to analyze the incorrect reasoning path $\hat{r}_i$ and produce the corresponding error keyphrase. Details of the prompt are provided in Appendix A.2. The process is mathematically represented as follows:
$\text{EK-Set} = \{\, e_i \mid e_i = Extract[M_{instructor}, (q_i, r_i, \hat{r}_i)],\ \forall (q_i, r_i, \hat{r}_i) \in D_{error} \,\},$ (2)
where EK-Set represents the collection of error keyphrases generated for all bad cases in $D_{error}$. This approach ensures that each $e_i$ accurately captures the underlying issue in the model's reasoning path, providing a solid foundation for subsequent clustering and data synthesis steps.
Error Keyphrases Clustering. After obtaining the EK-Set, we utilize the $M_{instructor}$ model to cluster the keyphrases within this set. This clustering process identifies distinct error types, denoted as the ET-Set. The process can be mathematically expressed as:
$\text{ET-Set} = Cluster[M_{instructor}, \text{EK-Set}],$ (3)
where $Cluster[\cdot]$ is a clustering prompt (see Appendix A.3) designed to group the error keyphrases into coherent and distinct types. Each type is manually reviewed (see Appendix C) to filter and validate its relevance and appropriateness.
Error Type-Specific Data Synthesis. For each error type within the ET-Set, we begin by sampling a subset of bad cases from the same error type, which serve as in-context learning prompts. These prompts are then used to guide $M_{instructor}$ in generating additional data that falls under the same error type. This process ensures that the generated data remains consistent with the specific error patterns of the given type, thereby expanding our dataset with more diverse but relevant examples. Through this process, we ultimately obtain a synthesized dataset $D_{SEI}$, which enriches our data with examples covering distinct error patterns. The specific prompt used for this generalization process can be found in Appendix A.4.
3.3 One-shot Learning Selection
After obtaining the generalized dataset $D_{SEI}$ targeting specific errors, our goal is to select a small subset of high-quality data for training the target model. In previous work, NUGGETS (Li et al., 2024b) uses a one-shot learning approach to filter data. It calculates a score for each instruction example based on its impact on the perplexity of a set of pre-defined tasks, allowing for the identification of the most beneficial data for instruction tuning. In our approach to mathematical reasoning tasks, instead of relying on perplexity, we directly evaluate whether the newly generalized data can effectively serve as a one-shot prompt to guide the target model in resolving bad cases. Furthermore, we aim to ensure that the target model maintains its performance on good cases originally answered correctly, preserving its effectiveness across challenging and straightforward examples.
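Before the selection criterion is formalized in Eqs. (4) and (5) below, the following sketch illustrates the scoring loop: each synthetic sample is prepended as a one-shot prompt to every validation question, and its score is the number of validation cases it resolves. The prompt layout and helper names are illustrative assumptions; the actual one-shot prompt is given in the paper's appendix.

```python
from typing import Callable, Iterable, List, Tuple

def one_shot_scores(
    synthetic_data: List[Tuple[str, str]],     # (q_j, r_j) pairs from D_SEI
    dev_set: Iterable[Tuple[str, str]],        # (q_i, r_i) pairs from D_dev
    generate: Callable[[str], str],            # hypothetical: target-model decoding
    extract_answer: Callable[[str], str],      # hypothetical: Ans(.) extractor
) -> List[int]:
    """One score per synthetic sample: how many dev cases it resolves as a one-shot prompt."""
    dev_cases = list(dev_set)
    scores = []
    for q_j, r_j in synthetic_data:
        resolved = 0
        for q_i, r_i in dev_cases:
            prompt = f"{q_j}\n{r_j}\n\n{q_i}"  # assumed one-shot prompt layout
            r_ij = generate(prompt)
            resolved += int(extract_answer(r_ij) == extract_answer(r_i))
        scores.append(resolved)                # corresponds to S^j_osl in Eq. (5)
    return scores
```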
First, we randomly sample a subset of bad cases and good cases to create a validation set, $D_{dev}$. Next, we evaluate each sample in $D_{SEI}$ by measuring the number of cases in $D_{dev}$ that can be resolved when the sample is used as a one-shot prompt. This evaluation serves as the criterion for selecting high-quality data. The process can be represented as:
$r_i^j = M_{target}\big(\underbrace{q^j r^j}_{\text{One-Shot Prompt}} \oplus\ q_i\big),$ (4)
$S_{osl}^j = \sum_i \mathbb{I}\big[Ans(r_i^j) = Ans(r_i)\big].$ (5)
The expression $q^j r^j$ represents the $j$-th synthetic data point from the dataset $D_{SEI}$. The score $S_{osl}^j$ is the one-shot learning score, calculated by summing the indicator function $\mathbb{I}[\cdot]$, which is 1 if the answer from $r_i^j$ matches $r_i$, and 0 otherwise. Here, $q_i r_i$ are elements from $D_{dev}$, where $r_i$ is the correct reasoning path for $q_i$. The prompt for one-shot learning is shown in Appendix 7. For each synthetic data point in $D_{SEI}$, we calculate the set of one-shot learning scores $\{S_{osl}^1, S_{osl}^2, \ldots, S_{osl}^m\}$. By sorting these scores, we obtain the selection $D_{SEI}^{osl}$.
3.4 Iterative Training Optimization
The selected data, $D_{SEI}^{osl}$, is used to train the target model, $M_{target}$. After the model is enhanced through this training, it is applied to $D_{train}$ once more to identify new bad cases that it still struggles with. This process is iterated, continuously optimizing the target model by improving its ability to handle challenging examples, thereby enhancing its overall mathematical reasoning ability.
4 Experimental Setup
4.1 Data Synthesis
We identify bad cases from the training datasets of GSM8K and MATH, using GPT-4o (OpenAI, 2024b) as the instructor model to generate error keyphrases, perform clustering, and synthesize data. For each error type, during the self-error-instruct process, we sample 5 data points from the error dataset $D_{error}$ and 3 data points from the already generated data within the current error type to serve as prompts. Each time, GPT-4o generalizes 20 new math data points. We then filter out data with a Rouge-L score greater than 0.7 compared to the GSM8K and MATH training and test datasets to enhance diversity and prevent test set leakage. We randomly select 100 data points, comprising 50 good and 50 bad cases, to construct the validation set $D_{dev}$. The number of iterations for data synthesis and model training is 3. In each iteration, we generate 10,000 data points by synthesizing 5,000 examples for the error types of GSM8K and 5,000 for MATH.
(Footnote 2: We use the Microsoft Azure AI services at https://azure.microsoft.com/ )
Dataset            Level        Difficulty  Train  Test
GSM8K              Elementary   Easy        7,473  1,319
MATH               Competition  ExHard      7,498  5,000
TAL-SCQ            K12 Math     Medium      -      1,496
GaoKaoBench-Math   High School  Hard        -      508
SAT-MATH           High School  Hard        -      102
CollegeMath        College      ExHard      -      2,818
Table 1: Statistics of Different Datasets. We extract bad cases from the GSM8K and MATH training sets and use the test sets of all datasets for evaluation. Datasets marked with "-" indicate only test data is available and are used for out-of-domain evaluation.
We select the top 5%
of the synthetic data from each part and combine them into a uni- fied dataset for training. Over three iterations, we generate a total of 30,000 data points and select 1,500 for training. We also compared two meth- ods for training the target model: iterative training, which starts from the model trained in the previous round, and training from scratch, which uses the selected data in a single step. The results of these two methods are shown in Table 5. 4.2 Target Model Setting We use the instruction-tuned Llama3-8b-instruct model (Grattafiori et al., 2024), the math- specialized Qwen2.5-Math-7B (Yang et al., 2024), and Mathstral-7B-v0.1 (Jiang et al., 2023) as our target models. During training, we employ LoRA (Hu et al., 2021) with a maximum sequence length of 2048 tokens, set the number of training epochs to 3, and use a learning rate 2e-05. 4.3 Evaluation We used the GSM8K (Cobbe et al., 2021) and Math (Hendrycks et al., 2021) test sets for in- domain evaluation. For out-of-domain evaluation, we utilized four challenging datasets: 1) TAL-SCQ (TAL, 2023): A K-12 mathematics test set contain- ing 1,496 test examples. 2) GaoKaoBench-Math (Zhang et al., 2024): Comprising 508 test exam- ModelsIn-Domain Out-of-DomainA VG GSM8K MATH TAL GaoKao SAT College Llama3-8B-Instruct 77.56 27.36 37.03 15.55 39.22 15.54 35.38 + Training data 63.99 23.32 29.01 12.00 34.31 13.41 29.34 + Bad Cases 65.13 23.20 30.08 11.22 33.33 13.41 29.40 + Self-Instruct 74.83 26.20 35.44 14.76 37.25 15.26 33.96 + LLMs-as-Instructors 79.37 27.84 36.17 16.14 38.24 15.79 35.59 + LLM2LLM 76.61 27.60 40.10 15.16 38.24 15.51 35.54 + SEI-ICL 79.76 28.42 39.91 16.73 42.15 15.61 37.10 Qwen2.5-Math-7B 57.92 50.52 28.07 3.93 39.22 16.96 32.77 + Training data 57.54 56.22 46.19 38.78 65.69 24.20 48.10 + Bad Cases 64.21 56.90 45.45 34.44 63.73 22.36 47.85 + Self-Instruct 80.57 58.24 52.66 43.31 65.69 26.87 54.56 + LLMs-as-Instructors 79.31 58.76 54.62 45.43 63.73 28.07 54.99 + LLM2LLM 81.17 58.88 53.56 43.11 65.69 27.96 55.06 + SEI-ICL 87.64 61.28 54.21 46.06 68.62 28.42 57.71 Mathstral-7B-v0.1 80.67 52.58 48.66 47.83 61.76 25.80 52.88 + Training data 72.10 44.40 41.44 42.91 56.86 24.17 46.98 + Bad Cases 70.58 46.06 41.24 43.11 59.80 24.59 47.56 + Self-Instruct 79.68 52.02 47.13 44.69 58.82 25.28 51.27 + LLMs-as-Instructors 79.61 52.42 48.13 43.31 63.73 25.19 52.07 + LLM2LLM 81.35 52.64 46.79 45.87 59.08 25.16 51.82 + SEI-ICL 82.87 53.70 49.47 48.62 62.75 25.72 53.86 Table 2: Main results on in-domain and out-of-domain mathematical test sets, evaluated using the exact match (EM). All experiments are conducted in a zero-shot setting. SEI-ICL refers to our proposed method, which leverages the self-error-instruct framework to generalize and train using the top 5% of data selected through one-shot learning. For fair comparison, the generalized data sizes for the baselines are kept consistent with SEI-ICL. ples, this dataset features math problems from the Chinese high-school curriculum. 3) SAT-MATH (Zhong et al., 2024): Consisting of 102 questions, this dataset includes math problems from the U.S. high-school curriculum. 4) CollegeMath (Tang et al., 2024): This dataset contains 2,818 test exam- ples of college-level math problems. The
detailed dataset statistics are provided in Table 1. We evaluated the models on these datasets us- ing greedy decoding in a zero-shot setting, with the maximum generation length set to 2048. Per- formance was measured using Exact Match (EM), where answers were extracted from the generated reasoning paths and compared to the correct ones. All evaluations were conducted using the MWP- Bench framework3. 4.4 Baselines We compare with several baselines: 1) Training Data , where the model is trained on the combined GSM8K and MATH datasets; 2) Bad Cases , using 3https://github.com/microsoft/unilm/tree/ master/mathscale/MWPBenchbad cases from the initial target model; 3) LLMs- as-Instructors , using Learning from error (LE) by generating tailored training data for errors. (Ying et al., 2024) 4) Self-Instruct (Wang et al., 2023), generating 1,500 data points; 7) LLM2LLM (Tong et al., 2024a), also generating 1,500 data points; 8)Rand , randomly selecting 500 data points per iteration for a total of 1,500; and 9) LESS (Xia et al., 2024), selecting 1,500 data points based on gradient similarity. We adopt the same setting as SEI for self- instruct, except that the sampled examples are se- lected randomly. Eight samples (five bad cases and three generated data) are selected in each itera- tion, and GPT-4o generates 20 new samples. This process is repeated to produce a total of 30,000 samples, from which 1,500 training samples are selected using the ICL method. For LLM2LLM and LLMs-as-Instructors, one new sample is gener- ated per bad case using GPT-4o, with 500 samples generated per round over three rounds, resulting in 1,500 samples. We filter out samples with a Rouge- L similarity score above 0.7 during data synthesis Models # SamplesIn-Domain Out-of-DomainA VG GSM8K MATH TAL GaoKao SAT College Llama-3-8B-Instruct - 77.56 27.36 37.03 15.55 39.22 15.54 35.38 SEI-FULL 100% 78.01 28.02 38.64 15.94 41.18 16.25 36.34 -Rand 5% (1,500) 77.80 28.54 37.43 15.16 40.20 15.72 35.81 -LESS 5% (1,500) 77.95 28.18 36.83 14.96 39.22 15.87 35.50 -One-shot ICL5% (1,500) 79.76 28.42 39.91 16.73 42.15 15.61 37.10 10% (3,000) 79.98 27.96 39.37 15.75 40.19 16.22 36.58 20% (6,000) 79.37 28.18 39.65 15.94 39.22 15.51 36.31 Qwen2.5-Math-7B - 57.92 50.52 28.07 3.93 39.22 16.96 32.77 SEI-FULL 100% 83.45 60.34 53.57 44.61 67.65 28.22 56.30 -Rand 5% (1,500) 82.52 58.82 53.44 43.58 65.69 27.81 55.31 -LESS 5% (1,500) 83.13 59.76 53.69 45.28 66.67 28.14 56.11 -One-shot ICL5% (1,500) 87.64 61.28 54.21 46.06 68.62 28.42 57.71 10%(3,000) 85.74 61.56 54.89 45.76 65.69 28.33 57.16 20% (6,000) 86.58 60.78 54.76 44.29 63.73 28.57 56.45 Mathstral-7B-v0.1 - 80.67 52.58 48.66 47.83 61.76 25.80 52.88 SEI-FULL 100% 81.12 53.56 49.13 49.61 59.80 25.62 53.14 -Rand 5% (1,500) 79.98 52.50 48.21 47.05 60.78 25.19 52.29 -LESS 5% (1,500) 79.68 52.20 48.60 48.03 60.78 25.23 52.42 -One-shot ICL5% (1,500) 82.87 53.70 49.47 48.62 62.75 25.72 53.86 10% (3,000) 80.52 53.50 48.79 48.23 61.76 24.88 52.95 20% (6,000) 83.24 53.40 49.53 46.85 63.73 24.77 53.59 Table 3: Model performance under different data selection strategies and samples. The bolded results highlight the best performance achieved using the FULL dataset and the top 5% of samples selected
through Rand, LESS, and one-shot ICL methods. by comparing them against the GSM8K and MATH training and test datasets. For Rand selection, data is proportionally sampled from each error type, with more samples drawn from types with more bad cases. For LESS, following the original setting, we randomly select 10 examples from GSM8K and MATH as the validation set, compute the average gradient of the validation set, and select generated data with the most similar gradients.
5 Experimental Results
5.1 Main Results
Table 2 presents our main results, from which we can draw several conclusions. 1) Our method, SEI-ICL, outperforms others by substantial margins in all math datasets. Specifically, after training, Llama-3-8B-Instruct improves by 1.72% and Mathstral by 0.98%, while Qwen2.5-Math-7B achieves an improvement of 24.94%, highlighting the effectiveness of our error-type-guided data generation approach. 2) Training solely on the original GSM8K and MATH datasets or the identified bad cases results in performance degradation for the Llama3 and Mathstral models. This suggests that existing math training datasets offer limited benefits for already instruction-tuned models. It highlights the necessity of data synthesis. 3) With the same amount of data, our data generation method outperforms other baselines. As shown in Table 2, the average improvement achieved by SEI-ICL on all the models is higher than that of these baselines. Furthermore, combined with the results in Table 3, we observe that even without data selection, randomly selecting the same amount of data (Rand) performs better than self-instruct (random generation), LLMs-as-Instructors, and LLM2LLM (based on a single bad case), demonstrating that our error-type-guided data generation is more effective.
5.2 Data Selection
Table 3 presents the results of different data selection methods. By selecting the top 5% of the data using our one-shot learning method, the performance of the trained target models surpasses that of SEI-FULL, which uses the full dataset for training. Furthermore, our models continue to outperform SEI-FULL as the amount of selected data increases. Under the same data size, the one-shot learning method achieves better results than Rand selection and LESS, showing the effectiveness of the one-shot learning approach specifically designed for mathematical problem selection.
We conducted analysis experiments on the data selection validation set $D_{dev}$ mentioned in Section 3.3. Specifically, we compared the approach of using only bad cases as $D_{dev}$ with the combined approach that includes both good and bad cases. The results of these experiments are shown in Figure 3. It can be observed that the combined approach outperforms the method using only bad cases across most datasets. This demonstrates that, when performing one-shot learning for data selection, it is important to ensure that the generated data addresses bad cases effectively and to maintain the correctness of the original good cases.
Figure 3: The effects of two one-shot ICL strategies (Bad+Good Cases vs. Bad Cases only) on the improvement of Qwen2.5 across GSM8K, MATH, TAL, GaoKao, SAT, and College.
5.3 Iterative Improvement Result
              Bad Case (Fix Rate)    Testset (EM Score)
              GSM8K    MATH          GSM8K    MATH
Iter-0 (ori)  0        0             55.50    32.32
Iter-1        29.98    23.17         79.48    57.21
Iter-2        38.01    39.44         84.70    58.19
Iter-3        39.13    40.57         87.79    59.18
Table 4: Bad Case Fix Rate of Qwen2.5-Math on GSM8K and MATH during iterative improvement, along with its performance on the test sets. Bad cases refer to the errors made by Qwen2.5-Math in the training data of GSM8K and MATH.
Table 4 presents the bad case fix rate and test set performance of the Qwen2.5-Math model across different iterations. As shown, with the increase in iterations, the bad case fix rate consistently improves for both datasets, accompanied by a steady improvement in test set performance. This indicates that our method effectively identifies the model's error types in each iteration and generates targeted data for training, thereby enhancing the model's overall performance.
5.4 Iterative vs. From-scratch Training
            GSM8K                     MATH
Model       Iterative  From-scratch   Iterative  From-scratch
Llama3      78.09      79.76          27.62      28.42
Qwen2.5     87.79      87.64          59.18      61.28
Mathstral   81.96      82.87          48.02      53.70
Table 5: Comparison of model performance on GSM8K and MATH tasks under different training methods (Iterative and From-scratch).
Table 5 highlights the differences between iterative training and from-scratch training within our framework. In iterative training, each new iteration continues training the target model obtained in the previous round. In contrast, from-scratch training involves directly training the initial target model once the data is obtained after three rounds of data generation. The results show that from-scratch training outperforms iterative training. A possible explanation for this is that in each round of iterative training, we only select the top 5% of the data for training. With such a small amount of data, iterative fine-tuning may lead to overfitting over multiple rounds. On the other hand, training from scratch on the aggregated datasets helps mitigate this issue, resulting in better overall performance.
5.5 Different Synthetic Data Sizes
We conducted an analysis of the relationship between the amount of unfiltered synthetic data and performance, with the results presented in Figure 4. It can be observed that for all target models, the size of the generalization data is not proportional to performance. For Llama3, performance initially improves but eventually starts to decline. Specifically, the best performance on GSM8K is achieved with 15,000 training samples, while the optimal result on MATH is reached with 25,000 samples. In contrast, the results for Qwen2.5 and Mathstral are relatively inconsistent. These findings further highlight the importance of data selection. For models like Llama3 and Mathstral, which have already undergone extensive instruction tuning, the quantity of data may not be the key to improving performance. Instead, the focus should shift to constructing small but high-quality datasets.
Figure 4: Comparison of GSM8K and MATH performance under different synthetic data sizes (x-axis: synthetic data size from 5,000 to 30,000; curves for Llama-3-8B-Instruct, Qwen2.5-Math-7B, and Mathstral-7B-v0.1).
6 Conclusion
We propose Self-Error-Instruct, a novel framework to improve LLMs' mathematical
reasoning by generalizing training data based on error types rather than individual bad cases. Our method en- hances data diversity and mitigates overfitting by analyzing errors, clustering them into categories, and synthesizing targeted data using a self-instruct approach. Experiments on LLaMA3-8B-Instruct, Qwen2.5-Math-7B, and Mathstral demonstrate no- table performance improvements with our method, achieving average gains of 1.72%, 24.94%, and 0.98%, respectively, across both in-domain and out-of-domain evaluations. Limitations Our framework has three main limitations: the high cost of using GPT-4o as the instructor model, the focus on GSM8K and MATH datasets for bad case extraction, which may limit the diversity of errors, and the increased time consumption caused by one- shot learning. Our approach is the reliance on GPT-4o as the instructor model for error analysis and data synthe- sis. While GPT-4o is highly effective in identifying error keyphrases and generating targeted training data, its use incurs significant computational and financial costs, which may limit the scalability and accessibility of the framework. The second limitation lies in the scope of our bad case extraction and iterative refinement pro- cess, which is currently confined to the GSM8K and MATH datasets. As a result, the error types identified and addressed may be limited to those specific to these datasets, potentially restricting the generalizability of the framework to other mathe- matical reasoning tasks or datasets. In the future, a more dynamic approach could be adopted, where bad cases are extracted from the initial datasets andcontinuously identified within the synthesized data during the iterative process. This would allow the framework to discover new and diverse error types as the training data evolves, further broadening the issues addressed and enhancing the model’s math- ematical reasoning capabilities. This expansion would help ensure the framework adapts to various problems, improving its robustness and applicabil- ity to real-world scenarios. The third limitation lies in the one-shot data se- lection process. Although this approach is a one- time operation and produces results superior to LESS and random selection, the one-shot learning phase requires significant computational resources. This is because each of the 30,000 generated sam- ples needs to be validated against an ICL-formatted validation set containing 100 samples. Ethics Considerations This study strictly uses OpenAI’s GPT-4o model for research purposes, in compliance with Ope- nAI’s Business Terms, Section 2-(e). Our work analyzes reasoning errors to improve AI models and does not involve developing or commercial- izing competing products. We ensure no derived models are distributed or made available to third parties, maintaining full adherence to ethical and legal standards. Acknowledgements This work is supported by a grant from the Re- search Grants Council of the Hong Kong Spe- cial Administrative Region, China (Project No. PolyU/25200821), the Innovation and Technology Fund (Project No. PRP/047/22FX), PolyU Internal Fund from RC-DSAI (Project No. 1-CE1E), and a gift fund from Huawei (N-ZGM3). References Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou, and Weizhu Chen. 2024. Learning from mistakes makes llm better reasoner. Preprint , arXiv:2310.20689. Anthropic. 2024. The claude 3 model family: Opus, sonnet, haiku. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-V oss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam Mc- Candlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learn- ers.Preprint , arXiv:2005.14165. Lingjie Chen, Ruizhong Qiu, Siyu Yuan, Zhining Liu, Tianxin Wei, Hyunsik Yoo, Zhichen Zeng, Deqing Yang, and Hanghang Tong. 2024. Wapiti: A water- mark for finetuned open-source llms. arXiv preprint arXiv:2410.06467 . Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word prob- lems. Preprint , arXiv:2110.14168. Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al- Dahle, Aiesha Letman, Akhil Mathur, Alan Schel- ten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mi- tra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Ro- driguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Al- lonsius, Daniel Song, Danielle Pintz, Danny Livshits, Danny Wyatt, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Francisco Guzmán, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis An- derson, Govind Thattai, Graeme Nail, Gregoire Mi- alon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Is- han Misra, Ivan Evtimov, Jack Zhang, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang,Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Jun- teng Jia, Kalyan Vasuden Alwala, Karthik Prasad, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, Khalid El-Arini, Krithika Iyer, Kshitiz Malik, Kuenley Chiu, Kunal Bhalla, Kushal Lakhotia, Lauren Rantala-Yeary, Laurens van der Maaten, Lawrence Chen, Liang Tan, Liz Jenkins, Louis Martin, Lovish Madaan, Lubo Malo, Lukas Blecher, Lukas Landzaat, Luke de Oliveira, Madeline Muzzi, Mahesh Pasupuleti, Mannat Singh, Manohar Paluri, Marcin Kardas, Maria Tsimpoukelli, Mathew Oldham, Mathieu Rita, Maya Pavlova, Melanie Kam- badur, Mike Lewis, Min Si, Mitesh Kumar Singh, Mona Hassan, Naman Goyal, Narjes Torabi, Niko- lay Bashlykov, Nikolay Bogoychev, Niladri Chatterji, Ning Zhang, Olivier Duchenne, Onur Çelebi, Patrick Alrassy, Pengchuan Zhang, Pengwei Li, Petar Va- sic, Peter Weng, Prajjwal Bhargava, Pratik Dubal, Praveen Krishnan, Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong, Ragavan Srinivasan, Raj Ganapathy, Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic, Roberta Raileanu, Rohan Maheswari, Rohit 
Girdhar, Rohit Patel, Romain Sauvestre, Ron- nie Polidoro, Roshan Sumbaly, Ross Taylor, Ruan Silva, Rui Hou, Rui Wang, Saghar Hosseini,
Sa- hana Chennabasappa, Sanjay Singh, Sean Bell, Seo- hyun Sonia Kim, Sergey Edunov, Shaoliang Nie, Sha- ran Narang, Sharath Raparthy, Sheng Shen, Shengye Wan, Shruti Bhosale, Shun Zhang, Simon Van- denhende, Soumya Batra, Spencer Whitman, Sten Sootla, Stephane Collot, Suchin Gururangan, Syd- ney Borodinsky, Tamar Herman, Tara Fowler, Tarek Sheasha, Thomas Georgiou, Thomas Scialom, Tobias Speckbacher, Todor Mihaylov, Tong Xiao, Ujjwal Karn, Vedanuj Goswami, Vibhor Gupta, Vignesh Ramanathan, Viktor Kerkez, Vincent Gonguet, Vir- ginie Do, Vish V ogeti, Vítor Albiero, Vladan Petro- vic, Weiwei Chu, Wenhan Xiong, Wenyin Fu, Whit- ney Meers, Xavier Martinet, Xiaodong Wang, Xi- aofang Wang, Xiaoqing Ellen Tan, Xide Xia, Xin- feng Xie, Xuchao Jia, Xuewei Wang, Yaelle Gold- schlag, Yashesh Gaur, Yasmine Babaei, Yi Wen, Yiwen Song, Yuchen Zhang, Yue Li, Yuning Mao, Zacharie Delpierre Coudert, Zheng Yan, Zhengxing Chen, Zoe Papakipos, Aaditya Singh, Aayushi Sri- vastava, Abha Jain, Adam Kelsey, Adam Shajnfeld, Adithya Gangidi, Adolfo Victoria, Ahuva Goldstand, Ajay Menon, Ajay Sharma, Alex Boesenberg, Alexei Baevski, Allie Feinstein, Amanda Kallet, Amit San- gani, Amos Teo, Anam Yunus, Andrei Lupu, An- dres Alvarado, Andrew Caples, Andrew Gu, Andrew Ho, Andrew Poulton, Andrew Ryan, Ankit Ramchan- dani, Annie Dong, Annie Franco, Anuj Goyal, Apara- jita Saraf, Arkabandhu Chowdhury, Ashley Gabriel, Ashwin Bharambe, Assaf Eisenman, Azadeh Yaz- dan, Beau James, Ben Maurer, Benjamin Leonhardi, Bernie Huang, Beth Loyd, Beto De Paola, Bhargavi Paranjape, Bing Liu, Bo Wu, Boyu Ni, Braden Han- cock, Bram Wasti, Brandon Spence, Brani Stojkovic, Brian Gamido, Britt Montalvo, Carl Parker, Carly Burton, Catalina Mejia, Ce Liu, Changhan Wang, Changkyu Kim, Chao Zhou, Chester Hu, Ching- Hsiang Chu, Chris Cai, Chris Tindal, Christoph Fe- ichtenhofer, Cynthia Gao, Damon Civin, Dana Beaty, Daniel Kreymer, Daniel Li, David Adkins, David Xu, Davide Testuggine, Delia David, Devi Parikh, Diana Liskovich, Didem Foss, Dingkang Wang, Duc Le, Dustin Holland, Edward Dowling, Eissa Jamil, Elaine Montgomery, Eleonora Presani, Emily Hahn, Emily Wood, Eric-Tuan Le, Erik Brinkman, Este- ban Arcaute, Evan Dunbar, Evan Smothers, Fei Sun, Felix Kreuk, Feng Tian, Filippos Kokkinos, Firat Ozgenel, Francesco Caggioni, Frank Kanayet, Frank Seide, Gabriela Medina Florez, Gabriella Schwarz, Gada Badeer, Georgia Swee, Gil Halpern, Grant Herman, Grigory Sizov, Guangyi, Zhang, Guna Lakshminarayanan, Hakan Inan, Hamid Shojanaz- eri, Han Zou, Hannah Wang, Hanwen Zha, Haroun Habeeb, Harrison Rudolph, Helen Suk, Henry As- pegren, Hunter Goldman, Hongyuan Zhan, Ibrahim Damlaj, Igor Molybog, Igor Tufanov, Ilias Leontiadis, Irina-Elena Veliche, Itai Gat, Jake Weissman, James Geboski, James Kohli, Janice Lam, Japhet Asher, Jean-Baptiste Gaya, Jeff Marcus, Jeff Tang, Jen- nifer Chan, Jenny Zhen, Jeremy Reizenstein, Jeremy Teboul, Jessica Zhong, Jian Jin, Jingyi Yang, Joe Cummings, Jon Carvill, Jon Shepard, Jonathan Mc- Phie, Jonathan Torres, Josh Ginsburg, Junjie Wang, Kai Wu, Kam Hou U, Karan Saxena, Kartikay Khan- delwal, Katayoun Zand, Kathy Matosich, Kaushik Veeraraghavan, Kelly Michelena, Keqian Li, Ki- ran Jagadeesh, Kun Huang, Kunal Chawla, Kyle Huang, Lailin Chen, Lakshya Garg, Lavender A, Leandro Silva, Lee Bell, Lei Zhang, Liangpeng Guo, Licheng Yu, Liron Moshkovich, Luca Wehrst- edt, Madian Khabsa, Manav Avalani, Manish Bhatt, Martynas Mankus, Matan Hasson, Matthew Lennie, Matthias Reso, Maxim Groshev, Maxim Naumov, Maya Lathi, Meghan
Keneally, Miao Liu, Michael L. Seltzer, Michal Valko, Michelle Restrepo, Mihir Pa- tel, Mik Vyatskov, Mikayel Samvelyan, Mike Clark, Mike Macey, Mike Wang, Miquel Jubert Hermoso, Mo Metanat, Mohammad Rastegari, Munish Bansal, Nandhini Santhanam, Natascha Parks, Natasha White, Navyata Bawa, Nayan Singhal, Nick Egebo, Nicolas Usunier, Nikhil Mehta, Nikolay Pavlovich Laptev, Ning Dong, Norman Cheng, Oleg Chernoguz, Olivia Hart, Omkar Salpekar, Ozlem Kalinli, Parkin Kent, Parth Parekh, Paul Saab, Pavan Balaji, Pe- dro Rittner, Philip Bontrager, Pierre Roux, Piotr Dollar, Polina Zvyagina, Prashant Ratanchandani, Pritish Yuvraj, Qian Liang, Rachad Alao, Rachel Rodriguez, Rafi Ayub, Raghotham Murthy, Raghu Nayani, Rahul Mitra, Rangaprabhu Parthasarathy, Raymond Li, Rebekkah Hogan, Robin Battey, Rocky Wang, Russ Howes, Ruty Rinott, Sachin Mehta, Sachin Siby, Sai Jayesh Bondu, Samyak Datta, Sara Chugh, Sara Hunt, Sargun Dhillon, Sasha Sidorov, Satadru Pan, Saurabh Mahajan, Saurabh Verma, Seiji Yamamoto, Sharadh Ramaswamy, Shaun Lind- say, Shaun Lindsay, Sheng Feng, Shenghao Lin, Shengxin Cindy Zha, Shishir Patil, Shiva Shankar, Shuqiang Zhang, Shuqiang Zhang, Sinong Wang, Sneha Agarwal, Soji Sajuyigbe, Soumith Chintala, Stephanie Max, Stephen Chen, Steve Kehoe, Steve Satterfield, Sudarshan Govindaprasad, Sumit Gupta, Summer Deng, Sungmin Cho, Sunny Virk, SurajSubramanian, Sy Choudhury, Sydney Goldman, Tal Remez, Tamar Glaser, Tamara Best, Thilo Koehler, Thomas Robinson, Tianhe Li, Tianjun Zhang, Tim Matthews, Timothy Chou, Tzook Shaked, Varun V ontimitta, Victoria Ajayi, Victoria Montanez, Vijai Mohan, Vinay Satish Kumar, Vishal Mangla, Vlad Ionescu, Vlad Poenaru, Vlad Tiberiu Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, Wen- wen Jiang, Wes Bouaziz, Will Constable, Xiaocheng Tang, Xiaojian Wu, Xiaolan Wang, Xilun Wu, Xinbo Gao, Yaniv Kleinman, Yanjun Chen, Ye Hu, Ye Jia, Ye Qi, Yenda Li, Yilin Zhang, Ying Zhang, Yossi Adi, Youngjin Nam, Yu, Wang, Yu Zhao, Yuchen Hao, Yundi Qian, Yunlu Li, Yuzi He, Zach Rait, Zachary DeVito, Zef Rosnbrick, Zhaoduo Wen, Zhenyu Yang, Zhiwei Zhao, and Zhiyu Ma. 2024. The llama 3 herd of models. Preprint , arXiv:2407.21783. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. Preprint , arXiv:2103.03874. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. Preprint , arXiv:2106.09685. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Men- sch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guil- laume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. Preprint , arXiv:2310.06825. Nicholas Lee, Thanakul Wattanawong, Sehoon Kim, Karttikeya Mangalam, Sheng Shen, Gopala Anu- manchipalli, Michael Mahoney, Kurt Keutzer, and Amir Gholami. 2024. LLM2LLM: Boosting LLMs with novel iterative data enhancement. In Findings of the Association for Computational Linguistics: ACL 2024 , pages 6498–6526, Bangkok, Thailand. Associ- ation for Computational Linguistics. Ming Li, Yong Zhang, Zhitao Li, Jiuhai Chen, Lichang Chen, Ning Cheng, Jianzong Wang, Tianyi Zhou, and Jing Xiao. 2024a. 
From quantity to quality: Boosting LLM performance with self-guided data selection for instruction tuning. In Proceedings of the 2024 Conference
of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) , pages 7602–7635, Mexico City, Mexico. Association for Computational Linguistics. Yunshui Li, Binyuan Hui, Xiaobo Xia, Jiaxi Yang, Min Yang, Lei Zhang, Shuzheng Si, Ling-Hao Chen, Jun- hao Liu, Tongliang Liu, Fei Huang, and Yongbin Li. 2024b. One-shot learning as instruction data prospector for large language models. In Proceed- ings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers) , pages 4586–4601, Bangkok, Thailand. Associ- ation for Computational Linguistics. Lin Long, Rui Wang, Ruixuan Xiao, Junbo Zhao, Xiao Ding, Gang Chen, and Haobo Wang. 2024. On llms- driven synthetic data generation, curation, and evalu- ation: A survey. Preprint , arXiv:2406.15126. OpenAI. 2024a. Gpt-4 technical report. Preprint , arXiv:2303.08774. OpenAI. 2024b. Gpt-4o. OpenAI. 2024. O1 Model. https://openai.com/o1/ . Accessed: 2024-12-11. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems , volume 35, pages 27730–27744. Curran Associates, Inc. Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y . K. Li, Y . Wu, and Daya Guo. 2024. Deepseekmath: Pushing the limits of mathemati- cal reasoning in open language models. Preprint , arXiv:2402.03300. TAL. 2023. Tal-scq5k. https://github.com/ math-eval/TAL-SCQ5K . GitHub repository. Ruixiang Tang, Xiaotian Han, Xiaoqian Jiang, and Xia Hu. 2023. Does synthetic data generation of llms help clinical text mining? Preprint , arXiv:2303.04360. Zhengyang Tang, Xingxing Zhang, Benyou Wang, and Furu Wei. 2024. Mathscale: Scaling instruc- tion tuning for mathematical reasoning. Preprint , arXiv:2403.02884. Gemini Team. 2024. Gemini: A family of highly capa- ble multimodal models. Preprint , arXiv:2312.11805. Terry Tong, Qin Liu, Jiashu Xu, and Muhao Chen. 2024a. Securing multi-turn conversational language models from distributed backdoor attacks. In Find- ings of the Association for Computational Linguistics: EMNLP 2024 , pages 12833–12846, Miami, Florida, USA. Association for Computational Linguistics. Yongqi Tong, Dawei Li, Sizhe Wang, Yujia Wang, Fei Teng, and Jingbo Shang. 2024b. Can LLMs learn from previous mistakes? investigating LLMs’ errors to boost for reasoning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 3065– 3080, Bangkok, Thailand. Association for Computa- tional Linguistics.Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. Self-instruct: Aligning language models with self-generated instructions. In Proceed- ings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 13484–13508, Toronto, Canada. Association for Computational Linguistics. Zhepei Wei, Wei-Lin Chen, and Yu Meng. 2025. In- structRAG: Instructing retrieval-augmented genera- tion via self-synthesized rationales. In The Thirteenth International Conference on Learning Representa- tions . 
Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, and Danqi Chen. 2024. LESS: Selecting influential data for targeted instruction
tuning. InInternational Conference on Machine Learning (ICML) . Ran Xu, Hejie Cui, Yue Yu, Xuan Kan, Wenqi Shi, Yuchen Zhuang, May Dongmei Wang, Wei Jin, Joyce Ho, and Carl Yang. 2024. Knowledge-infused prompting: Assessing and advancing clinical text data generation with large language models. In Find- ings of the Association for Computational Linguistics: ACL 2024 , pages 15496–15523, Bangkok, Thailand. Association for Computational Linguistics. Boyang Xue, Fei Mi, Qi Zhu, Hongru Wang, Rui Wang, Sheng Wang, Erxin Yu, Xuming Hu, and Kam-Fai Wong. 2024a. Ualign: Leveraging uncertainty esti- mations for factuality alignment on large language models. Preprint , arXiv:2412.11803. Boyang Xue, Hongru Wang, Rui Wang, Sheng Wang, Zezhong Wang, Yiming Du, Bin Liang, and Kam-Fai Wong. 2024b. A comprehensive study of multilin- gual confidence estimation on large language models. Preprint , arXiv:2402.13606. Boyang Xue, Weichao Wang, Hongru Wang, Fei Mi, Rui Wang, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu, and Kam-Fai Wong. 2023. Improving fac- tual consistency for knowledge-grounded dialogue systems via knowledge enhancement and alignment. Preprint , arXiv:2310.08372. An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jian- hong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu, Xingzhang Ren, and Zhenru Zhang. 2024. Qwen2.5-math tech- nical report: Toward mathematical expert model via self-improvement. Preprint , arXiv:2409.12122. Jiahao Ying, Mingbao Lin, Yixin Cao, Wei Tang, Bo Wang, Qianru Sun, Xuanjing Huang, and Shuicheng Yan. 2024. LLMs-as-instructors: Learn- ing from errors toward automating model improve- ment. In Findings of the Association for Compu- tational Linguistics: EMNLP 2024 , pages 11185– 11208, Miami, Florida, USA. Association for Com- putational Linguistics. Erxin Yu, Jing Li, Ming Liao, Siqi Wang, Gao Zuchen, Fei Mi, and Lanqing Hong. 2024a. CoSafe: Evalu- ating large language model safety in multi-turn dia- logue coreference. In Proceedings of the 2024 Con- ference on Empirical Methods in Natural Language Processing , pages 17494–17508, Miami, Florida, USA. Association for Computational Linguistics. Erxin Yu, Jing Li, and Chunpu Xu. 2024b. PopALM: Popularity-aligned language models for social media trendy response prediction. In Proceedings of the 2024 Joint International Conference on Computa- tional Linguistics, Language Resources and Evalu- ation (LREC-COLING 2024) , pages 12867–12878, Torino, Italia. ELRA and ICCL. Erxin Yu, Jing Li, and Chunpu Xu. 2024c. RePALM: Popular quote tweet generation via auto-response augmentation. In Findings of the Association for Computational Linguistics: ACL 2024 , pages 9566– 9579, Bangkok, Thailand. Association for Computa- tional Linguistics. Xiaotian Zhang, Chunyang Li, Yi Zong, Zhengyu Ying, Liang He, and Xipeng Qiu. 2024. Evaluating the performance of large language models on gaokao benchmark. Preprint , arXiv:2305.12474. Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2024. AGIEval: A human-centric benchmark for evaluating foundation models. In Findings of the Association for Computational Lin- guistics: NAACL 2024 , pages 2299–2314, Mexico City, Mexico. Association for Computational Lin- guistics. Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023.
Lima: Less is more for alignment. Preprint , arXiv:2305.11206. A Overview of Prompts Used A.1 Prompt for Training and Inference For all the models, we use the built-in chat tem- plates for training and inference. Figure 7 illus- trates the one-shot learning prompt for the Qwen2.5 model, where the model generates a response by being presented with an example of a synthetic question paired with its solution. A.2 Prompt for Error Keyphrase Generation Figure 5 illustrates the prompt used to generate error keyphrases for identifying and summarizing mistakes in mathematical reasoning. The input to the prompt includes a math question, the cor- rect reasoning path leading to the answer, and the model’s incorrect reasoning path. The prompt in- structs the model to analyze where the error oc- curred in its reasoning process, identify the cause,and summarize it as a concise yet descriptive keyphrase. The output is a single keyphrase in list format, effectively capturing the primary reason for the model’s mistake, which can then be used for further error analysis and targeted data synthesis. A.3 Prompt for Error Clustering Generation Figure 6 presents a prompt designed to guide the analysis and categorization of error keyphrases gen- erated from a model’s reasoning mistakes. The in- put to this prompt is a list of error keyphrases, and the task involves clustering these keyphrases based on common themes, causes, or areas of occurrence. For each cluster, the model is instructed to list the included keyphrases, explain their grouping, and assign a concise, descriptive name to the cluster. This process helps identify patterns in the model’s errors, offering meaningful insights into the types of mistakes made and enabling targeted improve- ments in the model’s reasoning capabilities. A.4 Prompt for Error Type-Specific Data Synthesis The prompt in Figure 8 and 9 guides the creation of 20 challenging math problems targeting specific error types in the GSM8K and MATH datasets. By analyzing the examples provided, the instruct model identifies patterns or issues causing errors and generates diverse, difficult problems aligned with these error types. The output follows a strict JSON format with detailed solutions and final nu- merical answers. B Related Work on Data Synthesis The generation of synthetic data driven by large language models has become an essential method for addressing the issues of data quantity and qual- ity in the field of deep learning (Long et al., 2024). LLMs, with their powerful language understand- ing and generation capabilities, can produce syn- thetic data that closely resembles the characteristics and patterns of real-world data (Wang et al., 2023). This synthetic data can not only serve as a substi- tute or supplement for real data but can also be generated according to specific instructions and conditions to meet the needs of different applica- tions (Yu et al., 2024c). The use of LLM-driven synthetic data generation is widespread across vari- ous fields, including general alignment (Chen et al., 2024; Yu et al., 2024a; Xue et al., 2024a), math- ematical reasoning (Lee et al., 2024; Ying et al., 2024), medical (Tang et al., 2023; Xu et al., 2024), Error Keyphrase
Generation Prompt:Based on the given mathematical problem, identify the step where the model made an error in its reasoning process. Analyze the reason for this error and summarize it using a keyphrase. The input consists of a math question, the correct answer, and the model's incorrect answer. Please output the result in the following format:[”Error keyphrase"]Ensure that your analysis focuses on the mistake in the model's problem-solving process. The keyphrases should be concise yet descriptive, effectively summarizing the primary reason for the model's mistake. Strictly adhere to the list format output without any additional information.Math Question: {Question𝒒𝒊}Answer: {Correct Reasoning Path𝒓𝒊}Model Output: {Incorrect Model Reasoning Path𝒓#𝒊}Figure 5: Prompt for Generating Error Keyphrases. Error Keyphrases Clustering Prompt:You are an expert in error analysis and categorization. You will be given a list of error keyphrases. Your task is to:1. Analyze the given error keyphrases and identify common themes or patterns.2. Group similar keyphrases together based on their likely causes, effects, or areas of occurrence.3. For each cluster:a. List the keyphrases in the cluster.b. Explain why these keyphrases are grouped together.c. Assign a concise but descriptive name to the cluster that captures its essence.4. Clusters should cover all the keyphrases.5. Present your results in a clear, structured format.Strictly output in plain text according to the following format, do not output in other formats or with extra symbols:[{{"Cluster name":, "Keyphrases":[], "explanation":,}}, {{"Cluster name":, "Keyphrases":[], "explanation":,}} ...]Your clustering should aim to provide meaningful insights that can help in understanding and addressing the errors more effectively.Here is the list of error keyphrases: {Error Keyphrases Set 𝐄-𝐬𝐞𝐭} Figure 6: Prompt for Clustering Error Keyphrases Ono–shot Learning Prompt:Please reason step by step, andput your final answer within \boxed{}.Here is an example:### Instruction: {Synthetic Question from 𝑫𝑺𝑬𝑰}### Response: {Synthetic Solution from 𝑫𝑺𝑬𝑰}### Instruction: {Question from 𝑫𝒅𝒆𝒗}### Response:Figure 7: One-Shot Learning Prompt for Selecting Syn- thetic Data Error Type-Specific Data Synthesis for GSM8K:Based on the given examples and error type, create 20 difficult math problems that are likely to cause errors in the model.Requirement:1. Identify the commonality in the given examples and consider what issues in these examples might cause the model to make mistakes.2. Make the new problems more challenging and diverse.3. Format the output strictly as a string in this structure: [{{"question":,"solution":}}, {{"question":,"solution":,}},...]. Ensure no additional output beyond the specified structure. Output in JSON format.4. The reasoning process for each step should be provided in the solution.5. Ensure the final answer is a number and place it on a new line, denoted by \n#### num.6. Don’t make any mathematical mistakes of your own!Provided Questions: {Sampled Error Question𝒒𝟏}{Sampled Error Question𝒒𝟐}{Sampled Error Question𝒒𝟑}{Sampled Error Question𝒒𝟒}{Sampled Error Question𝒒𝟓}{Sampled Error Question𝒒𝟔}{Sampled Error Question𝒒𝟕}{Sampled Error Question𝒒𝟖}ErrorType:{Errortype}Generated Questions: Figure 8: Prompt for GSM8K Error Type-Specific Data Synthetic. Error Type-Specific Data Synthesis for Math:Based on the given examples and error type, create 20 difficult math problems that are likely to cause errors in the model.Requirement:1. 
Identify the commonality in the given examples and consider what issues in these examples might cause the model to make mistakes.2. Make the new problems more challenging and diverse.3. Format the output strictly
as a string in this structure: [{{"question":,"solution":}}, {{"question":,"solution":,}},...]. Ensure no additional output beyond the specified structure. Output in JSON format.4. The reasoning process for each step should be provided in the answer.5. The final answer should be marked with \\boxed{{}}When generating math problems in JSON format:1) Use \\\\( and \\\\) for inline math2) Avoid complex LaTeX commands3) Use simple alternatives for arrows and dots4) Keep solutions concise and avoid unnecessary formatting5) Escape special characters properly6) Test the JSON validity before finalizing6.Don’t make any mathematical mistakes of your own!Provided Questions: {Sampled Error Question𝒒𝟏}{Sampled Error Question𝒒𝟐}{Sampled Error Question𝒒𝟑}{Sampled Error Question𝒒𝟒}{Sampled Error Question𝒒𝟓}{Sampled Error Question𝒒𝟔}{Sampled Error Question𝒒𝟕}{Sampled Error Question𝒒𝟖}ErrorType:{Errortype}Generated Questions:Figure 9: Prompt for MATH Error Type-Specific Data Synthetic. social media (Wei et al., 2025; Yu et al., 2024b), and hallucination (Xue et al., 2024b, 2023). C Manual Category Review We applied two manual adjustments after clus- tering: merging categories andexcluding cate- gories . During the clustering process, some duplicate or similar categories may be generated, such as “Timezone and Duration Calculation Errors” and “Time and Duration Calculation Errors,” or “Cal- culation Errors” and “General Calculation Errors.” These categories essentially represent the same or closely related error types, so we merge them into a unified category to avoid redundancy. We identify bad cases by comparing the model’s extracted answers with the correct ones. However, this method may lead to a small number of cor- rect answers being mistakenly identified as errors, which is a common issue in math evaluations. For- tunately, GPT-4o is usually able to determine that these responses are actually correct. Consequently, a special category like “No Error” or “Correct Pro- cess” may appear after clustering, and we manually exclude this category because it does not represent actual error types. Through these manual reviews, we can more accurately organize and analyze error categories, ensuring the reliability and consistency of the results.
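To make the review procedure concrete, the following sketch shows how the cluster list produced by the clustering prompt (Figure 6) could be post-processed. It is illustrative only: the paper applies these adjustments manually, and the merge map, exclusion set, and function names below are hypothetical examples rather than part of any released pipeline.

import json

# Illustrative sketch of the manual category review described above.
# Cluster format follows the clustering prompt in Figure 6:
# [{"Cluster name": ..., "Keyphrases": [...], "explanation": ...}, ...]
# The merge map and exclusion set are hypothetical examples.
MERGE_MAP = {
    "Timezone and Duration Calculation Errors": "Time and Duration Calculation Errors",
    "General Calculation Errors": "Calculation Errors",
}
EXCLUDED = {"No Error", "Correct Process"}

def review_clusters(raw_cluster_json):
    clusters = json.loads(raw_cluster_json)
    merged = {}
    for cluster in clusters:
        # Map duplicate or near-duplicate category names onto one canonical name.
        name = MERGE_MAP.get(cluster["Cluster name"], cluster["Cluster name"])
        if name in EXCLUDED:
            continue  # drop pseudo-categories that do not describe real errors
        entry = merged.setdefault(name, {"Cluster name": name, "Keyphrases": []})
        entry["Keyphrases"].extend(cluster["Keyphrases"])  # combine keyphrases of merged categories
    return list(merged.values())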
arXiv:2505.22597v1 [cs.AI] 28 May 2025HDDLGym: A Tool for Studying Multi-Agent Hierarchical Problems Defined in HDDL with OpenAI Gym Ngoc La1, Ruaridh Mon-Williams2,andJulie A. Shah1 1MIT 2University of Edinburgh ntmla@mit.edu, ruaridh.mw@ed.ac.uk, julie.a.shah@csail.mit.edu Abstract In recent years, reinforcement learning (RL) methods have been widely tested using tools like OpenAI Gym, though many tasks in these environments could also benefit from hi- erarchical planning. However, there is a lack of a tool that enables seamless integration of hierarchical planning with RL. Hierarchical Domain Definition Language (HDDL), used in classical planning, introduces a structured approach well- suited for model-based RL to address this gap. To bridge this integration, we introduce HDDLGym, a Python-based tool that automatically generates OpenAI Gym environments from HDDL domains and problems. HDDLGym serves as a link between RL and hierarchical planning, supporting multi- agent scenarios and enabling collaborative planning among agents. This paper provides an overview of HDDLGym’s de- sign and implementation, highlighting the challenges and de- sign choices involved in integrating HDDL with the Gym interface, and applying RL policies to support hierarchical planning. We also provide detailed instructions and demon- strations for using the HDDLGym framework, including how to work with existing HDDL domains and problems from In- ternational Planning Competitions, exemplified by the Trans- port domain. Additionally, we offer guidance on creating new HDDL domains for multi-agent scenarios and demonstrate the practical use of HDDLGym in the Overcooked domain. By leveraging the advantages of HDDL and Gym, HDDL- Gym aims to be a valuable tool for studying RL in hierarchi- cal planning, particularly in multi-agent contexts. Code — https://github.com/HDDLGym/HDDLGym 1 Introduction Hierarchical planning is essential for addressing complex, long-horizon planning problems by decomposing them into smaller, manageable subproblems. In reinforcement learn- ing (RL), hierarchical strategies can guide exploration along specific pathways, potentially enhancing learning efficiency. However, implementing RL policies within hierarchical frameworks often requires custom modifications to the origi- nal environments to incorporate high-level actions (Wu et al. 2021; Liu et al. 2017; Xiao, Hoffman, and Amato 2020). For example, in a Bayesian inference study using the Over- cooked game, subtasks are integrated as high-level actions Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.through specific rules embedded in the system codebase (Wu et al. 2021). Similarly, several RL studies use author-defined high-level actions, or macro-actions, to organize complex tasks (Liu et al. 2017; Xiao, Hoffman, and Amato 2020). While these studies highlight the benefits of hierarchical ap- proaches in complex scenarios, the additional programming required to integrate hierarchical layers can make it chal- lenging for external users to modify or implement alternative high-level strategies. This limitation reduces users’ flexibil- ity to implement diverse hierarchical strategies tailored to their specific requirements. The Hierarchical Domain Definition Language (HDDL) (H¨oller et al. 2020) is an extension of Planning Domain Def- inition Language (PDDL) (McDermott et al. 1998) that in- corporates hierarchical task networks (HTN) (Erol, Hendler, and Nau 1994). 
HDDL provides a standardized language for hierarchical planning systems and is supported by extensive documentation as well as a variety of domains and problems. Many of
these resources are sourced from the hier- archical task network tracks of the International Planning Competitions (IPC-HTN) (IPC 2023 HTN Tracks). HDDL’s intuitive and flexible design also allows users to define or modify problem-solving approaches by adjusting the hierar- chical task networks to suit their specific needs. To lever- age HDDL’s strengths in studying RL within hierarchical planning problems, we present HDDLGym — a framework that integrates HDDL with OpenAI Gym (Brockman et al. 2016), a standardized RL interface. HDDLGym is a Python- based tool that automatically generates Gym environments from HDDL domain and problem files. Multi-agent contexts are a key area in automated planning and hierarchical planning research. While HDDL is not in- herently designed for multi-agent systems, multi-agent fea- tures have been explored in planning formalisms like MA- PDDL (Kovacs 2012) and MA-HTN (Cardoso and Bordini 2017). However, to utilize the extensive, well-documented HDDL domains and problems from IPC-HTN, HDDLGym is designed to work closely with the HDDL defined by H¨oller et al. (2020). We introduce a new protocol for ex- tending HDDL domains and problems to support HDDL- Gym with multi-agent features. This includes making minor modifications to existing HDDL files from IPC-HTN. Main contributions This paper makes the following three key contributions: • We introduce HDDLGym, a novel framework that auto- matically bridges reinforcement learning and hierarchi- cal planning by automatically generating Gym environ- ments from HDDL domains and problems. • We provide a protocol for modifying HDDL domains to support multi-agent configurations within HDDLGym, thereby extending hierarchical planning techniques to complex multi-agent environments. • We detail HDDLGym’s design and usage, demonstrating its effectiveness with examples from the Transport do- main (in IPC-HTN) and the Overcooked environment (as shown in Figures 1a and 1b, respectively). Core features HDDLGym offers five core features, which are detailed throughout the paper. • Support verifying and adapting HDDL files to the tool (Section 4); • Support multi-agent collaborations (Sections 5.5 and 6); • Support centralized and decentralized planning (Section 5.5); • Support modifying design of RL policy in hierarchical planning (Sections 5.3 and 5.4); • Support training and deploying trained RL policies with various evaluation metrics and visualization (Sections 6). The remainder of this paper is organized as follows. Sec- tion 2 provides background information on HDDL and Ope- nAI Gym, the two foundational frameworks on which our system is built. Section 3 discusses relevant prior work and thus highlights our contributions to the field. Section 4 then introduces the formal framework of HDDLGym, detailing how HDDL is modified to align with the agent- centric design of this tool. Section 5 covers the design and implementation details of HDDLGym. Following this, sec- tion 6 demonstrates the use of HDDLGym with examples from the Transport domain, representing domains from IPC- HTN, and Overcooked, representing customized environ- ments. Section 7 discusses the key benefits and current lim- itations of the HDDLGym tool, along with future develop- ments to address these limitations and expand its applica- tions within artificial intelligence research. Finally, Section 8 concludes the paper. 2 Background 2.1 HDDL HDDL (H ¨oller et
al. 2020) is an extension of PDDL (Mc- Dermott et al. 1998). H ¨oller et al. (2020) define the domain and problem as follows. Definition of Planning Domain: A planning domain Dis a tuple (L, TP, TC, M)defined as follows. •Lis the underlying predicate logic. •TPandTCare finite sets of primitive and compound tasks, respectively. •Mis a finite set of decomposition methods with com- pound tasks from TCand task networks over the set TP∪TC.Definition of Planning Problem: A planning problem P is a tuple (D, sI, tnI, g), where: •sI∈Sis the initial state, a ground conjunction of posi- tive literals over the predicates assuming the closed world assumption. •tnIis the initial task network that may not necessarily be grounded. •gis the goal description, being a first-order formula over the predicates (not necessarily ground). In other words, beyond the action definition in PDDL, which establishes the rules of interaction with the environ- ment, HDDL introduces two additional operators: task and method . In HDDL, a task represents a high-level action, while a method is a strategy to accomplish a task. Multi- ple methods can exist to perform a single task. Essentially, a method is a task network that decomposes a high-level task into a partially or totally ordered list of tasks and actions. In HDDL, a task is defined with its parameters, and a method is defined with parameters, the associated task, pre- conditions, a list of subtasks with their ordering or a list of ordered subtasks. Examples of task and method definitions from the original HDDL work (H ¨oller et al. 2020) are: 1(:task get-to :parameters (?l - location)) 2(:method m-drive-to-via 3 :parameters (?li ?ld - location) 4 :task (get-to ?ld) 5 :precondition () 6 :subtasks (and 7 (t1 (get-to ?li)) 8 (t2 (drive ?li ?ld))) 9 :ordering (and 10 (t1 < t2))) In HDDL, state-based goal definition is optional. Goals are instead defined as a list of goal tasks in the HDDL prob- lem file. An example of the goal in a transport problem is as follows. 1(:htn 2 :tasks (and 3 (deliver package-0 city-loc-0) 4 (deliver package-1 city-loc-2)) 5 :ordering ()) More details about HDDL domain and problem files can be found in H ¨oller et al. (2020). In addition to the original format of HDDL, some modifications are required to make the HDDL domains and problems work smoothly with HD- DLGym. Details of the modifications are in Section 4. 2.2 OpenAI Gym OpenAI Gym (Brockman et al. 2016) is a widely adopted toolkit that provides a standardized interface for bench- marking and developing reinforcement learning (RL) algo- rithms. Its consistent API includes methods for environ- ment initialization, resetting, and interaction, allowing re- searchers to focus on RL algorithm development without dealing with environment-specific details. With a diverse range of environments, from simple tasks to complex sim- ulations like Atari games, Gym enhances reproducibility task-deliver package city-0 ⇒ method-deliver ⇒ task-unload package ⇒ method-unload package ⇒ action-drop package 12 0 1 2 0 12 0 12 0 12 0 task-deliver package city-0 ⇒ method-deliver ⇒ get-to city-1 ⇒ method-drive-to city-1 ⇒
action-drive city-1 task-deliver package city-0 ⇒ method-deliver ⇒ task-load package ⇒ method-load package ⇒ action-pickup package task-deliver package city-0 ⇒ method-deliver ⇒ task-get-to city-0 ⇒ method-drive-to city-0 ⇒ action-drive city-0 t = 0 t = 1 t = 2 t = 3 t = 4(a) Transport scenario Hierarchy of chef-1 t-make-soup 1-onion soup t-add-ingredient onion pot1 m-make-soup 1-onion soup a-interact chef-1 onion-pile Task Method Primitive Action Belief about chef-2’s hierarchy None chef-2 m-add-ingredient onion pot t-interact chef-1 onion-pile m-interact-direct chef-1 onion-pile t-make-soup 1-onion soup m-make-soup 1-onion soup (b) Overcooked scenario Figure 1: Examples of the Transport and Overcooked environments in HDDLGym and enables direct comparisons across RL methodologies. Therefore, integrating OpenAI Gym with HDDL creates a unified framework for designing and evaluating hierarchical RL approaches, combining RL’s adaptive learning with the structured decision-making of hierarchical planning. 3 Related Work PDDLGym (Silver and Chitnis 2020) constructs Gym en- vironments from PDDL domains and problems, serving as a valuable reference for our work. However, HDDL signif- icantly differs from PDDL, particularly in managing hier- archical task networks or task and method operators. Ad- ditionally, PDDLGym operates under a single-action-per- step model, which suits many PDDL domains but lacks the complexity needed for advanced applications, such as multi- agent contexts. In contrast, our framework, HDDLGym, is designed to accommodate multi-agent environments, en- abling the study of RL policies in more complex settings. Similarly, pyRDDLGym (Taitler et al. 2023) integrates a planning domain language, Relational Dynamic Influ- ence Diagram Language (RDDL) (Sanner 2010), with Gym. RDDL is adept at modeling probabilistic domains with in- tricate relational structures. However, it does not inher- ently support multi-level actions. This limitation requires significant adjustments when defining hierarchical prob- lems within PyRDDLGym. Users must creatively structure RDDL descriptions to represent sequences of actions, which can complicate the modeling of hierarchical tasks. NovelGym is a versatile platform that supports hybrid planning and learning agents in open-world environments (Goel et al. 2024). It effectively combines hierarchical task decomposition with modular environmental interactions to facilitate agent adaptation in unstructured settings. Never- theless, its hierarchical structure is relatively straightfor- ward, primarily relying on primitive and parameterized ac- tions defined in PDDL. Conversely, HDDLGym offers more advanced hierarchical capabilities through HDDL, granting users greater flexibility and complexity in specifying high- level strategies and problem-solving approaches. HDDLGym implements several critical extensions to sup-port hierarchical and multi-agent planning. As outlined in the Introduction, its five key features enable users to system- atically design and study hierarchical planning in conjunc- tion with RL approaches. In addition, the framework inte- grates visualization tools and evaluation metrics to facilitate both qualitative and quantitative policy analysis. These en- hancements allow HDDLGym to capture the full complex- ity of hierarchical decision-making in multi-agent environ- ments, enabling capabilities that prior frameworks have not fully supported. 
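Before turning to the formal framework, it helps to recall the standard Gym interaction loop (Section 2.2) into which HDDLGym plugs. The snippet below is a generic sketch using a stock Gym environment, not HDDLGym-specific code; as described later in Section 5.1, HDDLEnv follows the same reset/step pattern but its step function takes a dictionary of per-agent actions. Exact signatures vary across Gym and Gymnasium versions.

import gym

# Generic Gym loop (Brockman et al. 2016 interface); newer Gym/Gymnasium
# releases return slightly different tuples from reset() and step().
env = gym.make("CartPole-v1")  # any registered environment
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()           # placeholder policy
    obs, reward, done, info = env.step(action)   # observation, reward, done flag, debug info
env.close()

# HDDLEnv follows the same pattern, except that step() receives a dictionary
# mapping each agent to its chosen action, after which any valid environment
# actions are applied automatically.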
4 Formal Framework
Due to various differences in the original formalisms and purposes of HDDL and Gym, some modifications to HDDL domain files are required for HDDLGym to work smoothly. In this section, we introduce the agent-centric extension of HDDL, modified from the standard HDDL by
H ¨oller et al. (2020). The agent-centric extension only includes changes to the HDDL domain. The agent- centric planning domain is defined below: Definition 1. An agent-centric planning domain Dis a tuple D=⟨ta, L, T P, TC, M⟩, where: •tais an agent type hierarchy in the domain. •L is the underlying predicate logic. •TPis a finite set of primitive tasks, also known as actions. Actions can be further classified into agent actions and environment actions. •TCis a finite set of compound tasks. •Mis a finite set of decomposition methods with com- pound tasks from TCand task networks over the name TP∪TC. We next discuss the elements in Def. 1 that are different from the definition of planning domain in Sec. 2. Agent type hierarchy taOne major difference compared to the standard HDDL (H ¨oller et al. 2020) is the addition ofta.tais used to specify which types are classified as agent types within the domain. In an HDDL domain, this classification is done by defining the type “agent” within the :types block. For instance, in the Transport domain, the “vehicle” is designated as an agent type, as shown in the line 5 of the types block below. This approach allows the domain to clearly differentiate agent types from other enti- ties, enabling more structured interactions within hierarchi- cal planning tasks. 1(:types 2 location target locatable - object 3 vehicle package - locatable 4 capacity-number - object 5 vehicle - agent ) Primitive Task Set TPThe primitive task set, TP, encom- passes all actions defined within the domain, classified as ei- ther agent actions or environment actions. Agent actions in- clude one or more agents as parameters, while some actions - initially defined without agent parameters due to the nature of their predicates - must be modified to include agents if these actions are performed on behalf of agents. Addition- ally, in RL, particularly in multi-agent settings, it is essen- tial to ensure that the domain includes a none action for each agent, enabling an agent to choose no action for a given step. Therefore, the HDDL domain file should incorporate the fol- lowing action block to support the none action functionality. 1(:action none 2 :parameters (?agent - agent) 3 :precondition () 4 :effect ()) On the other hand, environment actions exclude agents from their parameters, making them non-agent actions. These actions execute automatically as soon as their precon- ditions are met immediately after all agents have completed their actions, enabling flexible environment dynamics. Compound Task Set TCThe compound task set, TC, in- cludes all high-level tasks, aligning with the standard HDDL structure as described by H ¨oller et al. (2020). However, in HDDLGym’s implementation, additional task definition de- tails are required. Specifically, to ensure task completion, HDDLGym checks the current world state against the de- fined task effects. Thus, task definitions must include ex- plicit effects. In the following example from the Transport domain, the text in bold highlights the revisions made to the original HDDL task definition. 1(:task get-to 2 :parameters ( ?agent - agent ?dest - location) 3 :effect (at ?agent
?destination) ) The remaining components in the tuple, LandM, are con- sistent with the standard HDDL formulation as defined by H¨oller et al. (2020). Note that to facilitate the modification and verification of HDDL domains for compatibility with HDDLGym, the codebase includes an interactive platform featuring autonomous task effect generation, agent parame- ter augmentation for actions, and related capabilities. 5 HDDLGym Framework This section explains the design and implementation of HD- DLGym. It covers (1) details of HDDLEnv as a Gym envi- ronment, (2) the definition of the Agent class, (3) observa-tion and action space details, (4) RL policy, (5) planning in multi-agent scenarios, and (6) the HDDLGym architecture. 5.1 Gym and HDDLEnv In the HDDLGym framework, we introduce HDDLEnv, a Python class that extends the Gym environment to support hierarchical planning with HDDL. Initialize and reset functions HDDLEnv is initialized us- ing HDDL domain and problem files, together with an op- tional list of policies for all agents. During initialization, the main parser function converts HDDL files into an en- vironment dictionary, setting up the initial state and goals. Agents are then initialized with their associated policies. The reset function optionally accepts a new or updated list of agents’ policies and resets the environment to its initial state and goal tasks as specified in the HDDL problem file. It also re-initializes agents with their associated policies. Step function The step function in HDDLEnv accepts an action dictionary from the agents and returns the new state, reward, ‘done’ flag (indicating win or loss), and debug infor- mation, similar to the format of OpenAI Gym’s step func- tion. After executing agents’ actions, it also checks and ap- plies any valid environment actions. Environment actions are any actions that are not associated with any agent. This design enables the environment to change independently from agents’ behaviors. If the current state of the environ- ment satisfies the precondition of an environment action, that action is executed automatically. 5.2 Agent HDDLGym is designed as an agent-centric system. It inher- ently focuses on the interactions and actions of agents within the environment. Therefore, defining an Agent class, as in Definition 2 below, is critical in implementing HDDLEnv. Definition 2. An agent Ais defined with a tuple ⟨N, P, B, H, U ⟩where: •Nis agent name, •Pis a policy, •Bis set of agents, representing the agent’s belief about other agents’ configuration in the environment, •His a list of tasks, methods, and action, representing the action hierarchy of the agent, •Uis a function to update the action hierarchy of agent based on the current state of the world. Initialize an agent All agents in the environment are ini- tialized when an HDDLEnv instance is created or reset. Each agent is initialized with a name Nand a policy P. The agent’s name Nis derived directly from the HDDL domain and problem files. The policy Prefers to an RL strategy that the agent employs to support its hierarchical planning process. This initialization configures the agent to operate within the hierarchical planning framework. Function U: updates agent’s action hierarchy H An im-
portant method in the Agent class is the update hierarchy function U. This function checks whether any tasks or ac- tions in the agent’s hierarchy Hhave been completed by comparing their effects with the current state of the world. Once tasks or actions are completed, they are removed from both the agent’s hierarchy Hand the agent’s belief about other agents’ action hierarchies ( B).Uis called for each agent after the environment’s step function is executed, en- suring that the agents are prepared to plan the next step. 5.3 Observation and Action Spaces Observation space In general multi-agent problems, each agent can be assumed to have knowledge about the current state of the world, its own hierarchical actions, and other agents’ previous actions. Different RL methods have differ- ent designs for which information should be included in the inputs and outputs of the RL policies. For example, in the default setup, we set the observation of each agent to in- clude information on (1) current state of the world, (2) goal tasks, (3) the agent’s action hierarchy, and (4) other agents’ previous primitive actions. Meanwhile, the RL model should return information about the action hierarchy of each agent. In our current design, we use dynamic grounded pred- icates to represent the current state of the world. Dy- namic grounded predicates represent a subset of all possible grounded predicates within the environment. In HDDL, and PDDL more broadly, predicates can either be static or dy- namic. Static predicates define unchanging world conditions (e.g., spatial relationships between locations), while dy- namic predicates represent changing world conditions (e.g., agent positions). Dynamic predicates can be added or re- moved from the world state by actions. Our default setup focuses on using dynamic grounded predicates, rather than the full set of grounded predicates, to reduce the observation space. This scalability trade-off is illustrated in Table 1, specifically in Overcooked domain. However, this design choice may limit the generalizabil- ity of the RL policy, as it is tailored to a specific set of HDDL problem instances and may not transfer well to prob- lems with different agents, objects, and/or static world con- ditions. To address this, HDDLGym also allows users to customize the state representations to accommodate diverse needs across domains, for example, using grounded predi- cates in the Transport domain, where their dimensions re- main manageable as the problem size increases (Table 1). Action space Unlike PDDL or non-hierarchical planning problems, HDDLGym aims to provide not only primitive actions but also the full action hierarchies that reflect high- level strategies guiding agent behavior. As shown in Table 1, the set of all possible grounded operators can grow pro- hibitively large in complex domains, while lifted operators offer a more compact alternative. A middle-ground approach uses lifted operators with associated objects to retain contex- tual information. Our default setup uses the middle-ground approach. The action hierarchy of each agent is one-hot en- coded over all lifted operators and objects, reducing obser- vation and action space sizes by omitting subtask ordersDomain Transport Overcooked # of Agents 1 2 3 2
3 G. predicates 35 79 269 937,158 1,186,066 G. dynamic pred. 13 38 132 90 101 G. operators 184 1570 11,979 200,597 300,860 G. actions 58 610 4,093 175,791 263,668 L. operators 14 14 14 18 18 L. actions 4 4 4 5 5 Objects 8 13 23 18 19 Table 1: Dimensions of Lifted and Grounded Represen- tations in Transport and Overcooked Problems. G.de- notes Grounded, and L.denotes Lifted. The large number of grounded predicates in Overcooked highlights the need to use dynamic grounded predicates for state representation. Likewise, grounded operators are impractical for defining the action space in RL model training due to their scale. and specific object-operator links. HDDLGym also supports flexible state and action space designs, including multiple RL models for different operator types and customizable en- coding schemes for diverse domains and experiments. 5.4 RL Policy The RL policy plays a crucial role in the HDDLGym frame- work by supporting the search for an optimal hierarchical plan for each agent. In the default setup, the policy takes the observation as input, which includes information about dy- namic grounded predicates, goal tasks, and previous action hierarchies. Its output is the probabilities of lifted operators and objects, which are then used to compute the probabilities of grounded operators. These probabilities guide the search for action hierarchies within the HDDLGym planner, as dis- cussed in Sec. 5.5. In this work, we implemented Proximal Policy Optimiza- tion (PPO) (Schulman et al. 2017) for discrete domains to effectively explore the application of RL in hierarchical planning problems. HDDLGym is designed to enable users to flexibly integrate their preferred RL models, including multi-layer perceptrons (MLPs), recurrent neural networks (RNNs), and others. Comprehensive guidance and detailed examples are provided in the tutorial Python notebook in- cluded in the codebase. 5.5 Planning for Multi-agent Scenarios HDDLGym is designed to work in multi-agent settings; therefore, the planner also considers collaboration between agents. The HDDLGym planner is designed in a centralized format. In decentralized planning, each agent runs the cen- tralized planner using its own information and beliefs about the other agents. Algorithm 1 outlines the approach of the HDDLGym Planner, where agents determine their action hierarchies by iteratively updating through valid operator combinations. Particularly, HDDLGym Planner’s inputs are list Aof all agents with uncompleted hierarchies, policy P, and deter- ministic flag d. The HDDLGym planner is a centralized planner. In case of decentralized planning, the list Ainclude a real agent and that agent’s belief about other agents. The deterministic flag ddetermines whether the selection pro- cess should follow a deterministic or probabilistic approach when choosing operators to form agents’ action hierarchies. The policy Pguides the search for a suitable hierarchy based on the flag d. The HDDLGym Planner outputs an updated list of agent instances, Aupdated , with each agent’s action hierarchy terminating in a grounded primitive action. The planner begins by initializing an empty list, Done , to keep track of agents whose hierarchies end with an action (line 1). The while loop from lines 2 to 28 continues until all agents have completely updated
their hierarchies. Within this loop, an empty list, OA, is initialized to store the valid operators of all agents in A(line 3). Next, the for-loop from lines 4 to 17 iterates to find all valid operators Oafor each agent a, for a∈ A. To do this, the algorithm first checks if ais in Done , meaning its hierarchy is complete (line 5). If so, then Oais set as a list containing the agent a’s primitive action (line 6). Otherwise, the while loop from lines 8 to 14 runs until it finds a non-empty Oa. In this while loop, the list of valid operators for ais validated in line 9; if no valid operators are found (line 10), the last operator in a’s hierarchy is removed, and the loop is rerun. However, if a’s hierarchy is already empty, indicating that no valid operator can be found for a, thenone action is added to Oa(line 12). The operator list Oafor each agent is then added to OA, the list of operators for all agents (line 16). This list, OA, is subsequently used to generate all combinations of joint op- erators, C(line 18). Line 19 details the pruning of invalid combinations in C. A combination is invalid if it violates either of two conditions: first, no agent should perform mul- tiple different actions; and second, no action in the combina- tion should have effects that conflict with the preconditions of other actions. After this pruning, Ccontains only valid operator combinations. Lines 20 to 25 describe how the policy Pis applied to select a combination cfrom the list of valid combinations, C. The probability list, PO, corresponding to Cis generated using policy P. Depending on the deterministic flag d, the chosen combination cis selected in either deterministically (line 22) or probabilistically (line 24). With the combination of operators determined, the next step is to use it to update each agent’s hierarchy (line 26). The list Done is then updated if any agents have completed hierarchies (line 27). This process is repeated until all agents inAhave completed their hierarchies. At this point, the HDDLGym planner returns the list of fully updated agents, Aupdated , as shown on line 29. 5.6 HDDLGym Architecture The high-level architecture of HDDLGym is demonstrated in Figure 2. As discussed in Section 5.3, The RL policy, described in Section 5.4, takes an observation as input and outputs a probability distribution over action representations (see Section 5.3). These probabilities guide the HDDLGym planner in selecting the most appropriate action hierarchy for each agent, as outlined in Algorithm 1 and Section 5.5. Primitive actions are then extracted from the updated hier-Algorithm 1: HDDLGym Planner Input : list of agents A, deterministic flag d, policy P Output : updated list of agents A 1:Initialize an empty list Done to keep track of agents whose hierarchies reached action. 2:while not all agents in Done do 3: Initialize an empty list OAfor valid operators of A 4: foragent ainAdo 5: ifainDone then 6: Oa←[action of agent a] 7: else 8: while Oanot empty do 9: Oa←a list of
valid operators for a 10: ifOais empty then 11: Remove the last operator of agent ahierar- chy from its hierarchy 12: If no more operator from a’s hierarchy to remove, add none action to Oa 13: end if 14: end while 15: end if 16: AddOatoOA 17: end for 18: C←Combinations of joint operators from OA 19: Remove any invalid combinations from C 20: PO←Probability of combinations in Cwith P 21: ifdis True then 22: c←argmaxc∈CPO 23: else 24: c←Randomly from Cwith weights be PO 25: end if 26: Update hierarchies of all agent Awith operators in c 27: Check each agent’s hierarchy and update Done if any hierarchy ends with action 28:end while 29:return Aupdated archies and executed in the environment, resulting in a new world state via the step function (Section 5.1). Completed tasks or actions are archived and removed from the each agent’s action hierarchy Hwith method U(Section 5.2) be- fore proceeding to the next cycle. This integrated process supports dynamic and adaptive agent behavior based on both learned policies and hierarchical planning. 6 Applications Table 2 lists the domains, primarily from IPC-HTN (IPC 2023 HTN Tracks) and custom designs, that have been stud- ied with HDDLGym and are included in the codebase1. We highlight two representative examples: Transport from IPC- HTN and Overcooked, a popular multi-agent problem in OpenAI Gym. 1This list will be updated as the tool evolves. For the most recent version of this work, please refer to the latest arXiv version or visit: https://ngocla.github.io/files/HDDLGym.pdf Action hierarchies of all agents HDDL Env Action: last element of each agent’s hierarchy HDDLGym Planner Observation (world state, goals, and current action hierarchy) RL Policy Action (operators, or actions, or methods, etc.)New world state, Archived action hierarchy Figure 2: HDDLGym high-level architecture. Outputs of RL policy help HDDLGym Planner update the action hier- archy of each agent. Then, primitive actions are extracted from the hierarchies, and applied to the environment. Domain Source Agent-centric Collab Transport IPC-HTN Yes No Transport Collab. Modified IPC-HTN Yes Yes Overcooked Ours Yes Yes Rover IPC-HTN Yes No Satellite IPC-HTN Yes No Depots IPC-HTN Hidden No Minecraft-Player IPC-HTN Hidden No Barman-BDI IPC-HTN Hidden Yes Search and Rescue Ours Yes Yes ZenoTravel IPC-HTN Yes No Taxi Ours Yes No Factories-simple IPC-HTN No No Table 2: List of Domains Included in the HDDLGym Codebase (As of June 2025) .Agent-centric domains are those in which the agent can be explicitly identified. Hid- denindicates that the domain is agent-centric, but the agent is not specified as an explicit parameter. 6.1 IPC-HTN Domains As previously discussed, Gym defines interactions between agents and the environment. Therefore, not all HDDL do- mains from IPC-HTN (IPC 2023 HTN Tracks) are di- rectly compatible with HDDLGym. Since agent specifica- tion within a domain is necessary, this requirement may not be feasible or appropriate for every IPC-HTN domain. HD- DLGym is particularly well-suited to domains with agent- centric systems, such as Transport (where the vehicle serves as the agent), Rover (with the rover as the agent), and Satel- lite (with the satellite as the agent). To
better illustrate the ap- plications of these agent-centric environments, we provide a detailed discussion on Transport domain as follows. Transport domain The goal of a Transport problem is to deliver one or more packages from their original locations to designated locations. The resulting action hierarchy of an 1-agent Transport problem is illustrated in Figure 1a. In this scenario, the truck completes the delivery package goal task after four actions. At each step, the truck’s action hierarchy begins with the goal task and concludes with a primitive action. The action hierarchy updates after each step, following a se- quence of subtasks in method-deliver to accomplish thedelivery package goal.To evaluate the capability of handling collaborative in- teractions in Transport domain, we embed the collaborative task, method, and action to the Transport domain. Specif- ically, task transfer , method m-deliver-collab , and action transfer-package are added in the domain to enable the packages to be transferred from one vehicle to another when the vehicles are at adjacent locations. Details of these collaborative operators can be found in the code- base. Following this template, users can explore more inter- esting interactions and modifying Transport domain to study heterogeneous multi-agent problems. Above is an example of how to modify an existing IPC- HTN domain to study with HDDLGym and explore more interesting features for multi-agent hierarchical planning. A similar process can be applied to other domains such as Rover, Satellite, and Barman-BDI, to plan with HDDLGym in single or multi-agent contexts. 6.2 OpenAI Gym-based Domains Writing HDDL domains and problems for an environment is not trivial, especially domains with complicated interaction rules. While there are many ways to do so, we suggest start- ing with the goal task, then designing methods to achieve the goal task, then coming up with other intermediate tasks and methods for them, and gradually working to the primitive actions. Here is an example of how HDDLgym is applied in Overcooked environment (Carroll et al. 2019). Overcooked Overcooked (Carroll et al. 2019) is a popu- lar Gym-based environment for studying RL, modeled after the cooperative, fast-paced mechanics of the original game. Players, acting as chefs (agents), collaborate to prepare an onion soup by placing onions in a pot, cooking, pouring the soup into a bowl, and serving it (see Figure 1b). In typical Overcooked scenarios, each agent can perform six primitive actions: moving in a 2D gridworld (up, down, left, right), interacting with objects, or doing nothing. Al- though the whole Overcooked scenarios could be fully de- fined using HDDL, we found it more efficient to utilize HD- DLGym for high-level planning and then apply A∗(Ducho ˇn et al. 2014) for motion planning to find the primitive actions as listed above. The core HTN for Overcooked domain is en- tailed in Figure 3. In the HDDL domain, we define follow- ing tasks: make-soup, add-ingredient, cook, deliver, wait, and task-interact. Each of them has one or more method to complete the tasks. Figure 3 only lists several key HTNs of the domain, though all HDDL domain and problem files of Overcooked environment can be
found from the codebase. Additionally, Figure 1b shows an example of a hierarchy of an agent and its belief about the other agent’s hierarchy. The following videos help visualize the result of combin- ing HDDLgym in task planning and use A∗for motion plan- ning in various Overcooked layouts: Bottleneck — https://tinyurl.com/hddlBottleNeckRoom Coord. ring — https://tinyurl.com/hddlCoordinationRing Left isle — https://tinyurl.com/hddlLeftIsle Counter circuit — https://tinyurl.com/hddlCounterCircuit Cramped room — https://tinyurl.com/hddlCrampedRoom make-soup (?dish, ?serving) Add-ingredient (?onion, ?pot) deliver (?dish, ?serving) cook (?pot) task-interact (?chef, ?onion-pile) task-interact (?chef, ?pot) task-interact (?chef, ?pot) task-interact (?chef, ?bowl-pile) task-interact (?chef, ?pot) wait (?pot) task-interact (?chef, ?serving) Figure 3: HTNs of the Overcooked domain. 6.3 Evaluation Metrics Complexity and Difficulty HDDLGym provides a range of metrics to evaluate RL models within hierarchical plan- ning contexts, with particular focus on plan complexity and planning difficulty. Plan complexity can be assessed by ex- amining the dimensionality of the elements defining a prob- lem, as summarized in Table 1. Complementing this, plan- ning difficulty can be estimated prior to training by mea- suring planning time, number of planning steps, and success rate using random exploration. Table 3 presents such evalua- tions across the Transport, Overcooked, Rover, and Satellite domains as the number of agents and problem size increase. Several domains show a 0% success rate with three or more agents, highlighting scalability issues during training. Low success probabilities from exploration increase the risk of low-quality data, which can degrade model performance. These metrics also enable comparisons between different methods, also known as hierarchical task networks (HTNs). For example, in the Transport domain, adding a collabo- rative method in simple scenarios with one or two homo- geneous agents can unnecessarily increase planning time. However, in more complex settings with heterogeneous agents, collaborative strategies improve efficiency, reducing plan lengths (e.g., 25 vs. 18 steps). In summary, evaluating the complexity and difficulty of the plan in HDDLGym prior to training provides valuable information on the scalability of the domain. This helps users better tailor HTN structures and adjust training strate- gies, such as reward shaping or extending exploration hori- zons, to improve convergence and model performance. Training Several metrics are implemented to evaluate the RL training process. The first is the loss value, which mon- itors the convergence rate and is illustrated in plots A–C of Figure 4 for the Transport 1-agent, 2-agent, and 3-agent problems. Loss plots are generated and saved at specified intervals during training to help users track the learning progress. Secondly, the policy is periodically evaluated us- ing quantitative metrics such as cumulative discounted re- ward, success rate, planning time, and planning steps to as- sess whether the RL policy is converging toward an opti- mal solution. These evaluation results are recorded in two graphs, as shown for the Transport 1-agent problem in parts D and E of Figure 4.Domain # agents Plan time (sec) Avg steps. Success rate Transport 1 0.059 21 100% Transport Collab 1 0.077 22 100% Transport 2 0.374 41 38% Transport Collab. 2 1.317 35 46% Transport Collab. hetero. 2 0.795 18 1% Transport hetero. 2 0.367 26