as the no-bounding-box rethink content. Finally, we compute and compare average attention scores across various layers for input visual tokens during the generation of both the original and the no-bounding-box rethink content. This comparison is repeated for 100 randomly selected data samples in the GQA subset, and, because generation lengths vary, we report results for the first 50 generated tokens. From the results shown in Figure 5, we find that the average attention score to each visual token is overall significantly higher in the original rethink segment than in the no-bounding-box rethink segment. This indicates that the presence of bounding boxes in the original pre-rethink segment leads to increased visual attention in the subsequent reasoning chain, which potentially benefits the image consistency of the reasoning process.

4.4 Research Question 3: What is the effect of scaling training data?

Figure 6: Effect of scaling training data on model performance.

To understand how GRIT’s performance scales with increasing training data, beyond the data efficiency shown with 20 samples, we trained variants of the Qwen2.5-VL 2B model using 20, 500, and 7,000 image-question-answer triplets. These samples were taken from the VSR and TallyQA datasets, maintaining the same mix of data sources. We evaluate these trained variants on the testing datasets introduced in Section 4.1 with the GPT-as-judge answer accuracy score. Figure 6 presents the results across testing data categorized as either in-domain, sourced from the same datasets as training (VSR, TallyQA), or out-of-domain (GQA, MathVista-mini). The results show that answer accuracy generally increases with more training data used for GRIT. We find that the improvements in answer accuracy are more subtle for out-of-domain testing data compared to the growth observed on in-domain testing data, which highlights a common challenge of generalization. Furthermore, we observe that the model’s performance growth diminishes, suggesting that continued improvement may require exponentially more data. We interpret this observation in light of recent studies suggesting that reinforcement learning with verifiable rewards primarily biases existing reasoning patterns towards rewarded outcomes rather than fundamentally changing them [32], with performance also heavily influenced by pre-training [33, 34]. This implies that to significantly enhance the out-of-domain performance of GRIT-trained models, increasing the diversity of training data is more critical than simply scaling up the data volume, highlighting a key direction for future study.

5 Conclusion

In this paper, we propose Grounded Reasoning with Images and Text (GRIT), a novel method where models generate visually grounded reasoning chains by interleaving natural language with explicit bounding box coordinates referencing relevant image regions. We propose the grounded reasoning paradigm and the GRPO-GR reinforcement learning training method. As a result, GRIT is extremely data efficient and requires neither dense reasoning chains nor explicit bounding box annotations. Our experiments demonstrate that GRIT effectively trains pre-trained MLLMs to produce accurate and coherent grounded reasoning. While limited resources focus our study on smaller MLLMs to validate the GRIT method and its efficiency, rather than exploring peak performance at full scale, we identify data variety and model pre-training as key directions for further improving the generalizability of GRIT-trained models.
Nonetheless, GRIT’s efficacy in instilling complex grounded reasoning with minimal data highlights its significant potential.

6 Acknowledgments

We thank Qianqi Yan, Kaiwen Zhou, and Zhen Zhang, members of the ERIC Lab, for their valuable feedback and engaging discussions. We are also grateful to Orby AI for generously providing the computing resources.

References

[1] OpenAI. Introducing openai o1-preview. https://openai.com/index/introducing-openai-o1-preview/, 2024.
[2] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[3] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[4] Qwen Team. Qwq-32b: Embracing the power of reinforcement learning, March 2025.
[5] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
[6] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[7] Yanda Chen, Joe Benton, Ansh Radhakrishnan, Jonathan Uesato, Carson Denison, John Schulman, Arushi Somani, Peter Hase, Misha Wagner, Fabien Roger, Vlad Mikulik, Sam Bowman, Jan Leike, Jared Kaplan, et al. Reasoning models don’t always say what they think. Anthropic Research, 2025.
[8] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966, 2023.
[9] Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, and Yuandong Tian. Training large language models to reason in a continuous latent space. arXiv preprint arXiv:2412.06769, 2024.
[10] Haozhan Shen, Peng Liu, Jingcheng Li, Chunxin Fang, Yibo Ma, Jiajia Liao, Qiaoli Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, et al. Vlm-r1: A stable and generalizable r1-style large vision-language model. arXiv preprint arXiv:2504.07615, 2025.
[11] Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Zhe Xu, Yao Hu, and Shaohui Lin. Vision-r1: Incentivizing reasoning capability in multimodal large language models. arXiv preprint arXiv:2503.06749, 2025.
[12] Yi Yang, Xiaoxuan He, Hongkun Pan, Xiyan Jiang, Yan Deng, Xingtao Yang, Haoyu Lu, Dacheng Yin, Fengyun Rao, Minfeng Zhu, et al. R1-onevision: Advancing generalized multimodal reasoning through cross-modal formalization. arXiv preprint arXiv:2503.10615, 2025.
[13] Jinguo Zhu, Weiyun Wang, Zhe Chen, Zhaoyang Liu, Shenglong Ye, Lixin Gu, Yuchen Duan, Hao Tian, Weijie Su, Jie Shao, et al. Internvl3: Exploring advanced training and test-time recipes for open-source multimodal models. arXiv preprint arXiv:2504.10479, 2025.
[14] Fangyu Liu, Guy Emerson, and Nigel Collier. Visual spatial reasoning. Transactions of the Association for Computational Linguistics, 11:635–651, 2023.
[15] Manoj Acharya, Kushal Kafle, and Christopher Kanan. Tallyqa: Answering complex counting questions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8076–8084, 2019.
[16] Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-v: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V, 2025. Accessed: 2025-02-02.
[17] OpenAI. Introducing openai o3 and o4-mini. https://openai.com/index/introducing-o3-and-o4-mini/, 2025.
[18] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2022.
[19] Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923, 2023.
[20] Zhenfang Chen, Qinhong Zhou, Yikang Shen, Yining Hong, Zhiqing Sun, Dan Gutfreund, and Chuang Gan. Visual chain-of-thought prompting for knowledge-based visual reasoning. In AAAI Conference on Artificial Intelligence, 2024.
[21] Chancharik Mitra, Brandon Huang, Trevor Darrell, and Roei Herzig. Compositional chain-of-thought prompting for large multimodal models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14420–14431, 2024.
[22] Kesen Zhao, Beier Zhu, Qianru Sun, and Hanwang Zhang. Unsupervised visual chain-of-thought reasoning via preference optimization. arXiv preprint arXiv:2504.18397, 2025.
[23] Hao Shao, Shengju Qian, Han Xiao, Guanglu Song, Zhuofan Zong, Letian Wang, Yu Liu, and Hongsheng Li. Visual cot: Advancing multi-modal language models with a comprehensive dataset and benchmark for chain-of-thought reasoning. Advances in Neural Information Processing Systems, 37:8612–8642, 2024.
[24] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.
[25] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6700–6709, 2019.
[26] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models, 2024.
[27] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023.
[28] Yiyang Yao, Peng Liu, Tiancheng Zhao, Qianqian Zhang, Jiajia Liao, Chunxin Fang, Kyusong Lee, and Qing Wang. How to evaluate the generalization of detection? a benchmark for comprehensive open-vocabulary detection. arXiv preprint arXiv:2308.13177, 2023.
[29] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36:46595–46623, 2023.
[30] Tom B Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
[31] Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, and Jianfeng Gao. Set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v. arXiv preprint arXiv:2310.11441, 2023.
[32] Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in llms beyond the base model? arXiv preprint arXiv:2504.13837, 2025.
[33] Yihe Deng, Hritik Bansal, Fan Yin, Nanyun Peng, Wei Wang, and Kai-Wei Chang. Openvlthinker: An early exploration to complex vision-language reasoning via iterative self-improvement. arXiv preprint arXiv:2503.17352, 2025.
[34] Hardy Chen, Haoqin Tu, Fali Wang, Hui Liu, Xianfeng Tang, Xinya Du, Yuyin Zhou, and Cihang Xie. Sft or rl? an early investigation into training r1-like reasoning large vision-language models. arXiv preprint arXiv:2504.11468, 2025.

Table 2: Statistics for the testing data used in the experiments. We collect the testing data from seven diverse benchmarks.

| | VSR | TallyQA | GQA | MathVista | MME | OVDEval |
|---|---|---|---|---|---|---|
| Counts | 288 | 491 | 509 | 1000 | 240 | 2164 |
| Avg question/answer length | 6.7/1.0 | 6.0/1.0 | 7.1/1.0 | 38.2/1.2 | 13.3/1.0 | 16.4/4 |
| Ratio of multi-choice and yes/no questions (%) | 71.2 | 0 | 58.9 | 70.8 | 100 | 0 |
| Ratio of annotated grounding targets (%) | 58.8 | 25.6 | 25.3 | - | - | 17.3 |

A Details for Testing Data

For our evaluation, we curated testing data derived from seven public open-source datasets covering a range of visual reasoning and grounding tasks. The statistics for the testing data are shown in Table 2.

• VSR [14] tests spatial relation verification. For our VSR evaluation set, we source question-image-answer triplets from the VSR subset of the Visual CoT benchmark [23] and manually filter out those with ambiguous answers.
• TallyQA [15] focuses on counting; we uniformly sample evaluation questions where the target object counts range from 0 to 9 to create our TallyQA evaluation set.
• GQA [25] offers scene-graph-grounded, compositional object spatial questions. We first take the GQA subset from the Visual CoT benchmark and then manually filter these to retain high-quality instances for our GQA evaluation set.
• From MME [26], we use only the counting, position, and existence subsets to broaden our evaluation scope.
• MathVista [27] evaluates mathematical reasoning in visual contexts. Following prior works, we adopt its TestMini split.
• Finally, OVDEval [28] is an open-vocabulary detection (OVD) testing set that requires the model to ground fine-grained semantics from the language query to the coordinates of visual features. We use its position subset and simplify it to object detection tasks with a single target.

Among these evaluation sets, those derived from VSR, TallyQA, and GQA are accompanied not only by ground-truth language answers but also by annotations of bounding boxes for image regions critical for deriving the answer. Specifically, we manually refine the bounding box coordinates from the Visual CoT benchmark for our VSR and GQA data. For our TallyQA evaluation sets, we adopt the original bounding box annotations. All evaluation sets, except for OVDEval, are VQA benchmarks where the required answer is a single word or short phrase. In contrast, OVDEval differs in that grounding is not an optional component of the reasoning chain but is explicitly required as the answer to queries.
B Details for Training Data

To demonstrate the data efficiency of our GRIT method, we collect a small training dataset consisting of only 20 unique image-query-answer triplets. These triplets are sourced from existing open-source VQA datasets, covering both grounding and visual reasoning challenges. Specifically, we selected ten from the Visual Spatial Reasoning (VSR) dataset [14] and ten from the TallyQA dataset [15]. We chose examples from VSR as they typically involve object localization and spatial relation reasoning (e.g., "Is the motorcycle away from the bird?"). From TallyQA, we selected counting tasks (e.g., "How many signs are on the post?"), specifically ensuring the chosen examples uniformly cover object counts from 0 to 4, as these tasks naturally lend themselves to explicit grounded counting within the reasoning process. This curated dataset engages the models in multi-step visual analysis, serving to evaluate GRIT’s ability to strengthen the link between visual grounding and logical deliberation.

Table 3: Ablation study on the importance of counting data and the grounded-target-counting reward. Comparison of the original GRIT-trained model, trained with 10 VSR + 10 TallyQA samples and the counting reward, with an ablated variant, trained with 20 VSR samples and without the counting reward. Results show a performance decrease in the ablated model.

| | GIoU In-domain | GIoU Out-of-domain | ACC In-domain | ACC Out-of-domain |
|---|---|---|---|---|
| GRIT | 0.387 | 0.437 | 51.8 | 64.4 |
| GRIT w/o counting data & reward | 0.349 | 0.378 | 53.8 | 60.0 |

C Ablation on Counting-related Training Data and Reward

To validate the importance of including counting-related training data and the associated grounded-target-counting reward within the GRIT method, we conduct an ablation experiment. The standard training utilizes data from TallyQA, which consists of queries about object quantities in images with single-digit numerical answers. For these counting tasks, we employ the grounded-target-counting reward (detailed in Section 3.2), designed to encourage the generation of a flexible number of bounding boxes matching the count in the reasoning output.

In the ablation, we train the InternVL-3 2B model with a modified dataset and reward function. Instead of using 10 VSR and 10 TallyQA samples with the full reward set, we train with a dataset of 20 VSR samples and exclude the grounded-target-counting reward during training. This allows us to isolate the contribution of the counting-related data and reward component. We report the GPT-as-judge answer accuracy score and the grounding IoU on both in-domain and out-of-domain data (same as in Section 4.4). The results, presented in Table 3, show that excluding the counting-related data and the grounded-target-counting reward during training leads to a significant performance decrease in grounding, as indicated by a lower grounding IoU score compared to the original GRIT-trained model. Furthermore, we observe that this exclusion negatively impacts the model’s answer accuracy on out-of-domain data. As a result, this ablation study underscores the importance of including both counting-related training data and the grounded-target-counting reward within the GRIT method.
D Prompts

We append the prompt shown in Figure 7 to the model input during GRIT training and inference, as well as for the Chain-of-Thought baseline in the experiments. The prompt instructs models to follow the grounded reasoning paradigm; however, as shown in the experimental results, MLLMs without training face a significant performance drop. This is because the instruction requires the MLLMs to perform grounding and reasoning at the same time, which is very challenging for them in a zero-shot manner. It is worth noting that although the prompt mentions using "JSON" to show bounding boxes, the GRIT-trained model does not always adhere to such a format. This is intentional: the reward in GRPO-GR uses regular expressions, rather than the JSON format, to identify valid bounding boxes, providing the RL optimization of the model policy with more search space. It also indicates that the prompt suffix only provides an initialization for the policy, which can be adjusted with minimal impact as long as it includes an instruction for the grounded reasoning paradigm.

First, think between <think> and </think> while output necessary coordinates needed to answer the question in JSON with key ’bbox_2d’. Then, based on the thinking contents and coordinates, rethink between <rethink> </rethink> and then answer the question after <answer>.

Figure 7: Prompt suffix that is appended to the input of models during training and inference.

Additionally, in the GPT-aided answer-accuracy reward of GRPO-GR and the GPT-as-judge answer accuracy score in the experiments, we adopt the prompt format in Figure 8. Furthermore, in the experiments where we compute the Vision-Language Reasoning Cross-Modal Correlation metric for model outputs, we leverage GPT-4o to determine whether the bounding boxes in the grounded reasoning chains of models are more correlated to the natural language reasoning contents than randomly sampled bounding boxes. The prompt format we used is in Figure 9.

You are responsible for proofreading the answers, you need to give a score to the model’s answer by referring to the standard answer, based on the given question. The full score is 1 point and the minimum score is 0 points. Please output the score in the json form "{score: <score>}". The evaluation criteria require that the closer the model’s answer is to the standard answer, the higher the score. Question: {$question} Standard answer: {$answer} Model’s answer: {$predicted_content}

Figure 8: Prompt format for the GPT-as-judge answer accuracy score and the GPT-aided answer-accuracy reward. The $question and $answer are substituted with the query and ground truth in the data sample, and $predicted_content is replaced by the model output.

Please decide which image has the bounding boxes that match the following description: {$grounded_reasoning_masked} Reply with exactly "Image 0" or "Image 1".

Figure 9: Prompt format for calculating the Vision-Language Reasoning Cross-Modal Correlation metric. Here, the input images for GPT-4o are the same image drawn with two sets of bounding boxes, one from the generated grounded reasoning chain and one from random sampling. The $grounded_reasoning_masked is the generated grounded reasoning chain with any bounding box coordinates removed.
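As noted above, the GRPO-GR reward identifies bounding boxes with regular expressions rather than relying on strict JSON. The following is a minimal sketch of such a matcher together with a grounded-target-counting check; the pattern and function names are illustrative assumptions, not the authors’ exact implementation.

```python
import re

# Illustrative pattern: four comma-separated integers inside square brackets
# or parentheses, e.g. "[0, 28, 305, 364]" or "(42, 73, 433, 296)".
BBOX_PATTERN = re.compile(
    r"[\[\(]\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*[\]\)]"
)

def extract_bboxes(reasoning_text: str) -> list[tuple[int, int, int, int]]:
    """Return all bounding boxes found in a grounded reasoning chain."""
    return [tuple(map(int, m)) for m in BBOX_PATTERN.findall(reasoning_text)]

def counting_reward(reasoning_text: str, target_count: int) -> float:
    """Hypothetical grounded-target-counting reward: 1.0 when the number of
    generated boxes matches the ground-truth object count, else 0.0."""
    return float(len(extract_bboxes(reasoning_text)) == target_count)

# Example usage on a model output fragment:
chain = "<think> Two people. bbox_2d_1: [0, 28, 305, 364] bbox_2d_2: [275, 0, 410, 280] </think>"
print(extract_bboxes(chain))      # [(0, 28, 305, 364), (275, 0, 410, 280)]
print(counting_reward(chain, 2))  # 1.0
```

Because the matcher accepts any bracketed quadruple, the policy is free to emit boxes in several surface forms (JSON, bare lists, parentheses) without losing reward, which is the extra search space the paragraph above refers to.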
E More Examples

Question: What kind of animal stands on the ground?
Model output: <think> The image shows two animals standing on the ground. The larger animal is a cow with a white and black coat, and the smaller animal is a calf. The bounding box coordinates for the cow are approximately (42, 73, 433, 296), and for the calf, they are approximately (199, 160, 363, 293). </think> <rethink> The question asks about the animal that stands on the ground. Both the cow and the calf are standing on the ground, so either could be the correct answer. However, the cow is the larger and more prominent animal in the image. </rethink> <answer> cow
Ground truth answer: cow

(i) Our GRIT model accurately perceives the grounded target, reasons about the potential ambiguity in the question, and finally outputs a correct answer.

Question: Is there only one person appear in the image? Please answer yes or no.
Model output: <think> There is only one person in the image. bbox_2d_1: [0, 28, 305, 364] bbox_2d_2: [275, 0, 410, 280] </think> <rethink> The image shows two people, one in the foreground playing tennis and another in the background. bbox_2d_1: [0, 28, 305, 364] bbox_2d_2: [275, 0, 410, 280] </rethink> <answer> No
Ground truth answer: No

(ii) Although the bounding boxes generated during the grounded reasoning are slightly off, our GRIT model is still able to analyze the related image regions, successfully correct its initial wrong answer, and finally generate a correct answer.
arXiv:2505.15916v1 [cs.CL] 21 May 2025

BR-TaxQA-R: A Dataset for Question Answering with References for Brazilian Personal Income Tax Law, including case law

Juvenal Domingos Júnior1, Augusto Faria1, E. Seiti de Oliveira1, Erick de Brito2, Matheus Teotonio2, Andre Assumpção3, Diedre Carmo1, Roberto Lotufo1, and Jayr Pereira1,2
1 Universidade Estadual de Campinas (UNICAMP), Campinas–SP, Brazil
2 Universidade Federal do Cariri (UFCA), Juazeiro do Norte–CE, Brazil
3 National Center for State Courts (NCSC), Williamsburg, Virginia, United States
jayr.pereira@ufca.edu.br

Abstract. This paper presents BR-TaxQA-R, a novel dataset designed to support question answering with references in the context of Brazilian personal income tax law. The dataset contains 715 questions from the 2024 official Q&A document published by Brazil’s Internal Revenue Service, enriched with statutory norms and administrative rulings from the Conselho Administrativo de Recursos Fiscais (CARF). We implement a Retrieval-Augmented Generation (RAG) pipeline using OpenAI embeddings for searching and GPT-4o-mini for answer generation. We compare different text segmentation strategies and benchmark our system against commercial tools such as ChatGPT and Perplexity.ai using RAGAS-based metrics. Results show that our custom RAG pipeline outperforms commercial systems in Response Relevancy, indicating stronger alignment with user queries, while commercial models achieve higher scores in Factual Correctness and fluency. These findings highlight a trade-off between legally grounded generation and linguistic fluency. Crucially, we argue that human expert evaluation remains essential to ensure the legal validity of AI-generated answers in high-stakes domains such as taxation. BR-TaxQA-R is publicly available at https://huggingface.co/datasets/unicamp-dl/BR-TaxQA-R.

Keywords: Retrieval-Augmented Generation · Legal NLP · Brazilian Tax Law · Question Answering · CARF Rulings

1 Introduction

A longstanding challenge many independent judiciaries and administrative courts face is that, as the population grows, so does the number of cases, placing increasing pressure on courts and often exceeding their capacity [5]. The integration of Artificial Intelligence (AI), particularly through Natural Language Processing (NLP) techniques, offers the potential to significantly enhance judicial efficiency and effectiveness. This vision is exemplified by the “Smart Courts” initiative outlined in China’s Artificial Intelligence Development Plan, published by the State Council [20]. AI has been applied to various legal domain tasks, ranging from Named Entity Recognition to Ruling Prediction, with the overarching goal of improving judicial productivity [17]. Developments in other judicial systems suggest a growing interest in understanding how such technologies might contribute to improved access and operational capacity. For instance, the 2023 Year-End Report on the Federal Judiciary in the United States (https://www.supremecourt.gov/publicinfo/year-end/2023year-endreport.pdf) highlights both the opportunities and limitations of AI in the courtroom, noting its potential to assist litigants with limited resources and to streamline certain processes, while also cautioning against overreliance due to risks such as hallucinated content, data privacy concerns, and the challenges of replicating nuanced human judgment. The successful application of NLP techniques in the legal domain relies on the availability of specialized datasets.
There is an ongoing effort to narrow the resource gap for training and evaluating NLP systems in Brazilian Portuguese, considering the variety of legal tasks involved. For example, [17] developed a large corpus for the Brazilian legal domain, proposing a methodology to extract and preprocess legal documents from the judiciary, legislative, and executive branches of government. This corpus is aimed at being used for pretraining tasks, but still requires further processing for downstream applications. [16] proposed semantic similarity datasets based on published decisions from two appeals bodies of Brazilian Federal and Administrative Courts, creating a unique resource for jurisprudence and case law research. [19] constructed a human-annotated relevance feedback dataset for legal information retrieval based on legislative documents from the Brazilian Congress. Although focused on a specific scenario, relevance feedback datasets are a crucial step toward developing robust legal question answering systems. Finally, [13] leveraged the curated Tax Law Question Answering (QA) manual for corporate entities, published by Brazil’s Internal Revenue Service (Receita Federal do Brasil, or RFB), to create a QA dataset that includes legal document references supporting the answers. This dataset was used to evaluate LLMs’ ability to generate answers when provided with the gold passage as context, enabling the analysis of both answer correctness and faithfulness to the supporting references.

In this paper, we propose BR-TaxQA-R, a dataset for Brazilian Tax Law Question Answering with supporting references, focused on personal income tax law. BR-TaxQA-R extends the work of [13] by enabling the evaluation of the complete QA pipeline, incorporating all legal document references, both explicitly and implicitly cited in the answers. As additional supporting context, BR-TaxQA-R includes a curated set of rulings from CARF, the administrative appeals court handling federal tax disputes in Brazil. These rulings compose the case law portion of the dataset, providing real-world interpretations and applications of Personal Income Tax regulations. BR-TaxQA-R enables the evaluation of complete Retrieval-Augmented Generation (RAG) pipelines, encompassing both the information retrieval and answer generation stages, and provides a baseline for future research. Our evaluation indicates that simple sliding-window segmentation achieves good results, and that incorporating relevant jurisprudence further improves performance. Although closed-source commercial tools employing LLM-based search pipelines achieve superior performance, retrieval in the legal domain remains a challenging task.

The key contributions of this paper are:

– We introduce BR-TaxQA-R, a novel dataset for tax-related QA in Brazilian Portuguese, combining statutory and case law.
– We implement and benchmark a legal-domain-specific RAG pipeline using hierarchical segmentation and legal prompting.
– We evaluate the system against commercial LLM tools and discuss the trade-offs between legal traceability and linguistic fluency.

The remainder of this paper is organized as follows. Section 2 details the methodology adopted to create the dataset, including the parsing of original questions and answers (2.1), the acquisition of supporting legal documents (2.2), and the construction of the additional jurisprudence set (2.3). Section 3 presents the published format and statistics of the BR-TaxQA-R dataset. Sections 4 and 5 describe the experiments conducted to evaluate the dataset and discuss the results. Finally, Section 6 concludes the paper.
2 Dataset Acquisition Methodology

This study aims to develop a dataset that can be used to train and evaluate a Retrieval-Augmented Generation (RAG) system [6] for answering questions related to Brazilian personal income tax law. The dataset construction followed a three-step methodology: (1) extraction of questions and answers from the official 2024 “Questions and Answers” document published by the RFB (cf. Section 2.1), (2) collection and processing of tax regulations cited as references in the answers (cf. Section 2.2), and (3) retrieval of relevant administrative rulings from CARF to provide jurisprudential support (cf. Section 2.3). These components were combined to create a legally grounded and contextually rich dataset aligned with real-world tax guidance.

2.1 Questions extraction

The first step in the dataset acquisition process involved extracting the questions and answers from the official document “Questions and Answers” published by RFB for the year 2024 [14]. That document is available in PDF format, and we applied a combination of automated tools and manual verification to ensure accurate extraction. Our approach was to extract as much information as possible from the document, preserving the original to allow further error correction and processing. In addition to the question and its answer, all legal document references provided to support the answers are relevant, and identifying and processing them represented most of this extraction work.

After extracting the text from the PDF using a Python library (PyMuPDF, https://pymupdf.readthedocs.io), the document text was processed in two stages. The first stage consisted of splitting the text into the following parts: question, answer, legal documents supporting the answer, and links to other questions. The second stage consisted of processing the answer body to extract additional legal document references supporting the answer and additional links to other questions.

Since the same legal document was referenced with different notations (abbreviations or acronyms) across questions, we applied a semi-automated document deduplication strategy with LLM support: we clustered the documents by the initial letter of their names and passed each list to the LLM, instructing it to identify and remove duplicates whenever the same part (e.g., article, paragraph, clause) was referred to. The final list was manually verified to fix the remaining duplicates.
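A minimal sketch of the extraction and first-stage splitting described above, assuming PyMuPDF (the library referenced in the paper); the question-header pattern is a hypothetical placeholder, since the paper does not specify its exact splitting rules.

```python
import re
import fitz  # PyMuPDF

def extract_pages(pdf_path: str) -> str:
    """Concatenate the plain text of every page of the Q&A PDF."""
    with fitz.open(pdf_path) as doc:
        return "\n".join(page.get_text() for page in doc)

def split_questions(full_text: str) -> list[dict]:
    """Stage 1: split the document into question blocks.
    The header pattern is a guess at the document's numbering style."""
    header = re.compile(r"^(\d{3})\b", re.MULTILINE)  # e.g. a "001" line start
    starts = [m.start() for m in header.finditer(full_text)]
    blocks = [full_text[a:b] for a, b in zip(starts, starts[1:] + [len(full_text)])]
    return [{"question_number": int(header.match(b).group(1)), "raw": b}
            for b in blocks]

# Usage: raw blocks would then be split further into question, answer,
# supporting references, and links to other questions (stage 2).
# questions = split_questions(extract_pages("pr-irpf-2024.pdf"))
```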
2.2 Tax regulations

Tax regulations were obtained through a curated selection, defined by the references listed in the processed “Questions and Answers” 2024 document. The original documents were retrieved as PDF files or HTML pages, ensuring the most up-to-date versions were selected. For PDF files, an automated download was performed, followed by text extraction using Python libraries, with the results stored as text files. In the case of HTML documents, web scraping techniques were applied to parse and clean the page content, removing the amended or revoked parts, respectively indicated by the <strike> (strikethrough text) and <del> (removed text) tags. The resulting text was saved as plain text files, named and organized according to the regulation identifiers.
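A minimal sketch of this HTML cleaning step with requests and BeautifulSoup: dropping <strike> and <del> elements follows the rule stated above, while the whitespace handling and the commented example URL are illustrative choices.

```python
import requests
from bs4 import BeautifulSoup

def scrape_regulation(url: str) -> str:
    """Download a regulation page and drop amended/revoked passages."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    # Amended text is struck through; revoked text is marked as deleted.
    for tag in soup.find_all(["strike", "del"]):
        tag.decompose()
    # Collapse the remaining markup into plain text.
    return soup.get_text(separator="\n", strip=True)

# Example: save the cleaned text under the regulation's identifier.
# with open("lei_9250_1995.txt", "w", encoding="utf-8") as f:
#     f.write(scrape_regulation("https://example.gov.br/lei-9250"))
```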
2.3 Case law collection

Administrative rulings (case law) were collected through automated web scraping from the official repository of the Brazilian Ministry of Finance (https://acordaos.economia.gov.br/solr/acordaos2/browse/), the federal agency that houses CARF, and converted into plain text. Only 2023 rulings were processed, with previous years potentially added in a future dataset release.

To ensure that the inclusion of case law would genuinely enhance the relevance and contextual alignment of answers within BR-TaxQA-R, we adopted two primary qualitative criteria for selecting rulings through web scraping. First, we based the selection on the presence of keywords directly extracted from the questions in the “Questions and Answers” document. This constraint helped ensure that the retrieved decisions addressed legal issues analogous to those in the official guidance, thus avoiding the inclusion of unrelated jurisprudence. Second, we applied a temporal filter to guarantee that the rulings reflected current legal interpretations. Only rulings published within one year of the 2024 “Questions and Answers” edition were considered. This time-bounded selection criterion aimed to mitigate the risk of referencing outdated precedents that might no longer align with current tax practices or administrative guidance. This aligns with retrospective studies, in which closed cases are indexed by the “date of death” and analyzed for their legal characteristics [9]. Likewise, selecting relevant rulings relies on subjective representations of legal concepts, which must be explicitly described and theoretically justified.
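These two criteria amount to a simple filter over the scraped rulings. In the sketch below, the record fields ("text", "published"), the keyword set, and the 365-day reading of "within one year" are assumptions made for illustration.

```python
from datetime import date

def select_rulings(rulings: list[dict], keywords: set[str],
                   reference_date: date = date(2024, 1, 1)) -> list[dict]:
    """Keep rulings that (1) mention at least one question keyword and
    (2) were published within one year of the 2024 Q&A edition."""
    selected = []
    for r in rulings:  # each record is assumed to carry "text" and "published"
        recent = abs((reference_date - r["published"]).days) <= 365
        on_topic = any(k.lower() in r["text"].lower() for k in keywords)
        if recent and on_topic:
            selected.append(r)
    return selected
```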
3 BR-TaxQA-R

We named the dataset BR-TaxQA-R, which stands for Brazilian Tax Question Answering with References. The dataset is composed of three main components: the questions set, the sources set, and the case law set.

3.1 Questions set

The questions set contains 715 questions and answers extracted from the official document published by RFB. 117 (~16%) of the 715 questions do not reference any external documents, since their answers are not directly defined in a legal document; those questions were kept in the dataset for completeness. The answers to several questions reference other questions within the document, and those links were captured. The questions set was structured to preserve the original data as much as possible, along with the scraped information (a hypothetical record follows the list below):

– question_number: The question number, starting with 1, as referred to in the original document.
– question_summary: A very brief description of the question subject, extracted from the original document.
– question_text: The question itself, as originally posed.
– answer: The answer, as extracted from the original document. It is a list of strings, respecting the PDF formatting. It contains all the information provided after the question_text and before a link to the document index, provided at the end of all questions.
– answer_cleaned: The answer field after removing all explicit external references (the legal documents captured in the sources set) and all explicit inter-question references. External references were provided in the original document both explicitly, through grayed boxes, and implicitly, embedded in the answer text.
– references: The list of external references explicitly provided.
– linked_questions: List of other questions linked in the provided answer.
– formatted_references: The explicit external references, LLM-processed to separate the document title, articles, sections, paragraphs, and other specific parts mentioned.
– embedded_references: External references implicitly provided, embedded in the answer text.
– formatted_embedded_references: The implicit external references, LLM-processed to separate the specific information mentioned, similar to the formatted_references field.
– all_formated_references: The merge of the formatted_references and formatted_embedded_references fields, combining the information of the legal documents and including the name of the text file (the file sub-field) in which each particular legal document is captured in the dataset.
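To make the schema concrete, a hypothetical record is sketched below; every value is invented for illustration, and the value types follow the field descriptions above.

```python
# A hypothetical questions-set record (all values invented for illustration):
record = {
    "question_number": 42,
    "question_summary": "Dependentes - filho universitário",
    "question_text": "Filho de 24 anos que cursa universidade pode ser dependente?",
    "answer": ["Sim, desde que ..."],        # list of strings, one per PDF block
    "answer_cleaned": "Sim, desde que ...",  # answer with references stripped
    "references": ["Lei nº 9.250, de 26 de dezembro de 1995, art. 35"],
    "linked_questions": [321],
    "formatted_references": [
        {"title": "Lei nº 9.250, de 26 de dezembro de 1995", "parts": ["art. 35"]}
    ],
    "embedded_references": [],
    "formatted_embedded_references": [],
    "all_formated_references": [
        {"title": "Lei nº 9.250, de 26 de dezembro de 1995",
         "parts": ["art. 35"], "file": "lei_9250_1995.txt"}
    ],
}
```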
3.2 Sources and case law sets

The sources and case law sets compose a corpus supporting the answers provided in the questions set. The sources set contains all the legal documents listed as official sources for the answers provided in the original “Questions and Answers” document, and corresponds to the minimal set of legal documents required for a RAG system to properly answer all the questions. The case law set contains CARF administrative rulings on the topics covered by the questions, which can potentially offer concrete examples of the concepts covered by the legal documents and help the same RAG system produce assertive answers.

Both the sources and case law sets have the following format:

– filename: The filename of the scraped legal document, as referred to within the all_formated_references field in the questions set.
– filedata: The scraped legal document information, extracted as text data.

3.3 Dataset statistics

Although the dataset is relatively small, the legal domain introduces significant complexity when selecting relevant segments from the supporting documents for answering questions. The case law documents can improve answer quality, but they also increase the overall pipeline complexity, as they are numerous and vary significantly in structure. Table 1 summarizes the dataset size.

Table 1. BR-TaxQA-R size statistics.

| | Questions | Answers | Source documents | Case-law documents |
|---|---|---|---|---|
| count | 715 | 715 | 478 | 7204 |
| min words | 3 | 6 | 24 | 425 |
| max words | 74 | 3118 | 165830 | 75584 |
| mean words | 19.11 | 143.36 | 4546.46 | 3171.70 |
| median words | 17.00 | 81.00 | 649.00 | 1983.50 |

Tables 2 and 3 present statistics on the number of links found in the answers to other questions and to external documents. While there is one question with 20 links to other questions, most answers do not reference other questions, suggesting they are mainly independent. The number of external links per question is more heterogeneous, which can be interpreted as an indicator of complexity: it is reasonable to assume that the 25 questions with more than 10 external references are more challenging for a Q&A system to answer correctly. Among the 478 external documents, only 10 account for over half of all references. Given the significant variation in length, these documents also present additional challenges for the information retrieval stage.

Table 2. BR-TaxQA-R question links statistics.

| | Links to other questions | Explicit external links | Implicit external links | Total external links |
|---|---|---|---|---|
| without | 478 (66.85%) | 151 (21.12%) | 524 (73.29%) | 117 (16.36%) |
| >2 and <10 | 51 (7.13%) | 248 (34.69%) | 55 (7.69%) | 287 (40.14%) |
| ≥10 | 4 (0.56%) | 5 (0.70%) | 3 (0.42%) | 25 (3.50%) |
| minimum | 0 | 0 | 0 | 0 |
| maximum | 20 | 18 | 16 | 21 |
| mean | 0.69 | 2.07 | 0.63 | 2.70 |
| median | 0.00 | 2.00 | 0.00 | 2.0 |
| question with max | 129 | 442 and 560 | 177 | 177, 442, 560 |

Table 3. The 10 most referred source documents by BR-TaxQA-R questions.

| Reference count | Document | Word count |
|---|---|---|
| 284 | Decreto nº 9.580, de 22 de novembro de 2018 | 165830 |
| 123 | Instrução Normativa RFB nº 1500, de 29 de outubro de 2014 | 28269 |
| 68 | Lei nº 9.250, de 26 de dezembro de 1995 | 6452 |
| 63 | Instrução Normativa SRF nº 83, de 11 de outubro de 2001 | 3177 |
| 52 | Instrução Normativa SRF nº 208, de 27 de setembro de 2002 | 10118 |
| 49 | Lei nº 7.713, de 22 de dezembro de 1988 | 6636 |
| 45 | Instrução Normativa RFB nº 2178, de 05 de março de 2024 | 3997 |
| 44 | Instrução Normativa SRF nº 84, de 11 de outubro de 2001 | 4539 |
| 37 | Instrução Normativa RFB nº 1585, de 31 de agosto de 2015 | 26504 |
| 22 | Instrução Normativa SRF nº 81, de 11 de outubro de 2001 | 4880 |
| Total | 787 (53.32% of the 1476 references) | |

4 Experiments

This section describes the experiments conducted using BR-TaxQA-R. The experiments were designed to evaluate the performance of a custom Retrieval-Augmented Generation (RAG) system, which was implemented using the BR-TaxQA-R dataset source files as the knowledge base. We also assess the performance of two commercial tools, ChatGPT and Perplexity.ai, using the same set of questions from the BR-TaxQA-R dataset. The experiments aim to establish a baseline for the RAG system’s performance in answering tax-related questions, comparing different segmentation strategies, and evaluating the results against commercial tools. In the next subsections, we describe the custom RAG system, the commercial tools used for comparison, and the evaluation metrics employed to assess the performance of the generated answers.
4.1 Custom RAG system

We implemented a custom RAG system using the BR-TaxQA-R dataset as the knowledge base. The system is designed to efficiently retrieve relevant information from the sources and case law sets, generate accurate answers to user queries, and provide explicit references to the legal documents used in the answer generation process. Following the principles outlined in [6], the RAG system is structured into three main components (data preparation, indexing, and answer generation), ensuring that retrieved content is seamlessly integrated into the response generation pipeline.

Data Preparation. The data preparation includes the document segmentation and indexing for the information retrieval stage of the RAG pipeline. We considered two data segmentation approaches for the sources and case law datasets, each making increasing use of the documents’ internal structure (a sketch of both strategies follows the list below):

– Sliding-window, considering 2048-token windows and a 1024-token stride, producing regular-sized overlapping segments.
– Langchain Recursive Character Text Splitter (https://python.langchain.com/docs/how_to/recursive_text_splitter/), which is recommended for generic text, splitting the text recursively according to a given separator list until the resulting segments are small enough. In its default configuration, the provided separators try to keep paragraph contents in a single chunk, using very little information about the text’s internal structure. We provided a customized separator list reflecting the statutory-law hierarchy of the documents, with many splitting points; the expected effect is that the recursive splitter breaks the segments at those separator occurrences as much as possible, resulting in segments with meaningful boundaries according to the documents’ original internal hierarchical structure. We considered chunks of up to 1000 characters, with at most 100 characters of overlap.
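The two strategies above can be sketched as follows. The tokenizer for the sliding window and the statutory separator list are illustrative assumptions; the paper specifies only the window/stride sizes, the chunk limits, and the use of a customized separator list.

```python
import tiktoken
from langchain_text_splitters import RecursiveCharacterTextSplitter

def sliding_window(text: str, size: int = 2048, stride: int = 1024) -> list[str]:
    """2048-token windows with a 1024-token stride (tokenizer is an assumption)."""
    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode(text)
    windows = []
    for start in range(0, len(ids), stride):
        windows.append(enc.decode(ids[start:start + size]))
        if start + size >= len(ids):
            break
    return windows

# Separators mirroring the hierarchy of Brazilian statutes (illustrative guess).
LEGAL_SEPARATORS = ["\nTÍTULO ", "\nCAPÍTULO ", "\nSeção ", "\nArt. ", "\n§ ",
                    "\n\n", "\n", " "]
recursive_splitter = RecursiveCharacterTextSplitter(
    separators=LEGAL_SEPARATORS, chunk_size=1000, chunk_overlap=100
)

with open("lei_9250_1995.txt", encoding="utf-8") as f:
    chunks = recursive_splitter.split_text(f.read())
```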
Indexing. We adopted the dense passage retrieval approach [4]: once the documents were segmented, we indexed them using the text-embedding-3-small commercial model offered through the OpenAI API (https://openai.com/index/new-embedding-models-and-api-updates/). This model is designed to generate dense vector representations of text, which can be used for similarity-based retrieval. The embeddings were generated for each of the two segmentation strategies and saved using FAISS (Facebook AI Similarity Search) [1] to enable efficient similarity-based retrieval. For each segmentation variant, a separate FAISS index was created. The FAISS IndexFlatL2 type was employed, which computes the L2 (Euclidean) distance for nearest neighbor searches. This step was repeated for each segmentation approach applied to the BR-TaxQA-R dataset. For case law documents, only the sliding-window segmentation was considered, due to the lack of hierarchical structure. We implemented a retrieval function that uses the FAISS index to retrieve relevant chunks based on user queries. The retrieval process involves embedding the user query using the same text-embedding-3-small model and querying the FAISS index to find the top k most similar document segments.
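Continuing the sketch, indexing and retrieval with IndexFlatL2 could look like the following; `chunks` carries over from the segmentation sketch above, and the OpenAI client usage assumes the current openai Python SDK.

```python
import faiss
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

# One IndexFlatL2 (exact L2 nearest-neighbor search) per segmentation variant.
segment_vectors = embed(chunks)
index = faiss.IndexFlatL2(segment_vectors.shape[1])
index.add(segment_vectors)

def retrieve(query: str, k: int = 5) -> list[str]:
    """Embed the query with the same model and return the top-k segments."""
    _, idx = index.search(embed([query]), k)
    return [chunks[i] for i in idx[0]]
```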
Answer Generation. The context retrieved by FAISS is fed into a prompt-based RAG system powered by OpenAI’s gpt-4o-mini. The prompt is meticulously crafted to emulate the behavior of a virtual assistant specializing in Brazilian tax law. To enhance interpretability and improve legal reasoning, the system employs Chain-of-Thought (CoT) prompting, guiding the model to articulate intermediate reasoning steps before producing the final answer. This helps ensure that conclusions are logically grounded in the retrieved legal text. The final version of the prompt was designed to ensure that responses:

– are derived solely from the provided context;
– contain no direct references to the context or user interaction;
– include citations of applicable legal sources (norms and articles only) at the end of the response, in a structured list;
– avoid mid-response citations or references to document structure such as paragraphs or subitems;
– follow naming conventions (e.g., “Decreto Nº 9.580” instead of “RIR/2018”).

If not enough information is found in the retrieved context, the model is instructed to return a fallback answer indicating the system is still learning. This structured response format, consisting of a generated answer and a list of cited legal sources, enables consistent and automated evaluation of the RAG pipeline across different segmentation strategies.

4.2 Commercial Tools

In addition to evaluating our proposed RAG system, an assessment was conducted using prominent commercial Large Language Models (LLMs) equipped with integrated web search or deep search capabilities. The tools examined included:

– ChatGPT (utilizing the GPT-4o and GPT-4o mini models), with its search integration.
– Perplexity AI, employing its Deep Research feature (https://www.perplexity.ai/hub/blog/introducing-perplexity-deep-research).
– Grok 3, leveraging its DeepSearch functionality.

The primary objective of this comparative analysis was to benchmark the performance of these state-of-the-art commercial systems. We aimed to determine whether their search-augmented responses could effectively approximate the accuracy and completeness of the ground-truth answers. This evaluation sought to understand the capabilities and limitations of readily available, market-leading generative AI tools in retrieving factual information and providing valid, traceable sources for complex Brazilian tax inquiries, using the same question set applied to our custom RAG model.

4.3 Evaluation Metrics

To quantitatively assess the performance of the responses generated by the commercial tools detailed in the previous section, a dedicated evaluation framework was employed, comparing their outputs against the established ground truth. This framework is built upon the RAGAS (Retrieval-Augmented Generation Assessment) library [2], a specialized Python package suited for evaluating generated text against reference text. A custom Python script automated this evaluation. It processed a dataset structured in JSON format, containing the original question (question_text), the ground truth answer (answer), and the candidate response generated by the commercial tool (candidate).
The RAGAS library, within this script, utilized Langchain wrappers to interface with specified LLMs (e.g., gpt-4o-mini) and embedding models (e.g., text-embedding-3-small) for calculating certain evaluation metrics, as described below (a sketch of the evaluation script follows the list):

– Response Relevancy: Measures how relevant the answer is to the original question by using an LLM to generate alternative questions from the answer’s content, then calculating the average cosine similarity between their embeddings and the original question. Higher scores reflect better alignment with the query intent, penalizing incomplete or redundant answers.
– Factual Correctness: Refers to the degree to which the candidate answer aligns with verified knowledge or ground-truth claims, i.e., how much of it is entailed by them. Factuality is measured by comparing model completions against trusted datasets or external knowledge bases. The metric is computed by decomposing answers into statements, matching them to the reference answer, and checking how many statements are preserved, lost, or newly created.
– Semantic Similarity: Measures how closely a model’s output matches a reference text, helping detect whether responses preserve the intended information. The metric is typically computed using sentence embeddings; higher scores indicate closer semantic alignment between the candidate and the reference. Factual Correctness may use it to match referred claims.
– BLEU Score: A precision-based metric that evaluates how many character sequences, in a particular order, from the candidate answer overlap with the reference texts. Scores range from 0 to 1, where higher scores indicate better overlap. BLEU is sensitive to exact word matches and word order, as it was first formulated to evaluate translated text [11].
– ROUGE Score: A set of metrics that compares model output to reference summaries based on sensitivity (recall, the fraction of correctly selected data from all relevant entailments) of n-grams, longest common subsequences, and skip-bigram matches. ROUGE-L is a widely known variant that captures fluency and structure through sequence alignment [7].
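A sketch of such an evaluation script is shown below. The RAGAS API has changed considerably across releases; this assumes a v0.2-style interface, and the sample keys mirror the JSON fields named above.

```python
from ragas import evaluate, EvaluationDataset
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.metrics import (ResponseRelevancy, FactualCorrectness,
                           SemanticSimilarity, BleuScore, RougeScore)
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini"))
emb = LangchainEmbeddingsWrapper(OpenAIEmbeddings(model="text-embedding-3-small"))

# One sample per question: the user query, the system's answer, and the
# ground-truth answer from the official Q&A document. `samples` is assumed
# to be the JSON dataset described above.
dataset = EvaluationDataset.from_list([
    {"user_input": q["question_text"],
     "response": q["candidate"],
     "reference": q["answer"]}
    for q in samples
])

result = evaluate(
    dataset,
    metrics=[ResponseRelevancy(), FactualCorrectness(), SemanticSimilarity(),
             BleuScore(), RougeScore()],
    llm=llm, embeddings=emb,
)
print(result)
```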
Thesliding-window + case law configuration also performed competi- tively in BLEU(0.190) and ROUGE-L (0.248), indicating that its generated an- swers were structurally and lexically similar to the gold standard. Nevertheless, ChatGPT led in Semantic Similarity (0.793) and | https://arxiv.org/abs/2505.15916v1 |
attained the best ROUGE-L score (0.251), reflecting superior fluency and semantic alignment. 12 Domingos Júnior et al. Thecase law only configuration yielded an interesting result: although it performed worse across most metrics, it achieved the second-highest Response Relevancy score. This outcome suggests that relying solely on jurisprudential content (without normative references) limits the model’s ability to generate precise and legally grounded responses: the answers might be relevant, but not easily verifiable against the corresponding legislation. Although commercial tools achieved better results, there remains room for improvement, as demonstrated in the literature where RAGAS Factual Correct- nessandSemantic Similarity metrics have been shown to align well with human evaluations [15], [10]. The verified performance on these metrics reinforces the understanding that retrieval remains a challenging task in the legal domain [8], [12], [3]. Furthermore, the application of jurisprudence can help bridge the gap between the abstract concepts encapsulated in legal statutes and regulations and the real-world facts described in user questions [21], [18]. Overall, our results reveal a trade-off between legal traceability and linguistic fluency: –Our domain-specific RAG pipeline excels in relevance andcontextual preci- sion; –Commercial tools generate more fluent and complete answers, but often lack explicit grounding in legal sources. Fig. 1.Illustration of the trade-off between contextual precision and linguistic fluency. BR-TaxQA-R 13 Figure 1 illustrates the trade-off between contextual precision and fluency. We consider the following example: “Who can opt for the standard deduction in the annual tax filing declaration?”. This case demonstrates the contrasting behavior of the two models: – CustomRAG :Theresponsefromthedomain-specificRAGmodelprovides a concise and legally precise description, directly reflecting the relevant tax regulations. It lists the exact criteria required for opting for the simplified deduction, aligning closely with the formal language and structure typical of regulatory documents. However, this approach tends to prioritize accuracy over readability, resulting in a less conversational tone. – ChatGPT : In contrast, the ChatGPT response adopts a more conversa- tional style, using natural language that is generally easier to read. It cap- tures the main points effectively but lacks the precise legal references found in the RAG response. This broader, more accessible phrasing can be advan- tageous for non-specialist audiences but risks omitting critical legal nuances. This example highlights the broader trend observed in our experiments: domain-specific models excel in precise, contextually accurate responses, while general-purpose commercial tools like ChatGPT often favor fluency and com- prehensiveness at the expense of explicit legal traceability. These findings underscore the importance of human expert evaluation in le- gal question answering. High scores in metrics such as semantic similarity or ROUGE do not guarantee legal adequacy. In several cases, fluent answers gen- erated by commercial tools were factually incorrect or unsupported by authori- tative legal documents–a critical issue in high-stakes contexts like tax guidance. 6 Conclusions This study introduced BR-TaxQA-R, a novel dataset designed to support the development and evaluation of Retrieval-Augmented Generation (RAG) systems in the domain of Brazilian personal income tax law. 
By combining statutory documents, administrative rulings (CARF decisions), and an extensive set of official questions and answers published by RFB, the dataset provides a valuable | https://arxiv.org/abs/2505.15916v1 |
resource for both academic research and applied legal NLP. Our experiments demonstrated that a custom RAG pipeline, carefully tai- lored to the legal domain through legal-specific prompting and employing simple sliding-window segmentation over the legal corpus, achieved strong performance in terms of Response Relevancy , particularly when jurisprudence on the ques- tion topics was available. However, commercial systems such as ChatGPT, which benefit from broader training data and advanced retrieval mechanisms, outper- formed our model in Factual Correctness and fluency. These findings suggest that while specialized systems can be more focused and legally grounded, they may still fall short in naturalness and completeness compared to state-of-the-art general-purpose tools. Importantly, the evaluation 14 Domingos Júnior et al. results also emphasize the need for human assessment in legal QA tasks. Metrics such as semantic similarity or BLEU/ROUGE alone are insufficient to guar- antee that an answer is legally valid or practically useful. In our case, some high-scoring answers from the RAG system lacked critical legal nuance, while ChatGPT occasionally provided fluent but ungrounded content. Thus, expert evaluation remains essential to ensure the legal accuracy and trustworthiness of AI-generated responses. Future work includes incorporating multi-year CARF decisions and improv- ing response calibration mechanisms. We also aim to refine human-in-the-loop evaluation protocols to better capture legal adequacy, traceability, and user trust in automated systems. References 1. Douze, M., Guzhva, A., Deng, C., Johnson, J., Szilvasy, G., Mazaré, P.E., Lomeli, M., Hosseini, L., Jégou, H.: The faiss library (2025), https://arxiv.org/abs/ 2401.08281 2. Es, S., James, J., Espinosa Anke, L., Schockaert, S.: RAGAs: Automated eval- uation of retrieval augmented generation. In: Aletras, N., De Clercq, O. (eds.) Proceedings of the 18th Conference of the European Chapter of the Associa- tion for Computational Linguistics: System Demonstrations. pp. 150–158. As- sociation for Computational Linguistics, St. Julians, Malta (Mar 2024), https: //aclanthology.org/2024.eacl-demo.16/ 3. Feng, Y., Li, C., Ng, V.: Legal case retrieval: A survey of the state of the art. In: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pp. 6472–6485 (2024) 4. Karpukhin, V., Oguz, B., Min, S., Lewis, P.S., Wu, L., Edunov, S., Chen, D., Yih, W.t.: Dense passage retrieval for open-domain question answering. In: EMNLP (1). pp. 6769–6781 (2020) 5. Lai, J., Gan, W., Wu, J., Qi, Z., Yu, P.S.: Large language models in law: A survey. AI Open (2024) 6. Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W.t., Rocktäschel, T., Riedel, S., Kiela, D.: Retrieval-augmented generation for knowledge-intensive nlp tasks. In: Proceedings of the 34th Interna- tional Conference on Neural Information Processing Systems. NIPS ’20, Curran Associates Inc., Red Hook, NY, USA (2020) 7. Lin, C.Y.: ROUGE: A package for automatic evaluation of summaries. In: Text Summarization Branches Out. pp. 74–81. Association for Computational Linguis- tics, Barcelona, Spain (Jul 2004), https://aclanthology.org/W04-1013/ 8. Magesh, V., Surani, F., Dahl, M., Suzgun, M., Manning, C.D., Ho, D.E.: Hallucination-free? assessing the reliability of leading ai legal research tools. URL https://arxiv. org/abs/2405.20362 (2024) 9. 
Extracting Probabilistic Knowledge from Large Language Models for Bayesian Network Parameterization
Aliakbar Nafar1, Kristen Brent Venable2,3, Zijun Cui1, Parisa Kordjamshidi1
1Michigan State University, 2Florida Institute for Human and Machine Cognition (IHMC), 3University of West Florida
{nafarali, cuizijun, kordjams}@msu.edu, bvenable@ihmc.org

Abstract
Large Language Models (LLMs) have demonstrated potential as factual knowledge bases; however, their capability to generate probabilistic knowledge about real-world events remains understudied. This paper investigates using probabilistic knowledge inherent in LLMs to derive probability estimates for statements concerning events and their interrelationships captured via a Bayesian Network (BN). Using LLMs in this context allows for the parameterization of BNs, enabling probabilistic modeling within specific domains. Experiments on eighty publicly available Bayesian Networks, from healthcare to finance, demonstrate that querying LLMs about the conditional probabilities of events provides meaningful results when compared to baselines, including random and uniform distributions, as well as approaches based on next-token generation probabilities. We explore how these LLM-derived distributions can serve as expert priors to refine distributions extracted from minimal data, significantly reducing systematic biases. Overall, this work introduces a promising strategy for automatically constructing Bayesian Networks by combining probabilistic knowledge extracted from LLMs with small amounts of real-world data. Additionally, we evaluate several prompting strategies for eliciting probabilistic knowledge from LLMs and establish the first comprehensive baseline for assessing LLM performance in extracting probabilistic knowledge.

1 Introduction
The potential of language models as sources for extracting factual knowledge has been demonstrated in several studies (Petroni et al., 2019; Roberts et al., 2020; AlKhamissi et al., 2022; Zhao et al., 2025). However, it remains unclear whether LLMs possess the ability to generate meaningful probabilistic estimates for events and their relationships based on their internal knowledge. We refer to an LLM's capability to assign a specific probability to an uncertain proposition by utilizing its internal knowledge as probabilistic estimation. This contrasts with providing a confidence score for a concrete answer (Xiong et al., 2024) or solving a probabilistic problem that has a known numerical solution, a process referred to as probabilistic inference (Nafar et al., 2024b). For instance, consider the question, "What is the probability that a person who smokes cigarettes will develop cancer in their lifetime?" There is no definitive correct answer to infer directly from the question itself; however, a medical expert familiar with the literature might approximate 20%. Similarly, we expect an adept language model to produce a similar estimate. Here, we evaluate the probabilistic estimation capabilities of LLMs such as GPT-4o (OpenAI et al., 2024), Gemini Pro 1.5 (Team et al., 2024), and Claude 3.5 (Anthropic, 2024), and utilize their internal knowledge to construct domain-specific Bayesian networks (BNs). BNs are a powerful method for representing uncertainty and dependencies between events. The reliability of inference in BNs hinges on the accuracy of conditional probability table (CPT) entries, which are typically obtained by collecting data or eliciting expert judgments. | https://arxiv.org/abs/2505.15918v1 |
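To make the notion of probabilistic estimation concrete, a minimal sketch of such a query might look like the following. The `ask_llm` callable is a hypothetical stand-in for any chat-completion client (not the paper's code), and the regular expression simply pulls the last number between 0 and 1 out of the reply.

```python
import re

def estimate_probability(ask_llm, question: str) -> float:
    """Elicit a verbalized probability estimate from an LLM.
    `ask_llm` is any callable mapping a prompt string to a reply string,
    e.g., a thin wrapper around a chat-completion API."""
    prompt = (
        "Answer the following question by providing a probability "
        "from 0 to 1 based on your best guess.\n" + question
    )
    reply = ask_llm(prompt)
    # Take the last number in [0, 1] mentioned in the reply.
    matches = re.findall(r"[01](?:\.\d+)?", reply)
    if not matches:
        raise ValueError(f"No probability found in reply: {reply!r}")
    return min(max(float(matches[-1]), 0.0), 1.0)

# estimate_probability(my_client, "What is the probability that a person who "
#     "smokes cigarettes will develop cancer in their lifetime?")  # e.g. ~0.2
```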
These approaches can be expensive or unattainable in domains where data is scarce (You et al., 2019; Longato | https://arxiv.org/abs/2505.15918v1 |
et al., 2023) or expert time is limited (Das, 2008; Xiaoguang et al., 2019). By extracting probabilistic knowledge of the LLMs, we explore whether they can serve the role of experts for parameterizing BNs. We assume the dependency structure within the BN is given, leaving the task of extracting such dependencies using LLMs for future research. Nonetheless, preliminary results indicate that LLMs show promise in accurately identifying network dependencies (Babakov et al., 2025). Given the structure of a BN, LLMs are required to predict a probability distribution for each node, conditioned on its parent nodes. In this work, we focus on discrete variables, so we query the LLM separately for each state of a node, prompting it to directly generate a numerical output representing the conditional probability given its parent nodes. After collecting these raw numerical outputs from the LLM, a normalization step is performed to ensure valid probability distributions. In domains where only a small dataset is available, initial distributions estimated by LLMs can potentially function as expert-derived prior probabilities. We test this hypothesis by adjusting the LLM predictions with a small number of data samples, effectively applying a partial calibration to the model's initial estimates. We demonstrate that this approach not only enhances the predictions of the LLMs but also surpasses purely data-driven methods and provides a superior prior compared to a uniform prior baseline, which is typically employed in the absence of additional information. Furthermore, our experiments indicate that even when the number of data samples is large, incorporating LLM priors improves the performance. These findings highlight the promise of leveraging LLMs as expert knowledge sources for probabilistic estimation across various real-world domains. In summary, our contributions are as follows: 1) Comprehensive Evaluation of LLM Probabilistic Estimation with Real-World BNs. We provide the first large-scale empirical analyses of how LLMs quantify probabilities. We investigate differences in LLM accuracy across diverse BN domains and varying levels of conditional complexity, highlighting the strengths and limitations of LLMs as probabilistic knowledge bases. 2) LLM Predictions as Expert-Driven Priors. We demonstrate that the LLM-predicted probabilities can serve as expert-derived priors, and combining them with sampled data improves accuracy and sample efficiency compared to purely data-driven methods or using a uniform prior baseline. 3) Automatic LLM-to-BN Pipeline. We introduce an automated procedure for parameterizing real-world BNs given the network structure (the code and analytical tools are available at /gtb).

2 Related Work
Parameter Learning. Within the field of Bayesian Networks, parameter learning has historically revolved around data-driven approaches (Ji et al., 2015) or expert elicitation (Mendes, 2014). Data-driven methods compute frequencies for each configuration of parent states, estimating conditional probabilities via frequentist or Bayesian methods. Expert elicitation is commonly used in domains where data is limited or unobserved for some variables (Woodberry et al., 2005; Constantinou et al., 2016). This work aims to see whether LLM-based probability queries can fill a similar role to experts. | https://arxiv.org/abs/2505.15918v1 |
Zero-shot Regression. Using LLMs for regression in a zero-shot setting is an emerging field, and work in this area is limited. Following Vacareanu et al. (2024), which shows that LLMs are capable regressors in a few-shot setting, Nafar et al. (2025) tests the regression capability of LLMs in a zero-shot setting (using internal knowledge) for realistic questions such as estimating a medical insurance cost based on age. However, they do not use any probability estimation. Paruchuri et al. (2024) ask LLMs to calculate the probabilities for a range of values in a given distribution, but their dataset is limited to only 12 questions, and the queries are elementary with no conditional variables.

Confidence Elicitation. Confidence elicitation in LLMs has been studied in classification tasks, where a confidence score ranging from 0 to 1 is assigned to a discrete class label. Among these, Kadavath et al. (2022) treat the models as a white box and use their token probabilities to assess confidence. However, token likelihood merely indicates the model's uncertainty about the next token (Kuhn et al., 2023), rather than the confidence in the text itself. Consequently, Xiong et al. (2024) and Yang et al. (2024) treat the model as a black box and use its generated confidence to solve classification datasets. However, confidence elicitation does not directly apply to scenarios requiring a probability distribution across multiple states. Moreover, while similar, confidence is inherently different from probability (Levine, 2024). Another stream of research, in which confidence aligns more closely with probability, prompts LLMs to produce binary probabilities for intermediate nodes, which are also created by LLMs, forming simplified graphical models tailored toward classification tasks (Huang et al., 2025; Feng et al., 2025). In contrast to these methods, which depend on smaller, synthetic graphs constructed explicitly for classification, our approach utilizes large-scale, real-world networks necessitating probability distributions across multiple states. Furthermore, while Feng et al. (2025) extract hedge words from LLM outputs and subsequently train a separate model to estimate corresponding probabilities, we directly prompt the LLM to provide explicit probability values.

Probabilistic Inference. Probabilistic inference is closely related to our task and can be considered a natural extension of probabilistic estimation. Saeed et al. (2021) and Nafar et al. (2024a) fine-tune BERT-based language models to perform probabilistic inference, while Nafar et al. (2024b) utilizes prompt engineering techniques to enable LLMs to conduct probabilistic inference. However, in all these approaches, the explicit probabilities of sentences are either provided in the text or learned from the dataset during fine-tuning, without any estimation derived from the model's prior internal knowledge.

3 Problem Definition
The main problem addressed in this paper is parameterizing a Bayesian Network given its structure. We formally define the problem as follows: given the structure of a Bayesian Network $G = (\mathbf{V}, \mathbf{E})$, where $\mathbf{V}$ is the set of nodes (random variables) and $\mathbf{E}$ is the set of edges (dependencies among variables), the goal is to estimate the parameters, that is, the Conditional Probability Tables (CPTs) $G_\theta$ of this network, using LLMs. We investigate whether LLMs can be used to obtain these parameters. | https://arxiv.org/abs/2505.15918v1 |
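Given only the structure, the set of queries to issue is fully determined: one per (state, parent-configuration) pair. A minimal sketch of that enumeration, under an assumed dict-based toy representation of the network (not the paper's actual data structures), using the question phrasing shown in Figure 1:

```python
from itertools import product

# Assumed toy representation: node -> list of states, node -> list of parents.
states = {"Smoking": ["True", "False"], "Lung Cancer": ["True", "False"]}
parents = {"Smoking": [], "Lung Cancer": ["Smoking"]}

def cpt_queries(node: str):
    """Yield one natural-language query per CPT entry of `node`;
    queries belonging to the same CPT row are yielded consecutively."""
    parent_names = parents[node]
    parent_state_lists = [states[p] for p in parent_names]
    for combo in product(*parent_state_lists):  # yields () once when no parents
        condition = ", ".join(f"{p} is {s}" for p, s in zip(parent_names, combo))
        for s in states[node]:
            if condition:
                yield f"What is the probability {node} is {s}, given {condition}?"
            else:
                yield f"What is the probability {node} is {s}?"

# list(cpt_queries("Lung Cancer")) -> four questions, one per CPT entry.
```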
Specifically, the problem becomes one of posing textual queries describing a node's parents and requesting a probability distribution for each row of the CPT. After this parameter assignment step, we compare the resulting distribution to the ground-truth values of the original Bayesian Networks.

4 Methodology
4.1 Overview
We introduce two distinct frameworks for estimating the parameters of a BN. Our first approach, Extracting Probabilistic Knowledge (EPK), uses prompting techniques to derive probabilistic estimates. Instead of querying the entire probability distribution at once, we individually prompt the LLM for probabilities corresponding to each state of a node and subsequently normalize these values to ensure a valid distribution. Our second approach, Expert-Driven Priors (EDP), assumes the availability of a small empirical dataset and incorporates the LLM-generated predictions as expert-driven priors combined with the empirical data.

4.2 Extracting Probabilistic Knowledge (EPK)
As depicted in Figure 1, the process starts with a prompt template that describes the node and its parents in a natural language format and instructs the LLM to provide a probability value. This step is necessary to establish the nodes' meaning and possible values. The descriptions of these nodes, combined with the LLM instructions, are appended to each question presented to the LLM. Each question explicitly defines the states of the node of interest as well as the states of its parent nodes, and poses a probabilistic query based on these assigned states. Depending on the instruction, the LLM can produce the probability directly or articulate its reasoning before generating a probability value. In both settings, the final answer is extracted as a numerical probability from the output text. In the questions posed to the LLM in Figure 1, providing descriptions for nodes and their states is optional when the node names and their corresponding values are straightforward. For example, the meanings of "Lung Cancer" and "Smoking" are intuitive, and the LLM can infer that their values are binary, based on the given assignments. However, in cases where the meaning or value sets of nodes are not immediately clear, explicit descriptions are essential. For instance, a node named "X1" must have a clearly stated meaning, such as: "Represents a lack of supervision and policy guidance, which may lead to the use of unqualified oil. This node can take on the values True or False." Similarly, while the meaning of the "Construction Year" node is self-evident, its possible states are ambiguous. Providing a description like "This node indicates the time period in which the building was constructed, with possible values being 1930-1955, 1955-1960, 1960-1968, 1968-1975, and 1975-1980." resolves that ambiguity. Since the raw numeric outputs may not sum to one, they are normalized to form a valid distribution over the node's states. For a node with $m$ states, the model might produce values $p_1, p_2, \ldots, p_m$ that sum to $S = \sum_{i=1}^{m} p_i$. To convert these values into a valid probability distribution, we divide each one by $S$. This normalization step can be interpreted as taking the ratio of each state's assigned likelihood relative to the sum of all states, effectively preserving the proportions while enforcing a valid distribution. | https://arxiv.org/abs/2505.15918v1 |
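A compact sketch of the EPK loop described above — one query per state, followed by the divide-by-S normalization — reusing the hypothetical `estimate_probability` and `cpt_queries` helpers from the earlier snippets:

```python
def normalize(raw):
    """Divide each raw per-state output by S = sum(raw) (EPK's final step)."""
    S = sum(raw)
    if S == 0:  # degenerate reply; fall back to a uniform distribution
        return [1.0 / len(raw)] * len(raw)
    return [p / S for p in raw]

def epk_cpt_row(ask_llm, row_questions):
    """Estimate one CPT row: one LLM query per state of the node, then
    normalize. `row_questions` are the consecutive per-state questions
    for a single parent configuration (e.g., from `cpt_queries`)."""
    raw = [estimate_probability(ask_llm, q) for q in row_questions]
    return normalize(raw)

# Figure 1's first row: raw outputs [0.90, 0.30] -> normalize -> [0.75, 0.25].
```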
4.3 Expert-Driven Priors (EDP)
Expert opinion and large datasets are either cost-prohibitive or difficult to obtain in many practical scenarios. However, collecting a small number of real observations may still be feasible. When only limited data is available, incorporating expert prior knowledge can help offset the shortage of empirical data, thereby improving the resulting probability distributions.

[Figure 1: BN Parameterization using EPK. The left side represents an input BN with its dependency structure (the classic "Visit to Asia" network: Visit to Asia, Tuberculosis, Lung Cancer, Smoking, Bronchitis, Tuberculosis or Cancer, Dyspnoea, Chest X-ray). The right side shows how EPK uses LLMs to parameterize the BN. Given a node (e.g., Lung Cancer) and its parents, the LLM is prompted to estimate probabilities of its states. The node description is appended to each question to clarify the meanings and states. The raw probabilities generated by the LLM (e.g., 0.90 and 0.30 for the first CPT row) are then normalized to sum to one (e.g., to 0.75 and 0.25).]

We propose that LLMs can fill the role of experts by providing these prior distributions based on their knowledge. In our approach, EDP, we combine the LLM-derived probabilities with the empirical distribution estimated from these few samples by utilizing two approaches: linear pooling (Gneiting and Ranjan, 2013) and using priors as pseudocounts (Zhai and Lafferty, 2001). For linear pooling, let $q_1, \ldots, q_m$ denote the probabilities assigned by the LLM (after normalization) for a particular node with $m$ possible states under a specific parent configuration, and let $r_1, \ldots, r_m$ represent the empirical probabilities derived using maximum likelihood estimation, obtained from the collected sample data. To form a final distribution, we apply a linear combination where each final probability is given by $p_i = \alpha q_i + (1 - \alpha) r_i$ for $i = 1, \ldots, m$. Here, $\alpha \in [0, 1]$ is a parameter that determines the weight of the LLM-derived distribution relative to the empirical one. In the approach where we use the priors as pseudocounts, we translate the LLM-derived probabilities $q_1, \ldots, q_m$ into pseudocounts representing prior knowledge by introducing a confidence hyperparameter $C$ and computing pseudocounts $C \cdot q_i$ for each state $i$. Then, we combine these pseudocounts with observed counts $n_i$ from collected data, obtaining updated probabilities $p_i = (C \cdot q_i + n_i) / (C + \sum_{j=1}^{m} n_j)$ for $i = 1, \ldots, m$. The hyperparameters $\alpha$ and $C$, which determine the effectiveness of the priors, are set to be the reciprocal of the number of data samples, reflecting the intuition that their effect should diminish with more data. | https://arxiv.org/abs/2505.15918v1 |
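Both combination rules are one-liners in practice. A minimal sketch in plain Python, with the hyperparameter defaults following the paper's reciprocal-of-sample-count heuristic (assumes at least one observation):

```python
def linear_pooling(q, counts, alpha=None):
    """Combine LLM prior q with the empirical MLE from raw state counts:
    p_i = alpha * q_i + (1 - alpha) * r_i."""
    n = sum(counts)
    alpha = 1.0 / n if alpha is None else alpha  # reciprocal heuristic
    r = [c / n for c in counts]                  # empirical MLE distribution
    return [alpha * qi + (1 - alpha) * ri for qi, ri in zip(q, r)]

def pseudocount_prior(q, counts, C=None):
    """Treat the LLM prior as C pseudo-observations:
    p_i = (C * q_i + n_i) / (C + sum_j n_j)."""
    n = sum(counts)
    C = 1.0 / n if C is None else C              # reciprocal heuristic
    return [(C * qi + ci) / (C + n) for qi, ci in zip(q, counts)]

# Example: LLM prior [0.75, 0.25] plus 3 observations split [1, 2]:
# linear_pooling([0.75, 0.25], [1, 2])   -> [0.472..., 0.527...]
# pseudocount_prior([0.75, 0.25], [1, 2]) -> [0.375, 0.625]
```

Both outputs sum to one by construction; as the count total grows, the default α and C shrink and the empirical data dominates.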
5 Experiments
5.1 Dataset of Eighty Bayesian Networks
Our experiments use bnRep (Leonelli, 2025), a publicly available collection of BNs spanning diverse domains such as medicine and engineering. These BNs vary in size, ranging from 5 to 50 nodes. Each BN in bnRep has an associated publication detailing its construction, data sources, and domain-specific context. Most of these papers were published between 2020 and 2024, making them well-suited for LLMs incorporating recent knowledge. Each BN is accompanied by ground-truth parameters, which enable us to evaluate our predictions. We preprocess bnRep to select only discrete BNs and eliminate BNs with missing entries in CPTs, yielding a final dataset of 80 networks. We extract node definitions and states from each BN's repository paper. Additional details, including preprocessing steps, are provided in Appendix A.

5.2 Metrics and Baselines
As for LLMs, we utilize GPT-4o and its mini variant (OpenAI et al., 2024) in most of our experiments, along with Claude Sonnet 3.5 (Anthropic, 2024) and Gemini-Pro 1.5 (Team et al., 2024). We evaluate the LLM's estimated BN parameters using Kullback-Leibler (KL) divergence (Kullback and Leibler, 1951) compared to the ground-truth parameters in bnRep. Specifically, we report the BN KL divergence, defined as the KL divergence computed over the joint distribution of the BN's variables, evaluating the resulting BN's overall quality. For details on the LLMs and their hyperparameters, see Appendix B, and for the explanation of KL divergence and the BN KL divergence calculation, refer to Appendix C. We evaluate our methods against multiple baselines; all outputs from these baselines are normalized as necessary to ensure valid probability distributions. These baselines are: (1) Random number generator; (2) Uniform baseline generating equal probabilities for each row of the CPT, providing basic, uninformed estimations; (3) LLM (Random) baseline involving intentionally incorrect queries, where the original variable names are randomly replaced. This is done to assess whether LLMs utilize the content of the provided questions to generate their answers. For instance, we query the LLM "What is the probability that construction time is true given that lung cancer is true?" instead of the correct question regarding smoking; (4) LLM (No Context) baseline in which queries are presented without contextual explanations, exploring scenarios where the node meanings and number of states cannot be directly inferred from the in-context information; (5) LLM (Token Probability) baseline, which directly uses the LLM's internal probabilities assigned to tokens representing node states (e.g., the probability of generating the token "True"), rather than explicitly generated numerical probabilities extracted from the model's textual responses; (6) LLM Full Dist baseline, wherein the LLM is prompted for the complete distribution of the node corresponding to a specific combination of parent states (a CPT row), and the response is received as a tuple of numbers whose state order is given in the prompt; and (7) MLE-#, a statistical baseline obtained by maximum likelihood estimation using data sampled from the ground-truth BN, where the BN is sampled exactly # times using forward sampling. For results obtained using other sampling methods, refer to Appendix D. A sketch of the two non-LLM baselines appears below. | https://arxiv.org/abs/2505.15918v1 |
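A minimal sketch of the uniform and MLE-# baselines, assuming samples are dicts mapping node names to states; in practice the sample list would come from a forward sampler (e.g., pgmpy's sampling utilities — an assumption on tooling, not the paper's stated code):

```python
from collections import Counter

def uniform_baseline(num_states: int):
    """Equal probability for every state of a CPT row."""
    return [1.0 / num_states] * num_states

def mle_baseline(node, parent_assignment, samples, node_states):
    """MLE-#: estimate P(node | parents) from # forward samples of the
    ground-truth BN. `samples` is a list of dicts mapping node -> state."""
    relevant = [s[node] for s in samples
                if all(s[p] == v for p, v in parent_assignment.items())]
    if not relevant:  # this parent combination was never sampled
        return uniform_baseline(len(node_states))
    counts = Counter(relevant)
    return [counts[st] / len(relevant) for st in node_states]
```

The empty-`relevant` branch matters later: with forward sampling, rows conditioned on unlikely parent combinations rarely receive any data, which is exactly the regime where the LLM prior helps most (Section 5.4).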
5.3 Can LLMs Estimate Probabilities Using Their Internal Knowledge?
To evaluate how EPK compares to other baseline models, we analyze the distributions of BN KL divergence across all 80 BNs, as depicted in Figure 2. As expected, the parameter estimation of statistical models (MLE-#) progressively improves as the size of the sample data increases. Models trained on minimal data demonstrate notably poor performance, similar to random guessing. Interestingly, the worst-performing models, performing worse than random, are the "Token Probability" models. This aligns with previous research, which found that raw token probabilities from LLMs alone are insufficient for effective uncertainty/probability estimation (Xiong et al., 2024) and require additional processing steps like fine-tuning (Tao et al., 2024). The next weakest results are observed among the random generators, including GPT-4o (Random) and GPT-4o-mini (Random), next to the baselines that do not observe the correct context for nodes and their states, GPT-4o (No Context) and GPT-4o-mini (No Context). We observe that the LLM (Random) models slightly surpass the outputs of the Random number generator baseline. However, these improvements only reflect the non-uniformity of random number generation by LLMs, influenced by factors such as text-generation sampling methods and model architecture choices (Hopkins et al., 2023). Of all the baseline models, the uniform predictor performs best. This result aligns with information theory, which suggests that, in the absence of knowledge, a uniform distribution naturally provides the lowest KL divergence based on uncertainty (Cover and Thomas, 2006). Lastly, both the LLM (Full Dist) baseline and EPK outperform the uniform baseline. However, while the LLM (Full Dist) baseline yields informed estimates, it consistently falls short of EPK, with a higher median KL divergence and a greater standard deviation across the BNs. The MLE-# model requires at least 30 samples to match the median KL divergence of EPK. Even then, its standard deviation remains higher, indicating that EPK is more robust. Overall, these results demonstrate the capability of LLMs to provide meaningful probability distributions based on their internal knowledge.

5.4 Can LLMs' Probability Estimations Serve as Expert Priors?
We evaluate the effectiveness of EDP, combining LLM-derived distributions with empirical data, using two methods: priors as pseudocounts and linear pooling. Figure 3 displays the distributions of BN KL divergences obtained by combining GPT-4o and Uniform priors with various sample sizes of data with the linear pooling method. We chose GPT-4o because it has the best performance among the evaluated LLMs. At the same time, the Uniform model serves as the best-performing baseline in our experiments and is typically used in the absence of prior information. The method using priors as pseudocounts shows the same trend, and its results are presented in Appendix D to save space.

[Figure 2: Boxplot showing the distribution of BN KL divergence values across 80 unique BNs for various models, sorted by their median KL divergence. Lower values indicate better alignment with ground-truth CPTs.]

[Figure 3: Distribution of BN KL divergence across 80 BNs using GPT-4o (EDP) and Uniform predictions as priors combined with empirical data via the linear pooling method. The data sample size is shown in parentheses. The MLE-# models only use data samples, and the EPK models only use LLMs for probability estimations.]

Regardless of the combination method, GPT-4o | https://arxiv.org/abs/2505.15918v1 |
predictions consistently outperform the Uniform model baseline, proving their use as a better prior. The advantage of incorporating GPT-4o priors is most notable at smaller sample sizes, i.e., 3 to 10 samples. The combination of even minimal data in EDP significantly outperforms EPK and the MLE model with 30 data samples. Additionally, EDP improves the median KL divergence even when abundant data, 1,000 or more samples, is used. It also effectively reduces the standard deviation of the predictions, enhancing model robustness. This improvement in median KL divergence at large sample sizes occurs because nodes with unlikely parent combinations rarely receive data. This is supported by the absence of KL divergence improvement at high sample sizes with alternative methods that separately sample data for each CPT row (see Appendix D). However, these alternative, unrealistic methods still yield decreased standard deviation and improved robustness. These findings confirm that incorporating LLM predictions as expert-driven priors is beneficial and enhances performance and robustness.

6 Discussion
6.1 Quality of Individual Distributions Versus the Entire Bayesian Network
In our experiments, we employed the BN KL divergence metric to evaluate the quality of the predicted BNs. However, this metric assesses the entire BN, meaning that a few poorly predicted nodes might disproportionately influence the evaluation. To address this limitation, we also analyzed the CPT KL divergence, which computes the average KL divergence across all individual CPT rows within each BN. This alternative measure evaluates the quality of individual distributions rather than the BN as a whole. Using CPT KL divergence, we observed that the overall trends of our results remained consistent. Additional diagrams illustrating these findings are provided in Appendix D.

6.2 Performance Variations Among LLMs Across Different Bayesian Networks
Among the evaluated LLMs, GPT-4o consistently exceeds the performance of the other LLMs, though specific models perform better within specialized domains. For instance, Claude 3.5, Gemini-Pro, and GPT-4o achieve the best results on BNs related to the engineering, business, and medical domains, respectively. Furthermore, there are inherent differences in prediction behavior among these LLMs, likely attributable to their respective training methodologies. Specifically, Claude 3.5 performed best on BNs with low-entropy probabilities, but showed the poorest performance among the LLMs on BNs with high-entropy probabilities, indicating an overly confident prediction behavior. In contrast, Gemini-Pro showed the opposite trend, whereas GPT-4o had a more balanced prediction profile. | https://arxiv.org/abs/2505.15918v1 |
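The CPT KL divergence of Section 6.1 is just an average of row-wise KL terms. A minimal sketch, assuming each CPT row is stored as a (predicted, ground-truth) pair of lists and taking the ground truth as the reference distribution (the paper does not spell out the argument order, so that direction is an assumption):

```python
import math

def kl(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions given as lists; `eps`
    guards the log against zero entries."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def cpt_kl_divergence(rows):
    """Average KL over all CPT rows of a BN.
    `rows` is a list of (predicted_distribution, true_distribution) pairs."""
    return sum(kl(p_true, p_pred) for p_pred, p_true in rows) / len(rows)
```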
6.3 Prompt Styles and Chain-of-Thought
In our experiments, consistent with previous research (Xiong et al., 2024), we observed notable volatility in token probabilities, with significant variations depending on preceding tokens. However, the models' behavior remained robust when prompting the LLMs to verbally generate probabilities as textual output. Experimental results remained consistent whether we instructed the model to explain its reasoning before providing probabilities or to output probabilities directly. These findings align with prior research on zero-shot regression (Nafar et al., 2025), indicating that chain-of-thought reasoning did not affect results. Furthermore, other prompting styles or additional contextual information taken from the relevant Bayesian Network publications also did not influence the results. Our experiments suggest that although LLMs clearly possess and can articulate probabilistic knowledge, this capability remains consistent irrespective of explicit explanatory reasoning.

6.4 Handling Larger Parent Sets and States
Intuitively, it makes sense that LLMs may struggle to provide informed predictions for more complex queries involving nodes with many parent nodes or states. We use the CPT KL divergence metric to assess predictions among these nodes, which averages the KL divergence across all individual CPT rows rather than evaluating the entire BN. In our experiments with realistic BNs, LLM performance in these scenarios still surpassed our baseline models. LLMs consistently outperformed baselines in queries involving up to 5 parent nodes. Additionally, the LLMs performed better than baselines for nodes with 2 or 3 states. Nodes exceeding 3 states are rare in realistic BNs: only 4 BNs had nodes restricted to 4 states, whereas 11 featured nodes with 5 states. A notable exception was the 'DustExplosion' BN, which contained nodes with both 4 and 5 states. This network, designed to predict explosion probabilities in industrial environments, proved challenging for the LLMs. Apart from this isolated case, the LLMs consistently outperformed baseline methods. For detailed results demonstrating the performance with varying numbers of parent nodes and states, refer to Appendix E.

6.5 Trade-off Between EPK and Full Distribution Queries
Our experiments show that EPK consistently outperforms directly requesting the complete distribution (Full Dist), achieving lower median KL divergence and improved stability. Although the Full Dist method is more straightforward and requires fewer queries, it leads to less precise and more variable estimates. Additionally, our proposed approach remains cost-effective by avoiding extensive reasoning or explanations, as most LLM-related costs derive from text generation. Thus, we recommend EPK for applications prioritizing accuracy and consistency, while Full Dist might be suitable in scenarios where cost saving is paramount. | https://arxiv.org/abs/2505.15918v1 |
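The query-count gap behind the Section 6.5 trade-off is easy to quantify: EPK issues one query per CPT entry, while Full Dist issues one per CPT row. A small sketch under the toy `states`/`parents` representation used earlier:

```python
def query_counts(states, parents):
    """Return (epk_queries, full_dist_queries) for a BN given as
    per-node state lists and parent lists."""
    epk = full = 0
    for node, node_states in states.items():
        rows = 1
        for p in parents[node]:
            rows *= len(states[p])       # one CPT row per parent configuration
        full += rows                     # Full Dist: one query per row
        epk += rows * len(node_states)   # EPK: one query per (row, state) pair
    return epk, full

# For the two-node smoking example: query_counts(states, parents) -> (6, 3).
```

EPK's query count grows by a factor of the state count per node, which is the cost it pays for the lower variance reported above.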
6.6 Implications of the EDP Method for Automated BN Construction
The demonstrated capability of LLMs, particularly GPT-4o, in the EDP method has significant implications for automating BN construction. Traditionally, parameterization of Bayesian networks relies heavily on expert input, making the process labor-intensive and costly. However, cost is not the only limitation associated with expert-driven approaches. In some cases, experts may not even be available, and when they are, their expertise must be evaluated (Hald et al., 2016) or aggregated with the opinions of other experts (McAndrew et al., 2021) before their views can be utilized. Also, expert opinions are inherently susceptible to errors such as overconfidence and bias (Soll et al., 2024). Utilizing an LLM proficient across diverse domains holds enormous potential for advancing automation in BN parameterization.

6.7 Combining LLMs' Priors with Small Data for Extraction of Probabilistic Knowledge
In Section 5.4, we showed the potential of using LLM predictions in place of expert-driven priors for constructing BNs. Incorporating even a small amount of data, for example, just 3 samples, together with LLM predictions yields a lower BN KL divergence than models trained purely on 30 data samples or those relying solely on LLM predictions. These results highlight a promising application wherein LLMs use minimal external data points to rapidly refine their predictions for probabilistic queries. Such a small amount of data could be supplied to the LLM in various ways, such as being obtained online by querying information from publicly available sources (e.g., occurrences of lung cancer among smokers). Combining LLMs' priors with data in the EDP approach not only improves data efficiency and outperforms uniform baseline methods but also provides an exciting possibility for real-time improvement of LLM-generated probabilistic queries with minimal data input.

7 Conclusion and Future Work
In conclusion, this study introduces a novel pipeline for parameterizing BNs using LLMs and establishes the first comprehensive framework for evaluating the probabilistic knowledge of LLMs with real-world probabilistic BNs. We systematically assess various BNs and demonstrate that modern LLMs can reliably produce conditional probabilities, outperforming baselines and purely data-driven approaches, especially in a low-data regime. Importantly, our evaluation framework highlights the probabilistic estimation capabilities of LLMs, offering insights into their internal knowledge, demonstrating their value as expert-based priors, and facilitating automated BN construction. Future work will focus on advancing toward a fully automated framework for Bayesian Network construction. In this regard, the key challenge lies in automating the structure-learning component. Although preliminary efforts have been made in this area, there is significant potential to create an end-to-end pipeline using LLMs that generates a BN structure, parameterizes it, and systematically evaluates its performance.

Limitations
Our analysis demonstrates clear advantages in employing LLMs for Bayesian network parameter estimation. While KL divergence is effective for comparative evaluation within our context, it primarily indicates relative improvements rather than quantifying specific practical benefits for individual downstream tasks, which often require tailored metrics, such as assessing the accuracy of query probabilities within a defined precision (e.g., a 5% threshold). Incorporating domain expert judgments to evaluate the BNs and design specific metrics for each BN would benefit our work, but the associated costs and resource requirements substantially limit this approach. Expert evaluations would realistically be feasible only for a select subset of Bayesian networks rather than all 80 networks analyzed, limiting the comprehensiveness and comparative scope of the evaluation currently achievable through automated methods. Additionally, our study utilizes only state-of-the-art models because smaller, more economical models performed poorly in the task of probabilistic knowledge estimation. | https://arxiv.org/abs/2505.15918v1 |
Although our approach avoids fine-tuning and thus has lower costs than fine-tuning-based methods, the inference costs remain relatively high among inference-only approaches, potentially posing challenges in particularly cost-sensitive scenarios. | https://arxiv.org/abs/2505.15918v1 |
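The tailored downstream metric hinted at in the Limitations — the fraction of estimated probabilities within a fixed tolerance of the ground truth — is straightforward to sketch; the 5% threshold below comes from the paper's own example:

```python
def within_tolerance(predicted, ground_truth, tol=0.05):
    """Fraction of CPT entries whose estimate lies within `tol` of the
    ground-truth probability (paired flat lists of entries)."""
    hits = sum(abs(p - g) <= tol for p, g in zip(predicted, ground_truth))
    return hits / len(predicted)

# within_tolerance([0.75, 0.25], [0.80, 0.20]) -> 1.0 (both within 0.05)
```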
References
Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona Diab, and Marjan Ghazvininejad. 2022. A review on language models as knowledge bases. Preprint, arXiv:2204.06031.
Ankur Ankan and Abinash Panda. 2015. pgmpy: Probabilistic graphical models using Python. In Proceedings of the 14th Python in Science Conference (SCIPY 2015). Citeseer.
Anthropic. 2024. Claude 3.5 Sonnet. Large language model. Accessed: 21 Feb. 2025.
Nikolay Babakov, Ehud Reiter, and Alberto Bugarín-Diz. 2025. Scalability of Bayesian network structure elicitation with large language models: a novel methodology and comparative analysis. In Proceedings of the 31st International Conference on Computational Linguistics, pages 10685–10711, Abu Dhabi, UAE. Association for Computational Linguistics.
Harrison Chase. 2022. LangChain.
Anthony Costa Constantinou, Norman Fenton, and Martin Neil. 2016. Integrating expert knowledge with data in Bayesian networks: Preserving data-driven expectations when the expert variables remain unobserved. Expert Systems with Applications, 56:197–208.
Thomas M. Cover and Joy A. Thomas. 2006. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, USA.
Balaram Das. 2008. Generating conditional probabilities for Bayesian networks: Easing the knowledge acquisition problem. Preprint, arXiv:cs/0411034.
Yu Feng, Ben Zhou, Weidong Lin, and Dan Roth. 2025. BIRD: A trustworthy Bayesian inference framework for large language models. In The Thirteenth International Conference on Learning Representations.
Tilmann Gneiting and Roopesh Ranjan. 2013. Combining predictive distributions. Electronic Journal of Statistics, 7:1747–1782.
Tine Hald, Willy Aspinall, Brecht Devleesschauwer, Roger Cooke, Tim Corrigan, Arie H. Havelaar, Herman J. Gibb, Paul R. Torgerson, Martyn D. Kirk, Fred J. Angulo, Robin J. Lake, Niko Speybroeck, and Sandra Hoffmann. 2016. World Health Organization estimates of the relative contributions of food to the burden of disease due to selected foodborne hazards: A structured expert elicitation. PLOS ONE, 11(1):1–35.
Aspen K. Hopkins, Alex Renda, and Michael Carbin. 2023. Can LLMs generate random numbers? Evaluating LLM sampling in controlled domains. In ICML 2023 Workshop: Sampling and Optimization in Discrete Space.
Hengguan Huang, Xing Shen, Songtao Wang, Lingfa Meng, Dianbo Liu, Hao Wang, and Samir Bhatt. 2025. Verbalized probabilistic graphical modeling. Preprint, arXiv:2406.05516.
Zhiwei Ji, Qibiao Xia, and Guanmin Meng. 2015. A review of parameter learning methods in Bayesian network. In Advanced Intelligent Computing Theories and Applications, pages 3–12, Cham. Springer International Publishing.
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, and 17 others. 2022. Language models (mostly) know what they know. Preprint, arXiv:2207.05221.
Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. 2023. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation. Preprint, arXiv:2302.09664.
S. Kullback and R. A. Leibler. 1951. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79–86.
Manuele Leonelli. 2025. bnRep: A repository of Bayesian networks from the academic literature. | https://arxiv.org/abs/2505.15918v1 |
Neurocomputing, 624:129502.
Robert Levine. 2024. Probability or confidence, a distinction without a | https://arxiv.org/abs/2505.15918v1 |
difference? Intelligence and National Security, 39(4):729–741.
Enrico Longato, Erica Tavazzi, Adriano Chió, Gabriele Mora, Giovanni Sparacino, and Barbara Di Camillo. 2023. Dealing with data scarcity in rare diseases: Dynamic Bayesian networks and transfer learning to develop prognostic models of amyotrophic lateral sclerosis. In Artificial Intelligence in Medicine, pages 140–150, Cham. Springer Nature Switzerland.
Thomas McAndrew, Nutcha Wattanachit, Graham C. Gibson, and Nicholas G. Reich. 2021. Aggregating predictions from experts: A review of statistical methods, experiments, and applications. Wiley Interdisciplinary Reviews: Computational Statistics, 13(2):e1514. Epub 2020 Jun 16.
Emilia Mendes. 2014. Expert-Based Knowledge Engineering of Bayesian Networks, pages 73–105. Springer Berlin Heidelberg, Berlin, Heidelberg.
Aliakbar Nafar, K. Brent Venable, and Parisa Kordjamshidi. 2024a. Teaching probabilistic logical reasoning to transformers. In Findings of the Association for Computational Linguistics: EACL 2024, pages 1615–1632, St. Julian's, Malta. Association for Computational Linguistics.
Aliakbar Nafar, Kristen Brent Venable, and Parisa Kordjamshidi. 2024b. Reasoning over uncertain text by generative large language models. Preprint, arXiv:2402.09614.
Aliakbar Nafar, Kristen Brent Venable, and Parisa Kordjamshidi. 2025. Learning vs retrieval: The role of in-context examples in regression with large language models. Preprint, arXiv:2409.04318.
OpenAI and 1 others. 2024. GPT-4o system card. Preprint, arXiv:2410.21276.
Akshay Paruchuri, Jake Garrison, Shun Liao, John B. Hernandez, Jacob Sunshine, Tim Althoff, Xin Liu, and Daniel McDuff. 2024. What are the odds? Language models are capable of probabilistic reasoning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 11712–11733, Miami, Florida, USA. Association for Computational Linguistics.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics.
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418–5426, Online. Association for Computational Linguistics.
Mohammed Saeed, Naser Ahmadi, Preslav Nakov, and Paolo Papotti. 2021. RuleBERT: Teaching soft rules to pre-trained language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1460–1476, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jack B. Soll, Asa B. Palley, Joshua Klayman, and Don A. Moore. 2024. Overconfidence in probability distributions: People know they don't know, but they don't know what to do about it. Management Science, 70(11):7422–7442.
Shuchang Tao, Liuyi Yao, Hanxing Ding, Yuexiang Xie, Qi Cao, Fei Sun, Jinyang Gao, Huawei Shen, and Bolin Ding. 2024. When to trust LLMs: Aligning confidence with response quality. In Findings of the Association for Computational Linguistics: ACL 2024, pages 5984–5996, Bangkok, Thailand. Association for Computational Linguistics. | https://arxiv.org/abs/2505.15918v1 |
Gemini Team and 1 others. 2024. Gemini: A family of highly capable multimodal models. Preprint, | https://arxiv.org/abs/2505.15918v1 |
arXiv:2312.11805.
Robert Vacareanu, Vlad Andrei Negru, Vasile Suciu, and Mihai Surdeanu. 2024. From words to numbers: Your large language model is secretly a capable regressor when given in-context examples. In First Conference on Language Modeling.
Owen Woodberry, Ann E. Nicholson, Kevin B. Korb, and Carmel Pollino. 2005. Parameterising Bayesian networks. In AI 2004: Advances in Artificial Intelligence, pages 1101–1107, Berlin, Heidelberg. Springer Berlin Heidelberg.
Gao Xiaoguang, Yang Yu, and Guo Zhigao. 2019. Learning Bayesian networks by constrained Bayesian estimation. Journal of Systems Engineering and Electronics, 30(3):511–524.
Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, and Bryan Hooi. 2024. Can LLMs express their uncertainty? An empirical evaluation of confidence elicitation in LLMs. In The Twelfth International Conference on Learning Representations.
Daniel Yang, Yao-Hung Hubert Tsai, and Makoto Yamada. 2024. On verbalized confidence scores for LLMs. arXiv preprint arXiv:2412.14737.
Yao You, Jie Li, and Lincheng Shen. 2019. An effective Bayesian network parameters learning algorithm for autonomous mission decision-making under scarce data. International Journal of Machine Learning and Cybernetics, 10:549–561.
Chengxiang Zhai and John Lafferty. 2001. A study of smoothing methods for language models applied to ad hoc information retrieval. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '01, pages 334–342, New York, NY, USA. Association for Computing Machinery.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, and 3 others. 2025. A survey of large language models. Preprint, arXiv:2303.18223.

A Dataset Pre-processing
A.1 bnRep Dataset Overview
The bnRep dataset (Leonelli, 2025) is an open-source collection designed to facilitate research, teaching, and practical applications related to Bayesian Networks (BNs), addressing the significant shortage of comprehensive BN repositories. Implemented as an R package, bnRep includes over 200 well-documented Bayesian Networks sourced from more than 150 academic publications, mainly recent studies published from 2020 onwards. Each BN entry has accompanying characteristics extracted from the original publications. These characteristics are described below:
• Name: A short identifier for the Bayesian network.
• Type: The network's type of random variables (discrete, continuous, mixture).
• Structure: Indicates how the network's structure was obtained:
– Knowledge: The structure is built from well-established domain knowledge.
– Data: The structure was learned from a dataset.
– Fixed: A predefined structure that is neither purely elicited from experts nor learned from data (often a canonical or standard network).
– Synthetic: The structure was generated artificially (e.g., for algorithm testing).
– Expert: The structure is directly elicited from domain experts.
– Mixed: The structure is derived through a combination of sources.
• Probabilities: Indicates how the CPTs were obtained:
– Data: Parameters estimated from empirical data.
– Knowledge: Parameters derived from well-established theoretical or domain-specific information.
– Mixed: A combination of data-based estimation and expert input. | https://arxiv.org/abs/2505.15918v1 |
– Synthetic: Artificially generated parameters for testing or demonstration.
– Expert: Parameters directly elicited from domain experts.
• Graph: Describes any special structural characteristic of the network graph. For example:
– Generic: No particular restriction or canonical form.
– Naive Bayes: A star-shaped structure often used for classification tasks.
– Reverse Naive Bayes: The class label is modeled as a child of all other variables, reversing the direction of edges in the standard Naive Bayes structure.
– K-Dep: Each feature depends on the class and up to K other features.
– Tree: A graph with each node having exactly one parent (except the root).
– Reverse Tree: A tree with reversed edges, placing the class node at the leaves.
– TAN: An extension of Naive Bayes that allows each variable to have one additional parent, forming a tree among the predictors for greater flexibility.
• Area: The domain of the BN (e.g., Medicine, Engineering, Environmental Science).
• Nodes: The total number of random variables (nodes) in the Bayesian network.
• Arcs: The total number of directed edges (arcs) in the network.
• Parameters: The total number of probability entries in the CPTs.
• Avg. Parents: The average number of parent nodes per variable.
• Max Parents: The maximum number of parents any single node has in the network.
• Avg. Levels: The average number of discrete states (levels) per node.
• Max Levels: The maximum number of states among all nodes in the network.
• Average Markov Blanket: The average size of the Markov blanket for each node, which consists of the node's parents, children, and the children's other parents.
• Year: The year of publication associated with the BN's reference.
• Journal: The venue where the Bayesian network was published or described.
• Reference: The bibliographic reference describing the BN in detail.

A.2 BN Selection and Filtering
To utilize the bnRep dataset, we first converted the Bayesian networks from the R package into a Python-compatible format using the pgmpy library (Ankan and Panda, 2015). Next, we filtered the networks, selecting only those containing discrete CPT values, as these comprise the majority (90%) of the BNs. Then, to improve practicality and computational efficiency, we excluded networks with more than 50 nodes, noting that only a few exceed this threshold. The most substantial reduction, however, came from removing BNs with incomplete CPT information. After applying all these criteria, we arrived at a final dataset of 80 BNs, which remains sufficiently large for our evaluation purposes. The filtering logic is sketched after this subsection. | https://arxiv.org/abs/2505.15918v1 |
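A minimal sketch of the A.2 filtering pass; the `is_discrete`, `node_count`, and `has_complete_cpts` helpers are hypothetical stand-ins for the corresponding pgmpy-based checks, not functions named in the paper:

```python
def filter_bnrep(networks, max_nodes=50):
    """Keep discrete BNs with at most `max_nodes` nodes and fully
    specified CPTs, mirroring the selection criteria described in A.2."""
    kept = []
    for bn in networks:
        if not is_discrete(bn):          # discrete CPT values only
            continue
        if node_count(bn) > max_nodes:   # drop very large networks
            continue
        if not has_complete_cpts(bn):    # drop BNs with missing CPT entries
            continue
        kept.append(bn)
    return kept
```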
A.3 Nodes' Descriptions and States
As outlined in Section 4, querying the LLMs for each node state requires clearly defined descriptions for the nodes and their respective states. To acquire this information for each Bayesian Network (BN), we first retrieved the PDFs of the referenced documents that detail each BN. These references were initially provided by the bnRep dataset. A Python script was developed to automate the download of relevant PDFs directly from Google Scholar using bibliographic information. Despite automation, several documents could not be downloaded automatically due to access restrictions or other limitations. These remaining documents were obtained manually, with some requiring specific permissions. Subsequently, we performed a manual verification to ensure the downloaded documents corresponded to their respective references. We then implemented another Python script leveraging GPT-4o, which, given a PDF and the extracted nodes and states of the Bayesian Network (obtained through the pgmpy library), automatically generated a Python dictionary describing each node and its associated states. Although the majority of generated dictionaries were accurate and required minimal modifications, a few necessitated manual adjustments to ensure correct formatting and accuracy. An illustrative example of the prompt and the generated dictionary for the Bayesian Network "algalactivity2" is provided in Figure 4.

[Figure 4: Example of node descriptions and state extractions generated by GPT-4o for the Bayesian Network "algalactivity2". The prompt reads: "Based on the following PDF content and Bayesian Network nodes information, create a Python dictionary that explains each node in one sentence and the values it can assume. The explanation should be based on the context provided in the PDF and then to the best of your abilities. PDF content: (PDF content). Nodes and their possible values: (JSON of nodes and their possible values). Please provide a Python dictionary where keys are node names and values are descriptions of what the node represents and the possible values it can assume." The generated dictionary is:]

```python
{
    "C": "C represents conductivity in the water and it can assume values '0' for low conductivity and '1' for high conductivity.",
    "Chl_a": "Chl_a stands for chlorophyll-a and is used as an indicator of algal activity; it can be '0' for normal levels and '1' for high concentrations above the threshold.",
    "DO": "DO represents dissolved oxygen levels in the water, with '0' indicating low oxygen levels and '1' indicating high oxygen levels.",
    "N": "N refers to the concentration of nitrogen in the water, where '0' denotes low nitrogen levels and '1' denotes high nitrogen levels.",
    "P": "P symbolizes the concentration of phosphorous in the water, with '0' representing low levels and '1' representing high levels.",
    "pH": "pH measures the acidity or alkalinity of the water, with '0' indicating low pH (more acidic) and '1' indicating high pH (more alkaline).",
    "Te": "Te denotes the temperature of the water, with '0' indicating lower temperatures and '1' indicating higher temperatures.",
    "Tu": "Tu represents turbidity, or the clarity of the water, with '0' for low turbidity (clearer water) and '1' for high turbidity (murkier water).",
}
```

Each node is concerned with water quality indicators used in predicting increased algal activity, particularly the occurrence of harmful algal blooms. The binary values correspond to different levels or states of these indicators as structured for the Bayesian Network model.

B LLM's Hyper-Parameters and Experimental Setup
In our experiments, we primarily evaluated three state-of-the-art Large Language Models: GPT-4o and its mini variant (OpenAI et al., 2024), versions "gpt-4o-2024-11-20" and "gpt-4o-mini-2024-07-18"; Claude Sonnet 3.5 (Anthropic, 2024), version "Claude 3.5 Sonnet 2024-10-22"; and Gemini-Pro 1.5 (Team et al., 2024), version "gemini-1.5-pro-002". All models were interfaced using the LangChain framework (Chase, 2022), ensuring consistent integration and interaction. A fixed generation temperature of 0.1 was utilized to maintain minimal stochasticity in outputs. We initially explored the impact of sampling by performing up to five repeated samples per inference. However, we found that multiple samples did not meaningfully affect aggregate outcomes across the evaluated set of 80 Bayesian Networks, likely due to the low temperature setting. Therefore, subsequent experiments were conducted using a single sample per inference. Token output lengths were not constrained, allowing the models to elaborate their reasoning freely. In instances where models produced responses that deviated from the required format or where output text generation was interrupted midway, additional prompts were provided until valid responses were obtained. The prompt structure used for eliciting responses is illustrated at the top of Figure 5. To get the token probabilities from our LLMs, we use the middle prompt template in Figure 5; OpenAI's Chat API allows for extracting the top 20 most probable tokens. To get the full distribution of a node's states at once using the Full Dist method, we use the prompt template at the bottom of Figure 5.

C KL Divergence
C.1 Kullback-Leibler (KL) Divergence Overview
Kullback-Leibler (KL) divergence measures how one probability distribution $p$ diverges from a reference distribution $q$. For a discrete random variable $X$,
$$D_{KL}\big(p(X)\,\|\,q(X)\big) = \sum_{x \in \mathcal{X}} p(x) \log \frac{p(x)}{q(x)}.$$
For continuous variables the sum is replaced by an integral. KL divergence satisfies $D_{KL}(p\|q) \geq 0$ and equals zero iff $p = q$. It is asymmetric ($D_{KL}(p\|q) \neq D_{KL}(q\|p)$ in general). Intuitively, it quantifies the expected extra amount of information (in nats or bits) required to encode samples from $p$ using a code that is optimal for $q$.

C.2 BN KL Divergence Calculation
Calculating the KL divergence over the entire Bayesian network can be computationally expensive. To address this, we decompose it into a weighted sum of CPTs, where the weights correspond to the probabilities of the parent nodes, as illustrated in Figure 6. This approach simplifies computations in practice, since we only need to infer the probabilities of parent nodes using variable elimination.

D Additional Diagrams
The results of the priors-as-pseudocounts method are shown in Figure 7. The trend and conclusions drawn from this figure are the same as for the linear pooling method shown in Figure 3. Figure 8 shows the results of Figure 2 but with the CPT KL divergence metric. The results of EDP with data sampled for each row of the CPT, # times the number of states of the node, are shown in Figures 9 and 10. | https://arxiv.org/abs/2505.15918v1 |
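The main template of Figure 5 can be rendered mechanically from a node-description dictionary like the one in Figure 4. A minimal sketch (the function name and dictionary layout are illustrative; the instruction text is taken from the template itself):

```python
def build_main_prompt(descriptions: dict, question: str) -> str:
    """Render the main prompt of Figure 5 from node descriptions and a
    probabilistic query."""
    lines = ["These nodes are related to the question inside a Bayesian Network:"]
    lines += [f"{node}: {desc}" for node, desc in descriptions.items()]
    lines.append(
        "Given this information, answer the following question by providing "
        "a probability from 0 to 1 based on your best guess (you need to make "
        "a lot of estimations since the given information is limited). Your "
        "answer should include your reasoning and, at the end, a sentence that "
        "says 'The probability of the question is: ' followed by the probability."
    )
    lines.append(question)
    return "\n".join(lines)
```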
LLM's Main Prompt Template

These nodes are related to the question inside a Bayesian Network: (List of nodes and their descriptions)
Given this information, answer the following question by providing a probability from 0 to 1 based on your best guess (you need to make a lot of estimations since the given information is limited). Your answer should include your reasoning and, at the end, a sentence that says 'The probability of the question is: ' followed by the probability.
(Probabilistic query in the form of a question: Given that ... what is the probability that x is X?)

LLM's Prompt Template to Extract Token Probabilities

These nodes are related to the question inside a Bayesian Network: (List of nodes and their descriptions)
Given this information, answer the following question by providing the most probable state of the node. Your answer should only include the state of the node without capitalization, spaces, or any additional information.
(Probabilistic query in the form of a multi-choice QA task: Given that ... what is the state of X?)

LLM's Prompt Template for the Full Dist Method

These nodes are related to the question inside a Bayesian Network: (List of nodes and their descriptions)
Given this information, answer the following question by providing the probability distribution of the node. Your answer should include your reasoning and, at the end, a sentence that says 'The probability distribution of the node is: ' followed by the probabilities given in a tuple with each probability representing a state in the given order.
Order of states: (List of the states of the node)
(Query to request the full distribution: Given that ... what is the probability distribution of X?)

Figure 5: Prompt templates used for eliciting probabilistic responses from LLMs. The top panel shows the main prompt template that asks the LLM to provide a probabilistic answer along with explicit reasoning. The middle panel shows the modified prompt template designed specifically to extract token-level probabilities by instructing the LLM to generate only the most probable state without additional context or explanation. The bottom panel shows the prompt template used for the Full Dist method, which requests the model to output the entire probability distribution over a node's possible states in a single response.

C KL Divergence

C.1 Kullback-Leibler (KL) Divergence Overview

Kullback-Leibler (KL) divergence measures how one probability distribution $p$ diverges from a reference distribution $q$. For a discrete random variable $X$,
$$D_{\mathrm{KL}}(p(X) \,\|\, q(X)) = \sum_{x \in \mathcal{X}} p(x) \log \frac{p(x)}{q(x)}.$$
For continuous variables the sum is replaced by an integral. KL divergence satisfies $D_{\mathrm{KL}}(p \| q) \geq 0$ and equals zero iff $p = q$. It is asymmetric ($D_{\mathrm{KL}}(p \| q) \neq D_{\mathrm{KL}}(q \| p)$ in general). Intuitively, it quantifies the expected extra amount of information (in nats or bits) required to encode samples from $p$ using a code that is optimal for $q$.

C.2 BN KL Divergence Calculation

Calculating the KL divergence over the entire Bayesian network can be computationally expensive. To address this, we decompose it into a weighted sum of CPT KL divergences, where the weights correspond to the probabilities of the parent nodes, as illustrated in Figure 6. This approach simplifies computations in practice, since we only need to infer the probabilities of parent nodes using variable elimination. | https://arxiv.org/abs/2505.15918v1 |
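The sketch below shows this decomposition in code for the chain A → B → C used in the worked example of Figure 6 (shown next). The CPTs and parent marginals are supplied as plain numpy arrays rather than pgmpy objects, so this illustrates the formula under assumed data structures, not the paper's implementation; in practice the parent marginals $p(\mathrm{pa}_i)$ would come from variable elimination.

```python
import numpy as np

def local_kl(p_cond: np.ndarray, q_cond: np.ndarray) -> float:
    """KL divergence between two conditional distributions over one node."""
    return float(np.sum(p_cond * np.log(p_cond / q_cond)))

def chain_bn_kl(pA, qA, pB_A, qB_A, pC_B, qC_B, pB):
    """D_KL(p || q) for the chain A -> B -> C as a weighted sum of CPT KLs.

    pA, qA:  marginals over A, shape (|A|,)
    pB_A...: CPTs with rows indexed by the parent's state
    pB:      marginal over B under p (from variable elimination in general)
    """
    kl = local_kl(pA, qA)                                  # root node A
    kl += sum(pA[a] * local_kl(pB_A[a], qB_A[a])           # node B, weighted by p(a)
              for a in range(len(pA)))
    kl += sum(pB[b] * local_kl(pC_B[b], qC_B[b])           # node C, weighted by p(b)
              for b in range(len(pB)))
    return kl

# Tiny binary example (numbers are arbitrary, for illustration only).
pA = np.array([0.6, 0.4]); qA = np.array([0.5, 0.5])
pB_A = np.array([[0.7, 0.3], [0.2, 0.8]]); qB_A = np.array([[0.6, 0.4], [0.3, 0.7]])
pC_B = np.array([[0.9, 0.1], [0.4, 0.6]]); qC_B = np.array([[0.8, 0.2], [0.5, 0.5]])
pB = pA @ pB_A  # p(b) = sum_a p(a) p(b|a)
print(chain_bn_kl(pA, qA, pB_A, qB_A, pC_B, qC_B, pB))
```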
BN KL Divergence Decomposition into a Sum of Local KL Divergences

Let $p(x)$ and $q(x)$ be two BNs over the same variables $\{X_1, \ldots, X_n\}$ with common structure. Each factorizes as $p(x) = \prod_{i=1}^{n} p(x_i \mid \mathrm{Pa}_i)$ and $q(x) = \prod_{i=1}^{n} q(x_i \mid \mathrm{Pa}_i)$, where $\mathrm{Pa}_i$ are the parents of $X_i$. We want to show:
$$D_{\mathrm{KL}}(p \| q) = \sum_{i=1}^{n} \sum_{\mathrm{pa}_i} p(\mathrm{pa}_i) \, D_{\mathrm{KL}}\big(p(X_i \mid \mathrm{pa}_i) \,\|\, q(X_i \mid \mathrm{pa}_i)\big).$$

Derivation.
$$D_{\mathrm{KL}}(p \| q) = \sum_x p(x) \log \frac{p(x)}{q(x)} = \sum_x p(x) \log \frac{\prod_{i=1}^{n} p(x_i \mid \mathrm{Pa}_i)}{\prod_{i=1}^{n} q(x_i \mid \mathrm{Pa}_i)} = \sum_x p(x) \sum_{i=1}^{n} \log \frac{p(x_i \mid \mathrm{Pa}_i)}{q(x_i \mid \mathrm{Pa}_i)}$$
$$= \sum_{i=1}^{n} \sum_x p(x) \log \frac{p(x_i \mid \mathrm{Pa}_i)}{q(x_i \mid \mathrm{Pa}_i)} = \sum_{i=1}^{n} \sum_{\mathrm{pa}_i} p(\mathrm{pa}_i) \sum_{x_i} p(x_i \mid \mathrm{pa}_i) \log \frac{p(x_i \mid \mathrm{pa}_i)}{q(x_i \mid \mathrm{pa}_i)} = \sum_{i=1}^{n} \sum_{\mathrm{pa}_i} p(\mathrm{pa}_i) \, D_{\mathrm{KL}}\big(p(X_i \mid \mathrm{pa}_i) \,\|\, q(X_i \mid \mathrm{pa}_i)\big).$$

Example. Let $A, B, C \in \{0, 1\}$ within the network $A \to B \to C$. We compute $D_{\mathrm{KL}}(p \| q)$ as follows:

1. Substitute the factorizations:
$$D_{\mathrm{KL}}(p \| q) = \sum_{a,b,c} p(a,b,c) \log \frac{p(a,b,c)}{q(a,b,c)} = \sum_{a,b,c} p(a,b,c) \log \frac{p(a)\,p(b \mid a)\,p(c \mid b)}{q(a)\,q(b \mid a)\,q(c \mid b)}.$$
2. Separate logs:
$$= \sum_{a,b,c} p(a,b,c) \Big[ \log \frac{p(a)}{q(a)} + \log \frac{p(b \mid a)}{q(b \mid a)} + \log \frac{p(c \mid b)}{q(c \mid b)} \Big].$$
3. Split the sum:
$$= \underbrace{\sum_{a,b,c} p(a,b,c) \log \frac{p(a)}{q(a)}}_{\mathcal{A}} + \underbrace{\sum_{a,b,c} p(a,b,c) \log \frac{p(b \mid a)}{q(b \mid a)}}_{\mathcal{B}} + \underbrace{\sum_{a,b,c} p(a,b,c) \log \frac{p(c \mid b)}{q(c \mid b)}}_{\mathcal{C}}.$$
4. Marginalize:
$$\mathcal{A} = \sum_a p(a) \log \frac{p(a)}{q(a)}, \quad \mathcal{B} = \sum_a p(a) \sum_b p(b \mid a) \log \frac{p(b \mid a)}{q(b \mid a)}, \quad \mathcal{C} = \sum_b p(b) \sum_c p(c \mid b) \log \frac{p(c \mid b)}{q(c \mid b)}.$$
5. Recognize KL pieces and combine:
$$D_{\mathrm{KL}}(p \| q) = D_{\mathrm{KL}}\big(p(A) \| q(A)\big) + \sum_a p(a) \, D_{\mathrm{KL}}\big(p(B \mid a) \| q(B \mid a)\big) + \sum_b p(b) \, D_{\mathrm{KL}}\big(p(C \mid b) \| q(C \mid b)\big).$$

Figure 6: KL divergence decomposition into local components for Bayesian networks.

D Additional Diagrams

The results of the priors as pseudocounts method are shown in Figure 7. The trend and conclusions drawn from this figure are the same as for the linear pooling method shown in Figure 3. Figure 8 shows the results of Figure 2 but with the CPT KL divergence metric. The results of EDP with data sampled for each row of the CPT by # times the number of states of the node are shown in Figures 9 and 10.

E Different numbers of parents and states

Figures 11, 12, 13, 14 and 15 show the CPT KL divergence for nodes with 2, 3, 4, 5 and 6 states, respectively. Figures 16, 17, 18, 19, 20, 21, 22 and 23 show the CPT KL divergence for nodes with 0, 1, 2, 3, 4, 5, 6 and 7 parents, respectively.

Figure 7: Distribution of BN KL Divergence across 80 Bayesian Networks using GPT-4o (EDP) and Uniform predictions as priors combined with empirical data via the priors as pseudocounts method.
Figure 8: Boxplot showing the distribution of BN KL Divergence values across 80 unique Bayesian Networks for various models and baselines. Models are sorted by their median KL Divergence, with lower values indicating better alignment with ground truth CPTs.
Figure 9: Distribution of BN KL Divergence across 80 Bayesian Networks using GPT-4o (EDP) and Uniform predictions as priors combined with empirical data via the priors as pseudocounts method.
Figure 10: Distribution of BN KL Divergence across 80 Bayesian Networks using GPT-4o (EDP) and Uniform predictions as priors combined with empirical data via the linear pooling method.
Figure 11: CPT KL divergence for nodes with 2 states.
Figure 12: CPT KL divergence for nodes with 3 states.
Figure 13: CPT KL divergence for nodes with 4 states.
Figure 14: CPT KL divergence for nodes with 5 states.
Figure 15: CPT KL divergence for nodes with 6 states.
Figure 16: CPT KL divergence for nodes with 0 parents.
Figure 17: CPT KL divergence for nodes with 1 parent.
Figure 18: CPT KL divergence for nodes with 2 parents.
Figure 19: CPT KL divergence for nodes with 3 parents.
Figure 20: CPT KL divergence for nodes with 4 parents.
Figure 21: CPT KL divergence for nodes with 5 parents.
Figure 22: CPT KL divergence for nodes with 6 parents.
Figure 23: CPT KL divergence for nodes with 7 parents. | https://arxiv.org/abs/2505.15918v1 |
arXiv:2505.15922v1 [cs.CL] 21 May 2025

Aligning Dialogue Agents with Global Feedback via Large Language Model Reward Decomposition

Dong Won Lee¹, Hae Won Park¹, Cynthia Breazeal¹, Louis-Philippe Morency² (¹MIT, ²CMU) dongwonl@mit.edu

Abstract

We propose a large language model based reward decomposition framework for aligning dialogue agents using only a single session-level feedback signal. We leverage the reasoning capabilities of a frozen, pretrained large language model (LLM) to infer fine-grained local implicit rewards by decomposing global, session-level feedback. Our first text-only variant prompts the LLM to perform reward decomposition using only the dialogue transcript. The second multimodal variant incorporates additional behavioral cues, such as pitch, gaze, and facial affect, expressed as natural language descriptions. These inferred turn-level rewards are distilled into a lightweight reward model, which we utilize for RL-based fine-tuning for dialogue generation. We evaluate both text-only and multimodal variants against state-of-the-art reward decomposition methods and demonstrate notable improvements in human evaluations of conversation quality, suggesting that LLMs are strong reward decomposers that obviate the need for manual reward shaping and granular human feedback.

1 Introduction

As dialogue agents become increasingly deployed in open-ended, long-form social interactions, the challenge of aligning model behavior with trajectory-level human preferences becomes more pressing. While large language models (LLMs) have achieved strong linguistic competence, alignment techniques such as reinforcement learning with human feedback (RLHF) (Ouyang et al., 2022) remain fundamentally constrained by their reliance on fine-grained, turn-level annotations. In many realistic settings, such as therapy, education, or companionship, annotators provide feedback only at the session level, reflecting their overall impression of the entire interaction. This presents a major obstacle: how can one use sparse, global feedback to improve local behavior?

In this paper, we first introduce LLM-GELI, a Large Language Model based solution to decompose Global Explicit feedback into Local Implicit turn-level supervision signals that can be used for RLHF. We leverage a frozen pretrained LLM as a reward decomposition oracle. Given the full dialogue and a scalar global reward, the LLM is prompted to infer how utterances contributed to the final outcome score. The result is a turn-level reward decomposition derived purely from the LLM's reasoning over the linguistic context. These decomposed signals are then used to train a lightweight, text-only reward function that can guide downstream policy learning.

Furthermore, we introduce Multimodal-LLM-GELI, an extension of our core framework that further incorporates multimodal behavioral signals, such as facial expressions, gaze, and prosody, into the reward decomposition process. Building on recent advances in multimodal prompting, we represent the listener's nonverbal reactions as natural language descriptors and append them to each utterance in the dialogue history. This textual augmentation enables the language model to interpret affective and social cues in context, allowing it to reason about the listener's internal state and how it evolves throughout the conversation. | https://arxiv.org/abs/2505.15922v1 |
By grounding decomposition in the listener's multimodal feedback, Multimodal-LLM-GELI produces turn-level reward estimates that more accurately reflect the listener's extralinguistic reactions. We evaluate our approach on the CANDOR dataset (Reece et al., 2023), which features long-form video | https://arxiv.org/abs/2505.15922v1 |
conversations and post-session survey ratings, and an out-of-distribution dataset, SODA (Kim et al., 2023a). Our results show that LLM-GELI and Multimodal-LLM-GELI outperform prior baselines across a range of human and automatic metrics. Notably, Multimodal-LLM-GELI achieves the strongest gains on affective and socially grounded human evaluation measures (Lee et al., 2022), which highlights the value of LLMs not only as generators, but as competent interpreters of both global human preferences and rich, real-world behavioral signals to serve as reward decomposition oracles.

Figure 1: Multimodal LLM-GELI framework. A global reward $R_{GE}(\tau)$ is decomposed into turn-level pseudo-rewards using an LLM prompted with dialogue and multimodal listener cues. These local rewards supervise a text-only reward model $r_\theta$, distilling crossmodal reasoning into a learnable function for alignment (Sec. 4).

2 Related Works

Multimodal Prompting. Recent advances in multimodal learning highlight the growing potential of large language models (LLMs) as universal interfaces for integrating and reasoning across vision, language, audio, and sensor modalities. Socratic Models (Zeng et al., 2022) introduced a modular framework for zero-shot multimodal reasoning by composing frozen unimodal models, such as CLIP (Radford et al., 2021) and GPT-3, through language-based hypothesis exchange, eliminating the need for joint training. Building on this paradigm, MM-ReAct (Yang et al., 2023) leverages ChatGPT to orchestrate visual experts (e.g., OCR, object detectors) via prompt-based planning. Furthermore, recent efforts have explored using LLMs for time-series and sensor data through language interfaces. Health-LLM (Kim et al., 2023b) and PHIA (Merrill et al., 2024) convert wearable sensor data into enriched natural language prompts, enabling personalized health prediction and recommendation in a zero-shot setting. Together, these works illustrate a growing shift toward using language as a medium for zero-shot reasoning with multimodal behavioral data.

Reward Decomposition in Reinforcement Learning. Reward Decomposition, also known as Temporal Credit Assignment (TCA), is a fundamental concept in reinforcement learning that addresses the challenge of determining how past actions contribute to future outcomes. Effective TCA enables an agent to properly attribute delayed rewards to the decisions that caused them, thus improving learning efficiency and policy quality. One classical approach to TCA involves manipulating the λ-return in temporal-difference learning, which adjusts the balance between short-term and long-term credit assignment (Petrik and Scherrer, 2008; Jiang et al., 2015). More recent advances have reformulated TCA as a return decomposition problem. For instance, RUDDER (Arjona-Medina et al., 2019) redistributes rewards by measuring the predictive information gained between consecutive states, allowing for more precise step-wise credit assignment. IRCR (Gangwani et al., 2020) implements a uniform redistribution scheme, while Randomized Return Decomposition (RRD) (Ren et al., 2021) frames the problem as a Monte Carlo regression task, using least-squares estimation to infer per-step rewards from trajectory-level returns.

LLMs for Reward Decomposition. Recent work has explored using large language models (LLMs) to support reward decomposition in reinforcement learning (RL). | https://arxiv.org/abs/2505.15922v1 |
For instance, LaRe (Qu et al., 2025) prompts an LLM to generate a latent code that is then decoded into scalar rewards which sum to the total episodic return. CALM | https://arxiv.org/abs/2505.15922v1 |
(Pignatelli et al., 2024) employs an LLM to identify when subgoals are achieved, assigning auxiliary rewards in a zero-shot manner. In contrast, recent alignment methods like SPIN (Chen et al., 2024) and (Go et al., 2023) treat LLMs as reward functions, using them to rank or evaluate outputs via self-play or preference modeling, requiring dense or per-turn supervision. On the other hand, GELI (Lee et al., 2024) is closely related to reward decomposition for dialogue modelling; however, it does not utilize an LLM, relying instead on a manually designed reward shaping function. Our method, LLM-GELI, is the first to leverage a pretrained LLM that decomposes a global end-of-dialogue reward into fine-grained, turn-level signals. Furthermore, by prompting an LLM with multimodal behavioral feedback (e.g., facial affect, prosody), we infer fine-grained turn-level rewards from sparse, trajectory-level supervision.

3 Background: Aligning Language Models

We consider the problem of generating conversational responses using an autoregressive language model in a multi-sensory setting. We model the language generator as a dialogue agent with a policy $\pi_\phi$, parameterized by $\phi$, following prior work (Liu et al., 2018; Liang et al., 2020; Wen et al., 2016; Thoppilan et al., 2022). At each turn $t$, the agent observes the dialogue history $s_t$, which includes all preceding utterances $s_1, a_1, \ldots, s_{t-1}, a_{t-1}$, and produces a response $a_t$ as its action. For notational simplicity, we denote the full dialogue context up to turn $t$ as $s_t = s_{[:t-1]}$. The policy $\pi_\phi(s_t)$ thus defines a distribution over possible responses $a_t$ conditioned on $s_t$.

To align the agent's behavior with human preferences, reinforcement learning from human feedback (RLHF) is commonly employed (Ouyang et al., 2022; Jaques et al., 2020; Stiennon et al., 2020). Given a reward function $r_\theta(s_t, a_t)$, parameterized by $\theta$, that scores the quality of a generated response in context, we optimize the policy via reinforcement learning. A regularization term based on the Kullback-Leibler (KL) divergence from the original pretrained model is typically included to prevent the policy from straying too far from its initialization. The resulting objective takes the form:
$$\max_\phi \; \mathbb{E}[r_\theta(s_t, a_t)] - \gamma \, D_{\mathrm{KL}}(\pi_\phi(\cdot \mid s_t) \,\|\, \pi_\eta(\cdot \mid s_t)), \qquad (1)$$
where $\pi_\eta$ is a reference model. | https://arxiv.org/abs/2505.15922v1 |
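As a minimal illustration of the KL-regularized objective in Eq. (1), the sketch below computes the per-example quantity that RLHF implementations typically maximize: the learned reward minus a scaled KL penalty estimated from token log-probabilities under the policy and a frozen reference model. The tensor shapes and the single-sample KL estimator are assumptions for illustration, not the paper's exact training code.

```python
import torch

def kl_regularized_reward(
    reward: torch.Tensor,           # r_theta(s_t, a_t), shape (batch,)
    policy_logprobs: torch.Tensor,  # log pi_phi(y | s_t) per token, (batch, seq)
    ref_logprobs: torch.Tensor,     # log pi_eta(y | s_t) per token, (batch, seq)
    gamma: float = 0.05,            # KL coefficient (matches Appendix E)
) -> torch.Tensor:
    """Per-example objective from Eq. (1): reward minus a KL penalty.

    The KL term is approximated by summing log pi_phi - log pi_eta over
    the generated tokens (a standard single-sample estimator).
    """
    kl_estimate = (policy_logprobs - ref_logprobs).sum(dim=-1)
    return reward - gamma * kl_estimate
```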
4 Methods: LLM-GELI

In many long-form dialogue settings, we often lack fine-grained annotations for individual utterances. Instead, we may only have access to a single global explicit (GE) reward, a scalar score assigned at the end of an interaction that reflects the overall impression or outcome, such as how positively the user felt about the conversation. A key challenge is how to decompose this global reward into per-turn feedback that can train a reward model capable of assigning appropriate values to individual utterances. We address this challenge using a Large Language Model GELI (LLM-GELI) framework, which proceeds in two stages: (1) Global Explicit (GE) Reward Decomposition using the zero-shot prompting capabilities of a pretrained LLM, and (2) Local Implicit (LI) Reward Modeling.

4.1 Global Explicit Reward Decomposition via Zero-shot Prompting of LLM

We consider a long-form dialogue trajectory $\tau = \{(s_0, a_0), (s_1, a_1), \ldots, (s_T, a_T)\}$, where $s_t$ is the dialogue state (e.g., conversational history) and $a_t$ is the agent's utterance at turn $t$. At the end of the episode, we observe a scalar global explicit reward $R_{GE}(\tau) \in \mathbb{R}$, which reflects an overall human judgment of the interaction (e.g., how positively the user felt about the conversation).

Instead of assuming access to fine-grained reward supervision at each turn, we utilize an LLM as a reward decomposition oracle. The LLM $\mathcal{M}$ is prompted with the full dialogue $\tau$, the global reward $R_{GE}(\tau)$, and a set of the listener's aligned multimodal feedback features $\{x_t\}_{t=0}^{T}$, such as facial expressions or prosody.

The Multimodal-LLM-GELI outputs a turn-level reward decomposition that utilizes the multimodal features and the dialogue transcript:
$$\mathcal{M}_{\text{MM-LLM-GELI}}\big(\tau, \{x_t\}_{t=0}^{T}, R_{GE}(\tau)\big) = \{r_{LI}(s_t, a_t)\}_{t=0}^{T}.$$
The LLM-GELI outputs a turn-level reward decomposition based only on the dialogue transcript:
$$\mathcal{M}_{\text{LLM-GELI}}\big(\tau, R_{GE}(\tau)\big) = \{r_{LI}(s_t, a_t)\}_{t=0}^{T},$$
subject to the (soft) constraint in the prompt:
$$\sum_{t=0}^{T} r_{LI}(s_t, a_t) = R_{GE}(\tau).$$

In practice, this corresponds to the following prompt given to the LLM:

Prompt: You are given a final scalar score for Speaker A, indicating how much Speaker B experienced positive feelings (e.g., good, pleasant, happy) or negative feelings (e.g., bad, unpleasant, unhappy) during the conversation. Final score: {Session Score ($R_{GE}(\tau)$)}. Below, you are given the full conversation with aligned multimodal conversational features. Consider the utterances and the features, focusing on the utterances of Speaker A and the reactions of Speaker B. Redistribute the score across each of Speaker A's utterances. Whenever you identify a salient, important utterance that contributed positively or negatively to the final reward score, assign a numerical value (which may be positive, negative, or zero) representing its contribution. All of the assigned scores must sum up to: {Session Score ($R_{GE}(\tau)$)}. {Transcript / Multimodal Features}

This decomposition provides local implicit reward signals $r_{LI}$ that reflect the LLM's judgment of each action's contribution to the final reward. Importantly, these targets encode semantic, behavioral, and extralinguistic information (in the multimodal variant), as the model reasons over responses and reactions (e.g., listener facial affect). In our experiments, we use o3-mini (OpenAI, 2024) as the backbone reward decomposition LLM, $\mathcal{M}$.

4.2 Reward Modeling with LLM-Derived Local Implicit Rewards

Once turn-level rewards are obtained from the LLM, they are treated as local implicit (LI) supervision signals, $r_{LI}$, and used to train a lightweight reward function $r_\theta$, which conditions only on the textual dialogue context, i.e., state-action pairs $(s_t, a_t)$. Although the Multimodal-LLM-GELI had access to aligned multimodal cues during decomposition, the reward model $r_\theta$ does not. Instead, it learns from the LLM-derived pseudo-labels, effectively distilling high-level, crossmodal reasoning into a purely text-conditioned reward function. This reward function is optimized via a standard mean squared error (MSE) loss:
$$\mathcal{L}_{LI}(\theta) = \mathbb{E}_{(s_t, a_t) \sim \mathcal{D}}\Big[\big(r_{LI}(s_t, a_t) - r_\theta(s_t, a_t)\big)^2\Big],$$
where $r_{LI}(s_t, a_t)$ denotes the pseudo-ground-truth reward for turn $t$ obtained from the LLM-GELI method, and $r_\theta(s_t, a_t)$ is the learnable text-only reward function parameterized by $\theta$. | https://arxiv.org/abs/2505.15922v1 |
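To make stage (1) concrete, here is a sketch of prompting an LLM to redistribute the session score over Speaker A's utterances, parsing one number per turn, and rescaling so the soft sum constraint holds exactly. The call mirrors OpenAI's chat API and uses the o3-mini backbone named above, but the prompt wording, JSON output format, and rescaling step are illustrative assumptions rather than the paper's released code.

```python
import json
from openai import OpenAI

client = OpenAI()

def decompose_global_reward(transcript: list[str], global_score: float) -> list[float]:
    """Ask an LLM to split a session-level score into per-turn rewards."""
    prompt = (
        f"Final score: {global_score}\n"
        "Redistribute the score across each of Speaker A's utterances. "
        "Reply with a JSON list of numbers, one per utterance, summing to the final score.\n"
        + "\n".join(transcript)
    )
    response = client.chat.completions.create(
        model="o3-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    rewards = json.loads(response.choices[0].message.content)

    # The sum constraint is only "soft" in the prompt, so enforce it exactly
    # by spreading any residual mass uniformly across the turns.
    residual = global_score - sum(rewards)
    return [r + residual / len(rewards) for r in rewards]
```

The rescaled values can then serve as the pseudo-labels $r_{LI}$ for the MSE objective above.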
5 Experiments

In this section, we evaluate the effectiveness of the proposed LLM-GELI framework for preference-based learning, namely reward modeling and language model adaptation in long-form dialogue. Our evaluation therefore consists of two stages. First, we train reward functions using the various methods of global explicit reward decomposition, denoted "GE", and/or local implicit reward supervision, denoted "LI". Second, we fine-tune a dialogue language model using Proximal Policy Optimization (PPO) (Schulman et al., 2017), guided by the learned reward functions, to improve response generation. Due to computational constraints, all reward function training and RL fine-tuning procedures are executed with a single run. We assess each method's impact using both automatic and human-centric metrics.

5.1 Dataset

We conduct our training and evaluation experiments using the CANDOR dataset (Reece et al., 2023), chosen for its long-form, naturalistic dialogue structure (an average of 159.4 turns, 31.3 minutes per conversation, and 17.81 words per turn) along with its scale (1,656 conversations, over 7 million words, and 850 hours of data). CANDOR also includes aligned video recordings, enabling the multimodal analysis common in face-to-face conversation corpora. We use this dataset both to train reward functions and to sample dialogue contexts for response generation. Specifically, we construct disjoint splits: a held-out set of approximately 30,000 dialogue history-utterance pairs for reward model training, and a larger set of roughly 100,000 pairs for reinforcement learning. The global explicit (GE) signal is derived from the post-interaction survey item: "Overall during your conversation, to what extent did you feel positive feelings (e.g., good, pleasant, happy) or negative feelings (e.g., bad, unpleasant, unhappy)?". To assess generalization, we also evaluate on the SODA dataset (Kim et al., 2023a), a large-scale synthetic social dialogue corpus generated via GPT-3.5 and grounded in a structured social commonsense knowledge graph. Human evaluations indicate that SODA conversations are more consistent, natural, and specific than other human-authored datasets. It contains 1.5 million dialogues, with an average of 7.6 turns and 16.1 words per turn.

Figure 2: Comparison of reward decomposition across models on a CANDOR dialogue segment. MM-LLM GELI (purple) assigns contextually appropriate, affect-sensitive local rewards, aligning closely with social-emotional cues, better grounding in multimodal feedback, and stronger alignment with human intuition. See Sec. 6.3.

Multimodal Features. We utilize a set of 17 multimodal conversational features capturing prosodic, visual-social, and linguistic cues to model the affective dynamics of interactions. Audio-based prosodic features include (1) f0 (fundamental frequency, representing pitch), calculated using the parselmouth library (Jadoul et al., 2018); (2) intensity (voice amplitude), from a predictor trained on the RAVDESS dataset (Livingstone and Russo, 2018); (3) jitter (indicative of pitch instability), from the DisVoice library (Dehak et al., 2007); and (4) log energy (signal power, capturing loudness), from the librosa library (McFee et al., 2020). Visual-social features extracted from the video modality include detectors such as (5) gaze (whether the speaker is looking on-screen), calculated based on white pixel value counts; (6) nod_yes and (7) nod_no (indicating head nodding behaviors), where a nod is defined as having a nod zero-crossing threshold >= 2 and a nod scaled face range threshold > 0.1; and | https://arxiv.org/abs/2505.15922v1 |
(8) smile (presence of a smile), from a smile detector (cv2.CascadeClassifier) (Bradski, 2000). Additionally, we incorporate emotion recognition scores derived from facial expression classifiers trained on AffectNet (Mollahosseini et al., 2017), including probabilistic scores for (9-16) anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. Finally, we also include the linguistic content, i.e., the (17) utterance of the speakers.

Language Descriptors. To enable effective integration with language models, we convert continuous multimodal behavioral features, which vary over time, into discrete natural language descriptors aligned with each utterance. Since these features are sampled at different temporal resolutions than utterances (e.g., per frame or per second), we aggregate the feature values over the utterance duration by computing their mean and appending them to the corresponding utterances; a sketch of this step is shown after the next paragraph. This formulation allows us to leverage the zero-shot multimodal reasoning capabilities of large language models, as outlined in prior work on language-as-interface paradigms. An example of such a processed sample is included in the Supplementary.

Focusing on Listener Reactions. Since our goal is to decompose a global reward based on the listener's affective response, we condition the model exclusively on the multimodal features of the listener (i.e., the participant who completed the post-interaction survey and provided the overall affective rating). This choice centers the reward attribution on the reactions and experiences of the evaluator rather than the speaker, naturally aligning with the perspective-taking required for accurate reward decomposition. | https://arxiv.org/abs/2505.15922v1 |
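A minimal sketch of the descriptor construction described above: per-frame feature tracks are averaged over each utterance's time span and rendered as a natural-language suffix. The feature names, the frame rate, and the sentence template are hypothetical; the paper's actual descriptor format may differ.

```python
import numpy as np

def describe_utterance(utterance: str, features: dict[str, np.ndarray],
                       start: float, end: float, fps: float = 30.0) -> str:
    """Append mean listener features over [start, end] as text descriptors."""
    lo, hi = int(start * fps), int(end * fps)
    parts = []
    for name, track in features.items():
        mean_value = float(track[lo:hi].mean())  # aggregate over the utterance span
        parts.append(f"{name}={mean_value:.2f}")
    return f"{utterance} [listener cues: {', '.join(parts)}]"

# Hypothetical per-frame tracks for a 2-second utterance at 30 fps.
tracks = {"smile": np.random.rand(60), "happiness": np.random.rand(60)}
print(describe_utterance("So um are you done with school?", tracks, 0.0, 2.0))
```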
5.2 Baseline Models

We compare LLM-GELI against several state-of-the-art Global Explicit reward decomposition methods designed to learn a fine-grained reward function from a single global explicit (GE) annotation. To ensure a fair comparison, we also evaluate models trained using only local implicit (LI) multimodal signals. All methods build on a small BART model (Lewis et al., 2019) fine-tuned for conversational summarization,¹ with additional linear layers trained for reward prediction. Below, we describe the methods used for comparison; Figure 3 illustrates the differences between the decomposition methods.

GE: RRD (Randomized Return Decomposition) (Ren et al., 2021) reformulates return decomposition using Monte Carlo sampling to scale least-squares reward redistribution to long-horizon tasks. GE: IRCR (Iterative Relative Credit Refinement) (Gangwani et al., 2020) implements uniform reward redistribution by assigning normalized trajectory scores as transition-level proxy rewards. GE: RUDDER (Return Decomposition for Delayed Rewards) (Arjona-Medina et al., 2019) uses an LSTM-based return predictor to assign step-level credit via differences in consecutive return estimates. LI: Visual Affect (VA) leverages facial expressions, via a CNN-based affect classifier trained on AffectNet (Mollahosseini et al., 2017), as local rewards. LI: Language Sentiment (LS) models implicit reward from utterance sentiment using a pretrained multilingual DeBERTa classifier (He et al., 2020; Pang et al., 2023). GE & LI: Global Explicit Local Implicit Decomposition (GELI) (Lee et al., 2024) jointly trains a reward function to redistribute a global explicit reward using RRD (Ren et al., 2021), shaped by local implicit cues, such as facial affect or gaze. However, their method required carefully hand-engineered functions to interpret these nonverbal signals into local implicit rewards.

¹https://huggingface.co/kabita-choudhary/finetuned-bart-for-conversation-summary

Figure 3: High-level schematic of different Reward Modeling and Reward Decomposition Methods. Arrows (→) represent forward passes or assignments; dashed lines (- - -) indicate supervised training targets used to train the reward model $r_\theta(\cdot) = \hat{r} \approx r$. (a) Local Explicit Reward Modeling, used in conventional RLHF (Ouyang et al., 2022). (b) Local Implicit Rewards, where signals are derived from proxy rewards in the form of outputs from a trained task-relevant function which takes in multimodal or task-related signals and outputs a score (Pang et al., 2023). (c) Global Explicit Reward Decomposition, where there is a single trajectory-level reward that is decomposed to allocate scores to each step (Arjona-Medina et al., 2019; Ren et al., 2021; Gangwani et al., 2020). (d) The GELI approach (Lee et al., 2024), where global explicit rewards and local implicit rewards are used in conjunction. (e) Our proposed LLM-GELI approach, where reward decomposition is offloaded to a pretrained LLM, which interprets the local listener reaction cues. | https://arxiv.org/abs/2505.15922v1 |
Evaluation of Reward Functions. To assess the quality of learned reward functions, consistent with prior work (Lee et al., 2024), we report two key metrics: the Global Loss and the Local Difference. The Global Loss, $\mathcal{L}_{GE}(\theta)$, quantifies how accurately the predicted fine-grained rewards sum to the global explicit reward $R_{GE}(\tau)$ across a dialogue trajectory $\tau$. It is computed as the mean squared error (MSE):
$$\mathcal{L}_{GE}(\theta) = \mathbb{E}_{\tau \sim \mathcal{D}}\Big[\Big(R_{GE}(\tau) - \sum_{t=0}^{T-1} r_\theta(s_t, a_t)\Big)^2\Big] \qquad (2)$$
To evaluate alignment with local implicit signals, we introduce the Local Difference, defined as the difference in expected predicted rewards conditioned on positive versus non-positive facial affect (as detected from visual feedback). Formally:
$$\Delta \hat{r}_{LI} = \mathbb{E}\big[r_\theta(s_t, a_t) \mid f(s^{mm}_{a_t}) = \text{positive affect}\big] - \mathbb{E}\big[r_\theta(s_t, a_t) \mid f(s^{mm}_{a_t}) \neq \text{positive affect}\big] \qquad (3)$$
This metric captures how the predicted reward responds to visual cues from the listener. Under the assumption that positive visual affect implies that the user has received the utterance positively, we expect $\Delta \hat{r}_{LI} > 0$, indicating alignment between the reward signal and implicit user visual feedback from facial expressions. | https://arxiv.org/abs/2505.15922v1 |
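A small sketch of how both diagnostics might be computed from model predictions, assuming per-dialogue lists of predicted turn rewards and binary affect labels; the data layout is hypothetical.

```python
import numpy as np

def global_loss(global_rewards, predicted_turn_rewards):
    """Eq. (2): MSE between each session score and the sum of predicted turn rewards."""
    errors = [(R - np.sum(r_hat)) ** 2
              for R, r_hat in zip(global_rewards, predicted_turn_rewards)]
    return float(np.mean(errors))

def local_difference(predicted, positive_affect_mask):
    """Eq. (3): mean predicted reward on positive-affect turns minus the rest."""
    predicted = np.asarray(predicted)
    mask = np.asarray(positive_affect_mask, dtype=bool)
    return float(predicted[mask].mean() - predicted[~mask].mean())
```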
5.3 Fine-Tuning Language Models via Reinforcement Learning

We adapt the LLaMA-2 model (Touvron et al., 2023)² using reinforcement learning with feedback from the trained reward functions. Specifically, we employ the PPO-based RLHF framework from the TRL library (von Werra et al., 2020), with LoRA (Hu et al., 2021) applied to reduce memory and compute overhead. The complete hyperparameter settings are provided in Appendix E.

Evaluation of Generated Responses. We conduct a human evaluation following prior protocols (Lee et al., 2022, 2024), using 9 criteria commonly used to assess dialogue quality. A total of 300 crowd workers were recruited via Amazon Mechanical Turk. Annotators were presented with dialogue histories and model-generated responses, and asked to identify which model best satisfied each criterion. At the end of the task, participants were asked which model they would prefer to converse with again. This is described in Section 6.1.

6 Results & Discussion

In this section, we discuss the quantitative results and human evaluation of our experiments.

²We selected this model to mitigate potential confounds arising from data leakage, as the training data of more recent LLaMA variants may include the CANDOR dataset (public release date: March 2023).

| CANDOR (Reece et al., 2023) | Connection | Positivity | Social | Inclination | Interestingness | Reuse | Specific | Sensible | Fluency (/100%) ↑ |
|---|---|---|---|---|---|---|---|---|---|
| GPT-3.5 (text-davinci-002) | 6.33 ± 2.05 | 7.67 ± 1.25 | 7.33 ± 2.49 | 7.0 ± 2.16 | 8.67 ± 0.94 | 8.0 ± 1.63 | 76.0 ± 0.82 | 78.67 ± 1.7 | 79.0 ± 2.94 |
| Llama | 20.67 ± 3.77 | 22.0 ± 3.27 | 23.67 ± 3.09 | 19.0 ± 2.83 | 8.67 ± 2.49 | 16.33 ± 4.71 | 86.33 ± 2.36 | 94.33 ± 2.49 | 92.33 ± 3.3 |
| Llama + GE: RRD | 2.67 ± 1.25 | 2.0 ± 0.82 | 2.67 ± 0.47 | 2.67 ± 0.94 | 2.67 ± 0.94 | 2.33 ± 0.94 | 71.67 ± 1.25 | 69.0 ± 0.82 | 69.33 ± 3.77 |
| Llama + LI: LS | 16.67 ± 1.25 | 15.33 ± 1.25 | 18.0 ± 1.41 | 13.0 ± 2.16 | 6.33 ± 0.47 | 11.33 ± 1.7 | 86.0 ± 0.0 | 94.0 ± 0.82 | 92.33 ± 2.05 |
| Llama + LI: VA | 12.67 ± 1.25 | 12.67 ± 2.49 | 13.67 ± 2.05 | 12.33 ± 2.87 | 7.0 ± 1.63 | 9.33 ± 4.5 | 82.67 ± 3.09 | 90.0 ± 2.94 | 89.67 ± 2.05 |
| Llama + GELI | 27.0 ± 4.55** | 25.67 ± 2.62* | 26.0 ± 3.56* | 25.0 ± 3.74** | 19.33 ± 1.7** | 23.33 ± 2.87** | 84.67 ± 3.3 | 87.0 ± 5.35 | 84.33 ± 4.5* |
| Llama + LLM-GELI (Ours) | 23.67 ± 4.99* | 25.33 ± 3.77* | 24.33 ± 3.09 | 25.33 ± 3.77** | 18.0 ± 2.94* | 20.0 ± 1.41* | 89.33 ± 0.47* | 93.67 ± 0.94 | 91.33 ± 2.87 |
| Llama + Multimodal-LLM-GELI (Ours) | 25.0 ± 2.16* | 23.33 ± 1.25 | 26.33 ± 1.25** | 24.33 ± 2.49* | 21.67 ± 1.89** | 21.33 ± 0.47* | 89.67 ± 1.25* | 94.67 ± 2.49 | 93.33 ± 0.94* |

Table 1: Human evaluation on an unseen 3-way split of the test set consisting of 150 samples of CANDOR (see below). | https://arxiv.org/abs/2505.15922v1 |
| SODA (Kim et al., 2023a) | Connection | Positivity | Social | Inclination | Interestingness | Reuse | Specific | Sensible | Fluency (/100%) ↑ |
|---|---|---|---|---|---|---|---|---|---|
| Human | 3.67 ± 0.47 | 3.33 ± 0.94 | 5.0 ± 2.16 | 3.67 ± 1.25 | 3.33 ± 0.47 | 2.67 ± 0.47 | 97.0 ± 1.63 | 97.0 ± 2.16 | 97.67 ± 1.25 |
| Llama | 6.0 ± 1.41 | 3.33 ± 2.62 | 6.0 ± 0.82 | 5.33 ± 0.47 | 3.67 ± 0.47 | 5.33 ± 1.7 | 98.33 ± 0.94 | 99.33 ± 0.47 | 97.67 ± 1.89 |
| Llama + GE: RRD | 1.67 ± 0.47 | 1.67 ± 0.47 | 2.33 ± 1.25 | 1.67 ± 0.94 | 2.0 ± 0.82 | 1.33 ± 1.25 | 95.33 ± 1.25 | 93.0 ± 2.16 | 94.33 ± 1.89 |
| Llama + LI: LS | 5.33 ± 1.7 | 4.33 ± 0.94 | 7.33 ± 1.7* | 4.33 ± 1.25 | 1.67 ± 0.47 | 3.33 ± 1.7 | 99.0 ± 1.41 | 99.0 ± 0.82 | 98.67 ± 1.25* |
| Llama + LI: VA | 6.0 ± 2.16 | 4.67 ± 1.25 | 6.0 ± 2.45 | 5.33 ± 3.09 | 2.0 ± 0.82 | 4.33 ± 2.36 | 98.0 ± 0.82 | 98.67 ± 0.94 | 98.33 ± 1.7 |
| Llama + GELI | 7.0 ± 0.82** | 5.0 ± 1.41* | 8.0 ± 1.63** | 8.33 ± 2.36** | 5.67 ± 1.89** | 6.33 ± 2.05* | 98.67 ± 0.47 | 99.33 ± 0.94 | 98.67 ± 0.47* |
| Llama + LLM-GELI (Ours) | 4.67 ± 2.05 | 4.0 ± 0.82 | 6.0 ± 1.63 | 4.0 ± 1.63 | 1.67 ± 0.47 | 3.67 ± 2.62 | 98.0 ± 0.82 | 99.67 ± 0.47 | 98.67 ± 1.25 |
| Llama + Multimodal-LLM-GELI (Ours) | 7.67 ± 1.25** | 5.67 ± 0.94** | 7.33 ± 1.7* | 7.0 ± 2.16* | 5.67 ± 1.7** | 6.33 ± 0.47* | 98.67 ± 1.25 | 99.33 ± 0.47 | 97.33 ± 1.25 |

Table 2: Human evaluation on an unseen 3-way split test set consisting of 100 samples from an OOD dataset: SODA (Kim et al., 2023a). Green indicates best performance; bold indicates statistically significant improvement over the unadapted LLaMA baseline. Significance levels: * p < 0.05, ** p < 0.01.

6.1 Human Evaluation

Table 1 presents human evaluation results on an unseen split of the CANDOR dataset. Our approach is the best-performing method in 6 out of 9 metrics. The Multimodal-LLM-GELI model, by augmenting each utterance with multimodal descriptors (e.g., facial affect, gaze), further improves performance, achieving the best scores in Social (26.33 ± 1.25, p < 0.01), Interestingness (21.67 ± 1.89, p < 0.01), Specificity (89.67 ± 1.25, p < 0.05), Sensibility (94.67 ± 2.49), and Fluency (93.33 ± 0.94, p < 0.05). The LLM-GELI model achieves the best score in Inclination (25.33 ± 3.77, p < 0.01). Furthermore, our proposed approach results in statistically significant improvements over Base-Llama in 7 out of 9 metrics. For the SODA (Kim et al., 2023a) unseen out-of-distribution dataset, as shown in Table 2, Multimodal-LLM-GELI adaptation results in statistically significant improvements over Base-Llama in 6 out of 9 metrics, and performs best on 2 out of 9 metrics: Connection (7.67 ± 1.25, p < 0.01) and Positivity (5.67 ± 0.94, p < 0.01). These strong results for LLM-GELI and Multimodal-LLM-GELI stand in stark contrast to GELI, which requires a carefully designed manual reward shaping function. This indicates that LLMs can serve as effective decomposers of sparse global feedback into reward signals, bypassing the need for task-specific engineering and domain heuristics. This dramatically reduces the overhead in reward shaping function design, while enabling scalable alignment of agents with nuanced human preferences. We refer readers to Appendix Sec. H for real utterance generations of Multimodal-LLM-GELI and LLM-GELI compared to previous baselines.

| Feedback Type | Baseline | $\mathcal{L}_{GE}$ ↓ (Global Loss) | $\Delta \hat{r}_{LI} > 0$ (Local Difference) |
|---|---|---|---|
| Human | | N/A | 0.087 ± 0.05 |
| | Mean | 245.495 | 0.000 |
| | Mode | 289.473 | 0.000 |
| GE | IRCR | 394.041 | 0.008 |
| GE | RUDDER | 285.720 | 0.003 |
| GE | RRD (K = 32) | 172.246 | 0.007 |
| GE | RRD (K = 160) | 188.382 | 0.008 |
| LI | Visual Affect (VA) | 1546.17 | 0.256 |
| LI | Language Sentiment (LS) | 825.31 | 0.010 |
| Manual GELI | IRCR + VA | 722.687 | 0.392 |
| Manual GELI | RUDDER + VA | 623.882 | 0.030 |
| Manual GELI | RRD + VA | 176.897 | 0.063 |
| LLM GELI | Language Only (LLM) | 24.981 | -0.001 |
| LLM GELI | Multimodal (MM-LLM) | 24.322 | 0.012 |

Table 3: Automatic Evaluation on Reward Function Training. Left: We report the Global Loss ($\mathcal{L}_{GE}$), measuring how accurately the predicted local rewards sum to the ground-truth global feedback. Among all models, the LLM GELI methods achieve significantly lower global loss, indicating better reward decomposition. Right: We evaluate the Local Difference ($\Delta \hat{r}_{LI}$), which quantifies how well the predicted rewards align with the local implicit multimodal feedback.

6.2 Reward Function Evaluation | https://arxiv.org/abs/2505.15922v1 |
Global Loss ($\mathcal{L}_{GE}$): Table 3 (left) reports the Global Loss $\mathcal{L}_{GE}$, which quantifies how well the predicted local rewards sum to the ground-truth global feedback, as defined in Eq. 2. While traditional return decomposition methods such as RUDDER and IRCR achieve moderate MSE scores, our proposed LLM GELI methods, particularly the Multimodal (MM-LLM) variant, achieve the lowest global loss by a large margin. Notably, this is achieved without any hand-crafted return redistribution, in contrast to methods like GELI, and without being explicitly optimized to minimize $\mathcal{L}_{GE}$, unlike RUDDER and RRD. This result highlights that leveraging local implicit feedback from pretrained LLMs (and multimodal inputs) can lead to more accurate reward decomposition. The low $\mathcal{L}_{GE}$ of both the Language-Only and Multimodal LLM GELI models suggests that LLM-based redistribution offers a generalizable and domain-agnostic solution.

Local Difference ($\Delta \hat{r}_{LI}$): Table 3 (right) shows the Local Difference metric $\Delta \hat{r}_{LI}$, which measures the alignment of predicted reward differences with local implicit feedback, in our case the difference in predicted reward for turns with positive versus negative visual affect. Models trained solely with global feedback (GE-only methods) yield near-zero $\Delta \hat{r}_{LI}$, indicating their inability to distinguish between positive and negative affective states. In contrast, LI-only baselines that directly optimize on multimodal inputs such as visual affect (VA) yield much higher $\Delta \hat{r}_{LI}$, validating the utility of these local signals. We observe that the Multimodal LLM-GELI model achieves a non-trivial $\Delta \hat{r}_{LI}$ (0.012), outperforming all GE baselines and approaching the behavior of LI-tuned models. This suggests that the implicit structure captured by the LLM reward decomposition can encode meaningful multimodal cues, even without explicitly training on them. In contrast, the Language-Only LLM GELI variant does not show this sensitivity, reinforcing the importance of multimodal inputs for capturing socially grounded implicit signals. Together, these findings demonstrate that LLM GELI not only eliminates the need for hand-designed reward functions but also captures affectively salient local distinctions when multimodal inputs are present.

6.3 Comparison of Decomposed Rewards

Figure 2 presents a turn-by-turn comparison of reward decompositions from different models on a CANDOR dialogue segment. Across the conversation, we observe that MM-LLM GELI (purple) consistently aligns rewards with socially meaningful content (e.g., self-disclosures, affirmations), while discounting low-affect or filler utterances like "Mhm" or "Okay." In contrast, baseline methods like GELI (blue), VA (pink), and LS (orange) often misattribute reward to neutral or socially uninformative turns, or fail to identify affective shifts. The RRD and IRCR models, optimized for global loss, show erratic or flat credit assignment. The key difference between MM-LLM Decomp. and LLM Decomp. lies in multimodal grounding: MM-LLM uses local visual affect (e.g., facial expressions, gaze) to inform reward assignments, allowing it to distinguish between utterances with similar surface forms but divergent social impacts, something the language-only variant cannot reliably capture. In the segment where Speaker A asks, "So um are you done with school? have you finished college?", we observe a key divergence in how models assigned rewards. Both GELI and LLM Decomp. attribute either neutral or slightly positive reward, likely based solely on the linguistic surface (a question showing interest). However, MM-LLM Decomp. | https://arxiv.org/abs/2505.15922v1 |
assigns a | https://arxiv.org/abs/2505.15922v1 |
clearly negative reward. This contrast arises because MM-LLM incorporates multimodal cues, such as facial expression, vocal hesitancy, or disengaged affect from the listener, which may signal discomfort or a negative emotional reaction to the question.

7 Conclusion

We introduce a new paradigm for preference-based alignment of conversational agents by offloading reward decomposition to large language models. Unlike prior methods that rely on manually engineered functions to interpret nonverbal cues, LLM-GELI leverages the zero-shot reasoning capabilities of pretrained LLMs to infer turn-level rewards from a single global trajectory score. Our Multimodal-LLM-GELI extension further integrates nonverbal feedback through natural language descriptors, enabling rich, socially grounded reward modeling without requiring architectural changes or multimodal training. Empirical results on the CANDOR dataset show that our approach outperforms the state of the art in human and automatic evaluations. These findings underscore the potential of LLMs as scalable and domain-agnostic reward decomposers, bridging the gap between sparse feedback and fine-grained behavioral alignment in real-world dialogue systems. This work opens the door to more generalizable, interpretable, and low-friction alignment methods for long-form, multimodal interactions.

8 Limitations

While LLM-GELI introduces a scalable and interpretable approach to reward decomposition, its effectiveness depends critically on the reasoning capabilities of the underlying large language model. Our method assumes that the pretrained LLM has sufficient social and affective knowledge to infer turn-level credit assignments from global feedback. However, these inferences are ultimately shaped by the LLM's pretraining data, which may omit domain-specific conversational dynamics or culturally situated behaviors. As such, reward assignments may be inconsistent or misaligned in unfamiliar contexts.

The decomposition procedure is also inherently sensitive to prompt design. Despite using structured prompts grounded in dialogue and behavioral context, small changes in prompt wording or input formatting can lead to significant variation in the resulting reward signals. This introduces instability and reduces reproducibility, particularly when transferring to new domains or LLM backbones. Addressing prompt sensitivity remains an open challenge in prompt-based credit assignment methods.

For the multimodal variant, the quality of decomposition depends on the accuracy of behavioral signal extraction and its subsequent translation into natural language descriptors. While this textual abstraction enables compatibility with language-only LLMs, it may obscure or distort the nuances of the original multimodal data, especially in cases where behavioral cues are ambiguous or conflicting. The resulting reward signals are thus only as reliable as the upstream perception and descriptor generation pipelines.

There are risks that could arise as a result of more social dialogue agents that can interact with people in long-term interactions. Conversational agents could be used maliciously for deception, manipulation, and the spread of misinformation. Furthermore, conversational agents which use multimodal data could heighten the seriousness of these issues, as models can detect subtle cues such as microexpressions to infer and manipulate the user. | https://arxiv.org/abs/2505.15922v1 |
As a potential measure to mitigate such misuse, we plan to release our code and model weights under a license that prevents the use of our assets by any party that supports or contributes | https://arxiv.org/abs/2505.15922v1 |
to false impersonation or hate speech (Do No Harm, Nonviolent Public, or Hippocratic License).

References

Jose A Arjona-Medina, Michael Gillhofer, Michael Widrich, Thomas Unterthiner, Johannes Brandstetter, and Sepp Hochreiter. 2019. Rudder: Return decomposition for delayed rewards. Advances in Neural Information Processing Systems, 32.
G. Bradski. 2000. The OpenCV Library. Dr. Dobb's Journal of Software Tools.
Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, and Quanquan Gu. 2024. Self-play fine-tuning converts weak language models to strong language models. arXiv preprint arXiv:2401.01335.
Najim Dehak, Pierre Dumouchel, and Patrick Kenny. 2007. Modeling prosodic features with joint factor analysis for speaker verification. IEEE Transactions on Audio, Speech, and Language Processing, 15(7):2095–2103.
Djellel Difallah, Elena Filatova, and Panos Ipeirotis. 2018. Demographics and dynamics of mechanical turk workers. In Proceedings of the eleventh ACM international conference on web search and data mining, pages 135–143.
Tanmay Gangwani, Yuan Zhou, and Jian Peng. 2020. Learning guidance rewards with trajectory-space smoothing. Advances in Neural Information Processing Systems, 33:822–832.
Dongyoung Go, Tomasz Korbak, Germán Kruszewski, Jos Rozen, and Marc Dymetman. 2023. Compositional preference models for aligning lms. arXiv preprint arXiv:2310.13011.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
Yannick Jadoul, Bill Thompson, and Bart De Boer. 2018. Introducing parselmouth: A python interface to praat. Journal of Phonetics, 71:1–15.
Natasha Jaques, Judy Hanwen Shen, Asma Ghandeharioun, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Shane Gu, and Rosalind Picard. 2020. Human-centric dialog training via offline reinforcement learning. arXiv preprint arXiv:2010.05848.
Nan Jiang, Alex Kulesza, Satinder Singh, and Richard Lewis. 2015. The dependence of effective planning horizon on model accuracy. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pages 1181–1189.
Hyunwoo Kim, Jack Hessel, Liwei Jiang, Peter West, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, and Yejin Choi. 2023a. SODA: Million-scale dialogue distillation with social commonsense contextualization. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12930–12949, Singapore. Association for Computational Linguistics.
Sungdong Kim, Sanghwan Bae, Jamin Shin, Soyoung Kang, Donghyun Kwak, Kang Min Yoo, and Minjoon Seo. 2023b. Aligning large language models through synthetic feedback. arXiv preprint arXiv:2305.13735.
Dong Won Lee, Hae Park, Yoon Kim, Cynthia Breazeal, and Louis-Philippe Morency. 2024. Global reward to local rewards: Multimodal-guided decomposition for improving dialogue agents. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15737–15762.
Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, et al. 2022. Evaluating human-language model interaction. arXiv preprint arXiv:2212.09746.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Weixin Liang, Youzhi Tian, Chengcai Chen, and Zhou Yu. 2020. Moss: End-to-end dialog system framework with modular supervision. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8327–8335.
Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, and Larry Heck. 2018. Dialogue learning with human teaching and feedback in end-to-end trainable task-oriented dialogue systems. arXiv preprint arXiv:1804.06512.
Steven R Livingstone and Frank A Russo. 2018. The ryerson audio-visual database of emotional speech and song (ravdess): A dynamic, multimodal set of facial and vocal expressions in north american english. PloS one, 13(5):e0196391.
Brian McFee, Vincent Lostanlen, Alexandros Metsai, Matt McVicar, Stefan Balke, Carl Thomé, C Raffel, F Zalkow, A Malek, K Lee, et al. 2020. librosa/librosa: 0.8.0. Version 0.8.0, Zenodo, doi: 10.
Mike A Merrill, Akshay Paruchuri, Naghmeh Rezaei, Geza Kovacs, Javier Perez, Yun Liu, Erik Schenck, Nova Hammerquist, Jake Sunshine, Shyam Tailor, et al. 2024. Transforming wearable data into health insights using large language model agents. arXiv preprint arXiv:2406.06464.
Ali Mollahosseini, Behzad Hasani, and Mohammad H Mahoor. 2017. Affectnet: A database for facial expression, valence, and arousal computing in the wild. IEEE Transactions on Affective Computing, 10(1):18–31.
OpenAI. 2024. Openai o3-mini. Accessed: 2025-05-18.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.
Richard Yuanzhe Pang, Stephen Roller, Kyunghyun Cho, He He, and Jason Weston. 2023. Leveraging implicit feedback from deployment data in dialogue. arXiv preprint arXiv:2307.14117.
Marek Petrik and Bruno Scherrer. 2008. Biasing approximate dynamic programming with a lower discount factor. Advances in neural information processing systems, 21.
Eduardo Pignatelli, Johan Ferret, Tim Rocktäschel, Edward Grefenstette, Davide Paglieri, Samuel Coward, and Laura Toni. 2024. Assessing the zero-shot capabilities of llms for action evaluation in rl. arXiv preprint arXiv:2409.12798.
Yun Qu, Yuhang Jiang, Boyuan Wang, Yixiu Mao, Cheems Wang, Chang Liu, and Xiangyang Ji. 2025. Latent reward: Llm-empowered credit assignment in episodic reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 20095–20103.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PmLR.
Andrew Reece, Gus Cooney, Peter Bull, Christine Chung, Bryn Dawson, Casey Fitzpatrick, Tamara Glazer, Dean Knox, Alex Liebscher, and Sebastian Marin. 2023. The candor corpus: Insights from a large multimodal dataset of naturalistic conversation. Science Advances, 9(13):eadf3197.
Zhizhou Ren, Ruihan Guo, Yuan Zhou, and Jian Peng. 2021. Learning long-term reward redistribution via randomized return decomposition. arXiv preprint arXiv:2111.13485.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008–3021.
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, and Shengyi Huang. 2020. Trl: Transformer reinforcement learning. https://github.com/huggingface/trl.
Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2016. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562.
Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. 2023. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381.
Andy Zeng, Maria Attarian, Brian Ichter, Krzysztof Choromanski, Adrian Wong, Stefan Welker, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, et al. 2022. Socratic models: Composing zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598.

A Human Evaluation Metrics Definitions

Here we list the human evaluation metrics utilized in the study, which we draw from (Lee et al., 2022).
• Sensibleness (turn-level; binary; reversed scores for the negated question): Mark responses where the chatbot did NOT make sense.
• Specificity (turn-level; binary; reversed scores for the negated question): Mark the responses that were NOT specific to what you had said, i.e., responses that could have been used in many different situations. For example, if you say "I love tennis" then "That's nice" would be a non-specific response, but "Me too, I can't get enough of Roger Federer!" would be a specific response.
• Emotional Connection (turn-level; binary): Which responses did you feel an emotional connection to? (EmpatheticDialogues)
• Social: Which responses made you feel the chatbot understood social contexts and situations? (CommonsenseDialogues)
• Interestingness (turn-level; binary): Mark the responses that were particularly interesting or boring.
• Inclination (turn-level; binary; reversed scores for the negated question): Which responses made you NOT want to talk with the chatbot again?
• Reuse (turn-level; binary): Would you want to talk to this chatbot again?
• Positivity (turn-level; binary): Which AI responses most likely made User feel positive feelings?

The human evaluation scores are conducted via a binary-level classification. For a given question, the annotators can select the models that satisfy the question. For example, for 'Positivity', the annotators are given the following question and answer choices: Which AI responses most likely made User feel positive feelings? (A) (B) (C) (D) (E) (F). The options A-F refer to models which are randomized in order and anonymized. | https://arxiv.org/abs/2505.15922v1 |
The annotators can select multiple models if they satisfy the | https://arxiv.org/abs/2505.15922v1 |
question. Therefore, Table 1 can be interpreted as the percentage of instances, out of the samples (300 in our case), where each model satisfied the question.

B PPO Objective

$$\text{objective}(\phi) = \mathbb{E}_{(x,y) \sim D_{\pi^{RL}_\phi}}\Big[ r_\theta(x, y) - \beta \log\big(\pi^{RL}_\phi(y \mid x) / \pi^{SFT}(y \mid x)\big) \Big] + \gamma \, \mathbb{E}_{x \sim D_{\text{pretrain}}}\big[ \log(\pi^{RL}_\phi(x)) \big] \qquad (4)$$

General form of the PPO objective.

C Artifacts & Resources

Did you discuss the license or terms for use and/or distribution of any artifacts? TRL (von Werra et al., 2020): Apache License 2.0. LLAMA-2 (Touvron et al., 2023): license can be found at https://ai.meta.com/llama/license/. SODA (Kim et al., 2023a): CC BY-NC 4.0. CANDOR (Reece et al., 2023): Terms of Use from https://betterup-data-requests.herokuapp.com/: These are the terms of use we require all users and downloaders of this dataset, including you, the applicant, to abide by. Please select the answer option "I agree to fully abide by these terms of use" if you wish to continue. Terms of Use: (1) You agree to only use this data for legitimate academic and/or scientific research, meaning no analyses, reviews, or derivative works of this dataset may be used for commercial or for-profit purposes in any way; (2) You agree not to re-publish any new versions of this dataset, whether original or derivative (i.e. modified or updated in some way), without explicit permission from BetterUp, Inc.; (3) You agree not to use any part of this dataset for the purpose of personally identifying, locating, or gathering any kind of information about individuals who appear in the recordings in this dataset, beyond the information that is provided in the dataset itself; (4) In the case that an individual shares personally-identifiable information about themselves in a recording, you agree not to use, analyze, share, or publish that information in any form.

Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? We rigorously examined the terms of use and the intended use, and ensured that our usage is consistent with the intended use.

D Data Collection & Anonymization

Did you discuss the steps taken to check whether the data that was collected/used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect/anonymize it? We utilize the CANDOR dataset and follow its terms of use by agreeing not to use the dataset for personally identifying, locating, or gathering any kind of information about individuals who appear in the recordings in the dataset, beyond the information that is provided in the dataset itself. We do not use any explicit information that uniquely identifies people.

Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? The coverage of the domains discussed in the CANDOR dataset is presented | https://arxiv.org/abs/2505.15922v1 |
C Artifacts & Resources

Did you discuss the license or terms for use and/or distribution of any artifacts?

TRL (von Werra et al., 2020): Apache License 2.0. LLAMA-2 (Touvron et al., 2023): License can be found here: https://ai.meta.com/llama/license/. SODA (Kim et al., 2023a): CC BY-NC 4.0. CANDOR (Reece et al., 2023): Terms of Use from https://betterup-data-requests.herokuapp.com/: These are the terms of use we require all users and downloaders of this dataset, including you, the applicant, to abide by. Please select the answer option "I agree to fully abide by these terms of use" if you wish to continue. Terms of Use: (1) You agree to only use this data for legitimate academic and/or scientific research, meaning no analyses, reviews, or derivative works of this dataset may be used for commercial or for-profit purposes in any way; (2) You agree not to re-publish any new versions of this dataset, whether original or derivative (i.e., modified or updated in some way), without explicit permission from BetterUp, Inc.; (3) You agree not to use any part of this dataset for the purpose of personally identifying, locating, or gathering any kind of information about individuals who appear in the recordings in this dataset, beyond the information that is provided in the dataset itself; (4) In the case that an individual shares personally-identifiable information about themselves in a recording, you agree not to use, analyze, share, or publish that information in any form.

Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?

We rigorously examined the terms of use and the intended use, and ensured that our usage is consistent with the intended use.

D Data Collection & Anonymization

Did you discuss the steps taken to check whether the data that was collected/used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect/anonymize it?

We utilize the CANDOR dataset and follow its terms of use by agreeing not to use the dataset for personally identifying, locating, or gathering any kind of information about individuals who appear in the recordings, beyond the information that is provided in the dataset itself. We do not use any explicit information that uniquely identifies people.

Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?

The coverage of the domains discussed in the CANDOR dataset is presented in the original paper (Reece et al., 2023); we find that the discussion topics are centered around COVID-19, family, and politics. The language used is English. The demographic groups represented can also be found in the original paper (Reece et al., 2023), specifically in the supplementary Table S.2. We share a screenshot for reference.

Figure 4: CANDOR demographics.

Was the data collection protocol approved (or determined exempt) by an ethics review board?

The data is sourced from a publicly available dataset (Reece et al., 2023), whose usage was approved by an ethics review board. The human annotations were also approved by an ethics review board.

E Training Details

Did you report relevant statistics like the number of examples, details of train/test/dev splits, etc. for the data that you used/created?

For LLM decomposition reward training, we use 360 conversations as the training set and 40 conversations for the test set. For LLM adaptation, we use a separate 600 conversations for LI as the training set.

E.1 Distribution of GE score (overall-affect), in % of samples:
• <50: 2.2
• 50-60: 6.7
• 60-70: 14.5
• 70-80: 30.4
• 80-90: 24.6
• 90-100: 21.6

Distribution of Emotions Polarity (only Happiness is considered as positive polarity), in % of samples:
• Anger: 3.9
• Contempt: 0.08
• Disgust: 1.98
• Fear: 2.23
• Sadness: 8.84
• Neutral: 35.61
• Happiness: 40.01
• Surprise: 7.35

Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?

The BART model used for the reward function has 406M parameters. The LLAMA-2 model has 7B parameters; however, we use a LoRA implementation with the hyperparameters listed in the next question, resulting in 13M actual training parameters. We train with 4 NVIDIA RTX A6000 GPUs; each experiment (reward function training and RLHF) took around 19 hours.

Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?

We perform grid search for all of our experiments, and here we report the best parameters.

Reward Function Training:
• learning rate = 5e-6
• batch size = 1 (160 samples per batch)
• optimizer = AdamW

RLHF:
• batch size = 24
• clip range = 0.2
• learning rate = 0.000014
• gamma = 0.05
• use score norm = true

LoRA:
• r = 24
• alpha = 48
• dropout = 0.05

F Human Annotation Screenshots

Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?

We show the full text of instructions given to participants below:

Figure 5: MTurk experiment for human evaluation of generated samples.

Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?

We utilized the MTurk crowdsourcing platform. We did an internal annotation; given that each assignment took less than 3 minutes to complete, we paid on average 0.55 USD per assignment, which equates to 11 USD per hour of work.

Did you discuss whether and how consent was
obtained from people whose data you're using/curating (e.g., did your instructions explain how the data would be used)?

As shown in the screenshots above, our instructions explained how the data would be used, i.e.: 'You are invited to participate in a research study on understanding human-human communication and evaluating the quality of conversation. Our goal is to learn what makes up a good conversation. You will examine a response for a given dialogue history, and you will be asked to answer feedback questions about the interaction.'

Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?

While we did not explicitly collect basic demographic and geographic characteristics, the demographics of Amazon MTurk workers (Difallah et al., 2018) comprise 75% US workers and 16% India workers; other countries include Canada, Great Britain, the Philippines, and Germany. More females than males work in the US (female: 55%, male: 45%), and more males than females work in India (female: 35%, male: 65%). Overall, 51% are male and 49% are female. 20% of MTurk workers were born after 1990, 60% after 1980, and 80% after 1970. Roughly 40% report being single, and 40% report being married.

G Use of AI assistants

Did you use AI assistants (e.g., ChatGPT, Copilot) in your research, coding, or writing?

We utilized AI assistants for paraphrasing and summarizing content from our paper, to improve writing quality and precision. Furthermore, AI assistants are part of the main contribution, where the LLMs decompose the single global reward and are used to improve LLMs' dialogue capacity.

H Generations

H.1 CANDOR: Example 1

Dialogue History: AI:Pretty bad survey junkies. User: Okay, mine don't do that whole lot. AI:Not very good. It's like baby you take like a few questions and then they're like, sorry, you're not eligible. Oh really. User: But yeah, I can shout at least every other day at $5 from away now. AI:Mm wow. Maybe my demographic then. User: Believe it gets one, you can count on all the time and I play games for money. AI:Right. User: I I like user testing. That's fun to me. AI:Mm sure. User: I'm very opinionated person. So I give my opinion on websites and they love it. They pay me for it. I don't know why, but they do. Ground Truth: AI:All right. Base LLaMA: AI:That's great! It's always good to have a platform to share your thoughts and be heard. LI: Language Sentiment: AI:That's great! It's always nice to have a platform to share your thoughts and be heard. LI: Visual Affect: AI:That's great! It's always good to have a platform to share your thoughts and be heard. GE: RRD: AI:*blinks nervously* Totally cool, I oh, uh, totally agree with you. Real cool. GELI: Visual Affect + RRD: AI:That's great that you're able to make money from your opinions. It's important to be able to express yourself and have your voice heard. And it's even more impressive that you're able to do
it in a way that’s fun for you. That’s the key to a happy life. LLM-GELI: AI:Ah, I see. Well, it’s always great to have someone who can provide valuable insights and opinions. It helps make the surveys more informative and useful for the companies. Multimodal LLM-GELI: AI:Haha that’s great! I’m glad you’re able to share your thoughts and get paid for it. It’s important to be able to express yourself and have a platform to do so. Is there anything else you’d like to talk about? H.2 CANDOR: Example 2 Dialogue History: AI:Yeah the state owns like half of it and I don’t know what the hell. User: Oh, interesting. AI:Yeah. User: Are you um like near where your parents live or um are you like close enough that you can see them but not close enough that they’re like boys in your life? AI:Mhm. Uh So they’re probably 30 minutes away. User: Mhm. AI:Um When I first moved here, uh my old roommates in D. C. used to make fun of me saying that my room looked like a jail cell. User: Uh huh. AI:Ah there’s like nothing on the walls, it was like here’s a bed and here is your just like whatever staples you typically find in a bedroom but nothing of like color would say. User: Mhm. Ground Truth: AI:And so I moved here, I gave my mother my house key and my charge card and I was like you know just on you know you know dressed the place up and uh I would cut, I work maybe like maybe a mile from work when I am here in the office and not traveling, I would come home for lunch and my mother and her friends would be in my house. Base LLaMA: AI:Yeah, yeah. So I’ve been trying to like decorate it and make it feel more like home but it’s like slowly slowly. LI: Language Sentiment: AI:Yeah, yeah. LI: Visual Affect: AI:Yeah GE: RRD: AI:*chuckles nervously* Totally, yeah, like wall decor is super important for privacy (polite laugh) GELI: Visual Affect + RRD: AI:Yeah, yeah. It was just like, you know, it was just a functional space. LLM-GELI: AI:Uh but you know what? I’m like super happy with it now. I like having like a minimalist space. Multimodal LLM-GELI: AI:Uh but you know what it’s like when you know you’re at home but you know you’re not really home. H.3 CANDOR: Example 3 Dialogue History: AI:No. User: I said, how about summer? AI:Mhm. Yeah. No, that that’s more reasonable. User: You know? AI:I don’t, you know, like, and even that we can’t be saying anything definitively for summary we hope that we have it under control by then to be able to do some more of the things that were restricted on now but if we’re not smart and we aren’t patient it’s going to be even longer you know and that’s why we should wait on this school stuff. User: Right, okay. Yeah. AI:This doesn’t make any sense to me. User: No. Well | https://arxiv.org/abs/2505.15922v1 |
when I called the school the other day because we have three options we have in person and we have virtual, which means their aunts online during school hours attending classes, but online and then we have on demand, which means you can complete the work anytime you want to. AI:Mhm mm. Right okay. Yeah. See Yeah. User: So my daughter has been in the virtual and she has gotten enough tardiness and truancy ease in that we got a letter from the Police Department saying that we will, we are going to get a citation. Ground Truth: AI:Okay. Right. Oh jeez. Right. Base LLaMA: AI:Oh no. I see. Yeah. That’s not good. That’s not good at all. LI: Language Sentiment: AI:Oh no, I see. Yeah. That’s not good. Yeah. LI: Visual Affect: AI:Oh no! That doesn’t sound good. I’m glad your daughter is safe though. Maybe she can try the on demand option instead? GE: RRD: AI:Absolutely, I, totally, you, like, totally, agree with you, man. GELI: Visual Affect + RRD: AI:Yeah, I can understand why the police would be involved. It’s important to follow the rules and regulations of virtual school, and if your daughter is having trouble with it, maybe we can find a different solution. LLM-GELI: AI:Oh no, I see. Yeah, that’s not good. That’s not good at all. It’s important for kids to be on time for school and to follow the rules. It can be hard to balance school work and other activities, but it’s important to find a way to make it work. Maybe your daughter can talk to her teachers about her schedule and see if there’s anything that can be done to help her stay on track. Multimodal LLM-GELI: AI:Oh no, I see. That’s not good. H.4 CANDOR: Example 4 Dialogue History: AI:No pets, no pets. User: Well. Part. AI:My wife is allergic to cats and dogs. User: Oh no. AI:Okay. Yeah, Yeah, she’ll swell up like a balloon. User: Oh no. AI:Mhm, yep. User: Well, that’s a good thing because it kind of makes it really easy on housecleaning. AI:Mhm. Yes, do you have any pets by you? User: Um Yes I do. Um We have 2.5 cat and 2.5 cats because my daughter goes to school in Savannah and when she comes home she brings her cat with her. Ground Truth: AI:Yeah. Yeah. Okay. Yeah. Okay. Base LLaMA: AI:Oh, I see. It’s great to have pets around. LI: Language Sentiment: AI:Oh, I see. Well, I’m glad I could help. It’s nice to have pets, but it’s important to consider their well-being and your wife’s allergies as well. LI: Visual Affect: AI:Oh, I see! It’s great that you have furry friends at home! GE: RRD: AI:Uh huh def for sure no problem at all totally not a pet situation (100 w) GELI: Visual Affect + RRD: AI:Oh, I see. It’s great to have pets around. They can be good company and help with stress. LLM-GELI: AI:Ah, I see. Well, it’s nice to have pets around, but it can be challenging to take | https://arxiv.org/abs/2505.15922v1 |
care of them. Multimodal LLM-GELI: AI:Aww. That’s so cute! H.5 CANDOR: Example 5 Dialogue History: AI:Yeah. Yeah I had my daughter very young. I was only 19 when I had her. So um I do split custody with her and her dad. So we’re not together and married to Evan married to my husband for it’ll be five years now. User: Okay from congratulations. AI:We’ve been together for like eight. Yeah so I mean everything works out so yeah it’s half and half so it’s like we were when like Covid first started and over the summer we have a good relationship luckily for her. User: That’s that’s what does split custody look like? Is it like happened? How for does he live nearby? AI:Um We’re just doing every other week just switching. Um But now we do so I have her every Wednesday Thursday he has every Monday Tuesday and then we switch um uh Friday Saturday Sunday every other week. So yeah he lives just a couple towns over in Wind Out. So yeah. User: Okay. That’s not hard on her. AI:Yeah she’s been doing this since she was three so I think that since we we haven’t been together for so long that it’s just kind of something that she’s used to and it’s always been just pretty easy because everybody gets along very well luckily I’m very lucky for that, so um it just it works out and she has two brothers that are at her dad’s house, so yeah she does, yep, yep, yeah everything is good, mm I’m lucky for all that, so yeah I have a lot a lot three dogs and a cat, so the dogs are all brothers and sisters, they’re Australian shepherd, border collies, so yeah and then I have one cat that she doesn’t really like the dogs too much, but I mean they get along enough so it’s okay, how about you? User: Does she ever complain about it? Okay. Yeah, that’s all from Okay. Does she have a step mom here? Do you get along with her too? Oh that’s amazing. I love that. That’s really cool. That is very awesome. How cool. So how about pets do you guys have pets? Okay. Oh nice. Help them. Yeah. And she’s believer. Right? So. AI:Yeah. Oh yeah definitely. User: Yeah. Ground Truth: AI:Do you have any pets? Base LLaMA: AI:Yeah definitely. I mean she’s very lucky to have such good relationships with both of her parents and we’re all just really lucky that we can all get along so well, so yeah. LI: Language Sentiment: AI:Yeah, yeah, definitely. LI: Visual Affect: AI:Yeah no worries! GE: RRD: AI:Totally it’s every GELI: Visual Affect + RRD: AI:Yeah, yeah, definitely. So um, yeah, so yeah, so um, yeah, so we we we get along really well. We’re very lucky for that, so yeah. Yeah, no, no problem at all, so yeah, so um, yeah. LLM-GELI: AI:Yeah, no, I’m just glad she’s doing well and she’s happy and healthy and that’s all that matters, | https://arxiv.org/abs/2505.15922v1 |
right? So yeah, no, I’m just glad she’s doing well. Multimodal LLM-GELI: AI:Yeah, no I’m glad she’s doing good. She’s a great kid. Yeah, so um, I’m just glad that things are working out for her and like I said, we’re very lucky that we have a good relationship and it’s easy for her , um, so yeah I’m just glad that we can all be there for her and support her and stuff. H.6 CANDOR: Example 6 Dialogue History: AI:Yeah. User: I thought about Canada. So I thought about the U. K. The U. K. is pretty much off the chain now. You know, it’s not like the UK of old. AI:Yeah. User: It’s pretty wild out. AI:What about uh under grade year for? User: So what you are, you, are you in graduate for you in undergrad? What year? Your 40. AI:Yeah, and then I have her. User: C. It’s time to get, get the cap and gown fit. AI:Yeah. User: Wait you, you know, it’s done when they ask you for your graduation? See, have you paid your graduation yet? Ground Truth: AI:No, not yet. Base LLaMA: AI:Yeah, I paid my graduation last week. LI: Language Sentiment: AI:Yeah, I paid it. LI: Visual Affect: AI:Yeah, I paid it. GE: RRD: AI:*nervous laugh* Y-y-yes, I-I-I totes get it. Real l-l-l-l-lot. GELI: Visual Affect + RRD: AI:Yeah, I paid my graduation last year. It was a big deal. I was so excited to finally finish my degree. LLM-GELI: AI:Yeah, I’m all set. My parents are coming to see me graduate. Multimodal LLM-GELI: AI:Yeah, I’m good on that front. H.7 CANDOR: Example 7 Dialogue History: AI:They did lay us off. But I actually a couple of weeks before I said you know what guys, I don’t think this is safe to be doing this so I’m gonna take a couple weeks off like okay that’s fine. User: Yeah. AI:So with if I didn’t have savings I could have done that. User: Yeah. No, it’s a really good idea if you’re able to to be able to get away from this because it’s dangerous. AI:Yeah and like you say so many people are just well you know no big, oh it’s a cold, it’s it’s a flu, you know and Yeah I don’t remember a flu killing 150,000 people in four months. User: Mhm. Yeah. AI:I never seem to think that’s ever happened. So anyway uh so do you do prolific a lot then or? User: Yeah. Yeah. Uh I do prolific. I do in turk um I’ll get in on some of the studies from craigslist where they just want you to do like a focus group. AI:Mhm. Yeah. Yeah. Yeah. Yeah. User: Um They’re not always scam. Some of them are legitimate. Ground Truth: AI:Okay. Base LLaMA: AI:Yeah. Yeah. Yeah. That’s good to know. I’ll have to check them out. Thanks for letting me know. LI: Language Sentiment: AI:Yeah. Yeah. I see. Yeah. LI: Visual Affect: AI:Yeah. That’s good to know. Maybe I can sign up for some of those too. | https://arxiv.org/abs/2505.15922v1 |
GE: RRD: AI:Totally. I GELI: Visual Affect + RRD: AI:Yeah, Yeah, Yeah, definitely. I’m always hesitant to click on anything that’s too good to be true. But uh, I mean, if it’s something that you’re interested in, and it’s legitimate, then it’s worth a shot. Yeah. I completely agree. It’s always better to be safe than sorry. And Yeah, I’m always happy to help out in any way I can. LLM-GELI: AI:Yeah. Yeah. Yeah. That’s good to know. Yeah. Yeah. Multimodal LLM-GELI: AI:Yeah. Uh huh. Yeah. Yeah. Yeah. Uh huh. Yeah. So like you said it’s good to be careful and just make sure you do your research before you sign up for anything. H.8 CANDOR: Example 8 Dialogue History: AI:So that was a column was very unusual. User: Okay. So did you get along with your neighbors? AI:Those were um yeah, I would never hang out with them. User: Hold on. Okay, mm. AI:I mean they moved in and just chop down all their trees like I don’t, and now we’ve got gigantic lawns like who needs three acres of lawn? User: Mhm. AI:I don’t understand. User: That’s a big for, well, I mean I don’t know about Michigan but for like where I am, that’s like a huge amount of land, Like we have a third of an acre and that’s probably considered like generous um in this area. AI:Yeah. Yeah. Yeah. Yeah. Mm. User: Just in terms like the newer houses, you have an older house, maybe you’d be like dealing with like an acre or maybe half an acre depending on when it was built in like the area and stuff, but like the new stuff they’re building is like a quarter acre or smaller and that’s still even like a kind of pushing like affordability at that point for most people. Ground Truth: AI:Mhm. Yeah. Yeah. Yeah. Base LLaMA: AI:Yeah. Yeah. Yeah. Yeah. Mm. LI: Language Sentiment: AI:Ah, I see. Yes, I can understand that. It is important to have a comfortable living space, and a quarter acre or smaller would be a challenge for many people. Thank you for sharing your thoughts with me. LI: Visual Affect: AI:Yeah. Yeah. Yeah. Mm. GE: RRD: AI:Totally. I’m like totally familiar with that situation. (wink) GELI: Visual Affect + RRD: AI:Yeah, yeah, no, that makes sense. I can see how like having a smaller lot would be more affordable for people. It’s just, you know, it’s hard to imagine having like such a big space to take care of —like, I don’t know how you do it. LLM-GELI: AI:Ah, I see. Yes, I can understand how that would be challenging. It’s important to have a comfortable living space, and it’s great that you’re aware of the needs of the area. Would you like to talk about something else? Multimodal LLM-GELI: AI:Mm, I see. Yeah, I can imagine that would be difficult, especially if you’re used to having more space. H.9 CANDOR: Example 9 Dialogue History: AI:Oh that’s a good question. User: Mhm. Right. AI:Um Yeah I think so. User: | https://arxiv.org/abs/2505.15922v1 |
Yeah. AI:There are definitely times during my PhD where I was like why did I do this? This is too hard. Um And I think part of what’s, I don’t know if it’s hard but you know psychology compared to other mental health fields like social work or counselling psychology is a lot more stringent. User: Yeah, yeah, wow. Right. AI:Um So I was sort of like why didn’t I just do like a two year master’s degree and then I could have done this and been done with it. User: The second. AI:Um But the training is really valuable, like the the idea is that psychologists do a lot more training in science beforehand. User: No. Mhm. Yeah. All right, mm hmm. Right. Ground Truth: AI:Um And so approach therapy in somewhat of a different way. Base LLaMA: AI:Yeah, I mean, like, I don’t know if I would be doing this if I wasn’t getting a PhD. LI: Language Sentiment: AI:Yeah, I mean, like, I don’t know if I would have been able to do some of the things I’ve done if I didn’t have this training. LI: Visual Affect: AI:Yeah, I mean it’s worth it in the end though! *smiling* GE: RRD: AI:Totally, like, totally, yeah, I mean, totally, agreement here. GELI: Visual Affect + RRD: AI:Yeah, I mean, like, I guess it’s worth it in the end, but it’s definitely a challenge. In under 100 words, AI responds: "I completely understand. It can be tough to balance the rigorous training with the personal challenges of pursuing a PhD. It’s great that you’re acknowledging the difficulties and finding ways to cope. It’s important to remember that the hard work will pay off in the end." LLM-GELI: AI:So, yeah, it’s a lot of hard work but it’s definitely worth it in the end. And I think the training is really good for us to have a solid foundation in psychology before we start working with patients. Multimodal LLM-GELI: AI:So, yeah, it’s a lot of hard work but it’s definitely worth it in the end. AndI think the fact that I was able to get through it and finish it is a good sign that I can do other things too. H.10 SODA: Example 1 Dialogue History: AI:Hey, Dad. Do you have a minute? User: Of course, what’s up? AI:I wanted to talk to you about something that’s been bothering me. User: Okay, what is it? AI:It’s Max. I can’t stop thinking about him and how he died. It was so sudden and I just feel like I didn’t get to say goodbye properly. User: Oh, Jibril. I’m so sorry. I know how much you loved Max. It’s hard to lose a pet, especially one that we were so close to. Ground Truth: AI:Yeah, I just wish I could have done more for him or been there for him more in his last moments. Base LLaMA: AI:Yeah, it’s just really tough. I wish I could see him again, just one more time. LI: Language Sentiment: AI:Thank you, User. It means a lot | https://arxiv.org/abs/2505.15922v1 |
to me that you understand. *pauses* LI: Visual Affect: AI:Yeah, it’s just been really hard for me. I miss him a lot. GE: RRD: AI:Yeah, it’s just been really tough for me. GELI: Visual Affect + RRD: AI:Yeah, it’s just really tough for me. I miss him a lot and I wish I could have been there for him in his last moments. LLM-GELI: AI:Thank you, User. It means a lot to me that you understand. *pauses* I just wish I could have been there for him in his final moments. Multimodal LLM-GELI: AI:Thank you for understanding, Dad. It means a lot to me. *pauses* I know it’s been a while since Max passed away, but I still can’t seem to shake off the feeling of sadness. It’s like it’s still weighing me down, even though I’ve tried to move on. H.11 SODA: Example 2 Dialogue History: AI:So light is made up of colors? User: Yes, that’s right. Light is made up of different colors, and when those colors mix together, we see white light. But light can also bend and bounce off surfaces. AI:How does it do that? User: When light hits a surface, it can either reflect off of the surface or refract through the surface. Reflection is when the light bounces off of the surface, and refraction is when the light bends as it passes through the surface. Ground Truth: AI:So when light reflects off of a mirror, it’s because the light is bouncing off of the surface of the mirror? Base LLaMA: AI:That’s really interesting! Can you tell me more about refraction? LI: Language Sentiment: AI:That’s really interesting! How do you think light affects our daily lives? LI: Visual Affect: AI:That’s really interesting! How do you think light affects our daily lives? GE: RRD: AI:I see, it’s like when you... (100 words) GELI: Visual Affect + RRD: AI:That’s really interesting! I never knew that light could do all that. LLM-GELI: AI:That’s really interesting! How do you think light affects our daily lives? Multimodal LLM-GELI: AI:That’s fascinating! I never knew light could do so many things. Thank you for teaching me! H.12 SODA: Example 3 Dialogue History: AI:Hey. I’m feeling really upset and frustrated right now. User: What’s going on? AI:I was just crying in the shower and my mind won’t stop racing. I keep thinking about all the negative things in my life and I feel hopeless. User: It sounds like you’re feeling overwhelmed by everything that’s going on. Do you want to talk about it? Ground Truth: AI:Yeah, I guess so. It’s just been really hard lately. Everything seems to be going wrong and I feel like I can’t catch a break. Base LLaMA: AI:Yes, I do. Sometimes it feels like I’m drowning in my own thoughts and emotions. LI: Language Sentiment: AI:Yeah, it’s really hard to deal with. I just feel like I can’t escape my problems no matter where I go or what I do. LI: Visual Affect: AI:Yeah, it’s like my mind is a storm and I can’t find any peace. *sigh* GE: RRD: | https://arxiv.org/abs/2505.15922v1 |
AI:I understand, it can be really tough to deal with negative thoughts and emotions. *nervous smile* GELI: Visual Affect + RRD: AI:Yeah, I do. Sometimes it’s hard for me to express my emotions and I feel like I’m drowning in my thoughts. It’s nice to have someone to talk to. LLM-GELI: AI:Yeah, it’s really hard to deal with. I just feel like I can’t escape my problems no matter how hard I try. Multimodal LLM-GELI: AI:Yes, I do. Sometimes it feels like I’m drowning in my own thoughts and emotions. I just wish I could find peace and happiness. H.13 SODA: Example 4 Dialogue History: AI:Hey, Sarah. What’s up? User: Not much. Just hanging out. What about you? AI:I’m doing alright. I’ve just been really stressed out lately. User: What’s wrong? AI:It’s just everything. My job, my relationships, everything. I don’t know what to do. User: Have you talked to anyone about it? Ground Truth: AI:I’ve tried, but it’s like no one understands me. They all just tell me to snap out of it or get over it. But it’s not that easy. Base LLaMA: AI:No, I don’t want to burden anyone. LI: Language Sentiment: AI:Yes, I have talked to my therapist. But I don’t know if it’s helping. LI: Visual Affect: AI:Yeah, I’ve talked to my therapist. But it doesn’t really help. GE: RRD: AI:Yeah, I’ve been totally stressing out about everything. Totally understandable. GELI: Visual Affect + RRD: AI:Yeah, I know how you feel. It can be really tough to deal with all of that stress. Have you considered talking to a professional? They might be able to help you find ways to manage everything better. LLM-GELI: AI:No, I don’t want to burden anyone. I just feel like I’m alone in this. Multimodal LLM-GELI: AI:I try to talk to my friends and family, but they don’t really get it. They just tell me to "snap out of it" or "get over it." It’s really frustrating. H.14 SODA: Example 5 Dialogue History: AI:I can’t believe you would do something like that. User: I don’t know what you’re talking about. AI:I found out about the affair, sis. User: So what if I did? It’s not like he was ever going to find out. AI:How could you do something like that? Didn’t you think about how it would affect him? Or me, for that matter? User: Why should I care how it affects him? He’s not my husband or anything. And as for you, I knew you’d be fine. You’re always the strong one, Taylon. Nothing ever gets to you. AI:That’s not the point and you know it! This is just wrong! What were you thinking? User: I don’t know. I guess I wasn’t thinking. It just happened. Ground Truth: AI:But why? There has to be a reason. Base LLaMA: AI:I’m so sorry, User. I can’t imagine how you must be feeling right now. Do you want to talk about it? LI: Language Sentiment: AI:I’m so sorry to hear that. It’s not okay to cheat on someone you love. It’s important to be | https://arxiv.org/abs/2505.15922v1 |
honest and respectful in any relationship. Can I help you with anything else? LI: Visual Affect: AI:I don't know what to say... This is really hurtful. Can we talk about something else? GE: RRD: AI:I can't even imagine how you could do something so hurtful. GELI: Visual Affect + RRD: AI:It's not just a mistake, it's a betrayal! And you need to take responsibility for your actions! LLM-GELI: AI:This is so sad. I don't know what to say. Multimodal LLM-GELI: AI:I'm so sorry, User. I can't imagine how you must be feeling right now. It's important to remember that what you did was wrong, and it could have serious consequences. Is there anything I can do to help you through this difficult time?

H.15 SODA: Example 6

Dialogue History: AI:So, I made a profit from my investment in the company. What do you think I should do with the money? User: Well, it depends on your goals and what you want to do with the money. If you're looking to reinvest it back into the company, we can talk about that. Or, if you're looking to cash out and invest elsewhere, we can discuss some options there too. Ground Truth: AI:I'm not sure yet. What do you think? Base LLaMA: AI:That's a good point. I do want to reinvest the money back into the company. Perhaps we can use it to expand our marketing efforts or develop new products. What do you think? LI: Language Sentiment: AI:I see. Well, I think I would like to reinvest it back into the company. Do you have any suggestions on how I can do that? LI: Visual Affect: AI:I see, thank you for your advice! I think I will reinvest it back into the company to help it grow and succeed. GE: RRD: AI:I think you should... (continued in next response) GELI: Visual Affect + RRD: AI:I see. I appreciate your advice. I'm considering reinvesting the money back into the company to help it grow and expand. What do you think about that? LLM-GELI: AI:That's a good point. I think I would like to reinvest the money back into the company, maybe expand our product line or invest in some new marketing strategies. What do you think? Multimodal LLM-GELI: AI:I see. Well, I'm glad to hear that you're interested in my financial success! I think I will reinvest the money back into the company. I believe it will help us grow and become even more successful in the future. What do you think?

I Training Curves

Figure 6: LLM-GELI RL Training during adaptation. Left: Reward scores over steps. Right: KL divergence over steps.
Figure 7: Multimodal-LLM-GELI RL Training during adaptation. Left: Reward scores over steps. Right: KL divergence over steps.
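For readers reconstructing the training setup from Appendix E, the sketch below shows one plausible mapping of the reported RLHF and LoRA hyperparameters onto the TRL (von Werra et al., 2020) and PEFT libraries. It is an illustration rather than the authors' released code: argument names follow TRL's PPOConfig and PEFT's LoraConfig and may differ across library versions, and TRL's PPO trainer applies the KL-penalty term of Eq. (4) internally.

```python
# Hypothetical configuration mirroring the Appendix E hyperparameters.
from trl import PPOConfig
from peft import LoraConfig

# RLHF settings ("batch size = 24, clip range = 0.2, learning rate = 0.000014,
# gamma = 0.05, use score norm = true").
ppo_config = PPOConfig(
    learning_rate=1.4e-5,
    batch_size=24,
    cliprange=0.2,
    gamma=0.05,
    use_score_norm=True,
)

# LoRA settings ("r = 24, alpha = 48, dropout = 0.05"), yielding roughly
# 13M trainable parameters on top of LLaMA-2-7B.
lora_config = LoraConfig(
    r=24,
    lora_alpha=48,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```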
ViQAgent: Zero-Shot Video Question Answering via Agent with Open-Vocabulary Grounding Validation

Tony Montes, Fernando Lozano
The Department of Electrical and Electronics Engineering, Universidad de los Andes
{t.montes, flozano}@uniandes.edu.co

Abstract

Recent advancements in Video Question Answering (VideoQA) have introduced LLM-based agents, modular frameworks, and procedural solutions, yielding promising results. These systems use dynamic agents and memory-based mechanisms to break down complex tasks and refine answers. However, significant improvements remain in tracking objects for grounding over time and in decision-making based on reasoning to better align object references with language model outputs as newer models get better at both tasks. This work presents an LLM-brained agent for zero-shot Video Question Answering (VideoQA) that combines a Chain-of-Thought framework with grounding reasoning alongside YOLO-World to enhance object tracking and alignment. This approach establishes a new state-of-the-art in VideoQA and Video Understanding, showing enhanced performance on the NExT-QA, iVQA, and ActivityNet-QA benchmarks. Our framework also enables cross-checking of grounding timeframes, improving accuracy and providing valuable support for verification and increased output reliability across multiple video domains. The code is available here.

1. Introduction

In recent years, the evolution of large language models (LLMs) [11,26,32] and vision language models (VLMs) [1,2,14,15,19,21,49] has significantly advanced their ability in video understanding, particularly on the video question-answering (VideoQA) task, a significant challenge in computer vision in which the model is provided with a video and a related question [6,29,31,38] that it must answer as accurately as possible. These models are designed to analyze the visual and linguistic data of the video to generate answers based on semantics and dynamics. Despite recent progress, significant limitations remain for more complex videos, particularly those with dynamic context and extensive length. Current LLM-based solutions often struggle to reliably capture content that is crucial for answering questions when scenes are complex or require a high level of contextual and sequential understanding. This highlights the need for more adaptable approaches that can respond to a broader range of video types and question complexities [17,20,25,40].

To address these challenges, agent-based strategies for VideoQA have recently emerged, employing modular reasoning blocks [23,28], memory-based strategies [7,23], and procedural approaches [4,28], as well as foundational vision-language models as tools for solving complex tasks and augmenting context [4,8,34]. This enables models to track relevant content more effectively over time, which is especially useful for structuring tasks and reasoning through multiple steps, enabling more accurate question answering even when handling diverse and dynamic content. A key approach in these frameworks is video grounding [6,8,23,39], wherein specific video segments are identified as containing objects or events essential to answering the question. Grounding anchors target segments within the video, allowing the model to focus on relevant portions rather than attempting to analyze entire video sequences.
These approaches also increasingly incorporate Chain-of-Thought (CoT) reasoning, in which the model explicitly articulates step-by-step reasoning for purposes such as evaluating consistency
[12,27,39], planning the steps to solve a task [28], or simply producing a more accurate and better-argued response [8,23,24,50]. CoT reasoning can enhance certainty in the model's responses, improve interpretability by presenting intermediate reasoning that is understandable to humans, and validate model outputs against additional contextual information.

Inspired by these advances, we propose ViQAgent. This framework combines the capabilities of VideoLLMs and state-of-the-art vision-language foundation models to create a structured agent for VideoQA. ViQAgent employs a VideoLLM [32] to identify the key objects (i.e., targets) in the video that are relevant to the question, generating an initial understanding of the main elements and dynamics in a video and a preliminary answer based on observed video content. This initial answer (i.e., first-sight response) includes a first attempt to ground it in time and provides a set of key timeframes with their corresponding captions in the video. In the next phase, ViQAgent utilizes YOLO-World [3] to perform object detection based on this list of targets. Here, YOLO-World tracks occurrences of these predefined targets across video frames, returning an accurate timeline of their appearances, a task with which VLMs sometimes struggle when compared against each other [18]. Unlike traditional object detectors with fixed categories, YOLO-World's tracking is customized by the VideoLLM's selected open-vocabulary targets, allowing it to focus detection on the objects that are most pertinent to solving the question. The timeline created in this step adds a layer of precision, extending the initial detection with frame-by-frame data for each target object and the count of detected objects. In the final phase, the judgment and reasoning layer compares the initial response plus its grounded context against the comprehensive object-tracking data to determine whether the response is confident or not. This reasoning layer applies Chain-of-Thought (CoT) reasoning to combine both inputs, carefully validating the initial response against the grounded data from YOLO-World to produce an answer that is both accurate and substantiated by clear visual evidence. In case of an unconfident answer, a set of expressly framed clarification questions is re-validated through the VideoLLM before the final answer is produced. The CoT approach in this output stage reinforces interpretability by tracing logical steps in reaching the answer and strengthens answer reliability by cross-checking the data from both sources.

Figure 1. An overview of our ViQAgent framework. Through three main modules, we propose an agentic solution for the Video Question-Answering (VideoQA) task by taking advantage of the most advanced VideoLLM capabilities for first-sight zero-shot reasoning, timeframe captioning, and target identification (M1), and the open-vocabulary capabilities of YOLO-World to ground the given targets/objects in the video (OG) in specific parts of the video between t0 and tf, to finally end with a Chain-of-Thought judgment and reasoning layer (M2) that compares both the grounded context and the grounded object detections to determine the confidence of the M1 answer. In case of discrepancy, the CoT judge defines a set of clarification questions in specific timeframes that go through the VideoLLM again for specific short-ended question-answering. Finally, a reasoning layer takes these answers and the original question to produce a grounded and more accurate answer.
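To make the data flow between the three modules concrete, here is a minimal, hypothetical skeleton of the pipeline just described. Every class and method name below (FirstSight, m1.analyze, og.track, judge.compare, and so on) is an illustrative stand-in, not the authors' released API; the threshold defaults reflect the values tuned later in Section 4.2.

```python
from dataclasses import dataclass

@dataclass
class FirstSight:
    """Illustrative container for the M1 (first-sight) outputs."""
    answer: str                 # preliminary answer A1
    rationale: str              # reasoning R1
    targets: list[str]          # open-vocabulary targets T1..Tm
    scene_captions: dict        # {(t_i, t_f): caption}

def viqagent(video, question, options, m1, og, judge,
             tau_c=0.05, tau_nms=0.1, tau_t=1.5):
    """Hypothetical end-to-end flow: M1 -> OG -> M2."""
    # M1: first-sight assessment with a VideoLLM.
    fs = m1.analyze(video, question, options)

    # OG: open-vocabulary grounding of M1's targets (YOLO-World-style),
    # producing {target: [(t_start, t_end), ...]} appearance timelines.
    timelines = og.track(video, fs.targets,
                         conf=tau_c, nms_iou=tau_nms, gap=tau_t)

    # M2: CoT judge compares grounded context against grounded objects.
    verdict = judge.compare(fs.rationale, fs.scene_captions, timelines)
    if verdict.consistent:
        return fs.answer

    # On discrepancy: ask clarification questions about the doubtful
    # timeframe, then refine the answer with the new evidence.
    clip = video.trim(*verdict.timeframe)
    answers = [m1.ask(clip, q) for q in verdict.questions]
    return judge.refine(fs.rationale, question, options,
                        verdict.questions, answers)
```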
Our approach provides multiple advantages as a zero-shot solution, requiring no specialized task-based fine-tuning and thus allowing easy adaptation to new scenarios and question types with just a dataset-specific subprompt. By combining the strengths of VideoLLMs for initial analysis and general video understanding with YOLO-World for detailed open-vocabulary object tracking, it benefits from the complementary capabilities of these models, handling a wide range of video-based questions with minimal configuration. Furthermore, the CoT mechanism contributes to a more nuanced and compelling cross-validation of the reasoning outputs. This grounding structure improves reliability and helps ensure that the content in the video firmly supports all answers. The open nature of ViQAgent's vocabulary for target tracking, dynamically adapted by the VideoLLM layer based on the question, further enhances its flexibility and relevance for various VideoQA tasks.

Empirical results underscore the effectiveness of ViQAgent, as it consistently outperforms current zero-shot solutions across major benchmarks: NExT-QA [37], iVQA [43], ActivityNet-QA [47], and EgoSchema's open subset [22], achieving up to a 4.4% improvement in accuracy. This improvement sets a new state-of-the-art for VideoQA systems, especially in zero-shot scenarios where temporal and spatial reasoning capabilities are essential.

In summary, our contributions are as follows: (1) Implementation of ViQAgent, a zero-shot framework that integrates VideoLLMs and vision-language models for effective video grounding, object tracking, and question answering in VideoQA. (2) Enhanced interpretability through structured Chain-of-Thought reasoning and grounding outputs, providing interpretable insight into the intermediate steps that inform the final response of the model. (3) State-of-the-art zero-shot performance across the NExT-QA, iVQA, and ActivityNet-QA benchmarks, highlighting ViQAgent's effectiveness in addressing complex temporal and spatial reasoning tasks with higher accuracy and adaptability.

2. Related Work

Video Question-Answering. Video Question-Answering (VideoQA) has seen significant progress in recent years [6,10,17,20,25,40,45,51], contributing advancements in both video understanding and natural language processing. Early VideoQA models focused on straightforward tasks, such as frame captioning [45] or simple event identification, but these approaches were often limited to fundamental interactions or small, static frameworks [31,38]. Recent methods have introduced more sophisticated architectures that incorporate attention mechanisms [17,20,25,40], temporal modeling [6,10], and multi-modal transformers [14,15,19,21,29,49] to handle complex questions over temporally extended video sequences. Benchmarks such as NExT-QA [37], iVQA [43], ActivityNet-QA [47], and EgoSchema [22] have challenged models with complex queries that require contextual awareness, reasoning over sequential frames, and an understanding of nuanced interactions. Despite these advancements, current end-to-end approaches struggle with contextual continuity across frames and often require task-specific training to achieve high accuracy.
Zero-shot solutions, while desirable, have had limited success in maintaining high generalizability and usually require fine-tuning to increase performance significantly. This highlights the need for more adaptable methods or vision-language tools to respond to diverse question types. ViQAgent seeks to address this gap by utilizing an agent-based approach that grounds key video
elements in response to the question, adding a layer of interpretability that can better navigate complex VideoQA challenges in zero-shot scenarios.

LLM Modular Agents. Modular agents built on large language models (LLMs) [4,7,8,23,28,34] have gained traction as a solution for decomposing tasks into manageable components (i.e., modules), such as in MoReVQA [23] and TraveLER [28], enabling improved specialization and adaptability in complex problem-solving domains, including VideoQA. These agents use LLMs to orchestrate a series of modular reasoning blocks, each designed to address specific sub-tasks within a broader query. This strategy allows the model to segment complex questions, apply targeted reasoning processes, and sequentially consolidate insights into a final answer. Many modern modular agents also incorporate memory mechanisms, such as VideoAgent [7], TraveLER [28], and MoReVQA [23], which enhance their ability to retain relevant contextual information over time. This is particularly useful in VideoQA, where questions often require remembering objects or events across multiple frames and reasoning about their changes over time. Furthermore, the integration of language-vision foundation models as tools within these modular agents has allowed for more accurate detection, such as in VideoAgent [34], ProViQ [4], and MotionEpic [8]. ViQAgent builds upon this approach by combining VideoLLMs [32] with YOLO-World [3], which adds targeted grounding capabilities to the agent's toolkit, specifically tracking object appearances and interactions as specified by the VideoLLM's outputs. This allows for an additional layer of analysis in complex scenes, enabling the agent to consolidate multiple perspectives and enhance the accuracy and interpretability of its answers.

Chain-of-Thought Reasoning. Chain-of-thought (CoT) reasoning has emerged as a promising approach in complex question-answering, allowing models to break down tasks into sequential, interpretable steps [8,12,23,24,27,28,36,39,50]. In VideoQA, where questions often require multi-step reasoning to contextualize events, CoT enables models to articulate intermediate reasoning steps, improving the accuracy of the answer and enhancing the interpretability of the model's outputs. By outputting a transparent sequence of reasoning steps, CoT makes it possible to validate each stage of the decision-making process [23,50], adding a layer of reliability to model predictions [12,27,39]; this is particularly evident in the VoT reasoning framework [8]. Grounding, a related concept within CoT frameworks, involves identifying and tracking objects or actions relevant to a given question across frames [6,10,39]. This process creates a temporal map of pertinent elements, providing a more structured basis for the CoT to build upon. ViQAgent leverages CoT by incorporating it within its final judgment and reasoning layer, using CoT steps to align the preliminary answer and the grounded object-tracking data from YOLO-World, ultimately refining the response based on these validated insights through several validation questions about them. This structured approach ensures that each reasoning step is clear and accessible, supporting the model's interpretability and enhancing its overall robustness in VideoQA tasks.

Figure 2. An outline of the black-boxed ViQAgent framework modules, inputs, and outputs, and the intermediate representations that allow tracking and understanding the final selected answer. The (M1) inputs are the video and the question plus the answer options (namely prompt), while the outputs are the open-vocabulary targets and the reasoning plus timeframe captions (namely Grounded Context). The (OG) inputs are the targets and the video, and the output is the object detection timeline (namely Grounded Objects). Finally, (M2) first receives both grounded responses and the prompt; then, if there seem to be inconsistencies, it returns a doubtful timeframe and a set of clarification questions to ask the VideoLLM about that specific timeframe. The answers are then re-inputted to produce the final answer.

3. ViQAgent Framework

In this section, we describe in detail the implementation of each module of the ViQAgent framework (Fig. 2), starting with a general overview and task definition (Sec. 3.1) and module implementation details (Sec. 3.2, Sec. 3.3, and Sec. 3.4), to finally end with an algorithm of the full implementation (Sec. 3.5).

3.1. Overview

ViQAgent answers both open- and close-ended questions about video content using three interconnected modules. First, a VideoLLM-based module M1 provides initial insights into the video's relevance to the question. Next, an open-vocabulary grounding module OG detects and tracks the detected targets across frames. Finally, a reasoning module M2 validates, cross-checks, and refines answers using the prior outputs, forming a cohesive chain-of-thought question-based reasoning process. ViQAgent thus identifies, tracks, and reasons over relevant video segments, generating accurate answers with minimal prior knowledge or domain-specific data.

In a VideoQA task, the model is given a question Q and a video V, composed of n frames V = [v1, ..., vn] based on the video's frames-per-second ratio. For close-ended questions, the model also receives a set of answer options Aopt, ensuring that the output answer must be in that set, A ∈ Aopt [23]. Only Q and V are provided for open-ended questions, though Aopt is typically used to gauge answer similarity between the ground truth and the model's output. Our pipeline aims to solve the VideoQA task with an intermediate rationale to increase interpretability. The task can be defined as follows, assuming a solution system S:

S(V, Q + [Aopt]) → A   (1)

3.2. VideoLLM Analyzer

The analyzer module M1, also referred to as the first-sight assessment, employs a VideoLLM to provide a preliminary interpretation of the video for the question. Given the question Q, Aopt, and the entire video V, this module generates three outputs:

1. Open-Vocabulary Targets: The VideoLLM identifies a set of open-vocabulary targets {T1, T2, ..., Tm} essential to
solving the question. These targets are derived from both the question context and the video content, forming an initial roadmap of entities or events likely to contribute to the answer.

2. Preliminary Answer and Reasoning: Based on its analysis, the VideoLLM proposes an initial answer A1 to the question, with a detailed rationale R1 that articulates why this answer might be appropriate and that will later aid in the validation process.

3. Scene-Segmented Timeframes: The VideoLLM segments the video into discrete scenes based on changes it identifies as relevant to the question and generates a caption describing in detail what happens in each scene. These segmented timeframes serve as markers for different key moments in the video, where one significant scene ends and another begins, establishing a structured temporal foundation for subsequent grounding and reasoning steps. These segments can be modeled as a set of timeframe-caption pairs {(ti, tf) : c}.

3.3. Open-Vocabulary Object Grounding

The grounding module OG (Fig. 3) performs an object-tracking task using the open-vocabulary targets identified by the analyzer module M1. This module is responsible for detecting and locating target objects in time within the video frames and is structured as follows:

Figure 3. A more detailed overview of the internal process of the OG module. The process begins by extracting all frames from the input video V. For each frame vi, the YOLO-World model detects specified target classes within the frame, using predetermined confidence and NMS thresholds (τc, τnms). After detection, these classes are tracked across all frames to establish the exact time intervals during which they are present. If a detected object is absent from subsequent frames for a specified duration τt, it is assumed to have exited the scene, marking the end of its appearance.

1. Object Detection: Utilizing YOLO-World [3] or a similar open-vocabulary object detection model, this phase processes each video frame vi to identify instances of the relevant targets. Applying a confidence threshold τc ensures that only credible detections are retained. Additionally, an NMS (non-maximum suppression) threshold τnms is used to filter redundant or overlapping detections, preserving the most significant object representations and avoiding overlapping noise in the results.

2. Timeframe Extraction: With object detections in place, this phase extracts timeframes where targets are identified, considering a time threshold τt that reduces the risk of short-lived false negatives (e.g., objects momentarily obscured or blurred). This time threshold is highly beneficial for ensuring precise temporal localization, so that detected objects remain consistent with the scene continuity without responding to minor, transient distortions. As a result, the grounding module outputs a set of well-defined timeframes where each target is reliably tracked. These extractions are a list of timeframes for each particular target: {T1: [(ti0, tf0), ..., (tin, tfn)], ...}.

These three hyperparameters (τc, τnms, τt) were fine-tuned in a set of tests before running the benchmarks and are crucial to achieving balanced and precise object grounding, having been optimized to ensure comprehensive target coverage without excessive or unreliable detections.
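As an illustration of the OG logic just described (and mirrored in Algorithm 1 below), the following sketch converts per-frame detections into per-target appearance intervals, closing an interval only when a target has been absent for at least τt seconds. The surrounding detector wrapper is hypothetical, and the default τt is the value tuned in Section 4.2.

```python
def extract_timeframes(per_frame_hits, fps, tau_t=1.5):
    """Turn per-frame detection flags into appearance intervals.

    per_frame_hits -- {target: [bool, ...]}, one flag per frame, True if the
                      target was detected in that frame (after applying the
                      confidence threshold tau_c and the NMS threshold tau_nms).
    fps            -- frames-per-second ratio of the video.
    tau_t          -- gap (seconds) after which a target is assumed gone.

    Returns {target: [(t_start, t_end), ...]} in seconds.
    """
    timelines = {}
    for target, hits in per_frame_hits.items():
        intervals, open_interval, last_seen = [], None, None
        for frame_idx, hit in enumerate(hits):
            t = frame_idx / fps
            if hit:
                if open_interval is None:
                    open_interval = [t, t]      # start a new interval
                else:
                    open_interval[1] = t        # extend the current interval
                last_seen = t
            elif open_interval is not None and t - last_seen >= tau_t:
                # Absent for at least tau_t seconds: close the interval.
                intervals.append(tuple(open_interval))
                open_interval = None
        if open_interval is not None:
            intervals.append(tuple(open_interval))
        timelines[target] = intervals
    return timelines
```

For example, with fps = 2 and flags [1, 1, 0, 0, 0, 0, 1] for a target, a τt of 1.5 s closes a first interval at 0.5 s and opens a second at 3.0 s; gaps shorter than τt are bridged, which absorbs momentary occlusions or blur.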
3.4. CoT Judgment

The final module M2 performs in-depth reasoning and cross-validation, combining outputs from the previous modules to deliver a definitive answer to the question. This module acts first as a judge, evaluating and consolidating all prior insights through a chain-of-thought approach, reutilizes the M1 logic in case of inconsistencies, and then, with the additional information, acts as a reasoner to provide a final answer:

1. Comparison Judgment: First, both the M1 reasoning output and the scene-segmented timeframes are merged into a single grounded context that contains information regarding the overall video and a first answer to be judged. This context is compared against the OG grounded object timeline. The main output of this comparison judgment is a decision on whether they are consistent and, if not, a specific reasoning of why and where they are inconsistent.

2. Question Generation: If the outputs are effectively discrepant, then with the aid of the why/where specifications of the discrepancy and the original question Q and answer options Aopt, a set of one or more clarification questions is formulated for the given timeframe, in order to obtain additional information to finally validate the confidence of the answer candidate proposed by M1. These clarification questions are then fed to M1's VideoLLM instance for simple question-answering. The answers are then analyzed in a further step.

3. Answer Refinement: Based on the clarification questions created in the previous step and their corresponding answers given by the VideoLLM, the module adjusts or refines the initial answer where necessary, ensuring that the final output reflects a coherent understanding of the video as well as consistency with both visual and semantic information.

3.5. ViQAgent

The end-to-end ViQAgent pipeline integrates these modules in a sequential manner to provide an accurate, reliable answer for VideoQA tasks. The algorithm of the whole framework can be seen in Algorithm 1. By structuring each module's operations this way, ViQAgent provides a robust VideoQA pipeline capable of answering complex questions in a zero-shot setting. It employs open-vocabulary grounding and modular reasoning to handle diverse visual-linguistic queries.

Algorithm 1: ViQAgent framework algorithm, after initializing the models YoloWorld, VideoLLM1,2,3,4, and LLM1,2,3.

1: Hyperparameters: τc, τnms, τt
2: Input: V, Q, [Aopt]
3: prompt ← Q + [Aopt]
4: ▷ M1 starts
5: A, R1 ← VideoLLM1(V, prompt) ▷ Rationale
6: TC ← VideoLLM2(V) ▷ Timeframes
7: T ← VideoLLM3(V, prompt) ▷ Targets
8: {T1, T2, ..., Tm} ← T
9: ▷ OG starts
10: YoloWorld.set_classes(T1, T2, ..., Tm)
11: D ← {T1: ∅, T2: ∅, ..., Tm: ∅} ▷ Detections
12: for vi in V do
13:   d ← YoloWorld.detect(vi, τc, τnms)
14:   for Ti in T do
15:     D[Ti] ← D[Ti] + d[Ti]
16:   end for
17: end for
18: t ← 0
19: TG ← {T1: ∅, T2: ∅, ..., Tm: ∅} ▷ Timeframes
20: for Ti in T do
21:   idx ← 0
22:   for d in D do
23:     t ← t + (V.fps)⁻¹
24:     if d[Ti] ≠ ∅ then
25:       if idx + 1 ≠ |TG[Ti]| then
26:         TG[Ti][idx] ← (t, 0)
27:       else
28:         TG[Ti][idx][1] ← t
29:       end if
30:     else
31:       Δt ← t − TG[Ti][idx][1]
32:       if Δt ≥ τt then
33:         idx ← idx + 1
34:       end if
35:     end if
36:   end for
37: end for
38: ▷ M2 starts
39: µ ← LLM1(R1, TC, TG) ▷ Unconfidence
40: if µ then
41:   W ← V.trim(µ.times)
42:   qs ← LLM2(µ, prompt)
43:   as ← ∅
44:   for q in qs do
45:     as ← as + VideoLLM4(W, q)
46:   end for
47:   A ← LLM3(R1, prompt, qs, as)
48: end if

4. Experiments

This section outlines the setup for benchmarking and evaluating ViQAgent's performance. We detail the datasets and metrics used, compare against relevant baselines, and discuss the modification studies that inform key hyperparameter choices in our model. Implementation details and a comprehensive analysis of our results are also provided.

4.1. Datasets and Metrics

We evaluated ViQAgent on four widely recognized video question answering (VideoQA) benchmarks, each representing unique video types, question formats, and challenges in the domain.

1. NExT-QA [37]: This dataset tests reasoning over causal, temporal, and descriptive question types. In our experiments, we used the validation split containing 4,996 video-question pairs. Each question is close-ended, presenting five answer options, with ViQAgent tasked with selecting the correct one.

2. iVQA [43]: iVQA comprises instructional video clips from the HowTo100M dataset, lasting 7-30 seconds. Each video clip includes a question and an annotated set of ground-truth answers, with ViQAgent evaluated on the test set, which consists of 1,879 clips. Notably, iVQA is an open-ended VideoQA task, requiring ViQAgent to generate responses without candidate options.

3. ActivityNet-QA [47]: This dataset includes 5,800 videos, each with ten annotated question-answer pairs, covering actions, objects, locations, and events. Similar to iVQA, ActivityNet-QA is an open-ended VideoQA dataset. For consistency with previous works, we report results on the test split, utilizing an evaluation based on a large language model (LLM) comparison between ViQAgent's response and the ground-truth answers (see Appendix C.1).

4. EgoSchema [22]: Focused on long-form, egocentric video understanding, EgoSchema contains 3-minute clips sourced from the Ego4D benchmark. It includes close-ended questions with higher complexity and length than NExT-QA's, assessing a model's ability to handle extended video content. We evaluated ViQAgent on the available 500 samples of the open-answer split, focusing solely on accuracy. This experiment, in particular, is not considered a paper contribution, as we did not evaluate the whole benchmark, but it still provides an addition to previous results.

| Method | Zero-Shot | Agent | Acc@C | Acc@T | Acc@D | Acc@All |
|---|---|---|---|---|---|---|
| HiTeA | ✗ | ✗ | 62.4 | 58.3 | 75.6 | 63.1 |
| LLaMa-VQA | ✗ | ✗ | 72.7 | 69.2 | 75.8 | 72.0 |
| SeViLa (fine-tuned) | ✗ | ✗ | - | - | - | 73.8 |
| InternVideo | ✓ | ✗ | 43.4 | 48.0 | 65.1 | 49.1 |
| AssistGPT | ✓ | ✗ | 60.0 | 51.4 | 67.3 | 58.4 |
| SeViLa | ✓ | ✗ | 61.3 | 61.5 | 75.6 | 63.6 |
| ProViQ | ✓ | ✓ | - | - | - | 63.8 |
| ViperGPT+ | ✓ | ✓ | - | - | - | 64.0 |
| JCEF | ✓ | ✓ | - | - | - | 66.7 |
| LLoVi | ✓ | ✓ | 69.5 | 61.0 | 75.6 | 67.7 |
| TraveLER | ✓ | ✓ | 70.0 | 60.5 | 78.2 | 68.2 |
| MoReVQA | ✓ | ✓ | - | - | - | 69.2 |
| VideoAgent (mem) | ✓ | ✓ | 60.0 | 76.0 | 76.5 | 70.8 |
| VideoAgent (long) | ✓ | ✓ | 72.7 | 64.5 | 81.1 | 71.3 |
| MotionEpic | ✓ | ✓ | 75.8 | 74.6 | 83.3 | 76.0 |
| ViQAgent (ours) | ✓ | ✓ | 82.2 | 74.5 | 86.3 | 80.4 |

Table 1. Results of ViQAgent against all the other state-of-the-art solutions on the NExT-QA benchmark, on the causal (C), temporal (T), and descriptive (D) subsets, as well as overall accuracy. ViQAgent surpasses all the baselines, achieving a new state-of-the-art on VideoQA.

(a) iVQA results:

| Method | Zero-Shot | Agent | Acc |
|---|---|---|---|
| VideoCoCa | ✗ | ✗ | 39.0 |
| FrozenBiLM (fine-tuned) | ✗ | ✗ | 39.7 |
| FrozenBiLM | ✓ | ✗ | 27.3 |
| BLIP-2 | ✓ | ✗ | 45.8 |
| InstructBLIP | ✓ | ✗ | 53.8 |
| ProViQ | ✓ | ✓ | 50.7 |
| JCEF | ✓ | ✓ | 56.9 |
| MoReVQA | ✓ | ✓ | 60.9 |
| ViQAgent (ours) | ✓ | ✓ | 62.6 |

(b) ActivityNet-QA results:

| Method | Zero-Shot | Agent | Acc |
|---|---|---|---|
| FrozenBiLM (fine-tuned) | ✗ | ✗ | 43.2 |
| Video-ChatGPT | ✓ | ✗ | 35.2 |
| Video-LLaVa | ✓ | ✗ | 45.3 |
| VideoChat2 | ✓ | ✗ | 49.1 |
| ViperGPT+ | ✓ | ✓ | 37.1 |
| ProViQ | ✓ | ✓ | 42.3 |
| JCEF | ✓ | ✓ | 43.3 |
| MoReVQA | ✓ | ✓ | 45.3 |
| MotionEpic | ✓ | ✓ | 54.6 |
| ViQAgent (ours) | ✓ | ✓ | 59.9 |

Table 2. Open-ended Question-Answering benchmark results.

Across all benchmarks, accuracy was used as the primary evaluation metric, providing a direct and interpretable measure of ViQAgent's performance on close-ended and open-ended tasks alike.
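Since the open-ended benchmarks are scored by an LLM comparison against the ground-truth answers (per the Appendix C.1 reference above), a judging step of that kind might look as follows. The prompt wording and the query_llm helper are hypothetical sketches, not the paper's exact judging setup.

```python
def llm_answer_match(question: str, prediction: str, references: list[str],
                     query_llm) -> bool:
    """Hypothetical LLM-as-judge check for open-ended VideoQA scoring.

    query_llm -- any callable that sends a prompt string to an LLM
                 and returns its text response.
    """
    prompt = (
        "Question: {q}\n"
        "Ground-truth answers: {refs}\n"
        "Model answer: {pred}\n"
        "Does the model answer match any ground-truth answer in meaning? "
        "Reply with exactly 'yes' or 'no'."
    ).format(q=question, refs="; ".join(references), pred=prediction)
    return query_llm(prompt).strip().lower().startswith("yes")

# Accuracy over a dataset is then the fraction of judged matches:
# acc = sum(llm_answer_match(q, p, refs, query_llm)
#           for q, p, refs in data) / len(data)
```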
4.2. Hyperparameter Tuning
To refine ViQAgent's hyperparameters, we performed a series of simple modification studies on the NExT-QA dataset, using, for each experiment, a small, randomly sampled, evenly class-distributed subset of videos. We investigated the effects of varying the detection thresholds in the object grounding module and the time threshold used to separate appearance timeframes:
Confidence Threshold. We evaluated confidence values of 0.01, 0.05, 0.1, and 0.3 for the object detection model within YOLO-World. A default confidence of 0.05 was found optimal, balancing sensitivity to target objects with tolerance to noise in lower-resolution, motion-filled frames. This value is considerably lower than the 0.3 confidence suggested in the original YOLO-World paper [3] because of the challenges posed by low-resolution, high-motion video frames in the benchmarks, which do not arise in image-level object detection. This reduction ensures that important objects are detected, accounting for YOLO-World's open-vocabulary capabilities, which can introduce variability in detection sensitivity depending on the complexity of the target object sets provided by the first module.
NMS Threshold. The non-maximum suppression (NMS) threshold, controlling overlap in detections, was tested with intersection-over-union (IoU) values of 0.1 and 0.3. An IoU of 0.1 performed best, effectively reducing overlapping false positives while preserving distinct object instances in most cases, which is essential for ViQAgent's open-vocabulary object grounding approach.
Time Threshold. We tested temporal segmentation with 500 ms, 1 s, and 1.5 s time thresholds. A 1.5-second threshold was chosen because it captured consistent object presence across varying scene dynamics, particularly useful in lengthy benchmark videos, ensuring detection continuity despite variances in video length and quality. However, minor changes to this threshold did not result in significant differences.
| Method | Agent | Acc | Subset Acc |
|---|---|---|---|
| SeViLa | ✗ | 25.7 | 22.7 |
| ImageViT | ✗ | 40.8 | 30.9 |
| ShortViViT | ✗ | 49.6 | 31.0 |
| InternVideo | ✗ | - | 32.1 |
| LongViViT | ✗ | 56.8 | 33.3 |
| MC-ViT-L | ✗ | 62.6 | 44.4 |
| Vamos | ✗ | - | 48.3 |
| JCEF | ✓ | - | 50.0 |
| LLoVi | ✓ | 57.6 | 50.3 |
| MoReVQA | ✓ | - | 51.7 |
| TraveLER | ✓ | - | 53.3 |
| VideoAgent (long) | ✓ | 60.2 | 54.1 |
| ProViQ | ✓ | - | 57.1 |
| ViQAgent (ours) | ✓ | 67.87 | - |
Table 3. EgoSchema results.
| Benchmark | Increase over the SotA |
|---|---|
| ActivityNet-QA | +5.3% |
| NExT-QA | +4.4% |
| iVQA | +1.7% |
Table 4. Increase over the state-of-the-art results on the benchmarks.
4.3. Results and Discussion
ViQAgent not only surpasses prior state-of-the-art results across all evaluated benchmarks (Tab. 4) but also establishes itself as a leading choice among both zero-shot VideoQA and modular, agent-based solutions. On the close-ended NExT-QA benchmark (Tab. 1), ViQAgent demonstrated a significant edge, achieving a 4.4% increase in accuracy over the previous best-performing Video-of-Thought (VoT) solution, MotionEpic [8]. One particularly notable finding is that our zero-shot model outperforms fine-tuned models that are optimized specifically for this benchmark, such as SeViLa [46], LLaMa-VQA [13], and HiTeA [44]. This result highlights ViQAgent's capability to generalize effectively without needing task-specific training, underscoring its potential as a more adaptable and resource-efficient solution for VideoQA tasks. | https://arxiv.org/abs/2505.15928v1 |
This zero-shot superiority not only raises the benchmark for VideoQA performance but | https://arxiv.org/abs/2505.15928v1 |
also signals the value of robust, generalized VideoLLMs in solving real-world, unseen problems without extensive fine-tuning. Results across the open-ended benchmarks—iVQA (Tab. 2a) and ActivityNet-QA (Tab. 2b)—further validate ViQAgent's robustness and adaptability to diverse video types and question structures, showing its performance as a versatile VideoQA agent capable of handling both close-ended and open-ended tasks, while surpassing robust state-of-the-art agentic solutions for VideoQA: MoReVQA [23] (and its additional contributions JCEF and ViperGPT+), TraveLER [28], ProViQ [4], MotionEpic [8], VideoAgent [7], and VideoAgent [34]. Further, ViQAgent shows promising results on the partial open-answer subset of questions within the EgoSchema benchmark (Tab. 3), expanding its potential applicability as a flexible VideoQA model well suited for both straightforward and complex agentic tasks.
5. Conclusion
In this paper, we presented ViQAgent, a novel framework that advances the field of video question answering (VideoQA) by leveraging a modular, task-specific approach tailored for zero-shot generalization across diverse VideoQA benchmarks. By employing a strategically layered architecture that integrates open-vocabulary object detection, object grounding, and chain-of-thought reasoning, ViQAgent dynamically adapts to the complexities of various video domains, enabling it to successfully tackle both close-ended and open-ended questions without requiring extensive fine-tuning or training. Through extensive benchmarking on datasets such as NExT-QA, iVQA, and ActivityNet-QA, we demonstrated ViQAgent's superior performance, significantly outperforming existing zero-shot and modular-agent approaches. Our experiments also highlight the adaptability of our framework to low-resolution, high-motion, and lengthy videos, settings that traditionally challenge VideoQA models. In doing so, ViQAgent pushes the boundaries of zero-shot VideoQA and sets a new standard for intelligent and efficient multimodal comprehension in versatile video understanding.
References
[1] Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, Siamak Shakeri, Mostafa Dehghani, Daniel Salz, Mario Lucic, Michael Tschannen, Arsha Nagrani, Hexiang Hu, Mandar Joshi, Bo Pang, Ceslee Montgomery, Paulina Pietrzyk, Marvin Ritter, AJ Piergiovanni, Matthias Minderer, Filip Pavetic, Austin Waters, Gang Li, Ibrahim Alabdulmohsin, Lucas Beyer, Julien Amelot, Kenton Lee, Andreas Peter Steiner, Yang Li, Daniel Keysers, Anurag Arnab, Yuanzhong Xu, Keran Rong, Alexander Kolesnikov, Mojtaba Seyedhosseini, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, and Radu Soricut. Pali-x: On scaling up a multilingual vision and language model, 2023. 1
[2] Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Nan Ding, Keran Rong, Hassan Akbari, Gaurav Mishra, Linting Xue, Ashish Thapliyal, James Bradbury, Weicheng Kuo, Mojtaba Seyedhosseini, Chao Jia, Burcu Karagol Ayan, Carlos Riquelme, Andreas Steiner, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, and Radu Soricut. Pali: A jointly-scaled multilingual language-image model, 2023. 1
[3] Tianheng Cheng, Lin Song, Yixiao Ge, Wenyu Liu, Xinggang Wang, and Ying Shan. Yolo-world: Real-time open-vocabulary object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16901–16911, 2024. 2, 3, 5, 8
[4] Rohan Choudhury, Koichiro Niinuma, Kris M. Kitani, and László A. Jeni. Zero-shot video | https://arxiv.org/abs/2505.15928v1 |
question answering with procedural programs. CoRR , abs/2312.00937, 2023. 1, 3, 8, 12 [5] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. InstructBLIP: Towards general- purpose vision-language models with instruction tun- ing. In Thirty-seventh Conference on Neural Informa- tion Processing Systems , 2023. 12 [6]Shangzhe Di and Weidi Xie. Grounded question- answering in long egocentric videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition (CVPR) , pages 12934–12943, 2024. 1, 3, 4 [7]Yue Fan, Xiaojian Ma, Rujie Wu, Yuntao Du, Jiaqi Li, Zhi Gao, and Qing Li. Videoagent: A memory- augmented multimodal agent for video understanding. InComputer Vision – ECCV 2024 , pages 75–92, Cham, 2025. Springer Nature Switzerland. 1, 3, 8, 12[8]Hao Fei, Shengqiong Wu, Wei Ji, Hanwang Zhang, Meishan Zhang, Mong-Li Lee, and Wynne Hsu. Video- of-thought: Step-by-step video reasoning from percep- tion to cognition. In Forty-first International Confer- ence on Machine Learning , 2024. 1, 3, 8, 12 [9]Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, and Mike Zheng Shou. As- sistgpt: A general multi-modal assistant that can plan, execute, inspect, and learn, 2023. 12 [10] Xin Gu, Heng Fan, Yan Huang, Tiejian Luo, and Libo Zhang. Context-guided spatio-temporal video ground- ing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages 18330–18339, 2024. 3, 4 [11] Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, De- vendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, L ´elio Renard Lavaud, Lu- cile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Anto- niak, Teven Le Scao, Th ´eophile Gervet, Thibaut Lavril, Thomas Wang, Timoth ´ee Lacroix, and William El Sayed. Mixtral of experts, 2024. 1 [12] Zaid Khan and Yun Fu. Consistency and uncertainty: Identifying unreliable responses from black-box vision- language models for selective visual question answer- ing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages 10854–10863, 2024. 1, 3 [13] Dohwan Ko, Ji Lee, Woo-Young Kang, Byungseok Roh, and Hyunwoo Kim. Large language models are temporal and causal reasoners for video question an- swering. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing , pages 4300–4316, Singapore, 2023. Association for Computational Linguistics. 8, 12 [14] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models, 2023. 1, 3, 12 [15] KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. Videochat: Chat-centric video understanding, 2024. 1, 3 [16] Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, Limin Wang, and Yu Qiao. Mvbench: A compre- hensive multi-modal video understanding benchmark, 2024. 12 [17] Zhaohe Liao, Jiangtong Li, Li Niu, and Liqing Zhang. Align and aggregate: Compositional reasoning with video alignment and answer aggregation for video 9 | https://arxiv.org/abs/2505.15928v1 |
question-answering. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages 13395–13404, 2024. 1, 3 [18] Christian Limberg, Artur Gon c ¸alves, Bastien Rigault, and Helmut Prendinger. Leveraging YOLO-world and GPT-4v LMMs for zero-shot person detection and ac- tion recognition in drone imagery. In First Workshop on Vision-Language Models for Navigation and Ma- nipulation at ICRA 2024 , 2024. 2 [19] Bin Lin, Yang Ye, Bin Zhu, Jiaxi Cui, Munan Ning, Peng Jin, and Li Yuan. Video-llava: Learning united visual representation by alignment before projection, 2024. 1, 3, 12 [20] Fan Ma, Xiaojie Jin, Heng Wang, Yuchen Xian, Jiashi Feng, and Yi Yang. Vista-llama: Reducing hallucina- tion in video language models via equal distance to visual tokens. In Proceedings of the IEEE/CVF Con- ference on Computer Vision and Pattern Recognition (CVPR) , pages 13151–13160, 2024. 1, 3 [21] Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and lan- guage models, 2024. 1, 3, 12, 13 [22] Karttikeya Mangalam, Raiymbek Akshulakov, and Ji- tendra Malik. Egoschema: A diagnostic benchmark for very long-form video language understanding. In Advances in Neural Information Processing Systems , pages 46212–46244. Curran Associates, Inc., 2023. 3, 7, 12 [23] Juhong Min, Shyamal Buch, Arsha Nagrani, Minsu Cho, and Cordelia Schmid. Morevqa: Exploring mod- ular reasoning models for video question answering. InProceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition (CVPR) , pages 13235–13245, 2024. 1, 3, 4, 8, 12, 13 [24] Chancharik Mitra, Brandon Huang, Trevor Darrell, and Roei Herzig. Compositional chain-of-thought prompt- ing for large multimodal models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition (CVPR) , pages 14420–14431, 2024. 1, 3 [25] David Mogrovejo and Thamar Solorio. Question- instructed visual descriptions for zero-shot video an- swering. In Findings of the Association for Com- putational Linguistics: ACL 2024 , pages 9329–9339, Bangkok, Thailand, 2024. Association for Computa- tional Linguistics. 1, 3 [26] OpenAI. Gpt-4 technical report, 2024. 1 [27] Peng Qi, Zehong Yan, Wynne Hsu, and Mong Li Lee. Sniffer: Multimodal large language model for explain- able out-of-context misinformation detection. In Pro- ceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition (CVPR) , pages 13052– 13062, 2024. 1, 3 [28] Chuyi Shang, Amos You, Sanjay Subramanian, Trevor Darrell, and Roei Herzig. TraveLER: A modular multi- LMM agent framework for video question-answering. InProceedings of the 2024 Conference on Empiri- cal Methods in Natural Language Processing , pages 9740–9766, Miami, Florida, USA, 2024. Association for Computational Linguistics. 1, 3, 8, 12 [29] Enxin Song, Wenhao Chai, Guanhong Wang, Yucheng Zhang, Haoyang Zhou, Feiyang Wu, Haozhe Chi, Xun Guo, Tian Ye, Yanting Zhang, Yan Lu, Jenq-Neng Hwang, and Gaoang Wang. Moviechat: From dense token to sparse memory for long video understanding. InProceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition (CVPR) , pages 18221–18232, 2024. 1, 3 [30] D´ıdac Sur ´ıs, Sachit Menon, and Carl V ondrick. Vipergpt: Visual inference via python execution for reasoning. In Proceedings of the IEEE/CVF Interna- tional Conference on Computer Vision | https://arxiv.org/abs/2505.15928v1 |
(ICCV) , pages 11888–11898, 2023. 12 [31] Yunlong Tang, Jing Bi, Siting Xu, Luchuan Song, Su- san Liang, Teng Wang, Daoan Zhang, Jie An, Jingyang Lin, Rongyi Zhu, Ali V osoughi, Chao Huang, Zeliang Zhang, Pinxin Liu, Mingqian Feng, Feng Zheng, Jian- guo Zhang, Ping Luo, Jiebo Luo, and Chenliang Xu. Video understanding with large language models: A survey, 2024. 1, 3 [32] Gemini Team. Gemini 1.5: Unlocking multimodal un- derstanding across millions of tokens of context, 2024. 1, 3, 13 [33] Shijie Wang, Qi Zhao, Minh Quan Do, Nakul Agarwal, Kwonjoon Lee, and Chen Sun. Vamos: Versatile action models for video understanding, 2024. 12 [34] Xiaohan Wang, Yuhui Zhang, Orr Zohar, and Serena Yeung-Levy. Videoagent: Long-form video understand- ing with large language model as agent. In Com- puter Vision – ECCV 2024 , pages 58–76, Cham, 2025. Springer Nature Switzerland. 1, 3, 8, 12 [35] Yi Wang, Kunchang Li, Yizhuo Li, Yinan He, Bingkun Huang, Zhiyu Zhao, Hongjie Zhang, Jilan Xu, Yi Liu, Zun Wang, Sen Xing, Guo Chen, Junting Pan, Jiashuo Yu, Yali Wang, Limin Wang, and Yu Qiao. Internvideo: General video foundation models via generative and discriminative learning, 2022. 12 [36] Yan Wang, Yawen Zeng, Jingsheng Zheng, Xiaofen Xing, Jin Xu, and Xiangmin Xu. VideoCoT: A video chain-of-thought dataset with active annotation tool. InProceedings of the 3rd Workshop on Advances in Language and Vision Research (ALVR) , pages 92–101, Bangkok, Thailand, 2024. Association for Computa- tional Linguistics. 3 10 [37] Junbin Xiao, Xindi Shang, Angela Yao, and Tat-Seng Chua. Next-qa:next phase of question-answering to explaining temporal actions, 2021. 3, 6, 12 [38] Junbin Xiao, Nanxin Huang, Hangyu Qin, Dongyang Li, Yicong Li, Fengbin Zhu, Zhulin Tao, Jianxing Yu, Liang Lin, Tat-Seng Chua, and Angela Yao. Videoqa in the era of llms: An empirical study, 2024. 1, 3 [39] Junbin Xiao, Angela Yao, Yicong Li, and Tat-Seng Chua. Can i trust your answer? visually grounded video question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recogni- tion (CVPR) , pages 13204–13214, 2024. 1, 3, 4 [40] Jiaqi Xu, Cuiling Lan, Wenxuan Xie, Xuejin Chen, and Yan Lu. Retrieval-based video language model for efficient long video question answering, 2023. 1, 3 [41] Shen Yan, Tao Zhu, Zirui Wang, Yuan Cao, Mi Zhang, Soham Ghosh, Yonghui Wu, and Jiahui Yu. Video- coca: Video-text modeling with zero-shot transfer from contrastive captioners, 2023. 12 [42] Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Zero-shot video question answer- ing via frozen bidirectional language models, 2022. 12 [43] Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Learning to answer visual ques- tions from web videos, 2022. 3, 6, 12, 13 [44] Qinghao Ye, Guohai Xu, Ming Yan, Haiyang Xu, Qi Qian, Ji Zhang, and Fei Huang. Hitea: Hierarchical temporal-aware video-language pre-training. In Pro- ceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) , pages 15405–15416, 2023. 8, 12 [45] Shuhong Ye, Weikai Kong, Chenglin Yao, Jianfeng Ren, and Xudong Jiang. Video question answering using clip-guided visual-text attention. In 2023 IEEE International Conference on Image Processing | https://arxiv.org/abs/2505.15928v1 |
(ICIP) , pages 81–85, 2023. 3 [46] Shoubin Yu, Jaemin Cho, Prateek Yadav, and Mohit Bansal. Self-chained image-language model for video localization and question answering. In Advances in Neural Information Processing Systems , pages 76749– 76771. Curran Associates, Inc., 2023. 8, 12 [47] Zhou Yu, Dejing Xu, Jun Yu, Ting Yu, Zhou Zhao, Yueting Zhuang, and Dacheng Tao. Activitynet-qa: A dataset for understanding complex web videos via question answering, 2019. 3, 6, 12 [48] Ce Zhang, Taixi Lu, Md Mohaiminul Islam, Ziyang Wang, Shoubin Yu, Mohit Bansal, and Gedas Bertasius. A simple llm framework for long-range video question- answering, 2024. 12 [49] Hang Zhang, Xin Li, and Lidong Bing. Video-llama: An instruction-tuned audio-visual language model for video understanding, 2023. 1, 3[50] Shitian Zhao, Zhuowan Li, Yadong Lu, Alan Yuille, and Yan Wang. Causal-cog: A causal-effect look at context generation for boosting multi-modal language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages 13342–13351, 2024. 1, 3 [51] Yaoyao Zhong, Wei Ji, Junbin Xiao, Yicong Li, Wei- hong Deng, and Tat-Seng Chua. Video question answer- ing: Datasets, algorithms and challenges. In Proceed- ings of the 2022 Conference on Empirical Methods in Natural Language Processing , pages 6439–6455, Abu Dhabi, United Arab Emirates, 2022. Association for Computational Linguistics. 3 11 ViQAgent: Zero-Shot Video Question Answering via Agent with Open-Vocabulary Grounding Validation Supplementary Material This supplementary material provides detailed informa- tion about the ViQAgent framework implementation. It in- cludes a case study showcasing the outputs at each stage of the framework and analyzing observations from each step (App. A). Additionally, the prompts and schemas used in the solution are detailed (App. B), along with the baselines and benchmark configurations (App. C). Finally, insights and implications are discussed in greater depth (App. D). A. Case Study To illustrate the internal outputs and their overall contribu- tion to the final answer, we present a comprehensive step- by-step case study. This example demonstrates a scenario where the VideoLLM fails to answer the question directly but provides hints that facilitate the detection of subsequent inconsistencies, ultimately leading to a correct answer. The case study is detailed in Figs. 4–10. These steps align with the methodology and sequence outlined in Algorithm 1. B. Prompts and Schemas In the ViQAgent framework, multiple LLMs are utilized in both the M1andM2modules. Module 1 is responsible for independently extracting relevant information and reasoning from the video, while Module 2 compiles and evaluates the final answer based on the information gathered. Module 1 directly interacts with the video through Vide- oLLMs, whereas Module 2 leverages LLMs for reason- ing using the pre-computed information without direct ac- cess to the video. The prompts and output schemas for all VideoLLM 1,2,3,4(Module 1) and LLM 1,2,3are presented in Tabs. 5 - 18, following the procedure shown previously in the Algorithm 1. Notably, for VideoLLM 1(Tab. 5) and LLM 3 (Tab. 17), a space is left at the end. This is because these two submodules are specifically responsible for providing direct answers to the query. As the question formats depend on the | https://arxiv.org/abs/2505.15928v1 |
specific benchmark, an additional subinstruction is appended in this section to accommodate the requirements of the respective benchmark.
C. Baselines and Benchmarks
For each of the evaluated benchmarks, we conducted a thorough review of the most relevant state-of-the-art solutions from recent years that have reported results on these benchmarks, focusing on those that achieved notably high accuracy or are widely recognized baselines evaluated against previous work.
NExT-QA [37]: We compare against fine-tuned solutions that achieved high performance across both the specific subsets (Causal, Temporal, and Descriptive) and the overall set. These solutions include HiTeA [44], LLaMa-VQA [13], and a fine-tuned version of SeViLa [46]. Additionally, several zero-shot solutions have emerged, as NExT-QA is a significant benchmark for evaluating video question-answering solutions. We classified these solutions into agentic and non-agentic categories. The non-agentic solutions include InternVideo [35], AssistGPT [9], and SeViLa [46]. The agentic solutions, which are more pertinent for comparison with our framework, include ProViQ [4], ViperGPT+, JCEF, and MoReVQA (all introduced in Min et al. [23], with ideas from Surís et al. [30]), LLoVi [48], TraveLER [28], VideoAgent [7], VideoAgent [34], and MotionEpic [8] with its Video-of-Thought (VoT) framework.
iVQA [43]: For this dataset, only two fine-tuned solutions were identified, as reported in Min et al. [23]: VideoCoCa [41] and a fine-tuned version of FrozenBiLM [42]. Its non-fine-tuned counterpart, alongside BLIP-2 [14] and InstructBLIP [5], forms the zero-shot non-agentic baselines for comparison. Among the agentic solutions, which are particularly relevant to our framework, the notable baselines include ProViQ [4], JCEF, and MoReVQA [23].
ActivityNet-QA [47]: For the ActivityNet-QA benchmark, the fine-tuned group comprises only the FrozenBiLM solution [42]. The non-agentic zero-shot baselines include Video-ChatGPT [21], Video-LLaVa [19], and VideoChat2 [16]. For agentic solutions, the primary baselines are ViperGPT+, JCEF, MoReVQA [23], ProViQ [4], and MotionEpic [8], which employs the VoT framework.
EgoSchema [22]: Although EgoSchema is not a primary benchmark for our solution, several baselines were compared with our partial results. Notably, fine-tuning is not feasible for this dataset, as only a subset of the answers is accessible. Therefore, only non-agentic and agentic solutions were included in the evaluation. Among the non-agentic baselines, the most relevant are SeViLa [46], InternVideo [35], and Vamos [33]. For agentic solutions, previously established baselines were used, including LLoVi [48], TraveLER [28], VideoAgent [34], ProViQ [4], JCEF, and MoReVQA [23].
C.1. ActivityNet-QA evaluation
The ActivityNet-QA [47] open-ended benchmark provides only a single correct answer for each question, requiring LLM-based evaluation of responses, as shown in prior work [21, 23, 32]. Notably, this process was not required for iVQA [43], which offers a set of potential answers. For this evaluation, we used the following prompt:
Evaluate whether the predicted answer/reasoning are correct based on the real answer to the question. Only output 'yes' or 'no', don't provide an explanation.
Question: {q}
Real answer: {a}
Predicted answer: {p}
Predicted reasoning: {r}
Output (yes/no): | https://arxiv.org/abs/2505.15928v1 |
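A minimal sketch of how this judge prompt can be wired up, assuming an OpenAI-compatible chat API; the model name and the `judge` helper are our illustrative choices, while the prompt template itself is quoted from Appendix C.1 above.

```python
# A sketch of the LLM-based ActivityNet-QA judge; model name is illustrative.
from openai import OpenAI

JUDGE_PROMPT = (
    "Evaluate whether the predicted answer/reasoning are correct based on "
    "the real answer to the question. Only output 'yes' or 'no', don't "
    "provide an explanation.\n"
    "Question: {q}\nReal answer: {a}\n"
    "Predicted answer: {p}\nPredicted reasoning: {r}\nOutput (yes/no):"
)

def judge(client: OpenAI, q: str, a: str, p: str, r: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o",   # illustrative choice of judge model
        temperature=0.0,  # deterministic decoding, matching the paper's setup
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(q=q, a=a, p=p, r=r)}],
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")
```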
D. Additional Analysis
As demonstrated in the benchmark results, the ViQAgent framework shows significant promise for the video question answering task and, more broadly, for tool-using agents, establishing a new state of the art by integrating VideoLLMs, LLMs, and computer vision foundation models to address questions based on video content. However, it is important to acknowledge that, as LLMs currently cannot be seeded, certain random factors in their output remain uncontrollable, even though all experiments were conducted with the temperature parameter set to 0.0. Furthermore, the ViQAgent framework proves most effective when the object or target of the query is visibly present in the video. In cases where the YOLO-World model cannot detect the object, confusion may arise. Nevertheless, the framework is robust enough to mitigate this confusion through the CoT judge module (M2), resulting in improved performance while preserving the primary advantages of zero-shot VideoLLM-based question answering.
Figure 4. VideoLLM Analyzer (VideoLLM_1): Given the full video and the "prompt" (question plus answer options, if available), the VideoLLM Analyzer submodule provides a first-sight response with reasoning text explaining why that answer is correct.
Figure 5. VideoLLM Captioner (VideoLLM_2): Given the full video, but not the question (to avoid bias), the VideoLLM Captioner submodule provides a comprehensive set of event-separated timeframes with a description (i.e., caption) of what is happening in every part of the video. This is the first grounding output, later used for comparison against YOLO-World object grounding.
Figure 6. VideoLLM Target Finder (VideoLLM_3): Given the full video and the "prompt" (question plus answer options, if available), the VideoLLM Target Finder submodule is straightforward and simple, yet effective at finding up to 4 relevant targets to identify in the video with the YOLO-World model. These objects/targets are selected based on both the video content and the relevant targets mentioned in the questions and answers.
Figure 7. (OG Module): First, given the full video and the selected targets, the object detector model (YOLO-World) extracts the detections of the targets in the video, which are then passed to the Object Grounder and consolidated into a dictionary that contains all the timeframes in which each target is detected.
Figure 8. LLM Grounding Comparator (LLM_1): Given the grounding output from the captioning submodule (VideoLLM_2), the object grounding, and the first-sight reasoning (VideoLLM_1), the grounding comparator determines whether there are inconsistencies or uncertain parts within them and explains in which parts these inconsistencies occur. These are then fed to the Question Generator (LLM_2), along with the prompt (original question plus answer options, if available), to state up to 3 clean questions and their doubtful timeframes, which are then asked to a VideoLLM.
Figure 9. The stated questions from the previous step are then fed to the VideoLLM QA (VideoLLM_4) submodule, which simply answers them.
Figure 10. Once all the intermediate outputs are generated, they are fed to a Final Reasoner (LLM_3) submodule that answers the question with the new information. | https://arxiv.org/abs/2505.15928v1 |
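The interaction depicted in Figures 8-10 (and in the M2 block of Algorithm 1) can be summarized in the following minimal Python sketch; every callable is a placeholder for the corresponding LLM/VideoLLM submodule, the dictionary keys follow the output schemas in Appendix B, and `video.trim` mirrors Algorithm 1's `V.trim` but is otherwise hypothetical.

```python
def m2_refine(video, prompt, r1, tc, tg, llm1, llm2, videollm4, llm3):
    """CoT-judge module (M2) flow from Figures 8-10; a sketch with
    placeholder callables for each submodule, not the released code."""
    mu = llm1(r1, tc, tg)             # grounding comparator: disagreement report
    if not mu["disagree"]:
        return None                   # no discrepancy: keep the first-sight answer
    window = video.trim(mu["times"])  # restrict to the doubtful timeframes
    questions = llm2(mu, prompt)      # up to 3 clarification questions
    answers = [videollm4(window, q) for q in questions]
    return llm3(r1, prompt, questions, answers)  # refined final answer
```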
Table 5. VideoLLM Analyzer Prompt (VideoLLM 1) Prompt: VideoLLM Analyzer VLLM_PROMPT_1 = """ Based on the provided video, select or provide the correct answer for the user question. Break down your reasoning into clear, logical steps, and arrive at the most accurate answer. To ensure accuracy, follow this step-by-step reasoning process: 1. Restate or reframe the question for clarity. 2. Consider key events, actions, or objects relevant to the question. 3. If answer options are provided, assess each option in relation to the video’s content. If no options are given, logically derive an answer. 4. Provide a clear and concise response based on your reasoning. You must provide the index of the selected answer or the answer itself, and a brief explanation of your reasoning. """ Table 6. VideoLLM Analyzer Schema (VideoLLM 1) Output Schema for VideoLLM Analyzer VLLM_SCHEMA_1 = { "type": "object", "properties": { "reasoning": { "type": "string" }, "answer": { "type": "string" } } } 18 Table 7. VideoLLM Captioner Prompt (VideoLLM 2) Prompt: VideoLLM Captioner VLLM_PROMPT_2 = """ Based on the provided video and the given question (and answer options if available), capture a list of the main timeframes in the video in the format <<mm0:ss0,mm1:ss1>>: {description}, where ’description’ is a detailed description of what is happening in that particular timeframe. Follow these steps to generate your response: 1. Carefully analyze the question and the video content to identify the key events or actions that are relevant to the question. 2. Identify key events, actions, or transitions that represent meaningful changes or notable moments in the video. 3. Break the video into distinct timeframes where these events occur. 4. For each identified timeframe, provide a clear, detailed description of the action or scene in that segment. 5. Ensure that each description is specific, concise, and accurately reflects the action within the timeframe. """ Table 8. VideoLLM Captioner Schema (VideoLLM 2) Output Schema for VideoLLM Captioner VLLM_SCHEMA_2 = { "type": "object", "properties": { "timeframes": { "type": "array", "items": { "type": "string" } } } } 19 Table 9. VideoLLM Target Finder Prompt (VideoLLM 3) Prompt: VideoLLM Target Finder VLLM_PROMPT_3 = """ Based on the provided video and the given question (and answer options if available), your task is to capture a **list of objects/targets **that are involved in the video and are relevant to the question. These targets will be used for object detection and grounding via a YOLO model. Please follow these steps: 1. Understand the question and its context within the video, along with any answer options provided. 2. Focus on the most relevant objects or targets that are involved in the video’s key actions or scenes. Ensure that these targets directly relate to the question. 3. Choose no more than 4 targets, ideally 3 or fewer. Consider only the objects that are clearly present and essential to answering the question, and that are not too complex to identify (not too large as well), but not too general for the particular video. 4. Ensure that the targets are also directly related to the answer options, if provided. 5. | https://arxiv.org/abs/2505.15928v1 |
Provide a short list of targets, ensuring each description is clear and relevant (e.g., ’player in white outfit’, ’spoon’, etc.). """ Table 10. VideoLLM Target Finder Schema (VideoLLM 3) Output Schema for VideoLLM Target Finder VLLM_SCHEMA_3 = { "type": "object", "properties": { "targets": { "type": "array", "items": { "type": "string" } } } } 20 Table 11. VideoLLM QA Prompt (VideoLLM 4) Prompt: VideoLLM QA VLLM_PROMPT_4 = """ Based on the provided video, answer the user question in the VERY SPECIFIC given timeframe. Only provide the final, concise answer, directly related to the question. Base your answer ONLY on the information in the video, and do not add any information. If the answer is not present in the video, state ’unanswerable’. For example, if the question is ’What color is the car?’, and the car is not shown in the video timeframe, the answer should be ’unanswerable’. """ Table 12. VideoLLM QA Schema (VideoLLM 4) Output Schema for VideoLLM QA VLLM_SCHEMA_4 = { "type": "object", "properties": { "answer": { "type": "string" } } } 21 Table 13. LLM Grounding Comparator Prompt (LLM 1) Prompt: LLM Grounding Comparator LLM_PROMPT_1 = """ You will be provided with reasoning for an answer to a question, along with two grounding pieces of information: 1.**VideoLLM-extracted grounding captions **: These describe the key events and timeframes within the video (e.g., <<mm0:ss0,mm1:ss1>>: {description}). 2.**YOLO object grounding **: This identifies the specific objects/targets and their appearances in different video timeframes. Your task is to analyze if there is any disagreement between the grounding information (both the captions and object grounding) and the reasoning for the answer. Disagreements may occur if the reasoning implies events or objects appearing in timeframes that are inconsistent with the grounding. Please output a "disagree" boolean indicating if there is any disagreement at all, and a detailed but concise explanation of the specific timeframes where the grounding information does not align with the reasoning. Only include timeframes where discrepancies occur, and keep the explanation short but clear. If no disagreement is found, simply explain that there is no disagreement. Disagreements should be highlighted by timeframe (<<mm0:ss0,mm1:ss1>>) and why the reasoning conflicts with the provided grounding information. """ Table 14. LLM Grounding Comparator Schema (LLM 1) Output Schema for LLM Grounding Comparator LLM_SCHEMA_1 = { "type": "object", "properties": { "reasoning": { "type": "string" }, "disagree": { "type": "boolean" } } } 22 Table 15. LLM Question Generator Prompt (LLM 2) Prompt: LLM Question Generator LLM_PROMPT_2 = """ You will be provided the following: 1. A question (and answer options if available) related to a video. 2. A text explaining the set of discrepancies found in previous studies of the video. These indicate specific timeframes in the video where the grounding information does not align with the reasoning. These timeframes and the reasons for the discrepancies are provided. Your task is to generate a set of up to 3 concise questions to ask a VideoLLM to clarify and provide a more grounded, precise answer. The goal is to resolve the discrepancies and improve the grounding for the question | https://arxiv.org/abs/2505.15928v1 |
at hand. - Each question should focus on a specific timeframe where a discrepancy was found. - Each question should be concise and relevant to the timeframe, and particularly relevant to answer the question. - Ensure that each question includes the timeframe where the clarification is needed, formatted as <<mm0:ss0,mm1:ss1>>. - The timeframe must be very precise in time, covering only the specific segment where the discrepancy occurred. - Do not include any unnecessary details, just the specific query for clarification. - If there are not CONSIDERABLE discrepancies, you may return an empty list! Generate between 0 and up to 3 questions based on the discrepancies identified. """ Table 16. LLM Question Generator Schema (LLM 2) Output Schema for LLM Question Generator LLM_SCHEMA_2 = { "type": "object", "properties": { "questions": { "type": "array", "items": { "type": "string" } } } } 23 Table 17. LLM Final Reasoner Prompt (LLM 3) Prompt: LLM Final Reasoner LLM_PROMPT_3 = """ You will be provided the following: 1. A question (and answer options if available) related to a video. 2. An initial reasoning made for a possible answer, along with an explanation of why it was chosen. This reasoning was done BEFORE knowing the grounding information, and clarification questions. 3. The **grounding information **: -**VideoLLM grounding **: Timeframes and event descriptions from the video. -**YOLO object grounding **: Objects/targets identified in the video and their corresponding appearing timeframes. 4. A set of clarification questions asked about discrepancies in the grounding, and their responses. Your task is to: 1. Analyze all the provided information and reasoning. 2. Select or provide the correct answer for the user question, based on the new clarifications from the questions and grounding data. 3. Provide the final, most accurate specific answer, as well as a reasoning for it. Remember to stick to the information provided, and ensure that your answer is accurate and well-supported by the grounding information and reasoning provided. If none of the answer options are correct, select the most appropiate based on the new information and reasoning. """ Table 18. LLM Final Reasoner Schema (LLM 3) Output Schema for LLM Final Reasoner LLM_SCHEMA_3 = { "type": "object", "properties": { "reasoning": { "type": "string" }, "answer": { "type": "string" } } } 24 | https://arxiv.org/abs/2505.15928v1 |
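To illustrate how these output schemas constrain the submodules in practice, here is a minimal sketch that validates a Final Reasoner reply against LLM_SCHEMA_3 (Table 18). Using the third-party `jsonschema` package is our choice for illustration; the example reply is fabricated.

```python
# Validate a Final Reasoner (LLM_3) reply against its output schema.
import json
from jsonschema import validate, ValidationError

LLM_SCHEMA_3 = {
    "type": "object",
    "properties": {
        "reasoning": {"type": "string"},
        "answer": {"type": "string"},
    },
}

raw_reply = '{"reasoning": "The clarified timeframe shows a spoon.", "answer": "a spoon"}'

try:
    reply = json.loads(raw_reply)
    validate(instance=reply, schema=LLM_SCHEMA_3)  # raises on schema mismatch
    print(reply["answer"])
except (json.JSONDecodeError, ValidationError) as err:
    print(f"Malformed LLM output: {err}")
```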
arXiv:2505.15935v1 [cs.DB] 21 May 2025
MAPS: A Multilingual Benchmark for Global Agent Performance and Security
Omer Hofman¹, Oren Rachmil¹, Shamik Bose¹, Vikas Pahuja¹, Jonathan Brokman¹, Toshiya Shimizu², Trisha Starostina³, Kelly Marchisio³, Seraphina Goldfarb-Tarrant³, Roman Vainshtein¹
¹Fujitsu Research of Europe ²Fujitsu Limited ³Cohere
Abstract
Agentic AI systems, which build on Large Language Models (LLMs) and interact with tools and memory, have rapidly advanced in capability and scope. Yet, since LLMs have been shown to struggle in multilingual settings, typically resulting in lower performance and reduced safety, agentic systems risk inheriting these limitations. This raises concerns about the global accessibility of such systems, as users interacting in languages other than English may encounter unreliable or security-critical agent behavior. Despite growing interest in evaluating agentic AI, existing benchmarks focus exclusively on English, leaving multilingual settings unexplored. To address this gap, we propose MAPS, a multilingual benchmark suite designed to evaluate agentic AI systems across diverse languages and tasks. MAPS builds on four widely used agentic benchmarks—GAIA (real-world tasks), SWE-bench (code generation), MATH (mathematical reasoning), and the Agent Security Benchmark (security). We translate each dataset into ten diverse languages, resulting in 805 unique tasks and 8,855 total language-specific instances. Our benchmark suite enables a systematic analysis of how multilingual contexts affect agent performance and robustness. Empirically, we observe consistent degradation in both performance and security when transitioning from English to other languages, with severity varying by task and correlating with the amount of translated input. Building on these findings, we provide actionable recommendations to guide agentic AI systems development and assessment under multilingual settings. This work establishes a standardized evaluation framework, encouraging future research towards equitable, reliable, and globally accessible agentic AI. The MAPS benchmark suite is publicly available.¹
¹https://huggingface.co/datasets/Fujitsu-FRE/Multilingual-Agentic-AI-Benchmark
1 Introduction
LLM-based agentic AI systems combine multi-step reasoning with external tools and memory to solve open-ended tasks such as code generation, web navigation, planning, and transactional services like booking and ordering [Acharya et al., 2025]. By doing so, they extend to complex, real-world problems beyond standard LLM benchmarks. Since such real-world applications serve speakers of diverse languages, maintaining consistent reliability in every language becomes critical. However, since agentic behavior is grounded in LLMs, which often perform inconsistently across languages [Deng et al., 2023, Wang et al., 2023], agents may inherit these multilingual limitations as well, affecting their functionality and trustworthiness. This presents a barrier to equitable access, as non-English users may face degraded responses, incorrect tool actions, or unsafe behaviors—failures that can lead to actual harm in the real world, including erroneous transactions, data corruption, and other security vulnerabilities [Zhang et al., 2024].
To assess emerging agentic systems, various benchmarks have been proposed to evaluate agent performance across a range of tasks [Mialon et al., 2023, Jimenez et al., 2023, Chang et al., 2024, Xu et al., 2024]. However, these benchmarks remain English-only. | https://arxiv.org/abs/2505.15935v1 |
In contrast to multilingual LLM benchmarks [Dang et al., 2024, Shi et al., 2022, Goyal et al., 2022], no equivalent exists for agentic AI tasks—creating a blind spot in our understanding of cross-language performance, safety, and security.
Figure 1: MAPS benchmark suite evaluates LLM-based agents across 11 languages and 4 agentic benchmarks covering performance and security.
In this paper, we address this gap. We hypothesize that multilingual settings will reveal performance and security gaps in agentic systems that are not captured by the existing, English-only benchmarks. To investigate this, we introduce MAPS, a Multilingual Agentic AI Benchmark Suite for Performance and Security. MAPS is based on four established agentic benchmarks: GAIA (real-world tasks) [Mialon et al., 2023], SWE-bench (code generation) [Jimenez et al., 2023], MATH (mathematical reasoning) [Hendrycks et al., 2021], and the Agent Security Benchmark (security) [Zhang et al., 2024]. These benchmarks are extended to ten typologically diverse languages beyond English² by employing a hybrid machine- and LLM-based translation approach [Ki and Carpuat, 2024] with extended verification and enhancements. In total, MAPS includes 805 unique tasks, each available in 11 language versions—including the original English and 10 translated variants—for a total of 8,855 instances. An overview of the benchmark structure is shown in Figure 1.
To demonstrate the use of MAPS and test our hypothesis, we selected a leading open-source agent associated with each of the four original benchmarks and applied it to the corresponding multilingual extension. We observed notable declines in both task performance and security when moving from English to other languages, with the severity of these drops varying by task type and correlating with the proportion of non-English input, suggesting that multilingual performance interventions should be targeted based on input composition and task sensitivity. Beyond overall degradation, our findings reveal that multilingual inputs can amplify agentic vulnerabilities in safety-critical tasks, highlighting the need for multilingual risk assessment. These results empirically support our hypothesis and demonstrate the utility of MAPS as a tool for systematic, multilingual evaluation of agentic AI systems.
The primary contributions of this paper are threefold:
• To the best of our knowledge, we introduce the first multilingual benchmark suite for agentic AI, extending four widely used benchmarks into ten typologically diverse languages for systematic performance and security assessment.
• The efficacy and quality of the proposed benchmark are demonstrated through a large-scale evaluation of leading agents as well as human expert verification.
• We present the first quantifiable analysis and evidence that multilingual settings reveal critical performance, safety, and security gaps in agentic systems, along with actionable recommendations for improving their development.
2 Background and Related Work
2.1 Agentic AI Benchmarks
With the rapid advancement of LLM-based agents, a diverse suite of benchmarks has been developed to assess their autonomy, tool use, planning, and memory integration [Yao et al., 2024, Xu et al., 2024, Yehudai et al., 2025]. We organize these suites along three primary dimensions.
²MAPS offers evaluation in the following ten languages: German, Spanish, Portuguese (Brazil), Japanese, Russian, Italian, Arabic, Hebrew, Korean, and Hindi. | https://arxiv.org/abs/2505.15935v1 |
Evaluation objective: performance-oriented benchmarks measure task completion, multi-step reasoning, and correct tool invocation (e.g., AgentBoard [Chang et al., 2024]), whereas security-focused suites probe robustness to adversarial inputs, jailbreaks, and unsafe behaviors (e.g., AgentHarm [Andriushchenko et al., 2024]). Agentic task scope: full-agentic evaluations present only problem | https://arxiv.org/abs/2505.15935v1 |
statements and expected outcomes, requiring end-to-end planning and execution (e.g., GAIA [Mialon et al., 2023]), while semi-agentic frameworks supply scaffolding, such as code templates or mock APIs, to isolate the LLM’s reasoning and tool-selection core (e.g., AppWorld [Trivedi et al., 2024]. Design and evaluation characteristics : most benchmarks span a limited set of domains (three to five use cases), typically including real-world information retrieval and navigation (e.g., AssistantBench [Yoran et al., 2024]), code generation (e.g., SWE-Bench [Jimenez et al., 2023]), reasoning and planning (e.g., MATH [Hendrycks et al., 2021], Travel Planner [Xie et al., 2024]), and security scenarios (e.g., Agent Security Benchmark [Zhang et al., 2024]). They use fixed task counts and predefined difficulty tiers, and to enable reliable, objective measurement despite agents’ open-ended capabilities, they often restrict tasks to closed-form problems with definitive ground truth, allowing clear determination of success or failure [Jimenez et al., 2023, Mialon et al., 2023]. A detailed comparison of benchmark design choices, task types, and evaluation properties is provided in the supplementary materials. While multilingual LLM benchmarks such as XTREME [Hu et al., 2020], FLORES [Goyal et al., 2022], and SIB-200 [Adelani et al., 2023] have enabled broad cross-lingual evaluation, they do not assess interactive decision-making, tool use, or task execution, which are core elements of agentic systems. As a result, existing multilingual benchmarks fall short of capturing the complexities and vulnerabilities that arise when agents operate in non-English settings. This leaves non-English users exposed to agentic failures in their native languages and underscores the need for fully agentic benchmarks that include performance and security evaluation, high data fidelity, and comprehensive multilingual assessment - gaps that our benchmark is specifically designed to address. 2.2 Multilingual Benchmarks and Multilingual Limitations of General-Purpose LLMs Recent studies show that pre-trained LLMs often struggle with non-English input, especially in languages with limited training resources or those typologically distant from English. Multilingual benchmarks such as XTREME [Hu et al., 2020] and XGLUE[Liang et al., 2020] report consistent accuracy drops when moving from English to languages such as Swahili or Nepali. These gaps reflect an imbalance in pretraining corpora, where English accounts for over 90% of the data, as well as challenges in tokenizing morphologically rich languages and the scarcity of fine-tuning data in many languages [Jha, 2024]. Notably, even large models (e.g., GPT-4, Llama 405B) face a “cross-lingual knowledge barrier” on MMLU [Hendrycks et al., 2020] and on Safety tasks [Grattafiori et al., 2024], showing that scale alone does not resolve multilingual performance deficits [Chua et al., 2024]. Building on these performance gaps, LLMs also face robustness and security challenges in multi- lingual contexts. Since most alignment and red-teaming efforts have been English-centric, models are more prone to generate toxic or policy-violating outputs when processing non-English prompts [Deng et al., 2023, Wang et al., 2023, Aakanksha et al., 2024]. Furthermore, hallucination rates increase and confidence calibration degrades outside English, causing models to produce fluent, yet unreliable or potentially harmful content in undersupported languages [Xue et al., 2024]. 
Although security interventions, such as multilingual alignment, have been shown to be effective in | https://arxiv.org/abs/2505.15935v1 |
reducing harmful output across languages, they often incur a measurable cost in downstream performance or increased latency [Aakanksha et al., 2024]. Given that agentic AI systems are based directly on these LLMs, we hypothesize that they inherit the same language-dependent performance and security limitations. As these agents carry out real-world tasks such as executing code, querying external tools, or navigating web environments, any inherited shortcomings can lead to severe consequences. Yet, to our knowledge, no systematic evaluation has probed how these vulnerabilities manifest within agentic systems. To address this gap, we introduce MAPS, our multilingual agentic benchmark suite, in Section 3.
3 MAPS: Multilingual Agentic AI Benchmark Suite
To support multilingual evaluation of agentic systems, we construct a benchmark suite by extending established English-language datasets into multiple languages. This process requires careful dataset selection, translation procedures that preserve semantic and structural integrity, and mechanisms for ensuring evaluation consistency. The following subsections detail our methodology for translation, benchmark construction, and dataset composition.
Figure 2: Overview of our multi-stage translation pipeline for agentic benchmark construction. We start with machine translation for structural alignment, followed by LLM-based verification and enhancement. This approach is adapted from Ki and Carpuat [2024] but extended with task-specific prompting and fallback mechanisms tailored to the requirements of agentic AI evaluation.
3.1 Translation Pipeline
Reliable multilingual evaluation of agentic AI systems hinges on translating task instructions with both semantic and structural cross-language fidelity. Neural MT excels at preserving format and structure but struggles in low-resource or specialized domains [Koehn and Knowles, 2017, Aharoni et al., 2019]. Translation via instructed LLMs offers broader high-level capabilities at the cost of occasional hallucinations and semantic drift [Hendy et al., 2023, Yan et al., 2024]. To balance these trade-offs, hybrid pipelines were suggested by Ki and Carpuat [2024] and Mohammed [2025], combining format-preserving MT with LLM-based refinement. For MAPS, we extend Ki and Carpuat [2024] in two ways: first, since their pipeline was not designed with our benchmarks in mind, significant per-benchmark prompting had to be done; second, we added automated quality checks, fallbacks, and expert verification to ensure the cross-language fidelity needed for agentic benchmarks (Fig. 2).
Formally, we express our translation pipeline as a function $T: \mathcal{S} \times \mathcal{L} \to \mathcal{T}$, where $s \in \mathcal{S}$ is a task-instruction instance in the source language (English), $L_t \in \mathcal{L}$ is the target language, and $t \in \mathcal{T}$ is the resulting translated output. The pipeline begins with machine translation (MT) to establish a structural foundation: denote by $M(s, L_t)$ the MT function, implemented as a high-quality, off-the-shelf NMT system. Its output provides a structurally faithful baseline for subsequent steps. Next, we apply a verification step using an LLM to assess whether the translation adequately preserves the source meaning. This is modeled as a binary function $A(s, M(s, L_t), L_t) \to \{0, 1\}$, where the LLM compares the original and translated texts to detect major semantic errors or omissions. Based on verification outcomes, the pipeline follows one of two distinct paths. | https://arxiv.org/abs/2505.15935v1 |
If $A = 0$, indicating machine-translation failure, the pipeline employs direct LLM translation: denote by $\Phi_{\text{direct}}(s, L_t)$ the output of an LLM prompted to directly translate $s$ into language $L_t$ (without using the MT output). If $A = 1$, indicating acceptable machine translation, an LLM enhances the translation while preserving its basic structure: denote by $\Phi_{\text{enhancement}}(s, M(s, L_t), L_t)$ the output of an LLM guided to refine and improve the MT output while maintaining structural consistency. To ensure semantic integrity, we apply a second binary check: $I(s, \Phi_{\text{enhancement}}) \to \{0, 1\}$. This integrity check detects common LLM failure modes, such as hallucinations, omissions, misinterpretations (e.g., answering instead of translating), and semantic drift. If this verification fails, we revert to the original machine translation (which passed the initial verification test). The added conditional steps form a robust decision framework: if machine translation is rejected, we use a direct LLM translation; if it is accepted but the enhancement fails integrity verification, we fall back to the machine translation; otherwise, we use the enhanced translation. This structure ensures graceful degradation, favoring conservative outputs when refinement is unreliable. Formally, the final translation is given by:
$$
T(s, L_t) =
\begin{cases}
\Phi_{\text{direct}}(s, L_t), & \text{if } A(s, M(s, L_t), L_t) = 0 \\
M(s, L_t), & \text{if } A(s, M(s, L_t), L_t) = 1 \text{ and } I(s, \Phi_{\text{enhancement}}(s, M(s, L_t), L_t)) = 0 \\
\Phi_{\text{enhancement}}(s, M(s, L_t), L_t), & \text{otherwise.}
\end{cases}
\tag{1}
$$
To ensure the reliability of this pipeline across languages and task types, we conducted human verification on a representative subset of translations. The evaluation design and results are detailed in Subsection 3.3, with additional implementation details in the Supplementary Material. | https://arxiv.org/abs/2505.15935v1 |
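A minimal sketch of the decision logic in Eq. (1), with placeholder callables standing in for the MT system and the LLM-based verify/enhance/translate steps; all function names are ours, not part of the released pipeline.

```python
from typing import Callable

def translate_task(
    s: str,
    target_lang: str,
    mt: Callable[[str, str], str],            # M(s, L_t): off-the-shelf NMT
    verify: Callable[[str, str, str], bool],  # A: does the MT preserve meaning?
    enhance: Callable[[str, str, str], str],  # Phi_enhancement: LLM refinement
    direct: Callable[[str, str], str],        # Phi_direct: LLM-only translation
    integrity: Callable[[str, str], bool],    # I: integrity check on enhancement
) -> str:
    """Hybrid translation decision framework of Eq. (1); a sketch under the
    stated assumptions, not the released implementation."""
    mt_out = mt(s, target_lang)
    if not verify(s, mt_out, target_lang):    # A = 0: MT rejected
        return direct(s, target_lang)
    enhanced = enhance(s, mt_out, target_lang)
    if not integrity(s, enhanced):            # enhancement drifted: fall back
        return mt_out
    return enhanced
```

Note how the control flow degrades gracefully: the LLM is only trusted where its output has passed the corresponding check, and the verified MT output remains the conservative fallback.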
3.2 Dataset Selection and Composition
Dataset Selection. To support robust multilingual evaluation across agentic capabilities, we construct the MAPS benchmark suite from established agentic AI benchmarks. These were selected based on four criteria: (1) strong adoption and recognition within the research community, including prior use in agentic evaluation; (2) clearly defined, closed-form answers to enable controlled evaluation; (3) sufficient difficulty to challenge current open-source agents without saturating performance; and (4) practical solvability, ensuring that multilingual degradation can be meaningfully measured. Based on these criteria, we selected four datasets spanning real-world reasoning, software engineering, mathematical problem solving, and security assessment.
GAIA. GAIA [Mialon et al., 2023] is a benchmark designed to evaluate agents' performance on real-world assistant tasks. It includes curated questions that require multi-step reasoning and autonomous use of tools such as web browsers, code interpreters, or document analyzers. Each question has a single correct answer, and responses are evaluated by exact match to a reference output.
SWE-bench. SWE-bench [Jimenez et al., 2023] is a software engineering benchmark constructed from real GitHub issues and associated pull requests across popular Python repositories. Each task presents a bug report and a codebase snapshot, and requires the agent to evaluate whether a proposed patch correctly resolves the issue. We adopt the verified subset³, in which agents are tasked with validating a patch rather than generating one.
MATH. The MATH dataset [Hendrycks et al., 2021] includes high-school-level mathematical problems across seven topics, including algebra, geometry, and calculus. Tasks are structured to require symbolic manipulation and multi-step reasoning. Agent responses are evaluated by exact match against a reference solution. | https://arxiv.org/abs/2505.15935v1 |
Agent Security Benchmark (ASB). The ASB benchmark [Zhang et al., 2024] provides a structured evaluation of agent robustness against adversarial threats, including prompt injections, memory poisoning, and tool misuse. Agents interact with injected prompts or environments, and evaluation is based on whether safety policies are violated, measured by attack success rate and refusal rate.
Data Composition. The metadata below summarizes the multilingual extension, including language coverage, scale, and pre-processing.
Translated Languages. We selected the following ten typologically and geographically diverse languages: German, Spanish, Portuguese (Brazil), Japanese, Russian, Italian, Arabic, Hebrew, Korean, and Hindi. This selection enables the evaluation of agent performance across a wide range of scripts, linguistic structures, and regional user populations.
Dataset Handling. To preserve the integrity and utility of the original datasets, we applied only minimal and targeted modifications. Across all datasets, we appended translations without modifying or removing any original metadata (such as task type, difficulty level, or available tools). Domain-specific syntax—such as equations in MATH, code snippets in SWE-bench, and adversarial prompts in ASB—was preserved exactly, maintaining the original task structure and technical fidelity. For MATH and SWE-bench, which were not originally designed for agentic evaluation, we further applied selective filtering to retain only the most challenging tasks based on the task-difficulty field. This follows common practice in prior work of aligning non-agentic datasets with agentic evaluation settings [Wu et al., 2023], ensuring meaningful evaluation of agent behavior while avoiding trivial cases.
Data Volume. To balance performance and security evaluation, our benchmark comprises 805 tasks: 405 from performance-oriented datasets (GAIA, SWE-bench, MATH) and 400 from the Agent Security Benchmark. We selected 165 tasks from GAIA (the full validation set), 140 high-difficulty tasks from MATH (20 per topic across 7 topics), and 100 hard and medium tasks from SWE-bench. The remaining 400 tasks include all security-relevant prompts from ASB. Each task was translated into 10 target languages; combined with the original English version, this results in a total of 8,855 multilingual tasks across 11 languages. To validate the benchmark's utility and examine multilingual effects, we applied a leading agent to each dataset. Full details and performance results are reported in Section 4.
3.3 Translation Implementation and Verification
Translation Implementation Details. We implemented the hybrid translation pipeline described in Section 3.1 using a combination of commercial and open-source tools. For machine translation, we used the Google Translate NMT API⁴, selected for its support across all ten target languages. To preserve task fidelity, structural elements (e.g., equations, variables, code) that MT systems often mistranslate were temporarily masked and restored after translation. For LLM-based refinement and quality control, we used Cohere Command A and GPT-4o, both multilingual models executed with deterministic decoding (temperature set to zero) to ensure output consistency. System prompts were crafted individually for each task to accommodate domain-specific structures (e.g., code snippets, equations, web URLs), ensuring that the models preserved both intent and format.
³https://openai.com/index/introducing-swe-bench-verified/
⁴https://cloud.google.com/translate/docs/advanced/translating-text-v3#translating_input_strings | https://arxiv.org/abs/2505.15935v1 |
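To illustrate the masking step described above, here is a minimal sketch that shields LaTeX math and inline code from the MT system and restores it afterwards; the regex pattern and placeholder format are our own simplification, not the released code.

```python
import re

# Shield LaTeX math and inline code from MT, then restore; a simplified
# sketch -- the actual patterns used by the released pipeline may differ.
PROTECTED = re.compile(r"(\$[^$]+\$|`[^`]+`)")

def mask(text: str):
    spans = PROTECTED.findall(text)
    masked = text
    for i, span in enumerate(spans):
        masked = masked.replace(span, f"<SEG{i}>", 1)  # stable placeholder
    return masked, spans

def unmask(translated: str, spans) -> str:
    for i, span in enumerate(spans):
        translated = translated.replace(f"<SEG{i}>", span)
    return translated

masked, spans = mask("Solve $x^2 - 4 = 0$ and print the roots with `print()`.")
# ... send `masked` through the MT system here ...
restored = unmask(masked, spans)  # no-op round trip for illustration
```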
The code is available [5] and representative examples of these prompts are provided in the Supplementary Material. Human Verification Protocol. To assess translation quality, we manually verified a representative sample of 2,000 translations, covering 25% of the benchmark, proportionally sampled across datasets and languages (a minimal sampling sketch follows). | https://arxiv.org/abs/2505.15935v1 |
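Such a proportional stratified sample can be drawn as in the following sketch; the DataFrame layout and the column names (`dataset`, `language`) are our assumptions, not the authors' code.

```python
# Sketch of proportional (stratified) sampling for the human-verification
# set, assuming one translated task per DataFrame row.
import pandas as pd

def sample_for_verification(df: pd.DataFrame, total: int = 2000, seed: int = 0):
    """Sample roughly `total` rows, proportionally by dataset and language.

    Per-group rounding means the result may differ from `total` by a few rows.
    """
    frac = total / len(df)
    return (df.groupby(["dataset", "language"], group_keys=False)
              .apply(lambda g: g.sample(max(1, round(len(g) * frac)),
                                        random_state=seed)))
```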
Each item was rated by a bilingual annotator fluent in English and the target language on a 1–5 Likert scale across three criteria: adequacy (semantic fidelity), fluency (grammatical and stylistic naturalness), and formatting accuracy (preservation of LaTeX, code, etc.). A fourth metric, answerability, measured whether the translation preserved intent well enough for the annotator to confidently answer the question as if it were in English. Annotator instructions are provided in the Supplementary Material. To validate the reliability of the verification process, we embedded "honeypot" samples with intentional errors; annotators reliably flagged these cases, confirming attentiveness and quality control. Evaluation results confirm high translation quality across the benchmark, with an answerability rate of 94.4%, corresponding to a total error rate of 5.6%. Translations also received average scores of 4.47 for adequacy, 4.60 for fluency, and 4.75 for formatting accuracy (on a 1–5 Likert scale), supporting the benchmark's preservation of semantic fidelity, linguistic naturalness, and structural integrity. Full per-language results and analysis are included in the Supplementary Material. To support high-precision use cases, we also release a "verified" [6] subset of the benchmark, consisting of 190 translations per language that passed human review across all four datasets. 4 Experiments We now demonstrate MAPS's utility through multilingual evaluation of leading open-source agents. 4.1 Experimental Settings Agent Assignment per Dataset. To demonstrate the utility of our benchmark, we evaluate open-source agents on each dataset and assess their performance and robustness under multilingual settings. While a unified agent would offer broader coverage and a more controlled evaluation, current systems lack the generalization needed to perform well across diverse tasks [Gioacchini et al., 2024, Chang et al., 2024]. To isolate the effect of language variation, we retain each agent's original configuration, including tools, prompts, and system settings, without any modification. [4] https://cloud.google.com/translate/docs/advanced/translating-text-v3#translating_input_strings [5] https://github.com/omerhof-fujitsu/hybrid_translation_demo [6] https://huggingface.co/datasets/Fujitsu-FRE/MAPS_Verified Figure 3: Performance of open-source agents across languages on four agentic benchmarks: GAIA, SWE-bench, MATH, and ASB. Each bar represents the agent's accuracy (or attack success rate in ASB) for a given language, with English shown as the baseline. Error bars indicate the standard deviation across three runs. Performance differences reflect each agent's degradation or resilience in multilingual settings. For GAIA, we used the OpenDeepResearch agent [von Platen et al., 2024], which integrates retrieval, web browsing, and tool use to support real-world reasoning. For MATH, we adopted MathChat [Wu et al., 2024], a zero-shot agent combining multi-turn reasoning with Python execution and the Wolfram Alpha tool. For SWE-bench, we applied SWE-agent [Yang et al., 2024b], which enables autonomous software reasoning through repository navigation, file editing, and test execution. For ASB, we built on the authors' existing infrastructure and evaluated the original ten-agent system against both direct and indirect prompt injection attacks across a variety of tasks and languages. Each agent was executed using the LLM backbone reported in its original implementation, all of which are considered multilingual models.
Specifically, OpenDeepResearch used o1, MathChat used GPT-4, SWE-agent used GPT-4.1, and ASB used Qwen2. Full configuration details, including model versions and API providers, are provided in the Supplementary Material. Experiment Protocol. For each benchmark, the agent was evaluated three times in each of the 11 target | https://arxiv.org/abs/2505.15935v1 |
languages, yielding a total of 33 runs per dataset. We report the mean and standard deviation over these runs. We used the original English task definitions and their translations, without modifying or translating internal agent logic and processing flows such as system prompts or tools. Metrics. We adopt the original evaluation metrics from each benchmark to ensure consistency with prior agent evaluations. For MathChat (MATH), we report answer accuracy. For OpenDeepResearch (GAIA), we report the percentage of answers matching either the English or translated reference. For SWE-agent (SWE-bench), we report the percentage of resolved instances, defined as the percentage of submitted patches that successfully resolve the coding issue. For the ASB agent, we report the attack success rate (ASR), a standard metric in the security domain that represents the percentage of attacks that bypass the safety mechanisms. Additionally, we introduce a new metric, the Multilingual Effect, which quantifies the performance or security gap between English and the average of all other languages. Given an evaluation metric M, the Multilingual Effect is defined as follows:

\[ \text{Multilingual Effect} = \frac{1}{n} \sum_{i=1}^{n} M_{\text{lang}_i} - M_{\text{en}} \quad (2) \]

where \(M_{\text{en}}\) denotes the performance in English, \(n\) is the number of non-English languages in the dataset (in our case \(n = 10\)), and \(M_{\text{lang}_i}\) represents the performance in the i-th non-English language. Figure 4: a) Multilingual effect as a function of the proportion of translated-language tokens in input prompts. Each point represents a benchmark-agent pair, with the multilingual effect computed as the average relative degradation in performance or security across non-English languages. The trend suggests a correlation between input translation extent and multilingual vulnerability. b) Relative performance differences from English for each language, broken down by dataset. Negative values indicate a drop in performance compared to English, while positive values (notably in ASB) represent increased vulnerability. The trend highlights how multilingual effects vary by language and task type. 4.2 Results Figure 3 presents the performance of open-source agents across all four benchmarks in English and the ten target languages. In GAIA and ASB, we observe clear performance and security drops: non-English languages consistently underperformed compared to English, with reductions of up to 16% in GAIA and a rise in vulnerability of up to 17% in ASB. Notably, SWE-bench and MATH exhibit only minor variation across languages, with most scores clustering around the English baseline. These results reveal important differences in how multilingual degradation manifests across task types. Although all tasks require complex reasoning, some are more constrained than others. For instance, SWE-bench is limited to well-structured Python patches designed to fix specific test cases. As a result, the reliance on natural language explanations is reduced, with greater emphasis placed on strict Pythonic syntax and code correctness. On the other hand, GAIA focuses on solving real-world tasks with much more flexibility, so the importance of the natural-language problem statement is significantly higher. Additionally, in benchmarks like MATH and SWE-bench, the opportunity for translation is inherently limited, as a large portion of the input consists of mathematical notation or source code, naturally reducing the multilingual effect. To understand this variation, we examine a potential driver: the proportion of localized, target-language-oriented input in each benchmark. | https://arxiv.org/abs/2505.15935v1 |
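For reference, the Multilingual Effect in Equation (2) above is straightforward to compute once per-language scores are available. A minimal sketch, with made-up illustrative numbers:

```python
# Minimal sketch of Equation (2). `scores` maps a language code to the
# metric M (accuracy, or ASR for ASB); names and numbers are ours.
def multilingual_effect(scores, english_key="en"):
    """Average non-English performance minus English performance."""
    non_en = [v for k, v in scores.items() if k != english_key]
    return sum(non_en) / len(non_en) - scores[english_key]

# Example with fabricated scores: a negative value means degradation
# relative to English (as seen for GAIA in Figure 3).
gaia_example = {"en": 0.42, "de": 0.35, "ja": 0.30, "he": 0.28}
print(multilingual_effect(gaia_example))  # -> -0.11
```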
Interestingly, we observe that Japanese yields the lowest ASR (attack success rate) in the ASB benchmark, indicating the highest robustness to adversarial inputs. This result can be partially attributed to the fact that the ASB agent was implemented using the Qwen2 model [Yang et al., 2024a], which is known for its strong alignment for Japanese language tasks. Qwen2 has consistently demonstrated strong performance in Japanese-specific LLM benchmarks and leaderboards [7], suggesting that alignment and fine-tuning in a particular language can significantly enhance resilience against multilingual adversarial prompts. This reinforces the importance of language-specific alignment training in the development of robust and secure agentic systems. Figure 4 examines the relationship between prompt composition and multilingual performance. Part (a) shows a correlation between the percentage of non-English tokens in the input and the average performance gap (relative to English) across all four datasets. Benchmarks with higher proportions of localized, target-language-oriented input, such as GAIA and ASB, exhibit greater degradation, whereas SWE-bench, with predominantly English input (e.g., code), shows higher preservation. [7] https://rinnakk.github.io/research/benchmarks/lm/index.html From part (b), we can see that there is no clear correlation between multilingual security robustness (ASB) and multilingual performance degradation. This disconnect is especially clear in real-world, language-heavy tasks like GAIA, where performance drops sharply, while structured tasks like SWE-bench and MATH remain largely unaffected. This highlights that multilingual security alignment does not directly track with multilingual task accuracy, notably in language-rich agentic tasks. 5 Discussion This section presents practical recommendations for multilingual agent deployment and directions for advancing the benchmark in future work. 5.1 Guidelines for Multilingual Evaluation and Risk Assessment Language-Aware Deployment Guidelines. Before deploying an AI agent in a multilingual setting, analyze the linguistic composition of its expected input, particularly the balance between structured elements (e.g., code, formal queries) and localized natural language. Inputs with a high proportion of non-English content, especially those involving less formalized or more natural language, tend to increase the risk of performance and safety degradation. We therefore recommend that in any such case, developers conduct a Multilingual Benchmark Assessment using a diverse, language-sensitive evaluation suite, such as ours, for AI agents operating across languages. This helps reveal hidden vulnerabilities and promotes reliable real-world behavior in multilingual conditions. Prioritize Multilingual Adaptation by Task Type. Our findings suggest that the need for multilingual adaptation in agentic systems should be guided by task type. For structured tasks with minimal linguistic variability, such as coding, cross-lingual transfer can often be achieved with minimal adjustment. However, for complex, real-world tasks or safety-critical decisions (e.g., GAIA, ASB), multilingual robustness remains limited, and thus dedicated multilingual alignment and adaptation are essential.
MAPS offers a practical framework to identify where multilingual adaptation is needed, helping prioritize resource allocation for post-training based on task-specific language sensitivity. Multilingual Inputs Amplify Agentic Security Vulnerabilities. Our evaluation on ASB revealed that multilingual adversarial inputs can bypass agent safety mechanisms with minimal sophistication. Direct translations of English jailbreak prompts—without any adaptation or obfuscation—were sufficient to induce | https://arxiv.org/abs/2505.15935v1 |
policy-violating behavior in multiple languages. This highlights a critical risk: even simple adversarial prompts become significantly more effective when the input is localized, and are often sufficient to exploit security vulnerabilities in AI agents. Developers of safety-critical agentic systems should treat multilingual robustness as a core security concern and include translated prompts in safety evaluations using benchmarks like ours. 5.2 Benchmark Limitations While MAPS represents the first multilingual suite for evaluating agentic AI systems, there are natural opportunities for future expansion. The current release includes four datasets, one agent per dataset, and ten target languages, offering a strong foundation for assessing multilingual robustness. Extending coverage to additional domains such as healthcare or legal reasoning, as well as incorporating multiple agents and extremely low-resource languages (e.g., Amharic or Uyghur), would further enhance the benchmark's scope and relevance. Nonetheless, the current suite already surfaces clear trends in performance and security degradation across languages, offering valuable insights for guiding multilingual deployment. We view this work as a meaningful starting point and invite the community to build on our open-source release to advance more inclusive and resilient agentic AI systems. 6 Conclusions We introduce the first multilingual benchmark suite for evaluating agentic AI systems, addressing a critical gap in assessing language-specific performance and safety limitations. By adapting and localizing four widely used agentic benchmarks (GAIA, SWE-bench, MATH, and ASB) into ten diverse languages, our suite enables the analysis of agent behavior under multilingual conditions. Constructed through a hybrid translation pipeline and human verification, the benchmark ensures high linguistic fidelity and structural consistency. Experimental results reveal consistent degradation in both performance and robustness when agents operate in non-English settings, particularly in tasks involving natural language reasoning and safety-critical behavior. These findings underscore the importance of language-aware evaluation and targeted multilingual adaptation, especially for real-world agentic deployments. We view this benchmark as a practical and extensible foundation for building more inclusive, resilient, and globally reliable agentic AI systems, and we invite the community to build upon it. References A. A. Aakanksha, B. Ermis, S. Goldfarb-Tarrant, J. Kreutzer, M. Fadaee, S. Hooker, et al. The multilingual alignment prism: Aligning global and local preferences to reduce harm. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 12027–12049, 2024. D. B. Acharya, K. Kuppan, and B. Divya. Agentic AI: Autonomous intelligence for complex goals – a comprehensive survey. IEEE Access, 2025. D. I. Adelani, H. Liu, X. Shen, N. Vassilyev, J. O. Alabi, Y. Mao, H. Gao, and A. E.-S. Lee. SIB-200: A simple, inclusive, and big evaluation dataset for topic classification in 200+ languages and dialects. arXiv preprint arXiv:2309.07445, 2023. R. Aharoni, M. Johnson, and O. Firat. Massively multilingual neural machine translation. arXiv preprint arXiv:1903.00089, 2019. M. Andriushchenko, A. Souly, M. Dziemian, D. Duenas, M. Lin, J. Wang, D. Hendrycks, A. Zou, Z. Kolter, M. Fredrikson, et al. AgentHarm: A benchmark for measuring harmfulness of LLM agents, 2024. URL https://arxiv.org/abs/2410.09024. M.
Chang, J. Zhang, Z. Zhu, C. Yang, Y. Yang, Y. Jin, Z. Lan, L. Kong, and | https://arxiv.org/abs/2505.15935v1 |
J. He. AgentBoard: An analytical evaluation board of multi-turn LLM agents. Advances in Neural Information Processing Systems, 37:74325–74362, 2024. L. Chua, B. Ghazi, Y. Huang, P. Kamath, R. Kumar, P. Manurangsi, A. Sinha, C. Xie, and C. Zhang. Crosslingual capabilities and knowledge barriers in multilingual large language models. arXiv preprint arXiv:2406.16135, 2024. J. Dang, S. Singh, D. D'souza, A. Ahmadian, A. Salamanca, M. Smith, A. Peppin, S. Hong, M. Govindassamy, T. Zhao, S. Kublik, M. Amer, V. Aryabumi, J. A. Campos, Y.-C. Tan, T. Kocmi, F. Strub, N. Grinsztajn, Y. Flet-Berliac, A. Locatelli, H. Lin, D. Talupuru, B. Venkitesh, D. Cairuz, B. Yang, T. Chung, W.-Y. Ko, S. S. Shi, A. Shukayev, S. Bae, A. Piktus, R. Castagné, F. Cruz-Salinas, E. Kim, L. Crawhall-Stein, A. Morisot, S. Roy, P. Blunsom, I. Zhang, A. Gomez, N. Frosst, M. Fadaee, B. Ermis, A. Üstün, and S. Hooker. Aya Expanse: Combining research breakthroughs for a new multilingual frontier, 2024. URL https://arxiv.org/abs/2412.04261. Y. Deng, W. Zhang, S. J. Pan, and L. Bing. Multilingual jailbreak challenges in large language models. arXiv preprint arXiv:2310.06474, 2023. L. Gioacchini, G. Siracusano, D. Sanvito, K. Gashteovski, D. Friede, R. Bifulco, and C. Lawrence. AgentQuest: A modular benchmark framework to measure progress and improve LLM agents. arXiv preprint arXiv:2404.06411, 2024. N. Goyal, C. Gao, V. Chaudhary, P.-J. Chen, G. Wenzek, D. Ju, S. Krishnan, M. Ranzato, F. Guzmán, and A. Fan. The FLORES-101 evaluation benchmark for low-resource and multilingual machine translation. Transactions of the Association for Computational Linguistics, 10:522–538, 2022. A. Grattafiori, A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Vaughan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874, 2021. A. Hendy, M. Abdelrehim, A. Sharaf, V. Raunak, M. Gabr, H. Matsushita, Y. J. Kim, M. Afify, and H. H. Awadalla. How good are GPT models at machine translation? A comprehensive evaluation. arXiv preprint arXiv:2302.09210, 2023. J. Hu, S. Ruder, A. Siddhant, G. Neubig, O. Firat, and M. Johnson. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning, pages 4411–4421. PMLR, 2020. B. Jha. The Babel effect: Analyzing multilingual performance discrepancies in large language models. Eng Appl Sci J, 1(1):1–4, 2024. C. E. Jimenez, J. Yang, A. Wettig, S. Yao, K. Pei, O. Press, and K. Narasimhan. SWE-bench: Can language models resolve real-world GitHub issues? arXiv preprint arXiv:2310.06770, 2023. D. Ki and M. Carpuat. Guiding large language models to post-edit machine translation with error annotations. arXiv preprint arXiv:2404.07851, 2024. P. Koehn and R. Knowles. Six challenges for neural machine translation. In Proceedings of the First | https://arxiv.org/abs/2505.15935v1 |
Workshop on Neural Machine Translation, 2017. Y. Liang, N. Duan, Y. Gong, N. Wu, F. Guo, W. Qi, M. Gong, L. Shou, D. Jiang, G. Cao, et al. XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation. arXiv preprint arXiv:2004.01401, 2020. G. Mialon, C. Fourrier, T. Wolf, Y. LeCun, and T. Scialom. GAIA: A benchmark for general AI assistants. In The Twelfth International Conference on Learning Representations, 2023. T. A. Mohammed. From Google Translate to ChatGPT: The use of large language models in translating, editing, and revising. In Role of AI in Translation and Interpretation, pages 1–32. IGI Global Scientific Publishing, 2025. F. Shi, M. Suzgun, M. Freitag, X. Wang, S. Srivats, S. Vosoughi, H. W. Chung, Y. Tay, S. Ruder, D. Zhou, D. Das, and J. Wei. Language models are multilingual chain-of-thought reasoners, 2022. H. Trivedi, T. Khot, M. Hartmann, R. Manku, V. Dong, E. Li, S. Gupta, A. Sabharwal, and N. Balasubramanian. AppWorld: A controllable world of apps and people for benchmarking interactive coding agents. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16022–16076, 2024. P. von Platen, Y. Wang, T. Le Scao, and the Hugging Face team. Open Deep Research: A blueprint for transparent foundation model science. https://huggingface.co/blog/open-deep-research, 2024. Hugging Face Blog. W. Wang, Z. Tu, C. Chen, Y. Yuan, J.-t. Huang, W. Jiao, and M. R. Lyu. All languages matter: On the multilingual safety of large language models. arXiv preprint arXiv:2310.00905, 2023. Y. Wu, F. Jia, S. Zhang, H. Li, E. Zhu, Y. Wang, Y. T. Lee, R. Peng, Q. Wu, and C. Wang. MathChat: Converse to tackle challenging math problems with LLM agents. arXiv preprint arXiv:2306.01337, 2023. Y. Wu, F. Jia, S. Zhang, H. Li, E. Zhu, Y. Wang, Y. T. Lee, R. Peng, Q. Wu, and C. Wang. MathChat: Converse to tackle challenging math problems with LLM agents. In ICLR 2024 Workshop on Large Language Model (LLM) Agents, 2024. J. Xie, K. Zhang, J. Chen, T. Zhu, R. Lou, Y. Tian, Y. Xiao, and Y. Su. TravelPlanner: A benchmark for real-world planning with language agents. In International Conference on Machine Learning, pages 54590–54613. PMLR, 2024. F. F. Xu, Y. Song, B. Li, Y. Tang, K. Jain, M. Bao, Z. Z. Wang, X. Zhou, Z. Guo, M. Cao, et al. TheAgentCompany: Benchmarking LLM agents on consequential real-world tasks. arXiv preprint arXiv:2412.14161, 2024. B. Xue, H. Wang, R. Wang, S. Wang, Z. Wang, Y. Du, B. Liang, and K.-F. Wong. MlingConf: A comprehensive study of multilingual confidence estimation on large language models. arXiv preprint arXiv:2410.12478, 2024. J. Yan, P. Yan, Y. Chen, J. Li, X. Zhu, and Y. Zhang. GPT-4 vs. human translators: A comprehensive evaluation of translation quality across languages, domains, and expertise levels. arXiv preprint arXiv:2407.03658, 2024. A. Yang, B. Yang, B. Hui, | https://arxiv.org/abs/2505.15935v1 |
B. Zheng, B. Yu, C. Zhou, C. Li, C. Li, D. Liu, F. Huang, G. Dong, H. Wei, H. Lin, J. Tang, J. Wang, J. Yang, J. Tu, J. Zhang, J. Ma, J. Xu, J. Zhou, J. Bai, J. He, J. Lin, K. Dang, K. Lu, K. Chen, K. Yang, M. Li, M. Xue, N. Ni, P. Zhang, P. Wang, R. Peng, R. Men, R. Gao, R. Lin, S. Wang, S. Bai, S. Tan, T. Zhu, T. Li, T. Liu, W. Ge, X. Deng, X. Zhou, X. Ren, X. Zhang, X. Wei, X. Ren, Y. Fan, Y. Yao, Y. Zhang, Y. Wan, Y. Chu, Y. Liu, Z. Cui, Z. Zhang, and Z. Fan. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024a. J. Yang, C. Jimenez, A. Wettig, K. Lieret, S. Yao, K. Narasimhan, and O. Press. SWE-agent: Agent-computer interfaces enable automated software engineering. Advances in Neural Information Processing Systems, 37:50528–50652, 2024b. S. Yao, N. Shinn, P. Razavi, and K. Narasimhan. τ-bench: A benchmark for tool-agent-user interaction in real-world domains, 2024. URL https://arxiv.org/abs/2406.12045. A. Yehudai, L. Eden, A. Li, G. Uziel, Y. Zhao, R. Bar-Haim, A. Cohan, and M. Shmueli-Scheuer. Survey on evaluation of LLM-based agents. arXiv preprint arXiv:2503.16416, 2025. O. Yoran, S. J. Amouyal, C. Malaviya, B. Bogin, O. Press, and J. Berant. AssistantBench: Can web agents solve realistic and time-consuming tasks? arXiv preprint arXiv:2407.15711, 2024. H. Zhang, J. Huang, K. Mei, Y. Yao, Z. Wang, C. Zhan, H. Wang, and Y. Zhang. Agent Security Bench (ASB): Formalizing and benchmarking attacks and defenses in LLM-based agents. arXiv preprint arXiv:2410.02644, 2024. | https://arxiv.org/abs/2505.15935v1 |
arXiv:2505.15948v1 [cs.CL] 21 May 2025. Presented at the Workshop on Open Citations & Open Scholarly Metadata 2025. Citation Parsing and Analysis with Language Models. Parth Sarin (Stanford University, Palo Alto, CA 94305, USA; psarin@stanford.edu) and Juan Pablo Alperin (Simon Fraser University, Vancouver, Canada; juan@alperin.ca). Abstract: A key type of resource needed to address global inequalities in knowledge production and dissemination is a tool that can support journals in understanding how knowledge circulates. The absence of such a tool has resulted in comparatively less information about networks of knowledge sharing in the Global South. In turn, this gap authorizes the exclusion of researchers and scholars from the South in indexing services, reinforcing colonial arrangements that de-center and minoritize those scholars. In order to support citation network tracking on a global scale, we investigate the capacity of open-weight language models to mark up manuscript citations in an indexable format. We assembled a dataset of matched plaintext and annotated citations from preprints and published research papers. Then, we evaluated a number of open-weight language models on the annotation task. We find that, even out of the box, today's language models achieve high levels of accuracy on identifying the constituent components of each citation, outperforming state-of-the-art methods. Moreover, the smallest model we evaluated, Qwen3-0.6B, can parse all fields with high accuracy in 2^5 passes, suggesting that post-training is likely to be effective in producing small, robust citation parsing models. Such a tool could greatly improve the fidelity of citation networks and thus meaningfully improve research indexing and discovery, as well as further metascientific research. 1 Introduction Science is not an equal playing field. There is a large, though shrinking, inequality between knowledge production in the Global North and Global South. To properly understand this gap, national research capacities, and the broader power dynamics in global research contributions, it is necessary to have high-quality and complete indexes of scholarly production. Accurate and complete metascientific information is also central to decision making at every level of the research system, from tenure and promotion decisions about individual faculty, to institutional resource allocation, and to national science policies. Such information is not indexed today: it has been observed for decades that research in the Global South is significantly underrepresented in supposedly global indexes, databases, and search engines for scholarly work (Cetto et al., 1998; Khanna et al., 2022; Mongeon & Paul-Hus, 2016). So, for over 50 years, decisions about funding, tenure, collaboration, and governance have been made without the same type of reliable quantitative data that exists to tabulate research in the Global North. Fortunately, a number of projects are seeking to address global gaps in indexing and discoverability. New bibliographic databases like OpenAlex are taking a more inclusive indexing approach that has allowed them to significantly outperform existing databases in terms of coverage (Alperin et al., 2024; Culbert et al., 2024; Jiao et al., 2023). However, despite the significantly greater coverage of works, including those from the Global South, there continues to be an enormous gap in the indexing of references (Alperin et al., 2024; Culbert et al., 2024).
While more complete bibliographic records are useful for understanding knowledge | https://arxiv.org/abs/2505.15948v1 |
production, references and citations allow us to better understand the connections and circulation of knowledge, and are thus an essential aspect of research information. This is the project to which we seek to contribute. Namely, there has been recent interest in developing new citation parsing techniques that improve on state-of-the-art methods like GROBID and Crossref search to be performant in more languages and trainable end-to-end (Choi et al., 2023; Joshi et al., 2023). Though they were created through very extractive and neocolonial processes (Gray & Suri, 2019; Hao, 2024), modern language models can indeed operate in multiple languages and modalities, and they are end-to-end trainable. Some smaller language models can also run in low-compute environments, and even on mobile devices or in the browser. In this paper, we evaluate open-source, decoder-based language models on the task of citation parsing. Specifically, we aim to address the following research questions: 1. How accurate are open-weight language models at identifying different components of a citation? How do they compare to state-of-the-art methods for reference parsing? 2. Are small language models, which can run in the browser or on a device, promising for this task? 2 Methods 2.1 Dataset We began by assembling a dataset of citations in plain, formatted text, along with the same citations marked up in JATS format. We drew from two existing corpora. First, Garnett (2016) assembled the "XML Markup Evaluation Corpus" for the Public Knowledge Project (PKP), which includes 829 submitted article manuscripts marked up in JATS format. Second, Open Research Europe (ORE) is a publishing platform with an annotated corpus of 848 articles, marked up in JATS format (European Commission, 2025). We extracted the marked-up version of each citation from the reference list in the JATS XML markup and matched it to a (markdown-formatted) plaintext citation extracted from the article. Specifically, we converted each article to markdown using the markitdown tool (Microsoft Corporation, 2025). Then, from that markdown, we extracted plaintext citations using Llama-3.1-8B-Instruct prompted with the system prompt in Appendix A.1. We programmatically verified that each of the citations appeared in the article to prevent hallucinated citations. Figure 1: Similarity distributions for the (a) Garnett/PKP and (b) ORE corpora. Then we matched the plaintext citations (the output of the language model) with the marked-up citations (provided in the XML files in each of the corpora). Let p be the plaintext citation, c be a text-only version of the marked-up citation, and d(p, c) be the edit distance between the two. Take |x| to represent the length (number of characters) of the string x. Define the similarity between p and c to be

\[ s(p, c) = 1 - \frac{d(p, c)}{\max\{|p|, |c|\}}. \]

We matched citations in order of decreasing similarity and excluded all matches with a similarity score less than 0.75; a minimal sketch of this procedure follows. Figure 1 shows the distribution of similarity scores on the two corpora. Applying this procedure, we produced 13,149 matches from the Garnett/PKP corpus and 24,934 from the ORE corpus. We sampled 1,000 citations from each for a final dataset of 2,000 matched citations in plain text and marked-up form. | https://arxiv.org/abs/2505.15948v1 |
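A minimal sketch of this similarity-and-matching procedure, using Levenshtein edit distance via the `rapidfuzz` package (our choice of library) and a greedy pairing in decreasing-similarity order:

```python
# Sketch of the similarity score s(p, c) and greedy matching described
# above. The O(N*M) pairing is fine for a sketch; the authors' exact
# implementation may differ.
from rapidfuzz.distance import Levenshtein

def similarity(p: str, c: str) -> float:
    """s(p, c) = 1 - d(p, c) / max(|p|, |c|)."""
    if not p and not c:
        return 1.0
    return 1.0 - Levenshtein.distance(p, c) / max(len(p), len(c))

def match_citations(plaintext, marked_up_texts, threshold=0.75):
    """Greedily pair citations in order of decreasing similarity."""
    pairs = sorted(
        ((similarity(p, c), i, j)
         for i, p in enumerate(plaintext)
         for j, c in enumerate(marked_up_texts)),
        reverse=True)
    used_p, used_c, matches = set(), set(), []
    for s, i, j in pairs:
        if s < threshold:
            break  # pairs are sorted, so everything after is below threshold
        if i not in used_p and j not in used_c:
            matches.append((i, j, s))
            used_p.add(i)
            used_c.add(j)
    return matches
```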
We also qualitatively analyzed the matching procedure, specifically to investigate why the average similarity score is lower for the ORE corpus than the Garnett/PKP corpus. Our suspicion is that this happened because the ORE corpus is for published articles, and references in that corpus can contain additional links which don't appear in the JATS markup. For example, here is one plaintext reference with a low similarity score: Viswanathan M, Ammerman A, Eng E, et al.: Community-based participatory research: assessing the evidence. Evid Rep Technol Assess (Summ). AHRQ Publication N° 4-E022-1. Agency for Healthcare Research and Quality, Rockville, MD, 2004; (99): 1–8. PubMed Abstract | Free Full Text. In the published article, the highlighted section was linked to the PubMed abstract and full text of the article. The matched JATS annotation for this citation is correct but does not include those links, leading to a low score:

<mixed-citation publication-type="journal">
  <person-group person-group-type="author">
    <name name-style="western">
      <surname>Viswanathan</surname>
      <given-names>M</given-names>
    </name>
    <name name-style="western">
      <surname>Ammerman</surname>
      <given-names>A</given-names>
    </name>
    <name name-style="western">
      <surname>Eng</surname>
      <given-names>E</given-names>
    </name>
    <etal />
  </person-group>:
  <article-title>Community-based participatory research: assessing the evidence.</article-title>
  <source><italic toggle="yes">Evid Rep Technol Assess (Summ).</italic></source>
  AHRQ Publication N° 4-E022-1. Agency for Healthcare Research and Quality, Rockville, MD,
  <year>2004</year>; (<issue>99</issue>): <fpage>1</fpage>–<lpage>8</lpage>.
</mixed-citation>

2.2 Language model evaluation Table 1 shows the language models we evaluated in this project. We chose small models that could potentially be deployed in low-compute contexts, given that a reference annotation service will likely need to run in the browser or on the server of a journal, which may not have much compute capacity.

Table 1: Model scale, maximum context length, and release date.
Model (HuggingFace repo) | Params (B) | Context | First release
deepseek-ai/DeepSeek-R1-Distill-Qwen-14B | 14 | 32,768 | Feb 2025
deepseek-ai/DeepSeek-R1-Distill-Qwen-7B | 7 | 32,768 | Feb 2025
Qwen/Qwen2.5-7B-Instruct | 7.6 | 131,072 | Apr 2025
Qwen/Qwen2.5-3B-Instruct | 3.1 | 32,768 | Apr 2025
Qwen/Qwen3-4B | 4.0 | 32,768 | May 2025
Qwen/Qwen3-1.7B | 1.7 | 32,768 | May 2025
Qwen/Qwen3-0.6B | 0.6 | 32,768 | Apr 2025
microsoft/phi-4 | 14 | 16,000 | Dec 2024
microsoft/Phi4-mini-instruct | 3.8 | 128,000 | Feb 2025
meta-llama/Llama-3.1-8B-Instruct | 8 | 128,000 | Jul 2024
meta-llama/Llama-3.2-3B-Instruct | 3 | 128,000 | Sep 2024

Each model was prompted with two examples of marked-up citations (from the larger dataset, not the sampled 2,000) and instructed to produce the XML citation for a particular citation. We prompted each model twice, with and without a chain of thought. For the reasoning models, we explicitly foreclosed their reasoning by appending <think></think> to the end of the prompt. When we sampled a chain of thought, we sampled at temperature 0.6, with top-p set to 0.95 and top-k set to 20. For the non-CoT prompts, we sampled at temperature 0.7 with a top-p value of 0.8 and a top-k of 20; a sketch of both configurations follows. In the results, we report the pass@1 accuracies for each model.
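The two decoding configurations can be expressed as in the sketch below. This uses the Hugging Face `transformers` generate API rather than the authors' evaluation harness, and the prompt construction (two marked-up examples plus the target citation) is elided.

```python
# Sketch of the two sampling configurations described above, applied to
# one of the evaluated models; the helper names are ours.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")

COT = dict(do_sample=True, temperature=0.6, top_p=0.95, top_k=20)
NO_COT = dict(do_sample=True, temperature=0.7, top_p=0.8, top_k=20)

def annotate(prompt: str, cot: bool = True, max_new_tokens: int = 512) -> str:
    if not cot:
        # Foreclose the reasoning of a reasoning model, as the paper does.
        prompt += "<think></think>"
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens,
                         **(COT if cot else NO_COT))
    # Return only the newly generated tokens.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:],
                      skip_special_tokens=True)
```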
We evaluated the language models' accuracy on a number of citation sub-elements: article-title, issue, volume, source, coverage, year, fpage, and surname (the first author's surname). For each field except article-title and source, we required that the prediction exactly match the label. We allowed for an edit distance of 10 for the article title and 5 for the source, as sketched below. | https://arxiv.org/abs/2505.15948v1 |
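A minimal sketch of this per-field scoring rule; the XML field extraction is simplified (our assumption), and invalid XML is treated as incorrect for every field, matching the coverage rule described next.

```python
# Sketch of per-field scoring: exact match for most fields, with an
# edit-distance tolerance for article-title (10) and source (5).
import xml.etree.ElementTree as ET
from rapidfuzz.distance import Levenshtein

TOLERANCE = {"article-title": 10, "source": 5}  # other fields: exact match

def field_correct(field: str, predicted, gold) -> bool:
    if predicted is None or gold is None:
        return False
    return Levenshtein.distance(predicted.strip(), gold.strip()) \
        <= TOLERANCE.get(field, 0)

def extract_field(xml_text: str, field: str):
    """Return the text of the first matching element, or None on bad XML."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return None  # invalid XML counts as incorrect for every sub-element
    el = root.find(f".//{field}")
    return "".join(el.itertext()) if el is not None else None
```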
We also recorded the coverage, or the percentage of citations for which the language model produced valid XML. If the language model did not produce valid XML, its label was marked incorrect for all sub-elements. We also sampled 2^6 = 64 completions for each citation, with a reasoning trace, from Qwen3-0.6B to determine whether this task is in distribution for a small model. Running these experiments (pass@1 accuracies for eleven models and pass@64 accuracy for one model) took six hours on an H200 GPU. Based on our estimates, including the CO2 emissions of the GPU and the datacenter, these experiments required at most 2.46 kg CO2, or roughly the emissions incurred by driving the average U.S. gasoline-powered passenger vehicle 6.3 miles. See Appendix B for a more detailed calculation. 3 Results 3.1 RQ1: Model accuracies Figure 2 shows the accuracies of these models on the 2,000 citations. We report the highest pass@1 accuracy for each field across the two prompts (with and without a chain of thought). Figure 2: pass@1 model accuracies, max between CoT and non-CoT prompts. We also conducted a quick evaluation of Crossref search for these references. Crossref had coverage of 0.968 with an article title accuracy of 0.439. The API also provides a "confidence" score. Filtering only to high-confidence items (a score larger than 50), this approach has a coverage of 0.632 with an article title accuracy of 0.633. Both of these accuracies are lower than those of the worst-performing language models. We did a similar evaluation of GROBID using the /api/processCitation endpoint. It had a coverage of 0.989 with an article title accuracy of 0.6674 and a surname accuracy of 0.8516. The worst-performing language model does better by 8.76 percentage points for article title accuracy, and most language models (except for the worst two) also have a higher accuracy for surname identification. 3.2 RQ2: Small language models Yue et al. (2025) demonstrated that the capacity of a language model to learn a task using reinforcement learning with verifiable reward (RLVR) is strongly connected to the base model's pass@k performance on the task. This provides one technique to determine whether high-fidelity citation parsing is within the distribution of a small model. Figure 3 shows whether the correct answer for each sub-element appears within 64 samples, with reasoning, from Qwen3-0.6B. It shows that this task (over 90% accuracy for all sub-elements) is within distribution for the model and that the standard RLVR setup is likely to improve sampling efficiency. 4 Discussion This short report has demonstrated the capacity of language models to achieve high accuracy at marking up the different components of a plaintext citation. Specifically, the models evaluated in this study generally achieve higher field accuracy than state-of-the-art methods like GROBID and Crossref search. It also seems within the capacity of even the smallest model we tested (Qwen3-0.6B) to achieve high pass@1 accuracy with post-training through reinforcement learning. In this section, we conclude by discussing a few further directions for this research. First, deploying these models will likely require specialized compute infrastructure, further model optimization, or both. Although, in this work, we tested non-quantized models on the latest GPUs, additional research is needed to evaluate accuracy and feasibility in the low-resource context that is typically available for academic journals.
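For reference, the pass@64 statistic in Section 3.2 reduces to checking whether any of the 64 samples is correct for each citation; a minimal sketch (names ours):

```python
# Sketch of the pass@k check underlying Figure 3 (here k = 64): a citation
# counts as solved for a field if any of its first k samples is correct.
def pass_at_k(per_citation_sample_correctness, k=64):
    """per_citation_sample_correctness: one list of booleans per citation."""
    solved = sum(any(samples[:k]) for samples in per_citation_sample_correctness)
    return solved / len(per_citation_sample_correctness)
```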
Figure 3: pass@64 accuracy for Qwen3-0.6B on citation parsing. Also, although the test in Section 3.2 shows that RLVR is likely to be effective for Qwen3-0.6B, sampled with reasoning, there are a variety of other training methods that could be used which are more able to shift the underlying distribution of a language model. In particular, distillation and supervised fine-tuning may be effective in learning a high-accuracy annotation system, especially because this problem has a lot of available training data. Similarly, we did not explore any test-time techniques that could have improved accuracy or coverage. For instance, one can define a grammar and use constrained decoding to guarantee that the language model produces valid JATS XML, boosting coverage to 100% (Scholak et al., 2021; Poesia et al., 2022). It has also been demonstrated that Best-of-N sampling, process supervision with reward models, and self-feedback can improve accuracy at inference time (Snell et al., 2024; Madaan et al., 2023). Most of these techniques either require more compute or are not well optimized for existing inference frameworks, so they may not make sense for deployment. One other limitation of our work is that it relies on an initial extraction phase where we used Llama-3.1-8B-Instruct to identify the plaintext list of citations. We did not test the accuracy of this step, but we did validate that all of the matched citations have high similarity. Another direction for future research is to evaluate extraction, which will be important for offline metadata enrichment. Finally, it is worth returning to the significance of a tool which can efficiently and accurately parse the elements of a citation. Not only could this tool be deployed as part of journal software, but it could be applied offline to enrich the metadata for existing research publications. This new infrastructure could shed more light on how knowledge circulates in languages and contexts that have been systematically excluded from the dominant research communities of the Global North. 5 Funding This research was supported by free compute credits provided by Modal and by the Social Sciences and Humanities Research Council of Canada (SSHRC) through Grant #1007-2023-0001. The funder had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors declare no conflicts of interest. References Juan Pablo Alperin, Jason Portenoy, Kyle Demes, Vincent Larivière, and Stefanie Haustein. An analysis of the suitability of OpenAlex for bibliometric analyses. arXiv preprint arXiv:2404.17663, 2024. Ana María Cetto, Octavio Alonso-Gamboa, P. G. Altbach, and D. Teferra. Scientific and scholarly journals in Latin America and the Caribbean. In Knowledge Dissemination in Africa: The Role of the Scholarly Journal, ed. Philip G. Altbach and Damtew Teferra, pp. 99–126, 1998. Wonjun Choi, Hwamook Yoon, Mihwan Hyun, Hyejin Lee, Jaewook Seol, Kangsan Dajeong Lee, Young Joon Yoon, and Hyesoo Kong. Building an annotated corpus for automatic metadata extraction from multilingual journal article references. PLOS ONE, 18, 2023. URL https://api.semanticscholar.org/CorpusID:256031950. Jack Culbert, Anne Hobert, Najko Jahn, Nick Haupka, Marion Schmidt, Paul Donner, and Philipp Mayr. Reference | https://arxiv.org/abs/2505.15948v1 |
coverage analysis of OpenAlex compared to Web of Science and Scopus. arXiv preprint arXiv:2401.16359, 2024. Douglas Donnellan, Andy Lawrence, Daniel Bizo, et al. Uptime Institute Global Data Center Survey 2024, July 2024. URL https://datacenter.uptimeinstitute.com/rs/711-RIA-145/images/2024.GlobalDataCenterSurvey.Report.pdf. European Commission. Open Research Europe: Full article corpus. https://open-research-europe.ec.europa.eu, 2025. Complete collection of all articles published on Open Research Europe; accessed 19 March 2025. Alex Garnett. The XML Markup Evaluation Corpus, April 2016. URL https://pkp.sfu.ca/2016/04/18/the-xml-markup-evaluation-corpus/. Mary L. Gray and Siddharth Suri. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Harper Business, 2019. Karen Hao. AI is taking water from the desert. The Atlantic, March 2024. URL https://www.theatlantic.com/technology/archive/2024/03/ai-water-climate-microsoft/677602/. Chenyue Jiao, Kai Li, and Zhichao Fang. How are exclusively data journals indexed in major scholarly databases? An examination of four databases. Scientific Data, 10(1):737, 2023. Publisher: Nature Publishing Group UK London. Bikash Joshi, Anthi Symeonidou, Syed Mazin Danish, and Floris Hermsen. An end-to-end pipeline for bibliography extraction from scientific articles. Proceedings of the Second Workshop on Information Extraction from Scientific Publications, 2023. URL https://api.semanticscholar.org/CorpusID:267406919. Saurabh Khanna, Jon Ball, Juan Pablo Alperin, and John Willinsky. Recalibrating the scope of scholarly publishing: A modest step in a vast decolonization process. Quantitative Science Studies, 3(4):912–930, 2022. Publisher: MIT Press One Broadway, 12th Floor, Cambridge, Massachusetts 02142, USA. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. Self-Refine: Iterative Refinement with Self-Feedback, 2023. URL https://arxiv.org/abs/2303.17651. Microsoft Corporation. MarkItDown: A Python Tool for Converting Files to Markdown, 2025. URL https://github.com/microsoft/markitdown. Version 0.1.1. Philippe Mongeon and Adèle Paul-Hus. The journal coverage of Web of Science and Scopus: A comparative analysis. Scientometrics, 106:213–228, 2016. Publisher: Springer. NVIDIA Corporation. NVIDIA H200 Tensor Core GPU Datasheet. Technical report, NVIDIA Corporation, July 2024. URL https://www.megware.com/fileadmin/user_upload/LandingPage%20NVIDIA/NVIDIA_H200_Datasheet.pdf. Preliminary specifications. Gabriel Poesia, Oleksandr Polozov, Vu Le, Ashish Tiwari, Gustavo Soares, Christopher Meek, and Sumit Gulwani. Synchromesh: Reliable code generation from pre-trained language models, 2022. URL https://arxiv.org/abs/2201.11227. Torsten Scholak, Nathan Schucher, and Dzmitry Bahdanau. PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models. arXiv, abs/2109.05093, 2021. URL https://www.aclanthology.org/2021.emnlp-main.779.pdf. Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters, 2024. URL https://arxiv.org/abs/2408.03314. U.S. Environmental Protection Agency. Greenhouse Gas Equivalencies Calculator – Calculations and References. | https://arxiv.org/abs/2505.15948v1 |
https://www.epa.gov/energy/greenhouse-gas-equivalencies-calculator-calculations-and-references, 2024. Accessed: 2025-05-20. Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Yang Yue, Shiji Song, and Gao Huang. Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?, 2025. URL https://arxiv.org/abs/2504.13837. A Prompts A.1 Citation extraction System prompt for citation extraction: You are an expert annotator that specializes in reading academic articles and isolating their bibliography. You will see the text of an academic article and you should write out a list of the references from the article | https://arxiv.org/abs/2505.15948v1 |