SlowGuess committed on
Commit e551e51 · verified · 1 Parent(s): 4650d26

Add Batch 67e4236a-733e-4bdf-9c79-002d7c884233

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. feelsfemininetomeunderstandingperceivedgenderedstylethroughhumanannotations/b4471a63-2fc4-4d1d-a9a9-4d2acc698e72_content_list.json +3 -0
  2. feelsfemininetomeunderstandingperceivedgenderedstylethroughhumanannotations/b4471a63-2fc4-4d1d-a9a9-4d2acc698e72_model.json +3 -0
  3. feelsfemininetomeunderstandingperceivedgenderedstylethroughhumanannotations/b4471a63-2fc4-4d1d-a9a9-4d2acc698e72_origin.pdf +3 -0
  4. feelsfemininetomeunderstandingperceivedgenderedstylethroughhumanannotations/full.md +0 -0
  5. feelsfemininetomeunderstandingperceivedgenderedstylethroughhumanannotations/images.zip +3 -0
  6. feelsfemininetomeunderstandingperceivedgenderedstylethroughhumanannotations/layout.json +3 -0
  7. flsalearningsemanticstructuresindocumentcollectionsusingfoundationmodels/8b674ce1-5fe0-467a-9406-92edcbb5cab2_content_list.json +3 -0
  8. flsalearningsemanticstructuresindocumentcollectionsusingfoundationmodels/8b674ce1-5fe0-467a-9406-92edcbb5cab2_model.json +3 -0
  9. flsalearningsemanticstructuresindocumentcollectionsusingfoundationmodels/8b674ce1-5fe0-467a-9406-92edcbb5cab2_origin.pdf +3 -0
  10. flsalearningsemanticstructuresindocumentcollectionsusingfoundationmodels/full.md +322 -0
  11. flsalearningsemanticstructuresindocumentcollectionsusingfoundationmodels/images.zip +3 -0
  12. flsalearningsemanticstructuresindocumentcollectionsusingfoundationmodels/layout.json +3 -0
  13. iknowaudiointegratingknowledgegraphswithaudiolanguagemodels/0abba7d3-2569-484c-9709-1487d1f8a82e_content_list.json +3 -0
  14. iknowaudiointegratingknowledgegraphswithaudiolanguagemodels/0abba7d3-2569-484c-9709-1487d1f8a82e_model.json +3 -0
  15. iknowaudiointegratingknowledgegraphswithaudiolanguagemodels/0abba7d3-2569-484c-9709-1487d1f8a82e_origin.pdf +3 -0
  16. iknowaudiointegratingknowledgegraphswithaudiolanguagemodels/full.md +442 -0
  17. iknowaudiointegratingknowledgegraphswithaudiolanguagemodels/images.zip +3 -0
  18. iknowaudiointegratingknowledgegraphswithaudiolanguagemodels/layout.json +3 -0
  19. itoolreinforcedfinetuningwithdynamicdeficiencycalibrationforadvancedtooluse/1bc87092-a237-4665-9650-f330b7703936_content_list.json +3 -0
  20. itoolreinforcedfinetuningwithdynamicdeficiencycalibrationforadvancedtooluse/1bc87092-a237-4665-9650-f330b7703936_model.json +3 -0
  21. itoolreinforcedfinetuningwithdynamicdeficiencycalibrationforadvancedtooluse/1bc87092-a237-4665-9650-f330b7703936_origin.pdf +3 -0
  22. itoolreinforcedfinetuningwithdynamicdeficiencycalibrationforadvancedtooluse/full.md +516 -0
  23. itoolreinforcedfinetuningwithdynamicdeficiencycalibrationforadvancedtooluse/images.zip +3 -0
  24. itoolreinforcedfinetuningwithdynamicdeficiencycalibrationforadvancedtooluse/layout.json +3 -0
  25. ivedecidedtoleakprobinginternalsbehindpromptleakageintents/3a394495-8a16-47e4-b753-a14853f62ff9_content_list.json +3 -0
  26. ivedecidedtoleakprobinginternalsbehindpromptleakageintents/3a394495-8a16-47e4-b753-a14853f62ff9_model.json +3 -0
  27. ivedecidedtoleakprobinginternalsbehindpromptleakageintents/3a394495-8a16-47e4-b753-a14853f62ff9_origin.pdf +3 -0
  28. ivedecidedtoleakprobinginternalsbehindpromptleakageintents/full.md +0 -0
  29. ivedecidedtoleakprobinginternalsbehindpromptleakageintents/images.zip +3 -0
  30. ivedecidedtoleakprobinginternalsbehindpromptleakageintents/layout.json +3 -0
  31. ivisparaninteractivevisualspatialreasoningbenchmarkforvlms/d3e89823-af08-4e34-b602-4ae677b1f7cc_content_list.json +3 -0
  32. ivisparaninteractivevisualspatialreasoningbenchmarkforvlms/d3e89823-af08-4e34-b602-4ae677b1f7cc_model.json +3 -0
  33. ivisparaninteractivevisualspatialreasoningbenchmarkforvlms/d3e89823-af08-4e34-b602-4ae677b1f7cc_origin.pdf +3 -0
  34. ivisparaninteractivevisualspatialreasoningbenchmarkforvlms/full.md +777 -0
  35. ivisparaninteractivevisualspatialreasoningbenchmarkforvlms/images.zip +3 -0
  36. ivisparaninteractivevisualspatialreasoningbenchmarkforvlms/layout.json +3 -0
  37. mmwatdetectingotherinitiatedrepairrequestsindialogue/3260e39b-0e66-41cc-aa16-e84b99650b99_content_list.json +3 -0
  38. mmwatdetectingotherinitiatedrepairrequestsindialogue/3260e39b-0e66-41cc-aa16-e84b99650b99_model.json +3 -0
  39. mmwatdetectingotherinitiatedrepairrequestsindialogue/3260e39b-0e66-41cc-aa16-e84b99650b99_origin.pdf +3 -0
  40. mmwatdetectingotherinitiatedrepairrequestsindialogue/full.md +314 -0
  41. mmwatdetectingotherinitiatedrepairrequestsindialogue/images.zip +3 -0
  42. mmwatdetectingotherinitiatedrepairrequestsindialogue/layout.json +3 -0
  43. pfedgpthierarchicallyoptimizingloraaggregationweightsforpersonalizedfederatedgptmodels/6cae076b-0e4d-416f-8cba-50855d2287bf_content_list.json +3 -0
  44. pfedgpthierarchicallyoptimizingloraaggregationweightsforpersonalizedfederatedgptmodels/6cae076b-0e4d-416f-8cba-50855d2287bf_model.json +3 -0
  45. pfedgpthierarchicallyoptimizingloraaggregationweightsforpersonalizedfederatedgptmodels/6cae076b-0e4d-416f-8cba-50855d2287bf_origin.pdf +3 -0
  46. pfedgpthierarchicallyoptimizingloraaggregationweightsforpersonalizedfederatedgptmodels/full.md +438 -0
  47. pfedgpthierarchicallyoptimizingloraaggregationweightsforpersonalizedfederatedgptmodels/images.zip +3 -0
  48. pfedgpthierarchicallyoptimizingloraaggregationweightsforpersonalizedfederatedgptmodels/layout.json +3 -0
  49. rewordbenchbenchmarkingandimprovingtherobustnessofrewardmodelswithtransformedinputs/9308c9c9-dc4a-4610-9331-248e0fb07376_content_list.json +3 -0
  50. rewordbenchbenchmarkingandimprovingtherobustnessofrewardmodelswithtransformedinputs/9308c9c9-dc4a-4610-9331-248e0fb07376_model.json +3 -0
feelsfemininetomeunderstandingperceivedgenderedstylethroughhumanannotations/b4471a63-2fc4-4d1d-a9a9-4d2acc698e72_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:59a9206fe1ccf7a06791258c00406de5bff1010dc921369ad110dca66a59d362
+ size 163579
feelsfemininetomeunderstandingperceivedgenderedstylethroughhumanannotations/b4471a63-2fc4-4d1d-a9a9-4d2acc698e72_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:70fdd3c3cdb40d2a6c2c5414edd985eddeb3c3eda8d6972fee4bd370d5344fda
+ size 197513
feelsfemininetomeunderstandingperceivedgenderedstylethroughhumanannotations/b4471a63-2fc4-4d1d-a9a9-4d2acc698e72_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12347ac6371a2b29f4dca383291ec9b8a3b764b8826e7111e83ccfef507f3d41
+ size 1701215
feelsfemininetomeunderstandingperceivedgenderedstylethroughhumanannotations/full.md ADDED
The diff for this file is too large to render. See raw diff
feelsfemininetomeunderstandingperceivedgenderedstylethroughhumanannotations/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3628830cfc6071d68995f0a7920d9882757be56757d87b97e935b39000a9631d
+ size 1457150
feelsfemininetomeunderstandingperceivedgenderedstylethroughhumanannotations/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1af1fc0475dab89ccf6fe53d0cb595e891431899d6bbea126397acba3fb76f68
+ size 687427
flsalearningsemanticstructuresindocumentcollectionsusingfoundationmodels/8b674ce1-5fe0-467a-9406-92edcbb5cab2_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c3c66264ca0db50f830d1158cd7abd73a4b49ce0d680b3a04e8026fa8933d737
+ size 81267
flsalearningsemanticstructuresindocumentcollectionsusingfoundationmodels/8b674ce1-5fe0-467a-9406-92edcbb5cab2_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ffbb4ebea4025d3d1c09636c6f82b353273a94c0e76a244813d87fd332c316b
+ size 93843
flsalearningsemanticstructuresindocumentcollectionsusingfoundationmodels/8b674ce1-5fe0-467a-9406-92edcbb5cab2_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fef6f3ce2749d9243b6a2aaa73e2b9b58c8f42954944cebbd7be6bb158a45955
+ size 1104911
flsalearningsemanticstructuresindocumentcollectionsusingfoundationmodels/full.md ADDED
@@ -0,0 +1,322 @@
1
+ # fLSA: Learning Semantic Structures in Document Collections Using Foundation Models
2
+
3
+ Weijia Xu, Nebojsa Jojic & Nicolas Le Roux
4
+
5
+ Microsoft Research
6
+
7
+ Redmond, WA 98052, USA
8
+
9
+ {weijiaxu, jojic, nleroux}@microsoft.com
10
+
11
+ # Abstract
12
+
13
+ Humans can learn to solve new tasks by inducing high-level strategies from example solutions to similar problems and then adapting these strategies to solve unseen problems. Can we use large language models to induce such high-level structure from example documents or solutions? We introduce $fLSA$ , a foundation-model-based Latent Semantic Analysis method that iteratively clusters and tags document segments based on document-level contexts. These tags can be used to model the latent structure of given documents and for hierarchical sampling of new texts. Our experiments on story writing, math, and multi-step reasoning datasets demonstrate that $fLSA$ tags are more informative in reconstructing the original texts than existing tagging methods. Moreover, when used for hierarchical sampling, $fLSA$ tags help expand the output space in the right directions that lead to correct solutions more often than direct sampling and hierarchical sampling with existing tagging methods. $^{1}$
14
+
15
+ # 1 Introduction
16
+
17
+ Large language models (LLMs) have shown impressive performance on a wide range of tasks, such as reasoning (Suzgun et al., 2022; Liu et al., 2023), math problem solving (Wu et al., 2023), and open-ended text generation tasks (Katz et al., 2024; Dubey et al., 2024; OpenAI et al., 2024). Given natural language instructions or in-context examples with chain-of-thought steps, LLMs can adapt quickly to a new task and achieve outstanding performance on challenging tasks that require multi-step reasoning or planning (Wei et al., 2022). However, such methods typically rely on humans to induce the common strategy for solving a type of problem and demonstrate the strategy through few-shot chain-of-thought prompting. By contrast, humans learn to solve a new type of problem by analyzing some example problems and their solutions, inducing the common strategies (i.e., latent semantic structure) underlying these problem solutions, and testing them out on the new problems.
20
+
21
+ Inducing the latent semantic structure in a set of documents can be modeled as an unsupervised clustering and tagging problem, where given a set of coarsely segmented documents, we cluster the text segments that share common characteristics into the same set and assign a tag to each set of segments. Based on these segment tags, we can then uncover the latent structure by learning a dynamic model over the latent tags and their transition probabilities in the document set. As an example, Figure 1 shows a dynamic model over learned tags in mathematical solutions. Such dynamic models can help humans better understand and analyze large collections of documents. They also encode more generalizable information compared to few-shot examples, providing a useful guide for LLMs to solve new problems without manual intervention (as shown by the example in Figure 2). Additionally, they can aid search algorithms on complex reasoning tasks (Guan et al., 2025) through hierarchical sampling: one can sample from the dynamic model over latent tags as an outline for the actual solution steps to explore more diverse solution paths during the rollout stage.
22
+
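+ As a concrete illustration of the dynamic model in Figure 1, the sketch below estimates bigram transition probabilities $p(t_k|t_{k-1})$ from per-document tag sequences. It is a hedged illustration: the input format, helper names, and toy data are assumptions of mine, not the authors' code.
+
+ ```python
+ # Hypothetical sketch: estimate p(t_k | t_{k-1}) from lists of per-document tag sequences.
+ from collections import Counter, defaultdict
+
+ def bigram_model(tag_sequences):
+     counts = defaultdict(Counter)
+     for seq in tag_sequences:
+         for prev, nxt in zip(seq, seq[1:] + ["<END>"]):  # append <END> as a stop tag
+             counts[prev][nxt] += 1
+     # Normalize counts into transition probabilities per previous tag.
+     return {prev: {t: c / sum(nxt_counts.values()) for t, c in nxt_counts.items()}
+             for prev, nxt_counts in counts.items()}
+
+ # Toy example with made-up tag ids; real sequences would come from the fLSA tag assignments.
+ model = bigram_model([["Tag 3", "Tag 24", "Tag 7"], ["Tag 3", "Tag 24", "Tag 24"]])
+ print(model["Tag 24"])  # roughly {'Tag 7': 0.33, 'Tag 24': 0.33, '<END>': 0.33}
+ ```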
23
+ In this paper, we introduce $fLSA$, an iterative algorithm that alternately clusters and tags document segments using LLMs based on segment- and document-level contexts. $fLSA$ combines the merits of traditional topic modeling approaches such as Latent Semantic Analysis (LSA) (Hofmann et al., 1999) and LLM-based approaches, and captures shared semantic features among text segments more effectively. We evaluate 1) the informativeness of $fLSA$ tags by measuring how well they help reconstruct the original text spans, and 2) their usefulness in expanding the search space in the right
24
+
25
+ ![](images/b4db4e8cc868254febb01783499e3632b2ea9356b4f30a6fcaba672a36338571.jpg)
26
+ Figure 1: Visualizing the bigram dynamic model over the latent tags learned on MATH solutions. For each tag, we list the three most probable next tags based on the transition probabilities $p(t_k|t_{k-1})$ . The transition probabilities are annotated on the arrows. For Tag 24, we also list two example next tags outside the top-3 choices with transition probabilities $p \approx 0.01$ .
27
+
28
+ ![](images/0b392be13c125d94cd7ae8cdb91832f38551a04a8170471c262fec02b9393768.jpg)
29
+ Figure 2: An example of using the sampled tag sequence as an outline (in purple) to aid an LLM in generating a solution (italicized) to the given problem (in blue).
30
+
31
+ directions by measuring the Hits@K accuracy of the generated solutions through hierarchical sampling using the tags. Experiments on story writing, math and multi-step reasoning datasets show that fLSA leads to higher reconstruction likelihood than existing tagging approaches. Furthermore, on math and reasoning tasks, hierarchical sampling using fLSA tags helps expand the output space in the right directions more effectively than both direct sampling and existing tagging methods.
32
+
33
+ # 2 Related Work
34
+
35
+ # 2.1 Document Segmentation and Labeling
36
+
37
+ To model the structure and topic shifts in a document, prior work has introduced unsupervised document segmentation and labeling approaches that leverage term co-occurrence features (Hearst, 1997), co-occurrence shifts in topic vectors (Riedl and Biemann, 2012), and lexical features and word embeddings (Glavaš et al., 2016). These approaches focus mostly on lexical features, which are limited in modeling the high-level semantic structure of documents. Neural approaches, on the other hand, have the potential to model sentence-level semantics and document-level topic flows more effectively, but rely heavily on supervised training samples in the target domain (Koshorek et al., 2018; Arnold et al., 2019; Zhang et al., 2019). Our algorithm infers the structure of documents based on segment- and document-level contexts using LLMs in an unsupervised fashion.
40
+
41
+ # 2.2 Topic Modeling
42
+
43
+ Topic modeling is a widely used technique in natural language processing for uncovering hidden thematic structures in large text corpora. The most foundational methods in this domain include Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and Latent Semantic Analysis (LSA) (Hofmann et al., 1999; Hofmann, 1999, 2001). Both methods represent each document as a bag of words and model word-document relationships using a mixture of latent topics, where each topic is represented by a list of top words. These algorithms are mathematically grounded, but typically rely on manual topic interpretation, which often leads to incorrect or incomplete labels (Gillings and Hardie, 2022). More recent work introduces neural topic models (Miao et al., 2016; Dieng et al., 2020; Srivastava and Sutton, 2017), which combine traditional topic models with word embeddings. These models have shown improved performance in handling large and complex vocabularies. However, they still model each document as a bag of words, disregarding the sentence- and document-level semantics. Additionally, the resulting topics are represented either by semantic vectors or lists of closest words, which still rely on manual interpretation. Furthermore, studies have shown that incorporating expert knowledge in topic modeling improves over traditional unsupervised methods (Lee et al., 2017).
46
+
47
+ Moreover, the advent of large language models (LLMs) has led to LLM-based topic modeling approaches. Li et al. (2023) propose to use LLMs for topic labeling based on the top terms produced by traditional topic models. For short text spans, however, the bag-of-words representation of texts provides limited information for topic modeling. Akash et al. (2023) address the issue by extending each text span into longer sequences using LLMs and extracting topics from the extended texts using neural topic models. Furthermore, Pham et al. (2024); Wang et al. (2023); Mu et al. (2024) propose prompt-based techniques to generate, merge, and assign topics using LLMs. These approaches leverage the domain knowledge embedded in LLMs and produce more interpretable topics based on sentence- or document-level contexts beyond bags of words.
48
+
49
+ However, the generate-and-merge approach limits the model's potential for discovering shared features among various text spans across documents of different themes and often leads to overly abstract, thematic topics, especially on a large-scale document collection. We propose $fLSA$, which combines the merits of traditional LSA, which uses an iterative EM algorithm to model topic and text distributions, and LLM-based approaches.
50
+
51
+ # 3 Approach
52
+
53
+ We propose $fLSA$ , a foundation-model-based EM algorithm that learns the latent tags on a set of segmented documents. We draw inspiration from the traditional Probabilistic Latent Semantic Analysis and use iterative EM steps to learn the latent tags that maximize the estimated likelihood of segmented documents.
54
+
55
+ # 3.1 Probabilistic Latent Semantic Analysis (PLSA)
56
+
57
+ PLSA models the distribution over words $w$ in a document $d$ as a mixture of conditionally independent multinomial distributions, each such distribution representing a topic $t$ . This generative model of words in a document is usually expressed mathematically in terms of the distribution:
58
+
59
+ $$
60
+ p _ {\Theta} (w | d) = \sum_ {t} p _ {\Theta} (t | d) p _ {\Theta} (w | t), \tag {1}
61
+ $$
62
+
63
+ which can be sampled by first sampling a topic $t$ for the given document $d$ from $p_{\Theta}(t|d)$ and then sampling words conditioned on the topic from $p_{\Theta}(w|t)$ . $\Theta$ represents the parameters of the PLSA model. PLSA aims to find $\Theta$ that maximizes the log-likelihood of words in all documents:
64
+
65
+ $$
66
+ \mathcal {L} = \sum_ {d, w} \log \sum_ {t} p _ {\Theta} (t | d) p _ {\Theta} (w | t) \tag {2}
67
+ $$
68
+
69
+ To estimate the parametric distributions $p_{\Theta}(t|d)$ and $p_{\Theta}(w|t)$ , PLSA relies on an EM algorithm, which is an iterative method to find the maximum likelihood estimate of parameters in statistical models. Specifically, an EM iteration alternates between an expectation (E) step and a maximization (M) step. At iteration $i$ , the E-step estimates the posterior distribution $p_{\Theta_{i-1}}(t|w,d)$ of topics $t$ conditioned on each document $d$ and word $w$ in it based on fixed parameters $\Theta_{i-1}$ from the previous iteration:
70
+
71
+ $$
72
+ p _ {\Theta_ {i - 1}} (t | w, d) = \frac {p _ {\Theta_ {i - 1}} (t | d) p _ {\Theta_ {i - 1}} (w | t)}{\sum_ {t ^ {\prime}} p _ {\Theta_ {i - 1}} \left(t ^ {\prime} \mid d\right) p _ {\Theta_ {i - 1}} \left(w \mid t ^ {\prime}\right)} \tag {3}
73
+ $$
74
+
75
+ The M-step optimizes the parameters $\Theta$ such that the expectation of the log-likelihood $p_{\Theta}(w|d)$ of words in each document given $t$ sampled from the estimated posterior $p_{\Theta_{i - 1}}(t|w,d)$ is maximized:
76
+
77
+ $$
78
+ \arg \max _ {\Theta} \sum_ {d, w} \mathbb {E} _ {t \sim p _ {\Theta_ {i - 1}} (t | w, d)} \log p _ {\Theta} (t | d) p _ {\Theta} (w | t) \tag {4}
79
+ $$
80
+
81
+ Theoretically, each EM iteration will yield a larger likelihood in Eq. 2 until it converges to a local maximum. In the topic modeling literature, various generalized EM variants exist, including ones that approximate the posterior distribution with a small number of samples, or just its mode, and ones that alter the parameters so that they do not necessarily maximize the likelihood under the posterior, but simply improve it.
82
+
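+ To make the E-step (Eq. 3) and M-step (Eq. 4) concrete, here is a minimal numpy sketch of PLSA EM on a toy word-document count matrix. It is an illustrative re-implementation with my own variable names, not code from the paper.
+
+ ```python
+ # Minimal PLSA EM sketch on a toy word-document count matrix (illustrative only).
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ n_docs, n_words, n_topics = 4, 6, 2
+ counts = rng.integers(1, 5, size=(n_docs, n_words))      # toy n(d, w) counts
+
+ p_t_d = rng.dirichlet(np.ones(n_topics), size=n_docs)    # p(t|d), shape (D, T)
+ p_w_t = rng.dirichlet(np.ones(n_words), size=n_topics)   # p(w|t), shape (T, W)
+
+ for _ in range(50):
+     # E-step: posterior p(t|w,d) proportional to p(t|d) p(w|t)   (Eq. 3)
+     joint = p_t_d[:, :, None] * p_w_t[None, :, :]          # shape (D, T, W)
+     post = joint / joint.sum(axis=1, keepdims=True)
+     # M-step: re-estimate p(t|d) and p(w|t) from expected counts   (Eq. 4)
+     exp_counts = counts[:, None, :] * post                 # shape (D, T, W)
+     p_t_d = exp_counts.sum(axis=2)
+     p_t_d /= p_t_d.sum(axis=1, keepdims=True)
+     p_w_t = exp_counts.sum(axis=0)
+     p_w_t /= p_w_t.sum(axis=1, keepdims=True)
+
+ # Log-likelihood of Eq. 2, which is non-decreasing over EM iterations.
+ log_lik = (counts * np.log(p_t_d @ p_w_t)).sum()
+ print(log_lik)
+ ```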
83
+ ![](images/505fcdfeaf19e29484da1fa87dd106692eec61eefcf4c58abea5f17ace6b3465.jpg)
84
+ Figure 3: An illustration of the E-step and M-step in $fLSA$ . At the E-step, we assign each text segment to a tag through prompting given the tag descriptions at the previous iteration. At the M-step, we prompt the LLM to generate new tag descriptions based on the segments assigned to each tag at the E-step.
85
+
86
+ # 3.2 Foundation-Model-Based LSA (fLSA)
87
+
88
+ We introduce $fLSA$ , which learns the latent tags (similar to topics in LSA) on a set of segmented documents $d = (x_{1}, x_{2}, \ldots, x_{L})$ , where the document $d$ is segmented into $L$ segments $x_{k}$ . A core difference between $fLSA$ and PLSA is that PLSA models the generative probability of each word in a document independently, while $fLSA$ models the probability of the sequence of words $(w_{1}, w_{2}, \ldots, w_{n})$ in each text segment $x_{k}$ jointly as $p_{\Theta}(w_{1}, w_{2}, \ldots, w_{n}|t)$ . Moreover, PLSA models the distribution over tags $p_{\Theta}(t|d)$ for each document independently of other documents, while $fLSA$ models the distribution over tags $t$ conditioned not only on current segment $x_{k}$ but also on the document $d$ .
89
+
90
+ To express the difference mathematically, in $fLSA$ , the generative model of a segment $x_{k} = w_{1..n}$ in a document $d$ can be written as:
91
+
92
+ $$
93
+ p _ {\Theta} \left(w _ {1.. n} \mid x _ {k}, d\right) = \sum_ {t} p _ {\Theta} (t \mid x _ {k}, d) p _ {\Theta} \left(w _ {1.. n} \mid t\right), \tag {5}
94
+ $$
95
+
96
+ which can be sampled by first sampling a tag $t$ for the current segment $x_{k}$ in document $d$ and then sampling the word sequence $w_{1..n}$ for that segment given the tag.
97
+
98
+ Another core difference between $fLSA$ and PLSA is that we model the parametric distributions $p_{\Theta}(t|x_k,d)$ and $p_{\Theta}(w_{1..n}|t)$ using an LLM with frozen parameters, and the tunable "parameters" $\Theta$ in $fLSA$ are the textual description $\Theta(t)$ for each tag $t$ and the tag assignment for each segment.
101
+
102
+ Analogously to the (generalized) EM algorithms for traditional topic models, we are seeking $\Theta$ that corresponds to high likelihood of the word sequence in each document:
103
+
104
+ $$
105
+ \mathcal {L} = \sum_ {d, x _ {k}} \log \sum_ {t} p _ {\Theta} (t | x _ {k}, d) p _ {\Theta} \left(w _ {1.. n} | t\right) \tag {6}
106
+ $$
107
+
108
+ Our iterative EM steps are shown in Figure 3. At the E-step in iteration $i$ , we approximate the posterior distribution $p_{\Theta_{i-1}}(t | w_{1..n}, x_k, d)$ of tags $t$ for each segment $x_k = w_{1..n}$ in document $d$ by prompting the LLM to greedily assign a tag given the tag descriptions $\Theta_{i-1}(t)$ from the previous iteration, the current segment $x_k = w_{1..n}$ and neighbouring segments $(x_{k-W/2}, x_{k+1-W/2}, \ldots, x_{k+W/2})$ as document-level context, where $W$ is the context window size. At the M-step, in lieu of maximizing (or just improving) the expected log-likelihood $p_{\Theta}(w_{1..n} | x_k, d)$ of words in each segment given the tag assignments from the E-step,
109
+
110
+ $$
+ \arg \max_{\Theta} \sum_{d, x_k} \mathbb{E}_{t \sim p_{\Theta_{i-1}}(t \mid w_{1..n}, x_k, d)} \log p_{\Theta}(t \mid x_k, d) \, p_{\Theta}(w_{1..n} \mid t), \tag{7}
+ $$
117
+
118
+ we obtain updated tag descriptions $\Theta(t)$ by inviting the LLM itself to summarize the segments assigned to the tag $t$: we aggregate the segments assigned to tag $t$ and prompt the LLM to generate a tag description that best summarizes what these segments share in common (Fig. 3).
121
+
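+ The following sketch shows one way the E- and M-steps described above could be wired together. The `chat` callable, prompt wording, and data layout are illustrative assumptions, not the authors' implementation or prompts.
+
+ ```python
+ # Hedged sketch of the fLSA EM loop; `chat` stands in for any LLM call (GPT-4, Qwen-2.5-7B, ...).
+ import random
+
+ def flsa(docs, num_tags=100, num_iters=30, window=2, chat=None):
+     # docs: list of documents, each a list of text segments x_1..x_L.
+     # Initial descriptions are placeholders; in practice they would come from a first pass.
+     tag_desc = {t: f"Tag {t}: (uninitialized)" for t in range(num_tags)}
+     assign = {}
+     for _ in range(num_iters):
+         # E-step: greedily assign a tag to each segment given its document-level context.
+         for d_id, segs in enumerate(docs):
+             for k, seg in enumerate(segs):
+                 ctx = segs[max(0, k - window // 2): k + window // 2 + 1]
+                 prompt = ("Tag descriptions:\n" + "\n".join(tag_desc.values())
+                           + "\n\nContext:\n" + " ".join(ctx)
+                           + "\n\nSegment:\n" + seg
+                           + "\n\nAnswer with the single best tag id.")
+                 assign[(d_id, k)] = int(chat(prompt))   # assumes the LLM returns a tag id
+         # M-step: refresh each tag description from (a sample of) its assigned segments.
+         for t in range(num_tags):
+             members = [docs[d][k] for (d, k), tt in assign.items() if tt == t]
+             if members:
+                 sample = random.sample(members, min(10, len(members)))
+                 tag_desc[t] = chat("Summarize what these segments share in common:\n"
+                                    + "\n---\n".join(sample))
+     return tag_desc, assign
+ ```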
122
+ # 4 Experimental Setup
123
+
124
+ # 4.1 Datasets
125
+
126
+ We evaluate $fLSA$ against various baselines on WritingPrompts (a story writing dataset (Fan et al., 2018)), MATH (which contains math problems and the corresponding solution texts (Hendrycks et al., 2021)), and the Big-Bench Hard (BBH) benchmark (which contains diverse types of reasoning problems and their solutions (Suzgun et al., 2022)). We set the number of tags to 100 for WritingPrompts and MATH, and 50 for BBH (see the Appendix for more details).
127
+
128
+ # 4.2 Evaluation Metrics
129
+
130
+ Reconstruction Likelihood To measure the informativeness of learned tags (either through $fLSA$ or a baseline algorithm), we measure the reconstruction log-likelihood of the test documents (stories in the test set of WritingPrompts or problem solutions in the test set of MATH) conditioned on the tags.
131
+
132
+ Specifically, for each test case $x_{k}$, which is a segment randomly sampled from a test document $x_{1\dots L}$ (itself randomly sampled from the test corpus), we approximate the reconstruction log-likelihood of $x_{k}$ given the latent tag $t_{k}$ predicted from $x_{k}$ and its neighboring segments, under the LLM:
133
+
134
+ $$
135
+ \mathbb {E} _ {t _ {k} \sim p _ {L L M} (t | x _ {k}, d)} [ \log p _ {L L M} (x _ {k} | x _ {1 \dots k - 1}, t _ {k}) ] \tag {8}
136
+ $$
137
+
138
+ Specifically, we first sample $S$ alternative segments at position $k$ independently by $\{\tilde{x}_k^{(1)},\tilde{x}_k^{(2)},\dots,\tilde{x}_k^{(S)}\} \sim p_{LLM}(\cdot |x_{1\dots k - 1})$. Next, we conduct $T$ repeated experiments to approximate the log-likelihood of $x_{k}$ given the previous segments $x_{1\dots k - 1}$ and the tag $t_k$ predicted on $x_{k}$ under the LLM. Each time, we randomly sample $C$ alternative segments from $\{\tilde{x}_k^{(1)},\tilde{x}_k^{(2)},\dots,\tilde{x}_k^{(S)}\}$ and put them together with $x_{k}$ (in randomly shuffled order) as options, and ask the LLM which one is the true continuation conditioned on $x_{1\dots k - 1}$ and $t_k$. Based on the number of times (denoted as $c_{k}$) that the LLM chooses $x_{k}$ as the true continuation among all $T$ experiments, we estimate the reconstruction log-likelihood with alpha-smoothing $(\alpha = 0.1)$:
141
+
142
+ $$
143
+ \begin{array}{l} \mathbb {E} _ {t _ {k} \sim p _ {L L M} (t | x _ {k}, d)} [ \log p _ {L L M} (x _ {k} | x _ {1 \ldots k - 1}, t _ {k}) ] \\ = \log \frac {c _ {k} + \alpha}{T + \alpha S} \tag {9} \\ \end{array}
144
+ $$
145
+
146
+ As a baseline, we compare the reconstruction log-likelihood with the log-likelihood computed the same way as above but without conditioning on any tags:
147
+
148
+ $$
149
+ \mathbb {E} \left[ \log p _ {L L M} \left(x _ {k} \mid x _ {1 \dots k - 1}\right) \right] = \log \frac {c _ {k} ^ {\prime} + \alpha}{T + \alpha S} \tag {10}
150
+ $$
151
+
152
+ where $c_k'$ is the number of times that the LLM chooses $x_k$ as the true continuation among $T$ experiments, which is computed the same way as above except that when asking the LLM to choose the true continuation, we only provide the previous text segments $x_{1\dots k - 1}$ without any tags.
153
+
154
+ In our experiments, we evaluate the reconstruction log-likelihood of all methods on the same set of 1K randomly sampled test cases.
155
+
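+ A small sketch of this smoothed multiple-choice estimator (Eqs. 9-10) is given below. `choose_continuation` stands in for the LLM query (with or without a tag) and is an assumed helper, not part of the paper.
+
+ ```python
+ # Estimate log p(x_k | x_{1..k-1}, t_k) via T multiple-choice trials with alpha-smoothing.
+ import math
+ import random
+
+ def reconstruction_loglik(x_k, prefix, tag, alternatives, choose_continuation,
+                           T=20, C=3, alpha=0.1):
+     S = len(alternatives)          # S pre-sampled alternative segments
+     c_k = 0
+     for _ in range(T):
+         options = random.sample(alternatives, C) + [x_k]
+         random.shuffle(options)
+         # Ask the LLM which option is the true continuation of `prefix`, given `tag`
+         # (pass tag=None for the No Tag baseline of Eq. 10).
+         if choose_continuation(prefix, tag, options) == x_k:
+             c_k += 1
+     return math.log((c_k + alpha) / (T + alpha * S))     # Eq. 9
+ ```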
156
+ Hits@K Accuracy To demonstrate that the learned tags can also help expand the search space in the right directions when searching for effective solutions to a complex reasoning task, we learn a dynamic model over the latent tags (as shown by the example in Figure 1) and use it for hierarchical sampling, where we first sample a sequence of tags as an outline and then sample the actual text based on the outline. And then, we evaluate the Hits@K accuracy of hierarchical sampling with latent tags, and compare it with the Hits@K accuracy of direct sampling without tags. Specifically, for each problem, we sample $K = 50$ solutions independently from an LLM given the problem description either directly or through hierarchical sampling with latent tags. If any of the $K$ solutions leads to the correct answer, it gets a score of 1, otherwise 0. Finally, we compute the average score over all testing problems.
157
+
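+ Hits@K itself can be computed as below, where `sample_solution` and `is_correct` are assumed task-specific callables (e.g. a ChatGPT sampling call and an answer checker), not names from the paper.
+
+ ```python
+ # Score 1 if any of K independently sampled solutions is correct, then average over problems.
+ def hits_at_k(problems, sample_solution, is_correct, k=50):
+     scores = []
+     for prob in problems:
+         hit = any(is_correct(prob, sample_solution(prob)) for _ in range(k))
+         scores.append(1.0 if hit else 0.0)
+     return sum(scores) / len(scores)
+ ```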
158
+ For hierarchical sampling, we first sample a sequence of tags $(t_1,t_2,\dots,t_l)$ (up to the special tag <END>) with maximum length $L$ using a bigram model learned on the training data (without conditioning on the test problem):
159
+
160
+ $$
161
+ \begin{array}{l} p \left(t _ {1}, t _ {2}, \dots , t _ {l}\right) \tag {11} \\ = p \left(t _ {1}\right) p \left(t _ {2} \mid t _ {1}\right) \dots p \left(t _ {l} \mid t _ {l - 1}\right) p (< \mathrm {E N D} > \mid t _ {l}) \\ \end{array}
162
+ $$
163
+
164
+ We then prompt the LLM to generate a solution to the given problem based on the tag sequence $(t_1,t_2,\dots,t_l)$ using the prompt template shown in Figure 2.
165
+
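+ Putting Eq. 11 together with the generation step, a minimal hierarchical sampling sketch could look as follows. `chat`, `tag_desc`, and the prompt text are assumptions for illustration; the authors' actual prompt template is the one shown in Figure 2.
+
+ ```python
+ # Sample a tag outline from the bigram model (Eq. 11), then condition the LLM on it.
+ import random
+
+ def sample_outline(p_start, p_next, max_len=10):
+     # p_start: {tag: prob}; p_next: {tag: {next_tag_or_"<END>": prob}}
+     tags = list(p_start)
+     t = random.choices(tags, weights=[p_start[x] for x in tags])[0]
+     seq = []
+     while t != "<END>" and len(seq) < max_len:
+         seq.append(t)
+         nxt = p_next.get(t, {"<END>": 1.0})
+         t = random.choices(list(nxt), weights=list(nxt.values()))[0]
+     return seq
+
+ def hierarchical_sample(problem, p_start, p_next, tag_desc, chat, max_len=10):
+     outline = sample_outline(p_start, p_next, max_len)
+     steps = "\n".join(tag_desc[t] for t in outline)
+     return chat("Problem: " + problem
+                 + "\nFollow this outline of solution steps:\n" + steps
+                 + "\nWrite the full solution.")
+ ```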
166
+ # 4.3 fLSA Setup
167
+
168
+ For the EM procedure, we set the maximum number of iterations to 30.$^{4}$ At the E-step (where the LLM assigns a tag to each segment conditioned not only on the current segment but also on neighbouring segments within the context window), we use a context window size of 2 on WritingPrompts and an unlimited context window (such that the whole solution is used as context) on MATH and BBH. At the M-step, we randomly sample 10 segments assigned to each tag to update the tag description.
169
+
170
+ # 4.4 Baselines
171
+
172
+ TradLDA We compare our approach with the traditional Latent Dirichlet Allocation (TradLDA), a type of LSA algorithm designed to discover latent topics in a collection of text spans (Blei et al., 2003).
173
+
174
+ TradLDA+LLM As Li et al. (2023) showed that the topic labels generated by LLMs based on the key terms learned through TradLDA are preferred more often than the original labels, we also include TradLDA+LLM as a baseline. Specifically, we first learn the topics and the key terms for each topic using TradLDA, and then use GPT-4 to generate a description for each topic based on the key terms.
175
+
176
+ Prompting Recent work showed that, with appropriate prompts, LLMs are capable of directly generating topic labels given a set of text documents and condensing overarching topics (Pham et al., 2024; Wang et al., 2023; Mu et al., 2024). As a baseline, we adapt the approach (along with the prompts) in Mu et al. (2024) to generate topic descriptions for each text segment.
177
+
178
+ GenOutline For Hits@K accuracy, we also include a two-step sampling baseline, where we first prompt the LLM to generate a multi-step outline for solving this type of problem and then prompt the LLM to generate the actual solution based on the problem description and the outline.
179
+
180
+ # 4.5 Large Language Model Setup
181
+
182
+ For clustering and tagging, we use GPT-4 (OpenAI et al., 2024) and Qwen-2.5-7B (a much smaller LLM introduced in Qwen et al. (2025)). We also use GPT-4 to estimate the reconstruction log-likelihood. To measure Hits@K Accuracy, we use ChatGPT (gpt-3.5-turbo; OpenAI (2023)) instead of GPT-4, because GPT-4 has achieved high accuracy on MATH and BBH (e.g. $84\%$ on MATH (Zhou et al., 2023)), possibly due to data contamination issues (Deng et al., 2024; Bubeck et al., 2023). Thus, we use ChatGPT for solution sampling to show the potential of using learned tags to diversify the sampled outputs and improve the chance of finding a correct answer when the model cannot find it through direct sampling.[5]
185
+
186
+ # 5 Results
187
+
188
+ # 5.1 Reconstruction Likelihood
189
+
190
+ First, we compare the reconstruction log-likelihood of $fLSA$ with the No Tag baseline (without conditioning on any tags). As shown in Table 1, conditioning on $fLSA$ tags helps predict the original texts: $fLSA$ brings 0.7–1.4 higher log-likelihood than the No Tag baseline.
191
+
192
+ TradLDA also brings higher reconstruction log-likelihood over the No Tag baseline. However, since TradLDA only captures word or term co-occurrences, it still underperforms fLSA consistently on all three datasets. Moreover, TradLDA+LLM fails to improve over TradLDA. As shown by the examples in Table 2, it is extremely challenging for LLMs and even humans to extract meaningful semantic information from the key terms learned on short text segments through TradLDA, and the resulting tag descriptions are overly generic, making it challenging to reconstruct the original text segments accurately.
193
+
194
+ Compared with the Prompting baseline, $fLSA$ achieves 0.2-0.5 higher log-likelihood on all three datasets. We further compared the tags learned using Prompting versus $fLSA$ . As shown by the examples in Table 3, Prompting tends to merge unrelated topics into a mixed topic (e.g. Tag 1 and 2), and the resulting topics become overly broad. Even for tags sharing a common theme, the descriptions often lack specificity and detail (e.g. Tag 3). By contrast, $fLSA$ identifies segments with similar themes, groups them into a single cluster and produces more detailed tag descriptions with example plots.
195
+
196
+ # 5.2 Hits@K Accuracy
197
+
198
+ We further evaluate how the tags and semantic structure learned through fLSA help expand the output space in the right directions that lead to
199
+
200
+ <table><tr><td></td><td>No Tag</td><td>TradLDA</td><td>TradLDA+LLM</td><td>Prompting</td><td>fLSA</td></tr><tr><td>WritingPrompts</td><td>-4.81</td><td>-3.75</td><td>-4.12</td><td>-3.62</td><td>-3.43</td></tr><tr><td>MATH-Num</td><td>-3.32</td><td>-2.96</td><td>-3.28</td><td>-3.06</td><td>-2.64</td></tr><tr><td>MATH-All</td><td>-3.67</td><td>-3.16</td><td>-3.57</td><td>-3.44</td><td>-2.94</td></tr></table>
201
+
202
+ Table 1: Reconstruction log-likelihood of fLSA versus the baseline without tags (No Tag), traditional LDA (TradLDA), traditional LDA with LLM-generated tag descriptions (TradLDA+LLM) (Li et al., 2023), and the prompting baseline (Prompting) (Mu et al., 2024) on WritingPrompts story dataset, Number Theory dataset from MATH (MATH-Num), and the MATH (MATH-All) dataset.
203
+
204
+ <table><tr><td>Key Terms</td><td>Tag Description</td></tr><tr><td>nothing, get, life, else, light, across, best, ca, single, come, got, death, together, running, power, system, entire, could, control, everything</td><td>The words you’ve provided span a broad range of concepts, but they share a common denominator in that they can all be associated with themes commonly found in science fiction literature and media.</td></tr><tr><td>continued, surface, wait, raised, floor, slowly, give, new, sure, needed, around, also, face, body, fact, made, bitch, girl, guy, much</td><td>The words listed seem to be common English words that could appear in a wide range of contexts. However, given their generic nature, they could be particularly prevalent in narrative or descriptive writing, such as in fiction, storytelling, or personal narratives.</td></tr></table>
205
+
206
+ Table 2: Examples of key terms learned on short story segments in WritingPrompts through TradLDA and the corresponding tag descriptions generated by GPT-4. Given only the key terms without context, the tag descriptions produced by GPT-4 are too generic to recover the original text spans.
207
+
208
+ <table><tr><td>Prompting Tags</td><td>fLSA Tags</td></tr><tr><td>Tag 1: Stories involving themes of sacrifice, duty, friendship, companionship, hope, and resilience in the face of crisis.</td><td>Tag 1: Scenes involving intense, often dangerous situations, like explosions, retreats, long nights, empty streets, fires, and storms.</td></tr><tr><td>Tag 2: Stories involving time travel, genetic irregularities, and strange creatures that feed on negative emotions.</td><td>Tag 2: The protagonist experiences surreal and unexpected events, often involving time travel or strange bodily functions, and narrates them in a casual, humorous tone.</td></tr><tr><td>Tag 3: Stories involving emotional moments and first hugs.</td><td>Tag 3: This tag is associated with story segments that feature intense emotional moments, often involving fear, anger, or distress, and frequently serve as turning points or climactic scenes in the narrative.</td></tr></table>
209
+
210
+ Table 3: Example tags learned on short story segments in WritingPrompts through Prompting versus $fLSA$ . Prompting tags are either too mixed (e.g. Tag 1 and 2) or too generic (e.g. Tag 3), while $fLSA$ groups segments of similar themes into the same cluster and describes each cluster with detailed explanations and example plots.
211
+
212
+ correct solutions by measuring the Hits@K Accuracy of various sampling methods with or without tags. First, compared with direct sampling without using any tags, hierarchical sampling with $fLSA$ tags leads to significantly higher Hits@K accuracy by +10.0 points on MATH and +16.6 points on BBH on average. Additionally, we compare $fLSA$ with GenOutline, a two-step sampling approach where we prompt the LLM to generate an outline before generating the actual solution. GenOutline improves over direct sampling on most tasks, but still underperforms hierarchical sampling with
215
+
216
+ <table><tr><td></td><td>No Tag</td><td>GenOutline</td><td>TradLDA</td><td>TradLDA+LLM</td><td>Prompting</td><td>fLSA</td></tr><tr><td colspan="7">MATH</td></tr><tr><td>Algebra</td><td>88.6</td><td>90.1</td><td>93.6</td><td>89.6</td><td>91.1</td><td>90.1</td></tr><tr><td>Counting</td><td>61.3</td><td>60.4</td><td>69.8</td><td>65.1</td><td>69.8</td><td>70.8</td></tr><tr><td>Geometry</td><td>53.1</td><td>55.2</td><td>58.3</td><td>57.3</td><td>62.5</td><td>60.4</td></tr><tr><td>InterAlgebra</td><td>55.7</td><td>51.7</td><td>58.7</td><td>59.2</td><td>61.2</td><td>61.2</td></tr><tr><td>Number</td><td>65.4</td><td>76.0</td><td>77.9</td><td>74.0</td><td>78.8</td><td>83.7</td></tr><tr><td>PreAlgebra</td><td>74.2</td><td>79.1</td><td>81.3</td><td>81.3</td><td>84.6</td><td>89.0</td></tr><tr><td>PreCalculus</td><td>42.2</td><td>46.8</td><td>51.4</td><td>46.8</td><td>49.5</td><td>55.0</td></tr><tr><td>Average</td><td>62.9</td><td>65.6</td><td>70.1</td><td>67.6</td><td>71.1</td><td>72.9</td></tr><tr><td colspan="7">BBH</td></tr><tr><td>Date</td><td>92.8</td><td>94.4</td><td>95.6</td><td>95.2</td><td>95.2</td><td>98.8</td></tr><tr><td>Formal</td><td>45.2</td><td>61.2</td><td>65.6</td><td>52.8</td><td>57.2</td><td>93.2</td></tr><tr><td>Geometric</td><td>70.8</td><td>76.8</td><td>83.6</td><td>84.0</td><td>80.0</td><td>87.6</td></tr><tr><td>Logical</td><td>89.2</td><td>95.6</td><td>95.6</td><td>96.0</td><td>96.5</td><td>99.5</td></tr><tr><td>Movie</td><td>84.8</td><td>88.0</td><td>92.8</td><td>92.0</td><td>93.2</td><td>95.2</td></tr><tr><td>ObjCount</td><td>93.2</td><td>96.8</td><td>99.2</td><td>100.0</td><td>100.0</td><td>95.2</td></tr><tr><td>Penguins</td><td>93.8</td><td>99.3</td><td>99.3</td><td>100.0</td><td>99.3</td><td>99.3</td></tr><tr><td>ReasonColored</td><td>92.8</td><td>97.6</td><td>98.4</td><td>98.8</td><td>98.8</td><td>100.0</td></tr><tr><td>RuinNames</td><td>64.8</td><td>74.8</td><td>69.6</td><td>70.0</td><td>80.0</td><td>93.6</td></tr><tr><td>TranslationError</td><td>52.4</td><td>68.4</td><td>60.4</td><td>60.0</td><td>63.6</td><td>75.2</td></tr><tr><td>Temporal</td><td>86.4</td><td>98.4</td><td>93.2</td><td>96.8</td><td>98.0</td><td>100.0</td></tr><tr><td>WordSort</td><td>27.2</td><td>36.4</td><td>16.0</td><td>14.8</td><td>42.0</td><td>56.0</td></tr><tr><td>Average</td><td>74.5</td><td>82.3</td><td>80.8</td><td>80.0</td><td>83.7</td><td>91.1</td></tr></table>
217
+
218
+ Table 4: Hits@K accuracy of fLSA versus directly sampling without tags (No Tag), two-step sampling with LLM-generated outline (GenOutline), traditional LDA (TradLDA), traditional LDA with LLM-generated tag descriptions (TradLDA+LLM) (Li et al., 2023), and the prompting baseline (Prompting) (Mu et al., 2024) on 12 challenging tasks from BBH benchmark (Suzgun et al., 2022) and 7 tasks from MATH (Hendrycks et al., 2021).
219
+
220
+ $fLSA$ by 7-9 points. These results indicate that hierarchical sampling using tags derived from the domain-specific documents via $fLSA$ produces more effective output solutions, thereby increasing the likelihood of hitting the correct answer with K samples.
221
+
222
+ Next, we compare $fLSA$ with hierarchical sampling using existing tagging approaches. $fLSA$ tags expand the output space in the directions that lead to correct answers more often than TradLDA on 16 out of 19 tasks. It brings a significant improvement of 3-10 points over TradLDA. Similarly, compared with TradLDA+LLM, $fLSA$ achieves higher Hits@K Accuracy on 17 out of 19 tasks and improves the average accuracy by 5-11 points across BBH and MATH. Compared with the Prompting baseline, $fLSA$ achieves higher Hits@K Accuracy on 14 out of 19 tasks. Overall, hierarchical sampling with $fLSA$ tags improves Hits@K Accuracy significantly over existing tagging approaches by 2-11 points on average.
225
+
226
+ # 5.3 Learning Tags with Smaller LLMs
227
+
228
+ In addition to GPT-4, we also evaluate $fLSA$ using a smaller LLM - Qwen-2.5-7B. We run $fLSA$ using Qwen-2.5-7B as the base model (while the other hyper-parameters remain unchanged) and measure the Hits@K Accuracy of hierarchical sampling using the learned tags on BBH. We discover that the average accuracy drops by 5 points compared to the tags learned using GPT-4, but it still outperforms TradLDA and Prompting (using the much larger GPT-4 model) by 3-6 points.
229
+
230
+ # 5.4 Ablation Study
231
+
232
+ We further examine how the number of tags learned through $fLSA$ influences its ability to expand the output space. Specifically, we compare the Hits@K Accuracy of hierarchical sampling with 20, 50, and 100 fLSA tags on BBH tasks. Results show that the accuracy drops by 3 points when using 20 instead of 50 tags, whereas increasing the number of tags from 50 to 100 yields minimal change (see the Appendix for detailed results). This suggests that learning a sufficient - even redundant - number of tags can be beneficial for effectively expanding the output space.
235
+
236
+ # 6 Conclusion
237
+
238
+ We introduced $fLSA$ , a foundation-model-based Latent Semantic Analysis method that aims to uncover the latent semantic structures in document collections by iteratively clustering and tagging document segments based on document-level contexts. Our experiments on story writing, math and multi-step reasoning tasks show that $fLSA$ tags are more informative in reconstructing the original texts than tags generated by existing tagging methods. $fLSA$ tags are also useful in expanding the output space via hierarchical sampling to increase the likelihood of discovering correct solutions to complex reasoning problems. These results suggest the potential of $fLSA$ for generating effective task guidelines given some worked-out examples, along with hierarchical sampling and searching for problem solutions on challenging reasoning tasks.
239
+
240
+ # 7 Limitations
241
+
242
+ One limitation of $fLSA$ is that some of the tags produced by $fLSA$ may be semantically similar to each other and could ideally be merged into a single tag. This limitation could be addressed by incorporating a tag fusion step in the EM algorithm, which we leave for future work. In addition, although the $fLSA$ algorithm is agnostic to the LLM being used, we only test it on GPT-4 (which is one of the most powerful and widely used LLMs). Testing the algorithm on smaller models would be interesting future work.
243
+
244
+ This work also has potential risks. One major risk is that the tags learned using $fLSA$ may reflect the undesirable biases within the LLM being used. Integrating bias detection and mitigation techniques within the algorithm could be useful for addressing the issue.
245
+
246
+ # References
247
+
248
+ Pritom Saha Akash, Jie Huang, and Kevin Chen-Chuan Chang. 2023. Let the pretrained language models "imagine" for short texts topic modeling. Preprint, arXiv:2310.15420.
249
+ Sebastian Arnold, Rudolf Schneider, Philippe Cudré-Mauroux, Felix A. Gers, and Alexander Löser. 2019. SECTOR: A neural model for coherent topic segmentation and classification. Transactions of the Association for Computational Linguistics, 7:169-184.
250
+ David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993-1022.
251
+ Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. Preprint, arXiv:2303.12712.
252
+ Chunyuan Deng, Yilun Zhao, Xiangru Tang, Mark Gerstein, and Arman Cohan. 2024. Investigating data contamination in modern benchmarks for large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8706-8719, Mexico City, Mexico. Association for Computational Linguistics.
253
+ Adji B. Dieng, Francisco J. R. Ruiz, and David M. Blei. 2020. Topic Modeling in Embedding Spaces. Transactions of the Association for Computational Linguistics, 8:439-453.
254
+ Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.
255
+ Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics.
256
+ Mathew Gillings and Andrew Hardie. 2022. The interpretation of topic models for scholarly analysis: An evaluation and critique of current practice. Digital Scholarship in the Humanities, 38(2):530-543.
257
+ Goran Glavas, Federico Nanni, and Simone Paolo Ponzetto. 2016. Unsupervised text segmentation using semantic relatedness graphs. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 125-130, Berlin, Germany. Association for Computational Linguistics.
258
+
259
+ Xinyu Guan, Li Lyna Zhang, Yifei Liu, Ning Shang, Youran Sun, Yi Zhu, Fan Yang, and Mao Yang. 2025. rstar-math: Small llms can master math reasoning with self-evolved deep thinking. Preprint, arXiv:2501.04519.
260
+ Marti A. Hearst. 1997. Text tiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33-64.
261
+ Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. CoRR, abs/2103.03874.
262
+ T Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval.
263
+ Thomas Hofmann. 2001. Unsupervised learning by probabilistic latent semantic analysis. Machine learning, 42:177-196.
264
+ Thomas Hofmann et al. 1999. Probabilistic latent semantic analysis. In UAI, volume 99, pages 289-296.
265
+ Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo. 2024. Gpt-4 passes the bar exam. Philosophical Transactions of the Royal Society A, 382(2270):20230254.
266
+ Omri Koshorek, Adir Cohen, Noam Mor, Michael Rotman, and Jonathan Berant. 2018. Text segmentation as a supervised learning task. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 469-473, New Orleans, Louisiana. Association for Computational Linguistics.
267
+ Tak Yeon Lee, Alison Smith, Kevin Seppi, Niklas Elmqvist, Jordan Boyd-Graber, and Leah Findlater. 2017. The human touch: How non-expert users perceive, interpret, and fix topic models. International Journal of Human-Computer Studies, 105:28-42.
268
+ Dai Li, Bolun Zhang, and Yimang Zhou. 2023. Can large language models (llm) label topics from a topic model?
269
+ Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, and Yue Zhang. 2023. Evaluating the logical reasoning ability of chatgpt and gpt-4. arXiv preprint arXiv:2304.03439.
270
+ Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1727-1736, New York, New York, USA. PMLR.
271
+
272
+ Yida Mu, Chun Dong, Kalina Bontcheva, and Xingyi Song. 2024. Large language models offer an alternative to the traditional approach of topic modelling. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 10160-10171, Torino, Italia. ELRA and ICCL.
273
+ OpenAI. 2023. Gpt-4 technical report. Preprint, arXiv:2303.08774.
274
+ OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, et al. 2024. GPT-4 technical report. Preprint, arXiv:2303.08774.
277
+
278
+ Chau Pham, Alexander Hoyle, Simeng Sun, Philip Resnik, and Mohit Iyyer. 2024. TopicGPT: A prompt-based topic modeling framework. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 2956–2984, Mexico City, Mexico. Association for Computational Linguistics.
279
+
280
+ Qwen: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. 2025. Qwen2.5 technical report. Preprint, arXiv:2412.15115.
281
+
282
+ Martin Riedl and Chris Biemann. 2012. TopicTiling: A text segmentation algorithm based on LDA. In Proceedings of ACL 2012 Student Research Workshop, pages 37-42, Jeju Island, Korea. Association for Computational Linguistics.
283
+
284
+ Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. Preprint, arXiv:1703.01488.
287
+
288
+ Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261.
289
+
290
+ Han Wang, Nirmalendu Prakash, Nguyen Khoi Hoang, Ming Shan Hee, Usman Naseem, and Roy Ka-Wei Lee. 2023. Prompting large language models for topic modeling. In 2023 IEEE International Conference on Big Data (BigData), pages 1236-1241.
291
+
292
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837.
293
+
294
+ Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, and Chi Wang. 2023. An empirical study on challenging math problem solving with gpt-4. arXiv preprint arXiv:2306.01337.
295
+
296
+ Weijia Xu, Andrzej Banburski, and Nebojsa Jojic. 2024. Reprompting: Automated chain-of-thought prompt inference through Gibbs sampling. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 54852-54865. PMLR.
297
+
298
+ Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, and Xueqi Cheng. 2019. Outline generation: Understanding the inherent content structure of documents. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 745-754.
299
+
300
+ Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia, Linqi Song, Mingjie Zhan, and Hongsheng Li. 2023. Solving challenging math word problems using gpt-4 code interpreter with code-based self-verification. Preprint, arXiv:2308.07921.
301
+
302
+ # A Appendix
303
+
304
+ # A.1 Datasets
305
+
306
+ We evaluate $fLSA$ against various baselines on story writing, math problem solving and multi-step reasoning benchmarks. We use WritingPrompts (Fan et al., 2018), a story writing dataset that contains 300K human-written stories paired with writing prompts from an online forum. We randomly sample 100 stories from the training set for clustering and tagging. We set the number of tags to 100 for all tagging approaches. For math problem solving, we use MATH (Hendrycks et al., 2021), a popular math benchmark that contains high school math competition problems on seven subjects including Prealgebra, Algebra, Number Theory, Counting and Probability, Geometry, Intermediate Algebra and Precalculus. We learn 100 tags on 1K randomly sampled problem solutions from the training set. We also experiment on the Big-Bench Hard (BBH) benchmark (Suzgun et al., 2022). The original benchmark includes 23 challenging multi-step reasoning tasks, but each task only includes three step-by-step solution examples. Instead, we take the 12 tasks used in Xu et al. (2024) and learn the tags on the problem solutions (produced by their automatic prompt inference algorithm) for the 179 training problems. We set the number of tags to 50 for BBH.<sup>7</sup>
307
+
308
+ # A.2 Large Language Model Setup
309
+
310
+ For clustering and tagging, we use GPT-4 (OpenAI et al., 2024) and Qwen-2.5-7B (a much smaller LLM introduced in Qwen et al. (2025)). For GPT-4, we set $top\_p = 0.5$ , sampling temperature $\tau = 1.0$ , zero frequency and presence penalty. For Qwen-2.5-7B, we set $top\_p = 0.5$ , sampling temperature $\tau = 0.1$ , zero frequency and presence penalty.
311
+
312
+ We also use GPT-4 with $top\_p = 0.5$ to estimate the reconstruction log-likelihood. We set the temperature $\tau = 1.0$ when sampling alternative segments and $\tau = 0$ when choosing the best continuation.
313
+
314
+ To measure Hits@K Accuracy, we use ChatGPT (gpt-3.5-turbo; OpenAI, 2023) instead of GPT-4. We set $top\_p = 0.5$ and sampling temperature $\tau = 1.0$ when sampling solutions from ChatGPT.
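As a rough illustration of these settings, the snippet below shows how the reported decoding parameters could be passed to an OpenAI-compatible chat completions endpoint; the model names, prompts, and client setup are placeholders rather than the exact calls used in our experiments.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Clustering/tagging calls (GPT-4 settings reported above).
tagging_response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "Assign a tag to this text segment: ..."}],
    top_p=0.5,
    temperature=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
)

# Hits@K evaluation calls (ChatGPT settings reported above).
eval_response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Solve the following problem step by step: ..."}],
    top_p=0.5,
    temperature=1.0,
)
```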
315
+
316
+ # A.3 Ablation Study
317
+
318
+ Table 5 shows the ablation study results on the number of tags.
319
+
320
+ <table><tr><td></td><td>20 Tags</td><td>50 Tags</td><td>100 Tags</td></tr><tr><td>Date</td><td>98.0</td><td>98.8</td><td>99.2</td></tr><tr><td>Formal</td><td>63.2</td><td>93.2</td><td>80.8</td></tr><tr><td>Geometric</td><td>86.4</td><td>87.6</td><td>86.0</td></tr><tr><td>Logical</td><td>98.9</td><td>99.5</td><td>99.1</td></tr><tr><td>Movie</td><td>93.6</td><td>95.2</td><td>94.8</td></tr><tr><td>ObjCount</td><td>99.6</td><td>95.2</td><td>99.6</td></tr><tr><td>Penguins</td><td>99.3</td><td>99.3</td><td>99.3</td></tr><tr><td>ReasonColored</td><td>100.0</td><td>100.0</td><td>100.0</td></tr><tr><td>RuinNames</td><td>90.8</td><td>93.6</td><td>95.6</td></tr><tr><td>TranslationError</td><td>72.8</td><td>75.2</td><td>72.4</td></tr><tr><td>Temporal</td><td>98.8</td><td>100.0</td><td>99.2</td></tr><tr><td>WordSort</td><td>57.6</td><td>56.0</td><td>61.6</td></tr><tr><td>Average</td><td>88.3</td><td>91.1</td><td>90.6</td></tr></table>
321
+
322
+ Table 5: Ablation study: Hits@K Accuracy on BBH tasks using a varying number of fLSA tags.
flsalearningsemanticstructuresindocumentcollectionsusingfoundationmodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b764df43b92bdc33d153711016dfda7312f8bc82d1a527bfa47b56b3072a04ab
3
+ size 700380
flsalearningsemanticstructuresindocumentcollectionsusingfoundationmodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:18902f7f68d3ff90c84187d0a025ba052f4a541e03d2a8ea1f7ab7316ec17606
3
+ size 404505
iknowaudiointegratingknowledgegraphswithaudiolanguagemodels/0abba7d3-2569-484c-9709-1487d1f8a82e_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:edc6b7689b7d937146802ca09f8c1fa0dfbd470add27c69e205910a06fd5e92d
3
+ size 109512
iknowaudiointegratingknowledgegraphswithaudiolanguagemodels/0abba7d3-2569-484c-9709-1487d1f8a82e_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:912c6c4ffd315dcfc2d38b3f0a5817a864c8519b45915ae1173d03303d390059
3
+ size 131027
iknowaudiointegratingknowledgegraphswithaudiolanguagemodels/0abba7d3-2569-484c-9709-1487d1f8a82e_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ddbbca597d151af95fa08e138a31f582878bd98ab0fc623de13b891bcb4e62af
3
+ size 1054014
iknowaudiointegratingknowledgegraphswithaudiolanguagemodels/full.md ADDED
@@ -0,0 +1,442 @@
 
 
 
 
1
+ # iKnow-audio: Integrating Knowledge Graphs with Audio-Language Models
2
+
3
+ Michel Olvera<sup>1</sup> Changhong Wang<sup>1</sup> Paraskevas Stamatiadis<sup>1</sup> Gael Richard<sup>1</sup> Slim Essid<sup>2*</sup>
4
+ <sup>1</sup>LTCI, Télécom Paris, Institut Polytechnique de Paris
5
+ <sup>2</sup>NVIDIA {olvera, changhong.wang}@telecom-paris.fr
6
+
7
+ # Abstract
8
+
9
+ Contrastive Language-Audio Pretraining (CLAP) models learn by aligning audio and text in a shared embedding space, enabling powerful zero-shot recognition. However, their performance is highly sensitive to prompt formulation and language nuances, and they often inherit semantic ambiguities and spurious correlations from noisy pretraining data. While prior work has explored prompt engineering, adapters, and prefix tuning to address these limitations, the use of structured prior knowledge remains largely unexplored. We present iKnow-audio, a framework that integrates knowledge graphs with audio-language models to provide robust semantic grounding. iKnow-audio builds on the Audio-centric Knowledge Graph (AKG), which encodes ontological relations comprising semantic, causal, and taxonomic connections reflective of everyday sound scenes and events. By training knowledge graph embedding models on the AKG and refining CLAP predictions through this structured knowledge, iKnow-audio improves disambiguation of acoustically similar sounds and reduces reliance on prompt engineering. Comprehensive zero-shot evaluations across six benchmark datasets demonstrate consistent gains over baseline CLAP, supported by embedding-space analyses that highlight improved relational grounding. Resources are publicly available at https://github.com/michelolzam/iknow-audio
10
+
11
+ # 1 Introduction
12
+
13
+ In recent years, self-supervised and multimodal models such as contrastive language-audio pretraining (CLAP) (Elizalde et al., 2023) have shown impressive performance in audio understanding tasks by leveraging large-scale contrastive learning between audio and natural language descriptions. While excelling at capturing general semantic correspondences, these models often lack a deeper understanding of the relational and contextual structure of real-world sound events. Common deficiencies include disambiguating acoustically similar sounds, modeling co-occurrence patterns or hierarchical relationships, and a lack of commonsense
14
+
15
+ ![](images/11d5d1f229329b74fa17d60c3b141245d75497ecfd534749ce00bc33c038ddbc.jpg)
16
+ Figure 1: Audio understanding requires contextual and background knowledge, which can be represented using a knowledge graph linking sounds and related concepts.
17
+
18
+ grounding necessary for reasoning about sounds in novel contexts. Additionally, the performance of these models relies heavily on prompt engineering. Indeed, previous work has shown that changes in prompt wording and formatting can substantially affect performance in zero-shot audio classification tasks (Olvera et al., 2024).
19
+
20
+ Understanding real-world sounds often requires contextual and background knowledge. For example, as illustrated in Figure 1, the sound of sirens may indicate the presence of emergency vehicles (often associated with accidents, fires, or emergencies) and frequently co-occurs with engine noise, people shouting, or braking sounds. Such relationships extend beyond mere labels; they reflect structured, situational knowledge that is paramount for accurate interpretation.
21
+
22
+ Yet existing datasets for sound event detection and classification largely catalog sounds as independent categories. Their annotations and underlying taxonomies lack a structured semantic representation of how sounds interconnect.
23
+
24
+ To address this gap, we introduce iKnow-audio, a framework for integrating Knowledge Graphs (KGs) with audio-language models. iKnow-audio is built on two key components: (i) the Audio-centric Knowledge Graph (AKG), a general-purpose, text-based KG that encodes rich relational information about sounds, and (ii) CLAP-KG, a pipeline that refines CLAP predictions using embeddings derived from our proposed AKG.
25
+
26
+ While a knowledge graph like AKG is a powerful source of relational knowledge, querying it directly using symbolic methods (e.g., rule-based lookup or SPARQL-style queries) is limited to exact matches and fails to generalize or infer new knowledge beyond what's explicitly encoded. Knowledge Graph Embedding (KGE) models address this limitation by mapping entities and relations into continuous vector spaces, allowing for: generalization to unseen or sparse triples through latent similarity, robust reasoning under uncertainty or label noise, and efficient link prediction (e.g., inferring yelping as a plausible child category of dog even if not explicitly stated). By combining these embeddings with CLAP, iKnow-audio grounds audio-language predictions in factual knowledge while reducing reliance on prompt engineering and improving robustness in low-resource or zero-shot settings.
27
+
28
+ In summary, we present the following contributions: (1) iKnow-audio: a novel framework that integrates knowledge graphs with audio-language models for contextual and relational audio understanding. (2) AKG: Audio-centric Knowledge Graph. A comprehensive KG for audio understanding that encodes rich relational semantics among everyday sounds. (3) CLAP-KG: a pipeline that leverages AKG embeddings to refine CLAP predictions. (4) Systematic zero-shot evaluation on six benchmark datasets, showing consistent improvements over baseline CLAP.
29
+
30
+ # 2 Related Work
31
+
32
+ Multimodal and Domain-Specific Knowledge Graphs Conventional knowledge graphs are typically limited to the textual space, restricting their efficacy on other modalities (Hogan et al., 2021).
33
+
34
+ Recent research has aimed to overcome this limitation by integrating cross-modal knowledge. Wang et al. (2023) first constructed a multimodal KG incorporating text, image, video, and audio modalities, supported by extensively annotated datasets. A unified pipeline to help construct multimodal KGs was proposed by Gong et al. (2024). Wei et al. (2024) built domain-specific KGs by connecting medical images to their related biomedical concepts. To the best of our knowledge, there are currently no knowledge graphs representing rich relational semantics among everyday sounds.
35
+
36
+ Vision-Language Models with KGs Large language models (LLMs) are prone to hallucinations, which has motivated the integration of factual knowledge to improve reasoning in vision-language models. One approach leverages knowledge graphs constructed via vision-language alignment and cross-modal similarity recalibration to enhance LLMs' multimodal reasoning abilities (Liu et al., 2025). Similarly, GraphAdapter (Li et al., 2023) fine-tunes models using dual KGs to strengthen vision-language understanding. Other work introduces cross-modal alignment modules to reconcile knowledge from images and text during fine-tuning (Lee et al., 2024), while retrieve-and-rerank frameworks have been proposed to augment Contrastive Language-Image Pretraining with structured knowledge (Gao et al., 2025). Together, these methods show that KGs improve semantic grounding and mitigate spurious correlations in vision-language tasks.
37
+
38
+ Leveraging KGs for Audio While knowledge graphs have been actively explored in vision-language research, their use in audio understanding remains limited. Penamakuri et al. (2025) introduced Audiopedia, a framework for audio question answering augmented with external knowledge. While their method also leverages KGs, it relies on general-purpose knowledge resources (e.g., from Wikidata) rather than knowledge bases tailored to audio understanding. In contrast, our work contributes the first KG specifically designed for sound events and auditory scenes. Our work is closely related to (Gao et al., 2025), but their method is based on prompt engineering. In contrast, we only use class labels as prompts. This simplification shifts the focus to the core semantic connection between audio and language while leveraging the AKG to enhance reasoning.
39
+
40
+ ![](images/a13d1fe9f0298125f19a3eb968c69e8594806baa66eda7883059b1e7b47e3aab.jpg)
41
+ Figure 2: iKnow-audio: Our framework enhances zero-shot audio classification via reasoning over the Audiocentric Knowledge Graph (AKG). (a) CLAP initially misranks the correct label (e.g., baby) due to acoustic ambiguity with other labels. (b) We query the AKG using top-k predictions to retrieve related concepts via relevant relations (e.g., has parent). (c) Enriched prompts are compared with the audio embedding, and similarity scores are aggregated to re-rank predictions, this time correctly identifying baby as the top label. This refinement demonstrates the utility of structured symbolic knowledge for disambiguating acoustic scenes and improving interpretability.
42
+
43
+ # 3 iKnow-audio: Integrating Knowledge Graphs with Audio-Language Models
44
+
45
+ We introduce iKnow-audio, a framework that enhances audio-language models with structured knowledge for improved reasoning. As outlined in Figure 2, it combines a Knowledge Graph Embedding (KGE) model with a pipeline for refining zero-shot predictions of CLAP. We demonstrate iKnow-audio using CLAP, but the framework is adaptable to any aligned audio-language model.
46
+
47
+ # 3.1 Knowledge Graph Embedding Models
48
+
49
+ To enable structured reasoning over audio-centric relationships, we employ KGE models that learn vector representations for entities and relations. These embeddings support link prediction, inferring plausible but unobserved relations between audio concepts.
50
+
51
+ We represent the knowledge graph as $\mathcal{G} = (\mathcal{E},\mathcal{R})$ , where $\mathcal{E}$ denotes the set of entities (e.g., siren, barking) and $\mathcal{R}$ the set of relation types (e.g., belongs to class, co-occurs with). Each factual statement is encoded as a triple $(h,r,t)\in \mathcal{E}\times \mathcal{R}\times \mathcal{E}$ , where $h$ is the head entity, $r$ the relation, and $t$ the tail entity. For example, the triple (dish clinking, occurs in, kitchen) captures a spatial context in which the sound typically appears.
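For illustration, triples of this form can be represented as simple typed records; the labels below are toy examples in the spirit of the AKG, not entries copied from the released graph.

```python
from typing import NamedTuple

class Triple(NamedTuple):
    head: str      # sound event or source label, e.g. "dish_clinking"
    relation: str  # relation type from the AKG schema
    tail: str      # related concept, e.g. "kitchen"

# A few illustrative AKG-style triples (toy examples).
akg = [
    Triple("dish_clinking", "occurs_in", "kitchen"),
    Triple("siren", "co-occurs_with", "engine_noise"),
    Triple("barking", "emitted_by", "dog"),
]

entities = {e for t in akg for e in (t.head, t.tail)}
relations = {t.relation for t in akg}
```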
52
+
53
+ We define a scoring function $\phi_{\mathrm{KG}}: \mathcal{E} \times \mathcal{R} \times \mathcal{E} \to \mathbb{R}$ , which assigns a plausibility score to a given triple $(h, r, t)$ . In our zero-shot classification pipeline, this function is primarily used for link prediction, specifically tail prediction, where, given a head entity $h$ and relation $r$ , we rank candidate tail entities $t \in \mathcal{E}$ based on their plausibility.
54
+
55
+ Higher scores indicate greater semantic compatibility, enabling the discovery of relevant or missing connections between audio concepts.
56
+
57
+ To model these interactions, we experiment with several KGE models: (1) TransE (Bordes et al., 2013), which models relations as translations in the embedding space; (2) TransH (Wang et al., 2014) and TransR (Lin et al., 2017), which extend TransE by introducing relation-specific projection spaces; (3) ComplEx (Trouillon et al., 2016), which leverages complex-valued embeddings to model asymmetric relations; (4) RotatE (Sun et al., 2019), which represents each relation as a rotation in the complex vector space $\mathbb{C}^d$; and (5) GCN-based (graph convolutional network) models (Schlichtkrull et al., 2018), which propagate information through the graph structure via message passing.
58
+
59
+ In this work, we adopt RotatE as the KGE model due to its strong empirical performance on our proposed AKG (see Section 4). RotatE embeds entities and relations in a complex vector space $\mathbb{C}^d$ , and models each relation as a rotation in that space. The score of a triple $(h,r,t)$ is given by:
60
+
61
+ $$
62
+ \phi_ {\mathrm {K G}} (h, r, t) = - \left\| \mathbf {h} \circ \mathbf {r} - \mathbf {t} \right\| _ {2}, \tag {1}
63
+ $$
64
+
65
+ where $\mathbf{h},\mathbf{r},\mathbf{t}\in \mathbb{C}^d$ are the embeddings of the head, relation, and tail, respectively, and $\circ$ denotes the element-wise (Hadamard) product. A higher score indicates a more plausible triple.
66
+
67
+ This scoring mechanism enables structured reasoning over multi-relational knowledge, which we exploit to retrieve semantically related entities via link prediction.
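A minimal NumPy sketch of the RotatE score in Equation 1 and the tail-ranking step used for link prediction is shown below; the arrays are assumed to be complex-valued embeddings from a trained model, with relation embeddings of unit modulus, and all function names are illustrative.

```python
import numpy as np

def rotate_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    """Plausibility score of a triple (h, r, t): -|| h o r - t ||_2 (Eq. 1).

    h, r, t are complex-valued embeddings of equal dimension; in RotatE the
    relation embedding has unit modulus, so multiplying by r rotates h.
    """
    return float(-np.linalg.norm(h * r - t))

def rank_tails(h: np.ndarray, r: np.ndarray, tail_embeddings: np.ndarray) -> np.ndarray:
    """Rank all candidate tail entities by plausibility for a fixed (h, r).

    tail_embeddings has shape (num_entities, dim); returns indices of tails,
    most plausible first.
    """
    scores = -np.linalg.norm(h * r - tail_embeddings, axis=1)
    return np.argsort(-scores)
```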
68
+
69
+ # 3.2 Zero-Shot Classification with CLAP
70
+
71
+ We leverage CLAP (Elizalde et al., 2023), a pretrained model that embeds audio and text into a shared representation space. This enables zero-shot audio classification by computing similarity scores between audio inputs and candidate label embeddings.
72
+
73
+ Let $\mathcal{A}$ denote the space of input audio signals and $\mathcal{L}$ the space of textual labels. Given a set of target class labels $C = \{c_1,\dots ,c_N\} \subset \mathcal{L}$ and an input audio sample $a\in \mathcal{A}$ , CLAP maps both modalities into a joint embedding space via an audio encoder $\phi_{\mathrm{A}}:\mathcal{A}\to \mathbb{R}^{d}$ , and a text encoder $\phi_{\mathrm{T}}:\mathcal{L}\rightarrow \mathbb{R}^d$
74
+
75
+ CLAP formulates classification as a nearest-neighbor retrieval task (Figure 2 (a)), where the predicted label $\hat{c} \in C$ is obtained by maximizing cosine similarity:
76
+
77
+ $$
78
+ \hat {c} = \underset {c \in C} {\arg \max } \operatorname {s i m} \left(\phi_ {\mathrm {A}} (a), \phi_ {\mathrm {T}} (c)\right), \tag {2}
79
+ $$
80
+
81
+ where $\mathrm{sim}(\cdot ,\cdot)$ denotes cosine similarity. We denote the top- $k$ retrieved labels as:
82
+
83
+ $$
84
+ C_{k} = \{\hat{c}^{(1)}, \dots, \hat{c}^{(k)}\}, \quad \text{ranked by similarity}.
85
+ $$
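The retrieval step of Equation 2 and the selection of $C_k$ amount to a cosine-similarity ranking, as sketched below; `phi_A` and `phi_T` stand for hypothetical wrappers around a CLAP checkpoint that return NumPy embeddings.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_topk(audio, labels, phi_A, phi_T, k=5):
    """Rank candidate labels by cosine similarity to the audio embedding (Eq. 2)."""
    a_emb = phi_A(audio)
    sims = {c: cosine(a_emb, phi_T(c)) for c in labels}
    ranked = sorted(sims, key=sims.get, reverse=True)
    return ranked[:k], sims  # C_k and the raw CLAP scores s(c)
```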
86
+
87
+ # 3.3 Enhancing CLAP Inference with AKG
88
+
89
+ To enhance interpretability and robustness, we refine the predictions $C_k$ via symbolic reasoning over $\mathcal{G}$ . This produces enriched, context-aware prompts that reflect the semantic neighborhood of each class. This process is depicted in Figure 2 (b).
90
+
91
+ Link Prediction To enrich top- $k$ CLAP predictions with structured knowledge, we perform link prediction using the trained KGE model $\phi_{\mathrm{KG}}$ . Given a predicted class label $\hat{c} \in C_k$ , we use $\phi_{\mathrm{KG}}$ to infer the most semantically plausible tail entities $t \in \mathcal{E}$ connected to $\hat{c}$ via a curated subset of informative relations $\mathcal{R}_q \subset \mathcal{R}$ . These predicted tails serve as contextual signals to refine and expand the textual prompts used for similarity computation within the CLAP model.
92
+
93
+ Contextual Prompt Expansion For each top prediction $\hat{c} \in C_k$ , we query the knowledge graph to retrieve candidate tail entities connected via informative relations:
94
+
95
+ $$
96
+ \mathcal {T} _ {c} = \left\{\left(\hat {c}, r, t\right) \in \mathcal {T} \mid r \in \mathcal {R} _ {q} \right\},
97
+ $$
98
+
99
+ where $\mathcal{R}_q\subset \mathcal{R}$ is a curated set of relations used for semantic enrichment (e.g., produces).
100
+
101
+ Using the KGE model $\phi_{\mathrm{KG}}$ , we rank tail candidates $t \in \mathcal{E}$ for each relation $r \in \mathcal{R}_q$ based on their
102
+
103
+ plausibility in completing the triple $(\hat{c},r,t)$ . We select the top- $m$ most plausible tails:
104
+
105
+ $$
106
+ \mathcal {T} _ {c} ^ {\mathrm {t o p}} = \left\{t _ {1} ^ {*}, \dots , t _ {m} ^ {*} \right\},
107
+ $$
108
+
109
+ where $t_i^* \in \arg \max_{t \in \mathcal{E}} \text{score}(\hat{c}, r, t; \phi_{\mathrm{KG}})$ , and $\text{score}(\cdot)$ is the plausibility score assigned by $\phi_{\mathrm{KG}}$ .
110
+
111
+ To generate enriched prompts, we concatenate each class label $\hat{c}$ with its associated tail entities $t_i^*$ . For example, prompts can take the form:
112
+
113
+ $$
114
+ p _ {\hat {c}, t _ {i} ^ {*}} = \operatorname {c o n c a t} (\hat {c}, t _ {i} ^ {*}).
115
+ $$
116
+
117
+ Let $P_{\hat{c}} = \{p_{\hat{c},t_1^*},\dots ,p_{\hat{c},t_m^*}\}$ be the set of knowledge-enriched prompts for class $\hat{c}$ .
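A sketch of this prompt-expansion step is given below; `kge_top_tails` is a hypothetical helper that queries the trained KGE model (for instance via the tail ranking sketched earlier) and returns the $m$ most plausible tail strings, and the plain string concatenation mirrors the concat operation above.

```python
def expand_prompts(label: str, relations_q, kge_top_tails, m: int = 3):
    """Build the knowledge-enriched prompts P_c for one predicted class label.

    kge_top_tails(head, relation, m) is assumed to return the m most plausible
    tail entity strings under the trained KGE model.
    """
    prompts = []
    for r in relations_q:
        for tail in kge_top_tails(label, r, m):
            prompts.append(f"{label} {tail}")  # concat(c, t*)
    return prompts
```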
118
+
119
+ Scoring with Enriched Prompts Each enriched prompt $p \in P_{\hat{c}}$ is encoded using the CLAP text encoder $\phi_{\mathrm{T}}$ , and scored against the input audio $a \in \mathcal{A}$ via cosine similarity:
120
+
121
+ $$
122
+ s (p) = \operatorname {s i m} \left(\phi_ {\mathrm {A}} (a), \phi_ {\mathrm {T}} (p)\right). \tag {3}
123
+ $$
124
+
125
+ This yields a refined similarity score for each knowledge-augmented prompt, enabling reranking of the initial predictions $C_k$ based on semantically enriched textual context.
126
+
127
+ Aggregation and Re-ranking To consolidate evidence from both the original label and its augmented prompts, we aggregate their similarity scores into a single score per class (Figure 2 (c)).
128
+
129
+ For each class $\hat{c} \in C_k$ , let $s(\hat{c}) = \mathrm{sim}(\phi_{\mathrm{A}}(a), \phi_{\mathrm{T}}(\hat{c}))$ denote the original CLAP score, and $\{s(p) \mid p \in P_{\hat{c}}\}$ the scores of its enriched prompts. We define the aggregated score $\tilde{s}(\hat{c})$ using a log-sum-exp fusion:
130
+
131
+ $$
132
+ \tilde {s} (\hat {c}) = \log \left(\exp (s (\hat {c})) + \sum_ {p \in P _ {\hat {c}}} \exp (s (p))\right). \tag {4}
133
+ $$
134
+
135
+ This operation softly pools evidence across the original and contextualized prompts, allowing the model to benefit from both raw CLAP predictions and knowledge-enriched signals. Aggregation in Equation 4 is crucial in striking this balance: without it, performance may degrade due to overreliance on contextual prompts, which risks introducing noise or ambiguity. The final class prediction is then obtained by:
136
+
137
+ $$
138
+ c ^ {*} = \arg \max _ {\hat {c} \in C _ {k}} \tilde {s} (\hat {c}). \tag {5}
139
+ $$
140
+
141
+ A detailed description of the algorithm is provided in Appendix A.4.
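Combining the previous sketches, the scoring, log-sum-exp fusion, and re-ranking of Equations 3, 4, and 5 can be outlined as follows; this is a simplified sketch built on the hypothetical helpers introduced above, not the released implementation.

```python
import numpy as np

def rerank_with_kg(audio, labels, phi_A, phi_T, relations_q, kge_top_tails, k=5, m=3):
    """Re-rank the top-k CLAP predictions using knowledge-enriched prompts."""
    top_k, clap_scores = zero_shot_topk(audio, labels, phi_A, phi_T, k=k)
    a_emb = phi_A(audio)

    fused = {}
    for c in top_k:
        # Score each enriched prompt against the audio embedding (Eq. 3).
        prompt_scores = [cosine(a_emb, phi_T(p))
                         for p in expand_prompts(c, relations_q, kge_top_tails, m)]
        # Log-sum-exp pooling over the original label and enriched prompts (Eq. 4),
        # computed in a numerically stable way with np.logaddexp.
        fused[c] = float(np.logaddexp.reduce([clap_scores[c]] + prompt_scores))

    return max(fused, key=fused.get)  # c*, Eq. 5
```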
142
+
143
+ ![](images/eceb0b59b25cd88b3b8a10e100101c3c171ff7f0a3a9697ceaddb7d8d0785122.jpg)
144
+ Figure 3: Generation of knowledge triples from SALT.
145
+
146
+ # 4 Knowledge Graph Construction
147
+
148
+ Sound events are ubiquitous and seldom occur in isolation. They are situated within broader contexts that encompass temporal dynamics, causal relations, environmental cues, perceptual attributes, and even human intent. Capturing such relationships is essential for integrating commonsense knowledge, enabling robust inference and better generalization in audio tasks. To move beyond conventional classification paradigms, we construct a domain-specific knowledge graph that encodes these relational semantics among everyday sounds.
149
+
150
+ Unlike general-purpose KGs such as DBpedia (Auer et al., 2007), ConceptNet (Speer et al., 2017), and Wikidata (Vrandecic and Krötzsch, 2014), which offer limited coverage of everyday sounds and lack fine-grained audio semantics and perceptual grounding, our knowledge graph is tailored for auditory scenes, enabling symbolic reasoning aligned with audio-language models.
151
+
152
+ We construct the Audio-centric Knowledge Graph (AKG) to encode structured knowledge about sound events and their semantic and contextual properties. We derive this graph from standardized sound event labels aggregated across over 27 publicly available datasets, as cataloged in the Standardized Audio event Label Taxonomy (SALT) (Stamatiadis et al., 2024). Our AKG includes entities such as sound-producing sources (e.g., dog, engine), sound events (e.g., barking, idling), and higher-level categorical labels (e.g., domestic animal, vehicle).
153
+
154
+ The schema comprises nine high-level relation
155
+
156
+ ![](images/1f5bf4b6a4a0c0a693e6d18894f4c925e9386b7cd00bc7c76663230655bca450.jpg)
157
+ Figure 4: Generation of knowledge triples from LLMs.
158
+
159
+ categories, each reflecting distinct aspects of auditory context. These categories guide the generation of plausible triples in the format (head, relation, tail), where the head is a standardized sound event label and the relation contextualizes its link to the tail concept. The AKG is formally represented as a collection of triples with relations such as has parent and occurs in. The full relation schema is detailed in Appendix A.1.
160
+
161
+ The AKG triples are generated through two complementary approaches: (1) exploiting the hierarchical structure of the SALT taxonomy (Figure 3), and (2) prompting a Large Language Model (LLM) (Figure 4), both applied to SALT labels. For the LLM-based method, we use Mistral-7B-Instruct (Jiang et al., 2023). The outputs of both methods are merged into an initial raw AKG containing 51,254 triples. The subset of LLM-generated triples is then refined through a two-stage filtering pipeline: an LLM-based plausibility check followed by manual validation. This process yields a curated set of 20,387 unique, high-quality triples, which we refer to as the pruned AKG. The triples derived directly from the SALT taxonomy remain unchanged throughout this process. In subsequent experiments, we train KGE models on both the raw and pruned variants to compare their effectiveness. Details of the LLM prompt templates are provided in Appendix A.5, and summary statistics of the resulting KGs are reported in Appendix A.2.
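As a rough sketch of the LLM-based generation step, one could prompt an instruction-tuned Mistral model through the Hugging Face `transformers` text-generation pipeline as below; the prompt wording and model revision are placeholders, and the actual templates are those given in Appendix A.5.

```python
from transformers import pipeline

# Placeholder model revision and prompt; see Appendix A.5 for the real templates.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

prompt = (
    "For the sound event label 'siren', list plausible knowledge triples of the form "
    "(siren, relation, tail), using relations such as 'co-occurs with', 'occurs in', "
    "or 'warns about'. Answer with one triple per line."
)

output = generator(prompt, max_new_tokens=256, do_sample=False)[0]["generated_text"]
print(output)  # raw triples to be parsed, filtered, and manually validated
```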
162
+
163
+ # 5 Evaluation
164
+
165
+ We evaluate the iKnow-audio framework on zero-shot audio classification across multiple benchmark
166
+
167
+ datasets, using a standardized prompt setup and common retrieval metrics. We also detail the training setup of KGE models on the AKG variants.
168
+
169
+ # 5.1 Datasets
170
+
171
+ We evaluate our approach on six benchmark datasets designed for single-class or multi-label environmental sound classification: ESC50 (Piczak, 2015): A dataset of 2,000 labeled 5-second audio clips spanning 50 environmental sound classes. UrbanSound8K (Salamon et al., 2014): Comprises 8,732 labeled audio excerpts, each with a duration of up to 4 seconds, across 10 urban sound categories. TUT2017 (Mesaros et al., 2016): Contains 6,300 10-second recordings representing 15 distinct acoustic scenes. FSD50K (Fonseca et al., 2022): A collection of 51,197 variable-length audio clips (0.3–30 seconds) from Freesound, annotated across 200 classes. AudioSet (Gemmeke et al., 2017): A large-scale dataset with over 2 million 10-second YouTube clips, covering 527 diverse sound categories. DCASE17-T4 (Mesaros et al., 2017): A curated subset of AudioSet focusing on 17 warning and vehicle sound classes, consisting of 52,763 10-second clips. We utilize all cross-validation folds for ESC50, US8K, and TUT2017, and test sets for AudioSet (20,371), FSD50K (20,462), and DCASE17-T4 (488).
172
+
173
+ # 5.2 Prompt Format
174
+
175
+ We use standard labels from the SALT taxonomy as prompts, formatted in lowercase with underscores replaced by spaces (e.g., dog_barking $\rightarrow$ dog barking). This deliberate choice avoids the variability and dataset-specific tuning typically required by prompt engineering. This setup isolates the contribution of structured knowledge in refining CLAP's predictions, without confounding effects from prompt engineering. Although not optimized for best-case accuracy, it offers a clean and consistent basis for evaluating the impact of knowledge-based reasoning in audio classification.
176
+
177
+ # 5.3 Metrics
178
+
179
+ We use two metrics to measure the performance across datasets.
180
+
181
+ Hit@k: For a given query, Hit@k checks whether the ground-truth label appears among the top k retrieved candidates (k = 1, 3, 5, 10); we report the proportion of queries for which it does.
182
+
183
+ Mean reciprocal rank (MRR): The average of the reciprocal ranks of ground truth across multiple queries. For each query, the reciprocal rank is the inverse of the position at which the ground truth appears in the ranked list.
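Both metrics can be computed directly from per-query ranked label lists, as in the short sketch below (the ranked lists and ground-truth labels are placeholders).

```python
def hit_at_k(rankings, ground_truths, k: int) -> float:
    """Fraction of queries whose ground-truth label appears in the top-k candidates."""
    hits = sum(gt in ranked[:k] for ranked, gt in zip(rankings, ground_truths))
    return hits / len(ground_truths)

def mean_reciprocal_rank(rankings, ground_truths) -> float:
    """Average of 1 / (1-based rank of the ground truth); 0 if it is absent."""
    reciprocal_ranks = []
    for ranked, gt in zip(rankings, ground_truths):
        reciprocal_ranks.append(1.0 / (ranked.index(gt) + 1) if gt in ranked else 0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)
```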
184
+
185
+ # 5.4 KGE Model Training
186
+
187
+ To learn structured representations over our AKG, we trained a suite of KGE models using the PyKEEN library (Ali et al., 2021). We evaluated six established models: TransE (Bordes et al., 2013), TransH (Wang et al., 2014), TransR (Lin et al., 2017), ComplEx (Trouillon et al., 2016), RGCN (Schlichtkrull et al., 2018), and RotatE (Sun et al., 2019). For each model, we conducted a grid search over the following hyperparameters: batch size (values in $\{2^{8}, 2^{9}, 2^{10}, 2^{11}, 2^{12}\}$ ), learning rate (in $\{10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}\}$ ), and embedding dimensionality ( $\{64, 128, 256\}$ ). Training was carried out on two variants of the AKG: (i) a raw version composed of raw triples without refinement, and (ii) the pruned version obtained through LLM-based plausibility verification and manual post-processing to remove duplicates, spurious entries, and inconsistencies in label granularity.
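For reference, a single grid point of this setup can be expressed with the PyKEEN pipeline roughly as follows; the triples file name, split ratios, and epoch count are placeholders, and the exact arguments may vary across PyKEEN versions.

```python
from pykeen.pipeline import pipeline
from pykeen.triples import TriplesFactory

# Load AKG triples stored as tab-separated (head, relation, tail) strings.
tf = TriplesFactory.from_path("akg_pruned.tsv")  # placeholder path
training, validation, testing = tf.split([0.8, 0.1, 0.1])

result = pipeline(
    training=training,
    validation=validation,
    testing=testing,
    model="RotatE",
    model_kwargs=dict(embedding_dim=256),                   # one grid point
    training_kwargs=dict(num_epochs=200, batch_size=1024),  # placeholders
    optimizer_kwargs=dict(lr=1e-3),
    random_seed=0,
)
print(result.metric_results.get_metric("hits@10"))
```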
188
+
189
+ # 6 Results
190
+
191
+ We first report the retrieval performance of the selected KGE models on the AKG, and then evaluate their effectiveness in zero-shot audio classification (ZSAC) using AKG embeddings.
192
+
193
+ # 6.1 Performance of KGE Models
194
+
195
+ <table><tr><td>Model</td><td>Hit@1</td><td>Hit@3</td><td>Hit@5</td><td>Hit@10</td><td>MRR</td></tr><tr><td colspan="6">Raw AKG</td></tr><tr><td>TransE</td><td>1.0</td><td>36.0</td><td>47.4</td><td>59.8</td><td>22.2</td></tr><tr><td>TransH</td><td>6.0</td><td>12.1</td><td>16.7</td><td>22.5</td><td>11.8</td></tr><tr><td>TransR</td><td>3.4</td><td>7.1</td><td>9.6</td><td>13.3</td><td>7.1</td></tr><tr><td>ComplEx</td><td>19.6</td><td>34.3</td><td>40.9</td><td>50.5</td><td>30.1</td></tr><tr><td>R-GCN</td><td>17.4</td><td>33.8</td><td>43.9</td><td>56.7</td><td>30.0</td></tr><tr><td>RotatE</td><td>37.0</td><td>56.9</td><td>64.8</td><td>73.2</td><td>49.5</td></tr><tr><td colspan="6">Pruned AKG</td></tr><tr><td>TransE</td><td>1.6</td><td>40.8</td><td>50.9</td><td>60.6</td><td>24.3</td></tr><tr><td>TransH</td><td>17.3</td><td>28.9</td><td>35.5</td><td>43.5</td><td>26.1</td></tr><tr><td>TransR</td><td>7.3</td><td>15.0</td><td>18.8</td><td>25.1</td><td>13.6</td></tr><tr><td>ComplEx</td><td>22.7</td><td>35.1</td><td>40.1</td><td>48.2</td><td>31.3</td></tr><tr><td>R-GCN</td><td>28.6</td><td>47.7</td><td>57.4</td><td>68.8</td><td>41.7</td></tr><tr><td>RotatE</td><td>46.4</td><td>61.9</td><td>67.7</td><td>74.0</td><td>56.1</td></tr></table>
196
+
197
+ Table 1: Comparison of KGE models on raw and pruned variants of the AKG. Retrieval results $(\%)$ in terms of Hit@1, Hit@3, Hit@5, Hit@10, and MRR. Best performances are in bold and second-best are underlined.
198
+
199
+ <table><tr><td rowspan="2">Metric</td><td colspan="3">ESC50</td><td colspan="3">US8K</td><td colspan="3">TUT2017</td><td colspan="3">FSD50K</td><td colspan="3">AudioSet</td><td colspan="3">DCASE17-T4</td></tr><tr><td>CLAP</td><td>+KG-agg</td><td>+KG</td><td>CLAP</td><td>+KG-agg</td><td>+KG</td><td>CLAP</td><td>+KG-agg</td><td>+KG</td><td>CLAP</td><td>+KG-agg</td><td>+KG</td><td>CLAP</td><td>+KG-agg</td><td>+KG</td><td>CLAP</td><td>+KG-agg</td><td>+KG</td></tr><tr><td>Hit@1</td><td>93.2</td><td>93.5</td><td>95.4</td><td>82.5</td><td>84.5</td><td>85.9</td><td>37.8</td><td>49.3</td><td>47.9</td><td>61.1</td><td>63.6</td><td>64.0</td><td>18.4</td><td>19.6</td><td>19.9</td><td>37.7</td><td>43.0</td><td>45.9</td></tr><tr><td>Hit@3</td><td>98.8</td><td>99.1</td><td>99.2</td><td>96.6</td><td>95.6</td><td>96.9</td><td>74.9</td><td>82.1</td><td>83.3</td><td>82.8</td><td>80.7</td><td>84.2</td><td>33.1</td><td>31.2</td><td>34.4</td><td>77.3</td><td>76.4</td><td>78.5</td></tr><tr><td>Hit@5</td><td>99.5</td><td>99.5</td><td>99.5</td><td>98.8</td><td>96.3</td><td>98.8</td><td>91.3</td><td>82.8</td><td>91.3</td><td>88.9</td><td>81.1</td><td>88.9</td><td>41.1</td><td>31.5</td><td>41.1</td><td>91.2</td><td>79.5</td><td>91.2</td></tr><tr><td>MRR</td><td>95.9</td><td>96.2</td><td>97.2</td><td>89.6</td><td>90.1</td><td>91.5</td><td>57.7</td><td>64.3</td><td>65.4</td><td>72.2</td><td>71.7</td><td>74.3</td><td>26.5</td><td>25.0</td><td>27.7</td><td>57.3</td><td>58.5</td><td>63.1</td></tr></table>
200
+
201
+ Table 2: Retrieval results (%) in terms of hit@1, hit@3, hit@5, and MRR on the six benchmark datasets. Each dataset has three sub-columns: CLAP (baseline), +KG-agg (CLAP-KG w/o aggregation), and +KG (CLAP-KG). Performance improvement larger than $1\%$ over CLAP is in bold, and improvement of $1\%$ or less is underlined.
202
+
203
+ Table 1 presents a comparison of KGE models trained on our proposed AKG. We evaluated each model on the link prediction task, comparing performance under both the raw and pruned variants of the AKG.
204
+
205
+ Raw vs Pruned Settings Transitioning from the raw to the pruned AKG yields substantial performance gains for all models, underscoring the importance of post-processing triples. Notable improvements include TransH's MRR rising from 11.8 to 26.1 and R-GCN's from 30.0 to 41.7. This supports the notion that spurious triples and inconsistencies in entity labeling can obscure latent relational patterns crucial to learning effective embeddings for link prediction.
206
+
207
+ Model-based Performance RotatE outperforms all models in both raw and pruned settings, achieving the highest MRR (56.1) and leading in all Hit@k metrics. It effectively captures asymmetric and compositional relations such as produces or causes, outperforming simpler translational models like TransE and TransH. R-GCN performs well on the pruned graph thanks to its use of structural information, but it is highly sensitive to noise in the raw graph, where simpler models like TransE and ComplEx perform better. Despite its strengths, R-GCN slightly underperforms RotatE, possibly due to weaker handling of relation directionality or suboptimal tuning. ComplEx, effective for asymmetric relations, shows no notable gains in the pruned setting, performing similarly under both conditions.
208
+
209
+ KGE Model Selection Based on the comparative analysis above, we select RotatE as the backbone model for downstream knowledge reasoning/querying. Its superior link prediction capabilities ensure that the semantic augmentations introduced to CLAP are grounded in plausible, relationally informed expansions of the label space. The robustness of RotatE in both raw and pruned settings
210
+
211
+ further supports its integration into our proposed iKnow-audio framework.
212
+
213
+ # 6.2 Zero-Shot Audio Classification
214
+
215
+ Table 2 presents ZSAC retrieval results across six benchmark datasets. For each dataset the table reports, left to right, the CLAP baseline, the ablated variant without the aggregation module (+KG-agg), and the full CLAP-KG model (+KG).
216
+
217
+ We observe that the full CLAP-KG model consistently outperforms the CLAP baseline across datasets, with notable gains in the Hit@1 metric. The only exception is Hit@5, where CLAP-KG matches the baseline performance. This trend can be explained by the semantic closeness of top-k candidates to the ground truth: as the number of candidates increases, both CLAP and CLAP-KG are more likely to include the correct label.
218
+
219
+ The most striking improvement is observed in Hit@1 on TUT2017, with a gain of $10.1\%$ . Since TUT2017 targets acoustic scene classification, the additional context provided by the AKG helps disambiguate between scenes, making classification easier. Relations like scene contains or described as disentangle the auditory scene into its sound event components.
220
+
221
+ Importance of Aggregation We assess the role of the aggregation step introduced in Section 3.3 via Equation 4. To this end, we evaluate CLAP-KG without aggregation, denoted as $+\mathrm{KG}$ -agg, and compare it with the full model, $+\mathrm{KG}$ , which includes aggregation.
222
+
223
+ Table 2 reports the results across datasets, with the +KG-agg and +KG columns highlighting the impact of the aggregation step. Removing the aggregation step corresponds to relying solely on the scores of contextual prompts. This setting already improves over the CLAP baseline in terms of mean reciprocal rank (MRR) on several datasets, though it underperforms on FSD50K and AudioSet. However, compared to the full CLAP-KG, the +KG-agg
224
+
225
+ ![](images/93baf2fe5c797356e405257a49ab9d519255599cbe0cdf87cee5029f28bb580f.jpg)
226
+ Figure 5: Performance change $(\%)$ of CLAP-KG as compared to CLAP in terms of Hit@1, Hit@3, and MRR. Only the top 10 relationships are displayed. associated w. env. = associated with environment; emo. associated w. = emotionally associated with.
227
+
228
+ variant consistently lags behind.
229
+
230
+ These results highlight the importance of aggregation: the LogSumExp pooling in Equation 4 balances raw CLAP predictions with knowledge-enriched signals, preventing overreliance on noisy prompts. Integrating knowledge from the AKG in this manner is more effective, as it mitigates the pitfalls of relying solely on augmented prompts while preserving the grounding of the original CLAP predictions.
231
+
232
+ Impact of Relations Datasets often vary in terms of context and structure, reflecting different relations among classes. To shed light on this perspective, we plot ZSAC performance with different relation types, as shown in Figure 5. Clearly, many relations boost the performance across datasets. Among them, has parent provides robust gains for all datasets. This is expected due to the inherent taxonomical categorization of sound events reflected in many datasets, where labels are systematically grouped into categories. The most impactful relations, however, vary by dataset and are often content-specific. For TUT2017, the top relations (is a variant of, has parent, and scene occurs) pertain to acoustic scenes, covering sound event variations, label hierarchy, and scene location.
233
+
234
+ Embedding Visualizations While the overall accuracy of ZSAC improves with the integration of knowledge graphs, performance varies across classes. This variation is analyzed in Appendix A.6, using the ESC50 dataset as a case study.
235
+
236
+ To investigate why CLAP-KG improves ZSAC performance for certain classes but degrades it for others, we visualize the Uniform Manifold Approximation and Projection (UMAP) (McInnes et al., 2018) projections of the embeddings, focusing on
237
+
238
+ a subset of classes of the ESC50 dataset, as shown in Figure 6. Although UMAP does not preserve exact distances, the resulting embedding clusters can still offer valuable insights into the relative data distribution.
239
+
240
+ The top row of Figure 6 shows the mean audio embeddings (circle), the embedding of the top-1 CLAP predictions (star), and the top-1 CLAP-KG predictions (triangles). Colors indicate different classes, with each subfigure using a distinct color scheme because of the different set of predictions. For each subfigure, we see multiple triangles, as the CLAP predictions can be enriched by the KG in various ways depending on the set of relations and tails. CLAP-KG enriches predictions when the ground truth is helicopter, bird chirping, crow, crackle, and cow. These are the classes to which CLAP-KG brings the most improvement. Indeed, for all these classes, the CLAP-KG prediction clusters overlap with the audio embeddings, whereas the CLAP predictions remain disjoint.
241
+
242
+ To provide a more balanced perspective, we also visualize five classes where CLAP-KG degrades performance: cricket, rain, laughing, mouse click, and engine, shown in the bottom row of Figure 6. In these cases, the audio embeddings and the correct CLAP predictions (circle and star of the same color) overlap, whereas the CLAP-KG predictions do not in most cases. This indicates that additional information from the AKG is not always beneficial, possibly due to heuristic retrieval strategies (e.g., querying the KGE model with suboptimal relations) or residual noise in the AKG.
243
+
244
+ # 6.3 Discussion
245
+
246
+ Based on the observations and analysis above, we summarize the main findings as follows:
247
+
248
+ ![](images/36ce376122c44d8076d5cdd36042b08d15cc579603f87ee73a8fada3face8ca5.jpg)
249
+ Figure 6: UMAP projection of the embeddings of CLAP audio (circle $\bullet$), top-1 CLAP prediction (star $\star$), and top-1 CLAP-KG predictions (triangle $\triangle$). Colors indicate different classes, with each subfigure using a distinct color scheme. Top: the five rightmost classes in Figure 10, for which CLAP-KG improves performance. Bottom: the five leftmost classes in Figure 10, for which CLAP-KG degrades performance.
250
+
251
+ A posthoc prediction recalibration with our AKG can boost ZSAC without further training or tuning. Note that in our pipeline, the KG directly operates on CLAP predictions without further training.
252
+
253
+ Meaningful relations are key to integrating the AKG due to the specificity of different datasets. As evidenced by Figure 5, relations that enhance the understanding of context and background knowledge of acoustic scenes augment the performance on TUT2017 by a large margin. This also points out that a powerful and generalizable AKG must encompass a variety of relations.
254
+
255
+ Our AKG reduces the effort spent on prompt engineering and provides traceable reasoning. Audio-language models can be queried using only semantic cores (e.g., class labels), without the need for extensive prompt design. Labels can be directly enriched with tail predictions from a KGE model trained on the AKG. Moreover, such predictions provide transparency into the classification process (through reasoning or factual knowledge retrieval), revealing both the predicted labels and their interrelations.
256
+
257
+ # 7 Conclusion
258
+
259
+ In this paper, we present iKnow-audio, a framework that integrates knowledge graphs with audio-language models to provide robust semantic grounding and improve zero-shot audio classification. Core to this framework is the first Audio-centric Knowledge Graph (AKG), which captures rich relational semantics among everyday sounds. This structured knowledge is encoded into a knowledge graph embedding model and used to augment predictions of an instantiated CLAP model. Our key finding is that, rather than relying on isolated semantic cores, the AKG provides essential context and background knowledge for interpreting sound events. The proposed method is posthoc and lightweight, akin to Retrieval Augmented Generation (RAG), requiring neither fine-tuning nor prompt engineering when applied to audio-language models. Moreover, the framework shows promise for generalization to other tasks, such as question answering.
260
+
261
+ # Limitations and Future Work
262
+
263
+ Despite the potential of the proposed method, we are aware of the following limitations of the current work and suggest the corresponding future directions: (1) Shallow and Heuristic Reasoning: Our approach currently performs only single-hop reasoning (tail prediction) over the knowledge graph (AKG) and enriches prompts using simple string concatenation. This limits the depth and expressiveness of semantic inference. Future work could explore multi-hop reasoning as relations in the KG space can be chained. (2) Noise and Incompleteness in the AKG: The AKG was automatically constructed and cleaned, yet it may still contain noisy, generic, or missing triples. Additionally, link prediction from the KGE model can be unreliable for rare or ambiguous events, potentially introducing irrelevant or spurious concepts into the reasoning process. (3) Limited Evaluation Scope: We have not evaluated the method on music datasets, although the AKG encodes music-related knowledge (through music-related labels from SALT). Extending evaluation to musical audio and broader domains would help assess the generality of the approach. (4) Design and Efficiency Constraints: The use of top-k selection for both CLAP and KG predictions may not capture the most informative evidence and could be biased toward frequent entities. Moreover, inference-time reasoning introduces additional computational overhead (through a beam search). Future work may explore alternative sampling strategies and efficiency optimizations.
264
+
265
+ # Acknowledgments
266
+
267
+ This work was partially supported by the Audible project, funded by French BPI, and by the European Union (ERC, HI-Audio, 101052978). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
268
+
269
+ # References
270
+
271
+ Mehdi Ali, Max Berrendorf, Charles Tapley Hoyt, Laurent Vermue, Sahand Sharifzadeh, Volker Tresp, and Jens Lehmann. 2021. PyKEEN 1.0: A Python Library for Training and Evaluating Knowledge Graph Embeddings. Journal of Machine Learning Research, (82):1-6.
272
+ Soren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In international semantic web conference, pages 722-735. Springer.
273
+ Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc.
274
+ Benjamin Elizalde, Soham Deshmukh, Mahmoud Al Ismail, and Huaming Wang. 2023. Clap learning audio concepts from natural language supervision. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE.
275
+ Eduardo Fonseca, Xavier Favory, Jordi Pons, Frederic Font, and Xavier Serra. 2022. Fsd50k: An open dataset of human-labeled sound events.
276
+ Meng Gao, Yutao Xie, Wei Chen, Feng Zhang, Fei Ding, Tengjiao Wang, Jiahui Yao, Jiabin Zheng, and Kam-Fai Wong. 2025. Rerankgc: A cooperative retrieval-and-rerank framework for multi-modal knowledge graph completion. Neural Networks, page 107467.
277
+ Jort F. Gemmeke, Daniel P. W. Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing Moore, Manoj Plakal, and Marvin Ritter. 2017. Audio set: An ontology and human-labeled dataset for audio events. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 776-780.
278
+ Biao Gong, Shuai Tan, Yutong Feng, Xiaoying Xie, Yuyuan Li, Chaochao Chen, Kecheng Zheng, Yu-jun Shen, and Deli Zhao. 2024. Uknow: A unified knowledge protocol with multimodal knowledge graph datasets for reasoning and vision-language pretraining. Advances in Neural Information Processing Systems, 37:9612-9633.
279
+ Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d'Amato, Gerard De Melo, Claudio Gutierrez, Sabrina Kirrane, Jose Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, and 1 others. 2021. Knowledge graphs. ACM Computing Surveys (Csur), 54(4):1-37.
280
+ Albert Qiaochu Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier,
281
+
282
+ Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. ArXiv, abs/2310.06825.
283
+ Junlin Lee, Yequan Wang, Jing Li, and Min Zhang. 2024. Multimodal reasoning with multimodal knowledge graph. arXiv preprint arXiv:2406.02030.
284
+ Xin Li, Dongze Lian, Zhihe Lu, Jiawang Bai, Zhibo Chen, and Xinchao Wang. 2023. Graphadapter: Tuning vision-language models with dual knowledge graph. Advances in Neural Information Processing Systems, 36:13448-13466.
285
+ Hailun Lin, Yong Liu, Weiping Wang, Yinliang Yue, and Zheng Lin. 2017. Learning entity and relation embeddings for knowledge resolution. Procedia Computer Science, 108:345-354.
286
+ Junming Liu, Siyuan Meng, Yanting Gao, Song Mao, Pinlong Cai, Guohang Yan, Yirong Chen, Zilin Bian, Botian Shi, and Ding Wang. 2025. Aligning vision to language: Text-free multimodal knowledge graph construction for enhanced llms reasoning. arXiv preprint arXiv:2503.12972.
287
+ Leland McInnes, John Healy, Nathaniel Saul, and Lukas Grossberger. 2018. UMAP: Uniform manifold approximation and projection. The Journal of Open Source Software, 3(29):861.
288
+ A. Mesaros, T. Heittola, A. Diment, B. Elizalde, A. Shah, E. Vincent, B. Raj, and T. Virtanen. 2017. DCASE 2017 challenge setup: Tasks, datasets and baseline system. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2017 Workshop (DCASE2017), pages 85-92.
289
+ Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen. 2016. Tut database for acoustic scene classification and sound event detection. In 2016 24th European Signal Processing Conference (EUSIPCO), pages 1128-1132.
290
+ Michel Olvera, Paraskevas Stamatiadis, and Slim Essid. 2024. A sound description: Exploring prompt templates and class descriptions to enhance zero-shot audio classification. In *The Workshop on Detection and Classification of Acoustic Scenes and Events* (DCASE).
291
+ Abhirama Subramanyam Penamakuri, Kiran Chhatre, and Akshit Jain. 2025. Audiopedia: Audio qa with knowledge. In ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5.
292
+ Karol J. Piczak. 2015. ESC: Dataset for Environmental Sound Classification. In Proceedings of the 23rd Annual ACM Conference on Multimedia, pages 1015-1018. ACM Press.
293
+ J. Salamon, C. Jacoby, and J. P. Bello. 2014. A dataset and taxonomy for urban sound research. In 22nd ACM International Conference on Multimedia (ACM-MM'14), pages 1041-1044, Orlando, FL, USA.
294
+
295
+ Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In European semantic web conference, pages 593-607. Springer.
296
+ Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the AAAI conference on artificial intelligence, volume 31.
297
+ Paraskevas Stamatiadis, Michel Olvera, and Slim Essid. 2024. Salt: Standardized audio event label taxonomy. The workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), 17:26.
298
+ Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In International Conference on Learning Representations (ICLR).
299
+ Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International conference on machine learning, pages 2071-2080.
300
+ Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78-85.
301
+ Xin Wang, Benyuan Meng, Hong Chen, Yuan Meng, Ke Lv, and Wenwu Zhu. 2023. Tiva-kg: A multimodal knowledge graph with text, image, video and audio. In Proceedings of the 31st ACM international conference on multimedia, pages 2391-2399.
302
+ Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In the AAAI conference on artificial intelligence, volume 28.
303
+ Xiaoyang Wei, Zografoula Vagina, Camille Kurtz, and Florence Cloppet. 2024. Integrating expert knowledge with vision-language model for medical image retrieval. In 2024 IEEE International Symposium on Biomedical Imaging (ISBI), pages 1-4. IEEE.
304
+
305
+ # A Appendix
306
+
307
+ # A.1 Knowledge Graph Relation Schema
308
+
309
+ We define a schema comprising nine high-level relation categories, each reflecting a distinct aspect of auditory context. Each category includes a set of relations that guide the generation of plausible triples (head, relation, tail), where the head is a standardized sound event label (from SALT (Stamatiadis et al., 2024)) and the relation contextualizes its link to the tail concept. These categories are summarized in Table 3 and described as follows:
310
+
311
+ Co-occurrence and Temporal relations capture how sound events unfold over time or co-occur within sound scenes. Relations such as co-occurs with, precedes, follows, and overlaps with help model the sequencing of events (e.g., "thunder precedes lightning").
312
+
313
+ Causal and Functional relations express underlying causes or functions of sound events, including produces, caused by, triggers, indicates, responds to, and affects. These relations allow the AKG to represent inferential chains (e.g., "siren triggers emergency response") and explain sound occurrences based on physical or intentional causality.
314
+
315
+ Taxonomic and Hierarchical relations organize sounds into ontological structures using is a type of, has subtype, is instance of, belongs to class, and is variant of. These relations support reasoning about sound categories and enable class-based generalizations (e.g., "laughter is a type of human sound").
316
+
317
+ Spatio-Environmental Relations situate sound events within physical and environmental contexts through relations such as occurs in, can be heard in, localized in, originates from, and associated with environment. These are particularly valuable for acoustic scene classification and localization tasks.
318
+
319
+ Source and Agent Relations focus on the source of origin of a sound event. Relations like emitted by, performed by, generated by, is sound of, and produced during encode associations between sounds and their animate or inanimate sources (e.g., "chirping performed by bird").
320
+
321
+ Perceptual and Qualitative relations model human-centric interpretations of sound, using descriptors such as has loudness, has pitch, has duration, has timbre, perceived as, and emotionally associated with. These attributes provide complementary information that supports
322
+
323
+ affective computing and perceptual modeling.
324
+
325
+ Modality-Crossing relations link auditory signals to language and vision, including described by, associated with event, linked to visual, and transcribed as. Such relations enable multimodal grounding and textual or visual alignment for sound events.
326
+
327
+ Intentionality relations express functional and normative expectations related to sound, via invites action, used for, requires attention, and warns about. These are particularly relevant for modeling listener responses and action-affording cues (e.g., "doorbell invites action open door").
328
+
329
+ Scene Composition and Event Structure captures how individual sound events compose or imply broader scenes or activities, through part of scene, scene contains, event composed of, temporal component of, and entails event. These relations provide a high-level abstraction of the acoustic scene and a structural prior for scene recognition.
330
+
331
+ # A.2 Audio Knowledge Graph Statistics
332
+
333
+ In Figure 7 we present key statistics that provide a detailed characterization of the relational structure of the proposed knowledge graph. This includes measures of reflexivity, transitivity, and relation frequency distributions.
334
+
335
+ Total Relations, Heads and Tails summarize the volume and diversity of relational instances. The total relations count all occurrences, while unique heads and tails reflect the number of distinct entities appearing as the first (head) or second argument (tail) in each relation.
336
+
337
+ Reflexivity is evaluated by counting instances where the head and tail entities are identical. This highlights self-referential relations within the graph.
338
+
339
+ Transitivity is assessed by identifying triples where the relation can be inferred transitively (if $(a,r,b)$ and $(b,r,c)$ exist, then $(a,r,c)$ is expected). The proportion of such inferred triples provides information on potential hierarchical or chain-like relational structures.
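+
+ As an illustration of how these statistics could be computed, the following sketch (ours, assuming the graph is available as an iterable of (head, relation, tail) string tuples) counts, per relation, the triples, unique heads and tails, reflexive triples, and triples that are transitively implied within the same relation.
+
+ ```python
+ from collections import defaultdict
+
+ def relation_statistics(triples):
+     """Per-relation counts, unique heads/tails, reflexivity, and transitivity."""
+     by_relation = defaultdict(set)
+     for head, rel, tail in triples:
+         by_relation[rel].add((head, tail))
+     stats = {}
+     for rel, pairs in by_relation.items():
+         tails_of = defaultdict(set)
+         for head, tail in pairs:
+             tails_of[head].add(tail)
+         reflexive = sum(1 for head, tail in pairs if head == tail)
+         # (a, c) counts as transitive if some b exists with (a, b) and (b, c) in the same relation.
+         transitive = sum(
+             1 for head, tail in pairs
+             if any(tail in tails_of.get(b, ()) for b in tails_of[head] if b not in (head, tail))
+         )
+         stats[rel] = {
+             "triples": len(pairs),
+             "unique_heads": len({h for h, _ in pairs}),
+             "unique_tails": len({t for _, t in pairs}),
+             "reflexive": reflexive,
+             "transitive": transitive,
+         }
+     return stats
+ ```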
340
+
341
+ An overview of the global entity and relation counts, along with the 20 most frequent relations, is summarized in Table 4.
342
+
343
+ # A.3 Exemplary triples from the AKG
344
+
345
+ Table 5 presents a set of exemplary triples from the constructed knowledge graph. The first part of
346
+
347
+ <table><tr><td>Category</td><td>Example Relations</td><td>Purpose</td></tr><tr><td>Co-occurrence &amp; Temporal</td><td>co-occurs with, precedes, follows, overlaps with</td><td>Capture temporal ordering and co-occurrence of sound events.</td></tr><tr><td>Causal &amp; Functional</td><td>produces, caused by, triggers, indicates, responds to, affects</td><td>Encode causality, function, and event-response dynamics.</td></tr><tr><td>Taxonomic &amp; Hierarchical</td><td>is a type of, has subtype, is instance of, belongs to class, is variant of</td><td>Structure sound events via type, class, and instance hierarchies.</td></tr><tr><td>Environmental</td><td>occurs in, can be heard in, localized in, originates from, associated with environment</td><td>Anchor sound events in physical, spatial, and environmental contexts.</td></tr><tr><td>Source &amp; Agent</td><td>emitted by, performed by, generated by, is sound of, produced during</td><td>Link sounds to their generating sources.</td></tr><tr><td>Perceptual &amp; Qualitative</td><td>has loudness, has pitch, has duration, has timbre, perceived as, emotionally associated with</td><td>Model perceptual properties and subjective qualities of sound.</td></tr><tr><td>Cross-modality</td><td>described by, associated with event, linked to visual, transcribed as</td><td>Establishes connections to textual or visual modalities.</td></tr><tr><td>Intentionality</td><td>invites action, used for, requires attention, warns about</td><td>Represent expectations, actions, or alerts invoked by sound.</td></tr><tr><td>Compositionality</td><td>part of scene, scene contains, event_composed_of, temporal component of, entails event</td><td>Capture hierarchical and compositional structure of scene and events.</td></tr></table>
348
+
349
+ Table 3: Relation schema for knowledge graph construction. Each category defines semantic relations that support rich contextualization of audio events.
350
+
351
+ the table includes examples generated using a large language model (LLM), selected to depict a wide range of semantic relations such as causality, emotional association, perceptual attributes, and functional use. The second part provides examples derived from SALT, reflecting structured annotations grounded in taxonomies for everyday sound categorization. This combined presentation illustrates both the generative breadth of LLMs in synthetic data creation and the specificity of human-curated data, providing qualitative insight into the diverse relational structure captured in the graph.
352
+
353
+ # A.4 CLAP-KG Algorithm Description
354
+
355
+ Algorithm 1 details the full inference pipeline for knowledge-guided zero-shot audio classification using CLAP and a KGE model. Given an input audio sample and a set of candidate class labels, the algorithm first performs standard CLAP-based
356
+
357
+ retrieval to identify the top- $k$ most similar labels based on cosine similarity in the joint embedding space. For each top-ranked label, it queries a curated set of semantic relations $\mathcal{R}_q$ using the KGE model $\phi_{\mathrm{KG}}$ to predict the most plausible tail entities. These tail entities are concatenated with the original label to form enriched, context-aware textual prompts. The CLAP text encoder then scores these prompts against the input audio. The final prediction is made by aggregating evidence from both the original and enriched prompts using a log-sum-exp fusion strategy, enabling semantic re-ranking of the top- $k$ candidates. This procedure enhances both the interpretability and robustness of zero-shot classification by leveraging structured knowledge.
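+
+ A minimal sketch of this inference procedure is given below (ours; `clap_similarity` and `kge_top_tails` are hypothetical wrappers around the CLAP encoders and the KGE model). It follows the top- $k$ retrieval, prompt enrichment, and log-sum-exp fusion steps of Algorithm 1.
+
+ ```python
+ import math
+
+ def knowledge_guided_rerank(audio, labels, clap_similarity, kge_top_tails,
+                             relations, k=5, m=3):
+     """Re-rank the top-k CLAP labels with KG-enriched prompts (sketch of Algorithm 1)."""
+     # Step 1: standard CLAP retrieval over the candidate labels.
+     base_scores = {c: clap_similarity(audio, c) for c in labels}
+     top_k = sorted(base_scores, key=base_scores.get, reverse=True)[:k]
+     # Step 2: enrich each candidate with predicted tail entities and fuse the scores.
+     fused = {}
+     for c in top_k:
+         prompt_scores = [base_scores[c]]
+         for r in relations:
+             for tail in kge_top_tails(c, r, m):  # top-m plausible tails for (c, r, ?)
+                 prompt_scores.append(clap_similarity(audio, f"{c} {tail}"))
+         # Log-sum-exp fusion of the original and enriched prompt scores.
+         fused[c] = math.log(sum(math.exp(s) for s in prompt_scores))
+     return max(fused, key=fused.get)
+ ```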
358
+
359
+ # A.5 Prompt Templates for Triple Generation
360
+
361
+ To extract relational knowledge from large language models, we design a prompt template that
362
+
363
+ <table><tr><td colspan="6">Knowledge Graph Summary</td></tr><tr><td>Subset</td><td>Triples</td><td>Relations</td><td>Heads</td><td>Tails</td><td></td></tr><tr><td rowspan="3">Overall Stats</td><td>Clean</td><td>18,348</td><td>47</td><td>857</td><td>4,282</td></tr><tr><td>Noisy</td><td>49,215</td><td>47</td><td>860</td><td>11,063</td></tr><tr><td>Test</td><td>2,039</td><td>46</td><td>673</td><td>1,068</td></tr></table>
364
+
365
+ Top 20 Most Frequent Relations (Split by Clean and Noisy Sets)
366
+
367
+ <table><tr><td rowspan="2">#</td><td rowspan="2">Relation</td><td colspan="2">Triples</td><td colspan="2">Heads</td><td colspan="2">Tails</td></tr><tr><td>Clean</td><td>Noisy</td><td>Clean</td><td>Noisy</td><td>Clean</td><td>Noisy</td></tr><tr><td>1</td><td>has subtype</td><td>2552</td><td>3773</td><td>331</td><td>528</td><td>1020</td><td>1731</td></tr><tr><td>2</td><td>belongs to class</td><td>2242</td><td>2739</td><td>828</td><td>835</td><td>252</td><td>471</td></tr><tr><td>3</td><td>occurs in</td><td>2052</td><td>2982</td><td>550</td><td>622</td><td>347</td><td>640</td></tr><tr><td>4</td><td>has children</td><td>907</td><td>907</td><td>211</td><td>211</td><td>773</td><td>773</td></tr><tr><td>5</td><td>has sibling</td><td>890</td><td>890</td><td>760</td><td>760</td><td>207</td><td>207</td></tr><tr><td>6</td><td>has parent</td><td>886</td><td>886</td><td>764</td><td>764</td><td>206</td><td>206</td></tr><tr><td>7</td><td>can be heard in</td><td>631</td><td>1212</td><td>289</td><td>366</td><td>249</td><td>378</td></tr><tr><td>8</td><td>localized in</td><td>623</td><td>893</td><td>226</td><td>241</td><td>268</td><td>355</td></tr><tr><td>9</td><td>part of scene</td><td>564</td><td>1531</td><td>164</td><td>253</td><td>337</td><td>752</td></tr><tr><td>10</td><td>is a type of</td><td>529</td><td>929</td><td>233</td><td>304</td><td>251</td><td>460</td></tr><tr><td>11</td><td>generated by</td><td>501</td><td>936</td><td>255</td><td>327</td><td>277</td><td>450</td></tr><tr><td>12</td><td>described by</td><td>393</td><td>661</td><td>242</td><td>295</td><td>368</td><td>627</td></tr><tr><td>13</td><td>event composed of</td><td>390</td><td>1368</td><td>236</td><td>441</td><td>284</td><td>877</td></tr><tr><td>14</td><td>produced during</td><td>363</td><td>712</td><td>161</td><td>219</td><td>241</td><td>395</td></tr><tr><td>15</td><td>overlaps with</td><td>348</td><td>2009</td><td>185</td><td>434</td><td>237</td><td>844</td></tr><tr><td>16</td><td>associated with environment</td><td>330</td><td>593</td><td>128</td><td>180</td><td>210</td><td>323</td></tr><tr><td>17</td><td>precedes</td><td>308</td><td>1010</td><td>122</td><td>227</td><td>220</td><td>643</td></tr><tr><td>18</td><td>originates from</td><td>304</td><td>579</td><td>138</td><td>172</td><td>207</td><td>377</td></tr><tr><td>19</td><td>warns about</td><td>272</td><td>1854</td><td>97</td><td>353</td><td>187</td><td>854</td></tr><tr><td>20</td><td>emitted by</td><td>254</td><td>319</td><td>135</td><td>149</td><td>149</td><td>183</td></tr></table>
368
+
369
+ Table 4: Summary statistics for the knowledge graph. The upper section presents overall statistics including the number of triples, relations, head and tail entities. The lower section lists the 20 most frequent relations, split by clean and noisy subsets, with counts of associated triples, heads, and tails.
370
+
371
+ guides the generation of plausible (head, relation, tail) triples grounded in sound event semantics. The prompt is tailored to elicit contextually relevant relations for each unique sound label in the SALT taxonomy. We apply it at scale to generate an initial pool of candidate triples, which are subsequently refined through a two-stage filtering process involving automated plausibility checks and manual curation. Figure 8 illustrates the prompt used for triple generation, while Figure 9 shows the prompt used to verify their semantic plausibility.
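+
+ A schematic version of this two-stage pipeline is sketched below (ours; `generate_prompt`, `verify_prompt`, and `query_llm` are placeholders for the templates of Figures 8 and 9 and for an LLM API call).
+
+ ```python
+ import ast
+
+ def build_candidate_triples(labels, relation_schema, query_llm, generate_prompt, verify_prompt):
+     """Two-stage pipeline: LLM generation of candidate triples, then plausibility filtering."""
+     accepted = []
+     for label in labels:
+         for relation, details in relation_schema.items():
+             # Stage 1: elicit candidate triples (the template may legitimately return "[]").
+             raw = query_llm(generate_prompt(label=label, relation=relation, details=details))
+             try:
+                 candidates = ast.literal_eval(raw)
+             except (ValueError, SyntaxError):
+                 continue  # skip malformed LLM outputs
+             # Stage 2: automated plausibility check; the verifier answers "Yes" or "No".
+             for candidate in candidates:
+                 head, rel, tail = candidate
+                 verdict = query_llm(verify_prompt(triple=candidate))
+                 if verdict.strip().lower().startswith("yes"):
+                     accepted.append((head, rel, tail))
+     return accepted  # manual curation follows as a final step
+ ```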
372
+
373
+ # A.6 Additional Results
374
+
375
+ Per-class zero-shot audio classification performance In addition to the overall performance analysis in Section 6.2, we also investigate how CLAP-KG benefits individual classes. Considering ESC50 as a case study, Figure 10 illustrates the class-wise classification performance of CLAP and CLAP-KG. We notice that although the overall accuracy increases by $2.2\%$, as shown in Table 2, the class-wise performance varies. Large performance increases occur for crow, crackle, and cow, while CLAP-KG degrades performance for cricket,
376
+
377
+ rain, and laughing.
378
+
379
+ # A.7 Dataset Licenses
380
+
381
+ For transparency, we provide a comprehensive summary of the licensing terms associated with each dataset used in our experiments in Table 6. All datasets are publicly available and widely used in academic research on environmental sound classification.
382
+
383
+ ![](images/69a7eb9fc2230c82b26698fad195fabc1f3fa02dee830f9c3964361a0a9b2a08.jpg)
384
+ Figure 7: Overview of key statistics for relations of the clean set in the knowledge graph. (a): Distribution of counts, unique heads, and unique tails for the top 10 most frequent relations. (b): Counts of reflexive relations where the head equals the tail. (c): Proportion of transitive triples identified among the total triples per relation. (d): Distribution of relation frequencies.
385
+ Figure 8: Prompt template to generate synthetic triples via LLM.
386
+ Figure 9: Prompt template to verify synthetic triples via LLM.
387
+
388
+ "You are an expert in sound event classification and knowledge graph generation. Given a sound event label, your task is to reason about and, if appropriate, generate knowledge graph triples that describe real-world, common-sense relationships between the sound event and other entities or events. The relation type is: {relation_type}. The relation details are: {relation_details}. Here is an example for guidance: {examples}.
389
+
390
+ Step 1: Reason about the plausibility of generating real-world, common-sense triples for the sound event label: {label_name}, using the relation type:{relation_type}. Determine if this type of relation is meaningfully applicable to the event in a way that reflects actual, observable relationships in the world.
391
+
392
+ If the relation type is not applicable or would lead to speculative, forced, or non-sensical triples, conclude that no valid triples can be generated.
393
+
394
+ Step 2: If the relation is applicable and meaningful, generate a list of plausible, real-world triples grounded in common sense. Ensure that each triple reflects knowledge that a reasonable person would accept as true in everyday understanding.
395
+
396
+ There is no fixed number of triples required, but include only those that are relevant, accurate, and justifiable by common sense.
397
+
398
+ Respond with only the final list of triples in the exact format: [[head1, relation, tail1], [head2, relation, tail2], ...].
399
+
400
+ If in Step 1 you determine that no meaningful triples can be generated, respond with an empty list: []
401
+
402
+ Do not include any reasoning or explanation in the final output. The head should strictly be the label name: {label_name}.
403
+
404
+ "You are an expert in knowledge graphs for audio understanding. Given a triple in the format [head, relation, tail], assess whether it is pertinent for inclusion in a knowledge graph for audio understanding. The head represents a sound event label, i.e., a sound or an abstraction of the sound emitted, implied, or perceptually associated with an entity. A triple is pertinent if it is non-speculative, grounded in common-sense and real-world experience, and contributes to a taxonomical, hierarchical, temporal, causal, perceptual, compositional, or phisical contextual understanding of sound events. Reject triples which are vague, speculative, or not useful for structuring knowledge about sound. Is the triple {kg_triple} pertinent to structure knowledge about sound? Answer strictly "Yes" or "No" without any reasoning or explanation in the final output."
405
+
406
+ <table><tr><td></td><td>SALT Label</td><td>Head</td><td>Relation</td><td>Tail</td></tr><tr><td colspan="5">Triple examples (generated by LLM)</td></tr><tr><td>1</td><td>vehicle engine</td><td>vehicle engine</td><td>caused by</td><td>combustion</td></tr><tr><td>2</td><td>chicken crowing</td><td>chicken crowing</td><td>caused by</td><td>rooster</td></tr><tr><td>3</td><td>smoke alarm</td><td>smoke alarm</td><td>caused by</td><td>smoke</td></tr><tr><td>4</td><td>crying</td><td>crying</td><td>emotionally associated with</td><td>sadness</td></tr><tr><td>5</td><td>cello</td><td>cello</td><td>emotionally associated with</td><td>melancholy</td></tr><tr><td>6</td><td>lullaby</td><td>lullaby</td><td>emotionally associated with</td><td>calmness</td></tr><tr><td>7</td><td>coffee machine</td><td>coffee machine</td><td>has duration</td><td>medium</td></tr><tr><td>8</td><td>timpani</td><td>timpani</td><td>has duration</td><td>long</td></tr><tr><td>9</td><td>cap gun</td><td>cap gun</td><td>has duration</td><td>short</td></tr><tr><td>10</td><td>bird</td><td>bird</td><td>has pitch</td><td>high</td></tr><tr><td>11</td><td>humming</td><td>humming</td><td>has pitch</td><td>low</td></tr><tr><td>12</td><td>flute</td><td>flute</td><td>has pitch</td><td>high</td></tr><tr><td>13</td><td>thunderstorm</td><td>thunderstorm</td><td>indicates</td><td>thunder</td></tr><tr><td>14</td><td>marching</td><td>marching</td><td>indicates</td><td>parade</td></tr><tr><td>15</td><td>firecracker</td><td>firecracker</td><td>indicates</td><td>celebration</td></tr><tr><td>16</td><td>maraca</td><td>maraca</td><td>is instance of</td><td>percussion instrument</td></tr><tr><td>17</td><td>giggling</td><td>giggling</td><td>is instance of</td><td>laughter</td></tr><tr><td>18</td><td>microphone</td><td>microphone</td><td>is instance of</td><td>audio recording device</td></tr><tr><td>19</td><td>fireworks</td><td>fireworks</td><td>perceived as</td><td>celebratory</td></tr><tr><td>20</td><td>castanets</td><td>castanets</td><td>perceived as</td><td>rhythmic instrument</td></tr><tr><td>21</td><td>pulse</td><td>pulse</td><td>perceived as</td><td>heartbeat rate</td></tr><tr><td>22</td><td>flute</td><td>flute</td><td>performed by</td><td>orchestra</td></tr><tr><td>23</td><td>kwaito music</td><td>kwaito music</td><td>performed by</td><td>musicians</td></tr><tr><td>24</td><td>playing guitar</td><td>playing guitar</td><td>performed by</td><td>guitarist</td></tr><tr><td>25</td><td>clock tick</td><td>clock tick</td><td>precedes</td><td>door opening</td></tr><tr><td>26</td><td>electric guitar</td><td>electric guitar</td><td>precedes</td><td>composing music</td></tr><tr><td>27</td><td>dog</td><td>dog</td><td>precedes</td><td>yelping</td></tr><tr><td>28</td><td>mantra</td><td>mantra</td><td>used for</td><td>self-improvement</td></tr><tr><td>29</td><td>whistle</td><td>whistle</td><td>used for</td><td>alerting</td></tr><tr><td>30</td><td>knife</td><td>knife</td><td>used for</td><td>self-defense</td></tr><tr><td colspan="5">Triple examples (derived by SALT)</td></tr><tr><td>31</td><td>pigeon dove</td><td>pigeon dove</td><td>belongs to class</td><td>bird</td></tr><tr><td>32</td><td>large rotating saw</td><td>large rotating saw</td><td>belongs to class</td><td>sawing</td></tr><tr><td>33</td><td>vehicle compressor</td><td>vehicle compressor</td><td>belongs to class</td><td>large vehicle</td></tr><tr><td>34</td><td>speech</td><td>speech</td><td>has children</td><td>chatter</td></tr><tr><td>35</td><td>wild animal</td><td>wild animal</td><td>has 
children</td><td>roar</td></tr><tr><td>36</td><td>bowed string instrument</td><td>bowed string instrument</td><td>has children</td><td>cello</td></tr><tr><td>37</td><td>whoosh swoosh swish</td><td>whoosh swoosh swish</td><td>has parent</td><td>wind</td></tr><tr><td>38</td><td>bouncing on trampoline</td><td>bouncing on trampoline</td><td>has parent</td><td>jumping</td></tr><tr><td>39</td><td>swimming</td><td>swimming</td><td>has parent</td><td>water activity</td></tr><tr><td>40</td><td>swimming</td><td>swimming</td><td>has sibling</td><td>diving</td></tr><tr><td>41</td><td>whoosh swoosh swish</td><td>whoosh swoosh swish</td><td>has sibling</td><td>rustling</td></tr><tr><td>42</td><td>bouncing on trampoline</td><td>bouncing on trampoline</td><td>has sibling</td><td>bouncing ball</td></tr><tr><td>43</td><td>piano</td><td>piano</td><td>has subtype</td><td>grand piano</td></tr><tr><td>44</td><td>music genre</td><td>music genre</td><td>has subtype</td><td>jazz</td></tr><tr><td>45</td><td>vehicle</td><td>vehicle</td><td>has subtype</td><td>bicycle</td></tr><tr><td>46</td><td>smash or crash</td><td>smash or crash</td><td>occurs in</td><td>kitchen</td></tr><tr><td>47</td><td>drum kit</td><td>drum kit</td><td>occurs in</td><td>train station</td></tr><tr><td>48</td><td>clatter</td><td>clatter</td><td>occurs in</td><td>gym</td></tr></table>
407
+
408
+ Table 5: Representative examples of knowledge graph triples. The first section includes examples generated using a large language model (LLM), grouped by semantic relation types such as causality, perception, and functionality. The second section includes examples extracted from the SALT. Both sets illustrate complementary richness and diversity of relation types from automated and curated construction approaches.
409
+
410
+ ![](images/46ae6e8ff3362f350b6c15100e0aaf1b02dc5a81a9d612ad743e7cc2fa2ac141.jpg)
411
+ Figure 10: Per-class zero-shot audio classification accuracy with CLAP and CLAP-KG on ESC50 dataset.
412
+
413
+ Algorithm 1 Knowledge-Guided CLAP Inference
414
+ Require: Input audio $a \in \mathcal{A}$ , label set $C = \{c_1, \ldots, c_N\} \subset \mathcal{L}$ , CLAP encoders $\phi_{\mathrm{A}}, \phi_{\mathrm{T}}$ , KGE model $\phi_{\mathrm{KG}}$ , relation set $\mathcal{R}_q \subset \mathcal{R}$ , top- $k$ parameters $k, m$
415
+ Ensure: Predicted label $\tilde{c} \in C$
416
+
417
+ 1: Encode audio: $\mathbf{a}\gets \phi_{\mathbf{A}}(a)$
418
+ 2: Encode labels: $\mathbf{c}_i\gets \phi_{\mathrm{T}}(c_i)$ for all $c_{i}\in C$
419
+ 3: Compute similarities: $s(c_{i}) \gets \mathrm{sim}(\mathbf{a},\mathbf{c}_{i})$
420
+ 4: Retrieve top-k labels: $C_k$ = $\{c^{(1)},\ldots ,c^{(k)}\} \gets \mathsf{TopK}(\{s(c_i)\} ,k)$
421
+ 5: Initialize enriched prompt set: $\mathcal{P}\gets \emptyset$
422
+ 6: for all $c \in C_k$ do
423
+ 7: for all $r \in \mathcal{R}_q$ do
424
+ 8: Predict top- $m$ tails: $\mathcal{T}_c^r\gets$ TopM( $\phi_{\mathrm{KG}}(c,r,\cdot),m)$
425
+ 9: for all $t \in T_c^r$ do
426
+ 10: Form enriched prompt: $p_{c,t} \gets$ concat(c,t)
427
+ 11: Add $p_{c,t}$ to $\mathcal{P}$
428
+ 12: end for
429
+ 13: end for
430
+ 14: end for
431
+ 15: Encode enriched prompts: $\mathbf{p}_j\gets \phi_{\mathrm{T}}(p_j)$ for all $p_j\in \mathcal{P}$
432
+ 16: Compute prompt similarities: $s(p_j) \gets \mathrm{sim}(\mathbf{a}, \mathbf{p}_j)$
433
+ 17: for all $c \in C_k$ do
434
+ 18: Retrieve prompt scores: $\{s(p_j) \mid p_j \in P_c\}$
435
+ 19: Aggregate score: $\tilde{s}(c) \gets \log\left(\exp(s(c)) + \sum_{p_j \in P_c} \exp(s(p_j))\right)$
436
+ 20: end for
437
+ 21: Predict final label: $\tilde{c} \gets \arg \max_{c \in C_k} \tilde{s}(c)$
438
+ 22: return $\tilde{c}$
439
+
440
+ <table><tr><td>Dataset</td><td>License</td></tr><tr><td>ESC50 (Piczak, 2015)</td><td>CC BY-NC 3.0 (Attribution-NonCommercial)</td></tr><tr><td>UrbanSound8K (Salamon et al., 2014)</td><td>CC BY-NC 3.0 (Attribution-NonCommercial)</td></tr><tr><td>TUT2017 (Mesaros et al., 2016)</td><td>Custom EULA: Non-commercial scientific use only</td></tr><tr><td>FSD50K (Fonseca et al., 2022)</td><td>CC BY 4.0 (Attribution)</td></tr><tr><td>AudioSet (dataset) (Gemmeke et al., 2017)</td><td>CC BY 4.0 (Attribution)</td></tr><tr><td>AudioSet (ontology) (Gemmeke et al., 2017)</td><td>CC BY-SA 4.0 (Attribution-ShareAlike)</td></tr><tr><td>DCASE17-T4 (Mesaros et al., 2017)</td><td>Follows AudioSet licensing</td></tr></table>
441
+
442
+ Table 6: Summary of dataset licenses used in this study.
iknowaudiointegratingknowledgegraphswithaudiolanguagemodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c1224d262664e66eae8fe56c917c76fc7c6dd546affcadc4533d02089dbc2af0
3
+ size 1091142
iknowaudiointegratingknowledgegraphswithaudiolanguagemodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:054c3c0a2988b6cb25c3691c26dd810d5f714839814bb11241b6e57991a5ee47
3
+ size 496194
itoolreinforcedfinetuningwithdynamicdeficiencycalibrationforadvancedtooluse/1bc87092-a237-4665-9650-f330b7703936_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:53cb3bb533bb70726322c71b1399c745eba3504e4697260384887849abc4197c
3
+ size 106238
itoolreinforcedfinetuningwithdynamicdeficiencycalibrationforadvancedtooluse/1bc87092-a237-4665-9650-f330b7703936_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d16f4e3f7f6046fd9889def46e05fe1d3b92c3c485a637823404304330374adb
3
+ size 131850
itoolreinforcedfinetuningwithdynamicdeficiencycalibrationforadvancedtooluse/1bc87092-a237-4665-9650-f330b7703936_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1807e62dd2f8e1fc6af16df9358a888417763658171e0aa7d6ed230b0c697b1b
3
+ size 1866339
itoolreinforcedfinetuningwithdynamicdeficiencycalibrationforadvancedtooluse/full.md ADDED
@@ -0,0 +1,516 @@
 
 
 
 
1
+ # iTool: Reinforced Fine-Tuning with Dynamic Deficiency Calibration for Advanced Tool Use
2
+
3
+ Yirong Zeng $^{1}$ , Xiao Ding $^{*1}$ , Yuxian Wang $^{2}$ , Weiwen Liu $^{3}$ , Wu Ning $^{2}$ , Xu Huang $^{4}$ , Duyu Tang $^{2}$ , Dandan Tu $^{2}$ , Bing Qin $^{1}$ , Ting Liu $^{1}$ ,
4
+
5
+ $^{1}$ Harbin Institute of Technology SCIR Lab, $^{2}$ Huawei Technologies Co., Ltd, $^{3}$ Shanghai Jiao Tong University, $^{4}$ University of Science and Technology of China
6
+
7
+ # Abstract
8
+
9
+ Augmenting large language models (LLMs) with external tools is a promising approach to enhance their capabilities, especially for complex tasks. Synthesizing tool-use data through real-world simulations is an effective way to achieve this. However, our investigation reveals that training gains significantly decay as synthetic data increases. The model struggles to benefit from additional synthetic data, which fails to endow it with advanced tool-use capabilities in complex scenarios. Moreover, we discovered that the above limitation usually manifests as a fragment deficiency (i.e., parameter errors) in response. To this end, we propose an iterative reinforced fine-tuning strategy designed to alleviate this limitation. This strategy involves: (1) enhancing the diversity of response for synthetic data through path exploration of Monte Carlo Tree Search. (2) iteratively pinpointing the model's deficiency by constructing fine-grained preference pairs, and then improving it by preference optimization algorithms for targeted improvement. The experiments show that our method achieves $13.11\%$ better performance than the same-size base model. It achieves an improvement of $6.5\%$ in complex scenarios compared to the baseline, and it also outperforms larger open-source and closed-source models<sup>1</sup>.
10
+
11
+ # 1 Introduction
12
+
13
+ Integrating LLMs with external tools significantly enhances their capability to tackle complex tasks in real-world scenarios (Li, 2025; Qu et al., 2024). For instance, the tool-use capability allows LLMs to access up-to-date information, perform precise calculations, and reduce the likelihood of hallucinations (Singh et al., 2025). This unlocks a wide range of potential applications in various domains, such as complex reasoning tasks (Li et al., 2025;
14
+
15
+ ![](images/239b3d36273023d02c6c3556f5eabc069fd2004cf9698c92ef7aa2eb0bf24b19.jpg)
16
+ (a) SFT on synthetic data
17
+
18
+ ![](images/b2215703da15041aadb79b7eef10be1d35b4259bba1b7fab46053d76db495f43.jpg)
19
+ (b) Training gains with synthetic data
20
+ Figure 1: The training paradigm of the tool-use model under synthetic data (a). However, as shown in (b), the growth rate of the model's performance gain declines significantly as the training data increases, especially in complex tool-use scenarios.
21
+
22
+ Manduzio et al., 2024), and the scheduling of applications on devices (Gunter et al., 2024; Luo et al., 2025). In essence, tool use involves the following process: given one or more tools, a user presents a question, and the LLM selects the appropriate tools from the candidates and performs the tool call to fulfill the user's demands. In this paper, the terms tools, APIs, functions, and plugins are used interchangeably.
23
+
24
+ Recent advancements have found that LLMs can handle simple tool-use scenarios through prompt engineering (Ye et al., 2024), but they encounter difficulties with more complex real-world applications (e.g., long contexts or extensive toolsets) (Yan et al., 2024). To address this, some studies simulate real-world scenarios, such as ticketing systems, to collect synthetic data that mimics more realistic use cases (Lin et al., 2024). Synthetic data are then used in supervised fine-tuning (SFT) to improve tool use in complex scenarios, as shown in Figure 1 (a). Despite these strides in the development of tool-use models, our investigation reveals a critical weakness: training gains decay as the synthetic tool-use data scales.
25
+
26
+ We conducted tests to explore how the performance of the model changes when synthetic data of different proportions is used, as shown in Figure
27
+
28
+ 1 (b). We find that the model struggles to benefit from more synthetic data with SFT in complex scenarios. Further analysis in Section 2.2 indicates that this limitation reflects the failure of the model to extract the parameter name or infer the correct parameter value from the user query. This issue typically affects only a small fragment of the response, which differs from the ground-truth response.
29
+
30
+ Therefore, we attempt to alleviate the decay of training gains when using synthetic tool-use data, so as to enhance tool-use ability in complex scenarios. This is not easy, because it requires equipping the model with advanced contextual understanding and reasoning capabilities. Fortunately, the success of OpenAI o1 $^2$ demonstrates the effectiveness of complex reasoning through step-by-step slow thinking (e.g., Monte Carlo Tree Search (MCTS) (Coulom, 2006)) and Reinforced Fine-Tuning (ReFT) (Luong et al., 2024), which tailors reinforcement learning to specific tasks and aligns the model with user intentions.
31
+
32
+ To this end, we propose a novel learning method involving (1) MCTS-based path exploration to enhance the diversity of responses and (2) ReFT to progressively correct the erroneous fragments of the model's responses. Specifically, we propose an iterative reinforced fine-tuning strategy for Tool use, named iTool. It first iteratively identifies complex data based on feedback from a policy model. It then performs MCTS to explore diverse responses and pinpoints erroneous fragments by collecting fine-grained preference pairs from the search paths. Finally, a reinforcement learning policy (i.e., direct preference optimization (Rafailov et al., 2024)) is applied to align the model's response with the ground-truth response and push it away from the erroneous fragments. Moreover, before the iterative ReFT, we propose an easy-to-hard warm-up SFT strategy for better learning in complex scenarios. Following these advancements, iTool demonstrates $\sim 13\%$ better performance than the base model. It also achieves substantial improvements in tool-use ability under complex scenarios. Despite having only 8B parameters, it outperforms larger open-source models and competes with top-tier closed-source models.
33
+
34
+ # 2 Problem Statement and Analysis
35
+
36
+ # 2.1 Task Overview
37
+
38
+ In tool use, the LLM receives a user query $q$ along with a set of candidate tools, represented as $\mathcal{T} =$
39
+
40
+ ![](images/68c72588993a3a970759ce5a64cda8b283ec4d6db6b12163f20f20095c407353.jpg)
41
+ Figure 2: An illustration of tool use. Given a user query with candidate tools, LLMs select the tool(s) from the candidates, then execute the API call operation, and finally reply with a response. In the bad response, the parameter error (e.g., the red-font weather='unknown') accounts for only a small fragment of the response content.
42
+
43
+ $\{t_0, t_1, \ldots, t_{|\mathcal{T}|}\}$ . The purpose of the LLM is to fulfill the user's intent by executing a specific sequence of tools. The decision process can be described as $y \sim \pi(y \mid s_0, q, \mathcal{T})$ , where $\pi(\cdot)$ represents the policy model, $s_0$ denotes the initial task state, and $y$ represents the actions taken by the model, such as selecting or executing a specific tool call from $\mathcal{T}$ . A case is illustrated in Figure 2.
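+
+ For concreteness, a hypothetical tool-use instance in the spirit of Figure 2 is sketched below; all names and fields are illustrative only.
+
+ ```python
+ # Illustrative only: one candidate tool and the call an LLM might emit for it.
+ candidate_tools = [{
+     "name": "get_weather",
+     "description": "Query the current weather for a city.",
+     "parameters": {"city": {"type": "string"}, "unit": {"type": "string", "default": "celsius"}},
+ }]
+ user_query = "What's the weather like in Paris right now?"
+ # A correct response selects the tool and fills its parameters from the query;
+ # a fragment deficiency would be, e.g., city='unknown' with everything else correct.
+ tool_call = {"tool": "get_weather", "arguments": {"city": "Paris", "unit": "celsius"}}
+ ```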
44
+
45
+ # 2.2 Preliminary Study
46
+
47
+ This section presents the challenges when finetuning models with tool-use synthetic data, and clarifies the motivation for the proposed methods.
48
+
49
+ We fine-tune the model using synthetic tool-use data of varying proportions. Specifically, training data: ToolACE (Liu et al., 2024) is a general tool-use dataset with up to 100K samples, and created through a novel self-evolution synthesis. Evaluation benchmark: Berkeley Function-Calling Leaderboard (BFCL) (Yan et al., 2024) provides a comprehensive dataset comprising $4\mathrm{k}+$ instances (updating), consisting of Non-live (with expert-curated simple tools), Live (with user-contributed complex tools), Multi-turn (with multi-turn & multi-step tool use) and Hallucination (i.e., relevance and irrelevance detection) samples. Here, Non-live denotes simple tool use scenarios (e.g., single tool), while Live represents more complex tool use scenarios (e.g., multiple parallel tools). For convenient understanding, in this section, we use simple and complex as aliases for the Non-live and Live metrics, respectively.
50
+
51
+ The results are depicted in Figure 1 (b). We ob
52
+
53
+ ![](images/58364f17acbad207ee539acb36d474d2919645d236b33525a6a56634fd207fa6.jpg)
+ ![](images/035483d0783cbaaae6318b6380708db2cbabeffbe8169035b919bd360e60e86c.jpg)
+ ![](images/5acef8cb58017cf3d9a3cf7dacfe7168b77c0beee9cdc97401c9191fde522dfe.jpg)
+ Figure 3: Error type distribution in bad cases. Error types are highly concentrated in Parameter Value & Name. (Panel (a): error type percentage over Parameter Value, Parameter Name, Parameter Count, Tool Name, and Other; second panel: error counts by tools count.)
68
+
69
+ serve that the model's performance gain declines significantly as the training data increases. Specifically, under the SFT paradigm shown in Figure 1 (a), the model significantly enhances its tool-use ability with small-scale supervised data by mimicking patterns from the training examples. However, the performance improvement declines markedly after $30\%$ of the data is used. The model struggles to benefit from using more synthetic data; we argue that insufficient data diversity is one of the key factors.
70
+
71
+ To explore the manifestations of the above-mentioned issue, we perform a bad case analysis. We count all error types in the Live and Non-live categories of BFCL and categorize them as shown in Figure 3. Here, a Parameter Value error denotes that the value of a parameter does not match the ground truth, and a Parameter Name error denotes a failure to identify the required parameter name from the user query. For more details, see Appendix A. From Figure 3, we observe that errors are highly concentrated in Parameter Value & Name errors. In bad cases, the parameter error constitutes a small fragment of the response, while the majority remains consistent with the ground truth. An illustration is shown in Figure 2. Therefore, fixing such fragment errors can help alleviate the limitation of gain decay when training models.
72
+
73
+ In summary, we find that training with synthetic tool-use data causes gain decay, and the model struggles to benefit from additional such data. This limitation is reflected in the model's deficiency (i.e., parameter errors) in responses. Motivated by this line, we utilize the MCTS path to explore diversity in responses for alleviating such gains decay. We further propose an iterative ReFT strategy to progressively pinpoint and optimize the model's
74
+
75
+ deficiencies.
76
+
77
+ # 3 Method
78
+
79
+ In this section, we provide a detailed introduction to our method. Figure 4 shows the overall architecture. It consists of warm-up training and iterative reinforcement learning.
80
+
81
+ # 3.1 Warm-up training
82
+
83
+ In real-world applications, the tool-use model should select multiple tools from a complex candidate toolset and schedule them correctly (a.k.a., hard mode), instead of directly using a single candidate tool to respond (a.k.a., easy mode). Similar to human learning procedures, tool learning models can benefit from an easy-to-hard curriculum during model training (Xu et al., 2020). Therefore, we propose an easy-to-hard SFT for warm-up training.
84
+
85
+ In the warm-up stage, we first divide the dataset evenly into three subsets (i.e., easy, medium, hard) based on difficulty levels. We split the dataset following three criteria: (a) the number of candidate tools; (b) the string length of the toolset; and (c) the number of tool calls needed in the response. The specific definitions for each subset are as follows: (1) hard: a >= 4 or b > 2000 or c >= 4. (2) medium: 1 < a < 4 or b < 2000 or c < 4. (3) easy: a <= 1 and b < 1000 and c <= 1.
86
+
87
+ $$
88
+ \mathcal {D} = \mathcal {D} _ {\text {e a s y}} \bigcup \mathcal {D} _ {\text {m e d i u m}} \bigcup \mathcal {D} _ {\text {h a r d}}. \tag {1}
89
+ $$
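+
+ A small sketch of this difficulty bucketing is given below (ours; it assumes each sample exposes its candidate toolset and gold tool calls, and applies the hard criterion first since the criteria overlap).
+
+ ```python
+ import json
+
+ def difficulty_bucket(sample):
+     """Assign a sample to easy / medium / hard using the criteria (a)-(c) above."""
+     a = len(sample["tools"])              # (a) number of candidate tools
+     b = len(json.dumps(sample["tools"]))  # (b) string length of the toolset
+     c = len(sample["calls"])              # (c) tool calls needed in the response
+     if a >= 4 or b > 2000 or c >= 4:      # hard takes precedence (assumption)
+         return "hard"
+     if a <= 1 and b < 1000 and c <= 1:
+         return "easy"
+     return "medium"
+
+ def split_by_difficulty(dataset):
+     buckets = {"easy": [], "medium": [], "hard": []}
+     for sample in dataset:
+         buckets[difficulty_bucket(sample)].append(sample)
+     return buckets  # warm-up SFT then proceeds easy -> medium -> hard (Eq. 3)
+ ```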
90
+
91
+ Subsequently, we fine-tune the LLM $\mathcal{M}$ sequentially on each subset $\mathcal{D}_i$ using the supervised loss:
92
+
93
+ $$
94
+ \mathcal {L} _ {i} = - \mathbb {E} _ {(q, y) \sim \mathcal {D} _ {i}} \left[ \log P _ {\mathcal {M}} (y \mid q, \mathcal {T}) \right], \tag {2}
95
+ $$
96
+
97
+ with $\mathcal{D}_1$ (easy), $\mathcal{D}_2$ (medium) and $\mathcal{D}_3$ (hard).
98
+
99
+ The total warm-up loss is:
100
+
101
+ $$
102
+ \mathcal {L} _ {\text {w a r m - u p}} = \sum_ {i = 1} ^ {N = 3} \mathcal {L} _ {i}. \tag {3}
103
+ $$
104
+
105
+ # 3.2 MCTS-Based Iterative Reinforcement Learning
106
+
107
+ To alleviate the decay of training gains when using synthetic tool-use data for LLMs, in this module we propose an iterative reinforcement learning scheme to continuously remedy this deficiency. As shown in Figure 4, it iteratively refreshes the replay buffer to sample complex data and generates preference data for preference optimization.
108
+
109
+ ![](images/eb8313ae71f5cd97cfd44d2be4f84c984444b1c3d66210d0555f3a9d401bcb1b.jpg)
110
+ Figure 4: The overall architecture of iTool consists of warm-up training and iterative reinforcement learning. Specifically, after warm-up training $①$ , the policy model refreshes the replay buffer $②$ and then actively samples complex data $③$ . Then, step-wise MCTS $④$ is performed to obtain fine-grained preference pairs that point out the erroneous fragments in responses. Finally, the models are updated via direct preference optimization $⑤$ to improve the responses. The fire and frozen icons denote that parameters are updated and fixed, respectively.
111
+
112
+ Sampling complex data. The warm-up model from the previous stage is used to refresh the replay buffer by feeding back the complexity of samples. The replay buffer is initialized with a random $50\%$ sample from the tool-use dataset. Each example in the buffer is represented as $x_{buff} = \langle q,\mathcal{T},c\rangle$ , where $c$ denotes the complexity of the sample. In practice, the model's generation perplexity $h$ is used to measure the complexity of a sample, i.e., $c = h$ . The generation perplexity of the target response can be computed as follows:
113
+
114
+ $$
115
+ h = \sqrt [ n ]{\frac {1}{P _ {\mathcal {M}} (y \mid q , \mathcal {T})}}, \tag {4}
116
+ $$
117
+
118
+ where $P_{\mathcal{M}}(y \mid q, \mathcal{T})$ is the generation probability. Since the perplexity $h$ reflects the degree of generation uncertainty (Gao et al., 2024), we sample the top $10\%$ highest- $h$ data for the subsequent step in each iteration.
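+
+ A sketch of this perplexity-based sampling is given below (ours; it assumes the policy model exposes the per-token log-probabilities of the target response).
+
+ ```python
+ import math
+
+ def generation_perplexity(token_logprobs):
+     """Perplexity of the target response from its per-token log-probabilities (Eq. 4)."""
+     return math.exp(-sum(token_logprobs) / len(token_logprobs))
+
+ def sample_complex(buffer, perplexity_of, fraction=0.10):
+     """Keep the fraction of buffer samples with the highest generation perplexity."""
+     scored = sorted(buffer, key=perplexity_of, reverse=True)
+     cutoff = max(1, int(len(scored) * fraction))
+     return scored[:cutoff]
+ ```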
119
+
120
+ MCTS for Step-Level Preference. The success of OpenAI o1 provides a compelling illustration of the effectiveness of step-by-step thinking. As a key algorithm, MCTS path exploration can fully traverse the search space and provide greater data diversity (Grill et al., 2020). Inspired by these, we propose to integrate MCTS into training for collecting step-level preference data.
121
+
122
+ The step-wise MCTS is achieved by breaking down the expansion step into discrete steps, transforming instance-level rewards into granular step-level signals. Specifically, it begins from a root node $s_0$ (i.e., user query), and unfolds in three iterative stages: selection, expansion, and backup:
123
+
124
+ (1) Select. It is guided by two key variables:
125
+
126
+ $Q(s_{t},a)$ is the value of taking action $a$ in state $s_t$ , and $N(s_{t})$ is the visitation frequency of state $s_t$ . We employ Predictor + Upper Confidence bounds applied to Trees (PUCT) (Rosin, 2011) to navigate the trade-off between exploration and exploitation. At node $s_t$ , the subsequent node follows the formula:
127
+
128
+ $$
129
+ s _ {t + 1} = \arg \max _ {a} \left[ Q \left(s _ {t}, a\right) + c \cdot p (a \mid s _ {t}) \frac {\sqrt {N \left(s _ {t}\right)}}{1 + N \left(n \left(s _ {t} , a\right)\right)} \right] \tag {5}
130
+ $$
131
+
132
+ where $p(a\mid s_t) = \pi_\theta (a\mid q,\mathcal{T},s_t)$ denotes the policy $\pi_{\theta}(\cdot)$ 's probability of generating an action step $a$ , $c$ is a trade-off hyperparameter, and $n(s_{t},a)$ denotes the next state reached by taking action $a$ in state $s_t$ . We constrain the policy model to generate fine-grained fragments (e.g., an argument assignment operation, like weather $=$ 'unknown' in Figure 2) by managing the termination characters (e.g., $\langle \cdot ,\cdot \rangle$ ).
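+
+ A minimal sketch of the PUCT selection rule in Eq. 5 (ours; node statistics are kept in plain dictionaries):
+
+ ```python
+ import math
+
+ def puct_select(parent_visits, children, c_puct):
+     """PUCT rule of Eq. 5: argmax_a [ Q(s,a) + c * p(a|s) * sqrt(N(s)) / (1 + N(next)) ].
+
+     children: list of dicts with keys 'Q' (action value), 'prior' (policy
+     probability p(a|s_t)), and 'N' (visit count of the resulting next state).
+     """
+     def score(child):
+         exploration = c_puct * child["prior"] * math.sqrt(parent_visits) / (1 + child["N"])
+         return child["Q"] + exploration
+     return max(children, key=score)
+ ```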
133
+
134
+ (2) Expand. It occurs at a leaf node during the selection process to integrate new nodes and assess rewards. The reward $r(s_{t},a)$ for executing step $a$ in state $s_t$ is quantified by the reward difference between states $\mathcal{R}(s_t)$ and $\mathcal{R}(s_{t + 1})$ , showing the benefit of action $a$ in state $s_t$ . As defined in Eq.6, reward computation merges outcome correctness $\mathcal{O}$ with self-evaluation $\mathcal{C}$ . Following Xie et al. (2024), we define self-evaluation with Eval Prompt 10 as Eq.7.
135
+
136
+ $$
137
+ \mathcal {R} \left(s _ {t}\right) = \mathcal {O} \left(s _ {t}\right) + \mathcal {C} \left(s _ {t}\right), \tag {6}
138
+ $$
139
+
140
+ $$
141
+ \mathcal {C} (s _ {t}) = \pi_ {\theta} (c s \mid p r o m p t _ {e v a l}, q, a, \mathcal {T}, s _ {t}), \tag {7}
142
+ $$
143
+
144
+ where $cs$ denotes the confidence score in token-level probability for correctness. Future rewards
145
+
146
+ are anticipated by simulating upcoming scenarios through roll-outs, following the selection and expansion process until reaching a terminal state (i.e., complete response or exceeds the maximum length).
147
+
148
+ (3) Backup. Once a terminal state is reached, we carry out a bottom-up update from the terminal node back to the root. We update the visit count $N$ , the state value $V$ , and the action value $Q$ :
149
+
150
+ $$
151
+ V (s _ {t}) \leftarrow \sum_ {a} N (s _ {t + 1}) Q (s _ {t}, a) / \sum_ {a} N (s _ {t + 1}), \tag {8}
152
+ $$
153
+
154
+ $$
155
+ Q \left(s _ {t}, a\right) \leftarrow r \left(s _ {t}, a\right) + \gamma V \left(s _ {t + 1}\right), \tag {9}
156
+ $$
157
+
158
+ where $\gamma$ is the discount for future state values.
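+
+ A sketch of this backup step is given below (ours; each node is assumed to store per-action reward, value, and visit statistics).
+
+ ```python
+ def backup(path, leaf_value, gamma=1.0):
+     """Bottom-up update of Eqs. 8-9 along the selected path.
+
+     path: list of (node, action) pairs from the root to the parent of the terminal
+     node; each node keeps per-action dictionaries r, Q, N and a state value V.
+     """
+     value = leaf_value
+     for node, action in reversed(path):
+         node.Q[action] = node.r[action] + gamma * value      # Eq. 9
+         node.N[action] += 1
+         total = sum(node.N.values())
+         node.V = sum(node.N[a] * node.Q[a] for a in node.N) / max(total, 1)  # Eq. 8
+         value = node.V
+     return value
+ ```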
159
+
160
+ We use the action value $\mathcal{Q}$ to indicate the preference for candidate steps, with higher values showing more preferred next steps. For each node in the search tree, we choose the steps with the highest and lowest $\mathcal{Q}$ as the preferred and dispreferred responses, respectively, and consider the prefix path as the question. See Appendix C.1 for an example. Therefore, our method leverages MCTS to generate numerous negative trajectories with fine-grained deficiencies, thereby enhancing data diversity.
161
+
162
+ Iterative preference optimization. Given the step-level preferences collected via MCTS, we tune the policy model via SimPO (Meng et al., 2024), a variant of DPO (Rafailov et al., 2024), because it reduces computational overhead by eliminating the need for a reference model. After optimization, we obtain the updated policy $\pi_{\theta(i)}$ and repeat the complex-data sampling process to iteratively update the policy model.
163
+
164
+ As a variant of DPO, it eliminates the need for a reference model and introduces a simple reference-free reward aligned with generation, i.e., length-normalized reward:
165
+
166
+ $$
167
+ r _ {\text {S i m P O}} (x, y) = \frac {\beta}{| y |} \sum_ {i = 1} ^ {| y |} \log \pi_ {\theta} \left(y _ {i} \mid x, y _ {< i}\right), \tag {10}
168
+ $$
169
+
170
+ where $\beta$ is a constant that controls the scaling of the reward difference. Using the shorthand $h_{\pi_\theta}^{y_w} = \frac{\beta}{|y_w|}\log \pi_\theta (y_w|x), h_{\pi_\theta}^{y_l} = \frac{\beta}{|y_l|}\log \pi_\theta (y_l|x)$ , at the $i$ -th iteration, given a batch of preference data $\mathcal{D}_i$ sampled with the latest policy $\pi_{\theta (i - 1)}$ , we denote the policy objective $\ell_i(\theta)$ as follows:
171
+
172
+ $$
173
+ \ell_ {i} \left(\pi_ {\theta}\right) = - \mathbb {E} _ {\left(x, y _ {w}, y _ {l}\right) \sim \mathcal {D} _ {i}} \left[ \log \sigma \left(h _ {\pi_ {\theta}} ^ {y _ {w}} - h _ {\pi_ {\theta}} ^ {y _ {l}} - \gamma\right) \right], \tag {11}
174
+ $$
175
+
176
+ where $\gamma > 0$ represents the target reward margin, ensuring that the preferred response's reward
177
+
178
+ exceeds that of the dispreferred one; $y_{w}$ and $y_{l}$ represent the step-level preferred and dispreferred responses, respectively.
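+
+ A minimal PyTorch-style sketch of the SimPO objective in Eqs. 10-11 is given below (ours; the inputs are the per-token average log-probabilities of $y_w$ and $y_l$ under the current policy, and the $\beta$ and $\gamma$ defaults are illustrative, not the paper's settings).
+
+ ```python
+ import torch.nn.functional as F
+
+ def simpo_loss(avg_logp_chosen, avg_logp_rejected, beta=2.0, gamma=0.5):
+     """SimPO objective of Eqs. 10-11 (reference-free, length-normalized).
+
+     avg_logp_*: tensors of shape (batch,) holding (1/|y|) * sum_i log pi_theta(y_i | x, y_<i).
+     """
+     reward_chosen = beta * avg_logp_chosen        # Eq. 10 for the preferred response y_w
+     reward_rejected = beta * avg_logp_rejected    # Eq. 10 for the dispreferred response y_l
+     return -F.logsigmoid(reward_chosen - reward_rejected - gamma).mean()  # Eq. 11
+ ```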
179
+
180
+ # 4 Experiments
181
+
182
+ # 4.1 Experimental Setup
183
+
184
+ We take the widely used open-source LLM LLaMA3.1-8B-Instruct as our base model. We use synthetic data from ToolACE for experiments, randomly selecting $90\%$ for warm-up training and $50\%$ for reinforcement learning to balance performance and cost. For warm-up training, we adopt the parameter-efficient training strategy LoRA (Hu et al., 2022). For reinforcement learning, we employ SimPO, a variant of DPO, for preference optimization, utilizing the QLoRA parameter-efficient training strategy (Dettmers et al., 2024). For more implementation details and preference optimization analysis, see Appendix B.
185
+
186
+ Evaluation Dataset. In addition to BFCL, we use API-Bank (Li et al., 2023), which consists of 314 tool-use dialogues and 753 API calls. This dataset evaluates models' abilities to correctly invoke a known API (L-1) based on a query and to retrieve and call APIs from a tool list (L-2).
187
+
188
+ Baselines We compare the overall performance with state-of-the-art closed-source models (e.g., GPT-series, Gemini) and open-source models (e.g., Llama-3.1-8B-Instruct, Qwen2.5-7B (Team, 2024)), as well as open-source models fine-tuned on tool-use datasets, including the ToolACE-8B model (Llama-3.1-8B-Instruct fine-tuned on ToolACE), the xLAM-series (Zhang et al., 2024), and the Hammer-series (Lin et al., 2024).
189
+
190
+ # 4.2 Overall Performance
191
+
192
+ The overall performance of iTool-8B and the baseline models is shown in Table 1 and Table 2. Our model consistently achieves superior performance at comparable scales $(\sim 8\mathrm{B})$ . Specifically, it shows consistently advantageous performance on API-Bank and BFCL compared with open-source models, and also outperforms most closed-source and larger open-source models on BFCL (e.g., GPT-4-series models). For example, it outperforms xLAM-8x22b-r by 5.27 points in overall accuracy. Moreover, it demonstrates its superiority in challenging scenarios (e.g., Live), which indicates that our method learns advanced tool-use capabilities effectively from synthetic data. This is primarily due to our iterative ReFT strategy, which continuously
193
+
194
+ <table><tr><td>Rank</td><td>Overall Acc</td><td>Model</td><td>Non-live</td><td>Live</td><td>Multi turn</td><td>Rel / Irrel</td></tr><tr><td>1</td><td>63.26</td><td>iTool-8B (FC)</td><td>88.82</td><td>78.29</td><td>23.84</td><td>84.90/80.72</td></tr><tr><td>2</td><td>62.19</td><td>GPT-4o-2024-08-06 (FC)</td><td>86.15</td><td>75.43</td><td>25.00</td><td>63.41/82.93</td></tr><tr><td>3</td><td>61.89</td><td>GPT-4-turbo-2024-04-09 (FC)</td><td>88.80</td><td>76.23</td><td>24.88</td><td>73.17/79.76</td></tr><tr><td>4</td><td>60.47</td><td>GPT-4o-mini-2024-07-18 (FC)</td><td>83.72</td><td>70.19</td><td>27.50</td><td>80.49/71.77</td></tr><tr><td>5</td><td>60.44</td><td>ToolACE-8B (FC)</td><td>88.94</td><td>74.99</td><td>17.38</td><td>80.49/85.71</td></tr><tr><td>6</td><td>58.15</td><td>GPT-4o-mini-2024-07-18 (Prompt)</td><td>88.69</td><td>74.63</td><td>11.13</td><td>75.61/81.00</td></tr><tr><td>7</td><td>57.99</td><td>xLAM-8x22b-r (FC)</td><td>87.51</td><td>71.97</td><td>14.50</td><td>85.37/67.29</td></tr><tr><td>8</td><td>57.92</td><td>Gemini-1.5-Flash-002 (Prompt)</td><td>87.60</td><td>76.28</td><td>9.88</td><td>85.37/78.54</td></tr><tr><td>9</td><td>57.69</td><td>Hammer2.0-7b (FC)</td><td>88.54</td><td>69.79</td><td>14.75</td><td>95.12/68.46</td></tr><tr><td>10</td><td>57.45</td><td>o1-mini-2024-09-12 (Prompt)</td><td>83.84</td><td>75.39</td><td>13.12</td><td>48.78/88.04</td></tr><tr><td>11</td><td>56.80</td><td>mistral-large-2407 (FC)</td><td>81.41</td><td>68.37</td><td>20.62</td><td>75.61/49.44</td></tr><tr><td>12</td><td>56.51</td><td>Gemini-1.5-Pro-002 (Prompt)</td><td>89.63</td><td>74.41</td><td>5.50</td><td>65.85/77.30</td></tr><tr><td>13</td><td>55.86</td><td>Gemini-1.5-Flash-001 (Prompt)</td><td>85.74</td><td>69.21</td><td>12.62</td><td>82.93/67.84</td></tr><tr><td>14</td><td>55.78</td><td>GPT-4-turbo-2024-04-09 (Prompt)</td><td>88.80</td><td>69.04</td><td>9.50</td><td>82.93/58.95</td></tr><tr><td>15</td><td>55.10</td><td>Gemini-1.5-Pro-001 (Prompt)</td><td>86.17</td><td>73.12</td><td>6.00</td><td>56.10/85.00</td></tr><tr><td>16</td><td>54.41</td><td>xLAM-7b-r (FC)</td><td>80.86</td><td>67.88</td><td>14.50</td><td>97.56/64.05</td></tr><tr><td>17</td><td>54.27</td><td>Qwen2.5-7B-Instruct (Prompt)</td><td>85.58</td><td>65.97</td><td>11.25</td><td>92.68/64.95</td></tr><tr><td>18</td><td>53.67</td><td>Llama-3.1-70B-Instruct (Prompt)</td><td>87.50</td><td>61.13</td><td>12.38</td><td>92.68/58.38</td></tr><tr><td>19</td><td>53.66</td><td>Gemma-2-27b-it (Prompt)</td><td>87.39</td><td>69.48</td><td>4.12</td><td>87.80/68.76</td></tr><tr><td>20</td><td>53.00</td><td>GPT-3.5-Turbo-0125 (FC)</td><td>78.52</td><td>61.22</td><td>19.25</td><td>97.56/35.16</td></tr><tr><td>21</td><td>52.50</td><td>Gemma-2-9b-it (Prompt)</td><td>84.52</td><td>69.21</td><td>3.75</td><td>87.80/72.45</td></tr><tr><td>22</td><td>51.59</td><td>Hammer2.0-1.5b (FC)</td><td>84.44</td><td>63.22</td><td>7.13</td><td>92.68/60.64</td></tr><tr><td>23</td><td>51.50</td><td>Meta-Llama-3-70B-Instruct (Prompt)</td><td>85.10</td><td>66.15</td><td>3.25</td><td>92.68/52.78</td></tr><tr><td>27</td><td>50.15</td><td>Llama-3.1-8B-Instruct (Prompt)</td><td>81.15</td><td>57.93</td><td>11.38</td><td>78.05/41.62</td></tr><tr><td>28</td><td>49.02</td><td>xLAM-8x7b-r (FC)</td><td>73.93</td><td>69.12</td><td>4.00</td><td>87.80/68.12</td></tr><tr><td>29</td><td>48.82</td><td>Qwen2.5-1.5B-Instruct (Prompt)</td><td>53.99</td><td>61.71</td><td>6.62</td><td>75.61/67.17</td></tr><tr><td>42</td><td>42.98</td><td>Llama-3.2-3B-Instruct 
(Prompt)</td><td>11.11</td><td>50.91</td><td>4.00</td><td>63.41/68.81</td></tr></table>
195
+
196
+ Table 1: The leaderboard of different models in four tool-use scenarios of BFCL (v3) benchmark. The top 20 models and baselines are listed for comparison. FC denotes the model is tailored for functional calling. Rel and Irrel denote relevance and irrelevance detection, respectively, indicating whether to call a tool or not. $\spadesuit$ denotes closed-source model, $\heartsuit$ denotes open-source base model, $\clubsuit$ denotes open-source fine-tuned model.
197
+ pinpoints and optimizes the model's deficiencies.
198
+
199
+ <table><tr><td>Model</td><td>API-Bank L1</td><td>API-Bank L2</td></tr><tr><td>♠ GPT-3.5-turbo-0125</td><td>70.43</td><td>52.59</td></tr><tr><td>♠ GPT-4-0613</td><td>75.94</td><td>48.89</td></tr><tr><td>♠ GPT-4-turbo-2024-04-09</td><td>72.43</td><td>39.26</td></tr><tr><td>♠ GPT-4o-mini-2024-07-18</td><td>74.69</td><td>45.93</td></tr><tr><td>♠ GPT-4o-2024-05-13</td><td>76.19</td><td>42.96</td></tr><tr><td>♥ Alpaca-7B</td><td>24.06</td><td>5.19</td></tr><tr><td>♥ ChatGLM-6B</td><td>23.62</td><td>13.33</td></tr><tr><td>♣ Lynx-7B</td><td>49.87</td><td>30.37</td></tr><tr><td>♣ xLAM-7b-fc-r</td><td>32.83</td><td>21.48</td></tr><tr><td>♥ LLaMA-3.1-8B-Instruct</td><td>71.18</td><td>37.04</td></tr><tr><td>♥ Qwen2.5-7B-Instruct</td><td>72.83</td><td>41.98</td></tr><tr><td>♣ ToolACE-8B</td><td>75.94</td><td>47.41</td></tr><tr><td>♣ iTool-8B</td><td>78.89</td><td>52.87</td></tr></table>
200
+
201
+ # 4.3 Ablation Analysis
202
+
203
+ # 4.3.1 Module Ablation
204
+
205
+ To evaluate the effectiveness of the two components in our method, we conduct an ablation study on: (1) the warm-up training phase (w/o warm-up) and (2) the
206
+
207
+ Table 2: Accuracy performance comparison on API-Bank evaluation system. Bold values represent the highest performance.
208
+
209
+ <table><tr><td>Models</td><td>Non-live</td><td>Live</td><td>Multi-turn</td></tr><tr><td>Base Model</td><td>81.15</td><td>57.93</td><td>11.38</td></tr><tr><td>+ base SFT</td><td>88.94 ↑7.8</td><td>74.99 ↑17</td><td>17.38 ↑6.0</td></tr><tr><td>+ IRT</td><td>88.86 ↓0.1</td><td>76.51 ↑1.5</td><td>20.65 ↑3.3</td></tr><tr><td>+ warm-up SFT</td><td>88.35 ↓7.2</td><td>75.84 ↑17.9</td><td>19.65 ↑8.3</td></tr><tr><td>+ IRL (iTool)</td><td>88.82 ↑0.5</td><td>78.29 ↑3.2</td><td>23.84 ↑4.2</td></tr><tr><td>Total</td><td>↑9.5</td><td>↑21.2</td><td>↑12.5</td></tr></table>
210
+
211
+ Table 3: The module ablation performance (↑ = increase, ↓ = decrease).
212
+
213
+ Iterative Reinforcement Learning (IRL) module (w/o IRL). We adopt LLaMA-3.1-8B-Instruct as the Base model for benchmarking, ensuring a consistent baseline across all experimental conditions. From Table 3, we find that all components are essential within our method. base SFT denotes SFT on the entire gold-labeled dataset. iTool achieves a comparable level to SFT on the Non-live metric, but each module brings substantial improvements on the complex-scenario metrics (Live and Multi-turn). Specifically, the warm-up training and IRL modules individually contribute improvements of 2.3 and 4.2 points, respectively, on the Multi-turn metric. Cumulatively, this yields a 6.5-point improvement over
214
+
215
+ ![](images/bd6692a65acb128b923b9fe5b904cc302da39223a954d8ea92bc58a23ea01db3.jpg)
216
+ Figure 5: The performance progression of easy to hard warm-up training on Live and Overall metrics.
217
+
218
+ ![](images/40a9390aefeb266e2d0143cd62bbfb10bb9b32f78d0f81a253ba1798ce4e50e4.jpg)
219
+ Figure 6: The result of ablation study on MCTS in iTool on key metrics.
220
+
221
+ SFT and a 12.5-point gain relative to the Base model, highlighting its effectiveness in complex, multi-step reasoning tasks.
222
+
223
+ # 4.3.2 Deeper Ablation
224
+
225
+ (1) In warm-up training, we conducted a study on the easy-to-hard SFT strategy. We present the performance progression from easy to hard and compare it with the base model. The experimental results are summarized in Figure 5. From the results, we observe that our strategy shows gradual improvement. There is a significant leap from base to easy, and the second-largest improvement occurs from medium to hard. On the synthetic data, the model quickly learns the task patterns of tool use from the easier stages, which in turn benefits the harder scenarios. This indicates that the model benefits from the curriculum learning process that goes from easy to hard.
226
+
227
+ (2) In iterative reinforcement learning, we conducted a study on MCTS and iteration counts.
228
+
229
+ The results are illustrated in Figures 6 and 7, respectively. To replace MCTS, we sample four responses
230
+
231
+ ![](images/47c14f5ad5a3db885dfdcda429a70e02b3236d1f71a0149c32414475a82e4375.jpg)
232
+
233
+ ![](images/1300cd8b22e3cbacd7c9b0df1780fc705a0f2b46eb8ec951fd6874d7341b5684.jpg)
234
+ Figure 7: The performance variation of our model with the increase of iterations.
235
+
236
+ ![](images/668c6dc72b769d8b88a55de08f413323552ed0fd7ecf6904e5a5f3ce9a9fe8ae.jpg)
237
+
238
+ ![](images/8d001caeeae758941548f5de6e5483d7d339f2ff9ff4d5b4fc7edb7c153b9461.jpg)
239
+
240
+ from the policy model and select the responses with the highest and lowest probabilities as preference pairs. These pairs are then used for subsequent preference optimization (w/o MCTS). From Figure 6, we observe that the model's performance deteriorates when MCTS is replaced. From Figure 7, we observe that as iterations increase, our method initially shows an upward trend before declining. The model performs best around 3 iterations, especially in the Multi-turn and Live scenarios. This indicates that MCTS can effectively mitigate the issue of insufficient data diversity with a small number of iterations. However, excessive iterations can lead to overfitting, resulting in a decrease in data diversity.
241
+
242
+ # 4.3.3 Base Model Analysis.
243
+
244
+ To further validate the effectiveness of our method across base models, we applied it to other base models. Due to computational resource constraints, we compared the following base models ( $< 10B$ ): (1) Llama-3.2-3B-Instruct and (2) Qwen2.5-7B-Instruct (Team, 2024). From Table 4, our method exhibits remarkably stable performance across different base models, highlighting its robustness. On Llama-3.2-3B, our method improved performance by $18\%$ over the base model. On Qwen2.5-7B, it achieved the best performance at $63.22\%$ .
245
+
246
+ # 4.4 Training Gains Analysis
247
+
248
+ To analyze the training gains of our method, as detailed in Section 2.2, we test the training gains
249
+
250
+ <table><tr><td>Base Model</td><td>Method</td><td>Overall</td><td>Non-live</td><td>Live</td><td>Multi-turn</td><td>Rel / Irrel</td></tr><tr><td rowspan="3">Llama-3.1-8B-Instruct</td><td>Vanilla</td><td>50.15</td><td>81.15</td><td>57.93</td><td>11.38</td><td>78.05 / 41.62</td></tr><tr><td>Baseline</td><td>60.44</td><td>88.94</td><td>74.99</td><td>17.38</td><td>80.49 / 85.71</td></tr><tr><td>Our</td><td>63.26</td><td>88.82</td><td>78.29</td><td>23.84</td><td>84.90 / 80.72</td></tr><tr><td rowspan="3">Llama-3.2-3B-Instruct</td><td>Vanilla</td><td>42.98</td><td>11.11</td><td>50.91</td><td>4.00</td><td>63.41 / 68.81</td></tr><tr><td>Baseline</td><td>58.22</td><td>89.27</td><td>73.90</td><td>11.50</td><td>84.37 / 78.20</td></tr><tr><td>Our</td><td>62.93</td><td>90.59</td><td>76.43</td><td>15.82</td><td>84.27 / 87.82</td></tr><tr><td rowspan="3">Qwen2.5-7B-Instruct</td><td>Vanilla</td><td>54.27</td><td>85.58</td><td>65.97</td><td>11.25</td><td>92.68 / 64.95</td></tr><tr><td>Baseline</td><td>60.69</td><td>90.02</td><td>76.23</td><td>15.92</td><td>73.47 / 86.98</td></tr><tr><td>Our</td><td>63.93</td><td>91.29</td><td>82.28</td><td>22.38</td><td>80.28 / 85.12</td></tr></table>
251
+
252
+ Table 4: The accuracy performance comparison of base models with different methods on BFCL benchmark. Vanilla denotes source base model, Baseline denotes supervised fine-tuned base model, Our denotes iTool.
253
+
254
+ ![](images/c97ca88af53dc79578946c417e06488c26df0cad36ac8275d0baacb760d08dfd.jpg)
255
+ Figure 8: The change curve of training gains as the data scale increases on key metrics.
256
+
257
+ ![](images/dc96fdb2b08936174555a7ba6e0186e3d52f35e4ef7d8c27233e55baad9ec627.jpg)
258
+
259
+ of our method. From Figure 8, our method shows greater training gains as the data scale increases on Live and Overall. Unlike SFT, whose training benefit curve flattens beyond $30\%$, our model exhibits a steeper curve on the Live metric. This suggests that our model can alleviate the internal decay of training gains by enhancing its advanced capabilities in complex scenarios. An additional training cost analysis is provided in Appendix B.2.
260
+
261
+ # 4.5 Generalization Evaluation of Synthetic Data
262
+
263
+ We evaluated the generalization capability of our method across diverse dataset types and model architectures. Experiments included synthetic datasets (Toolace, xLAM (Zhang et al., 2024)) and a non-synthetic dataset (BFCL-half, using $50\%$ of BFCL-Live data for training and the remainder for testing). Performance was assessed on Llama3.1-8B-Instruct and Llama3.2-3B-Instruct, with results averaged across Live and Multi-turn metrics.
264
+
265
+ Our method consistently improved performance across all datasets. The largest gains were observed
266
+
267
+ on synthetic datasets (+4.42 to +6.49), with more modest improvements on non-synthetic data (+2.17 to +3.65), demonstrating effective generalization with the strongest performance on synthetic benchmarks. An additional analysis of how training gain dynamics generalize across model sizes is provided in Appendix B.3.
268
+
269
+ # 5 Related Work
270
+
271
+ # 5.1 Tool use of LLMs
272
+
273
+ Pioneering works like Toolformer (Schick et al., 2023) and ToolAlpaca (Tang et al., 2023) have explored the potential of LLMs in tool use. Previously, several tuning-free methods were proposed, which involve manipulating prompts (e.g., Xu et al., 2023; Shi et al., 2024; Qiao et al., 2024) or enhancing execution frameworks (e.g., ReAct (Yao et al., 2023), RestGPT (Song et al., 2023)) to unlock inherent capabilities.
274
+
275
+ Because the above methods are limited to the user-defined tools included in prompts, attention has shifted to tuning-based methods with synthetic data. ToolLlama (Qin et al., 2023) notably expanded the toolset and investigated the impact of data scaling on performance. More efficient data synthesis techniques have been proposed for tool use (e.g., ToolACE (Liu et al., 2024), BUTTON (Chen et al., 2024), and xLAM (Zhang et al., 2024)).
276
+
277
+ # 5.2 Reinforcement Learning
278
+
279
+ Learning from human feedback, commonly realized through reinforcement learning, is crucial for aligning LLMs with human intentions (Leike et al., 2018). ReFT (Luong et al., 2024) enhances this process by combining reinforcement learning with SFT to optimize model
280
+
281
+ <table><tr><td rowspan="2">Dataset (Type)</td><td colspan="3">Llama3.1-8B-Instruct</td><td colspan="3">Llama3.2-3B-Instruct</td></tr><tr><td>Baseline (SFT)</td><td>iTool</td><td>Δ</td><td>Baseline (SFT)</td><td>iTool</td><td>Δ</td></tr><tr><td>Toolace†</td><td>46.18</td><td>51.06</td><td>+4.88</td><td>40.36</td><td>46.85</td><td>+6.49</td></tr><tr><td>xLAM†</td><td>42.74</td><td>48.47</td><td>+5.73</td><td>37.72</td><td>42.14</td><td>+4.42</td></tr><tr><td>BFCL-half‡</td><td>41.32</td><td>44.97</td><td>+3.65</td><td>34.65</td><td>36.82</td><td>+2.17</td></tr></table>
282
+
283
+ Table 5: Performance across datasets and models. † denotes synthetic data, and ‡ denotes non-synthetic data.
284
+
285
+ performance using reward signals. Online reinforcement learning algorithms (Schulman et al., 2017; Zheng et al., 2023) are complex and difficult to optimize. Recently, Direct Preference Optimization (DPO) (Rafailov et al., 2024), a simpler offline algorithm, reparameterizes the reward function to learn a policy model from preference data directly, enhancing simplicity and training stability. Besides, a variety of preference optimization objectives have been proposed, e.g., SimPo (Meng et al., 2024), IPO (Azar et al., 2024), ORPO (Hong et al., 2024) and KTO (Ethayarajh et al., 2024).
286
+
287
+ Further studies have extended this approach to an iterative training setup, by continuously updating the reference model with the most recent policy model or generating new preference pairs at each iteration (Dong et al., 2024; Yuan et al., 2024; Kim et al., 2024; Xiong et al., 2024).
288
+
289
+ # 6 Conclusion
290
+
291
+ Equipping LLMs with external tools is becoming a viable method to enhance their capabilities. In this paper, we study enhancing the advanced tool-use capabilities in a complex scenario from synthetic data. We find that there are training decay issues when training with synthetic tool-use data. To alleviate it, we propose an iterative reinforced fine-tuning strategy. It can continually pinpoint the model's wrong fragments in its responses and address these deficiencies by preference optimization. The experimental results demonstrate the effectiveness of the proposed method.
292
+
293
+ # 7 Limitation
294
+
295
+ While our study has achieved notable advancements, it is important to acknowledge several limitations that could be addressed in future work. First, the iterative reinforcement learning process (particularly the Monte Carlo Tree Search) requires substantial computational resources to generate fine-grained preference data. Although it is difficult to solve, we have effectively implemented
296
+
297
+ parameter constraints to manage computational costs efficiently (e.g., 7 hours on 8 V100 GPUs per iteration), achieving a balance between computational feasibility and model performance. Additionally, due to limited computing resources, we are not able to validate our method on larger 30B or 70B base models. Finally, when analyzing the synthetic tool-use data, only a single dataset was tested. Testing more publicly available datasets would strengthen the validity and persuasiveness of the conclusions. We will address these limitations in our future work.
298
+
299
+ # Acknowledgements
300
+
301
+ The research in this article is supported by the New Generation Artificial Intelligence of China (2024YFE0203700), National Natural Science Foundation of China under Grants U22B2059 and 62176079.
302
+
303
+ # References
304
+
305
+ Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello. 2024. A general theoretical paradigm to understand learning from human preferences. In International Conference on Artificial Intelligence and Statistics, pages 4447-4455. PMLR.
306
+ Mingyang Chen, Haoze Sun, Tianpeng Li, Fan Yang, Hao Liang, Keer Lu, Bin Cui, Wentao Zhang, Zenan Zhou, and Weipeng Chen. 2024. Facilitating multi-turn function calling for llms via compositional instruction tuning. arXiv preprint arXiv:2410.12952.
307
+ Rémi Coulom. 2006. Efficient selectivity and backup operators in monte-carlo tree search. In International conference on computers and games, pages 72-83. Springer.
308
+ Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2024. Qlora: Efficient finetuning of quantized llms. Advances in Neural Information Processing Systems, 36.
309
+ Hanze Dong, Wei Xiong, Bo Pang, Haoxiang Wang, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo,
310
+
311
+ Caiming Xiong, and Tong Zhang. 2024. Rlhf workflow: From reward modeling to online rlhf. arXiv preprint arXiv:2405.07863.
312
+ Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 2024. Kto: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306.
313
+ Shen Gao, Zhengliang Shi, Minghang Zhu, Bowen Fang, Xin Xin, Pengjie Ren, Zhumin Chen, Jun Ma, and Zhaochun Ren. 2024. Confucius: Iterative tool learning from introspection feedback by easy-to-difficult curriculum. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 18030-18038.
314
+ Jean-Bastien Grill, Florent Altché, Yunhao Tang, Thomas Hubert, Michal Valko, Ioannis Antonoglou, and Rémi Munos. 2020. Monte-carlo tree search as regularized policy optimization. In International Conference on Machine Learning, pages 3769-3778. PMLR.
315
+ Tom Gunter, Zirui Wang, Chong Wang, Ruoming Pang, Andy Narayanan, Aonan Zhang, Bowen Zhang, Chen Chen, Chung-Cheng Chiu, David Qiu, et al. 2024. Apple intelligence foundation language models. arXiv preprint arXiv:2407.21075.
316
+ Jiwoo Hong, Noah Lee, and James Thorne. 2024. Orpo: Monolithic preference optimization without reference model. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 11170-11189.
317
+ Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2022. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations.
318
+ Dahyun Kim, Yungi Kim, Wonho Song, Hyeonwoo Kim, Yunsu Kim, Sanghoon Kim, and Chanjun Park. 2024. sdpo: Don't use your data all at once. arXiv preprint arXiv:2403.19270.
319
+ Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, and Shane Legg. 2018. Scalable agent alignment via reward modeling: a research direction. arXiv preprint arXiv:1811.07871.
320
+ Minghao Li, Yingxiu Zhao, Bowen Yu, Feifan Song, Hangyu Li, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. 2023. Api-bank: A comprehensive benchmark for tool-augmented llms. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3102-3116.
321
+ Wenjun Li, Dexun Li, Kuicai Dong, Cong Zhang, Hao Zhang, Weiwen Liu, Yasheng Wang, Ruiming Tang, and Yong Liu. 2025. Adaptive tool use in large language models with meta-cognition trigger. arXiv preprint arXiv:2502.12961.
322
+
323
+ Xinzhe Li. 2025. A review of prominent paradigms for lmm-based agents: Tool use, planning (including rag), and feedback learning. In Proceedings of the 31st International Conference on Computational Linguistics, pages 9760-9779.
324
+ Qiqiang Lin, Muning Wen, Qiuying Peng, Guanyu Nie, Junwei Liao, Jun Wang, Xiaoyun Mo, Jiamu Zhou, Cheng Cheng, Yin Zhao, et al. 2024. Hammer: Robust function-calling for on-device language models via function masking. arXiv preprint arXiv:2410.04587.
325
+ Weiwen Liu, Xu Huang, Xingshan Zeng, Xinlong Hao, Shuai Yu, Dexun Li, Shuai Wang, Weinan Gan, Zhengying Liu, Yuanqing Yu, et al. 2024. Toolace: Winning the points of llm function calling. arXiv preprint arXiv:2409.00920.
326
+ Ne Luo, Aryo Pradipta Gema, Xuanli He, Emile van Krieken, Pietro Lesci, and Pasquale Minervini. 2025. Self-training large language models for tool-use without demonstrations. arXiv preprint arXiv:2502.05867.
327
+ Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, and Hang Li. 2024. Reft: Reasoning with reinforced fine-tuning. Preprint, arXiv:2401.08967.
328
+ Graziano A Manduzio, Federico A Galatolo, Mario GCA Cimino, Enzo Pasquale Scilingo, and Lorenzo Cominelli. 2024. Improving small-scale large language models function calling for reasoning tasks. arXiv preprint arXiv:2410.18890.
329
+ Yu Meng, Mengzhou Xia, and Danqi Chen. 2024. Simpo: Simple preference optimization with a reference-free reward. In Advances in Neural Information Processing Systems (NeurIPS).
330
+ Shuofei Qiao, Ningyu Zhang, Runnan Fang, Yujie Luo, Wangchunshu Zhou, Yuchen Eleanor Jiang, Huajun Chen, et al. 2024. Autoact: Automatic agent learning from scratch for qa via self-planning. In ICLR 2024 Workshop on Large Language Model (LLM) Agents.
331
+ Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. 2023. Toollm: Facilitating large language models to master $16000+$ real-world apis. In The Twelfth International Conference on Learning Representations.
332
+ Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Jun Xu, and Ji-Rong Wen. 2024. Tool learning with large language models: A survey. arXiv preprint arXiv:2405.17935.
333
+ Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36.
334
+
335
+ Christopher D Rosin. 2011. Multi-armed bandits with episode context. Annals of Mathematics and Artificial Intelligence, 61(3):203-230.
336
+ Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. Advances in Neural Information Processing Systems, 36:68539-68551.
337
+ John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
338
+ Zhengliang Shi, Shen Gao, Xiuyi Chen, Yue Feng, Lingyong Yan, Haibo Shi, Dawei Yin, Pengjie Ren, Suzan Verberne, and Zhaochun Ren. 2024. Learning to use tools via cooperative and interactive agents. In *Findings of the Association for Computational Linguistics: EMNLP* 2024, pages 10642-10657, Miami, Florida, USA. Association for Computational Linguistics.
339
+ Joykirat Singh, Raghav Magazine, Yash Pandya, and Akshay Nambi. 2025. Agentic reasoning and tool integration for llms via reinforcement learning. arXiv preprint arXiv:2505.01441.
340
+ Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, et al. 2023. Restgpt: Connecting large language models with real-world restful apis. arXiv preprint arXiv:2306.06624.
341
+ Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Boxi Cao, and Le Sun. 2023. Toolalpaca: Generalized tool learning for language models with 3000 simulated cases. arXiv preprint arXiv:2306.05301.
342
+ Qwen Team. 2024. Qwen2.5: A party of foundation models.
343
+ Yuxi Xie, Anirudh Goyal, Wenyue Zheng, Min-Yen Kan, Timothy P Lillicrap, Kenji Kawaguchi, and Michael Shieh. 2024. Monte carlo tree search boosts reasoning via iterative preference learning. arXiv preprint arXiv:2405.00451.
344
+ Wei Xiong, Chengshuai Shi, Jiaming Shen, Aviv Rosenberg, Zhen Qin, Daniele Calandriello, Misha Khalman, Rishabh Joshi, Bilal Piot, Mohammad Saleh, Chi Jin, Tong Zhang, and Tianqi Liu. 2024. Building math agents with multi-turn iterative preference learning. Preprint, arXiv:2409.02392.
345
+ Benfeng Xu, Licheng Zhang, Zhendong Mao, Quan Wang, Hongtao Xie, and Yongdong Zhang. 2020. Curriculum learning for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6095-6104.
346
+
347
+ Qiantong Xu, Fenglu Hong, Bo Li, Changran Hu, Zhengyu Chen, and Jian Zhang. 2023. On the tool manipulation capability of open-sourced large language models. In NeurIPS 2023 Foundation Models for Decision Making Workshop.
348
+ Fanjia Yan, Huanzhi Mao, Charlie Cheng-Jie Ji, Tianjun Zhang, Shishir G. Patil, Ion Stoica, and Joseph E. Gonzalez. 2024. Berkeley function calling leaderboard.
349
+ Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. 2023. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations.
350
+ Junjie Ye, Yilong Wu, Sixian Li, Yuming Yang, Tao Gui, Qi Zhang, Xuanjing Huang, Peng Wang, Zhongchao Shi, Jianping Fan, et al. 2024. Tl-training: A task-feature-based framework for training large language models in tool use. arXiv preprint arXiv:2412.15495.
351
+ Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, and Jason E Weston. 2024. Self-rewarding language models. In *Forty-first International Conference on Machine Learning*.
352
+ Jianguo Zhang, Tian Lan, Ming Zhu, Zuxin Liu, Thai Hoang, Shirley Kokane, Weiran Yao, Juntao Tan, Akshara Prabhakar, Haolin Chen, et al. 2024. xlam: A family of large action models to empower ai agent systems. CoRR.
353
+ Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin, Qin Liu, Yuhao Zhou, et al. 2023. Secrets of rlhf in large language models part i: Ppo. arXiv preprint arXiv:2307.04964.
354
+ Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024. Llamafactory: Unified efficient fine-tuning of $100+$ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand. Association for Computational Linguistics.
355
+
356
+ # A Details in Preliminary Study
357
+
358
+ # A.1 Descriptions of error types
359
+
360
+ Here are the descriptions of all error types; an illustrative classification sketch follows the list.
361
+
362
+ - Parameter Value. The value or type of the parameter does not match the ground truth.
363
+ - Parameter Name. Unable to identify the parameter value from the user query.
364
+ - Parameter Count. Incorrect number of parameters; required parameters are missing.
365
+
366
+ - Tools Count. The wrong number of tools was called.
367
+ - Tool Name. There was an error when calling the tool name, such as calling a non-existent tool name or a tool name that does not match the ground truth.
368
+ - Code Syntax. The tool call does not comply with the syntax of Python, Java, or JavaScript.
369
+ - Other. Errors other than those mentioned above.
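+
+ For illustration only, a hedged sketch of how a parsed tool call could be compared against the ground truth to assign one of the error types above; the dict layout and attribution order are assumptions, not the paper's exact analysis code.
+
+ ```python
+ # Hedged sketch of error-type attribution. `predicted` and `gold` are assumed to be
+ # lists of dicts like {"name": str, "arguments": {param: value}}; a failed parse is None.
+ def classify_error(predicted, gold):
+     """Return one of the error types above, or None if the call matches the gold label."""
+     if predicted is None:
+         return "Code Syntax"                      # call could not be parsed at all
+     if len(predicted) != len(gold):
+         return "Tools Count"
+     for p, g in zip(predicted, gold):
+         if p["name"] != g["name"]:
+             return "Tool Name"
+         if len(p["arguments"]) != len(g["arguments"]):
+             return "Parameter Count"
+         if set(p["arguments"]) != set(g["arguments"]):
+             return "Parameter Name"
+         if any(p["arguments"][k] != g["arguments"].get(k) for k in p["arguments"]):
+             return "Parameter Value"
+     return None   # no discrepancy detected; anything not caught above falls under "Other"
+ ```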
370
+
371
+ # B Complementary Experiments
372
+
373
+ # B.1 More Implementation Details
374
+
375
+ The experiments were conducted using the publicly available training repository, LLaMA-Factory (Zheng et al., 2024). The training of our model can be done within 28 hours with 8 NVIDIA Tesla V100-SXM2-32GB GPUs. For the trained model, we take the best-performing checkpoint on the validation dataset.
376
+
377
+ Implementation Settings. Due to resource constraints, we employ a parameter-efficient training strategy using LoRA (with rank=16 and alpha=32) during the SFT warm-up phase, and QLoRA (a quantization method from the bitsandbytes $^3$ library with 4 bits) during the reinforcement learning (RL) phase. We utilize a cosine learning rate scheduler with a warm-up ratio of 0.1. More detailed training settings are shown in Table 6.
378
+
379
+ <table><tr><td>Stage</td><td>epoch</td><td>lr</td><td>batch size</td></tr><tr><td>SFT</td><td>3</td><td>easy: 5e-5 medium: 2e-5 hard: 1e-5</td><td>64</td></tr><tr><td>RL</td><td>2</td><td>1e-6</td><td>64</td></tr></table>
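+
+ As an illustration of these adapter settings (not the authors' exact configuration, which is driven through LLaMA-Factory), the same choices could be expressed with the Hugging Face `peft` and `transformers` libraries:
+
+ ```python
+ # Illustrative sketch only: LoRA for the SFT warm-up phase and 4-bit quantization
+ # (QLoRA) for the RL phase, mirroring the settings described in Appendix B.1.
+ from peft import LoraConfig
+ from transformers import BitsAndBytesConfig
+
+ # SFT warm-up: plain LoRA adapters with rank 16 and alpha 32.
+ lora_cfg = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")
+
+ # RL phase: QLoRA, i.e. the same adapters on top of a 4-bit quantized base model.
+ qlora_quant_cfg = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
+ ```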
380
+
381
+ Implementation Settings in MCTS-based RL. In the Expand phase of MCTS, the prompt for self-evaluation is shown in Table 10. When calculating the confidence score for correctness, we evaluate the token-level probabilities of the policy model across four options (A, B, C, D) with respective weights of 1.0, 0.1, -1.0, and -2.0. We sample the
382
+
383
+ model's responses four times and use the weighted average of these samples as the final confidence score.
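+
+ The scoring rule can be written compactly as below; the per-option probabilities are assumed to come from the policy model's token-level distribution over "A"-"D".
+
+ ```python
+ # Sketch of the self-evaluation confidence score: weight the option probabilities of
+ # each sampled response, then average over the samples.
+ OPTION_WEIGHTS = {"A": 1.0, "B": 0.1, "C": -1.0, "D": -2.0}
+
+ def confidence_score(option_probs_per_sample):
+     """`option_probs_per_sample`: list of dicts mapping 'A'..'D' to token probabilities."""
+     scores = [
+         sum(OPTION_WEIGHTS[o] * probs.get(o, 0.0) for o in OPTION_WEIGHTS)
+         for probs in option_probs_per_sample
+     ]
+     return sum(scores) / len(scores)
+
+ # Example with four sampled responses (probability values are illustrative):
+ samples = [{"A": 0.7, "B": 0.2, "C": 0.07, "D": 0.03}] * 4
+ print(confidence_score(samples))  # 0.59
+ ```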
384
+
385
+ To ensure the quality of the sampled preference data, we exclude the following data: (1) pairs with candidate step similarity above $95\%$ , (2) pairs with a $\mathcal{Q}$ -value difference less than 0.1, and (3) accepted samples with a $\mathcal{Q}$ -value below 0.3. In MCTS, to control algorithm overhead, we limit the following parameters: (1) depth, the maximum depth of the search tree, (2) width, the maximum number of child nodes per node, (3) simulation, the maximum number of simulation steps in Expand phase, and (4) iterations, the maximum number of iterations to construct the MCTS search tree. We summarize these parameters in Table 7.
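+
+ The three exclusion rules can be expressed as a simple filter; `similarity` is a hypothetical text-similarity function in [0, 1], and the Q-values are those produced by MCTS.
+
+ ```python
+ # Sketch of the preference-pair quality filter described above.
+ def keep_pair(chosen, rejected, q_chosen, q_rejected, similarity):
+     if similarity(chosen, rejected) > 0.95:   # (1) near-duplicate candidate steps
+         return False
+     if (q_chosen - q_rejected) < 0.1:         # (2) Q-value margin too small
+         return False
+     if q_chosen < 0.3:                        # (3) accepted sample itself too weak
+         return False
+     return True
+ ```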
386
+
387
+ Table 6: The detailed training settings in our method. lr denotes the learning rate. batch size denotes the total batch size, which equals 1 (per device) × 8 (accumulation steps) × 8 (devices).
388
+
389
+ <table><tr><td>Parameters</td><td>Value</td><td>Parameters</td><td>Value</td></tr><tr><td>depth</td><td>3</td><td>c</td><td>1.0</td></tr><tr><td>width</td><td>3</td><td>temperature</td><td>1.5</td></tr><tr><td>simulation</td><td>2</td><td>seed</td><td>42</td></tr><tr><td>iterations</td><td>5</td><td></td><td></td></tr></table>
390
+
391
+ # B.2 Cost Analysis
392
+
393
+ We conducted a cost-benefit analysis to evaluate iTool's performance gains against computational overhead, focusing on MCTS sampling efficiency. Experiments compared the base model, SFT baseline, and iTool across accuracy metrics (BFCL-Live and Multi-turn) and time costs, using an $8 \times 32\mathrm{G}$ V100 GPU configuration.
394
+
395
+ Table 7: The parameter settings in MCTS. $c$ denotes the degree of exploration in the Select phase.
396
+
397
+ <table><tr><td>Model</td><td>Live</td><td>Multi-turn</td><td>Time Cost</td></tr><tr><td>Base Model</td><td>57.93</td><td>11.38</td><td>0h</td></tr><tr><td>SFT Baseline</td><td>74.99</td><td>17.38</td><td>10h</td></tr><tr><td>iTool</td><td>78.29 ↑3.3</td><td>23.84 ↑6.46</td><td>28h (×2.8)</td></tr></table>
398
+
399
+ Table 8: Cost-benefit analysis of different models.
400
+
401
+ Results in Table 8 show that iTool outperforms the SFT baseline by $3.30\%$ in BFCL-Live accuracy and $6.46\%$ in Multi-turn accuracy, with a $2.8\times$ increase in time cost. The significant gains in Multi-turn scenarios, where complexity is highest, demonstrate favorable cost-effectiveness for practical deployment.
402
+
403
+ # B.3 Generalize Across Model Sizes
404
+
405
+ To investigate the efficacy of SFT at scale and examine whether training gain dynamics generalize across model sizes, we conducted a controlled SFT study using three open-source instruction-tuned models of increasing capacity: Llama3.2-3B-Instruct, Llama3.1-8B-Instruct, and Qwen2.5-32B-Instruct. Each model was fine-tuned on incrementally scaled subsets of training data, ranging from minimal to full data regimes. Performance was evaluated on the BFCL-Live benchmark to track accuracy progression as a function of data volume, as shown in Figure 9. The results demonstrate that, across all three model scales, the marginal gains from additional training data follow a decaying trend, that is, performance improvements diminish as data scale increases, indicating consistent saturation behavior regardless of model size. This suggests that while larger models achieve higher absolute performance, their relative gains from scaling data during SFT exhibit predictable attenuation, reinforcing the importance of data efficiency strategies even at large scales.
406
+
407
+ ![](images/3b83abe7201c1b9f701acc01a2b6aabf474744ba12c403fbe9da8a0f7ee50bd6.jpg)
408
+ Figure 9: Training gain dynamics generalize across model sizes.
409
+
410
+ # B.4 Preference Algorithm Analysis
411
+
412
+ In iterative reinforcement learning, we also explore different preference optimization algorithms. Besides the widely used DPO (Rafailov et al., 2024), we also explored SimPO (Meng et al., 2024), IPO (Azar et al., 2024), and ORPO (Hong et al., 2024). DPO reparameterizes the reward function to learn a policy model from preference data directly. IPO is a theoretically grounded approach that avoids DPO's assumption that pairwise preferences can be replaced with pointwise rewards. ORPO introduces a reference-model-free odds ratio term to directly contrast winning and losing responses
413
+
414
+ with the policy model and jointly trains with the SFT objective. SimPO aligns the reference-free reward function in the preference optimization objective with the generation metric. For fair comparisons, we start these algorithms from the same SFT checkpoints, the reference model is initialized as the policy model.
415
+
416
+ For these algorithms, we conducted a thorough search for the optimal hyperparameter settings to ensure a fair comparison. The resulting hyperparameter settings are shown in Table 9. The results of the different preference optimization algorithms with their optimal hyperparameter settings are shown in Figure 10. From the results, we find that iTool with SimPO achieves the best performance. Different preference algorithms do not create significant performance gaps, except for ORPO.
417
+
418
+ ![](images/0c0aafa1edec119b262ad1ccef7d5a7b0e93303d22b7b2ede09f5684d57340fa.jpg)
419
+ Figure 10: The performance of iTool using different preference optimization algorithms on BFCL.
420
+
421
+ # C Case Analysis
422
+
423
+ # C.1 An Example of Preference Pair
424
+
425
+ Table 11 illustrates a preference pair example. The chosen response correctly employs the "Get Trending Result" tool with suitable parameters for the user's request. Conversely, the rejected response is improperly formatted, omits necessary parentheses, and incorrectly assigns the value 1 to the timeframe parameter, showcasing an erroneous application of the tool.
426
+
427
+ Table 12 presents another preference pair, sampled from the MCTS search tree as depicted in Figure 11. In this scenario, the user's query lacks the specific details necessary for the functions mentioned (i.e., reviews for 'reviewAnalytics.extractSentiment' and metrics for 'socialTrendsfetchTrendingProducts'). The assistant's chosen response correctly identifies the need for these parameter values, whereas the rejected response hallucinates them.
428
+
429
+ <table><tr><td>Method</td><td>Objective</td><td>Hyperparameters</td><td>Best Setting</td></tr><tr><td rowspan="2">DPO</td><td rowspan="2">$-\log \sigma\left(\beta \log \frac{\pi_\theta(y_w|x)}{\pi_{\mathrm{ref}}(y_w|x)} - \beta \log \frac{\pi_\theta(y_l|x)}{\pi_{\mathrm{ref}}(y_l|x)}\right)$</td><td>β ∈ [0.01, 0.05, 0.1]</td><td>β = 0.1</td></tr><tr><td>lr ∈ [1e-6, 5e-7, 3e-7]</td><td>lr = 3e-7</td></tr><tr><td rowspan="2">IPO</td><td rowspan="2">$\left(\log \frac{\pi_\theta(y_w|x)}{\pi_{\mathrm{ref}}(y_w|x)} - \log \frac{\pi_\theta(y_l|x)}{\pi_{\mathrm{ref}}(y_l|x)} - \frac{1}{2\tau}\right)^2$</td><td>τ ∈ [0.01, 0.05, 0.1]</td><td>τ = 0.1</td></tr><tr><td>lr ∈ [1e-6, 5e-7, 3e-7]</td><td>lr = 1e-6</td></tr><tr><td rowspan="2">ORPO</td><td rowspan="2">$-\log p_\theta(y_w|x) - \lambda \log \sigma\left(\log \frac{p_\theta(y_w|x)}{1-p_\theta(y_w|x)} - \log \frac{p_\theta(y_l|x)}{1-p_\theta(y_l|x)}\right)$, where $p_\theta(y|x) = \exp\left(\frac{1}{|y|}\log \pi_\theta(y|x)\right)$</td><td>λ ∈ [0.01, 0.05, 0.1]</td><td>λ = 0.1</td></tr><tr><td>lr ∈ [1e-6, 5e-7, 3e-7]</td><td>lr = 3e-7</td></tr><tr><td rowspan="3">SimPO</td><td rowspan="3">$-\log \sigma\left(\frac{\beta}{|y_w|}\log \pi_\theta(y_w|x) - \frac{\beta}{|y_l|}\log \pi_\theta(y_l|x) - \gamma\right)$</td><td>β ∈ [2.0, 2.5]</td><td>β = 2.5</td></tr><tr><td>γ ∈ [0.5, 1.0, 1.4]</td><td>γ = 0.5</td></tr><tr><td>lr ∈ [1e-6, 5e-7, 3e-7]</td><td>lr = 1e-6</td></tr></table>
430
+
431
+ Table 9: The search for optimal hyperparameter settings of different preference optimization algorithms.
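+
+ For reference, two of these objectives can be sketched in PyTorch over summed sequence log-probabilities; this is an illustration of the formulas in Table 9, not the training code used in the paper.
+
+ ```python
+ # Illustrative sketch of the DPO and SimPO objectives from Table 9.
+ # All arguments are tensors of summed token log-probabilities; batching and masking omitted.
+ import torch.nn.functional as F
+
+ def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
+     # -log sigma(beta * [log-ratio(chosen) - log-ratio(rejected)])
+     margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
+     return -F.logsigmoid(margin)
+
+ def simpo_loss(logp_w, logp_l, len_w, len_l, beta=2.5, gamma=0.5):
+     # length-normalized, reference-free reward with target margin gamma
+     margin = beta * (logp_w / len_w) - beta * (logp_l / len_l) - gamma
+     return -F.logsigmoid(margin)
+ ```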
432
+
433
+ # Prompt 1: Eval Prompt
434
+
435
+ Ground Truth Response: {gt_ans}
436
+
437
+ Generated Response by Model: {response}
438
+
439
+ User Instruction:
440
+
441
+ Please assess the quality of the generated response relative to the ground truth response.
442
+
443
+ Note: A generated response that is a fragment of the ground truth response is also excellent.
444
+
445
+ Evaluation Criteria:
446
+
447
+ 1. Function Name: Is the name of all the function called correct?
448
+ 2. Parameter Count: Is the number of parameters for all the function correct?
449
+ 3. Parameter Names: Are the names of all the parameters for the function correct?
450
+ 4. Parameter Value/Types: Are the value/types of all the parameters for the function correct?
451
+ 5. Semantic Similarity: Is the generated response semantically close to the ground truth response?
452
+
453
+ Please directly choose from the following options to judge the overall quality:
454
+
455
+ (A) Excellent: The generated response meets all criteria and is almost identical to the ground truth response.
456
+ (B) Acceptable: The generated response meets most criteria but has minor discrepancies.
457
+ (C) Fair: The generated response meets some criteria but has significant issues.
458
+ (D) Poor: The generated response fails to meet most or all criteria.
459
+
460
+ ASSISTANT: The option of overall quality is
461
+
462
+ You are an AI specialized in tool use.
463
+
464
+ Your task is to assess the potential veracity of {placeholder}.
465
+
466
+ Table 10: The Eval Prompt for self-evaluation in Eq. 7 of Section 3.2.
467
+
468
+ ![](images/fe9c0a63e7e2d54650e6fb393d455bf48680f415c8e42565506e95bd6c647b15.jpg)
469
+ Figure 11: The illustration of example 2 in Table 12, a preference pair derived from MCTS. The floating-point values of the nodes denote their $Q$-values in MCTS.
470
+
471
+ Example 1
472
+ ```jsonl
473
+ Tool list [{"name": "Get Trending Result", "description": "Retrieves trending search results from Google Trends for the United States.", "parameters": {"type": "dict", "properties": {"category": {"description": "The category of trending results (e.g., news, entertainment, etc.)", "type": "string"], "timeframe": {"description": "The timeframe for which to retrieve trending results (e.g., today, this week, etc.)", "type": "string]}, }, "required": ["category"]} , "required": null }, { "name": "Get Server Time", "description": "Returns the current server time in a standardized format", "parameters": {"type": "dict", "properties": {}, "required": [] }, "required": null }}]
474
+ User Can you tell me the latest trending topics in entertainment today?
475
+ Assistant
476
+ chosen [Get Trending Result(category="entertainment", timeframe="today")]
477
+ rejected [Get Trending Result category="entertainment", timeframe="1"]
478
+ ```
479
+
480
+ Table 11: Example 1 of a preference pair derived from MCTS.
481
+
482
+ Example 2
483
+ ```txt
484
+ Tool list [{
485
+ "name": "fetchTrendingProducts",
486
+ "description": "Fetched products based on reviews and engagement metrics",
487
+ "parameters": {
488
+ "metrics": {
489
+ "type": "array",
490
+ "items": {
491
+ "productID": "string",
492
+ "engagementScore": {
493
+ "likes": "int",
494
+ "shares": "int"
495
+ }
496
+ }
497
+ }
+ }
+ },
498
+ {"name": "extractSentiment",
499
+ "description": "Extracts sentiment scores from reviews"},
500
+ {"parameters": {
501
+ "reviews": {
502
+ "type": "array",
503
+ "items": {
504
+ "reviewText": "string",
505
+ "reviewDate": {
506
+ "type": "string",
507
+ "pattern": "{^{\backslash w+}}\d{1,2}, \backslash d{4}$}
508
+ }
509
+ }
510
+ }
+ }
+ }]
511
+ User I'm running a marketing campaign and need sentiment analysis from this month's reviews of our main product. We had 300 likes and 50 shares yesterday. Please analyze trends and sentiment for our competitor's product too; they had 450 likes and 75 shares.
+ Assistant
512
+ chosen You need to provide the reviews for the sentiment analysis of the flagship product and the competitor's product.
513
+ rejected [reviewAnalytics.extractSentiment(reviews={'reviewText': 'product_name ticks the check for everything'})]
514
+ ```
515
+
516
+ Table 12: Example 2 of a preference pair derived from MCTS.
itoolreinforcedfinetuningwithdynamicdeficiencycalibrationforadvancedtooluse/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2488cc457da209c01fde1de7ff564f22aad8e83d4078a5f7bfc66bb4472bd7b8
3
+ size 878817
itoolreinforcedfinetuningwithdynamicdeficiencycalibrationforadvancedtooluse/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1dbb1ae062cd39b948f5452efe86b932163ea1001d1d9472842d9587e833b09c
3
+ size 546071
ivedecidedtoleakprobinginternalsbehindpromptleakageintents/3a394495-8a16-47e4-b753-a14853f62ff9_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:296c29da13e4188d676ef81e5f9098139244a87d5922f42cb1c56226d6f229cc
3
+ size 198209
ivedecidedtoleakprobinginternalsbehindpromptleakageintents/3a394495-8a16-47e4-b753-a14853f62ff9_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2e62b4f5ca24bef319431b989f9f5b8f4dc84f5ae7243dac2413ab1bd38e08f9
3
+ size 238312
ivedecidedtoleakprobinginternalsbehindpromptleakageintents/3a394495-8a16-47e4-b753-a14853f62ff9_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:edf5ca4828027075edaba9a803e961ed15cc3c45d4a1b057f06f2500b63e00b3
3
+ size 5551188
ivedecidedtoleakprobinginternalsbehindpromptleakageintents/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ivedecidedtoleakprobinginternalsbehindpromptleakageintents/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7b2afa138655bda7131df6df89ede06959729afbcb086df2560c38a670b5f9ab
3
+ size 4615631
ivedecidedtoleakprobinginternalsbehindpromptleakageintents/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:12324323193e8778593111d26c551a4c00bf8014ce319fdf72a15bc83e340a09
3
+ size 851113
ivisparaninteractivevisualspatialreasoningbenchmarkforvlms/d3e89823-af08-4e34-b602-4ae677b1f7cc_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:34c62b071bb756d1ba0a1a992336a9f8e4716e8f2f28818110d5ba282042dce4
3
+ size 148844
ivisparaninteractivevisualspatialreasoningbenchmarkforvlms/d3e89823-af08-4e34-b602-4ae677b1f7cc_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3037015348572eb4f66a7659032ea14457130f67e290282390f0e6e3e9725028
3
+ size 184882
ivisparaninteractivevisualspatialreasoningbenchmarkforvlms/d3e89823-af08-4e34-b602-4ae677b1f7cc_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d1fd3dbd4f3cb1d07641534238b409856c984b19d6972c25b00d1af9bcd85916
3
+ size 12164187
ivisparaninteractivevisualspatialreasoningbenchmarkforvlms/full.md ADDED
@@ -0,0 +1,777 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # iVISPAR — An Interactive Visual-Spatial Reasoning Benchmark for VLMs
2
+
3
+ Julius Mayer* Mohamad Ballout† Serwan Jassim† Farbod Nosrat Nezami† Elia Bruni
4
+
5
+ Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany research@jmayer.ai
6
+
7
+ # Abstract
8
+
9
+ Vision-Language Models (VLMs) are known to struggle with spatial reasoning and visual alignment. To help overcome these limitations, we introduce iVISPAR, an interactive multimodal benchmark designed to evaluate the spatial reasoning capabilities of VLMs acting as agents. iVISPAR is based on a variant of the sliding tile puzzle—a classic problem that demands logical planning, spatial awareness, and multi-step reasoning. The benchmark supports visual 3D, 2D, and text-based input modalities, enabling comprehensive assessments of VLMs' planning and reasoning skills. We evaluate a broad suite of state-of-the-art open-source and closed-source VLMs, comparing their performance while also providing optimal path solutions and a human baseline to assess the task's complexity and feasibility for humans. Results indicate that while VLMs perform better on 2D tasks compared to 3D or text-based settings, they struggle with complex spatial configurations and consistently fall short of human performance, illustrating the persistent challenge of visual alignment. This underscores critical gaps in current VLM capabilities, highlighting their limitations in achieving human-level cognition. Project website: https://microcosm.ai/ivispar.
10
+
11
+ # 1 Introduction
12
+
13
+ The rapid advancement of Vision-Language Models (VLMs) has spurred significant debate regarding their capacity to achieve human-level cognition. These models are increasingly deployed as general reasoning systems capable of addressing complex problems across diverse domains, with applications extending into dynamic, real-world scenarios such as physical agent-based tasks and planning (Wang et al., 2024a; Xi et al., 2023; Zeng
14
+
15
+ ![](images/e688a838d94903f706acf0f772a056496d49a28336a58030b66c6b65a9209c49.jpg)
16
+ Figure 1: VLMs' success rates of completed games over 900 episodes across vision 3D, vision 2D, and text.
17
+
18
+ et al., 2023). However, critical gaps persist in their spatial reasoning and visual alignment capabilities, areas essential for understanding, interpreting, and manipulating objects and their spatial relationships (Kamath et al., 2023a; Bordes et al., 2024; Campbell et al., 2024).
19
+
20
+ Spatial reasoning, a foundational aspect of problem-solving, navigation, and interaction with the physical world, requires models to bridge vision and cognition by interpreting visual information to understand spatial arrangements. Tasks such as mentally rotating shapes, predicting object movement, and recognizing patterns exemplify the importance of visual-spatial reasoning. Despite these critical requirements, progress in VLMs has been hampered by evaluation benchmarks that fail to capture the dynamic and multi-step complexity of real-world spatial reasoning. Existing benchmarks predominantly rely on static, text- or image-based setups that often oversimplify spatial contexts, focusing on 2D environments without interactivity or dynamic problem-solving capabilities.
21
+
22
+ This limitation perpetuates a lack of meaningful progress in visual-spatial reasoning within more realistic 3D environments.
23
+
24
+ Contributions. To bridge this gap, we introduce iVISPAR (Interactive Visual-Spatial Reasoning), a novel benchmark designed to systematically evaluate VLMs as agents in dynamic 3D environments. iVISPAR is built around the sliding tile puzzle, a well-established problem in developmental psychology that demands logical planning, spatial awareness, and multi-step problem-solving. As part of our contributions, we introduce the Sliding Geom Puzzle, a variant that replaces traditional numbered tiles with geometric objects distinguished by their color and shape, adding an additional layer of visual reasoning.
25
+
26
+ Notably, iVISPAR is grounded in a well-studied, formalized problem with access to optimal solutions, ensuring a robust framework for evaluation. The benchmark supports scalable task complexity by adjusting factors such as board size, the number of tiles, and solution paths, ranging from simple configurations to NP-complete challenges that surpass baseline human performance.
27
+
28
+ Leveraging a prompt-based API, iVISPAR enables VLMs to interact with a simulated environment through an iterative action-perception loop. Experimentation results demonstrate that while state-of-the-art VLMs can handle basic spatial reasoning tasks, they face significant difficulties with more complex scenarios, especially in 3D environments. Evaluating models in such 3D settings is essential, as they more closely mirror the spatial complexity of real-world environments. By contrasting their performance against optimal solutions and human baselines, we highlight the persistent gap between current VLM capabilities and human-level spatial reasoning.
29
+
30
+ Our contributions are threefold: (i) a novel interactive benchmark that systematically evaluates visual-spatial reasoning in VLMs; (ii) a scalable task design rooted in a formalized problem with optimal solutions; and (iii) empirical insights into the strengths and limitations of VLMs across varying task complexities and modalities. iVISPAR lays the foundation for advancing VLM research toward overcoming critical gaps in reasoning and alignment capabilities.
31
+
32
+ # 2 Related work
33
+
34
+ # 2.1 Spatial Reasoning Benchmarks
35
+
36
+ Physical understanding in interactive agents has long been studied through simulation-based benchmarks (Li et al., 2024b; Mecattaf et al., 2024; Jassim et al., 2024; Wang et al., 2025; Hu et al., 2023; Zhao et al., 2025; Guruprasad et al., 2024; Su et al., 2024; Feng et al., 2025), although many of these frameworks are not directly suited for VLM evaluation due to limited language interfaces, low task fidelity, or demanding simulation requirements. Several datasets targeting visual reasoning have been applied to deep learning models (Johnson et al., 2016; Li et al., 2023), but they do not support interactive planning or action execution by language agents. Other works have explored similar setups using geometric object games, primarily in the context of language game learning with deep learning agents (Wang et al., 2016; Kuhnle and Copestake, 2017); related efforts such as Sliding Puzzles Gym and PUZZLES (Oliveira et al., 2024; Estermann et al., 2024) have been proposed as RL benchmarks, but lack the language interface and fine-grained 3D problem generation introduced in our setting.
37
+
38
+ # 2.2 Spatial Reasoning in LLMs
39
+
40
+ Even though Large Language Models (LLMs) are primarily trained via next-token prediction on textual corpora, their capacity for spatial reasoning have attracted recent attention (Abdou et al., 2021; Patel and Pavlick, 2021). LLMs have also been explored as agents for spatial planning (Bohnet et al., 2024), path planning (Aghzal et al., 2024), and spatial path generation (Rizvi et al., 2024) in purely textual or symbolic environments. Several recent studies have examined whether LLMs implicitly encode spatial structures and geometric reasoning, ranging from digital twin generation via symbolic rules (Wang et al., 2024c), to textual spatial question answering in diverse settings (Mirzaee et al., 2021), and evaluations across grid, ring, and tree topologies (Yamada et al., 2024).
41
+
42
+ # 2.3 Spatial Reasoning in VLMs
43
+
44
+ Visual reasoning has emerged as a key focus in evaluating VLMs, with growing interest in their capacity to interpret spatial relationships and object configurations (Zhang et al., 2024; Rajabi and Kosecka, 2024b; Roberts and Roberts, 2024; Campbell et al., 2025); concurrently, several studies have examined the degree to which these
45
+
46
+ ![](images/70d45e0398b62d0cb9f63ba49a06abd3f917f7715403f8a9dcfff6dde5b1df16.jpg)
47
+ Figure 2: Example of VLMs' observations for a state (blue) and the goal (green) at each step during an episode of the Sliding Geom Puzzle environment, on a $4 \times 4$ board with 10 geoms and an optimal path length of 2. Left to right, each tested modality: vision 3D, vision 2D, and text-based representation. For more examples, see Appendix A.1.2.
48
+
49
+ ![](images/eef34f2414aaffb24aa9168de1c45ef8bdd5ad9bc12976e9b1dc6f78d947905b.jpg)
50
+
51
+ ![](images/9550f7eb63f6f262b14c6a69b7ad9dc4922ea6a9295407ce3c129cca067f0afb.jpg)
52
+
53
+ models align visual inputs with linguistic representations (Merullo et al., 2023; Ilharco et al., 2021). Recent advancements in VLMs have prompted a surge in evaluations, yet most studies primarily rely on visual question-answering tests (Liu et al., 2023; Rajabi and Kosecka, 2024a; Wang et al., 2024b; Cheng et al., 2024; Tang et al., 2024; Duan et al., 2025; Wang et al., 2023; Kamath et al., 2023b). Beyond static evaluations, a growing body of work explores the use of VLMs and foundation models as interactive agents within simulated environments, where they are tasked with manipulating objects, navigating spaces, or executing spatial instructions in grounded contexts (Wu et al., 2024; Li et al., 2024b; Mecattaf et al., 2024; Jassim et al., 2024; Wang et al., 2025; Su et al., 2024). This includes applications in embodied AI and robotics, where VLMs are increasingly integrated into control loops to support visuomotor reasoning and spatial decision-making (Hu et al., 2023; Zhao et al., 2025; Guruprasad et al., 2024; Feng et al., 2025).
54
+
55
+ In this context, we present iVISPAR, an interactive multimodal benchmark designed to evaluate the spatial reasoning capabilities of VLMs acting as agents.
56
+
57
+ # 3 The iVISPAR Benchmark
58
+
59
+ iVISPAR $^2$ is an interactive, multimodal puzzle simulator that presents agents with a board state in one of three input modalities: a 3D rendered image, a 2D top-down view, or a text-based representation (see Figure 2). By rendering scenes in 3D space, iVISPAR offers a more realistic depiction of spatial environments compared to traditional 2D grid visualizations and enables systematic comparisons across modalities. Agents interact with the board by issuing natural language commands through a
60
+
61
+ text-based API to apply actions to the board (see Figure 3). iVISPAR supports procedural generation of puzzle instances with finely controlled parameters, allowing for a scalable dataset of tasks with adjustable complexity across many spatial properties, and benchmarking performance with multiple baseline models.
62
+
63
+ # 3.1 Sliding Geom Puzzle
64
+
65
+ A central environment in iVISPAR is the Sliding Geom Puzzle (SGP), a reimagining of the classic sliding tile puzzle (see Appendix A.3). Instead of numbered tiles, SGP uses geometric objects (geoms) uniquely defined by combinations of color and shape, increasing visual-spatial complexity and enhancing task scalability. This design shift requires models to interpret object features rather than follow numerical sequences, mirroring real-world spatial reasoning where items are distinguished by appearance, size, or structure. The task draws inspiration from physical scenarios such as organizing items, assembling structures, or packing, promoting a more authentic evaluation of real-world spatial capabilities.
66
+
67
+ # 3.2 Game dynamics
68
+
69
+ The objective is to rearrange the pieces on the board by moving them over free spaces to match a given goal configuration. In each episode, agents receive observations of the start and goal states (see Figure 2), accompanied by task instructions (see Appendix A.1.1). Agents apply move actions to geoms by referencing their unique color and shape combination and specifying the direction of intended movement. Geoms can be moved in cardinal directions (LEFT, RIGHT, UP, DOWN), with actions formatted as "move <color><shape><direction>":
70
+
71
+ "move blue sphere right"
72
+
73
+ ![](images/40091bcba666894fb470477faabd6bf99259dd13bcc7a455f12c9d73e90c22ff.jpg)
74
+ Figure 3: Depiction of the interaction flow between VLM agents and the iVISPAR simulator, showing progression through an episode with a shortest-path solution of 4 steps solved by prompted actions from a VLM agent. For a full example of an episode progression, see Appendix A.1.4.
75
+
76
+ Actions are validated and applied if legal, with agents receiving updated board states regardless of the action's success after each move command. Effective and ineffective actions both result in valid new board states but, respectively, decrease or increase the path length to the goal state. Invalid moves, such as occupied destination and out-of-bounds actions, fail to alter the board state, as do illegal commands, which violate the instructed action format. This action-perception loop repeats until the goal state is achieved or a step limit is reached. Due to limited context windows, VLM agents receive task instructions at each time step. A sample agent-environment interaction is provided in Appendix A.1.3.
77
+
78
+ # 3.3 Observation Spaces
79
+
80
+ Agents observe a combination of the current board state and the goal state. Additionally, they can receive a sequence of past state-action pairs, determined by the size of the configured context window. Images for 3D observations are presented from an angled top-down perspective and may include partially occluded objects, whereas 2D observations follow a graph-like layout with fully visible elements. Both may optionally include embedded, text-based chess-style coordinate labels as spatial cues along the outer edge of the grid board as well as on free tiles. In 2D observations, shapes are mapped consistently from their 3D counterparts to preserve object identity across modalities. Images can also be marked with an embedded text label and a colored background to differentiate between past (grey), current (blue), and goal state (green). Figure 2 shows 3D vision (left) and 2D vision (middle) for the active state (top) and the goal state (bottom). The text-based representation encodes past, active, and goal states directly in the
81
+
82
+ prompt string supplied to the agent. Agents receive the list of geoms in the order of board coordinates. A visualization of the text-based active (top) and goal states (bottom) is shown in Figure 2 (right). This modality does not rely on images.
83
+
84
+ # 3.4 Complexity Scalability
85
+
86
+ The GSTP is a well-known NP-hard problem due to the need for multi-step planning across a constrained grid (Gozon and Yu, 2024). SGP inherits this complexity but introduces greater flexibility in scaling difficulty without altering the game's core mechanics. This flexibility provides more degrees of freedom, making the task more tractable for VLM agents. Key scaling factors include board size, number of objects, object variability, length of the shortest path solution, and the geom interference factor (see Appendix A.1.2). The shortest path solution for all episode configurations is calculated using the $\mathbf{A}^*$ algorithm (Hart et al., 1968), as detailed in Appendix A.7.1. The interference factor denotes the extent to which objects obstruct one another's optimal paths, increasing the global solution length beyond the cumulative Manhattan distances of individual paths. This interference can create configurations with short optimal paths but increased planning requirements, significantly raising the problem's difficulty. Available geometric shapes include ["cube," "pyramid," "sphere," "cylinder," "cone," "prism"], with colors freely selectable by referencing RGB values. Agents must navigate combinatorial complexity by matching shapes and colors, promoting spatial strategies over the sequential patterns seen in numerical tile puzzles. Episode configurations are generated procedurally, requiring models to generalize across puzzle instances. Human and algorithmic benchmarks for these experiments are detailed in Section 4.2.
87
+
88
+
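+
+ The interference factor can be read as the gap between the global optimum and the per-geom lower bound; a minimal sketch, assuming the A*-optimal solution length has already been computed:
+
+ ```python
+ # Sketch of the geom interference factor: how much the A*-optimal solution exceeds
+ # the summed per-geom Manhattan distances. Positions are dicts: geom id -> (x, y).
+ def manhattan(a, b):
+     return abs(a[0] - b[0]) + abs(a[1] - b[1])
+
+ def interference_factor(start_positions, goal_positions, astar_solution_length):
+     lower_bound = sum(manhattan(start_positions[g], goal_positions[g])
+                       for g in start_positions)
+     return astar_solution_length - lower_bound    # 0 means no mutual obstruction
+ ```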
89
+
90
+ # 4 Experiments
91
+
92
+ Performance of VLMs is tested for the SGP to assess their capabilities in scene understanding, problem-solving, and multi-step planning within constrained environments.
93
+
94
+ # 4.1 Dataset Generation
95
+
96
+ Experiments were conducted on a dataset of SGPs with a fixed board size of $4 \times 4$: smaller grids (e.g., $3 \times 3$) collapse many spatial-relation cases, while larger ones ($\geq 5 \times 5$) dilute object visibility without yielding further complexity benefits. Performance is assessed by varying complexity across two parameters: the number of objects (2-11) and the shortest path length (2-11). Configurations maintain a geom interference factor of 0, ensuring the shortest path equals the cumulative Manhattan distance. Initial experiments indicated that VLM agents faced significant challenges at higher task complexities. Three episodes are sampled for each complexity level, producing a dataset of 300 diverse board configurations. The set of geom properties consists of four shapes, sphere, pyramid, cube, and cylinder, and four colors, red, green, blue, and yellow, resulting in 16 unique combinations. VLM agents are tested on the same dataset for each modality, resulting in 900 episodes for each model.
97
+
98
+ # 4.2 Baselines
99
+
100
+ To contextualize agent performance and provide upper and lower bounds, we establish four baselines encompassing human and AI agents.
101
+
102
+ Human performance was evaluated with 30 participants using a web app GUI of the SGP, where participants interacted by prompting text commands over a command line, mirroring the interaction method of VLM agents. Baselines were provided for the 3D vision modality on the same dataset as the VLM agents.
103
+
104
+ AI baselines were introduced for two agents: an optimal agent executing shortest path solutions computed by $\mathrm{A}^*$ (Hart et al., 1968), and a random agent performing uninformed but valid actions uniformly sampled from those leading to new board states. Algorithms for the AI agents are detailed in Appendix A.7.
105
+
106
+ # 4.3 Models
107
+
108
+ We evaluate a selection of open- and closed-source VLMs that scored high on OpenCompass<sup>3</sup> and which support multi-image inputs and a minimum context length of 800 tokens. Selected models are: Sonnet-3.5 (Claude Team, 2024), Gemini2.0-flash (Gemini Team, 2024), GPT-4o (OpenAI et al., 2024), InternVL2.5-78B (Chen et al., 2024), LLaVA-OneVision-72B (Li et al., 2024a), Qwen2-72B (Wang et al., 2024d). For closed-source models, we rely on the official APIs and for open-source models, on the publicly available checkpoints. We use a temperature of 1.0, top-p of 0.95, and top-k of 50 for all open-source models. An overview of all models and their details can be found in the Appendix A.2.
109
+
110
+ # 4.4 Context-Aware Zero-Shot Reasoning
111
+
112
+ The models employ Chain-of-Thought (CoT) reasoning (Wei et al., 2022) to break down complex problems into smaller sub-tasks, enhancing accuracy and interpretability (Appendix A.1.3). We constrain VLMs' context windows to the past two steps, incorporating state representations alongside the model's action responses. This approach prioritizes extracting maximum value from limited experience to preserve the models' sequential coherence and minimize computational overhead. Operating within this context-aware zero-shot reasoning framework, the models interpret task requirements without examples, drawing exclusively from pretrained knowledge, task instructions, and limited past interactions.
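+
+ A hedged sketch of how such a truncated context can be assembled into a chat-style prompt is given below; the message layout and field names are illustrative, not the benchmark's exact implementation.
+
+ ```python
+ # Sketch: replay only the last `window` state-action pairs alongside the task
+ # instructions and the current and goal observations.
+ def build_messages(instructions, history, current_obs, goal_obs, window=2):
+     messages = [{"role": "system", "content": instructions}]
+     for state, action in history[-window:]:            # past two steps only
+         messages.append({"role": "user", "content": state})
+         messages.append({"role": "assistant", "content": action})
+     messages.append({"role": "user",
+                      "content": f"Current state:\n{current_obs}\n\nGoal state:\n{goal_obs}"})
+     return messages
+ ```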
113
+
114
+ # 4.5 Instruction Prompts
115
+
116
+ We avoided prompt engineering for any single model; the chosen template is the same for all systems and contains only the minimal information needed. Fixing one validated template provides a consistent basis for comparison and makes the benchmark easily reproducible. The visual and text prompts are isomorphic: the image placeholder is the only difference, so no modality receives extra hints. Our human-baseline study likewise found the final wording easy to follow. This supports our aim of testing spatial-reasoning ability itself, without relying on prompt engineering, so we use one clear, uniform template for all models.
117
+
118
+ ![](images/48244c4dba214ab0bf62a6a5b791b9147dad5e747149a2417200dc1a78bde50c.jpg)
119
+
120
+ ![](images/d2c965ada988845f4908dd7c0d42e5bc2c0f6daadfb649bb6cf3bcfde29a843c.jpg)
121
+ Figure 4: VLM evaluation on 900 episodes per model across all three modalities, with $95\%$ confidence intervals. Baseline comparisons for human performance and random moves are shown. Top: VLMs' success rates (fraction of episodes completed), with higher values denoting better performance. Bottom: VLMs' mean step deviation from the optimal path, with lower values denoting better performance. Full numerical results are provided in Appendix A.4.
122
+
123
+ # 4.6 Evaluation
124
+
125
+ Agent performance is evaluated through two primary metrics: the fraction of solved environments and the mean step-deviation from the optimal path.
126
+
127
+ Mean step-deviation from the optimal path measures how far an agent departs from optimal behavior during problem-solving. At each step $t$, the shortest-path solution from the current board state to the goal, computed by $A^*$, is used to assess efficiency. Formally,
128
+
129
+ $$
130
+ R(t) = d(s_{t}, s^{*}) - \left[ d(s_{0}, s^{*}) - t \right],
131
+ $$
132
+
133
+ where $d(s, s^{*})$ denotes the shortest path length from state $s$ to the goal $s^{*}$. This metric quantifies how much further the agent is from the goal than an optimal agent would be after the same number of steps, and can thus be read as a per-step regret. A regret value of zero indicates that the agent follows an optimal trajectory, while positive regret reflects inefficiencies or unnecessary detours. By capturing performance even in unsolved environments, this approach provides insights into agent behavior under varying complexities.
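+
+ A minimal sketch of this metric, assuming a shortest-path oracle `d` (e.g., an A* call) that returns the remaining path length from a state to the goal; the aggregation into a single episode score is one plausible choice, not necessarily the paper's exact one:
+
+ ```python
+ def step_deviation(d, states, goal):
+     """Per-step regret R(t) = d(s_t, goal) - (d(s_0, goal) - t) over the visited states."""
+     d0 = d(states[0], goal)
+     return [d(s, goal) - (d0 - t) for t, s in enumerate(states)]
+
+ def mean_step_deviation(d, states, goal):
+     values = step_deviation(d, states, goal)
+     return sum(values) / len(values)
+ ```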
134
+
135
+ To gain deeper insights, we analyze the most common error patterns exhibited by agents. This allows us to identify model weaknesses, recurring failure cases, and patterns of suboptimal decision-making.
136
+
137
+ # 4.7 Auxiliary Task
138
+
139
+ Additionally, we evaluate the models' ability to infer and represent board states from visual input across all 300 episodes. Given an image and accompanying instructions, each model is tasked with predicting the corresponding board configuration in text form, using the same format as the textual representation shown in Figure 2. This auxiliary task further enriches our understanding of the models' behavior and their capacity to interpret spatial information from visual inputs.
140
+
141
+ To analyze this task, we frame the comparison between the true and predicted board states as a set matching problem, solved using the Hungarian algorithm. A match is defined as any pair of geoms sharing at least one attribute, color or shape. Geoms that share neither are counted as missed (if present only in the true state) or hallucinated (if present only in the prediction). Matched geoms may still contain mismatches in coordinates, color, or shape. Predicted elements that cannot be parsed into valid geom triplets are counted as format errors.
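+
+ A sketch of this matching step using SciPy's Hungarian solver (`linear_sum_assignment`); the cost values are illustrative assumptions, and only the matching rule, that a pair must share color or shape, follows the text:
+
+ ```python
+ import numpy as np
+ from scipy.optimize import linear_sum_assignment
+
+ def match_geoms(true_geoms, pred_geoms):
+     """Geoms are (coordinate, color, shape) triplets."""
+     cost = np.zeros((len(true_geoms), len(pred_geoms)))
+     for i, (_, tc, ts) in enumerate(true_geoms):
+         for j, (_, pc, ps) in enumerate(pred_geoms):
+             cost[i, j] = 2 - int(tc == pc) - int(ts == ps)   # fewer shared attributes = higher cost
+     rows, cols = linear_sum_assignment(cost)
+     matches = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 2]  # must share color or shape
+     matched_true = {i for i, _ in matches}
+     matched_pred = {j for _, j in matches}
+     missed = [g for i, g in enumerate(true_geoms) if i not in matched_true]
+     hallucinated = [g for j, g in enumerate(pred_geoms) if j not in matched_pred]
+     return matches, missed, hallucinated
+ ```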
142
+
143
+ # 5 Results
144
+
145
+ We evaluated the spatial reasoning capabilities of VLMs in our SGP environment on 3D vision and compared them to the 2D vision and text-based modalities across 300 episodes each (see Figure 4). To standardize gameplay, the number of actions per episode was capped at 20.
146
+
147
+ ![](images/40a6603f5dc246da3e75e6f3b840dc78c2aba24dc1e4424336516575db495926.jpg)
148
+
149
+ ![](images/f2c780f7eabf669259eed6c2bd412094bf5d79ab38fac56d008cac9ec43f4932.jpg)
150
+ Figure 5: Error patterns showing average action counts per episode during SGP interaction (top; see Section 5) and average geoms per episode for the board state inference auxiliary task (bottom; see Section 4.7), each aggregated across modalities. Full numerical results are provided in Appendix A.4.
151
+
152
153
+
154
+ Success rates: The percentage of episodes completed and the mean step deviation from the optimal path were measured for each modality and compared to human performance as well as random actions (Figure 4).
155
+
156
+ Action classification: We classified actions based on their effects on the board and calculated their average occurrence per episode to provide insights into the challenges VLMs face in efficiently completing episodes (see Figure 5, top). Effective and ineffective actions both result in valid new board states but, respectively, decrease or increase the path length to the goal state. Invalid moves, such as occupied-destination and out-of-bounds actions, and illegal commands, which break the instructed action format, all leave the board state unchanged.
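+
+ A sketch of this taxonomy as a classification function (the `try_apply` helper, the shortest-path oracle `d`, and the format check are assumptions used only for illustration):
+
+ ```python
+ def classify_action(action_text, board, state, goal, d, is_well_formed):
+     if not is_well_formed(action_text):
+         return "illegal_command"            # breaks the instructed action format
+     new_state = board.try_apply(state, action_text)
+     if new_state is None:
+         return "invalid_move"               # occupied destination or out of bounds
+     return "effective" if d(new_state, goal) < d(state, goal) else "ineffective"
+ ```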
157
+
158
+ Auxiliary Task: For the board state inference task, we evaluate the number of geoms that were correctly inferred, missed, hallucinated, or contained a mismatch in coordinates, color, or shape. Format errors denote cases where the output failed to follow the expected structure (Figure 5, bottom).
159
+
160
+ Complexity scales: We evaluated the cumulative performance of VLMs across the three modalities using two complexity scales: the shortest path length required to solve an episode and the number of geoms on the board. Longer shortest paths demand a broader global planning horizon and consistent goal-directed progress, while higher geom counts require efficient local planning to optimize rearrangement order and manage free spaces. Figure 7 illustrates the performance of VLMs across 100 complexity combinations, highlighting the average minimal distance to the goal state within 20 steps.
163
+
164
+ # 6 Discussion
165
+
166
+ # 6.1 Model Performance
167
+
168
+ All models show basic task understanding and spatial reasoning, progressing toward the goal state (see Figure 4). Performance, however, varies widely. Closed-source models outperform open-source ones: Sonnet-3.5 achieves the highest success rate at $89.7\%$ in the 2D visual modality, followed by Gemini-2.0-Flash and GPT-4o. In contrast, open-source models such as InternVL2.5-78B, LLaVA-OneVision-72B, and Qwen2-72B perform near the random baseline. Human participants solve the tasks perfectly with near-optimal paths, setting a high benchmark.
169
+
170
+ Notably, even models solving fewer than $1\%$ of tasks often produce more efficient paths than a random baseline (see Figure 4, bottom), indicating traces of goal-directed behavior despite overall failure. These task performances are also consistent with the further analysis of the models' error types and their accuracy in the board state inference task, which we discuss in Section 6.2.
171
+
172
+ ![](images/2977e41aae8ec6f6318f09e69c609e5d11f4f6982f0ea8bdacf24783d0b48734.jpg)
173
+ Figure 6: Error patterns showing average action counts per episode during SGP interaction (left; see Section 5) and average geoms per episode for the board state inference auxiliary task (right; see Section 4.7), shown per modality and aggregated across agents. Full numerical results are provided in Appendix A.4.
174
+
175
+ ![](images/1e681f903c5ff77258c7edc38f141dbc31b51db5bec063c9338daa5e27f14109.jpg)
176
+
177
178
+
179
+ # 6.2 Error Patterns
180
+
181
+ We analyzed the types of mistakes models make during interaction with the simulator and evaluated their ability to infer board states from visual input. Overall, models rarely issue illegal commands or exhibit format errors (see Figure 5, top and bottom), suggesting that most VLMs understand how to follow instructions and interact with the environment appropriately.
182
+
183
+ However, board state inference accuracy reveals a sharp performance drop from 2D to 3D inputs: while models correctly identify an average of 4.2 objects in 2D, this number falls to 1.4 in the 3D setting (see Figure 6, right). This is primarily due to substantial increases in coordinate prediction errors, alongside moderate rises in color and shape mismatches and in missed detections. In contrast, hallucinations and format-related issues remain largely stable across both modalities.
184
+
185
+ These findings offer a clear explanation for the weaker performance in the 3D vision condition: precise localization of objects remains a critical challenge. As illustrated in Figure 5, this results in more ineffective moves, including frequent attempts to place objects out-of-bounds or onto already occupied cells.
186
+
187
+ # 6.3 Modality Impact
188
+
189
+ Despite being evaluated on identical tasks, model performance varied substantially across input modalities (see Figure 4). All closed-source models (Sonnet-3.5, Gemini-2.0-flash, GPT-4o) performed best on 2D vision, followed by text, and worst on 3D vision. This suggests that these models may have undergone more training on 2D visual inputs, which are more common in spatial benchmarks. Interestingly, text input, despite posing significant challenges for humans, ranked second, indicating some robustness in linguistic reasoning. In contrast, open-source models (InternVL2.5, LLaVA-OneVision, Qwen2) performed poorly across the board, with near-random scores on visual inputs. Their relatively stronger performance on text tasks may reflect a reliance on superficial pattern recognition rather than grounded spatial understanding. As shown in Figure 6 (left), error patterns for ineffective moves and collisions align with the overall performance ranking across modalities. Out-of-bounds errors are most frequent in the text condition, nearly twice as common as in 2D vision, indicating that understanding board dimensions was a primary challenge in the textual setting. Additional results from our board state inference task further support this view, showing that models predict more correct objects on the board in 2D vision than in 3D vision (Figure 6, right).
192
+
193
+ # 6.4 Complexity Scaling
194
+
195
+ We analyzed the correlation matrix between the number of objects on the board and the shortest path solution length to assess how different types of complexity affect model performance (see Figure 7, top). While performance consistently drops with increasing complexity in both dimensions, the heatmaps reveal modality-specific trends. Performance declines more steeply with increasing geom count (particularly in 3D), suggesting that sequential planning under visual conditions poses a major challenge. In contrast, in the text-only setting, the number of geoms seems to have little effect, with errors mostly determined by the length of the shortest path solution. This highlights limitations in spatial reference from language alone.
196
+
197
+ ![](images/72d8c6085ee262aa08f0aef45f933eee7e61e797d605ef2d0d8c000e750e234d.jpg)
198
+ Figure 7: Cumulative graphs aggregated across agents. Top: Correlation matrix of remaining shortest-path lengths to the goal for tasks with optimal paths between 2-11 steps. Each run is capped at 20 actions, and the metric is computed at the agent's final state, either upon reaching the goal or, if unsolved, after the 20th action. Bottom: Error types in the board state inference auxiliary task over an increasing number of geoms on the board.
199
+
200
+ Data from the auxiliary board state inference task show that, while errors in predicting the coordinates of geoms increase with the number of geoms on the board, other error types remain relatively stable even at higher geom counts (see Figure 7, bottom). Format errors and the number of hallucinated geoms are low overall, mismatches in colors and shapes increase only slightly, and, surprisingly, the number of missed objects also stays relatively stable.
201
+
202
+ # 7 Conclusion
203
+
204
+ We have introduced iVISPAR, a novel interactive multimodal benchmark designed to evaluate the spatial reasoning capabilities of VLMs acting as agents in 3D vision. The benchmark, centered on the Sliding Geom Puzzle, evaluates VLMs' abilities in logical planning, spatial awareness, and multi-step problem-solving, aiming to reflect real-world spatial reasoning. Our evaluation tested a suite of state-of-the-art open-source and closed-source VLMs on a dataset of board configurations scaled along two complexity dimensions. We compared them to baselines for human performance and for optimal and random agents, providing insight into their performance under varying conditions.
205
+
206
+ Our findings demonstrate that VLMs struggle with spatial reasoning in 3D vision and that there are significant performance differences between the tested VLMs. While they understand the instructions and outperform random agents in simple spatial tasks, they struggle with more complex configurations and intricate problem properties. Interestingly, VLMs show stronger performance in 2D vision compared to 3D or text-based tasks. Our auxiliary board state inference task revealed that VLMs frequently miss geoms, misplace them on the board, or mismatch their colors or shapes, errors that occur more often with 3D vision input than with 2D. This suggests that visual alignment for 3D spatial reasoning continues to pose a significant challenge, underscoring persistent gaps in VLM capabilities and highlighting barriers to achieving human-level cognitive performance.
207
+
208
+ Future Work Looking ahead, we plan to expand the benchmark to incorporate additional tasks focused on scene understanding, as well as rotation and transformation challenges.
209
+
210
+ Resources For the most up-to-date results on state-of-the-art models and access to the leaderboard, please visit:
211
+
212
+ https://microcosm.ai/ivispar.
213
+
214
+ # Acknowledgments
215
+
216
+ This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) — 456666331, 321892712.
217
+
218
+ # Limitations
219
+
220
+ We restricted the context window, limiting the number of images VLMs can process. Extended image inputs often disrupt VLMs' understanding of sequential coherence and increase computational demands and API costs. This contrasts with human participants, who recall each step of an episode and draw from past experiences.
221
+
222
+ Additionally, while some models are optimized for long-context reasoning or "deep thinking," their architecture and usage patterns are ill-suited for step-wise, interactive simulations. Their per-frame API costs are disproportionately higher, making them impractical for the interaction format used in our benchmark. This also limits direct comparisons to human participants, who recall previous steps and integrate episodic knowledge more efficiently.
223
+
224
+ # Impact Statement
225
+
226
+ This paper contributes to advancements in vision-language models. While our work has potential applications in broader AI research, it does not introduce immediate ethical or societal risks beyond those already associated with the field. As our work is largely theoretical and not at a scale that could pose significant concerns, it does not raise specific risks of misuse or unintended consequences.
227
+
228
+ # References
229
+
230
+ Mostafa Abdou, Artur Kulmizev, Daniel Hershcovich, Stella Frank, Ellie Pavlick, and Anders Søgaard. 2021. Can language models encode perceptual structure without grounding? a case study in color.
231
+ Mohamed Aghzal, Erion Plaku, and Ziyu Yao. 2024. Can large language models be good path planners? a benchmark and investigation on spatial-temporal reasoning. Preprint, arxiv:2310.03249 [cs].
232
+ Bernd Bohnet, Azade Nova, Aaron T. Parisi, Kevin Swersky, Katayoon Goshvadi, Hanjun Dai, Dale Schuurmans, Noah Fiedel, and Hanie Sedghi. 2024. Exploring and benchmarking the planning capabilities of large language models. Preprint, arxiv:2406.13094 [cs].
233
+ Florian Bordes, Richard Yuanzhe Pang, Anurag Ajay, Alexander C. Li, Adrien Bardes, Suzanne Petryk, Oscar Manas, Zhiqiu Lin, Anas Mahmoud, Bargav Jayaraman, Mark Ibrahim, Melissa Hall, Yunyang Xiong, Jonathan Lebensold, Candace Ross, Srihari Jayakumar, Chuan Guo, Diane Bouchacourt, Haider Al-Tahan, and 22 others. 2024. An introduction to vision-language modeling. CoRR, abs/2405.17247.
236
+ Declan Campbell, Sunayana Rane, Tyler Giallanza, Nicolò De Sabbata, Kia Ghods, Amogh Joshi, Alexander Ku, Steven M. Frankland, Thomas L. Griffiths, Jonathan D. Cohen, and Taylor W. Webb. 2024. Understanding the limits of vision language models through the lens of the binding problem. CoRR, abs/2411.00238.
237
+ Declan Campbell, Sunayana Rane, Tyler Giallanza, Nicolò De Sabbata, Kia Ghods, Amogh Joshi, Alexander Ku, Steven M. Frankland, Thomas L. Griffiths, Jonathan D. Cohen, and Taylor W. Webb. 2025. Understanding the limits of vision language models through the lens of the binding problem. Preprint, arxiv:2411.00238 [cs].
238
+ Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, and 1 others. 2024. Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling. arXiv preprint arXiv:2412.05271.
239
+ An-Chieh Cheng, Hongxu Yin, Yang Fu, Qiushan Guo, Ruihan Yang, Jan Kautz, Xiaolong Wang, and Sifei Liu. 2024. SpatialRGPT: Grounded spatial reasoning in vision language models. Preprint, arxiv:2406.01584 [cs]. Version: 3.
240
+ Claude Team. 2024. Introducing the next generation of claude. https://www.anthropic.com/news/claude-3-family.
241
+ Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. ArXiv, abs/2010.11929.
242
+ Lin Duan, Yanming Xiu, and Maria Gorlatova. 2025. Advancing the understanding and evaluation of AR-generated scenes: When vision-language models shine and stumble. Preprint, arxiv:2501.13964 [cs].
243
+ Benjamin Estermann, Luca A. Lanzendorfer, Yannick Niedermayr, and Roger Wattenhofer. 2024. PUZZLES: A benchmark for neural algorithmic reasoning. Preprint, arxiv:2407.00401 [cs].
244
+ Yunhai Feng, Jiaming Han, Zhuoran Yang, Xiangyu Yue, Sergey Levine, and Jianlan Luo. 2025. Reflective planning: Vision-language models for multi-stage long-horizon robotic manipulation. Preprint, arxiv:2502.16707 [cs].
245
+ Gemini Team. 2024. Gemini 2.0 flash (experimental).
246
+
247
+ Marcus Gozon and Jingjin Yu. 2024. On computing makespan-optimal solutions for generalized sliding-tile puzzles. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38(9), pages 10288-10296.
248
+ Pranav Guruprasad, Harshvardhan Sikka, Jaewoo Song, Yangyue Wang, and Paul Pu Liang. 2024. Benchmarking vision, language, & action models on robotic learning tasks. Preprint, arxiv:2411.05821 [cs].
249
+ Peter E Hart, Nils J Nilsson, and Bertram Raphael. 1968. A formal basis for the heuristic determination of minimum cost paths. IEEE transactions on Systems Science and Cybernetics, 4(2):100-107.
250
+ Yingdong Hu, Fanqi Lin, Tong Zhang, Li Yi, and Yang Gao. 2023. Look before you leap: Unveiling the power of GPT-4v in robotic vision-language planning. Preprint, arxiv:2311.17842 [cs].
251
+ Gabriel Ilharco, Rowan Zellers, Ali Farhadi, and Hannaneh Hajishirzi. 2021. Probing contextual language models for common ground with visual representations. Preprint, arxiv:2005.00619 [cs].
252
+ Serwan Jassim, Mario Holubar, Annika Richter, Cornelius Wolff, Xenia Ohmer, and Elia Bruni. 2024. GRASP: A novel benchmark for evaluating language grounding and situated physics understanding in multimodal language models. Preprint, arxiv:2311.09048.
253
+ Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. 2016. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. Preprint, arxiv:1612.06890 [cs].
254
+ Amita Kamath, Jack Hessel, and Kai-Wei Chang. 2023a. What's "up" with vision-language models? investigating their struggle with spatial reasoning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 9161-9175. Association for Computational Linguistics.
255
+ Amita Kamath, Jack Hessel, and Kai-Wei Chang. 2023b. What's "up" with vision-language models? investigating their struggle with spatial reasoning. Preprint, arxiv:2310.19785 [cs].
256
+ Alexander Kuhnle and Ann Copestake. 2017. ShapeWorld - a new test methodology for multimodal language understanding. Preprint, arxiv:1704.04517 [cs].
257
+ Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. 2024a. LLaVA-OneVision: Easy visual task transfer. arXiv preprint arXiv:2408.03326.
258
+
259
+ Kanxue Li, Baosheng Yu, Qi Zheng, Yibing Zhan, Yuhui Zhang, Tianle Zhang, Yijun Yang, Yue Chen, Lei Sun, Qiong Cao, Li Shen, Lusong Li, Dapeng Tao, and Xiaodong He. 2024b. MuEP: A multimodal benchmark for embodied planning with foundation models. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, pages 129-138. International Joint Conferences on Artificial Intelligence Organization.
260
+ Zhuowan Li, Xingrui Wang, Elias Stengel-Eskin, Adam Kortylewski, Wufei Ma, Benjamin Van Durme, and Alan Yuille. 2023. Super-CLEVR: A virtual benchmark to diagnose domain robustness in visual reasoning. Preprint, arxiv:2212.00259 [cs].
261
+ Fangyu Liu, Guy Emerson, and Nigel Collier. 2023. Visual spatial reasoning. Preprint, arxiv:2205.00363 [cs].
262
+ Matteo G. Mecattaf, Ben Slater, Marko Tesić, Jonathan Prunty, Konstantinos Voudouris, and Lucy G. Cheke. 2024. A little less conversation, a little more action, please: Investigating the physical common-sense of LLMs in a 3d embodied environment. Preprint, arxiv:2410.23242 [cs].
263
+ Jack Merullo, Louis Castricato, Carsten Eickhoff, and Ellie Pavlick. 2023. Linearly mapping from image to text space. Preprint, arxiv:2209.15162 [cs].
264
+ Roshanak Mirzaee, Hossein Rajaby Faghihi, Qiang Ning, and Parisa Kordjamshidi. 2021. SpartQA: A textual question answering benchmark for spatial reasoning. Preprint, arxiv:2104.05832 [cs].
265
+ Bryan Lincoln Marques de Oliveira, Bruno Brandão, Murilo Lopes da Luz, Luana Guedes Barros Martins, Telma Woerle de Lima Soares, and Luckeciano Carvalho Melo. 2024. Sliding puzzles gym: A scalable benchmark for state representation in visual reinforcement learning. In NeurIPS 2024 Workshop on Open-World Agents.
266
+ OpenAI, Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Madry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, and 400 others. 2024. Gpt-4o system card. Preprint, arXiv:2410.21276.
267
+ Roma Patel and Ellie Pavlick. 2021. Mapping language models to grounded conceptual spaces. In International Conference on Learning Representations.
268
+ Navid Rajabi and Jana Kosecka. 2024a. GSR-BENCH: A benchmark for grounded spatial reasoning evaluation via multimodal LLMs. Preprint, arxiv:2406.13246 [cs]. Version: 2.
269
+ Navid Rajabi and Jana Kosecka. 2024b. Towards grounded visual spatial reasoning in multi-modal vision language models. Preprint, arxiv:2308.09778 [cs].
270
+
271
+ Md Imbesat Hassan Rizvi, Xiaodan Zhu, and Iryna Gurevych. 2024. SpaRC and SpaRP: Spatial reasoning characterization and path generation for understanding spatial reasoning capability of large language models. Preprint, arxiv:2406.04566 [cs].
272
+ Denisa Roberts and Lucas Roberts. 2024. Smart vision-language reasoners. Preprint, arxiv:2407.04212 [cs].
273
+ Ying Su, Zhan Ling, Haochen Shi, Jiayang Cheng, Yauwai Yim, and Yangqiu Song. 2024. ActPlan1k: Benchmarking the procedural planning ability of visual language models in household activities. Preprint, arxiv:2410.03907 [cs].
274
+ Yihong Tang, Ao Qu, Zhaokai Wang, Dingyi Zhuang, Zhaofeng Wu, Wei Ma, Shenhao Wang, Yunhan Zheng, Zhan Zhao, and Jinhua Zhao. 2024. Sparkle: Mastering basic spatial capabilities in vision language models elicits generalization to composite spatial reasoning. Preprint, arxiv:2410.16162 [cs].
275
+ Jiayu Wang, Yifei Ming, Zhenmei Shi, Vibhav Vineet, Xin Wang, and Neel Joshi. 2024a. Is A picture worth A thousand words? delving into spatial reasoning for vision language models. CoRR, abs/2406.14852.
276
+ Jiayu Wang, Yifei Ming, Zhenmei Shi, Vibhav Vineet, Xin Wang, Yixuan Li, and Neel Joshi. 2024b. Is a picture worth a thousand words? delving into spatial reasoning for vision language models. Preprint, arxiv:2406.14852 [cs].
277
+ Jingquan Wang, Harry Zhang, Huaifa Mustafa Unjhawala, Peter Negrut, Shu Wang, Khailanii Slaton, Radu Serban, Jin-Long Wu, and Dan Negrut. 2024c. SimBench: A rule-based multi-turn interaction benchmark for evaluating an LLM's ability to generate digital twins. Preprint, arxiv:2408.11987 [cs].
278
+ Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. 2024d. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191.
279
+ Sida I. Wang, Percy Liang, and Christopher D. Manning. 2016. Learning language games through interaction. Preprint, arxiv:1606.02447 [cs].
280
+ Xingrui Wang, Wufei Ma, Zhuowan Li, Adam Kortylewski, and Alan Yuille. 2023. 3d-aware visual question answering about parts, poses and occlusions. arXiv preprint.
281
+ Xinyu Wang, Bohan Zhuang, and Qi Wu. 2025. Are large vision language models good game players? Preprint, arxiv:2503.02358 [cs].
282
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, and 1 others. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837.
285
+ Qiucheng Wu, Handong Zhao, Michael Saxon, Trung Bui, William Yang Wang, Yang Zhang, and Shiyu Chang. 2024. VSP: Assessing the dual challenges of perception and reasoning in spatial planning tasks for VLMs. Preprint, arxiv:2407.01863 [cs].
286
+ Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, and 1 others. 2023. The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864.
287
+ Yutaro Yamada, Yihan Bao, Andrew K. Lampinen, Jungo Kasai, and Ilker Yildirim. 2024. Evaluating spatial understanding of large language models. Preprint, arxiv:2310.14540 [cs].
288
+ An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, and 40 others. 2024a. Qwen2 technical report. arXiv preprint arXiv:2407.10671.
289
+ An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, and 22 others. 2024b. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.
290
+ Fanlong Zeng, Wensheng Gan, Yongheng Wang, Ning Liu, and Philip S Yu. 2023. Large language models for robotics: A survey. arXiv preprint arXiv:2311.07226.
291
+ Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. 2023. Sigmoid loss for language image pre-training. Preprint, arXiv:2303.15343.
292
+ Wenqi Zhang, Zhenglin Cheng, Yuanyu He, Mengna Wang, Yongliang Shen, Zeqi Tan, Guiyang Hou, Mingqian He, Yanna Ma, Weiming Lu, and Yueting Zhuang. 2024. Multimodal self-instruct: Synthetic abstract image and visual reasoning instruction using language model. Preprint, arxiv:2407.07053 [cs].
293
+ Baining Zhao, Ziyou Wang, Jianjie Fang, Chen Gao, Fanhang Man, Jinqiang Cui, Xin Wang, Xinlei Chen, Yong Li, and Wenwu Zhu. 2025. Embodied-r: Collaborative framework for activating embodied spatial reasoning in foundation models via reinforcement learning. Preprint, arxiv:2504.12680 [cs].
294
+
295
+ # A Appendix
296
+
297
+ # A.1 Episode Details
298
+
299
+ # A.1.1 System Prompt Instructions
300
+
301
+ # Interactive Sliding Geom Puzzle Game
302
+
303
+ You are a highly intelligent AI solving a shape puzzle on a 4x4 grid. The board has two states: the current active state and the goal state. Your task is to generate valid actions that transform the current state into the goal state along the shortest path.
304
+
305
+ # Steps:
306
+
307
+ (1) Analyze current state.
308
+ (2) Compare to goal.
309
+ (3) Check past actions.
310
+ (4) Propose next move.
311
+
312
+ Movement Rules: Each object occupies one tile. Objects cannot leave the grid or overlap.
313
+
314
+ Action Format: move <color> <shape> <direction>
315
+
316
+ Use only the following:
317
+
318
+ Colors: green, red, blue, yellow
319
+
320
+ Shapes: cube, sphere, pyramid, cylinder
321
+
322
+ Directions: up, down, left, right
323
+
324
+ Examples: move green cube down, move red pyramid left
325
+
326
+ Important: No coordinates. Each action must change the state. Invalid if blocked or out of bounds.
327
+
328
+ Explain Reasoning: Before suggesting an action, explain why. End with:
329
+
330
+ action: move <color> <shape> <direction> (no extra characters after action: ...)
331
+
332
+ # Visual Input:
333
+
334
+ Current: {text_snippet_active};
335
+
336
+ Goal: {text_snippet_goal};
337
+
338
+ Past: {text_snippet_past}.
339
+
340
+ Final Requirement: Always end your output with:
341
+
342
+ description:<your object coordinate list>
343
+
344
+ Do not add characters after the word description.
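+
+ The action format above implies a simple response parser; the following regex-based sketch is illustrative only and is not the benchmark's own parser:
+
+ ```python
+ import re
+
+ ACTION_RE = re.compile(
+     r"action:\s*move\s+(green|red|blue|yellow)\s+"
+     r"(cube|sphere|pyramid|cylinder)\s+(up|down|left|right)\s*$",
+     re.IGNORECASE | re.MULTILINE,
+ )
+
+ def parse_action(response: str):
+     m = ACTION_RE.search(response)
+     return m.groups() if m else None   # None would be treated as an illegal command
+ ```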
345
+
346
+ # Board State Inference Auxiliary Task
347
+
348
+ You are a highly intelligent AI with exceptional spatial reasoning skills, and you are given the following task:
349
+
350
+ # Task Overview
351
+
352
+ 1. You are provided with an input image of colored geometric objects on a $4 \times 4$ board.
353
+ 2. Analyze the current board state and locate the position of all objects on the board.
354
+ 3. Respond with a list of the chess-style coordinates and their objects.
355
+
356
+ # Board Overview
357
+
358
+ The board has labeled columns, rows, and fields:
359
+
360
+ - Columns a-d run from left to right in the image.
361
+ - Rows 1-4 run from bottom to top in the image.
362
+
363
+ # Object Overview
364
+
365
+ - On the board are various objects, uniquely defined by their color and shape:
366
+ - Colors: green, red, blue, yellow
367
+ - Shapes: cube, sphere, pyramid, cylinder
368
+
369
+ # Solution Format
370
+
371
+ - Start your solution with 'Solution: ' and list each object in any order, separated by a comma and a single space (", ")
372
+ - Your solution for each object must follow this exact format: <coordinate> <object_color> <object_shape>
373
+
374
+ - coordinate must use a letter a-d followed by a digit 1-4.
375
+ - object_color must be exactly one of: green, red, blue, yellow.
376
+ - object_shape must be exactly one of: cube, sphere, pyramid, cylinder.
377
+
378
+ - Only list coordinates that contain an object; do not mention empty squares.
379
+ - Do not use quotation marks or angle brackets (<>) in your action.
380
+ - Do not include any extra text, reasoning, or punctuation after the formatted list.
381
+
382
+ # Example
383
+
384
+ Solution: a3 green sphere, d1 blue cylinder, b4 yellow cube, c2 red pyramid
385
+
386
+ # Validation
387
+
388
+ - No two objects share the same coordinate.
389
+ - Every listed object uses one of the four allowed colors and shapes.
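+
+ For illustration, a response in this format can be parsed with a sketch like the following (an assumed helper, not the paper's parser); unparseable entries would be counted as format errors (cf. Appendix A.6.1):
+
+ ```python
+ import re
+
+ GEOM_RE = re.compile(r"^([a-d][1-4]) (green|red|blue|yellow) (cube|sphere|pyramid|cylinder)$")
+
+ def parse_solution(response: str):
+     m = re.search(r"Solution:\s*(.+)", response)
+     if m is None:
+         return None, ["solution_not_found"]     # cf. error class E8 in Appendix A.6.1
+     geoms, format_errors = [], []
+     for entry in m.group(1).split(","):
+         g = GEOM_RE.match(entry.strip())
+         if g:
+             geoms.append(g.groups())
+         else:
+             format_errors.append(entry.strip())
+     return geoms, format_errors
+ ```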
390
+
391
+ # A.1.2 Observations of Scaling Episode Complexity
392
+
393
+ ![](images/00481fad0bbfc81e0166b3d0a5867b8a5790da8ebf48d64d8ed102742341affc.jpg)
394
+
395
+ ![](images/a8746be1b8f7b8637e4a946eba68857d17a09b32a920696082dc5fb347f0892e.jpg)
396
+
397
+ ![](images/30ce214173b6ee708b438427cbb06e56d95754e54d652e816bd639c354878f60.jpg)
398
+
399
+ Sample from the Geom Board Environment in our dataset (ds25a), showing a state (blue) and the goal (green) at each step of an episode on a $4 \times 4$ board, with 2 geoms and an optimal path length of 8. All modalities are shown.
400
+
401
+ ![](images/7d830096bd5c2e36fe69b87ac0ef3d9a63c8f2904200a0bbe5e81bafef1fe1ab.jpg)
402
+
403
+ ![](images/fd65d87e0029fd56af98f0fa6df7cafa3452159010608e94ec22aa63037c294f.jpg)
404
+
405
+ ![](images/282e812841506827fd6eee24517910b05cfc0f6150cc940bb95c088a23427c03.jpg)
406
+
407
+ Sample from the Geom Board Environment in our dataset (ds25a), showing a state (blue) and the goal (green) at each step of an episode on a $4 \times 4$ board, with 5 geoms and an optimal path length of 4. All modalities are shown.
408
+
409
+ ![](images/ebf4cce1b6faf00e8a7003f3f008142daa88f4094ab050837d4b8739c020999c.jpg)
410
+
411
+ ![](images/d7f0e6f862e8f603b3a0bb36523d33bfb391cbd357dd4c37a77aaada99400fb1.jpg)
412
+
413
+ ![](images/b213ffa5c66c5e63d1549b0a540de6a5e6401b0f6f33962138133943e6bd5bcb.jpg)
414
+
415
+ Sample from the Geom Board Environment in our dataset (ds25a), showing a state (blue) and the goal (green) at each step of an episode on a $4 \times 4$ board, with 8 geoms and an optimal path length of 2. All modalities are shown.
416
+
417
+ ![](images/2d6cafb016a23512f7009e91c5276f3d83c354094ceea8b8d25ea62611714b4b.jpg)
418
+ Figure 8: Examples of VLMs' observations for a state (blue) and the goal (green) at each step during an episode of the Sliding Geom Puzzle environment, on a $4 \times 4$ board with 2, 5, 8, and 11 geoms and optimal path lengths of 8, 4, 2, and 6, respectively. Left to right, each tested modality: vision 3D, vision 2D, and text-based representation.
419
+
420
+ ![](images/d23f076e0092a6b129f421d09c04920270c459054e8e45dcf8eea9968e9db700.jpg)
421
+
422
+ ![](images/bbd4967b33722c3a69f324b964a73c30148e20a269cc4ce549910fe2fee79255.jpg)
423
+
424
+ Sample from the Geom Board Environment in our dataset (ds25a), showing a state (blue) and the goal (green) at each step of an episode on a $4 \times 4$ board, with 11 geoms and an optimal path length of 6. All modalities are shown.
425
+
426
+ # A.1.3 Interaction Example
427
+
428
+ # Input Example
429
+
430
+ ![](images/2998b55358f98b7327d7ec900723bbf2554b131b99b85c8a7f290bbbd4bcf24d.jpg)
431
+
432
+ ![](images/6498c2292d841da9424b737ae4fe050f01b10f424ff25f3c2de6dd676b1b6f38.jpg)
433
+
434
+ ![](images/cd17554da01d47bac19420ffe4515fa006db6a21eebb90be094a889788d4e5f7.jpg)
435
+
436
+ # Analyze the Images
437
+
438
+ You can view your current active board state in the last image marked with the label active in the image and a blue background. Study this image and the objects with their positions carefully.
439
+
440
+ Your goal is to match the goal state, shown in the image marked with the label goal in the image and a green background. Study this image and the objects with their positions carefully.
441
+
442
+ # Additionally, you are provided with:
443
+
444
+ - The previous state image(s) marked with the label past in the image and a grey background.
445
+ - Your previous suggested action
446
+ - Use this information by comparing it to your current active state to determine your next action.
447
+
448
+ # Invalid Actions:
449
+
450
+ - No Overlap: You are not allowed to position two objects in the same tile.
451
+ - If the suggested action does not move any objects, it is invalid (e.g., blocked by another object or out of bounds).
452
+ - Use the previous image(s) and action to understand why it failed and suggest a different action.
453
+
454
+ It is of most importance you always end your response with this exact format:
455
+
456
+ action: move <object color> <object shape> <direction>
457
+
458
+ where you replace <object color> <object shape> <direction> with the valid move action based on your reasoning and do not add any characters after your action.
459
+
460
+ # Output Example
461
+
462
+ To reach the goal state I need to compare the current active state with the goal state.
463
+
464
+ # Current active state
465
+
466
+ - In the current active state the green sphere is on top blocking movements of objects below it.
467
+ - The yellow sphere is on the bottom row B1.
468
+
469
+ # Goal state
470
+
471
+ - In the goal state the green sphere is two steps down in column B.
472
+ - The red cylinder is in the bottom left, two steps down.
473
+ - The yellow sphere is in position B2, two steps up.
474
+
475
+ Analyzing the images I see that the yellow spheres path is blocked by the green cylinder I'll first move another object to clear the way. Moving the blue cube rightward will help clear the blocked pathway for the green cylinder.
476
+
477
+ Action: Move the blue cube right
478
+
479
+ # A.1.4 Episode Progression Example
480
+
481
+ ![](images/df158a86bb8937e943dce50e94b301e2483ddff261b50372a284a495db9ce82b.jpg)
482
+
483
+ ![](images/b2714c352cb9ad49863f9f514b51ff9197bc4f998b9fe6a8add1a7c89cd3adbf.jpg)
484
+
485
+ ![](images/277bae9e726c59dd7f6b561fe58ba3ee628c00cd4bc4cd886a160935db2da1cd.jpg)
486
+
487
+ ![](images/0be950eef54023f0a39f9e58ee5925522db8d9a038251bf7275bc6b09ccc89c4.jpg)
488
+
489
+ ![](images/a6191f1b9f94a2150271b6592ac160083a29e1fb891f969441943d24f6d808f1.jpg)
490
+
491
+ ![](images/4e3f26d4253cb9f324ffc7a4213bc785fdfc8bcc4e16bc4855ba907af39a6a88.jpg)
492
+
493
+ ![](images/afec77ab0fd08bc43ea4e417351fcc7c3abf905c30608462b0c95bf7ac24b5d5.jpg)
494
+
495
+ ![](images/5eca84a171b61208ab5009227f56d6f765188f636df1a9175070e3cd83767d34.jpg)
496
+
497
+ ![](images/09d22f7a4c0318591151ba3aa71adebf87552264b4628b2b9c124f358c4f87d2.jpg)
498
+
499
+ ![](images/58c504f177b52441dddfc2e3b345f7b1eb59c225c1446f918f3f8b1270a44cae.jpg)
500
+ Figure 9: Example of an episode progression for an environment in vision 3D (other modalities progress analogously) with an optimal path length of 9, showing steps 1 to 12 in order, including 3 mistakes (red action text).
501
+
502
+ ![](images/d7deb3cf2718e18f50db1c4afe94526b743607553729194478d5ac396e473a24.jpg)
503
+
504
+ ![](images/200360e09b550907d17e520c395aa813f6f9a74c77b9b7bee38b6e6415a3e84f.jpg)
505
+
506
+ # A.2 Models
507
+
508
+ <table><tr><td>Name</td><td>LLM</td><td>Vision Encoder</td><td>Model Size</td></tr><tr><td colspan="4">Closed Source Models</td></tr><tr><td>Sonnet-3.5 (Claude Team, 2024)</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Gemini-2.0-flash (Gemini Team, 2024)</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GPT-4o (OpenAI et al., 2024)</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan="4">Open Source Models</td></tr><tr><td>InternVL 2.5 (Chen et al., 2024)</td><td>Qwen 2.5 (Yang et al., 2024b)</td><td>InternViT (Chen et al., 2024)</td><td>78.4B</td></tr><tr><td>LLaVA OneVision (Li et al., 2024a)</td><td>Qwen 2 (Yang et al., 2024a)</td><td>SigLIP (Zhai et al., 2023)</td><td>73.2B</td></tr><tr><td>Qwen 2 VL (Wang et al., 2024d)</td><td>Qwen 2 (Yang et al., 2024a)</td><td>ViT (Dosovitskiy et al., 2020)</td><td>73.4B</td></tr></table>
509
+
510
+ Table 1: Overview of evaluated models. - indicates unavailable information.
511
+
512
+ # A.3 Sliding Tile Puzzle
513
+
514
+ ![](images/d5db02f52ffe987116a9634009d49509b29785fb091e4de417366597c6439ce2.jpg)
515
+ Figure 10: Visualization of a current state and the goal state in a classic 15-tile Sliding Tile Puzzle (STP) on a $4 \times 4$ board, playable by agents within the iVISPAR benchmark.
516
+
517
+ ![](images/90da14d1b17b0471593e561201e952201cbf8054a34b90136c8fd0643d683e22.jpg)
518
+
519
+ The sequential generalized sliding-tile puzzle (SGSTP) is a generalization of the classic 15-Tile Sliding Tile Puzzle (STP), see Figure 10. In the SGSTP, a set of $n < m_1 \times m_2$ tiles, each uniquely labeled $1, \ldots, n$ , are placed on a rectangular grid of size $m_1 \times m_2$ , denoted by $G = (V, E)$ . The grid has $m_1 \times m_2 - n$ empty positions that allow tile movement.
520
+
521
+ A configuration of tiles is represented as an injective mapping from the set $\{1, \ldots, n\}$ to positions $V = \{(v_x, v_y) : 1 \leq v_x \leq m_2, 1 \leq v_y \leq m_1\}$ . Each tile must be repositioned from an arbitrary initial configuration $S = \{s_1, \ldots, s_n\}$ to a specified goal configuration $G = \{g_1, \ldots, g_n\}$ , such as an ordered row-major layout.
522
+
523
+ Let the movement path of tile $i$ , where $1 \leq i \leq n$ , be expressed as $p_i : \mathbb{N}_0 \to V$ . The puzzle seeks a set of feasible paths $P = \{p_1, \ldots, p_n\}$ that satisfy the following conditions for all $1 \leq i, j \leq n$ with $i \neq j$ , and for all time steps $t \geq 0$ :
524
+
525
+ Incremental Movement: $p_i(t + 1) = p_i(t)$ or $(p_i(t + 1), p_i(t)) \in E$ . Tiles move to adjacent, unoccupied positions or stay still.
526
+
527
+ Goal Achievement: $p_i(0) = s_i$ and $p_i(T) = g_i$ for some $T \geq 0$. Each tile must start at $s_i$ and reach $g_i$.
+
+ Exclusive Occupancy: $p_i(t) \neq p_j(t)$ for all $i \neq j$. Two tiles cannot occupy the same position at the same time.
528
+
529
+ In this sequential version, tiles move one at a time. Therefore, the head-on collision and corner-following constraints found in the generalized sliding-tile puzzle are omitted, as simultaneous tile movements are not permitted.
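+
+ As a concrete illustration of these constraints, the following sketch checks whether a set of tile paths is a feasible solution; paths are assumed to be equal-length lists of grid positions, and the code is illustrative rather than the benchmark's own implementation:
+
+ ```python
+ def is_valid_sgstp_solution(paths, starts, goals):
+     T = len(paths[0])                                      # common number of time steps
+     for p, s, g in zip(paths, starts, goals):
+         if p[0] != s or p[-1] != g:                        # goal achievement
+             return False
+         for t in range(T - 1):                             # incremental movement (4-adjacency or stay)
+             if abs(p[t + 1][0] - p[t][0]) + abs(p[t + 1][1] - p[t][1]) > 1:
+                 return False
+     for t in range(T):                                     # exclusive occupancy
+         cells = [p[t] for p in paths]
+         if len(cells) != len(set(cells)):
+             return False
+     for t in range(T - 1):                                 # sequential variant: one tile moves at a time
+         if sum(p[t] != p[t + 1] for p in paths) > 1:
+             return False
+     return True
+ ```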
530
+
531
+ # A.4 Detailed Results
532
+
533
+ # A.4.1 Performance Results
534
+
535
+ <table><tr><td>Model</td><td>Metric</td><td>Avg</td><td>3D</td><td>2D</td><td>Text</td></tr><tr><td colspan="6">Closed Source Models</td></tr><tr><td rowspan="3">Sonnet-3.5</td><td>Completed episodes</td><td>54.56</td><td>28.67</td><td>89.67</td><td>45.33</td></tr><tr><td>Optimal path deviation</td><td>3.05</td><td>4.10</td><td>1.44</td><td>3.60</td></tr><tr><td>Board state inference</td><td>60.00</td><td>35.38</td><td>84.62</td><td>-</td></tr><tr><td rowspan="3">Gemini-2.0-flash</td><td>Completed episodes</td><td>27.11</td><td>12.67</td><td>47.33</td><td>21.33</td></tr><tr><td>Optimal path deviation</td><td>4.87</td><td>5.25</td><td>4.09</td><td>5.26</td></tr><tr><td>Board state inference</td><td>54.08</td><td>28.67</td><td>79.49</td><td>-</td></tr><tr><td rowspan="3">GPT-4o</td><td>Completed episodes</td><td>17.56</td><td>9.33</td><td>37.33</td><td>6.00</td></tr><tr><td>Optimal path deviation</td><td>5.30</td><td>5.45</td><td>4.15</td><td>6.30</td></tr><tr><td>Board state inference</td><td>41.67</td><td>19.49</td><td>63.85</td><td>-</td></tr><tr><td colspan="6">Open Source Models</td></tr><tr><td rowspan="3">InternVL2.5-78B</td><td>Completed episodes</td><td>10.16</td><td>1.67</td><td>9.42</td><td>19.33</td></tr><tr><td>Optimal path deviation</td><td>5.98</td><td>6.39</td><td>5.86</td><td>5.69</td></tr><tr><td>Board state inference</td><td>34.95</td><td>16.51</td><td>53.38</td><td>-</td></tr><tr><td rowspan="3">LLaVA-OneVision-72B</td><td>Completed episodes</td><td>8.22</td><td>0.67</td><td>1.33</td><td>22.67</td></tr><tr><td>Optimal path deviation</td><td>6.35</td><td>6.75</td><td>6.81</td><td>5.50</td></tr><tr><td>Board state inference</td><td>26.36</td><td>14.72</td><td>38.00</td><td>-</td></tr><tr><td rowspan="3">Qwen2-72B</td><td>Completed episodes</td><td>5.89</td><td>0.67</td><td>1.67</td><td>15.33</td></tr><tr><td>Optimal path deviation</td><td>6.37</td><td>6.66</td><td>6.54</td><td>5.90</td></tr><tr><td>Board state inference</td><td>41.54</td><td>18.77</td><td>64.31</td><td>-</td></tr><tr><td colspan="6">Aggregate Averages</td></tr><tr><td rowspan="3">Average</td><td>Completed episodes</td><td>20.59</td><td>7.04</td><td>26.68</td><td>21.83</td></tr><tr><td>Optimal path deviation</td><td>5.32</td><td>5.76</td><td>4.41</td><td>5.32</td></tr><tr><td>Board state inference</td><td>43.10</td><td>22.26</td><td>63.94</td><td>-</td></tr></table>
536
+
537
+ Table 2: Evaluation of models across three modalities. Each row shows average episode completion rate (%), mean deviation from the optimal path (see Section 4.6), and board state inference accuracy (%).
538
+
539
+ # A.4.2 Error Counts for the Geom Puzzle
540
+
541
+ <table><tr><td>Model</td><td>Metric</td><td>Avg</td><td>3D</td><td>2D</td><td>Text</td></tr><tr><td colspan="6">Closed Source Models</td></tr><tr><td rowspan="5">Sonnet-3.5</td><td>EM</td><td>6.31</td><td>6.51</td><td>6.20</td><td>6.21</td></tr><tr><td>IM</td><td>1.86</td><td>3.34</td><td>0.21</td><td>2.03</td></tr><tr><td>OD</td><td>3.60</td><td>4.77</td><td>2.29</td><td>3.75</td></tr><tr><td>OB</td><td>1.59</td><td>1.95</td><td>0.04</td><td>2.79</td></tr><tr><td>IC</td><td>0.02</td><td>0.07</td><td>0.00</td><td>0.00</td></tr><tr><td rowspan="5">Gemini-2.0-flash</td><td>EM</td><td>5.68</td><td>5.80</td><td>6.34</td><td>4.91</td></tr><tr><td>IM</td><td>2.95</td><td>3.87</td><td>2.35</td><td>2.63</td></tr><tr><td>OD</td><td>6.14</td><td>6.83</td><td>5.51</td><td>6.08</td></tr><tr><td>OB</td><td>2.56</td><td>2.25</td><td>1.11</td><td>4.33</td></tr><tr><td>IC</td><td>0.01</td><td>0.01</td><td>0.00</td><td>0.03</td></tr><tr><td rowspan="5">GPT-4o</td><td>EM</td><td>4.65</td><td>5.50</td><td>5.95</td><td>2.51</td></tr><tr><td>IM</td><td>2.86</td><td>4.03</td><td>2.36</td><td>2.19</td></tr><tr><td>OD</td><td>6.53</td><td>6.36</td><td>5.51</td><td>7.71</td></tr><tr><td>OB</td><td>3.81</td><td>2.69</td><td>1.85</td><td>6.90</td></tr><tr><td>IC</td><td>0.26</td><td>0.24</td><td>0.52</td><td>0.03</td></tr><tr><td colspan="6">Open Source Models</td></tr><tr><td rowspan="5">InternVL2.5-78B</td><td>EM</td><td>5.00</td><td>4.94</td><td>5.74</td><td>4.39</td></tr><tr><td>IM</td><td>4.24</td><td>5.39</td><td>4.80</td><td>2.59</td></tr><tr><td>OD</td><td>5.90</td><td>6.06</td><td>5.70</td><td>5.92</td></tr><tr><td>OB</td><td>3.38</td><td>3.16</td><td>2.52</td><td>4.38</td></tr><tr><td>IC</td><td>0.59</td><td>0.21</td><td>0.29</td><td>1.26</td></tr><tr><td rowspan="5">LLaVA-OneVision-72B</td><td>EM</td><td>3.95</td><td>3.41</td><td>3.22</td><td>5.23</td></tr><tr><td>IM</td><td>4.12</td><td>4.55</td><td>4.42</td><td>3.40</td></tr><tr><td>OD</td><td>4.89</td><td>4.58</td><td>4.74</td><td>5.36</td></tr><tr><td>OB</td><td>4.17</td><td>4.62</td><td>4.46</td><td>3.44</td></tr><tr><td>IC</td><td>1.38</td><td>1.90</td><td>2.19</td><td>0.07</td></tr><tr><td rowspan="5">Qwen2-72B</td><td>EM</td><td>4.07</td><td>3.88</td><td>3.96</td><td>4.85</td></tr><tr><td>IM</td><td>4.61</td><td>4.81</td><td>4.67</td><td>3.89</td></tr><tr><td>OD</td><td>5.39</td><td>5.55</td><td>5.21</td><td>5.25</td></tr><tr><td>OB</td><td>3.72</td><td>4.05</td><td>3.17</td><td>3.83</td></tr><tr><td>IC</td><td>0.10</td><td>0.07</td><td>0.06</td><td>0.26</td></tr><tr><td colspan="6">Aggregate Averages</td></tr><tr><td rowspan="5">Average</td><td>EM</td><td>4.82</td><td>4.72</td><td>5.04</td><td>4.68</td></tr><tr><td>IM</td><td>3.61</td><td>4.45</td><td>3.34</td><td>2.79</td></tr><tr><td>OD</td><td>5.40</td><td>5.66</td><td>4.87</td><td>5.68</td></tr><tr><td>OB</td><td>3.28</td><td>3.35</td><td>2.33</td><td>4.28</td></tr><tr><td>IC</td><td>0.35</td><td>0.33</td><td>0.45</td><td>0.28</td></tr></table>
542
+
543
+ Table 3: Evaluation of models across three modalities. Each row shows average steps per episode that were effective moves (EM), ineffective moves (IM), occupied destination moves (OD), out of bounds moves (OB) and illegal commands (IC).
544
+
545
+ # A.4.3 Error Counts for the Auxiliary Task
546
+
547
+ <table><tr><td>Model</td><td>Metric</td><td>Avg</td><td>3D</td><td>2D</td><td>Text</td></tr><tr><td colspan="6">Closed Source Models</td></tr><tr><td rowspan="7">Sonnet-3.5</td><td>Correct</td><td>3.90</td><td>2.30</td><td>5.50</td><td>-</td></tr><tr><td>Missed</td><td>1.42</td><td>1.84</td><td>1.00</td><td>-</td></tr><tr><td>Hallucinated</td><td>0.00</td><td>0.00</td><td>0.00</td><td>-</td></tr><tr><td>Coord Errors</td><td>1.08</td><td>2.16</td><td>0.00</td><td>-</td></tr><tr><td>Color Errors</td><td>0.38</td><td>0.76</td><td>0.00</td><td>-</td></tr><tr><td>Shape Errors</td><td>0.37</td><td>0.74</td><td>0.00</td><td>-</td></tr><tr><td>Format Errors</td><td>0.00</td><td>0.00</td><td>0.00</td><td>-</td></tr><tr><td rowspan="7">Gemini-2.0-flash</td><td>Correct</td><td>3.52</td><td>1.86</td><td>5.17</td><td>-</td></tr><tr><td>Missed</td><td>0.91</td><td>1.02</td><td>0.80</td><td>-</td></tr><tr><td>Hallucinated</td><td>0.14</td><td>0.13</td><td>0.14</td><td>-</td></tr><tr><td>Coord Errors</td><td>1.98</td><td>3.48</td><td>0.48</td><td>-</td></tr><tr><td>Color Errors</td><td>0.66</td><td>1.14</td><td>0.18</td><td>-</td></tr><tr><td>Shape Errors</td><td>0.65</td><td>1.14</td><td>0.16</td><td>-</td></tr><tr><td>Format Errors</td><td>0.05</td><td>0.00</td><td>0.09</td><td>-</td></tr><tr><td rowspan="7">GPT-4o</td><td>Correct</td><td>2.71</td><td>1.27</td><td>4.15</td><td>-</td></tr><tr><td>Missed</td><td>1.31</td><td>1.67</td><td>0.95</td><td>-</td></tr><tr><td>Hallucinated</td><td>0.03</td><td>0.01</td><td>0.04</td><td>-</td></tr><tr><td>Coord Errors</td><td>2.34</td><td>3.33</td><td>1.35</td><td>-</td></tr><tr><td>Color Errors</td><td>0.77</td><td>1.18</td><td>0.35</td><td>-</td></tr><tr><td>Shape Errors</td><td>0.75</td><td>1.18</td><td>0.32</td><td>-</td></tr><tr><td>Format Errors</td><td>0.02</td><td>0.04</td><td>0.00</td><td>-</td></tr><tr><td colspan="6">Aggregate Averages</td></tr><tr><td rowspan="7">Average</td><td>Correct</td><td>3.37</td><td>1.81</td><td>4.94</td><td>-</td></tr><tr><td>Missed</td><td>1.21</td><td>1.51</td><td>0.92</td><td>-</td></tr><tr><td>Hallucinated</td><td>0.06</td><td>0.05</td><td>0.06</td><td>-</td></tr><tr><td>Coord Errors</td><td>1.80</td><td>2.99</td><td>0.61</td><td>-</td></tr><tr><td>Color Errors</td><td>0.60</td><td>1.03</td><td>0.18</td><td>-</td></tr><tr><td>Shape Errors</td><td>0.59</td><td>1.02</td><td>0.16</td><td>-</td></tr><tr><td>Format Errors</td><td>0.02</td><td>0.01</td><td>0.03</td><td>-</td></tr></table>
548
+
549
+ Table 4: Error analysis for the auxiliary position inference task across vision modalities (closed source models).
550
+
551
+ <table><tr><td>Model</td><td>Metric</td><td>Avg</td><td>3D</td><td>2D</td><td>Text</td></tr><tr><td colspan="6">Open Source Models</td></tr><tr><td rowspan="7">InternVL2.5-78B</td><td>Correct</td><td>2.27</td><td>1.07</td><td>3.47</td><td>-</td></tr><tr><td>Missed</td><td>0.89</td><td>1.00</td><td>0.77</td><td>-</td></tr><tr><td>Hallucinated</td><td>0.03</td><td>0.04</td><td>0.01</td><td>-</td></tr><tr><td>Coord Errors</td><td>1.62</td><td>2.92</td><td>0.32</td><td>-</td></tr><tr><td>Color Errors</td><td>0.59</td><td>1.11</td><td>0.08</td><td>-</td></tr><tr><td>Shape Errors</td><td>0.58</td><td>1.08</td><td>0.08</td><td>-</td></tr><tr><td>Format Errors</td><td>1.63</td><td>1.30</td><td>1.97</td><td>-</td></tr><tr><td rowspan="7">LLaVA-OneVision-72B</td><td>Correct</td><td>1.71</td><td>0.96</td><td>2.47</td><td>-</td></tr><tr><td>Missed</td><td>1.02</td><td>1.18</td><td>0.86</td><td>-</td></tr><tr><td>Hallucinated</td><td>0.34</td><td>0.31</td><td>0.37</td><td>-</td></tr><tr><td>Coord Errors</td><td>3.30</td><td>3.95</td><td>2.65</td><td>-</td></tr><tr><td>Color Errors</td><td>1.28</td><td>1.58</td><td>0.97</td><td>-</td></tr><tr><td>Shape Errors</td><td>1.23</td><td>1.57</td><td>0.90</td><td>-</td></tr><tr><td>Format Errors</td><td>0.37</td><td>0.09</td><td>0.65</td><td>-</td></tr><tr><td rowspan="7">Qwen2-72B</td><td>Correct</td><td>2.70</td><td>1.22</td><td>4.18</td><td>-</td></tr><tr><td>Missed</td><td>0.97</td><td>1.08</td><td>0.85</td><td>-</td></tr><tr><td>Hallucinated</td><td>0.58</td><td>0.81</td><td>0.36</td><td>-</td></tr><tr><td>Coord Errors</td><td>2.52</td><td>3.80</td><td>1.24</td><td>-</td></tr><tr><td>Color Errors</td><td>0.93</td><td>1.42</td><td>0.43</td><td>-</td></tr><tr><td>Shape Errors</td><td>1.12</td><td>1.67</td><td>0.58</td><td>-</td></tr><tr><td>Format Errors</td><td>0.22</td><td>0.06</td><td>0.38</td><td>-</td></tr><tr><td colspan="6">Aggregate Averages</td></tr><tr><td rowspan="7">Average</td><td>Correct</td><td>2.23</td><td>1.08</td><td>3.37</td><td>-</td></tr><tr><td>Missed</td><td>0.96</td><td>1.09</td><td>0.83</td><td>-</td></tr><tr><td>Hallucinated</td><td>0.32</td><td>0.39</td><td>0.25</td><td>-</td></tr><tr><td>Coord Errors</td><td>2.48</td><td>3.56</td><td>1.40</td><td>-</td></tr><tr><td>Color Errors</td><td>0.93</td><td>1.37</td><td>0.49</td><td>-</td></tr><tr><td>Shape Errors</td><td>0.98</td><td>1.44</td><td>0.52</td><td>-</td></tr><tr><td>Format Errors</td><td>0.74</td><td>0.48</td><td>1.00</td><td>-</td></tr></table>
552
+
553
+ Table 5: Error analysis for the auxiliary position inference task across vision modalities (open source models).
554
+
555
+ # A.5 Supplementary Graphs
556
+
557
+ ![](images/6d1c629bcd23fb4942848f72e7678de95dde0f7fa694775ddc89d32217519273.jpg)
558
+ Figure 11: VLMs' average action counts per episode by category for each modality. Number of actions per episode is capped at 20. Effective / ineffective actions respectively decrease / increase the path length to the goal state. Occupied destination and out-of-bounds are invalid moves, while illegal commands break the instructed action format, all of which leave the board state unchanged.
559
+
560
+ ![](images/b1fbf428f7920dffb997bfcc9461941032b4544a9318bfab7a04661aa2b95af0.jpg)
561
+ Figure 12: VLMs' average shortest path to the goal state across all modalities. Number of actions per episode is capped at 20.
562
+
563
+ # A.6 Additional Agent Interaction Data
564
+
565
+ # A.6.1 Systematic Formatting Errors
566
+
567
+ Unless noted otherwise, the numeral in parentheses after a model name is the count of formatting errors for that category. Notably, Sonnet-3.5 is not listed since it did not make any format errors, explaining its high benchmarking score.
568
+
569
+ # (E1) Empty-cell mentions (N = 280)
570
+
571
+ The most common violation is the explicit listing of empty grid cells, even though instructions forbid any mention of empties. Surface forms vary widely, even within a single model:
572
+
573
+ a4 empty
574
+
575
+ c3 blank
576
+
577
+ b1 no object
578
+
579
+ a1 none none
580
+
581
+ Gemini-2.0-flash (24), InternVL-2.5-78B (21), LLaVA-OneVision-72B (105), Qwen2-72B (130).
582
+
583
+ # (E2) Missing attributes $(\mathbf{N} = 88)$
584
+
585
+ Some lines list an object but drop one of its required attributes (colour or shape):
586
+
587
+ c1 none pyramid
588
+
589
+ b2 sphere
590
+
591
+ Gemini-2.0-flash (2), LLaVA-OneVision-72B (86).
592
+
593
+ # (E3) Illegal attributes $(\mathbf{N} = 21)$
594
+
595
+ Entries introduce colours or shapes outside the predefined vocabulary, or mis-name legitimate ones:
596
+
597
+ b2 black cone
598
+
599
+ b3 red block
600
+
601
+ Gemini-2.0-flash (2), GPT-4o (9), LLaVA-OneVision-72B (9), Qwen2-72B (1).
602
+
603
+ # (E4) Attribute order inversion $(\mathbf{N} = 21)$
604
+
605
+ Colour-shape order is sometimes reversed:
606
+
607
+ b4 cylinder yellow
608
+
609
+ LLaVA-OneVision-72B (21)
610
+
611
+ # (E5) Multiple attributes per object $(\mathbf{N} = 3)$
612
+
613
+ A single coordinate is assigned more than one colour or a compound colour phrase:
614
+
615
+ d4 blue and red cube
616
+
617
+ d1 multicoloured cube
618
+
619
+ a2 a3 green pyramid
620
+
621
+ GPT-4o (1), InternVL-2.5-78B (1), Qwen2-72B (1).
622
+
623
+ # (E6) Out-of-bounds coordinates $(\mathbf{N} = 3)$
624
+
625
+ Objects occasionally appear outside the $4 \times 4$ grid, e.g. likely due to OCR misreads:
626
+
627
+ f4 yellow pyramid
628
+
629
+ c5 green pyramid
630
+
631
+ Gemini-2.0-flash (1), InternVL-2.5-78B (1), LLaVA-OneVision-72B (1).
632
+
633
+ # (E7) Non-list answers $(\mathbf{N} = 47)$
634
+
635
+ A few generations present free-form summaries or spurious URLs instead of the required flat list:
636
+
637
+ d2 blue cylinder summary a4 yellow cylinder b4 blue sphere ...
638
+
639
+ d3 green sphere https user images githubuser-content com /../f45e3211ecb93a6b429446.png
640
+
641
+ obj in objectsamples if obj none output
642
+
643
+ yellow objectshape must be exactly one of cube
644
+
645
+ we will create the solution
646
+
647
+ InternVL-2.5-78B (47).
648
+
649
+ # (E8) Solution not found $(\mathbf{N} = 147)$
650
+
651
+ Responses that did not begin their solution with "Solution: " are not picked up by the solution parser. GPT-4o (1), InternVL-2.5-78B (145), Qwen2-72B (1).
652
+
653
+ # A.6.2 Chain of Thoughts
654
+
655
+ # Sonnet-3.5 CoT Example
656
+
657
+ Looking at the current state and goal state:
658
+
659
+ 1. The blue cube is correctly positioned in A2 in both states.
660
+ 2. The yellow pyramid needs to move from D1 to C1.
661
+ 3. The red cylinder is in D2 and needs to be in D3.
662
+
663
+ The yellow pyramid needs to be moved to make space for other movements. Looking at the past states we tried moving the yellow pyramid right and the red cylinder up. Now we need to move the yellow pyramid to its goal position. I can see that the yellow pyramid should move left to reach position C1 as shown in the goal state.
664
+
665
+ # Qwen2-72B CoT Example
666
+
667
+ After analyzing the images, it is clear that moving the red sphere down was an invalid action because it resulted in no change to the board state. The red sphere is still in the same position, and the green cylinder has not moved.
668
+
669
+ To progress, I need to find a valid action that will move one of the objects to match the goal state.
670
+
671
+ Upon further inspection, I notice that the red sphere needs to be moved to the top-right corner of the board to match the goal state. However, the current state shows the red sphere in the bottom-right corner, which means I need to move it up to get it closer to the goal position.
672
+
673
+ # GPT-4o CoT Example
674
+
675
+ To reach the goal state I need to compare the current active state with the goal state.
676
+
677
+ In the current active state the green sphere is on top blocking movements of objects below it. The yellow sphere is on the bottom row B1.
678
+
679
+ In the goal state the green sphere is two steps down in column B. The red cylinder is in the bottom left, two steps down. The yellow sphere is in position B2, two steps up.
680
+
681
+ Analyzing the images I see that the yellow spheres path is blocked by the green cylinder I'll first move another object to clear the way.
682
+
683
+ Moving the blue cube rightward will help clear the blocked pathway for the green cylinder.
684
+
685
+ # InternVL-2.5-78B CoT Example
686
+
687
+ to reach the goal state, let's analyze the current board and the objects that need to be moved: current state (active)
688
+
689
+ - the red cylinder is on tile b4.
690
+ - the yellow pyramid is on tile d4.
691
+ - the blue cylinder is on tile c3.
692
+ - the yellow cube is on tile d2.
693
+
694
+ # goal state (goal)
695
+
696
+ - the red cylinder needs to be on tile a4.
697
+ - the blue cylinder needs to be on tile b4.
698
+ - the yellow cube needs to be on tile d3.
699
+ - the yellow pyramid needs to be on tile c4.
700
+
701
+ # step-by-step plan
702
+
703
+ 1. move the red cylinder to a4:
704
+ - action: move the red cylinder left.
705
+ 2. move the yellow pyramid to c4:
706
+ - action: move the yellow pyramid down.
707
+ 3. move the yellow cube to d3:
708
+ - action: move the yellow cube down.
709
+ 4. move the blue cylinder to b4:
710
+ - action: move the blue cylinder up.
711
+
712
+ # first action
713
+
714
+ the first step is to move the red cylinder from tile b4 to tile a4.
715
+
716
+ # Gemini-2.0-flash CoT Example
717
+
718
+ The green sphere needs to move from B2 to C3. It is currently at B2, moving up would move the green sphere to B3 where it is blocked by the red pyramid.
719
+
720
+ I need to move the green sphere to the right, then up. Its already been moved right once, so my next action is to move the green sphere up.
721
+
722
+ # LLaVA-OneVision-72B CoT Example
723
+
724
+ The blue cube on D2 needs to move to the position of the red cylinder on D1.
725
+
726
+ Since the red cylinder cannot be displaced, the blue cube must be moved elsewhere.
727
+
728
+ Moving the blue cube left to C2 does not change the board state because another blue cube already occupies that square.
729
+
730
+ Moving it up to D1 would push the red cylinder out of bounds, which is not allowed.
731
+
732
+ Therefore, the blue cube on D2 should move left to D3.
733
+
734
+ # A.7 Algorithms
735
+
736
+ # A.7.1 Optimal Agent
737
+
738
+ Algorithm 1 A* Algorithm for Pathfinding
739
+ Require: start, goal
740
+ Ensure: Path from start to goal or failure
741
+ 1: openSet $\leftarrow$ {start}
742
+ 2: cameFrom $\leftarrow$ empty map
743
+ 3: gScore[start] $\leftarrow$ 0
744
+ 4: fScore[start] $\leftarrow$ heuristic(start, goal)
745
+ 5: while openSet not empty do
746
+ 6: current $\leftarrow$ node in openSet with lowest fScore
747
+ 7: if current = goal then
748
+ 8: return ReconstructPath(cameFrom, current)
749
+ 9: end if
750
+ 10: Remove current from openSet
751
+ 11: for each neighbor of current do
752
+ 12: tentativeGScore $\leftarrow$ gScore[current] + d(current, neighbor)
753
+ 13: if tentativeGScore < gScore[neighbor] or neighbor not in gScore then
754
+ 14: cameFrom[neighbor] $\leftarrow$ current
755
+ 15: gScore[neighbor] $\leftarrow$ tentativeGScore
756
+ 16: fScore[neighbor] $\leftarrow$ gScore[neighbor] + heuristic(neighbor, goal)
757
+ 17: if neighbor not in openSet then
758
+ 18: Add neighbor to openSet
759
+ 19: end if
760
+ 20: end if
761
+ 21: end for
762
+ 22: end while
763
+ 23: return failure
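For reference, a compact Python rendering of Algorithm 1; `neighbors_fn` and `heuristic` are placeholders for the benchmark's board expansion and distance estimate, which are not specified in this appendix.

```python
import heapq
import itertools

def a_star(start, goal, neighbors_fn, heuristic):
    """A* search returning a start-to-goal path, or None on failure (mirrors Algorithm 1)."""
    counter = itertools.count()        # tie-breaker so heap entries never compare states
    open_set = [(heuristic(start, goal), next(counter), start)]
    came_from = {}
    g_score = {start: 0}

    while open_set:
        _, _, current = heapq.heappop(open_set)
        if current == goal:
            # Reconstruct the path by walking the predecessor map backwards.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for neighbor, step_cost in neighbors_fn(current):
            tentative = g_score[current] + step_cost
            if neighbor not in g_score or tentative < g_score[neighbor]:
                came_from[neighbor] = current
                g_score[neighbor] = tentative
                f = tentative + heuristic(neighbor, goal)
                heapq.heappush(open_set, (f, next(counter), neighbor))
    return None                        # failure
```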
764
+
765
+ # A.7.2 Random Agent
766
+
767
+ Algorithm 2 Generate Random Valid Path for Sliding Tile Puzzle
768
+ Require: $n$ (board size), initial_state, max_steps
769
+ Ensure: Path from initial to final state
770
+ 1: path $\leftarrow$ [initial_state]
771
+ 2: current_state $\leftarrow$ initial_state
772
+ 3: for step = 1 to max_steps do
773
+ 4: neighbors $\leftarrow$ get_neighbors(current_state, n)
774
+ 5: current_state $\leftarrow$ random choice from neighbors
775
+ 6: Append current_state to path
776
+ 7: end for
777
+ return path
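A matching Python sketch of Algorithm 2; `get_neighbors` is assumed to enumerate the legal successor board states and is not defined here.

```python
import random

def random_valid_path(initial_state, max_steps, get_neighbors):
    """Random-agent baseline: repeatedly take a uniformly random valid move (Algorithm 2)."""
    path = [initial_state]
    current_state = initial_state
    for _ in range(max_steps):
        neighbors = get_neighbors(current_state)
        if not neighbors:              # no legal move available from this state
            break
        current_state = random.choice(neighbors)
        path.append(current_state)
    return path
```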
ivisparaninteractivevisualspatialreasoningbenchmarkforvlms/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:68b14138e78390881df24b93489ab2a05cac34ad51ce93f48b971b961f946bf8
3
+ size 1372631
ivisparaninteractivevisualspatialreasoningbenchmarkforvlms/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b3d6dc0892bc57f9b5e23940dbb8e9b7e4340f1b5ec1100f0f8730460944aca3
3
+ size 733718
mmwatdetectingotherinitiatedrepairrequestsindialogue/3260e39b-0e66-41cc-aa16-e84b99650b99_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bf1d8b52624a0800edee17b89176857f97e5027bd4ed9e4ced6eaf0ef3863d53
3
+ size 85948
mmwatdetectingotherinitiatedrepairrequestsindialogue/3260e39b-0e66-41cc-aa16-e84b99650b99_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:68d2823e606e022e3b116eb10ba1b358af8b64563f5832c1fdffdee5ad702263
3
+ size 103100
mmwatdetectingotherinitiatedrepairrequestsindialogue/3260e39b-0e66-41cc-aa16-e84b99650b99_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:903ff4ccb6a10a10ef708aadc8c078fd9f5ae1f7c3a087ec9d13db25ec0ada28
3
+ size 1023246
mmwatdetectingotherinitiatedrepairrequestsindialogue/full.md ADDED
@@ -0,0 +1,314 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # "Mm, Wat?" Detecting Other-initiated Repair Requests in Dialogue
2
+
3
+ Anh Ngo $^{1,5}$ , Nicolas Rollet $^{1,2}$ , Catherine Pelachaud $^{4}$ , Chloe Clavel $^{1,3}$
4
+
5
+ $^{1}$ ALMAnaCH, INRIA Paris, $^{2}$ Télécom Paris, SES, Institut Polytechnique de Paris, I3-CNRS, $^{3}$ Télécom Paris, LTCI, Institut Polytechnique de Paris, $^{4}$ CNRS, ISIR, Sorbonne University, $^{5}$ ISIR, Sorbonne University
6
+
7
+ {anh.ngo-ha,nicolas.rollet,chloe.clavel}@inria.fr,catherine.pelachaud@upmc.fr
8
+
9
+ # Abstract
10
+
11
+ Maintaining mutual understanding is a key component in human-human conversation to avoid conversation breakdowns, in which repair, particularly Other-Initiated Repair (OIR, when one speaker signals trouble and prompts the other to resolve), plays a vital role. However, Conversational Agents (CAs) still fail to recognize user repair initiation, leading to breakdowns or disengagement. This work proposes a multimodal model to automatically detect repair initiation in Dutch dialogues by integrating linguistic and prosodic features grounded in Conversation Analysis. The results show that prosodic cues complement linguistic features and significantly improve the results of pretrained text and audio embeddings, offering insights into how different features interact. Future directions include incorporating visual cues, exploring multilingual and cross-context corpora to assess the robustness and generalizability.
12
+
13
+ # 1 Introduction
14
+
15
+ Conversational agents (CAs), software systems that interact with users using natural language in written or spoken form, are increasingly being used in multiple domains such as commerce, healthcare, and education (Allouch et al., 2021). While maintaining smooth communication is crucial in these settings, current state-of-the-art (SOTA) CAs still struggle to handle conversational breakdowns. Unlike humans, who rely on conversational repair to resolve issues like mishearing or misunderstanding (Schegloff et al., 1977; Schegloff, 2000), CAs' repair capabilities remain limited and incomplete. Repair refers to the interactional effort by which participants suspend the ongoing talk to address potential trouble, which can be categorized by who initiates and who resolves it: the speaker of the trouble (self) or the co-participant (other) (Schegloff, 2000). This work focuses on Other-initiated Self-repair, in short, Other-initiated Repair (OIR),
16
+
17
+ where the talk of a speaker is treated as problematic by a co-participant via repair initiation, and the original speaker resolves it, as illustrated in Figure 1. Current CAs handle repairs in a limited fashion that mainly support self-initiated repair by the agent (e.g., the agent asks users to repeat what they said) (Li et al., 2020; Cuadra et al., 2021; Ashktorab et al., 2019) or rely on user self-correction when users realize troubles and clarify their own intent (e.g., saying "no, I mean...") (Balaraman et al., 2023). However, CAs struggle to recognize when users signal trouble with the agent's utterances (other-initiated) and fail to provide appropriate repair (self-repaired), while effective communication requires bidirectional repair capabilities (Moore et al., 2024). Supporting this, Gehle et al. (2014) found that robots failing to resolve communication issues quickly caused user disengagement, while van Arkel et al. (2020) showed that basic OIR mechanisms improve communication success and reduce computational and interaction costs compared to relying on pragmatic reasoning.
18
+
19
+ ![](images/26dc8b34ccc6782c94fa8048b9b81394b212248d3646801fe1c9fc0576d93478.jpg)
20
+ Figure 1: Other-initiated Repair (OIR) sequence example from Rasenberg et al. (2022), English translated: repair initiation (green) signals trouble of ambiguous object reference disc with candidate understanding horizontally, confirmed by repair solution yes horizontally.
21
+
22
+ Modeling OIR strategies on CAs that recognize user-initiated repair first requires robust automatic
23
+
24
+ repair initiation detection in human-human interaction. Previous work has established foundations for text-based approaches, training with English corpora, and relying on lexical cues (Höhn, 2017; Purver et al., 2018; Alloatti et al., 2024). However, prosodic cues tend to be more cross-linguistically stable than surface forms (Dingemanse and Enfield, 2015; Benjamin, 2013; Walker and Benjamin, 2017), and can provide valuable insight into the pragmatic functions of expressions like the interjection "huh". Building upon text-based methods, this work focuses on spoken dialogue interaction, where prosodic cues provide additional signals for repair initiation detection that may be missed by text-only models trained on transcriptions. Finally, understanding the OIR sequence also requires examining the local sequential environment of the surrounding turns, which we term "dialogue micro context", based on Schegloff (1987)'s work on local interactional organization.
25
+
26
+ These gaps motivate our main research question: What are the verbal and prosodic indicators of repair initiation in OIR sequences and how can we model them? To address this, we analyze OIR sequences in a Dutch task-oriented corpus, focusing on text and audio patterns where one speaker initiates repair. Drawing on Conversation Analysis literature, we introduce feature sets and a computational model to detect such requests. Our contributions are twofold: (1) a novel multimodal model for detecting repair initiations in OIR sequences that integrates linguistic and prosodic features extracted automatically based on the literature, advancing beyond text- or audio-only approaches; (2) insights into how linguistic and prosodic features interact and contribute to detection performance, grounded in Conversation Analysis, and into what causes model misclassifications. The remainder of this paper is structured as follows: Section 2 reviews SOTA computational models for OIR detection and related dialogue understanding tasks. Section 3 presents the OIR coding schema and typology used, and Section 4 details our approach, including linguistic and prosodic feature design. Section 5 presents our experiment details and results, followed by error analysis in Section 6.
27
+
28
+ # 2 Related Work
29
+
30
+ An early approach to automatic OIR detection was proposed by Hohn (2017), with a pattern-based
31
+
32
+ chatbot handling user-initiated repair in text chats between native and non-native German speakers. Purver et al. (2018) extended this by training a supervised classifier using turn-level features in English, including lexical, syntactic, and semantic parallelism between turns. More recently, Alloatti et al. (2024) introduced a hierarchical tag-based system for annotating repair strategies in Italian task-oriented dialogue, distinguishing between utterance-specific and context-dependent functions. Closely related, Garí Soler et al. (2025)'s recent work introduced and investigated the task of automatically detecting word meaning negotiation indicators, where speakers signal a need to clarify or challenge word meanings, a phenomenon that can be seen as a specific form of repair initiation.
33
+
34
+ Although direct research on OIR detection is still limited, advances in related dialogue understanding tasks provide promising foundations for our work. Miah et al. (2024) combined pretrained audio (Wav2Vec2) and text (RoBERTa) embeddings to detect dialogue breakdowns in healthcare calls. Similarly, Huang et al. (2023) used BERT, Wav2Vec2.0, and Faster R-CNN for intent classification, introducing multimodal fusion with attention-based gating to balance modality contributions and reduce noise. Saha et al. (2020) proposed a multimodal, multitask network jointly modeling dialogue acts and emotions using attention mechanisms. More recently, high-performing but more opaque and resource-intensive approaches have emerged, such as Mohapatra et al. (2024) showed that larger LLMs outperform smaller ones on tasks like repair and anaphora resolution, albeit with higher computational cost and latency.
35
+
36
+ Despite robust performance, recent largest models remain difficult to interpret due to their black-box nature and multimodal fusion complexity (Jain et al., 2024). To address this gap, we propose a computational model for repair initiation detection in Dutch spoken dialogue that fuses pretrained text and audio embeddings with linguistic and prosodic features grounded in Conversation Analysis. The model also integrates a multihead attention mechanism to weigh and capture nonlinear relationships across modalities, allowing our model to keep the strengths of multimodal deep learning while offering insight from linguistic and prosodic features to understand their interaction and impact towards model's decision.
37
+
38
+ ![](images/9504aee84d225189e29c38c1354dd2131c14ea53078147c91fe19e0206a927f9.jpg)
39
+ Figure 2: OIR sequence organization between 2 speakers A (green) and B (red): (a) Minimal; (b) Non-minimal
40
+
41
+ ![](images/e300d5cfb9dcb0844e70aaec78bc93abe4c7b95eb9ff0a278999584e497c52af.jpg)
42
+
43
+ # 3 OIR Coding Schema and Typology
44
+
45
+ We follow Dingemanse and Enfield (2015)'s coding schema, which structures OIR sequences into three components: trouble source, repair initiation, and repair solution segments, with repair initiation categorized as open request (the least specific, not giving clues of trouble), restricted request (implied trouble source location), or restricted offer (the most specific, proposing a candidate understanding). Throughout this work, repair initiation refers specifically to this component within OIR sequences. We use the corpus and the OIR sequences annotation from Rasenberg et al. (2022), where dialogues were manually transcribed and segmented into Turn Construction Units (TCUs), the smallest meaningful elements of speech that can potentially complete a speaker turn. They align OIR component boundaries with these pre-existing TCU boundaries. Following the conversational analysis practice, such as in (Mondada, 2018), we adopt the "segment" as our unit of analysis, defined as: stretches of talk corresponding to annotated OIR components (e.g., repair initiation) that may span one or more TCUs within larger speaker turns (illustrated in Figure 2). This allows us to target only the stretch of talk relevant to the OIR component, avoiding the overinclusiveness of full turns. Figure 2 illustrates two organizational scenarios of OIR sequences described in Dingemanse and Enfield (2015), including: minimal (repair initiation produced immediately after the turn containing the trouble source) and non-minimal (repair initiation delayed by a few turns).
46
+
47
+ # 4 Proposed Approach
48
+
49
+ # 4.1 Overview
50
+
51
+ Task Formulation. We formulate the repair initiation detection task as a binary classification problem.
52
+
53
+ Given a segment $(x_{i})$ , corresponding to one or several TCUs within a speaker turn, the task is to predict whether it is an OIR repair initiation or a regular dialogue (RD) segment (i.e., not belonging to an OIR sequence). In this initial study, we limit the scope to detecting repair initiations only, without classifying other OIR components such as trouble sources or repair solutions. This simplification allows us to establish a baseline for the most critical component in the OIR sequence, the moment when repair is initiated by another speaker.
54
+
55
+ Architecture Overview. Figure 3 shows our proposed multimodal approach for repair initiation detection. We incorporate the handcrafted linguistic and prosodic features, automatically computed based on literature reviews, with embeddings from pretrained models (RobBERT for text, Whisper for audio). For a given segment $(x_{i})$ , we first extract both handcrafted features and pretrained embeddings from text and audio modalities. All features are then projected to a shared dimensionality to ensure consistency across modalities. To capture the complex interactions between text and audio embeddings with handcrafted features, a multihead attention mechanism was employed to weigh and capture nonlinear relationships. Finally, the whole representation is obtained by concatenating the text embedding and the fused representation from multihead attention.
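The sketch below illustrates this fusion scheme in PyTorch; the feature dimensionalities, the mean pooling over the attended modalities, and the layer sizes are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class RepairInitiationDetector(nn.Module):
    """Sketch of the multimodal fusion described in Section 4.1 (dimensions are illustrative)."""

    def __init__(self, text_dim=768, audio_dim=512, ling_dim=40, pros_dim=60,
                 shared_dim=256, n_heads=4):
        super().__init__()
        # Project each modality / feature set to a shared dimensionality.
        self.proj = nn.ModuleDict({
            "text": nn.Linear(text_dim, shared_dim),
            "audio": nn.Linear(audio_dim, shared_dim),
            "ling": nn.Linear(ling_dim, shared_dim),
            "pros": nn.Linear(pros_dim, shared_dim),
        })
        # Multihead attention captures nonlinear cross-modality interactions.
        self.fusion = nn.MultiheadAttention(shared_dim, n_heads, batch_first=True)
        # Final classifier over [text embedding ; fused representation].
        self.classifier = nn.Linear(shared_dim * 2, 2)

    def forward(self, text_emb, audio_emb, ling_feats, pros_feats):
        # Stack the projected modalities as a length-4 "sequence" for self-attention.
        tokens = torch.stack([
            self.proj["text"](text_emb),
            self.proj["audio"](audio_emb),
            self.proj["ling"](ling_feats),
            self.proj["pros"](pros_feats),
        ], dim=1)                                        # (batch, 4, shared_dim)
        fused, _ = self.fusion(tokens, tokens, tokens)   # attention over modalities
        fused = fused.mean(dim=1)                        # (batch, shared_dim)
        combined = torch.cat([self.proj["text"](text_emb), fused], dim=-1)
        return self.classifier(combined)
```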
56
+
57
+ # 4.2 Pretrained Models
58
+
59
+ Language model. Our proposed approach utilizes BERT (Devlin et al., 2019), a transformer-based language model, to obtain text embedding of the current given segment. As our corpus is in Dutch, we use the pretrained RobBERT (Delobelle et al., 2020) model, which is based on the BERT architecture, pretrained with a Dutch tokenizer, and
60
+
61
+ ![](images/c0b6c61e77fed0df391c03cfac53533c1ca9ee6f492889cc527a1ab0929fce0f.jpg)
62
+ Figure 3: Multimodal architecture for repair initiation detection
63
+
64
+ 39 GB of training data. We use the latest release of the RobBERT-v2-base model, which was pretrained on the Dutch OSCAR 2023 corpus and which outperforms other BERT-based language models for several different Dutch language tasks.
65
+
66
+ Audio model. For audio representations, we utilize Whisper (Radford et al., 2023), an encoder-decoder transformer-based model trained on 680,000 hours of multilingual and multitask speech data, to extract audio embeddings from our dialogue segments. The Whisper model stands out for its robustness in handling diverse and complex linguistic structures, a feature that is crucial when dealing with Dutch, a language known for its intricate syntax. Moreover, Whisper was trained on large datasets that include Dutch and has demonstrated good zero-shot performance, making it well suited to serve as a naive baseline for a task with a small corpus like ours.
67
+
68
+ # 4.3 Dialogue Micro Context
69
+
70
+ Schegloff (1987) demonstrated that the OIR sequence is systematically associated with multiple organizational aspects of conversation, and understanding an OIR repair initiation requires examining the local sequential environment, which he terms the micro context, that we adopt in this work. Therefore, for each given target segment $x_{i}$ , to capture the micro context, we iteratively concatenate the previous $(x_{i - j})$ and following $(x_{i + j})$ segment within a window of size $(j)$ , using special separator token of transformers (e.g. [SEP] for BERT-based models) until reaching the maximum token limit (excluding [CLS] and [EOS]), inspired by similar ideas in (Wu et al., 2020). If the sequence exceeds the limit, we truncate the most recently added segments. The final sequence is enclosed with [CLS] and [EOS], as shown in Figure 9 (Appendix D).
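A simplified sketch of this context-building step, assuming a HuggingFace-style tokenizer that exposes `sep_token`; the exact alternation and truncation bookkeeping in the paper may differ.

```python
def build_micro_context(segments, i, tokenizer, window=2, max_tokens=512):
    """Grow the context around segment i, closest neighbours first, within a token budget."""
    parts = [segments[i]]
    for j in range(1, window + 1):
        for idx in (i - j, i + j):                      # previous then following segment
            if 0 <= idx < len(segments):
                candidate = [segments[idx]] + parts if idx < i else parts + [segments[idx]]
                text = f" {tokenizer.sep_token} ".join(candidate)
                # Keep 2 tokens in reserve for the [CLS] and [EOS] added around the sequence.
                if len(tokenizer.tokenize(text)) <= max_tokens - 2:
                    parts = candidate
                else:
                    # Drop the most recently added segment and stop growing the context.
                    return f" {tokenizer.sep_token} ".join(parts)
    return f" {tokenizer.sep_token} ".join(parts)
```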
71
+
72
+ # 4.4 Linguistic Feature Extraction
73
+
74
+ Figure 4(a) outlines our linguistic feature set for the representation of the target segment, capturing local properties such as part-of-speech (POS) tagging patterns, question formats, and transcribed nonverbal actions (target segment features), as well as cross-segment features that quantify repetition and coreference across turns to reflect backward and forward relations around the repair initiation (capturing the micro context). The detailed description is in Appendix E.
75
+
76
+ # 4.4.1 Target Segment Features
77
+
78
+ We automatically extracted the linguistic features proposed by Ngo et al. (2024) at the intra-segment level to capture grammatical and pragmatic patterns related to the repair initiation. For instance, Ngo et al. (2024) show that restricted requests often exhibit a POS tag sequence pattern of interrogative pronouns followed by verbs, while OIR open requests and regular dialogue segments differ in the key lemmas used for the same tag: the modal auxiliary verb kunnen ("can") vs. the primary auxiliary verb zijn ("to be"). We also include question mark usage, derived from the original transcription, where an utterance is marked with a question mark if the annotator detected question prosody. It implicitly reflects prosodic cues as interpreted by the human annotators, which are relevant to repair initiation, as described in Schegloff (2000) regarding interrogative and polar question formats. A complete list of features is provided in Appendix E.
79
+
80
+ # 4.4.2 Cross-Segment Features
81
+
82
+ Grounded on the literature (Schegloff, 2000; Ngo et al., 2024), we define inter-segment features that capture the sequential dynamics of the repair initiation, including repetitions and the use of coreferences referring to entities in prior turns containing
83
+
84
+ ![](images/607cda1e85146a0f32f705cb2909232db2addfbf56284bfe08c8b36fc293a6d5.jpg)
85
+ Figure 4: Handcrafted linguistic and prosodic features design
86
+
87
+ the trouble source segment. We also compute self and other-repetition in the subsequent turn containing the repair solution segment, to capture how the trouble source speaker responds. These features reflect the global dynamics of OIR sequences.
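As one plausible operationalisation of such a cross-segment cue (not necessarily the paper's exact definition), a simple token-overlap repetition ratio can be computed as follows.

```python
def repetition_ratio(target_segment, source_segment):
    """Share of target-segment tokens that repeat tokens from the trouble-source segment."""
    target = target_segment.lower().split()
    source = set(source_segment.lower().split())
    if not target:
        return 0.0
    return sum(tok in source for tok in target) / len(target)
```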
88
+
89
+ # 4.5 Prosodic Features Extraction
90
+
91
+ Prosody plays a crucial role in signaling repair initiation. Previous studies in Conversation Analysis show that pitch, loudness, and contour shape can indicate whether repair initiation is perceived as "normal" or expresses "astonishment"(Selting, 1996), and that Dutch question types differ in pitch height, final rises, and F0 register (Haan et al., 1997). Building upon these characteristics, we design a prosodic feature set that includes both local features within the target segment, such as pitch, intensity, pauses, duration, and word-level prosody, and global features across segments of the OIR sequence, such as latency between OIR sequence segments, pitch slope transitions at boundaries, and comparison to speaker-specific prosodic baselines. The features are detailed in Figure 4(b) and in the Appendix F.
92
+
93
+ # 4.5.1 Target Segment Features
94
+
95
+ We use Praat (Boersma, 2000) to extract prosodic features at the segment level, including: pitch features (e.g., min, max, mean, standard deviation, range, number of peaks) which are computed from voiced frames after smoothing and outlier removal, with pitch floor/ceiling set between $60 - 500\mathrm{Hz}$ and adapted to each speaker range (van Bezooijen, 1995; Theelen, 2017; Verhoeven and Connell, 2024); first (mean and variability of pitch slope change) and second derivatives (pitch acceleration) of pitch contour, capturing pitch dynamics. Additional features are intensity (e.g., min, max, mean, range, standard deviation), and voice quality
96
+
97
+ measures (jitter, shimmer, and harmonics-to-noise ratio). We also model pause-related features by detecting silent pauses over $200~\mathrm{ms}$ and categorizing them by duration and position in the utterance, reflecting their conversational function associated with repair possibilities (van Donzel and Beinum, 1996; Hoey, 2018). Inspired by findings about prosody of other-repetition in OIR sequences (Dingemanse et al., 2015; Walker and Benjamin, 2017), we extract pitch and intensity features for repeated words from the trouble source segment, and for the specific repair marker "wat" (what/which/any), as indicators of repair initiation type and speaker perspective (Huhtamaki, 2015).
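The paper relies on Praat for extraction; the sketch below shows a small subset of these segment-level measures via the parselmouth wrapper, omitting smoothing, outlier removal, pause detection, and voice-quality features for brevity.

```python
import numpy as np
import parselmouth  # Python wrapper around Praat

def segment_prosody(wav_path, floor=60, ceiling=500):
    """Extract a few of the segment-level pitch and intensity features."""
    sound = parselmouth.Sound(wav_path)
    pitch = sound.to_pitch(pitch_floor=floor, pitch_ceiling=ceiling)
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]                                   # keep voiced frames only
    intensity = sound.to_intensity().values.flatten()
    return {
        "f0_mean": float(np.mean(f0)) if f0.size else 0.0,
        "f0_min": float(np.min(f0)) if f0.size else 0.0,
        "f0_max": float(np.max(f0)) if f0.size else 0.0,
        "f0_range": float(np.ptp(f0)) if f0.size else 0.0,
        "f0_std": float(np.std(f0)) if f0.size else 0.0,
        "intensity_mean": float(np.mean(intensity)),
        "intensity_max": float(np.max(intensity)),
    }
```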
98
+
99
+ # 4.5.2 Cross-Segment Features
100
+
101
+ To model the speaker-specific prosodic variation (van Bezooijen, 1995; Theelen, 2017; Verhoeven and Connell, 2024), we normalize pitch and intensity using z-scores, relative percentage change, and position within the speakers' range. These features capture how far the current segment deviates from the speaker's typical behaviour across previous turns and the normalized range position of the current segment within the speaker's baseline. Inspired by work on prosodic entrainment (Levitan and Hirschberg, 2011), we also compute pitch and intensity slope transitions across segment boundaries (e.g., TS $\rightarrow$ OIR, OIR $\rightarrow$ RS), both within and across speakers, to assess prosodic alignment. We normalized slopes to semitones per second for consistency across speakers.
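Two of the normalisations mentioned above, sketched in Python; the 100 Hz reference for the semitone conversion is an illustrative choice rather than a value taken from the paper.

```python
import numpy as np

def hz_to_semitones(f0_hz, reference_hz=100.0):
    """Convert F0 values in Hz to semitones relative to a reference frequency."""
    return 12.0 * np.log2(np.asarray(f0_hz) / reference_hz)

def speaker_zscore(value, speaker_values):
    """Deviation of the current segment from the speaker's own prosodic baseline."""
    mu, sigma = np.mean(speaker_values), np.std(speaker_values)
    return (value - mu) / sigma if sigma > 0 else 0.0
```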
102
+
103
+ # 5 Experiments & Results
104
+
105
+ To answer the main research question mentioned in Section 1, we design the experiments to answer the following research sub-questions: i) RQ1: To what extent do audio-based features complement text-based features in identifying repair initiation?
106
+
107
+ <table><tr><td>Model</td><td>Modal &amp; Features</td><td>Precision</td><td>Recall</td><td>F1-score</td></tr><tr><td>TextEmb</td><td>U &amp; T</td><td>72.0 ± 4.0</td><td>87.6 ± 7.5</td><td>78.9 ± 4.7</td></tr><tr><td>AudioEmb</td><td>U &amp; A</td><td>72.6 ± 9.7</td><td>76.3 ± 13.1</td><td>70.6 ± 8.1</td></tr><tr><td>MultiEmb</td><td>M &amp; T+A</td><td>79.1 ± 5.4</td><td>82.2 ± 3.8</td><td>82.1 ± 0.9</td></tr><tr><td>TextLing</td><td>U &amp; L</td><td>82.2 ± 3.6</td><td>80.4 ± 6.1</td><td>80.4 ± 3.8</td></tr><tr><td>AudioPros</td><td>U &amp; P</td><td>81.7 ± 4.2</td><td>77.4 ± 5.4</td><td>77.3 ± 2.7</td></tr><tr><td>MultiLingPros</td><td>M &amp; L+P</td><td>81.7 ± 7.6</td><td>82.2 ± 1.5</td><td>81.8 ± 3.4</td></tr><tr><td>MultiOurs</td><td>M &amp; T+A+L+P</td><td>93.2 ± 2.8</td><td>96.1 ± 2.6</td><td>94.6 ± 2.3</td></tr></table>
108
+
109
+ U: Unimodal, M: Multimodal, T: Text, A: Audio, P: Prosodic features, L: Linguistic features
110
+ Table 1: Overall results across modalities for repair initiation detection. The table groups models by research question: RQ1 compares unimodal vs. multimodal combinations of audio and text; RQ2 compares handcrafted features with pretrained embeddings.
111
+
112
+ ii) RQ2: Do our proposed linguistic and prosodic features (see Figures 4(a) and 4(b)) perform better than pretrained embeddings? iii) RQ3: Which prosodic and linguistic features contribute the most to repair initiation detection? iv) RQ4: How does the involvement of dialogue micro context affect detection performance?
113
+
114
+ # 5.1 Implementation Details
115
+
116
+ Dataset. Based on (Colman and Healey, 2011)'s finding that repair occurs more frequently in task-oriented dialogues, we selected a Dutch multimodal task-oriented corpus (Rasenberg et al., 2022; Eijk et al., 2022), containing 19 dyads collaborating on referential communication tasks in a standing face-to-face setting. For each round, participants alternated roles to describe (Director) or identify (Matcher) a geometric object (called "Fribbles") displayed on screens, in which the unconstrained design encouraged natural modality use and OIR sequences. Rasenberg et al. (2022) annotated OIR sequences using Dingemanse and Enfield, 2015's schema, resulting in 10 open requests, 31 restricted requests, and 252 restricted offers. While we acknowledge that OIR sequences are rarer in natural dialogue, our goal in this paper is to study detection performance with sufficient examples of both classes. Therefore, we balanced the dataset with 306 randomly selected regular dialogue segments, stratified across all dyads, resulting in 712 samples overall. The data were split 70:15:15 for training, validation, and testing. Limitations regarding the generalizability of the artificial balancing are discussed in Section 7. Examples of Fribbles objects and repair initiation types are provided in the Appendix A and B.
117
+
118
+ Training Details. We fine-tuned our models using 10-fold cross-validation, in which the optimal learning rate was 2e-5. We employed AdamW optimizer with weight decay of 0.01 and a learning rate scheduler with $10\%$ warm-up steps. Training ran for up to 20 epochs with 3-epoch early stopping patience, and batch size 16. The source code is publicly available<sup>1</sup>.
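A sketch of the optimiser and warm-up schedule described above, assuming the HuggingFace linear-decay scheduler; the paper specifies AdamW, weight decay 0.01, and 10% warm-up steps but not the decay shape.

```python
import torch
from transformers import get_linear_schedule_with_warmup

def make_optimizer(model, num_training_steps, lr=2e-5, weight_decay=0.01, warmup_ratio=0.1):
    """AdamW with weight decay and a warm-up scheduler, as in the training setup above."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(warmup_ratio * num_training_steps),
        num_training_steps=num_training_steps,
    )
    return optimizer, scheduler
```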
119
+
120
+ Evaluation Metrics. We evaluated model performance using binary classification metrics including precision, recall, and macro F1-score.
121
+
122
+ # 5.2 Experiment Scenarios & Results Analysis
123
+
124
+ RQ1: Audio vs. Text Complementarity. To address RQ1, we compare the performance of unimodal against multimodal models, including: i) Single $\mathbf{Text}_{\mathbf{Emb}}$ or $\mathbf{Audio}_{\mathbf{Emb}}$ vs. MultiEmb; ii) Single TextLing or $\mathbf{Audio}_{\mathbf{Pros}}$ vs. MultiLingPros. We examine whether integrating the audio-based features, either by pretrained embeddings or by using handcrafted prosodic features, will improve the performance of the text-based models. The multimodal models include MultiEmb, which fuses pretrained text and audio embeddings, and MultiLingPros, which combines handcrafted linguistic and prosodic features, using cross-attention fusion as illustrated in Figure 3.
125
+
126
+ From Table 1, we observe that multimodal models consistently outperform unimodal ones across all metrics. For both pretrained embeddings and handcrafted features, text-based models outperform audio-based ones individually. However, incorporating audio improves performance in both settings. Specifically, in the pretrained setting,
127
+
128
+ the multimodal model MultiEmb achieves an F1-score of 82.1, improving over TextEmb by 3.2 percentage points (pp) and over AudioEmb by 11.5 pp. Similarly, in the handcrafted feature setting, combining linguistic and prosodic features MultiLingPros yields an F1 of 81.8, outperforming TextLing by 1.4 pp and AudioPros by 4.5 pp. Interestingly, the unimodal handcrafted models TextLing, AudioPros show higher precision than recall, whereas MultiLingPros shows slightly higher recall, suggesting a tendency to favor detection over omission. This is potentially beneficial in interactive systems where missing an repair initiation could be more disruptive than a false alarm. For embedding-based models, recall exceeds precision in all cases, but the multimodal model shows a notable gain in precision, indicating a better tradeoff between identifying true repair initiation and minimizing false positives.
129
+
130
+ RQ2: Handcrafted Features vs. Pretrained Embeddings. To address RQ2, we compare the performance of models using handcrafted features against the models using embeddings from pretrained models. We thus compare: i) Text representations: text embeddings $(\mathbf{Text}_{\mathbf{Emb}})$ vs. handcrafted linguistic features $(\mathbf{Text}_{\mathbf{Ling}})$ ; ii) Audio representations: audio embeddings $(\mathbf{Audio}_{\mathbf{Emb}})$ vs. handcrafted prosodic features $(\mathbf{Audio}_{\mathbf{Pros}})$ ; iii) Combined approaches: multimodal models using pretrained embeddings $(\mathbf{Multi}_{\mathbf{Emb}})$ vs. using handcrafted linguistic and prosodic features $(\mathbf{Multi}_{\mathbf{LingPro}s})$ and vs. our proposed approach leveraging both of them MultiOurs.
131
+
132
+ Table 1 shows that handcrafted feature models are comparable to embedding-based approaches. In unimodal settings, TextLing achieves higher precision (+10 pp) with comparable F1-score (+1.5 pp) to TextEmb, despite lower recall (-7.2 pp). Likewise, AudioPros outperforms AudioEmb across all metrics (precision +9.1 pp, recall +1.1 pp, F1-score +6.7 pp). In multimodal settings, MultiEmb and MultiLingPros perform nearly identically (F1-score difference of 0.3 pp). Overall, we observe a general trend emerges: embedding-based approaches tend to achieve higher recall but lower precision, likely because they can learn more complex representation that captures more subtle patterns, whereas handcrafted feature models target specific repair initiation markers, such as question forms, repetition, and pause patterns, resulting in better balanced precision-recall trade-offs. The embedding
133
+
134
+ models may also overgeneralize in the case of our small, task-specific corpus.
135
+
136
+ RQ3: Handcrafted Feature Importance Analysis. Although the linguistic and prosodic features could not solely outperform pretrained text and audio embeddings, they are useful in interpreting the model's behaviours, especially to see if they are aligned with the Conversation Analysis findings. To answer RQ3, we used SHAP (SHapley Additive explanations) analysis to analyze the contribution and behaviours of linguistic and prosodic features towards the model's decision. Figure 5 illustrates the top 10 features by SHAP value, which measures how much each single feature pushed the model's prediction compared to the average prediction. The pausing behaviours (positions and durations), intensity measures (max, mean, and relative change), and harmonic-to-noise ratio (HNR) appear particularly important among prosodic features. For linguistic features, the grammatical structure linking to coreference used, some POS tags, and various word type ratios rank highly, which align well with systematic linguistic patterns, as demonstrated by Ngo et al. (2024). The most important features include the number of long and medium pauses, the relative position of the longest pause, and the verb-followed-by-coref structure, all scoring near 1.0 on the importance scale, which aligned with the works in (Hoey, 2018; Ngo et al., 2024) about pauses in repair initiation and its structure, respectively.
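For reference, a hedged sketch of how such SHAP attributions can be obtained for the handcrafted-feature classifier with the `shap` library; the explainer type and the inputs are illustrative, as the paper does not detail its exact SHAP setup.

```python
import shap

def explain_handcrafted_features(predict_proba_fn, background, test_features, feature_names):
    """Model-agnostic SHAP attribution over the handcrafted linguistic/prosodic features."""
    explainer = shap.KernelExplainer(predict_proba_fn, background)  # background = reference sample
    shap_values = explainer.shap_values(test_features)
    shap.summary_plot(shap_values, test_features, feature_names=feature_names)
    return shap_values
```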
137
+
138
+ ![](images/3a1676ba44e28ba6bd3b07dc8edaf0537da0dd84cf80ea9458b6b21919af7ee7.jpg)
139
+ Figure 5: The top 10 most important handcrafted features ranked by SHAP value. Appendix C provides the full list of the 20 most contributing features.
140
+
141
+ Figure 6 displays the synergy (Ittner et al., 2021) between linguistic and prosodic features, computed based on the SHAP interaction values. It reflects how complementary a pair of linguistic and prosodic features is in improving model performance, in which high synergy means that combining both features adds more value than what each
142
+
143
+ ![](images/1d8e232a261d3e265f571de444f18090187e46f15f57dd3b2e6466addf809650.jpg)
144
+ Synergy Between Linguistic and Prosodic Features
145
+ Figure 6: Handcrafted feature interaction analysis: Linguistic vs Prosodic
146
+
147
+ of them contributes individually. These features do not always need to co-vary, but their combination brings useful information for the model. Coordinating conjunction ratio (CCONJ ratio) shows the strongest synergy (0.26) with harmonics-to-noise ratio (HNR), while other speaker self-repetition ratio has strong synergy (0.23) with maximum intensity. This suggests that certain grammatical patterns work closely with specific voice qualities, particularly how conjunctions interact with voice clarity and how self-repetition correlates with voice intensity. The results indicate that conversation involves a complex interplay between what we say (linguistic elements) and how we say it (prosodic elements), which is aligned with the Conversation Analysis work.
148
+
149
+ RQ4: Dialogue Micro Context Analysis. To address RQ4, we experimented with 4 micro context configurations: (1) PastContext - the current input segment concatenated with the segments in the prior turns, plus the past-related cross-segment handcrafted features (Figure 4); (2) CurrentContext - no context concatenation, using only the current input segment features (Figure 4); (3) FutureContext - the current input segment concatenated with the segments in the subsequent turns, plus the future-related cross-segment handcrafted features (Figure 4); (4) MultiOurs - the full context scenario, where we concatenate the current input segment with both the prior and subsequent segments and use the full handcrafted feature set. For (1) and (4), we experimented with window_length values of 2 and max (the micro context is concatenated as far as possible until the maximum token limit is reached), based on results from corpus analysis; for (3), only max was used, as repair solutions typically occur within at most 2 turns in this corpus.
150
+
151
+ Table 2 highlights the impact of different
152
+
153
+ <table><tr><td>Context</td><td>Win. len</td><td>Precision</td><td>Recall</td><td>F1-score</td></tr><tr><td>(1) PastContext</td><td>2</td><td>86.0 ± 3.0</td><td>78.4 ± 5.4</td><td>82.0 ± 4.1</td></tr><tr><td>(1) PastContext</td><td>max</td><td>86.6 ± 5.2</td><td>81.0 ± 6.1</td><td>83.5 ± 4.3</td></tr><tr><td>(2) CurrentContext</td><td>-</td><td>84.6 ± 3.8</td><td>82.9 ± 6.0</td><td>83.6 ± 4.4</td></tr><tr><td>(3) FutureContext</td><td>max</td><td>84.00 ± 1.53</td><td>78.20 ± 5.78</td><td>80.18 ± 2.52</td></tr><tr><td>(4) MultiOurs</td><td>2</td><td>93.2 ± 2.8</td><td>96.1 ± 2.6</td><td>94.6 ± 2.3</td></tr><tr><td>(4) MultiOurs</td><td>max</td><td>87.7 ± 3.5</td><td>89.1 ± 5.3</td><td>88.3 ± 3.7</td></tr></table>
154
+
155
+ Table 2: Performance comparison across different micro context configurations
156
+
157
+ micro context configurations: incorporating surrounding segments from both prior and subsequent turns, combined with the whole handcrafted feature set, leads to the best overall performance, as also shown in Table 1. Notably, our full context setting with the smaller window_length=2 achieves the highest results across all metrics, while concatenating up to the maximum allowed token limit degrades performance, with a drop of approximately 6.3 pp in F1-score, 9 pp in precision, and 4.1 pp in recall. This suggests that while the surrounding context of the input segment is helpful, overly long concatenation may introduce noise and irrelevant information. In addition, integrating past or solely current segments yields moderate performance, with F1-scores ranging from approximately 80.2% to 83.6%, while future context integration results in the lowest scores, indicating that the upcoming dialogue can offer informative cues but is less relevant than the prior and current input segments, which aligns with the nature of OIR sequences.
158
+
159
+ # 6 Error Analysis
160
+
161
+ To better interpret model performance, we analyze the False Negative (FN) instances, which are repair initiations that were misclassified as regular dialogue, to identify whether there are common patterns in these instances that our models struggle to predict, illustrated in Table 3. We compare these FN instances across our proposed multimodal model with the unimodal baselines by extracting representative dialogue samples for each model from test set and identifying their common linguistic and prosodic characteristics.
162
+
163
+ Our proposed model shows the lowest FN rate ( $3.8\%$ ) on the test set, compared to $15\%$ and $24\%$ for TextLing and AudioPros, respectively. TextLing seems to struggle to detect samples with vague references, especially in restricted offers, even when OIR syntactic forms such as a question mark are present. AudioPros, in turn, tends to over-rely on pause structure and pitch contour even when important prosodic cues are present.
164
+
165
+ <table><tr><td>Model</td><td>%Error</td><td>Samples</td><td>Patterns</td><td>OIR Type</td></tr><tr><td rowspan="3">\(Text_{Ling}\)</td><td rowspan="3">15%</td><td rowspan="3">(or a) triangleyes uh yes on the right sideright? or ascending yesyes the one with the protrusion</td><td rowspan="3">Vague, elliptical referenceDisfluencies, vague interrogativeReferential expression, lacks direct marker</td><td>RORO</td></tr><tr><td>RO</td></tr><tr><td>RO</td></tr><tr><td rowspan="4">\(Audio_{Pros}\)</td><td rowspan="4">24%</td><td rowspan="4">with a sunshadeuh but the platform sits thatcuts theIs it vertical?ah and is his arm uh round butalso a bit with angles?but what did you say at the beginning?</td><td rowspan="4">Short declarative, flat prosodyFlat intonation, short pauses in beginningQuestion intonation, few short pausesHigh pitch, question intonation, pauses mid-turnRising intonation, wide pitch range</td><td>RORO</td></tr><tr><td>RO</td></tr><tr><td>RO</td></tr><tr><td>RR</td></tr><tr><td rowspan="3">\(Multi_{Ours}\)</td><td rowspan="3">3.8%</td><td rowspan="3">with a sunshadeoh who sorsorry again?</td><td rowspan="3">Short, declarative structureDeclarative, high but flat pitchClear OIR but subtle prosodic signal</td><td>RO</td></tr><tr><td>RO</td></tr><tr><td>OR</td></tr></table>
166
+
167
+ Table 3: Samples of False Negative (FN) instances from unimodal and multimodal models with qualitative patterns. OR: open request; RR: restricted request; RO: restricted offer. The Dutch samples are translated to English by DeepL.
168
+
169
+ Short declaratives with flat intonation were often misclassified, suggesting the impact of missing syntactic form information in this model. Finally, our proposed multimodal model failed mostly on short phrases with subtle prosodic signals that are not strongly marked as a repair initiation. Considering the errors across the 3 types of repair initiation, only AudioPros struggled with all three types; the other 2 models misclassified only restricted offer and open request instances. However, as this corpus is imbalanced across the 3 repair initiation types, with a majority of restricted offers, this imbalance is a likely contributing factor.
170
+
171
+ # 7 Conclusion & Future Works
172
+
173
+ This work presents a novel approach for detecting repair initiation in Other-Initiated Repair (OIR) sequences within human-human conversation. It leverages automatically extracted linguistic and prosodic features grounded in Conversation Analysis theories. Our results demonstrate that incorporating handcrafted features significantly enhances detection performance compared to using only pretrained embedding models. Additionally, audio modality complements textual modality, improving detection performance across both pretrained embeddings and handcrafted features. Handcrafted feature analysis revealed both individual impact and complementary contributions between modalities. Key prosodic indicators include pause-related features, intensity, and harmonic-to-noise ratio (HNR), while important linguistic features involve grammatical patterns, POS tags, and lemma ratios.
174
+
175
+ Synergy analysis demonstrates that features do not act independently; for example, coordinating conjunction usage shows strong synergy with HNR, and trouble source speaker self-repetition leads significantly to maximum intensity presence. These patterns highlight the nature of OIR sequences, in which how something is said modulates what is being said.
176
+
177
+ Our results also highlight the importance of dialogue micro context in repair initiation detection: models using both prior and subsequent segments outperform those relying only on the target segment, reflecting the interactional structure crucial for OIR interpretation. However, overusing context can add noise and degrade performance.
178
+
179
+ Finally, error analysis revealed that while the text-based model failed with vague references and disfluencies, the audio-based model was prone to misclassifying flat or subtle prosodic cues, which raised the need for a multimodal model. The proposed multimodal model mitigates these weaknesses, but it still struggles with short, minimally marked repair initiation that lacks both strong syntactic and prosodic cues. This work establishes foundations for conversational agents capable of detecting human repair initiation to avoid communication breakdowns.
180
+
181
+ Building on these insights, future work will explore the integration of visual features to more accurately model the embodied aspects of OIR sequences, as well as the development of multilingual and cross-context corpora to assess the robustness and generalizability of the detection approach.
182
+
183
+ # Limitations
184
+
185
+ Dataset Limitations and Generalizability. Due to the limited multimodal OIR-labeled corpora, our study utilized the only available multimodal OIR-labeled corpus, which is specific to Dutch language and referential object matching tasks. This specificity could limit the generalizability of our model across different OIR categories, languages, and conversation settings. Future works should test the model on more diverse datasets to validate its robustness and establish broader applicability.
186
+
187
+ Dataset Balancing and Class Distribution. In natural conversation, repair initiation instances are much less frequent than regular dialogue. To enable robust model training and evaluation, we balanced repair initiation and regular dialogue samples across dyads. However, this balancing approach may affect the model's performance in real-world settings where OIR sequences are rare, and therefore, the results should be interpreted with caution. Future work should evaluate the performance of models while maintaining the natural class distribution to assess practical applicability.
188
+
189
+ Adaptability in Real-time Processing. Despite the computational efficiency of our approach using handcrafted features compared to Large Language Models, several limitations remain for real-time adaptation. The feature extraction of some linguistic and prosodic features, such as coreference chains, requires additional computation with pretrained models, potentially introducing latency. Future work should explore real-time feature extraction pipelines and incremental processing architectures, while evaluating potential trade-offs between model complexity and real-time performance to make the system practical for CA systems.
190
+
191
+ # Acknowledgments
192
+
193
+ We thank the anonymous reviewers for their constructive feedback. Data were provided (in part) by the Radboud University, Nijmegen, The Netherlands. This work has been supported by the Paris Ile-de-France Région in the framework of DIM AI4IDF. It was also partially funded by the ANR-23-CE23-0033-01 SINNet project.
194
+
195
+ # References
196
+
197
+ Francesca Alloatti, Francesca Grasso, Roger Ferrod, Giovanni Siragusa, Luigi Di Caro, and Federica Cena.
198
+
199
+ 2024. A tag-based methodology for the detection of user repair strategies in task-oriented conversational agents. Computer Speech & Language, 86:101603.
200
+ Merav Allouch, A. Azaria, and Rina Azoulay-Schwartz. 2021. Conversational agents: Goals, technologies, vision and challenges. Sensors (Basel, Switzerland), 21.
201
+ Zahra Ashktorab, Mohit Jain, Q. Vera Liao, and Justin D. Weisz. 2019. Resilient chatbots: Repair strategy preferences for conversational breakdowns. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, page 1-12, New York, NY, USA. Association for Computing Machinery.
202
+ Vevake Balaraman, Arash Eshghi, Ioannis Konstas, and Ioannis Papaioannou. 2023. No that's not what I meant: Handling third position repair in conversational question answering. In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 562-571, Prague, Czechia. Association for Computational Linguistics.
203
+ Trevor Michael Benjamin. 2013. *Signaling trouble: on the linguistic design of other-initiation of repair in English conversation*. Ph.D. thesis. Relation: http://www.rug.nl/ Rights: University of Groningen.
204
+ Paul Boersma. 2000. A system for doing phonetics by computer. 5.
205
+ Marcus Colman and Patrick G. T. Healey. 2011. The distribution of repair in dialogue. Cognitive Science, 33.
206
+ Andrea Cuadra, Shuran Li, Hansol Lee, Jason Cho, and Wendy Ju. 2021. My bad! repairing intelligent voice assistant errors improves interaction. Proc. ACM Hum.-Comput. Interact., 5(CSCW1).
207
+ Pieter Delobelle, Thomas Winters, and Bettina Berendt. 2020. RobBERT: a Dutch RoBERTa-based Language Model. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3255-3265, Online. Association for Computational Linguistics.
208
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
209
+ Mark Dingemanse and N. J. Enfield. 2015. Other-initiated repair across languages: Towards a typology of conversational structures.
210
+ Mark Dingemanse, Sean G Roberts, Julija Baranova, Joe Blythe, Paul Drew, Simeon Floyd, Rosa S Gisladottir, Robin H Kendrick, Stephen C Levinson,
211
+
212
+ Elizabeth Manrique, and 1 others. 2015. Universal principles in the repair of communication problems. PloS one, 10(9):e0136100.
213
+ Lotte Eijk, Marlou Rasenberg, Flavia Arnese, Mark Blokpoel, Mark Dingemanse, Christian F. Doeller, Mirjam Ernestus, Judith Holler, Branka Milivojevic, Asli Özyurek, Wim Pouw, Iris van Rooij, Herbert Schriefers, Ivan Toni, James Trujillo, and Sara Bögels. 2022. The cabb dataset: A multimodal corpus of communicative interactions for behavioural and neural analyses. NeuroImage, 264.
214
+ Aina Gari Soler, Matthieu Labeau, and Chloé Clavel. 2025. Toward the automatic detection of word meaning negotiation indicators in conversation. In Findings of the Association for Computational Linguistics: EMNLP 2025. To appear.
215
+ Raphaela Gehle, Karola Pitsch, and Sebastian Benjamin Wrede. 2014. Signaling trouble in robot-to-group interaction.emerging visitor dynamics with a museum guide robot. Proceedings of the second international conference on Human-agent interaction.
216
+ Judith Haan, Vincent Van Heuven, Jos Pacilly, and R.L. Bezooijen. 1997. An anatomy of dutch question intonation. J. Coerts & H. de Hoop (eds.), Linguistics in the Netherlands 1997, 97 - 108 (1997), 14.
217
+ Elliott Hoey. 2018. How speakers continue with talk after a lapse in conversation. Research on Language and Social Interaction, 51.
218
+ Sviatlana Hohn. 2017. A data-driven model of explanations for a chatbot that helps to practice conversation in a foreign language. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 395-405, Saarbrücken, Germany. Association for Computational Linguistics.
219
+ Xuejian Huang, Tinghuai Ma, Li Jia, Yuanjian Zhang, Huan Rong, and Najla Alnabhan. 2023. An effective multimodal representation and fusion method for multimodal intent recognition. Neurocomputing, 548:126373.
220
+ Martina Huhtamaki. 2015. The interactional function of prosody in repair initiation: Pitch height and timing of va 'what' in helsinki swedish. Journal of Pragmatics, 90:48-66.
221
+ Jan Ittner, Lukasz Bolikowski, Konstantin Hemker, and Ricardo Kennedy. 2021. Feature synergy, redundancy, and independence in global model explanations using shap vector decomposition. *ArXiv*, abs/2107.12436.
222
+ D. Jain, Anil Rahate, Gargi Joshi, Rahee Walambe, and K. Kotecha. 2024. Employing co-learning to evaluate the explainability of multimodal sentiment analysis. IEEE Transactions on Computational Social Systems, 11:4673-4680.
223
+ Rivka Levitan and Julia Hirschberg. 2011. Measuring acoustic-prosodic entrainment with respect to multiple levels and dimensions. pages 3081-3084.
224
+
225
+ Toby Jia-Jun Li, Jingya Chen, Haijun Xia, Tom M. Mitchell, and Brad A. Myers. 2020. Multi-modal repairs of conversational breakdowns in task-oriented dialogs. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, UIST '20, page 1094-1107, New York, NY, USA. Association for Computing Machinery.
226
+ Md Messal Monem Miah, Ulie Schnaithmann, Arushi Raghuvanshi, and Youngseo Son. 2024. Multimodal contextual dialogue breakdown detection for conversational ai models. *ArXiv*, abs/2404.08156.
227
+ Biswesh Mohapatra, Manav Nitin Kapadnis, Laurent Romary, and Justine Cassell. 2024. Evaluating the effectiveness of large language models in establishing conversational grounding. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 9767-9781, Miami, Florida, USA. Association for Computational Linguistics.
228
+ Lorenza Mondada. 2018. Multiple temporalities of language and body in interaction: Challenges for transcribing multimodality. Research on Language and Social Interaction, 51(1):85-106.
229
+ Robert J. Moore, Sungeun An, and Olivia H. Marrese. 2024. Understanding is a two-way street: User-initiated repair on agent responses and hearing in conversational interfaces. Proc. ACM Hum.-Comput. Interact., 8(CSCW1).
230
+ Anh Ngo, Dirk Heylen, Nicolas Rollet, Catherine Pelachaud, and Chloé Clavel. 2024. Exploration of human repair initiation in task-oriented dialogue: A linguistic feature-based approach. In Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 603-609, Kyoto, Japan. Association for Computational Linguistics.
231
+ Matthew Purver, Julian Hough, and Christine Howes. 2018. Computational models of miscommunication phenomena. Topics in Cognitive Science, 10(2):425-451.
232
+ Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org.
233
+ Marlou Rasenberg, Wim Pouw, Asli Özyürek, and Mark Dingemanse. 2022. The multimodal nature of communicative efficiency in social interaction. *Scientific Reports*, 12.
234
+ Tulika Saha, Aditya Patra, S. Saha, and P. Bhattacharyya. 2020. Towards emotion-aided multi-modal dialogue act classification. pages 4361-4372.
235
+ Emanuel A. Schegloff. 1987. Between micro and macro: contexts and other connections. In Richard Munch Jeffrey C. Alexander, Bernhard Giesen and Neil J. Smelser, editors, The Micro-Macro Link, page 207-234. University of California Press, Berkeley.
236
+
237
+ Emanuel A. Schegloff. 2000. When 'others' initiate repair. Applied Linguistics, 21:205-243.
238
+ Emanuel A. Schegloff, Gail Jefferson, and Harvey Sacks. 1977. The preference for self-correction in the organization of repair in conversation. Language, 53:361.
239
+ Margret Selting. 1996. Prosody as an activity-type distinctive cue in conversation: the case of so-called 'astonished' questions in repair initiation, page 231-270. Studies in Interactional Sociolinguistics. Cambridge University Press.
240
+ Mathilde Theelen. 2017. Fundamental frequency differences including language effects. *Junctions: Graduate Journal of the Humanities*, 2:9.
241
+ Jacqueline van Arkel, Marieke Woensdregt, Mark Dingemanse, and Mark Blokpoel. 2020. A simple repair mechanism can alleviate computational demands of pragmatic reasoning: simulations and complexity analysis. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 177-194, Online. Association for Computational Linguistics.
242
+ Renee van Bezooijen. 1995. Sociocultural aspects of pitch differences between Japanese and Dutch women. Language and Speech, 38(3):253-265. PMID: 8816084.
243
+ Monique van Donzel and Florien Beinum. 1996. Pausing strategies in discourse in Dutch, pages 1029-1032, vol. 2.
244
+ Jo Verhoeven and Bruce Connell. 2024. Intrinsic vowel pitch in Hamont Dutch: Evidence for IF0 reduction in the lower pitch range. Journal of the International Phonetic Association, 54(1):108-125.
245
+ Traci Walker and Trevor Benjamin. 2017. Phonetic and sequential differences of other-repetitions in repair initiation. Research on Language and Social Interaction, 50(4):330-347.
246
+ Chien-Sheng Wu, Steven C.H. Hoi, Richard Socher, and Caiming Xiong. 2020. TOD-BERT: Pre-trained natural language understanding for task-oriented dialogue. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 917-929, Online. Association for Computational Linguistics.
247
+
248
+ # A Dataset Details
249
+
250
+ Figure 7 presents samples of 16 geometrical objects called "Fribbles" displayed on the participants' screens. Each dyad completed 6 rounds per session, resulting in 96 trials total. In each trial, participants alternated between Director and Matcher roles: the Director described a highlighted Fribble while the Matcher identified and confirmed the corresponding object by naming it loudly before proceeding to the next trial.
251
+
252
+ ![](images/0facefd20a4ca6ded40fa965b5e3f7e743b0c7ec8e2b6a19882195235140171e.jpg)
253
+ Figure 7: 16 "Fribbles" were used in the object matching task (Rasenberg et al., 2022; Eijk et al., 2022).
254
+
255
+ # B OIR Types Examples
256
+
257
+ # Example 1. Open request sample
258
+
259
+ TS SPEAKER: op dat driehoek (TS) (on that triangle)
260
+
261
+ REPAIR INITIATOR: wat zei je? (RI) (what did you say?)
262
+
263
+ TS SPEAKER: op die driehoek (RS) (on that triangle)
264
+
265
+ # Example 2. Restricted request sample
266
+
267
+ TS SPEAKER: deze heeft twee oren die aan de onderkant breder worden en een soort hanekam op+zijn hoofd een kleintje (TS)
268
+
269
+ (this one has two ears that widen at the bottom and a sort of cock's comb on its head a little one)
270
+
271
+ REPAIR INITIATOR: maar wat zei wat zei je in het begin? (RI)
272
+
273
+ (but what did you say at the beginning?)
274
+
275
+ TS SPEAKER: een soort oren die aan de onderkant breder worden (RS)
276
+
277
+ (a kind of ears that widen at the bottom)
278
+
279
+ # Example 3. Restricted offer sample
280
+
281
+ TS SPEAKER: waar bij je dus op de bovenkant zo'n zo'n mini uh kegelte hebt (TS)
282
+
283
+ (where you have one of those mini uh cones on the top)
284
+
285
+ REPAIR INITIATOR: oh ja die zo scheef naar achter staat? (RI)
286
+
287
+
288
+ (oh yes which is so slanted backwards?)
289
+
290
+ TS SPEAKER: ja precies (RS) (yes exactly)
291
+
292
+ # C Top 20 Important Features
293
+
294
+ ![](images/c9878714e0ca564a77dbc416641b48a580aa31fd1a6452b3bdee4f75978ff5cb.jpg)
295
+ Figure 8: Top 20 most contributing features by SHAP values.
296
+
297
+ # D Dialogue Micro Context
298
+
299
+ ![](images/4d30cd34fa89055d2bf1d5dbf6bd214eef6e7896d292688361f29dcf3e870135.jpg)
300
+ Figure 9: Dialogue micro context concatenation approach. Micro context refers to the immediate conversational environment, including the prior and the subsequent segments of the current target segment in dialogue (Schegloff, 1987).
301
+
302
+ # E Detailed Linguistic Features
303
+
304
+ Table 4 summarizes the handcrafted feature set that was automatically extracted using the approach proposed by Ngo et al. (2024).
305
+
306
+ <table><tr><td>Level</td><td>Feature Group</td><td>Feature Type(s)</td><td>Description</td></tr><tr><td rowspan="4">Segment-level</td><td>POS tags sequence</td><td>POS tag bigrams, POS tag ratios</td><td>Binary features for frequent POS tag bigrams (e.g., PRON_Prs→VERB, VERB→COREF); POS tags frequency ratios computed per segment.</td></tr><tr><td>Lemma</td><td>contains_lemma (e.g., nog, kunnen)</td><td>Binary indicators for presence of high-frequency lemmas relevant to different types of repair initiation.</td></tr><tr><td>Question form</td><td>ends_with_question_mark</td><td>Binary feature indicating whether the segment ends with a question mark.</td></tr><tr><td>Non-verbal action</td><td>contains_laugh, contains_sigh, etc.</td><td>Binary features for transcribed non-verbal actions like #laugh#, #sigh#, etc.</td></tr><tr><td rowspan="2">Cross-segment level (prior turns related)</td><td>Repetition from previous turn</td><td>other_repetition_ratio</td><td>Ratio of tokens in the current segment that are repeated from the other speaker's previous turn relative to total segment length.</td></tr><tr><td>Coreference from previous turn</td><td>coref_used_ratio</td><td>Ratio of coreference phrases (e.g., pronouns or noun phrases referring to the previous turn) relative to total segment length.</td></tr><tr><td rowspan="2">Cross-segment level (subsequent turns related)</td><td>Repair solution TSS self-repetition</td><td>other-speaker_self_rep_ratio</td><td>Ratio of self-repetition in the turn following the repair initiation.</td></tr><tr><td>Repair solution TSS other-repetition</td><td>other-speaker_other_rep_ratio</td><td>Ratio of other-repetition in the turn following the repair initiation.</td></tr></table>
307
+
308
+ Table 4: Summary of linguistic feature set used for modeling repair initiation. The full POS tag list includes: ADJ (adjectives), ADP (prepositions and postpositions), ADV (adverbs), AUX (auxiliaries, including perfect tense auxiliaries "hebben" (to have), "zijn" (to be); passive tense auxiliaries "worden" (to become), "zijn" (to be), "krijgen" (to get); and modal verbs "kunnen" (to be able, can), "zullen" (shall), "moeten" (must), "mogen" (to be allowed)), CCONJ (coordinating conjunctions such as "en" (and), "of" (or)), DET (determiners), INTJ (interjections), NOUN (nouns), PRON_Dem (demonstrative pronouns), PRON_Int (interrogative pronouns), PRON_Prs (personal pronouns), PUNCT (punctuation), SYM (symbols), and VERB (verbs). The common lemmas considered include: wat (what), kunnen (can), zitten (to sit/set), zijn (to be), nog (yet/still), wachten (to wait), aan (on/to/at/in/by/beside/upon). The transcribed non-verbal actions include: laughs, sighs, breath, and mouth noise.
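
To make the cross-segment features above concrete, here is a minimal sketch (not the authors' actual extraction code) of the `other_repetition_ratio` feature; the whitespace tokenizer and punctuation stripping are simplifying assumptions.

```python
from typing import List

def tokenize(text: str) -> List[str]:
    # Simplified whitespace tokenizer; the original pipeline uses proper
    # Dutch tokenization and lemmatization (an assumption made here).
    return [t.strip(".,?!").lower() for t in text.split() if t.strip(".,?!")]

def other_repetition_ratio(current_segment: str, previous_other_turn: str) -> float:
    """Share of tokens in the current segment that also occur in the other
    speaker's previous turn, relative to the current segment length."""
    cur = tokenize(current_segment)
    prev = set(tokenize(previous_other_turn))
    if not cur:
        return 0.0
    repeated = sum(1 for tok in cur if tok in prev)
    return repeated / len(cur)

# Toy check using Example 1 above: the repair solution largely repeats
# the trouble-source turn.
print(other_repetition_ratio("op die driehoek", "op dat driehoek"))  # 2/3 ~ 0.67
```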
309
+
310
+ # F Detailed Prosodic Features
311
+
312
+ <table><tr><td>Level</td><td>Feature Group</td><td>Feature Type</td><td>Description</td></tr><tr><td rowspan="6">Segment-level</td><td>Pitch features</td><td>min, max, mean, std, range, num_peaks</td><td>Extracted from voiced frames; outliers removed; peaks from smoothed contour</td></tr><tr><td>Pitch dynamics</td><td>slope</td><td>Captures pitch variation within segment.</td></tr><tr><td>Intensity features</td><td>min, max, mean, std, range</td><td>Computed from nonzero intensity frames; reflects loudness.</td></tr><tr><td>Voice quality</td><td>jitter, shimmer, hnr</td><td>Reflects vocal fold irregularity and breathiness.</td></tr><tr><td>Pause features</td><td>num, durations, short/med/long, positional counts, rel_longest</td><td>Pause detection using adaptive thresholds; categorized by duration and position.</td></tr><tr><td>Speech timing</td><td>rate, articulation_rate, duration</td><td>Segment length and estimated speech rate (e.g., syllables/sec).</td></tr><tr><td rowspan="3">Cross-segment level (both prior and subsequent related)</td><td>Transition features</td><td>end_slope, start_slope, transition</td><td>Pitch slope difference across segment boundaries (prev→cur, cur→next); in semitones/sec.</td></tr><tr><td>Baseline comparison</td><td>z_score, rel_change, range_pos</td><td>Comparison to speaker&#x27;s pitch/intensity baseline.</td></tr><tr><td>Latency</td><td>TS→RI, RI→RS</td><td>Silence duration between trouble source and repair initiation, repair initiation and repair solution.</td></tr></table>
313
+
314
+ Table 5: Summary of prosodic feature set used for modeling repair initiation.
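
As a rough illustration of the segment-level pitch features in Table 5, the sketch below derives them from an F0 contour with plain NumPy; the toy contour, the 3-sigma outlier rule, and the smoothing window are assumptions rather than the paper's actual extraction settings.

```python
import numpy as np

def pitch_stats(f0: np.ndarray) -> dict:
    """Segment-level pitch features from an F0 contour in Hz.
    Unvoiced frames are assumed to be encoded as 0 or NaN."""
    voiced = f0[np.isfinite(f0) & (f0 > 0)]
    if voiced.size == 0:
        return {}
    # Simple outlier removal: keep values within 3 standard deviations (assumption).
    mu, sigma = voiced.mean(), voiced.std()
    if sigma > 0:
        voiced = voiced[np.abs(voiced - mu) <= 3 * sigma]
    # Count peaks on a lightly smoothed contour.
    smoothed = np.convolve(voiced, np.ones(5) / 5, mode="same")
    peaks = int(np.sum((smoothed[1:-1] > smoothed[:-2]) & (smoothed[1:-1] > smoothed[2:])))
    return {
        "pitch_min": float(voiced.min()),
        "pitch_max": float(voiced.max()),
        "pitch_mean": float(voiced.mean()),
        "pitch_std": float(voiced.std()),
        "pitch_range": float(voiced.max() - voiced.min()),
        "num_peaks": peaks,
    }

# Toy contour: 0 marks unvoiced frames.
f0 = np.array([0, 180, 190, 210, 200, 0, 0, 220, 240, 230, 0], dtype=float)
print(pitch_stats(f0))
```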
mmwatdetectingotherinitiatedrepairrequestsindialogue/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:676de62d6dd95b555a36133843c1cd7e13dfd1e6fc6444d17ca19446c04295da
3
+ size 813280
mmwatdetectingotherinitiatedrepairrequestsindialogue/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bbbed175ac44854a19a8ea3360bba2b1dba28d8c8b8ef57a6bafa805385ea00b
3
+ size 339009
pfedgpthierarchicallyoptimizingloraaggregationweightsforpersonalizedfederatedgptmodels/6cae076b-0e4d-416f-8cba-50855d2287bf_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:66e38f4c709f16e5d38c59b9a9f85e1f0f0fecc66534bdb7f5f4c9974d62e218
3
+ size 92942
pfedgpthierarchicallyoptimizingloraaggregationweightsforpersonalizedfederatedgptmodels/6cae076b-0e4d-416f-8cba-50855d2287bf_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bbd8c0053678dea742180b1ca43022eb73914e481276163be5400eac60475dca
3
+ size 112757
pfedgpthierarchicallyoptimizingloraaggregationweightsforpersonalizedfederatedgptmodels/6cae076b-0e4d-416f-8cba-50855d2287bf_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:762eadb0ce10df6de8ec31b71052f0f77eb7f56e10faa1313d11300a68e5e367
3
+ size 525689
pfedgpthierarchicallyoptimizingloraaggregationweightsforpersonalizedfederatedgptmodels/full.md ADDED
@@ -0,0 +1,438 @@
 
 
 
 
1
+ # $pFedGPT$ : Hierarchically Optimizing LoRA Aggregation Weights for Personalized Federated GPT Models
2
+
3
+ Zhanming Shen\*, TianQi Xu\*, Hao Wang\*, Jian Li\*, Miao Pan
4
+
5
+ *Zhejiang University, *Stevens Institute of Technology
6
+
7
+ Stony Brook University, University of Houston
8
+
9
+ # Abstract
10
+
11
+ Federated finetuning of Large Language Models (LLMs) using Low-Rank Adaptation (LoRA) offers computational efficiency and preserves data privacy. However, applying LoRA in federated settings faces significant challenges: standard approaches struggle with data heterogeneity, and existing personalization techniques fail to precisely adapt shared global knowledge to individual client needs. To address these issues, we propose pFedGPT, a framework that leverages Hierarchical Bayesian Optimization (HBO) for fine-grained, personalized LoRA aggregation. pFedGPT intelligently partitions LoRA parameters based on model structure and client information, then employs HBO to hierarchically search for optimal, module-specific weights. This enables a nuanced integration of the downloaded global LoRA state with each client's local model, precisely capturing client-specific requirements. To manage the optimization cost inherent in HBO, pFedGPT incorporates efficient multi-fidelity evaluations and a curriculum learning strategy. Extensive experiments demonstrate that pFedGPT achieves state-of-the-art (SOTA) performance on personalized FL benchmarks, showcasing robustness and scalability while introducing only minimal (approx. $4\%$ ) additional optimization overhead. Our results also underscore the limitations of traditional FL methods for LoRA-based LLM personalization, highlighting the need for tailored approaches like pFedGPT.
12
+
13
+ # 1 Introduction
14
+
15
+ The rapid development of Large language models (LLMs) has drawn widespread attention from academia (Devlin et al., 2018; Radford et al., 2019; Raffel et al., 2020; Zhang et al., 2022). To further improve LLM performance on various downstream tasks, the demand for high-quality training data
16
+
17
+ across different domains is growing. However, this creates a conflict with the need to protect data sensitivity and privacy (Balunovic et al., 2022; Gupta et al., 2022; Klymenko et al., 2022).
18
+
19
+ This challenge has led to increasing interest in fine-tuning LLMs within the framework of federated learning (FL) (Yu et al., 2023; Zhang et al., 2024a), as it allows for decentralized training while preserving data privacy. To reduce communication and computational costs, ParameterEfficient Fine-Tuning (PEFT) methods like low-rank adaptation (LoRA) (Hu et al., 2021) have been adopted, offering a more efficient way to update models without transmitting large weights. However, applying LoRA in FL presents challenges. As data becomes more heterogeneous across clients, the gap between fully fine-tuning the model and using LoRA widens (Babakniya et al., 2023). Additionally, privacy-preserving techniques, such as gradient noise for differential privacy, can destabilize LoRA's performance (Sun et al., 2024). Moreover, global models may not perform well for specific personalized tasks (Wang et al., 2023).
20
+
21
+ These problems prompted us to propose a method for achieving model personalization in LLMs. While personalized Federated Learning (pFL) has been well-researched in traditional machine learning, directly applying it to LLM fine-tuning presents challenges. Most existing Personalized Federated Learning (pFL) solutions are designed for fully trained models (Collins et al., 2021; Oh et al., 2021; Zhang et al., 2023b), making them incompatible with Parameter-Efficient Fine-Tuning (PEFT) methods. In addition, some pFL approaches are not specifically optimized for LLMs (Wu et al., 2023; Yi et al., 2023; Zhang et al., 2023a). Although they can theoretically provide personalized solutions, they often fall short in practice due to the complexity of LLMs and the specific needs of PEFT methods. Thus, there is a pressing need for personalized FL approaches tailored to
22
+
23
+ LLMs, capable of leveraging global information while enhancing the performance of local models.
24
+
25
+ Recent efforts to personalize LLMs in FL scenarios still face challenges. FedDPA (Yang et al., 2024) combines local and global LoRA outputs with a single weight, but struggles to manage multiple adapters and fails to capture personalized information precisely. PerFIT (Zhang et al., 2024b) uses neural architecture search to identify personalized architectures, yet it overlooks that the degree of personalization in the parameter space evolves during training, leading to suboptimal results. In short, these approaches fail to accurately capture the client-specific information in the global model, preventing the local model from fully benefiting from FL. Most importantly, their personalization for LLMs is only superficial and is not grounded in LoRA's own structure.
26
+
27
+ Thus motivated, to better capture the necessary information in the global model downloaded by each client, dynamically achieve optimal personalization of the local model, and tailor the training framework to the LLM, we propose Personalized Federated GPT (pFedGPT), a novel pFL method that introduces Hierarchical Bayesian Optimization (HBO), combined with curriculum learning and multi-fidelity algorithms, into model training. Our contributions are as follows:
28
+
29
+ - We introduce $pFedGPT$ , a method that performs more fine-grained parameter aggregation of the local and global models by conducting Bayesian Optimization on a personalized parameter space searched by each client, thus accurately capturing the desired information in the downloaded global LoRA.
30
+ - We propose a new LLM-oriented form of data heterogeneity, the Task-Specific distribution, which we use together with the traditional Dirichlet distribution as a benchmark for evaluating the personalization capability of LLMs in the context of FL. Based on this benchmark, we show that traditional FL methods are ill-suited to PEFT of LLMs and that a new LoRA-based personalization method is needed.
31
+ - We conducted extensive experiments on three benchmark datasets. The results show that $pFedGPT$ performs better than state-of-the-art (SOTA) methods while introducing only $4\%$ additional optimization time.
32
+
33
+
34
+
35
+ # 2 Preliminary
36
+
37
+ # 2.1 LoRA
38
+
39
+ LoRA achieves PEFT by constraining the update of model parameters to maintain a low intrinsic rank. For a pre-trained LLM parameterized by $\theta_{init} \in \mathbb{R}^{d \times k}$ , LoRA utilizes a low-rank decomposition $AB$ to represent the update $\Delta \theta$ where $A \in \mathbb{R}^{d \times r}$ and $B \in \mathbb{R}^{r \times k}$ with $r \ll \min(d, k)$ . The pre-trained parameter $\theta$ remains fixed during the fine-tuning while $A$ and $B$ are optimized. The update of $\theta_{init}$ is formed as:
40
+
41
+ $$
42
+ \theta_{new} = \theta_{init} + \Delta \theta = \theta_{init} + AB.
43
+ $$
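
A minimal PyTorch sketch of this low-rank update (with arbitrary example dimensions) is:

```python
import torch

d, k, r = 768, 768, 8            # example dimensions with r << min(d, k)
theta_init = torch.randn(d, k)   # frozen pre-trained weight (no gradient)
A = torch.randn(d, r, requires_grad=True)   # trainable low-rank factor
B = torch.zeros(r, k, requires_grad=True)   # trainable low-rank factor; zero init keeps the initial update at 0

delta_theta = A @ B              # low-rank update Delta_theta
theta_new = theta_init + delta_theta
print(theta_new.shape)           # torch.Size([768, 768])
```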
44
+
45
+ # 2.2 Bayesian Optimization (BO)
46
+
47
+ Bayesian optimization is used to optimize objective functions by modeling the objective function $f(\mathbf{x})$ with a Gaussian process. For a given prior, we have $f(\mathbf{x}) \sim \mathcal{GP}\big(\mu (\mathbf{x}),k(\mathbf{x},\mathbf{x}^{\prime})\big)$ , where $\mu (\mathbf{x})$ is the mean function and $k(\mathbf{x},\mathbf{x}^{\prime})$ the covariance function. Given historical data $\mathcal{D} = \{(x_i,y_i)\}_{i = 1}^n$ , the posterior distribution is $p\big(f(\mathbf{x})\mid \mathcal{D},\mathbf{x}\big) = \mathcal{N}\big(\mu_n(\mathbf{x}),\sigma_n^2 (\mathbf{x})\big)$ , where $\mu_{n}(\mathbf{x})$ and $\sigma_n^2 (\mathbf{x})$ are the posterior mean and variance. The next sampling point $\mathbf{x}_{n + 1}$ is selected by maximizing the acquisition function $\alpha (\mathbf{x})$ : $\mathbf{x}_{n + 1} = \arg \max_{\mathbf{x}}\alpha (\mathbf{x})$ .
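
For illustration, a single BO iteration with a GP surrogate and an expected-improvement acquisition might look like the sketch below; the 1-D toy objective and candidate grid are stand-ins, not the federated objective optimized later in the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Toy 1-D objective standing in for an expensive loss function.
    return np.sin(3 * x) + 0.5 * x**2

# Observed history D = {(x_i, y_i)}.
X = np.array([[0.1], [0.6], [1.2]])
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

# Expected improvement (for minimization) over a candidate grid.
candidates = np.linspace(0.0, 1.5, 200).reshape(-1, 1)
mu, sigma = gp.predict(candidates, return_std=True)
best = y.min()
z = (best - mu) / np.maximum(sigma, 1e-9)
ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

x_next = candidates[np.argmax(ei)]   # next point to evaluate
print(x_next)
```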
48
+
49
+ # 2.3 Multi-Fidelity and Curriculum Learning
50
+
51
+ Multi-fidelity optimization aims to reduce the computational cost of evaluating expensive functions by utilizing cheaper, lower-fidelity approximations. The key idea is to combine information of varying fidelity to efficiently guide the optimization process.
52
+
53
+ Curriculum learning (Bengio et al., 2009) is a progressive learning strategy with increasing difficulty, which can accelerate the convergence of the training process and improve the generalization ability of the model. CNAS (Guo et al., 2020) extends the concept of curriculum learning from the data level to the generalized model element level. It starts from a small search space to search for neural structure, and uses the learned knowledge to gradually search in a larger space, which significantly improves the search efficiency and enhances the search effect. Our idea is similar to CNAS. The results of a wide-range rough parameter search in a simpler, lower-cost parameter space are used to guide a small-range parameter search in a more complex parameter space.
54
+
55
+ ![](images/b523bbc0ed367ab25a37f038b30317db5f2806b5ca9b7a5ed873e12bd8074377.jpg)
56
+ Figure 1: Workflow of the proposed method.
57
+
58
+ # 3 Overview
59
+
60
+ Figure 1 provides an overview of the local learning process on the client side. The client downloads the global LoRA parameters from the server and locally aggregates these parameters with the old local LoRA parameters through Hierarchical Bayesian Optimization for initialization (Steps 1-3), which we refer to as Algorithm HBO in the subsequent discussion. Based on this initialization, the client trains the local model and finally uploads the trained local LoRA parameters to the server (Step 4). To initialize the next training round, the server aggregates all the LoRA modules from the last round and distributes the aggregated LoRA module to the clients. The details are as follows:
61
+
62
+ Step 1: Bayesian optimization based on basic parameter space: The local client segments the basic parameter modules of LoRA based on the different model structures loaded by LoRA (such as Q, K, V). Then, it performs Bayesian Optimization within the basic parameter space defined by these modules to determine the initial optimal update weights $\mathbf{w}_{\mathbf{Q}}$ , $\mathbf{w}_{\mathbf{K}}$ , and $\mathbf{w}_{\mathbf{V}}$ . These weights are used to aggregate the global LoRA parameters with the local LoRA parameters.
63
+
64
+ Step 2: Personalized Parameter Space Search Mechanism: To enable the local model to better learn the specific parts of the global knowledge at a more fine-grained level, we define a personalized parameter space. Based on the personalized training information, the local client executes the personalized clustering algorithm in each large
65
+
66
+ parameter module divided in the first stage, so as to find the parameter layers with a similar degree of personalization and form finer-grained small parameter modules such as $\mathbf{w}_{\mathbf{Q1}}, \mathbf{w}_{\mathbf{Q2}}, \ldots, \mathbf{w}_{\mathbf{Qk}}$ (where $\mathbf{k}$ is the number of clusters). These smaller parameter modules are the basic units that make up the personalized parameter space, which allows for more detailed optimization.
67
+
68
+ Step 3: Bayesian Optimization in the Personalized Parameter Space Based on Curriculum Learning: The training strategy follows a curriculum learning approach. The results from Bayesian optimization within the basic parameter space are then used as priors for Bayesian optimization in the searched personalized parameter space. This advanced optimization seeks to determine the optimal way to aggregate the global LoRA parameters with the old local LoRA parameters in a more refined, personalized parameter space.
69
+
70
+ Step 4: Regular Local Training: Each client performs regular local training using LoRA, which is initialized by optimally combining global knowledge with local knowledge.
71
+
72
+ The backbone of the LLM is frozen throughout federated training processes. The complete federated training process is described in Appendix A.
73
+
74
+ # 4 $pFedGPT$'s Design
75
+
76
+ # 4.1 Multi-fidelity Mechanism at the Data Level
77
+
78
+ To reduce the training costs of Bayesian optimization, we employ a multi-fidelity mechanism at the data level. This mechanism involves selecting a subset of the global dataset that closely resembles the local data distribution for each client as local validation set $\mathbf{V}$ , followed by clustering and sampling to create low-fidelity validation datasets $\mathbf{V}_{\mathrm{sampled}}$ . The full algorithmic details of subset selection, clustering, and sampling are presented in Appendix B.
79
+
80
+ The final outcome is the creation of:
81
+
82
+ 1) High-fidelity validation dataset $\mathbf{V}_{\text{high-fidelity}}$ : The full validation dataset $\mathbf{V}$ obtained from the global data.
83
+ 2) Low-fidelity validation dataset $\mathbf{V}_{\mathrm{low-fidelity}}$ : A sampled subset $\mathbf{V}_{\mathrm{sampled}}$ of the validation dataset.
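
Appendix B gives the actual procedure; purely as an illustration, a low-fidelity subset could be built by clustering example embeddings and sampling a few items per cluster, as in the sketch below (embedding source, cluster count, and per-cluster sample size are placeholder assumptions).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder: one embedding vector per validation example
# (in practice these would come from a sentence encoder).
V_embeddings = rng.normal(size=(500, 64))

n_clusters, per_cluster = 10, 4          # assumed hyperparameters
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(V_embeddings)

sampled_idx = []
for c in range(n_clusters):
    members = np.where(km.labels_ == c)[0]
    take = min(per_cluster, members.size)
    sampled_idx.extend(rng.choice(members, size=take, replace=False))

V_low_fidelity = np.array(sorted(sampled_idx))   # indices into the full validation set V
print(len(V_low_fidelity), "of", len(V_embeddings), "examples kept")
```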
84
+
85
+ # 4.2 BO based on Basic Parameter Space
86
+
87
+ Basic Parameter Space Partitioning Based on Model Internal Structure. We classify model parameters according to their roles within the LoRA
88
+
89
+ layers. Specifically, we focus on value projection $(\theta_{\mathrm{value}})$ , query projection $(\theta_{\mathrm{query}})$ , and key projection $(\theta_{\mathrm{key}})$ . Other projections include output projection, feed-forward network input and output, and word token embeddings (denoted as $\theta_{\mathrm{output}}$ , $\theta_{\mathrm{fc\_in}}$ , $\theta_{\mathrm{fc\_out}}$ , $\theta_{\mathrm{wte}}$ ).
90
+
91
+ Let $\theta$ denote the set of all model parameters. We categorize $\theta$ into subsets based on their original structural roles. For detailed discussion, we focus on the main attention components. Therefore, in the basic parameter space for the first Bayesian optimization stage, the parameter subsets are: $\Theta_{\mathrm{basic}} = \{\theta_{\mathrm{value}},\theta_{\mathrm{query}},\theta_{\mathrm{key}}\}$ .
92
+
93
+ Bayesian optimization based on basic parameter space. In the first stage, we optimize the parameters in the basic parameter space. Each parameter subset $\theta_p \in \Theta_{\mathrm{basic}}$ is assigned a hyperparameter $\mathbf{w}_p$ for global optimization. The aggregation of local and global parameters for each parameter subset $\theta_p$ is defined as:
94
+
95
+ $$
96
+ \theta_{p,\mathrm{agg}} = \mathbf{w}_{p}\cdot \theta_{p,\mathrm{local}} + (1 - \mathbf{w}_{p})\cdot \theta_{p,\mathrm{global}},
97
+ $$
98
+
99
+ where $\mathbf{w}_p\in [0,1]$ . The objective function for this optimization is defined as the loss function evaluated on the low-fidelity validation dataset $\mathbf{V}_{\mathrm{low-fidelity}}$ . The optimal weights for these parameters are denoted as:
100
+
101
+ $$
102
+ \mathbf{w}_{\text{stage-1}} = \arg \min_{\mathbf{w}} L(\mathcal{M}(\mathbf{w}), \mathbf{V}_{\text{low-fidelity}}).
103
+ $$
104
+
105
+ Then we use a Gaussian process to apply Bayesian optimization to the objective function above. In the end, we obtain an approximate optimal solution $\mathbf{w}_p^{\mathrm{stage - 1}}$ for the basic parameter space based on the low-fidelity dataset; it indicates the approximate range of optimal solutions within which we will conduct further fine-grained searches.
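
A simplified sketch of the module-wise aggregation that BO evaluates is shown below; the parameter-name convention (q_proj/k_proj/v_proj) and the toy state dicts are assumptions for illustration.

```python
import torch

def group_of(param_name: str) -> str:
    # Assumed naming convention: LoRA parameter names contain q_proj / k_proj / v_proj.
    for key, group in (("q_proj", "query"), ("k_proj", "key"), ("v_proj", "value")):
        if key in param_name:
            return group
    return "other"

def aggregate(local_sd: dict, global_sd: dict, w: dict) -> dict:
    """theta_agg = w_p * theta_local + (1 - w_p) * theta_global, per parameter group."""
    agg = {}
    for name, local_param in local_sd.items():
        wp = w.get(group_of(name), 0.5)          # fall back to equal mixing
        agg[name] = wp * local_param + (1.0 - wp) * global_sd[name]
    return agg

# Toy LoRA state dicts with one q/k/v module each.
local_sd = {f"{p}.lora_A": torch.randn(4, 8) for p in ("q_proj", "k_proj", "v_proj")}
global_sd = {k: torch.randn_like(v) for k, v in local_sd.items()}
w_stage1 = {"query": 0.7, "key": 0.4, "value": 0.6}   # weights proposed by BO

merged = aggregate(local_sd, global_sd, w_stage1)
print({k: tuple(v.shape) for k, v in merged.items()})
```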
106
+
107
+ # 4.3 Personalized Parameter Space Search
108
+
109
+ To enhance FL by balancing global information with local personalization, we employ a personalized parameter space search mechanism based on the model's internal structure and training information for each client.
110
+
111
+ Personalized Parameter Space Based on Training Information. After partitioning the basic parameter space, we expand it by clustering parameters based on training information to capture the personalized needs of each client.
112
+
113
+ For each classified subset $\theta_p \in \{\theta_{\mathrm{value}}, \theta_{\mathrm{query}}, \theta_{\mathrm{key}}\}$ , we compute the following metrics for each parameter:
114
+
115
+ 1) Mean squared difference between local and global parameters for the LoRA-A and LoRA-B matrices, denoted as $\delta_{A,p,i}$ and $\delta_{B,p,i}$ respectively. 2) The difference in parameter change magnitude between local and global models for the LoRA-A and LoRA-B matrices, denoted as $\Delta_{A,p,i}$ and $\Delta_{B,p,i}$ respectively. These metrics are defined as:
116
+
117
+ $$
118
+ \delta_ {A, p, i} = \frac {1}{n} \sum_ {j = 1} ^ {n} \left(\theta_ {p, i} ^ {l, A, j} - \theta_ {p, i} ^ {g, A, j}\right) ^ {2},
119
+ $$
120
+
121
+ $$
122
+ \delta_ {B, p, i} = \frac {1}{n} \sum_ {j = 1} ^ {n} (\theta_ {p, i} ^ {l, B, j} - \theta_ {p, i} ^ {g, B, j}) ^ {2},
123
+ $$
124
+
125
+ $$
126
+ \Delta_ {A, p, i} = \frac {1}{n} \sum_ {j = 1} ^ {n} (\Delta \theta_ {p, i} ^ {l, A, j} - \Delta \theta_ {p, i} ^ {g, A, j}) ^ {2},
127
+ $$
128
+
129
+ $$
130
+ \Delta_ {B, p, i} = \frac {1}{n} \sum_ {j = 1} ^ {n} (\Delta \theta_ {p, i} ^ {l, B, j} - \Delta \theta_ {p, i} ^ {g, B, j}) ^ {2}.
131
+ $$
132
+
133
+ Here, $\Delta \theta_{p,i}^{l,A,j}$ and $\Delta \theta_{p,i}^{g,A,j}$ represent the local and global parameter changes for LoRA-A, respectively:
134
+
135
+ $$
136
+ \Delta \theta_ {p, i} ^ {l, A, j} = \theta_ {p, i} ^ {(T), A, j} - \theta_ {p, i} ^ {(0), A, j},
137
+ $$
138
+
139
+ $$
140
+ \Delta \theta_ {p, i} ^ {l, B, j} = \theta_ {p, i} ^ {(T), B, j} - \theta_ {p, i} ^ {(0), B, j}.
141
+ $$
142
+
143
+ The global parameters $\theta_{p,i}^{g,A}$ and $\theta_{p,i}^{g,B}$ , as well as their change magnitudes, are obtained by averaging the local parameters and their changes across all clients:
144
+
145
+ $$
146
+ \theta_ {p, i} ^ {g} = \frac {1}{m} \sum_ {k = 1} ^ {m} \theta_ {p, i} ^ {k}, \quad \Delta \theta_ {p, i} ^ {g} = \frac {1}{m} \sum_ {k = 1} ^ {m} \Delta \theta_ {p, i} ^ {k},
147
+ $$
148
+
149
+ where $m$ is the number of clients, $\theta_{p,i}^{k}$ represents the parameters of the $k$ -th client, and $\Delta \theta_{p,i}^{k}$ represents the parameter changes of the $k$ -th client. These metrics form the feature vectors $\mathbf{F}_{p,i}$ for clustering: $\mathbf{F}_{p,i} = [\delta_{A,p,i}, \delta_{B,p,i}, \Delta_{A,p,i}, \Delta_{B,p,i}]$ .
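
As an illustration of the clustering step, the sketch below stacks the four metrics into per-layer feature vectors and groups them with k-means; the toy values and the number of clusters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_layers = 24   # e.g., number of LoRA-equipped layers in one projection group

# Placeholder per-layer metrics (in practice computed from the LoRA-A/B
# matrices of the local and global models as in the equations above).
delta_A = rng.random(n_layers)
delta_B = rng.random(n_layers)
change_A = rng.random(n_layers)
change_B = rng.random(n_layers)

# F_{p,i} = [delta_A, delta_B, Delta_A, Delta_B] for each layer i.
F = np.stack([delta_A, delta_B, change_A, change_B], axis=1)

k = 3   # assumed number of clusters per projection group
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(F)

clusters = {c: np.where(labels == c)[0].tolist() for c in range(k)}
print(clusters)   # layer indices grouped into c_{p,1}, ..., c_{p,k}
```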
150
+
151
+ Personalized Parameter Partition Result. Finally, we splice the clustering results of all parameter subsets together to get the personalized parameter subset of the client. The subset is defined as: $\Theta_{\mathrm{personalized}} = \{c_{p,1}, c_{p,2}, \ldots, c_{p,n}\}$ .
152
+
153
+ This personalized parameter subset constitutes the local personalized parameter space of the client. Subsequent fine-grained Bayesian optimization will be based on this space.
154
+
155
+ # 4.4 BO in Personalized Parameter Space
156
+
157
+ Our training strategy uses a curriculum learning approach. In the previous steps, we have completed
158
+
159
+ a simple and inexpensive Bayesian optimization in the basic parameter space and determined the roughly optimal update weights. Now, we want to apply the results of this preliminary phase with the relevant training information as a prior to more complex, costly and high-precision Bayesian optimization. This advanced optimization aims to find the optimal way to aggregate global LoRA parameters with the old local LoRA parameters in a more refined, personalized parameter space.
160
+
161
+ Curriculum Learning for Personalized BO: In the second stage, each parameter cluster $c_{p,i} \in \Theta_{\text{personalized}}$ is assigned a hyperparameter $\mathbf{w}_{c_{p,i}}$ for global optimization. The aggregation of local and global parameters for each parameter cluster $c_{p,i}$ is defined as: $c_{p,i,\mathrm{agg}} = \mathbf{w}_{c_{p,i}} \cdot c_{p,i,\mathrm{local}} + (1 - \mathbf{w}_{c_{p,i}}) \cdot c_{p,i,\mathrm{global}}$ , where $\mathbf{w}_{c_{p,i}} \in [0,1]$ .
162
+
163
+ Before the second-stage Bayesian optimization, the curriculum learning initialization incorporates two key components:
164
+
165
+ 1) Initialization using $\mathbf{w}_{\mathrm{stage - 1}}$ : The results from the first stage are used to initialize the optimization process for each cluster $c_{p,i}$ based on its parameter subset $\theta_p$ : $\mathbf{w}_{c_{p,i}}^{(0)} = \mathbf{w}_p^{\mathrm{stage - 1}}$ .
166
+ 2) Initialization Incorporating Prior Information: The training information from the basic parameter space optimization serves as the prior for the personalized parameter space optimization. The training process for each cluster $c_{p,i}$ is initialized using the training information from the corresponding parameter subset $\theta_p$ . This is achieved by fitting a Gaussian Process (GP) model using the collected prior information: $\mathrm{GP} \sim \mathcal{N}(\mathbf{X}_{\mathrm{prior}}, \mathbf{y}_{\mathrm{prior}})$ .
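
A minimal sketch of this warm start, assuming the stage-1 history is simply a list of evaluated weights and losses, is to refit the GP surrogate on those observations before proposing stage-2 candidates:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Assumed stage-1 history: sampled aggregation weights and their validation losses.
X_prior = np.array([[0.2], [0.5], [0.8]])
y_prior = np.array([0.92, 0.71, 0.85])

# Fit the GP on the prior observations so stage-2 starts from this surrogate.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_prior, y_prior)

# Stage-2 search is also initialized at the stage-1 optimum instead of a random point.
w_init = X_prior[np.argmin(y_prior)]
print(w_init)
```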
167
+
168
+ The optimization objective for the second-stage Bayesian optimization is defined as:
169
+
170
+ $$
171
+ \mathbf{w}_{\text{stage-2}} = \arg \min_{\mathbf{w}} L(\mathcal{M}(\mathbf{w}), \mathbf{V}_{\text{low-fidelity}}, \mathcal{P}_{\text{prior}}).
172
+ $$
173
+
174
+ Selection of Top Results for High-Fidelity Optimization: After performing low-fidelity Bayesian optimization, we select the top $k$ best results and use them to perform high-fidelity optimization on the full validation dataset $\mathbf{V}_{\mathrm{high - fidelity}}$ . The final optimal weights are denoted as $\mathbf{w}_{\mathrm{final}}$ :
175
+
176
+ $$
177
+ \mathbf{w}_{\text{final}} = \arg \min_{\mathbf{w}\in \text{Top-}k} L(\mathcal{M}(\mathbf{w}), \mathbf{V}_{\text{high-fidelity}}).
178
+ $$
179
+
180
+ The final parameters are aggregated with the optimal weights: $\theta_{\mathrm{agg}} = \mathbf{w}_{\mathrm{final}}\cdot \theta_{\mathrm{local}} + (1 - \mathbf{w}_{\mathrm{final}})\cdot \theta_{\mathrm{global}}$ .
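
A small sketch of the top-$k$ re-ranking step, with placeholder candidates and a stand-in high-fidelity loss, is:

```python
import numpy as np

# Assumed: the low-fidelity search produced candidate weight vectors and scores.
candidates = [np.array([0.7, 0.4, 0.6]), np.array([0.6, 0.5, 0.5]), np.array([0.8, 0.3, 0.7])]
low_fid_loss = [0.71, 0.74, 0.69]

def loss_high_fidelity(w):
    # Stand-in for L(M(w), V_high_fidelity): evaluate the merged model on the
    # full validation set (expensive, so only done for the top-k candidates).
    return float(np.sum((w - 0.65) ** 2))

k = 2
top_k = [candidates[i] for i in np.argsort(low_fid_loss)[:k]]
w_final = min(top_k, key=loss_high_fidelity)
print(w_final)
```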
181
+
182
+ # 4.5 Personalized Slow Start Mechanism
183
+
184
+ In FL, as shown in Appendix D, local fine-tuning may achieve faster initial convergence compared
185
+
186
+ to federated training (FedIT), but it often results in lower final accuracy. Since our method involves the aggregation of locally trained LoRA parameters and globally aggregated LoRA parameters, in order to avoid the local optima caused by the aggregation weight being too biased to the local parameters in the early stage of training, we employ a personalized slow start mechanism. Specifically, we monitor the convergence of the FL process using the relative change in evaluation loss over a sliding window. The process is defined as follows:
187
+
188
+ Let $\mathbf{L}_t$ and $\mathbf{L}_{t - 1}$ be the arrays of training losses in the current and the previous sliding window, respectively, and let $\epsilon >0$ be a small constant that avoids division by zero. We denote the relative change at round $t$ by $\Delta_t$ :
189
+
190
+ $$
191
+ \Delta_{t} = \frac{\left|\mathrm{mean}(\mathbf{L}_{t}) - \mathrm{mean}(\mathbf{L}_{t-1})\right|}{\max\left(\mathrm{mean}(\mathbf{L}_{t-1}), \epsilon\right)}.
192
+ $$
193
+
194
+ The local client is regarded as having accumulated sufficient global knowledge (the "Slow-Start" phase ends) when $\Delta_t$ falls below a predefined tolerance $\delta_{\mathrm{max}}$ or when the training epoch index $t$ reaches the upper bound $T_{\mathrm{max}}$ :
195
+
196
+ $$
197
+ \operatorname{SlowStart}(t) = \begin{cases} \text{True}, & \text{if } \Delta_{t} < \delta_{\max} \text{ or } t \geq T_{\max}, \\ \text{False}, & \text{otherwise}. \end{cases}
198
+ $$
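
In code, the slow-start check amounts to a sliding-window comparison like the sketch below (the tolerance and round cap are assumed values):

```python
import numpy as np

def slow_start(losses_cur, losses_prev, t, delta_max=0.01, T_max=10, eps=1e-8):
    """Return True once the relative change of the windowed mean loss is small
    enough, or once the round index reaches T_max (thresholds are assumptions)."""
    delta_t = abs(np.mean(losses_cur) - np.mean(losses_prev)) / max(np.mean(losses_prev), eps)
    return delta_t < delta_max or t >= T_max

# Relative change here is about 0.019 > 0.01 and t < T_max, so this prints False.
print(slow_start([0.52, 0.51, 0.50], [0.53, 0.52, 0.51], t=4))
```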
199
+
200
+ # 5 Experiments
201
+
202
+ # 5.1 Experimental Settings
203
+
204
+ Dataset. We conducted our experiments on three datasets from previous federated learning research: Databricks-dolly-15k (Zhang et al., 2024a), Flan 1 and Flan 2 (Yang et al., 2024). Each dataset has eight different NLP tasks. Details of each task can be found in the original papers.
205
+
206
+ Data Distribution. To emulate the heterogeneous data distribution in local clients, we proposed two data heterogeneity distribution settings based on these datasets. The first is a Dirichlet distribution parameterized by a coefficient $\beta$ , denoted as $\mathrm{Dir}(\beta)$ , with $\beta$ set to 0.5 throughout the experiments. At the same time, based on the powerful generalization ability of LLMs, we propose a new type of data distribution, which assigns each client a unique task type from the dataset categories, referred to as the Task-Specific distribution, as shown in Appendix C. Other training details are documented in Appendix E.
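
For reference, a Dirichlet-based split of each task's examples across clients can be produced as in the sketch below; the client count, task sizes, and seed are placeholders (the experiments use $\beta = 0.5$).

```python
import numpy as np

rng = np.random.default_rng(42)
n_clients, beta = 8, 0.5
# Assumed toy setup: number of examples available for each of 8 task categories.
task_sizes = {f"task_{i}": 200 for i in range(8)}

client_indices = {c: [] for c in range(n_clients)}
offset = 0
for task, size in task_sizes.items():
    idx = offset + rng.permutation(size)                 # global example indices for this task
    proportions = rng.dirichlet(beta * np.ones(n_clients))
    split_points = (np.cumsum(proportions)[:-1] * size).astype(int)
    for c, part in enumerate(np.split(idx, split_points)):
        client_indices[c].extend(part.tolist())
    offset += size

print({c: len(v) for c, v in client_indices.items()})    # per-client example counts
```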
207
+
208
+ <table><tr><td rowspan="2">Method</td><td colspan="2">Databricks-dolly-15k</td><td colspan="2">Flan 1</td><td colspan="2">Flan 2</td></tr><tr><td>Dir(0.5)</td><td>Task-Specific</td><td>Dir(0.5)</td><td>Task-Specific</td><td>Dir(0.5)</td><td>Task-Specific</td></tr><tr><td>FedAvg</td><td>72.58</td><td>72.59</td><td>73.65</td><td>71.52</td><td>72.76</td><td>69.12</td></tr><tr><td>FedAvgM</td><td>70.26</td><td>64.78</td><td>65.38</td><td>57.99</td><td>63.32</td><td>55.74</td></tr><tr><td>FedAdagrad</td><td>72.19</td><td>73.38</td><td>70.42</td><td>70.25</td><td>68.78</td><td>69.16</td></tr><tr><td>FedAdam</td><td>70.57</td><td>62.03</td><td>61.66</td><td>66.41</td><td>64.39</td><td>64.49</td></tr><tr><td>FedProx</td><td>72.22</td><td>72.80</td><td>72.98</td><td>69.72</td><td>75.89</td><td>69.02</td></tr><tr><td>FedYogi</td><td>69.40</td><td>63.00</td><td>64.43</td><td>65.81</td><td>64.18</td><td>65.35</td></tr><tr><td>FedIT</td><td>72.30</td><td>71.14</td><td>70.01</td><td>71.50</td><td>70.84</td><td>70.88</td></tr><tr><td>PerFIT</td><td>73.49</td><td>71.55</td><td>73.78</td><td>77.91</td><td>75.58</td><td>70.04</td></tr><tr><td>FedDPA</td><td>73.30</td><td>73.83</td><td>72.05</td><td>78.69</td><td>74.25</td><td>72.58</td></tr><tr><td>pFedGPT</td><td>73.90</td><td>74.38</td><td>74.25</td><td>79.13</td><td>76.25</td><td>72.63</td></tr></table>
209
+
210
+ Table 1: Comparison of our method with traditional and recent FL methods under Dir(0.5) and Task-Specific settings on Databricks-dolly-15k, Flan 1, and Flan 2 datasets.
211
+
212
+ <table><tr><td rowspan="2">Method</td><td>Dir(0.5)</td><td>Task-Specific</td><td>Overall</td></tr><tr><td>Mean</td><td>Mean</td><td>Mean</td></tr><tr><td>FedAvg</td><td>72.33</td><td>71.08</td><td>71.71</td></tr><tr><td>FedAvgM</td><td>66.32</td><td>59.50</td><td>62.91</td></tr><tr><td>FedAdagrad</td><td>70.46</td><td>70.93</td><td>70.70</td></tr><tr><td>FedAdam</td><td>65.54</td><td>64.31</td><td>64.93</td></tr><tr><td>FedProx</td><td>73.03</td><td>70.51</td><td>71.77</td></tr><tr><td>FedYogi</td><td>65.34</td><td>64.05</td><td>64.69</td></tr><tr><td>FedIT</td><td>71.05</td><td>71.17</td><td>71.11</td></tr><tr><td>PerFIT</td><td>74.28</td><td>73.17</td><td>73.72</td></tr><tr><td>FedDPA</td><td>73.20</td><td>75.03</td><td>74.12</td></tr><tr><td>pFedGPT</td><td>74.80</td><td>75.38</td><td>75.09</td></tr></table>
213
+
214
+ # 5.2 Main Results
215
+
216
+ We compared our method with traditional FL methods compatible with LoRA (FedAvg (McMahan et al., 2017), FedAvgM (Hsu et al., 2019), FedAdaGrad (Reddi et al., 2020), FedAdam (Reddi et al., 2020)), FedProx (Li et al., 2020), FedYogi (Reddi et al., 2020), as well as recent works specifically designed for applying LoRA in FL with large models (FedIT (Zhang et al., 2024a), PerFIT (Zhang et al., 2024b), FedDPA (Yang et al., 2024)). Our method was evaluated under the two proposed data distribution settings across the three datasets mentioned above. Following FedIT (Zhang et al., 2024a), we use the GPT-4o score as an evaluation indicator of the effectiveness of our model generation. Other baseline details are documented in Appendix E.2.
217
+
218
+ The results indicate the effectiveness of our approach across different tasks and data distributions, as shown in Table 1. Our method consistently outperforms traditional FL methods and recent works designed for LLMs with LoRA, highlighting the improvements in local task performance.
219
+
220
+ The statistical analysis, shown in Table 2, further proves our findings.
221
+
222
+ Table 2: Mean performance under Dir(0.5), Task-Specific settings, and overall mean across three datasets.
223
+
224
+ <table><tr><td rowspan="2">Method</td><td>Computation</td><td>Communication</td></tr><tr><td>Total time</td><td>Param./iter.</td></tr><tr><td>FedAvg</td><td>1386 min</td><td>2 × Σ</td></tr><tr><td>FedAvgM</td><td>1422 min</td><td>2 × Σ</td></tr><tr><td>FedAdagrad</td><td>1424 min</td><td>2 × Σ</td></tr><tr><td>FedAdam</td><td>1456 min</td><td>2 × Σ</td></tr><tr><td>FedProx</td><td>1506 min</td><td>2 × Σ</td></tr><tr><td>FedYogi</td><td>1448 min</td><td>2 × Σ</td></tr><tr><td>FedIT</td><td>1369 min</td><td>2 × Σ</td></tr><tr><td>PerFIT</td><td>1866 min</td><td>2 × Σ</td></tr><tr><td>FedDPA</td><td>2705 min</td><td>2 × Σ</td></tr><tr><td>pFedGPT</td><td>1431 min</td><td>2 × Σ</td></tr></table>
225
+
226
+ Table 3: Computing and communication cost on dolly dataset. $\sum$ is the parameter amount in the LoRA.
227
+
228
+ Under the Dir(0.5), Task-Specific, and combined settings, our method demonstrates higher mean performance compared to traditional FL methods. This consistency across different data distributions and tasks highlights the limitations of traditional methods and emphasizes the necessity for novel pFL approaches.
229
+
230
+ Above all, our findings are:
231
+
232
+ 1) SOTA Performance: Our method achieves SOTA performance across all tested datasets and methods, demonstrating the robustness and effectiveness of our method in enhancing local task performance.
233
+ 2) Limitations of Traditional FL Methods: When faced with LLM scenarios that bring new forms of data heterogeneity, traditional FL methods often exhibit inferior performance under Task-Specific settings compared to Dir(0.5) settings. In contrast, our method shows improved performance under Task-Specific settings, indicating its superior adaptability to new tasks in LLM+FL scenarios. These results underscore the inadequacy of traditional FL methods in handling the complexities of LLMs and diverse data distributions, thus supporting the need for innovative pFL methods designed for LLMs in the FL context.
234
+
235
+ <table><tr><td>Configuration</td><td>GPT-4o Avg. Score</td></tr><tr><td>Stage-1 BO + low fidelity</td><td>72.84</td></tr><tr><td>Stage-1 BO + high fidelity</td><td>73.56</td></tr><tr><td>Stage-2 BO + low fidelity</td><td>73.51</td></tr><tr><td>Stage-2 BO + high fidelity</td><td>73.63</td></tr><tr><td>Full pFedGPT (ours)</td><td>73.90</td></tr><tr><td>pFedGPT-slow start removed</td><td>73.59</td></tr><tr><td>pFedGPT + high fidelity†</td><td>74.01</td></tr></table>
236
+
237
+ Table 4: Ablation on BO stages and validation fidelity levels on Databricks-dolly-15k (Dir(0.5)). Scores are the mean of three independent GPT-4o judgments per output (higher is better). $\dagger$ Always uses the high-fidelity validation set in both stages (higher cost).
238
+
239
+
240
+
241
+ # 5.3 Ablation Study
242
+
243
+ Effectiveness of $pFedGPT$ . As shown in Table 4, we compare six settings on Databricks-dolly-15k (Dir(0.5)):
244
+
245
+ (i) Stage-1 $BO +$ low fidelity: optimize only the Basic Parameter Space; validation on the low-fidelity subset;
246
+ (ii) Stage-1 $BO +$ high fidelity: same as (i) but validation on the full (high-fidelity) set;
247
+ (iii) Stage-2 $BO +$ low fidelity: each client searches its personalized parameter space; both BO and validation use the low-fidelity subset;
248
+ (iv) Stage-2 $BO +$ high fidelity: identical to (iii) but validation uses the full set;
249
+ (v) Full $pFedGPT$ (ours): curriculum links Stage-1→Stage-2 and switches validation from low→high fidelity, yielding hierarchical BO with multifidelity;
250
+ (vi) $pFedGPT + high$ fidelity: same as (v) but always validates on the full set in both stages (no low-fidelity sampling).
251
+
252
+ As in the main experiment, each reported number is the average of three independent GPT-4o judgments per output.
253
+
254
+ Our full $pFedGPT$ even outperforms "Stage-2 BO + high fidelity" despite the latter's direct use of the expensive validation set. This suggests that, under the same number of optimization rounds, our hierarchical initialization provides a stronger starting point, allowing faster convergence to the optimum. Moreover, even when using high fidelity at every stage, the improvement over full $pFedGPT$ is marginal (only +0.11), highlighting the efficiency and robustness of our multi-fidelity, curriculum-driven HBO framework.
255
+
256
+ Effectiveness under Different Learning Rates. To analyze the effectiveness of our method under
257
+
258
+ different learning rates, we used the Databricks-dolly-15k dataset with the Dir(0.5) distribution. We tested three learning rates: $5 \times 10^{-5}$ , $1 \times 10^{-4}$ , and $1.5 \times 10^{-4}$ . The evaluation loss over communication rounds for each learning rate is illustrated in Figure 2.
259
+
260
+ Despite the initial slower convergence rate due to the reduced data volume when splitting part of the training set into a global dataset, our method's unique advantages ensure that the model achieves higher final accuracy. Additionally, our approach demonstrates continuous accuracy improvement even when FedIT starts to overfit.
261
+
262
+ Computing and Communication Overhead. We record the total time cost for each method, as shown in Table 3. pFedGPT achieves SOTA performance while introducing only about $4\%$ additional training time and no additional communication cost compared to the baseline methods, placing it among the top performers in terms of efficiency. Moreover, its extra overhead is significantly lower than that of the other two personalization algorithms specifically designed for LLMs, underscoring the superior efficiency of our approach.
263
+
264
+ Impact of different number of clients. To understand the impact of varying the number of clients on the performance of different FL methods, we conducted experiments with 8, 20, and 50 clients under the Dir(0.5) setting across the above three datasets. We selected FedAvg, FedProx, FedIT, PerFIT, and FedDPA based on their strong performance in the main experiments. The results are shown in Table 5, demonstrating the superior scalability and robustness of pFedGPT in real-world scenarios. In addition, we found that the increase in the number of clients brought more performance gains to the FL approach specifically designed for LLM compared to the traditional FL approach, further supporting our view of the need to customize the FL training approach for LLM.
265
+
266
+ Impact of different sizes of sampling weights. In order to evaluate the effect of each client's weight sampled from the global dataset on the model performance, we conducted experiments on all the aforementioned datasets using the sample weights $\{1/8, 1/4, 1/2, 1\}$ . As Figure 3 shows, for a Dirichlet distribution with $\beta$ set to 0.5, the guided validation set required by each client may need to be more generalized, so when the sample weight of the validation set goes up, there is a slight improvement in model performance, representing higher
267
+
268
+ ![](images/648e193727118c6f8df154a05e4069341143c2c312ab5502d38c856eb8980ea7.jpg)
269
+ Figure 2: Evaluation loss vs. communication rounds for learning rate $5 \times 10^{-5} / 1 \times 10^{-4} / 1.5 \times 10^{-4}$ .
270
+
271
+ ![](images/a336ae239d26d96083d12e7e2e8b242528f8f502351b8a5502ec6cde89c5f690.jpg)
272
+
273
+ ![](images/a3e8ea4228c8ccf1f6376dbf1b039ecb19138ff834a71b543ee938b15044de79.jpg)
274
+
275
+ <table><tr><td rowspan="2">Method</td><td colspan="3">Databricks-dolly-15k</td><td colspan="3">Flan 1</td><td colspan="3">Flan 2</td></tr><tr><td>8 Clients</td><td>20 Clients</td><td>50 Clients</td><td>8 Clients</td><td>20 Clients</td><td>50 Clients</td><td>8 Clients</td><td>20 Clients</td><td>50 Clients</td></tr><tr><td>FedAvg</td><td>72.58</td><td>71.74</td><td>72.51</td><td>73.65</td><td>73.82</td><td>73.87</td><td>72.76</td><td>70.12</td><td>71.07</td></tr><tr><td>FedProx</td><td>72.22</td><td>72.39</td><td>74.68</td><td>72.98</td><td>72.35</td><td>73.47</td><td>75.89</td><td>75.10</td><td>74.85</td></tr><tr><td>PerFIT</td><td>73.49</td><td>75.20</td><td>76.05</td><td>73.78</td><td>74.25</td><td>75.10</td><td>75.58</td><td>76.35</td><td>76.90</td></tr><tr><td>FedDPA</td><td>73.30</td><td>73.35</td><td>74.10</td><td>72.05</td><td>72.90</td><td>72.50</td><td>74.25</td><td>76.00</td><td>76.65</td></tr><tr><td>pFedGPT</td><td>73.90</td><td>75.61</td><td>76.74</td><td>74.25</td><td>74.00</td><td>75.90</td><td>76.25</td><td>77.10</td><td>77.85</td></tr></table>
276
+
277
+ Table 5: Performance comparison of different methods with varying number of clients under Dir(0.5) setting across Databricks-dolly-15k, Flan 1, and Flan 2 datasets.
278
+
279
+ ![](images/da229bae5600a8d12f58d5cd7cc404ea01a9ceecd1c91d99fb83e2714b7131b0.jpg)
280
+ Figure 3: Comparison of model performance with different sizes of sampling weights.
281
+
282
+ ![](images/27301663d5e665670fb7fdaa6692ea9e48e35f82285225b16724a3fff186c61a.jpg)
283
+
284
+ precision of model personalization initialization. However, for the Task-Specific distributed data, the high degree of heterogeneity of the tasks (especially the Flan1/2 dataset also contains heterogeneity of output formats) makes it probably a safer choice for each client to select a smaller validation set that is more accurate. Specifically, in our experimental setup, a sample weight of 1/8 for a task-specific distribution approximates finding only the same Task data in the global dataset as the guided validation set, and thus tends to achieve a good experimental result. However, for our method, even if the worst sampling weights are selected, the model trained by our method still outperforms the vast majority of models on a specific task, and can basically achieve SOTA on the overall performance of all tasks.
285
+
286
+ # 6 Related Work
287
+
288
+ Parameter-Efficient Fine-Tuning (PEFT): To further ease the limitations of FL in the context of LLMs, recent work has focused on integrating PEFT methods with FL settings, including reducing communication costs (Malaviya et al., 2023; Nguyen et al., 2024; Sun et al., 2024; Xu et al., 2023; Zhang et al., 2023c), protecting differential privacy (Sun et al., 2024; Zhang et al.,
289
+
290
+ 2024a), and establishing fine-tuning frameworks (Kuang et al., 2023; Ye et al., 2024; Zhang et al., 2024a). In terms of alleviating data heterogeneity and achieving model personalization, SLoRA (Babakniya et al., 2023) finds a personalized starting point for the model through two-stage training and SVD matrix decomposition. PerFIT (Zhang et al., 2024b) uses neural architecture search to find a personalized architecture for each client. FedDPA (Yang et al., 2024) learns an additional local adapter during training and combines the output of the global and local adapters through an instance-level dynamic weight. Our $pFedGPT$ is more accurate than the above methods and does not introduce additional memory costs. Fine-grained adaptive local aggregation based on model internal structure makes it possible to intelligently aggregate global and local models to fit local targets on each client. In addition, because $pFedGPT$ modifies only local initialization in FL, it can be applied to existing FL methods to improve their performance without modifying other learning processes.
291
+
292
+ Bayesian Federated Learning (BFL) (Cao et al., 2023) extends traditional FL by deriving a global posterior distribution that aggregates knowledge from all clients. Some existing methods integrate Bayesian optimization (BO) with FL, such as FTS (Dai et al., 2020) and TFP (Zang et al., 2022), focusing on improving efficiency through dimensionality reduction or zeroth-order optimization. However, these approaches lack considerations for applying BO to achieve fine-grained personalization and struggle to adapt to the unique challenges of PEFT in LLMs. Our proposed method, pFedGPT, introduces a
293
+
294
+ Hierarchical Bayesian Optimization framework tailored for LoRA-based FL, enabling precise integration of global and local information. This approach achieves robust personalization, improved scalability, and state-of-the-art performance, addressing key gaps in existing FL-BO methods.
295
+
296
+ # 7 Conclusion
297
+
298
+ In this paper, we introduced $pFedGPT$ , which leverages hierarchical Bayesian optimization to accurately capture the desired information in the downloaded global LoRA and integrates curriculum learning and multi-fidelity algorithms to reduce computational costs while maintaining accuracy. Our experiments show that $pFedGPT$ outperforms SOTA methods with minimal extra optimization cost while maintaining scalability and robustness. Additionally, we proposed a task-specific distribution benchmark to evaluate LLM personalization, demonstrating the limitations of traditional pFL methods and the necessity of new personalization methods designed for LLMs.
299
+
300
+ # Limitations
301
+
302
+ While our approach demonstrates strong performance, it has a few limitations. First, our experiments focus primarily on the QKV projections, which are the most frequently loaded in LoRA-based models, and further exploration is needed for other projections. Second, we did not investigate the similarity of personalization across different projections, which could lead to more efficient optimization by grouping similar projections. Finally, while we introduce a task-specific distribution, further work is needed to develop more advanced methods for measuring heterogeneity between different tasks, which would improve the understanding of LLM personalization in the context of federated learning.
303
+
304
+ # Acknowledgments
305
+
306
+ The work of Hao Wang was supported in part by NSF 2534286, 2523997, 2315612, and the AWS Cloud Credit for Research program. The work of Jian Li was supported in part by NSF 2315614. The work of Miao Pan was supported in part by NSF 2403249. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.
307
+
308
+ # References
309
+
310
+ Sara Babakniya, Ahmed Roushdy Elkordy, Yahya H Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa El-Khamy, and Salman Avestimehr. 2023. SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models. arXiv preprint arXiv:2308.06522.
311
+ Mislav Balunovic, Dimitar Dimitrov, Nikola Jovanovic, and Martin Vechev. 2022. Lamp: Extracting Text from Gradients with Language Model Priors. In Proc. Advances in Neural Information Processing Systems (NeurIPS).
312
+ Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum Learning. In Proc. 26th Annual International Conference on Machine Learning (ICML).
313
+ Longbing Cao, Hui Chen, Xuhui Fan, Joao Gama, Yew-Soon Ong, and Vipin Kumar. 2023. Bayesian federated learning: a survey. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, pages 7233-7242.
314
+ Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. 2021. Exploiting Shared Representations for Personalized Federated Learning. In Proc. International Conference on Machine Learning (ICML).
315
+ Zhongxiang Dai, Bryan Kian Hsiang Low, and Patrick Jaillet. 2020. Federated bayesian optimization via thompson sampling. Advances in Neural Information Processing Systems, 33:9687-9699.
316
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805.
317
+ Yong Guo, Yaofo Chen, Yin Zheng, Peilin Zhao, Jian Chen, Junzhou Huang, and Mingkui Tan. 2020. Breaking the Curse of Space Explosion: Towards Efficient NAS with Curriculum Search. In Proc. International Conference on Machine Learning (ICML).
318
+ Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, and Danqi Chen. 2022. Recovering Private Text in Federated Learning of Language Models. In Proc. Advances in Neural Information Processing Systems (NeurIPS).
319
+ Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. 2019. Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification. arXiv preprint arXiv:1909.06335.
320
+ Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-Rank Adaptation of Large Language Models. arXiv preprint arXiv:2106.09685.
321
+ Oleksandra Klymenko, Stephen Meisenbacher, and Florian Matthes. 2022. Differential Privacy in Natural Language Processing: The Story So Far. arXiv preprint arXiv:2208.08140.
322
+
323
+ Weirui Kuang, Bingchen Qian, Zitao Li, Daoyuan Chen, Dawei Gao, Xuchen Pan, Yuexiang Xie, Yaliang Li, Bolin Ding, and Jingren Zhou. 2023. FederatedScope-LLM: A Comprehensive Package for Fine-Tuning Large Language Models in Federated Learning. arXiv preprint arXiv:2309.00363.
324
+ Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. 2020. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Processing Magazine, 37(3):50-60.
325
+ Shubham Malaviya, Manish Shukla, and Sachin Lodha. 2023. Reducing Communication Overhead in Federated Learning for Pre-Trained Language Models Using Parameter-Efficient Finetuning. In Proc. Conference on Lifelong Learning Agents.
326
+ Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient Learning of Deep Networks from Decentralized Data. In Proc. International Conference on Artificial Intelligence and Statistics (AISTATS).
327
+ Duy Phuong Nguyen, J Pablo Munoz, and Ali Jannesari. 2024. Flora: Enhancing Vision-Language Models with Parameter-Efficient Federated Learning. arXiv preprint arXiv:2404.15182.
328
+ Jaehoon Oh, Sangmook Kim, and Se-Young Yun. 2021. FedBABU: Towards Enhanced Representation for Federated Image Classification. arXiv preprint arXiv:2106.06042.
329
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, and 1 others. 2019. Language Models are Unsupervised Multitask Learners. OpenAI Blog, 1(8):9.
330
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research, 21(140):1-67.
331
+ Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and H Brendan McMahan. 2020. Adaptive Federated Optimization. arXiv preprint arXiv:2003.00295.
332
+ Youbang Sun, Zitao Li, Yaliang Li, and Bolin Ding. 2024. Improving LoRA in Privacy-Preserving Federated Learning. arXiv preprint arXiv:2403.12313.
333
+ Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Stanford Alpaca: An Instruction-Following LLaMA Model.
334
+ Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, and 1 others. 2023. How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources. In Proc. Advances in Neural Information Processing Systems (NeurIPS).
337
+ Xinghao Wu, Xuefeng Liu, Jianwei Niu, Guogang Zhu, and Shaojie Tang. 2023. Bold but Cautious: Unlocking the Potential of Personalized Federated Learning through Cautiously Aggressive Collaboration. In Proc. IEEE/CVF International Conference on Computer Vision (ICCV).
338
+ Mengwei Xu, Yaozong Wu, Dongqi Cai, Xiang Li, and Shangguang Wang. 2023. Federated Fine-Tuning of Billion-Sized Language Models Across Mobile Devices. arXiv preprint arXiv:2308.13894.
339
+ Yiyuan Yang, Guodong Long, Tao Shen, Jing Jiang, and Michael Blumenstein. 2024. Dual-Personalizing Adapter for Federated Foundation Models. arXiv preprint arXiv:2403.19211.
340
+ Rui Ye, Wenhao Wang, Jingyi Chai, Dihan Li, Zexi Li, Yinda Xu, Yaxin Du, Yanfeng Wang, and Siheng Chen. 2024. OpenFedLLM: Training Large Language Models on Decentralized Private Data via Federated Learning. arXiv preprint arXiv:2402.06954.
341
+ Liping Yi, Han Yu, Gang Wang, and Xiaoguang Liu. 2023. FedLoRA: Model-Heterogeneous Personalized Federated Learning with LoRA Tuning. arXiv preprint arXiv:2310.13283.
342
+ Sixing Yu, J Pablo Muñoz, and Ali Jannesari. 2023. Federated Foundation Models: Privacy-Preserving and Collaborative Learning for Large Models. arXiv preprint arXiv:2305.11414.
343
+ Lu Zang, Yang Qin, and Ruonan Li. 2022. Traffic Flow Prediction Based on Federated Learning with Joint PCA Compression and Bayesian Optimization. In 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 3330-3335. IEEE.
344
+ Jianqing Zhang, Yang Hua, Hao Wang, Tao Song, Zhengui Xue, Ruhui Ma, and Haibing Guan. 2023a. FedALA: Adaptive Local Aggregation for Personalized Federated Learning. In Proc. AAAI Conference on Artificial Intelligence (AAAI).
345
+ Jianqing Zhang, Yang Hua, Hao Wang, Tao Song, Zhengui Xue, Ruhui Ma, and Haibing Guan. 2023b. FedCP: Separating Feature Information for Personalized Federated Learning via Conditional Policy. In Proc. 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD).
346
+ Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Tong Yu, Guoyin Wang, and Yiran Chen. 2024a. Towards Building the FederatedGPT: Federated Instruction Tuning. In Proc. ICASSP 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
347
+
348
+ Pengyu Zhang, Yingbo Zhou, Ming Hu, Junxian Feng, Jiawen Weng, and Mingsong Chen. 2024b. Personalized Federated Instruction Tuning via Neural Architecture Search. arXiv preprint arXiv:2402.16919.
349
+
350
+ Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, and 1 others. 2022. OPT: Open Pre-trained Transformer Language Models. arXiv preprint arXiv:2205.01068.
351
+
352
+ Zhuo Zhang, Yuanhang Yang, Yong Dai, Qifan Wang, Yue Yu, Lizhen Qu, and Zenglin Xu. 2023c. Fedpetuning: When Federated Learning Meets the Parameter-Efficient Tuning Methods of Pre-Trained Language Models. In Proc. Annual Meeting of the Association of Computational Linguistics (ACL).
353
+
354
+ Algorithm 1: The pFedGPT Framework
355
+ Require: $N$ clients, global rounds $T$ , initial global LoRA $\Theta_0$ , local learning rate $\alpha$ , slow-start thresholds
356
+ Ensure: Local LoRA parameters $\{\hat{\Theta}_i\}_{i=1}^N$
357
+ 1: Server initializes $\Theta_0$
358
+ 2: for round $t = 1\dots T$ do
359
+ 3: Server selects subset of clients $I_t$ and sends $\Theta_{t-1}$
360
+ 4: for client $i \in I_t$ in parallel do
361
+ 5: if Slow Start = True then
362
+ 6: obtain $\mathcal{V}_{\text{low-fid}}$ for this round (cf. Eq. (1))
363
+ 7: $\theta_{t,i}^{\mathrm{init}} \gets \text{Algorithm HBO}$
364
+ 8: else
365
+ 9: $\theta_{t,i}^{\mathrm{init}} \gets \theta_{\mathrm{global}}$
366
+ 10: end if
367
+ 11: Local training:
368
+ $\Theta_t^i \gets \theta_{t,i}^{\mathrm{init}} - \alpha \nabla_\theta L(\theta_{t,i}^{\mathrm{init}}, \mathcal{D}_i)$
369
+ 12: Send $\Theta_t^i$ to server
370
+ 13: end for
371
+ 14: Server aggregates
372
+ $\Theta_t \gets \sum_{i \in I_t} \frac{k_i}{\sum_{j \in I_t} k_j} \Theta_t^i$
373
+ 15: end for
374
+ 16: return $\{\hat{\Theta}_i\}_{i=1}^N$
375
+
376
+ # A The pFedGPT Framework
377
+
378
+ The complete federated training process is described in Algorithm 1.
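+
+ To make the control flow concrete, the listing below gives a minimal Python sketch of Algorithm 1, assuming each LoRA state is a dict of parameter arrays. The helper names (`slow_start_condition`, `sample_low_fidelity_validation`, `run_hbo`, `local_finetune`) are hypothetical placeholders for the slow-start check, the validation sampling of Appendix B, the HBO routine, and local LoRA training; this is a schematic of the loop structure, not our released implementation.
+
+ ```python
+ import copy
+ import random
+
+ def pfedgpt_training(clients, global_lora, rounds, clients_per_round):
+     """Schematic of Algorithm 1: server/client loop with size-weighted LoRA aggregation."""
+     for t in range(1, rounds + 1):
+         selected = random.sample(clients, clients_per_round)        # line 3: server selects I_t, sends Theta_{t-1}
+         updates, sizes = [], []
+         for client in selected:                                     # line 4: clients run in parallel
+             if client.slow_start_condition(t):                      # line 5: slow-start thresholds (placeholder)
+                 v_low_fid = client.sample_low_fidelity_validation()   # line 6, cf. Eq. (1)
+                 init_lora = client.run_hbo(global_lora, v_low_fid)    # line 7: HBO-personalized initialization
+             else:
+                 init_lora = copy.deepcopy(global_lora)              # line 9: use the global LoRA directly
+             client.local_lora = client.local_finetune(init_lora)    # line 11: local SGD on D_i; client keeps its LoRA
+             updates.append(client.local_lora)                       # line 12: client sends Theta_t^i to the server
+             sizes.append(client.num_samples)
+         total = sum(sizes)                                          # line 14: size-weighted aggregation
+         global_lora = {
+             name: sum((k / total) * upd[name] for k, upd in zip(sizes, updates))
+             for name in global_lora
+         }
+     return global_lora  # each client additionally keeps its last client.local_lora as its personalized model
+ ```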
379
+
380
+ # B Details of Validation Subset Construction
381
+
382
+ # B.1 Selection and Clustering of Validation Subsets
383
+
384
+ Each client selects, from the global dataset, the subset most similar to its local data distribution as its local validation set. This is achieved by calculating the cosine similarity between the local and global dataset embeddings. The similarity scores are sorted to select the top $n$ most similar global data points as the local validation dataset $\mathbf{V}$. We then cluster the normalized embeddings $\mathbf{E}_{\mathrm{V,norm}}$ of $\mathbf{V}$ to identify groups of similar data points. The optimal number of clusters is determined by maximizing the silhouette score, yielding cluster labels $\mathbf{L}$ and cluster sizes $\mathbf{N}_{\mathrm{clusters}}$.
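+
+ As a rough illustration, this selection and clustering step could look like the sketch below, assuming embeddings for the local and global data are already available. Using the mean local embedding as the similarity query, the candidate size `n_candidates`, and the cluster search range are simplifying assumptions for illustration rather than details taken from our implementation.
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import KMeans
+ from sklearn.metrics import silhouette_score
+
+ def build_local_validation_set(local_emb, global_emb, n_candidates, max_clusters=10):
+     """Pick the top-n global points closest to the local distribution, then cluster them."""
+     # Cosine similarity between each global point and the (normalized) mean local embedding.
+     query = local_emb.mean(axis=0)
+     query /= np.linalg.norm(query)
+     global_norm = global_emb / np.linalg.norm(global_emb, axis=1, keepdims=True)
+     scores = global_norm @ query
+
+     # The top-n most similar global points form the candidate validation set V.
+     top_idx = np.argsort(scores)[::-1][:n_candidates]
+     e_v_norm = global_norm[top_idx]
+
+     # Choose the number of clusters that maximizes the silhouette score.
+     best_k, best_score, best_labels = 2, -1.0, None
+     for k in range(2, min(max_clusters, len(top_idx) - 1) + 1):
+         labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(e_v_norm)
+         score = silhouette_score(e_v_norm, labels)
+         if score > best_score:
+             best_k, best_score, best_labels = k, score, labels
+     cluster_sizes = np.bincount(best_labels, minlength=best_k)
+     return top_idx, best_labels, cluster_sizes
+ ```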
385
+
386
+ ![](images/e5847b72a0af82acdcbea619fe371f2d9c862cb6ae30b4be24a713fc951e2429.jpg)
387
+ Figure 4: Data distribution of datasets
388
+
389
+ # B.2 Sampling for Low-fidelity Validation Dataset
390
+
391
+ After clustering, we perform weighted probability sampling to obtain a low-fidelity validation dataset that best represents the overall data distribution. Let $\mathbf{V}_{\mathrm{sampled}}$ denote the sampled validation dataset and $\alpha$ a hyper-parameter representing the sampling ratio; the total sampling budget is then
392
+
393
+ $$
394
+ T = \left\lfloor \alpha \times |\mathbf{V}| \right\rfloor .
395
+ $$
396
+
397
+ For each cluster, the number of samples to be drawn is proportional to the size of the cluster:
398
+
399
+ $$
400
+ \mathbf{n}_c = \left\lfloor T \times \frac{\mathbf{N}_{\text{clusters}}[c]}{|\mathbf{V}|} \right\rfloor . \tag{1}
401
+ $$
402
+
403
+ The selected points from each cluster form the final low-fidelity validation dataset $\mathbf{V}_{\mathrm{sampled}}$ .
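+
+ A minimal sketch of this cluster-proportional draw, following Eq. (1), is given below; drawing without replacement inside each cluster and the fixed random seed are assumptions made for illustration.
+
+ ```python
+ import numpy as np
+
+ def sample_low_fidelity_validation(v_indices, labels, cluster_sizes, alpha, seed=0):
+     """Draw a cluster-proportional subset of V as the low-fidelity validation set V_sampled."""
+     rng = np.random.default_rng(seed)
+     v_indices = np.asarray(v_indices)
+     budget = int(np.floor(alpha * len(v_indices)))                  # T = floor(alpha * |V|)
+     sampled = []
+     for c, size in enumerate(cluster_sizes):
+         n_c = int(np.floor(budget * size / len(v_indices)))         # Eq. (1): n_c = floor(T * N_clusters[c] / |V|)
+         members = v_indices[np.asarray(labels) == c]
+         if n_c > 0:
+             sampled.extend(rng.choice(members, size=min(n_c, len(members)), replace=False))
+     return np.array(sampled)
+ ```
+
+ Combined with the previous sketch, `sample_low_fidelity_validation(top_idx, labels, cluster_sizes, alpha=0.3)` would return the indices of $\mathbf{V}_{\mathrm{sampled}}$; the value of $\alpha$ here is arbitrary.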
404
+
405
+ # C Data Distribution
406
+
407
+ We conducted our experiments on three datasets from prior federated learning research: Databricks-dolly-15k (Zhang et al., 2024a), and Flan 1 and Flan 2 (Yang et al., 2024). Each dataset covers eight different NLP tasks, and their data distributions are shown in Figure 4.
408
+
409
+ # D Training Evaluation Loss Comparison
410
+
411
+ In FL, as shown in Figure 5, local fine-tuning may converge faster initially than federated training (FedIT (Zhang et al., 2024a)), but it often results in lower final accuracy. Because our method aggregates locally trained LoRA parameters with globally aggregated LoRA parameters, we employ a personalized slow-start mechanism to avoid the local optima that arise when the aggregation weight is biased too heavily toward the local parameters early in training.
412
+
413
+ ![](images/8b82a280b3262df7afa8b01d0a15037750d1db19fbda9953d1db6c15b04e4fcf.jpg)
414
+ Figure 5: Training evaluation loss comparison between federated learning and local fine-tuning.
415
+
416
+ # E Training Details
417
+
418
+ # E.1 Dataset Splits
419
+
420
+ To simulate the scarcity of local data on each client (McMahan et al., 2017; Yang et al., 2024), we extracted $20\%$ of the data from each NLP task of the Databricks-dolly-15k dataset, making its volume comparable to the other two datasets. We set $20\%$ of the Databricks-dolly-15k data as the local test set for each client (Zhang et al., 2024b), while for the Flan datasets we followed the original training and testing split. For our method, we additionally draw 40 samples without replacement from the training data of each NLP task class to form a global validation set. Our method is trained on the segmented data (i.e., the training data after removing these validation samples), while the other methods are trained on the original data.
421
+
422
+ # E.2 Classical FL Baselines: Additional Details
423
+
424
+ To aid general readers, we provide concise descriptions of all classical FL baselines compared in our main results; a minimal sketch of the corresponding server-side update rules follows the list.
425
+
426
+ - FedAvg (McMahan et al., 2017). Clients perform multiple local SGD steps and the server averages model parameters weighted by client data sizes. This sharply reduces communication while preserving global convergence in many practical settings.
427
+ - FedAvgM (Hsu et al., 2019). Augments FedAvg with server-side momentum, smoothing historical update directions and improving convergence stability under non-IID data.
428
+ - FedAdagrad (Reddi et al., 2020). Maintains each parameter's cumulative squared gradient on the server and scales step sizes with Adagrad, reducing manual tuning and often speeding convergence under heterogeneous data.
429
+
430
+ - FedAdam (Reddi et al., 2020). Incorporates Adam's first- and second-moment estimates at the server, dynamically adapting to gradient magnitude and direction for greater robustness to noisy or sparse gradients.
431
+ - FedYogi (Reddi et al., 2020). Replaces Adam's second-moment update with Yogi's sign-corrected rule, curbing unbounded moment growth and mitigating learning-rate blow-ups on non-IID data.
432
+ - FedProx (Li et al., 2020). Adds a proximal term to each client's objective and allows variable local epochs, constraining update drift and handling both statistical and system heterogeneity.
433
+
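+ The sketch below contrasts the server-side updates of FedAvg, FedAvgM, and FedAdam on a single flattened parameter vector. It is a didactic simplification under our own assumptions: the hyperparameter defaults are illustrative, bias correction is omitted, and FedAdagrad, FedYogi, and FedProx follow analogous patterns not shown here.
+
+ ```python
+ import numpy as np
+
+ def fedavg(client_ws, client_sizes):
+     """FedAvg: size-weighted average of client parameter vectors."""
+     weights = np.asarray(client_sizes, dtype=float) / sum(client_sizes)
+     return sum(w * cw for w, cw in zip(weights, client_ws))
+
+ def fedavgm_step(global_w, client_ws, client_sizes, velocity, beta=0.9):
+     """FedAvgM: apply server momentum to the averaged pseudo-gradient."""
+     delta = global_w - fedavg(client_ws, client_sizes)   # pseudo-gradient (points away from the client average)
+     velocity = beta * velocity + delta
+     return global_w - velocity, velocity
+
+ def fedadam_step(global_w, client_ws, client_sizes, m, v, lr=1e-2, b1=0.9, b2=0.99, eps=1e-3):
+     """FedAdam: Adam-style adaptive server step on the averaged client update."""
+     delta = fedavg(client_ws, client_sizes) - global_w   # averaged client update (opposite sign convention)
+     m = b1 * m + (1 - b1) * delta
+     v = b2 * v + (1 - b2) * delta ** 2
+     return global_w + lr * m / (np.sqrt(v) + eps), m, v
+ ```
+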
434
+ These baselines were designed in the pre-LLM era; their limitations relative to our method highlight the need for LLM-tailored personalized FL (pFL) algorithms. Other baselines specifically designed for applying LoRA in FL with LLMs (e.g., FedIT (Zhang et al., 2024a), PerFIT (Zhang et al., 2024b), FedDPA (Yang et al., 2024)) are discussed in detail in the related-work section (see Section 6).
435
+
436
+ # E.3 Configurations
437
+
438
+ We used Alpaca-7B (Taori et al., 2023) as our base model. The LoRA hyperparameters and initialization, the optimizer settings, the prompt template, and the other model configurations follow the original FedIT setting (Zhang et al., 2024a), and the learning rates for the different datasets follow the corresponding FL studies (Yang et al., 2024; Zhang et al., 2024a). We set up 8 clients, each corresponding to one of the 8 task datasets, and activated all clients in every communication round, following the traditional pFL setup. All experiments run on 2 × A5000 GPUs (24 GB).
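+
+ For concreteness, a typical way to attach LoRA adapters to the QKV projections with the Hugging Face `peft` library is sketched below; the rank, scaling, and dropout values, as well as the checkpoint path, are placeholders, since the actual values follow the FedIT configuration.
+
+ ```python
+ from transformers import AutoModelForCausalLM
+ from peft import LoraConfig, get_peft_model
+
+ # Illustrative values only; the experiments reuse the FedIT hyperparameters.
+ lora_config = LoraConfig(
+     r=8,                                              # LoRA rank (placeholder)
+     lora_alpha=16,                                    # scaling factor (placeholder)
+     lora_dropout=0.05,                                # dropout on the LoRA branch (placeholder)
+     target_modules=["q_proj", "k_proj", "v_proj"],    # the QKV projections used in our experiments
+     task_type="CAUSAL_LM",
+ )
+
+ base_model = AutoModelForCausalLM.from_pretrained("path/to/alpaca-7b")  # local path or hub ID of an Alpaca-7B checkpoint
+ model = get_peft_model(base_model, lora_config)
+ model.print_trainable_parameters()                    # only the LoRA matrices are trainable
+ ```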
pfedgpthierarchicallyoptimizingloraaggregationweightsforpersonalizedfederatedgptmodels/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ee35e794c059165624cff0546bac71a282e728ce3f1d6f39aa4c3de3949e1b53
3
+ size 442319
pfedgpthierarchicallyoptimizingloraaggregationweightsforpersonalizedfederatedgptmodels/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e3a86d34cec2c0d5e441e7d10cafdfe9099bac53127cfcd44f937e502ee5ec95
3
+ size 490643
rewordbenchbenchmarkingandimprovingtherobustnessofrewardmodelswithtransformedinputs/9308c9c9-dc4a-4610-9331-248e0fb07376_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c93b40a26f672b083831122ed77508e8775489d572cafe6bcb33d040227d18ff
3
+ size 159985
rewordbenchbenchmarkingandimprovingtherobustnessofrewardmodelswithtransformedinputs/9308c9c9-dc4a-4610-9331-248e0fb07376_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5302a968f603be5ac7168388edbfe5982d48c1e906595ff753f5f3e19e4d63e6
3
+ size 191156