Figure 16: CKA Across Layers. Each row shows results under a different unlearning method: GA+GD (a–c), GA+KL (d–f), NPO (g–i), NPO+KL (j–l), and Rlable (m–o). Simple task on Yi-6B with fixed learning rate LR = 3×10⁻⁵ and varying unlearning requests N ∈ {6, 50, 100}. [Each panel plots linear CKA between the unlearned and relearned models against layer index.]
https://arxiv.org/abs/2505.16831v1

Figure 17: CKA Across Layers. Each row shows results under a different unlearning method: GA (a–c), NPO (d–f), and Rlable (g–i). All plots are for the complex task on Qwen2.5-7B, using three learning rates {3×10⁻⁶, 5×10⁻⁶, 3×10⁻⁵} and fixed N = 6. [Each panel plots linear CKA between the unlearned and relearned models against layer index.]

Figure 18: CKA Analysis under Varied Relearning and Evaluation Inputs on Yi-6B (Simple Task). (a–c): Relearning is performed using the forget set, retain set, or unrelated data, respectively. (d–f): CKA is measured using the forget set, retain set, or unrelated data as evaluation input.
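For reference, the similarity plotted in these figures is linear CKA between the layer activations of two models evaluated on the same inputs. A minimal sketch of the computation (assuming activations are collected as sample × feature matrices; the function name and shapes are illustrative, not the paper's code):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X: (n_samples, d1) activations from one model at a given layer
    Y: (n_samples, d2) activations from another model at the same layer
    Returns a similarity in [0, 1]; 1 means identical up to rotation/scale.
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-based formulation with linear kernels:
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den
```

Because the measure is invariant to orthogonal transforms and isotropic scaling of either representation, a per-layer CKA curve near 1 indicates the relearned model has recovered essentially the same representational geometry as the unlearned one.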
Figure 19: FIM for GA Across Layers. All plots are for the simple task on Yi-6B, using three learning rates {3×10⁻⁶, 5×10⁻⁶, 3×10⁻⁵} and fixed N = 100. [Each panel is a histogram of Fisher diagonal values (log scale) at a selected layer, comparing the Original, Unlearned, and Relearned models.]

Figure 20: FIM for GA+GD Across Layers. All plots are for the simple task on Yi-6B, using three learning rates {3×10⁻⁶, 5×10⁻⁶, 3×10⁻⁵} and fixed N = 100.

Figure 21: FIM for GA+KL Across Layers. All plots are for the simple task on Yi-6B, using three learning rates {3×10⁻⁶, 5×10⁻⁶, 3×10⁻⁵} and fixed N = 100.

Figure 22: FIM for NPO Across Layers. All plots are for the simple task on Yi-6B, using three learning rates {3×10⁻⁶, 5×10⁻⁶, 3×10⁻⁵} and fixed N = 100.

Figure 23: FIM for NPO+KL Across Layers. All plots are for the simple task on Yi-6B, using three learning rates {3×10⁻⁶, 5×10⁻⁶, 3×10⁻⁵} and fixed N = 100.

Figure 24: FIM for Rlable Across Layers. All plots are for the simple task on Yi-6B, using three learning rates {3×10⁻⁶, 5×10⁻⁶, 3×10⁻⁵} and fixed N = 100.

Figure 25: FIM for GA Across Layers. Simple task on Yi-6B with fixed learning rate LR = 3×10⁻⁵ and varying unlearning requests N ∈ {6, 50, 100}.
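The quantities histogrammed in these figures are diagonal entries of the empirical Fisher information matrix, i.e. per-parameter averages of squared log-likelihood gradients. A toy sketch on logistic regression illustrates the computation (illustrative only; the paper computes the analogous per-layer diagonals on the language model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def empirical_fisher_diag(w, X, y):
    """Diagonal of the empirical Fisher information for logistic regression.

    The per-example score (gradient of the log-likelihood in w) is
    g_i = (y_i - p_i) * x_i; the empirical Fisher diagonal is mean_i(g_i**2).
    """
    p = sigmoid(X @ w)                 # predicted P(y = 1 | x)
    scores = (y - p)[:, None] * X      # (n, d) per-example gradients
    return (scores ** 2).mean(axis=0)  # (d,) non-negative diagonal estimate
```

Large diagonal values mark parameters the loss is sensitive to, so comparing these per-layer histograms across the Original, Unlearned, and Relearned checkpoints shows where unlearning shifted parameter importance and whether relearning restored it.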
Figure 26: FIM for GA+GD Across Layers. Simple task on Yi-6B with fixed learning rate LR = 3×10⁻⁵ and varying unlearning requests N ∈ {6, 50, 100}.

Figure 27: FIM for GA+KL Across Layers. Simple task on Yi-6B with fixed learning rate LR = 3×10⁻⁵ and varying unlearning requests N ∈ {6, 50, 100}.

Figure 28: FIM for NPO Across Layers. Simple task on Yi-6B with fixed learning rate LR = 3×10⁻⁵ and varying unlearning requests N ∈ {6, 50, 100}.
Figure 29: FIM for NPO+KL Across Layers. Simple task on Yi-6B with fixed learning rate LR = 3×10⁻⁵ and varying unlearning requests N ∈ {6, 50, 100}.
[Figure panels omitted: FIM histograms at layers 31, 28, 22, 13, 4, and 1 for the Original, Unlearned, and Relearned models, with N ∈ {6, 50, 100}.]
https://arxiv.org/abs/2505.16831v1
Figure 30: FIM for Rlable Across Layers. Simple task on Yi-6B with fixed learning rate LR = 3×10⁻⁵ and varying unlearning requests N ∈ {6, 50, 100}.

[Figure panels omitted: FIM histograms at layers 28, 25, 13, 4, and 1 under learning rates {3×10⁻⁶, 5×10⁻⁶, 3×10⁻⁵}.]
Figure 31: FIM for GA Across Layers. All plots are for the complex task on Qwen2.5-7B, using three learning rates {3×10⁻⁶, 5×10⁻⁶, 3×10⁻⁵} and fixed N = 6.

[Figure panels omitted: FIM histograms at layers 28, 25, 13, 4, and 1 under learning rates {3×10⁻⁶, 5×10⁻⁶, 3×10⁻⁵}.]
Figure 32: FIM for NPO Across Layers. All plots are for the complex task on Qwen2.5-7B, using three learning rates {3×10⁻⁶, 5×10⁻⁶, 3×10⁻⁵} and fixed N = 6.
[Figure panels omitted: FIM histograms at layers 28, 25, 13, 4, and 1 under learning rates {3×10⁻⁶, 5×10⁻⁶, 3×10⁻⁵}.]
Figure 33: FIM for Rlable Across Layers. All plots are for the complex task on Qwen2.5-7B, using three learning rates {3×10⁻⁶, 5×10⁻⁶, 3×10⁻⁵} and fixed N = 6.
[Figure panels omitted: FIM histograms at layer 31 under varied relearning data and evaluation inputs.]
Figure 34: FIM in layer 31 under Varied Relearning and Evaluation Inputs on Yi-6B (Simple Task). (a–c): Relearning is performed using the forget set, retain set, or unrelated data, respectively. (d–f): FIM is measured using the forget set, retain set, or unrelated data as the evaluation input.
[Figure panels omitted: FIM histograms at layer 25 under varied relearning data and evaluation inputs.]
Figure 35: FIM in layer 25 under Varied Relearning and Evaluation Inputs on Yi-6B (Simple Task). (a–c): Relearning is performed using the forget set, retain set, or unrelated data, respectively. (d–f): FIM is measured using the forget set, retain set, or unrelated data as the evaluation input.
[Figure panels omitted: FIM histograms at layer 16 under varied relearning data and evaluation inputs.]
Figure 36: FIM in layer 16 under Varied Relearning and Evaluation Inputs on Yi-6B (Simple Task). (a–c): Relearning is performed using the forget set, retain set, or unrelated data, respectively. (d–f): FIM is measured using the forget set, retain set, or unrelated data as the evaluation input.
[Figure panels omitted: FIM histograms at layer 4 under varied relearning data and evaluation inputs.]
Figure 37: FIM in layer 4 under Varied Relearning and Evaluation Inputs on Yi-6B (Simple Task). (a–c): Relearning is performed using the forget set, retain set, or unrelated data, respectively. (d–f): FIM is measured using the forget set, retain set, or unrelated data as the evaluation input.
[Figure panels omitted: FIM histograms at layer 1 under varied relearning data and evaluation inputs.]
Figure 38: FIM in layer 1 under Varied Relearning and Evaluation Inputs on Yi-6B (Simple Task). (a–c): Relearning is performed using the forget set, retain set, or unrelated data, respectively. (d–f): FIM is measured using the forget set, retain set, or unrelated data as the evaluation input.
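The FIM histograms in the figures above plot per-parameter diagonal Fisher values on a log scale. As a self-contained illustration of how such values can be estimated — using a toy logistic-regression model rather than the paper's Yi-6B setup — the empirical diagonal Fisher is the dataset mean of squared per-parameter gradients of the negative log-likelihood:

```python
import numpy as np

def diag_fisher(X, y, w, b):
    """Empirical diagonal Fisher for logistic regression: the dataset mean
    of squared per-parameter gradients of the negative log-likelihood."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(y = 1 | x)
    resid = (p - y)[:, None]                # per-example dNLL/dlogit
    grad_w = resid * X                      # per-example gradient w.r.t. w
    fisher_w = np.mean(grad_w ** 2, axis=0)
    fisher_b = np.mean(resid ** 2)
    return fisher_w, fisher_b

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 8))
y = (X[:, 0] - X[:, 1] > 0).astype(float)
fisher_w, fisher_b = diag_fisher(X, y, w=np.zeros(8), b=0.0)

# The plots above then histogram these values on a log scale:
counts, edges = np.histogram(np.log10(fisher_w + 1e-30), bins=10)
```

For a transformer, the same quantity would be accumulated per layer over the forget/retain/unrelated evaluation set, which is what distinguishes the Original, Unlearned, and Relearned curves in these figures.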
From EduVisBench to EduVisAgent: A Benchmark and Multi-Agent Framework for Reasoning-Driven Pedagogical Visualization

Haonian Ji1*, Shi Qiu1*, Siyang Xin1*, Siwei Han1*, Zhaorun Chen2, Dake Zhang3, Hongyi Wang3, Huaxiu Yao1
1UNC-Chapel Hill, 2University of Chicago, 3Rutgers University

Abstract

While foundation models (FMs), such as diffusion models and large vision-language models (LVLMs), have been widely applied in educational contexts, their ability to generate pedagogically effective visual explanations remains limited. Most existing approaches focus primarily on textual reasoning, overlooking the critical role of structured and interpretable visualizations in supporting conceptual understanding. To better assess the visual reasoning capabilities of FMs in educational settings, we introduce EduVisBench, a multi-domain, multi-level benchmark. EduVisBench features diverse STEM problem sets requiring visually grounded solutions, along with a fine-grained evaluation rubric informed by pedagogical theory. Our empirical analysis reveals that existing models frequently struggle with the inherent challenge of decomposing complex reasoning and translating it into visual representations aligned with human cognitive processes. To address these limitations, we propose EduVisAgent, a multi-agent collaborative framework that coordinates specialized agents for instructional planning, reasoning decomposition, metacognitive prompting, and visualization design. Experimental results show that EduVisAgent substantially outperforms all baselines, achieving a 40.2% improvement and delivering more educationally aligned visualizations. EduVisBench and EduVisAgent are available at github.com/aiming-lab/EduVisBench and github.com/aiming-lab/EduVisAgent.

1 Introduction

“To truly teach is not to tell the answer, but to illuminate the path.”
While foundation models (FMs), such as diffusion models and large vision-language models (LVLMs), have been extensively adopted in educational domains (Chu et al., 2025; Wang et al., 2024a), including pedagogical agents providing automated classroom assistance and science learning agents offering textual explanations of problem-solving processes (Wu et al., 2023), their applications have predominantly focused on text-based interactions (Wu et al., 2023; Xu et al., 2024). However, in education, especially in K-12 settings, creating compelling visualizations is crucial for cognitive comprehension and overall learning effectiveness (Presmeg, 2006). Despite its importance, there is currently limited understanding of how FMs can effectively generate visually grounded elements (e.g., diagrams, interactive education tools, illustrative graphics) to support the pedagogical illustration of problem-solving processes.

*Equal contribution. Correspondence to: {haonianj, huaxiu}@cs.unc.edu

[Figure 1 content: “The common method for producing carbon dioxide in the laboratory is ( ) A: Charcoal burns. B: Sodium carbonate reacts with dilute hydrochloric acid. C: Reaction between Marble and Dilute Sulfuric Acid. D: Reaction between Limestone and Dilute Hydrochloric Acid. Visualize the problem-solving process.”]
Figure 1: GPT-4o fails to illustrate its problem-solving with high-quality, logical, and explanatory visualization.

Currently, generating visually grounded elements for pedagogical reasoning poses several challenges: (1) decomposing complex reasoning into representable steps that align closely with human cognitive processes is non-trivial (Yang et al., 2024; Chen et al., 2024d); (2) precisely producing visual aids for each sub-step to optimally support learners is challenging (Hong et al., 2025); and (3) different educational domains require distinct visualization styles and formats, making consistent and adequate visual aid delivery difficult (Pandey and Ottley, 2025).
https://arxiv.org/abs/2505.16832v2

Addressing these
obstacles first requires a clear picture of how current FMs perform, so that future models can be purpose-built to close the gaps. Consequently, a comprehensive evaluation platform is critical for systematically assessing FMs on visual pedagogical reasoning.

To bridge this gap, we introduce EduVisBench, a multi-domain, multi-level benchmark designed to evaluate the capacity of foundation models to generate pedagogically effective, step-by-step visual reasoning. EduVisBench comprises structured problem sets across diverse domains, each requiring multimodal-centric reasoning and solutions that prioritize visualization principles such as interpretability, cognitive alignment, and instructional clarity to achieve high evaluation scores. To facilitate a detailed evaluation, we further develop a fine-grained rubric enabling multidimensional assessments of AI-generated visual outputs, focusing explicitly on pedagogical criteria such as contextual relevance, visual clarity, multimodal coherence, reasoning support, and interactive engagement.

Utilizing this benchmark, we conduct extensive evaluations on a variety of FMs and agents. Our findings reveal that although current models achieve predominantly correct step-by-step textual analyses, they frequently fail to generate useful or faithful visualizations, as depicted in Figure 1. Specifically, our systematic analysis highlights recurring challenges, including (1) semantic misalignments between textual explanations and visual components, (2) omissions of critical steps within rendered diagrams, and (3) structural inconsistencies in code-based visual outputs, collectively undermining accuracy, clarity, and interactivity.

To address these limitations, we introduce a multi-agent collaborative framework, EduVisAgent, designed to simulate the complete learning journey, from initial problem exposure to deep conceptual understanding.
Specifically, a central planning agent orchestrates six specialized expert agents dedicated to visualization design, cognitive scaffolding, and metacognitive regulation. A synthesis module then integrates these expert outputs into interactive, personalized learning webpages tailored specifically to human learners. Experimental results demonstrate that our proposed method, EduVisAgent, achieves an average improvement of 40.2% over the current SOTA method. This underscores the effectiveness of our approach: leveraging modular specialization and collaborative integration to produce robust and visually grounded learning solutions.

Figure 2: Dataset distribution of EduVisBench. Each domain encompasses various sub-domains, collectively covering 15 comprehensive pedagogical scenarios.

2 EduVisBench Benchmark

2.1 Overview

In this section, we introduce EduVisBench, a novel and challenging benchmark designed to evaluate the capability of models to generate logical and explanatory visualizations for educational purposes. As shown in Figure 2, EduVisBench comprises 1,154 carefully curated STEM questions across three academic subjects and 15 distinct domains, organized into three levels of difficulty. In addition to assessing accuracy in step-by-step problem solving, EduVisBench places particular emphasis on a model's ability to communicate the reasoning process clearly and visually, helping students understand problems through structured, interpretable visual outputs, as illustrated in Figure 3.

Specifically, EduVisBench adopts a multimodal setting in which models are provided with both textual and visual inputs and are tasked with producing diverse output formats, including interactive web pages and visual diagrams. Beyond evaluating the correctness of final answers, we introduce a fine-grained evaluation framework that assesses the quality of visualizations across
five key dimensions: (1) the logical sequencing of visual elements, (2) the structural richness of the visuals, (3) semantic alignment with the underlying subject matter, (4) the clarity and guidance provided for problem solving, and (5) the level of interactivity and engagement. In the following subsections, we describe our dataset curation process and the design of the evaluation rubric in detail.

[Figure 3 content: example questions with per-dimension rubric scores. Chemistry: “Discuss the properties and effects of oxygen based on your knowledge. In your answer, address whether: 1. Oxygen can act as a fuel. 2. The proportion of oxygen in the air is approximately 21%.” (Visual Guidance 5/5, Visual Design 5/5, Coordination 5/5, Learning Guidance 5/5, Interactivity 5/5; total 100%). Mathematics: “Fill in the blank to make the equation true. 7 + 9 = ___” (Visual Guidance 5/5, Visual Design 4/5, Coordination 5/5, Learning Guidance 5/5, Interactivity 3/5; total 88%). Physics: “A force of 50 N is applied to a 5 kg block. What is the acceleration of the block?” (Visual Guidance 5/5, Visual Design 4/5, Coordination 5/5, Learning Guidance 5/5, Interactivity 4/5; total 92%).]
Figure 3: Representative examples from EduVisBench, featuring questions from Maths, Chemistry, and Physics alongside their corresponding high-scoring visual explanations. These interactive visualizations, generated by our multi-agent system EduVisAgent, exemplify well-designed, pedagogically effective outputs for STEM problems.

2.2 Dataset Curation

EduVisBench is built from several high-quality public educational resources that we carefully curated, translated, and adapted to support multimodal visualization learning tasks. Specifically, the chemistry questions are sourced from the C-MHChem-Benchmark (Zhang et al., 2024a), originally presented in Chinese and meticulously translated into English with careful attention to scientific accuracy and terminology.
The physics questions are drawn from the high-school-physics (Rohith, 2023) dataset, which includes a range of conceptual and quantitative exercises suitable for secondary-level learners. The mathematics component combines easy-level problems from the Illustrative Mathematics curriculum with medium- to hard-level questions selected from the MATH-500 (Lightman et al., 2023) dataset. Furthermore, each domain encompasses diverse sub-domains, collectively covering 15 comprehensive pedagogical scenarios, as illustrated in Figure 2. All data sources were standardized into a unified format and consolidated to enable consistent and comprehensive evaluation across subjects.

2.3 Evaluation Metric

In this subsection, we detail the performance evaluation rubrics in EduVisBench.

Evaluation Dimensions. To comprehensively evaluate the quality of generated visualizations in supporting student understanding and learning, we introduce a fine-grained scoring metric grounded in five pedagogically motivated dimensions: (1) Context Visualization: evaluates how clearly the visualization situates the problem within a relevant context; (2) Diagram Design: assesses the clarity, accuracy, and effectiveness of the diagrams used to represent information; (3) Text–Graphic Integration: measures the coherence between textual explanations and visual elements, ensuring mutual reinforcement; (4) Thought Guidance: examines the extent to which the visualization supports reasoning processes and highlights critical thinking steps; (5) Interactivity: evaluates whether and how the visualization invites student engagement, reflection, or active manipulation. Each dimension captures a distinct aspect of effective multimedia learning, with detailed rubrics provided in Appendix
A.1 to guide the scoring process.

Rubrics and Evaluation Procedure. To ensure a systematic and replicable evaluation pipeline, we manually construct detailed scoring rubrics for each of the five evaluation dimensions, with clear criteria defined for every level on a 0–5 scale. The visual outputs generated by each model are then assessed using GPT-4o within a controlled evaluation environment. For static image outputs from text-to-image models, the images are evaluated directly. For LVLMs, the generated SVG or HTML code is first rendered into its final visual form, either as a static image or an interactive webpage, before being evaluated. A detailed demonstration of our scoring procedure is given in Figure 4.

[Figure 4 diagram omitted: a visualization prompt and question are fed to diffusion models, LVLMs, or a visual agent; generated code is rendered (SVG image or webpage screenshot) and scored by GPT-4o on the five dimensions to produce a total score.]
Figure 4: Workflow for the EduVisBench benchmark evaluation. Models receive a visualization prompt and a question to generate visual outputs. All resulting visualizations undergo evaluation by GPT-4o across five dimensions to compute a total performance score.

3 EduVisAgent

Using the proposed EduVisBench benchmark, we systematically evaluate the performance of existing text-to-image models and LVLMs (see detailed results in Table 1 in Section 4). We find that most models perform poorly, with average scores below 50 on a 0–100 scale. This underperformance underscores the inherent challenge of decomposing complex reasoning and translating it into visual representations that align with human cognitive processes to effectively support education, a task that remains highly non-trivial.
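For concreteness, the arithmetic behind these 0–100 scores is straightforward: each of the five rubric dimensions is scored 0–5, summed to a 25-point maximum, and rescaled to 0–100. A minimal sketch (the dictionary keys are paraphrases of the rubric names, not an official schema):

```python
# Five rubric dimensions, each scored 0-5; total of 25 rescaled to 0-100.
DIMENSIONS = ("context_visualization", "diagram_design",
              "text_graphic_integration", "thought_guidance", "interactivity")

def total_score(scores: dict) -> float:
    """Normalize five 0-5 rubric scores to a 0-100 total."""
    assert set(scores) == set(DIMENSIONS)
    assert all(0 <= s <= 5 for s in scores.values())
    return 100.0 * sum(scores.values()) / (5 * len(DIMENSIONS))

# The 88% mathematics example from Figure 3: 5 + 4 + 5 + 5 + 3 = 22 -> 88.0
maths_example = total_score({"context_visualization": 5, "diagram_design": 4,
                             "text_graphic_integration": 5, "thought_guidance": 5,
                             "interactivity": 3})
```

This matches the reported totals: a perfect 25/25 maps to 100%, and scores "below 50" correspond to fewer than 12.5 rubric points.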
To address these challenges, we propose a multi-agent system, EduVisAgent, inspired by pedagogical theories and designed to emulate the division of labor and collaborative reasoning found in expert instructional design. EduVisAgent consists of five specialized yet interdependent agents: a Task Planning Agent, which structures the instructional objective; a Conceptual Mapping Agent, which extracts and organizes key information; a Reasoning Decomposition Agent, which constructs step-by-step problem-solving logic; a Metacognitive Reviewer, which encourages summarization and learner reflection; and a Visualization Agent, which generates appropriate visual representations. This design introduces modularity and pedagogical interpretability by embedding distinct instructional roles directly into the agent workflow. The overall operation of EduVisAgent proceeds in two stages: (1) instructional flow construction and (2) collaborative solution generation, as detailed below.

[Figure 5 diagram omitted: a question flows through the Task Planning Agent (producing subgoals, expectations, and formulas) to the Conceptual Mapping, Reasoning Decomposition, Metacognitive Reviewer, and Visualization Agents, yielding an interactive webpage.]
Figure 5: The structure of EduVisAgent.

3.1 Instructional Flow Construction

The first stage of EduVisAgent focuses on formulating a well-structured instructional task based on the original problem. A key challenge lies in analyzing the underlying reasoning structure, identifying implicit logical dependencies, and associating each reasoning step with relevant conceptual knowledge. To address this, we employ the Task Planning Agent, which systematically organizes the problem into an instructional format suitable for multimodal visualization. Its main functions include: (1) breaking down the problem into coherent subgoals, (2) clarifying the reasoning expected at each step, (3) aligning each step with domain-specific principles or formulas, and (4) anticipating potential student misconceptions or cognitive needs.
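As a purely illustrative sketch, the Task Planning Agent's output corresponding to functions (1)–(4) could be represented as a structured plan; the field names and the worked physics item (from Figure 3) are assumptions for illustration, not the paper's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    subgoal: str                 # (1) a coherent subgoal
    expected_reasoning: str      # (2) the reasoning expected at this step
    principles: list = field(default_factory=list)       # (3) formulas/principles
    misconceptions: list = field(default_factory=list)   # (4) anticipated pitfalls

# Hypothetical plan for the Figure 3 physics item (Newton's second law).
plan = [
    PlanStep(
        subgoal="Identify the known quantities",
        expected_reasoning="Read off F = 50 N and m = 5 kg from the problem.",
        principles=["Newton's second law: F = m * a"],
        misconceptions=["Confusing mass with weight"],
    ),
    PlanStep(
        subgoal="Solve for the acceleration",
        expected_reasoning="Rearrange to a = F / m = 50 / 5.",
        principles=["a = F / m"],
    ),
]
a = 50.0 / 5.0  # the worked answer the visualization should lead to: 10.0 m/s^2
```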
This
structured formulation provides a pedagogically grounded foundation that guides the downstream agents in generating coherent, targeted, and educationally effective visual explanations.

3.2 Collaborative Solution Generation

In this stage, EduVisAgent executes the instructional task constructed in the first stage by sequentially activating a set of specialized agents, each responsible for completing a specific aspect of the task. As shown in Figure 5, these agents operate in a coordinated manner to enhance the coherence of the instructional logic, improve the clarity of the visual representation, and ensure alignment with educational objectives. Specifically:

Conceptual Mapping Agent. This agent is responsible for extracting and organizing the core components of the input problem. Drawing on the Concrete–Representational–Abstract (CRA) instructional model (Nugroho and Jailani, 2019), it classifies information into three categories: concrete entities, representational elements, and abstract constructs. The agent conducts fine-grained categorization and semantic summarization to support downstream visualization modules.

Reasoning Decomposition Agent. This agent decomposes complex problems into manageable subcomponents and provides step-specific instructional guidance. It applies the memory-oriented FOPS strategy (Miller and Cohen, 2020): find the problem type (e.g., equation solving, conceptual reasoning, commonsense application, or graphical interpretation), organize the structure via equations or diagrams, plan the solution path, and solve the task. Based on the decomposed steps, the agent also identifies critical instructional points that require additional support, especially those that benefit from visual scaffolding or interactive guidance.

Metacognitive Reviewer. Grounded in metacognitive theory (Schraw and Moshman, 1995), this agent supports learners in monitoring their comprehension and reasoning processes.
It generates reflective prompts that foster self-questioning and self-correction, encouraging learners to evaluate the soundness of their problem-solving approaches.

Visualization Agent. This agent is responsible for constructing the “visual guidance” component of the instructional output. Instead of relying on decorative visuals, it emphasizes the use of abstract yet pedagogically effective representations, such as number lines, bar charts, schematic object illustrations, graphic organizers, sketch diagrams, and structured data tables. The agent ensures that each visualization is tightly aligned with the underlying abstract concept being taught. All visuals are rendered using the v0 (Vercel, 2025) system for web-based deployment.

4 Experiments

This section outlines the experimental setup for benchmarking various foundation models on EduVisBench. We evaluate Diffusion Models, LVLMs, a specialized visualization agent (v0), and our proposed EduVisAgent. Our investigation seeks to address the following key questions: (1) How proficient are existing models at generating high-quality, explanatory visualizations within EduVisBench? (2) Can the proposed EduVisAgent system outperform current models? (3) What distinct performance patterns emerge across different model architectures, academic disciplines, and evaluation dimensions in EduVisBench?

4.1 Experiment Setup

Baseline Models. Our experimental evaluation encompasses a range of FMs, categorized as follows: (1) Image Generation Models: This category includes Flux.1-dev (Labs, 2024), Stable Diffusion 3.5 Large (SD3.5) (IT Admin, 2024), and Stable Diffusion XL Base 1.0 (SDXL) (Podell et al., 2023). These models are tasked with generating static images directly from textual or visual inputs. (2) Large Vision-Language Models (LVLMs): We evaluate Deepseek-VL2 (Wu et al., 2024), GLM-4V-9B (GLM et al.,
https://arxiv.org/abs/2505.16832v2
2024), MiniCPM-V2.6 (Yao et al., 2024), Mistral-Small-3.1-24B-Instruct-2503 (Mistral AI, 2025), Phi-3.5-Vision-Instruct (Abdin et al., 2024), Phi-4-Multimodal-Instruct (Abouelenin et al., 2025), Qwen2.5-VL-72B (Team, 2025), GPT-4o (Hurst et al., 2024), Claude 3.7 Sonnet (Anthropic, 2025), and Gemini 2.0 Flash (Mallick and Kilpatrick, 2025). These models are prompted to generate SVG or HTML code, which is then rendered into visual outputs for evaluation. (3) Specialized Visualization Agent: We also assess v0 (Vercel, 2025), an AI agent specifically designed to create interactive web pages from instructional content.

Evaluation Setups. During evaluation, all generated visualizations are standardized into image format. For interactive web pages containing buttons, an automated script navigates through all accessible sub-pages, capturing individual screenshots of each. Performance is assessed using the evaluation metric described in Section 2.3, where GPT-4o scores the visual outputs against predefined rubrics, assigning a score from 0 to 5 for each of the five dimensions. The cumulative score (maximum 25 points) is then normalized to a 0–100 scale for standardized reporting and comparison.

4.2 Baseline Benchmarking

The performance of all evaluated baseline models is detailed in Table 1. Across all evaluated models, the average scores indicate significant room for improvement. Diffusion models generally exhibited the lowest performance, with average scores ranging from 13.8% (Flux.1-dev) to 21.8% (SDXL). This suggests that direct static image generation, while capable of producing visual elements, struggles substantially with the nuanced requirements of explanatory and guiding visualizations for the complex logical problems in our benchmark.

Table 1: Performance of Diffusion Models, Large Vision-Language Models, and v0 on EduVisBench. Columns give Easy (E), Medium (M), and Hard (H) scores per subject.

| Method | Vis. Type | Maths E | Maths M | Maths H | Phys. E | Phys. M | Phys. H | Chem. E | Chem. M | Chem. H | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|
| *Diffusion Model* | | | | | | | | | | | |
| Flux.1-dev | Image | 13.8 | 13.4 | 13.2 | 11.7 | 8.5 | 10.0 | 20.0 | 16.6 | 16.0 | 13.8 |
| SD3.5 | Image | 17.3 | 20.3 | 18.8 | 16.8 | 13.0 | 12.0 | 22.8 | 21.7 | 34.0 | 18.4 |
| SDXL | Image | 17.3 | 23.3 | 25.5 | 18.9 | 15.4 | 24.0 | 33.6 | 30.2 | 24.0 | 21.8 |
| *Large Vision-Language Model* | | | | | | | | | | | |
| Deepseek VL2 | Webpage | 20.3 | 17.1 | 15.7 | 17.9 | 17.0 | 20.0 | 16.4 | 13.8 | 14.0 | 17.5 |
| GLM-4V-9B | Webpage | 22.3 | 21.1 | 19.4 | 24.5 | 21.5 | 24.0 | 22.3 | 21.5 | 16.0 | 21.9 |
| MiniCPM-V-2.6 | Webpage | 24.1 | 17.3 | 15.5 | 19.1 | 17.4 | 20.0 | 14.5 | 15.2 | 12.0 | 19.3 |
| Mistral-Small-3.1 | Webpage | 29.1 | 31.6 | 32.2 | 32.3 | 33.5 | 20.0 | 30.6 | 27.5 | 24.0 | 30.2 |
| Phi-3.5 | Webpage | 25.3 | 20.7 | 19.1 | 21.2 | 19.5 | 12.0 | 20.0 | 18.6 | 20.0 | 21.8 |
| Phi-4 | Webpage | 26.1 | 25.1 | 22.9 | 27.8 | 25.5 | 24.0 | 31.2 | 27.5 | 12.0 | 26.4 |
| Qwen2.5-VL-72B | Webpage | 24.3 | 18.1 | 15.8 | 19.7 | 17.1 | 24.0 | 18.2 | 16.4 | 12.0 | 20.0 |
| Claude 3.7 Sonnet | SVG | 61.2 | 26.7 | 23.6 | 18.5 | 16.9 | 14.0 | 47.5 | 47.2 | 18.0 | 42.0 |
| Claude 3.7 Sonnet | Webpage | 56.2 | 57.5 | 55.6 | 44.8 | 42.6 | 24.0 | 61.1 | 60.6 | 64.0 | 54.6 |
| GPT-4o | Webpage | 47.6 | 39.3 | 37.9 | 25.7 | 24.2 | 24.0 | 34.3 | 32.6 | 36.0 | 38.1 |
| GPT-4o | SVG | 36.1 | 19.7 | 19.5 | 13.0 | 12.8 | 4.0 | 30.0 | 27.5 | 22.0 | 26.3 |
| Gemini 2.0 Flash | Webpage | 46.9 | 9.5 | 15.7 | 31.7 | 26.5 | 24.0 | 32.0 | 25.8 | 30.0 | 43.6 |
| *Visualization Agent* | | | | | | | | | | | |
| v0 | Webpage | 63.0 | 37.6 | 47.2 | 53.3 | 58.5 | 52.0 | 74.7 | 52.8 | 68.0 | 58.2 |

LVLMs typically scored between 17.5% (Deepseek VL2) and 30.2% (Mistral-Small-3.1). Notable exceptions include Gemini 2.0 Flash (43.6%) and Claude 3.7 Sonnet;
the latter performed markedly better with webpage output (54.6%) than with SVG (42.0%). GPT-4o likewise favored webpage generation (38.1%) over SVG (26.3%), suggesting that prompting advanced LVLMs for structured, interactive webpages can yield more effective visual explanations. Nevertheless, even these top-tier LVLMs face considerable challenges in consistently meeting all of the evaluation criteria. The visualization agent v0, specifically designed for webpage generation, achieved the highest average score among all baseline models at 58.2%. This result highlights the advantage of a specialized agent over more general-purpose FMs on this task.

4.3 Performance Analysis of EduVisBench

Building upon the insights gained from the baseline evaluations, we assessed our proposed multi-agent system, EduVisAgent. The results, presented in Table 2, demonstrate a substantial leap in performance for generating explanatory, logically coherent visualizations for STEM problems. EduVisAgent achieved an overall average score of 81.6%. Specifically, EduVisAgent surpasses the best-performing baseline, v0 (58.2%), by a remarkable 23.4 percentage points. This constitutes an approximately 40.2% relative improvement, underscoring the efficacy of our multi-agent architecture and the integration of educational methodologies. Compared to the best-performing LVLM (Claude 3.7 Sonnet with webpages at 54.6%) and the top diffusion model (SDXL at 21.8%), the advantage offered by EduVisAgent is even more pronounced. These results indicate that the design principles underlying EduVisAgent, which incorporate a multi-agent structure and pedagogical strategies, effectively address many of the limitations observed in existing generative models.

4.4 Case Analysis

To further illustrate the limitations of existing baselines and how our approach addresses them, we present two case studies in Figure 6.
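The scoring arithmetic behind these comparisons is simple to reproduce. The following is a minimal sketch in plain Python; the function and variable names are ours, not from the paper's evaluation code:

```python
def normalize(dimension_scores):
    """Convert five 0-5 rubric scores (max 25 points) to the 0-100 scale
    used for reporting in Table 1 and Table 2."""
    total = sum(dimension_scores)
    assert len(dimension_scores) == 5 and 0 <= total <= 25
    return total / 25 * 100

# Relative improvement of EduVisAgent (81.6) over the best baseline, v0 (58.2):
absolute_gain = 81.6 - 58.2            # 23.4 percentage points
relative_gain = absolute_gain / 58.2   # ~0.402, i.e. the ~40.2% quoted above
```

For example, a visualization scoring 4/5 on every dimension would report as `normalize([4, 4, 4, 4, 4]) == 80.0`.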
On the left, for a chemistry question, the GPT-4o-generated solution lacks intuitive visualization of the chemical processes, resulting in fragmented information without visual guidance, reflected in a low score of just 28%. In contrast, EduVisAgent begins by displaying background images of the relevant chemical elements, activating students' prior knowledge. It then contextualizes each of the four answer options with real-world scenarios, thereby enhancing students' understanding of the underlying chemical transformations.

| Method | Vis. Type | Subject | Easy | Medium | Hard | Avg |
|---|---|---|---|---|---|---|
| EduVisAgent | Webpage | Maths | 90.2 | 64.5 | 65.0 | 81.6 (overall) |
| | | Physics | 85.3 | 81.7 | 84.0 | |
| | | Chemistry | 69.0 | 76.3 | 76.0 | |

Table 2: Overall comparison of models: (a) performance of our EduVisAgent on EduVisBench (above); (b) comparison of average score across all models (bar chart, not reproduced here). EduVisAgent achieves the highest average score among all models.

Figure 6: Baseline models versus our EduVisAgent. These examples demonstrate the often poor output quality of baseline models, contrasting sharply with the high-quality, effective visualizations produced by EduVisAgent. [Figure panels. Left, "Lack of Visual Guidance": a chemistry question ("Which of the following changes is a chemical change? A: drying wet clothes; B: glass shattering; C: melting snow; D: paper burning") answered by GPT-4o, scored Visual Guidance 2/5, Visual Design 2/5, Coordination 2/5, Learning Guidance 1/5, Interactivity 0/5, total 28%; the image fails to illustrate the chemical process, lacks a clear step-by-step instructional approach, and offers no intuitive visual representation. Right, "Visual Information Misrepresentation": a physics question ("A Carnot engine operates between 27°C and 127°C. What is its efficiency?") answered by Gemini, scored Visual Guidance 2/5, Visual Design 3/5, Coordination 3/5, Learning Guidance 2/5, Interactivity 0/5, total 40%; the image contains only one chart whose 300 K and 400 K bars have identical heights. Both are contrasted with EduVisAgent outputs.]

Conversely, for the Carnot-cycle efficiency physics problem (right side of Figure 6), the Gemini solution presents a single, flawed chart. Its depiction of the 300 K and 400 K temperatures with identical bar heights introduces visual misinformation, failing to represent the difference in the data and thereby diminishing its pedagogical value. In stark contrast, EduVisAgent employs a multi-agent collaborative approach: it first generates a concrete factory scene to activate students' working memory of the "heat engine" concept. It then constructs an accurate Carnot-cycle diagram and offers a step-by-step problem breakdown, fostering clear conceptual understanding. Crucially, EduVisAgent provides interactive visualization components, enabling users to dynamically adjust temperatures via sliders and observe real-time changes in heat-engine efficiency. This interactive engagement significantly facilitates higher-order thinking skills.

Overall, through coordinated multi-agent optimization of image design, instructional structure, and learning pathways, EduVisAgent significantly outperforms traditional single-model approaches in accuracy, guidance, and interactivity.

4.5 Fine-Grained Analysis on Five Evaluation Dimensions

Figure 7 reveals distinct performance profiles for the eight best-performing evaluated models. In Context Visualization and Diagram Design, most baselines, including SDXL, Claude 3.7, and v0, exhibit moderate to low scores, often struggling to provide rich situational cues or pedagogically sound visual structures, especially for complex problems.
v0 and Claude show relatively better capabilities in Text–Graphic Integration and Thought Guidance than other FMs, which generally offer minimal support in these areas. However, all baseline models, including v0, are significantly limited in the Interactivity dimension, primarily due to their output format (static images/SVG or less dynamic webpages). In contrast, our EduVisAgent demonstrates consistently strong performance across all five dimensions. It particularly excels in creating rich context visualizations and well-structured diagram designs, and in ensuring seamless text–graphic integration. Furthermore, EduVisAgent provides superior thought guidance and achieves notably high scores in Interactivity, areas where the baseline models significantly lag. This comprehensive strength highlights EduVisAgent's ability to generate not just visualizations, but truly effective and interactive pedagogical tools.

Figure 7: Fine-grained performance comparison across our five key evaluation dimensions.

5 Related Work

LLM for Pedagogical Assistance. Foundation models (FMs), including diffusion models and large vision-language models (LVLMs), have been increasingly adopted in educational contexts (Chu et al., 2025; Wang et al., 2024a) to support teaching and classroom interactions. EduAgent (Xu et al., 2024) and TeachTune (Jin et al., 2025) enhance the problem-solving process through automated simulations of student–teacher dialogues, collaborative learning, and task-oriented reasoning. Agents such as SEFL (Zhang et al., 2025) and PROF (Nair et al., 2024) synthesize immediate, on-demand feedback to support large-scale instructional scenarios. Furthermore, domain-specific agents such as MathChat (Wu et al., 2023), NEWTON (Wang et al., 2023b), and MEDCO (Wei et al., 2024) provide textual explanations tailored to scientific and medical education. While these systems address diverse pedagogical needs, their focus remains largely on
text-based interactions (Wu et al., 2023; Xu et al., 2024; Cui et al., 2024), overlooking the critical role of visualization in fostering conceptual understanding and improving learning outcomes (Presmeg, 2006). Despite its pedagogical importance, the capacity of FMs and agents to generate logical, explanatory visual illustrations remains underexplored. EduVisBench is the first comprehensive benchmark designed to systematically evaluate FMs' ability to produce pedagogically effective, step-by-step visual reasoning. It covers 15 diverse, visually grounded educational scenarios with multi-level problem sets and multimodal-centric solutions, providing a rigorous platform for visual pedagogy assessment.

LLM for Scientific Visualization. While some existing works have preliminarily explored the potential of FMs in supporting visual scaffolding (Podo et al., 2024; Chen et al., 2024c; Pandey and Ottley, 2025; Hong et al., 2025), they are typically fragmented, lack pedagogical grounding, and fail to generalize across diverse educational tasks (Wang et al., 2023a; Ku et al., 2025). For instance, Visual Sketchpad (Hu et al., 2024) attempts to illustrate problem-solving processes with sketches generated from code. However, these visuals are often low in quality, lack logical coherence, and fall short in explanatory depth (Wang et al., 2025). Other approaches, such as MatplotAgent (Yang et al., 2024), PlotGen (Goswami et al., 2025), and OmniSVG (Yang et al., 2025), leverage plotting and SVG tools to produce more accurate, data-grounded visualizations. Still, these methods are limited in scope, often addressing only isolated steps rather than providing systematic, end-to-end visual explanations of multi-step problem-solving tasks (Vázquez, 2024; Chen et al., 2024a, 2025b).
To overcome these limitations, we propose a multi-agent collaborative framework, EduVisAgent, that simulates the full learning journey, from initial problem exposure to deep conceptual understanding, by coordinating specialized agents to generate coherent, pedagogically aligned visualizations throughout the reasoning process.

6 Conclusion

This paper addressed the challenge of generating pedagogically meaningful visual explanations with AI systems. We introduced EduVisBench, a benchmark revealing that existing models often produce inadequate visual outputs. To overcome this, we proposed EduVisAgent, a collaborative multi-agent framework. Experiments show that EduVisAgent significantly outperforms all baselines, demonstrating the potential of agent-based systems for advancing educational visualization.

References

Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Hassan Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Singh Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Caio César Teodoro Mendes, Weizhu Chen, Vishrav Chaudhary, Parul Chopra, and 69 others. 2024. Phi-3 technical report: A highly capable language model locally on your phone. ArXiv, abs/2404.14219.

Abdelrahman Abouelenin, Atabak Ashfaq, Adam Atkinson, Hany Hassan Awadalla, Nguyen Bach, Jianmin Bao, Alon Benhaim, Martin Cai, Vishrav Chaudhary, Congcong Chen, Dongdong Chen, Dongdong Chen, Junkun Chen, Weizhu Chen, Yen-Chun Chen, Yi-ling Chen, Qi Dai, Xiyang Dai, Ruchao Fan, and 54 others. 2025. Phi-4-mini technical report: Compact yet powerful multimodal language models via mixture-of-LoRAs. ArXiv, abs/2503.01743.

Cognition AI. 2024. Devin AI: World's first AI software engineer.

Anthropic. 2025. Claude 3.7 Sonnet and Claude Code. https://www.anthropic.com/news/claude-3-7-sonnet. Accessed: 2025-05-16.

Jun Shern Chan, Neil Chowdhury, Oliver Jaffe, James Aung,
Dane Sherburn, Evan Mays, Giulio Starace, Kevin Liu, Leon Maksin, Tejal Patwardhan, and 1 others. 2024. MLE-bench: Evaluating machine learning agents on machine learning engineering. arXiv preprint arXiv:2410.07095.

Jiawen Chen, Jianghao Zhang, Huaxiu Yao, and Yun Li. 2025a. CellTypeAgent: Trustworthy cell type annotation with large language models. arXiv preprint arXiv:2505.08844.

Zhaorun Chen, Yichao Du, Zichen Wen, Yiyang Zhou, Chenhang Cui, Zhenzhen Weng, Haoqin Tu, Chaoqi Wang, Zhengwei Tong, Qinglan Huang, and 1 others. 2024a. MJ-Bench: Is your multimodal reward model really a good judge for text-to-image generation? arXiv preprint arXiv:2407.04842.

Zhaorun Chen, Mintong Kang, and Bo Li. 2025b. ShieldAgent: Shielding agents via verifiable safety policy reasoning. arXiv preprint arXiv:2503.22738.

Zhaorun Chen, Zhen Xiang, Chaowei Xiao, Dawn Song, and Bo Li. 2024b. AgentPoison: Red-teaming LLM agents via poisoning memory or knowledge bases. Advances in Neural Information Processing Systems, 37:130185–130213.

Zhaorun Chen, Zhuokai Zhao, Hongyin Luo, Huaxiu Yao, Bo Li, and Jiawei Zhou. 2024c. HALC: Object hallucination reduction via adaptive focal-contrast decoding. arXiv preprint arXiv:2403.00425.

Zhaorun Chen, Zhuokai Zhao, Zhihong Zhu, Ruiqi Zhang, Xiang Li, Bhiksha Raj, and Huaxiu Yao. 2024d. AutoPRM: Automating procedural supervision for multi-step reasoning via controllable question decomposition. arXiv preprint arXiv:2402.11452.

Zhendong Chu, Shen Wang, Jian Xie, Tinghui Zhu, Yibo Yan, Jinheng Ye, Aoxiao Zhong, Xuming Hu, Jing Liang, Philip S. Yu, and 1 others. 2025. LLM agents for education: Advances and applications. arXiv preprint arXiv:2503.11733.

Chenhang Cui, An Zhang, Yiyang Zhou, Zhaorun Chen, Gelei Deng, Huaxiu Yao, and Tat-Seng Chua. 2024. Fine-grained verifiers: Preference modeling as next-token prediction in vision-language alignment. arXiv preprint arXiv:2410.14148.
Team GLM, Aohan Zeng, Bin Xu, Bowen Wang, Chenhui Zhang, Da Yin, Diego Rojas, Guanyu Feng, Hanlin Zhao, Hanyu Lai, Hao Yu, Hongning Wang, Jiadai Sun, Jiajie Zhang, Jiale Cheng, Jiayi Gui, Jie Tang, Jing Zhang, Juanzi Li, and 37 others. 2024. ChatGLM: A family of large language models from GLM-130B to GLM-4 All Tools. Preprint, arXiv:2406.12793.

Kanika Goswami, Puneet Mathur, Ryan Rossi, and Franck Dernoncourt. 2025. PlotGen: Multi-agent LLM-based scientific data visualization via multimodal feedback. arXiv preprint arXiv:2502.00988.

Siwei Han, Peng Xia, Ruiyi Zhang, Tong Sun, Yun Li, Hongtu Zhu, and Huaxiu Yao. 2025. MDocAgent: A multi-modal multi-agent framework for document understanding. arXiv preprint arXiv:2503.13964.

Jiayi Hong, Christian Seto, Arlen Fan, and Ross Maciejewski. 2025. Do LLMs have visualization literacy? An evaluation on modified visualizations to test generalization in data interpretation. IEEE Transactions on Visualization and Computer Graphics.

Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, and Ranjay Krishna. 2024. Visual Sketchpad: Sketching as a visual chain of thought for multimodal language models. arXiv preprint arXiv:2406.09403.

Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, and 1 others. 2024. GPT-4o system card. arXiv preprint arXiv:2410.21276.

IT Admin. 2024. Introducing Stable Diffusion 3.5. https://stability.ai/news/introducing-stable-diffusion-3-5. Updated October 29, 2024; Accessed: 2025-05-20.

Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik Narasimhan. 2023. SWE-bench: Can language models resolve real-world GitHub issues?
arXiv preprint arXiv:2310.06770.

Hyoungwook Jin, Minju Yoo, Jeongeon Park, Yokyung Lee, Xu Wang, and Juho Kim. 2025. TeachTune: Reviewing pedagogical agents against diverse student profiles with simulated students. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pages 1–28.

Max Ku, Thomas Chong, Jonathan Leung, Krish Shah, Alvin Yu, and Wenhu Chen. 2025. TheoremExplainAgent: Towards multimodal explanations for LLM theorem understanding. arXiv preprint arXiv:2502.19400.

Black Forest Labs. 2024. Flux. https://github.com/black-forest-labs/flux.

Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step. arXiv preprint arXiv:2305.20050.

Shrestha Basu Mallick and Logan Kilpatrick. 2025. Gemini 2.0: Flash, Flash-Lite and Pro. https://developers.googleblog.com/en/gemini-2-family-expands/. Accessed: 2025-05-20.

Jiageng Mao, Junjie Ye, Yuxi Qian, Marco Pavone, and Yue Wang. 2023. A language agent for autonomous driving. arXiv preprint arXiv:2311.10813.

Chad M. Miller and Jonathan D. Cohen. 2020. Metacognitive prompts in multimedia learning: A meta-analysis. Educational Psychology Review, 32(3):979–1003.

Mistral AI. 2025. Mistral Small 3.1: SOTA. Multimodal. Multilingual. Apache 2.0. https://mistral.ai/news/mistral-small-3-1. Accessed: 2025-05-20.

Inderjeet Nair, Jiaye Tan, Xiaotian Su, Anne Gere, Xu Wang, and Lu Wang. 2024. Closing the loop: Learning to generate writing feedback via language model simulated student revisions. arXiv preprint arXiv:2410.08058.

Fan Nie, Lan Feng, Haotian Ye, Weixin Liang, Pan Lu, Huaxiu Yao, Alexandre Alahi, and James Zou. 2025. Weak-for-strong: Training weak meta-agent to harness strong executors. arXiv preprint arXiv:2504.04785.

Satria Nugroho and Jailani Jailani. 2019.
The effectiveness of concrete representational abstract (CRA) approach and problem solving approach on mathematical representation ability at elementary school. KnE Social Sciences.

Saugat Pandey and Alvitta Ottley. 2025. Benchmarking visual language models on standardized visualization literacy tests. arXiv preprint arXiv:2503.16632.

Dustin Podell, Zion English, Kyle Lacey, A. Blattmann, Tim Dockhorn, Jonas Muller, Joe Penna, and Robin Rombach. 2023. SDXL: Improving latent diffusion models for high-resolution image synthesis. ArXiv, abs/2307.01952.

Luca Podo, Muhammad Ishmal, and Marco Angelini. 2024. Vi(E)va LLM! A conceptual stack for evaluating and interpreting generative AI-based visualizations. arXiv preprint arXiv:2402.02167.

Norma Presmeg. 2006. Research on visualization in learning and teaching mathematics. Handbook of Research on the Psychology of Mathematics Education, pages 205–235.

Jianing Qiu, Kyle Lam, Guohao Li, Amish Acharya, Tien Yin Wong, Ara Darzi, Wu Yuan, and Eric J. Topol. 2024. LLM-based agentic systems in medicine and healthcare. Nature Machine Intelligence, 6(12):1418–1420.

M. Rohith. 2023. High school physics. https://huggingface.co/datasets/mrohith29/high-school-physics. Accessed: 2025-05-15.

Gregory Schraw and David Moshman. 1995. Metacognitive theories. Educational Psychology Review, 7(4):351–371.

Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. 2020. ALFWorld: Aligning text and embodied environments for interactive learning. arXiv preprint arXiv:2010.03768.

Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M. Sadler, Wei-Lun Chao, and Yu Su. 2023. LLM-Planner: Few-shot grounded planning for embodied agents with large language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2998–3009.
Giulio Starace, Oliver Jaffe, Dane Sherburn, James Aung, Jun Shern Chan, Leon Maksin, Rachel Dias, Evan Mays, Benjamin Kinsella, Wyatt Thompson, and 1 others.
2025. PaperBench: Evaluating AI's ability to replicate AI research. arXiv preprint arXiv:2504.01848.

Qwen Team. 2025. Qwen2.5-VL.

Pere-Pau Vázquez. 2024. Are LLMs ready for visualization? In 2024 IEEE 17th Pacific Visualization Conference (PacificVis), pages 343–352. IEEE.

Vercel. 2025. v0: AI chat interface for web automation. https://v0.dev. Accessed: 2025-05-20.

Lei Wang, Songheng Zhang, Yun Wang, Ee-Peng Lim, and Yong Wang. 2023a. LLM4Vis: Explainable visualization recommendation using ChatGPT. arXiv preprint arXiv:2310.07652.

Shen Wang, Tianlong Xu, Hang Li, Chaoli Zhang, Joleen Liang, Jiliang Tang, Philip S. Yu, and Qingsong Wen. 2024a. Large language models for education: A survey and outlook. arXiv preprint arXiv:2403.18105.

Yaoting Wang, Shengqiong Wu, Yuecheng Zhang, Shuicheng Yan, Ziwei Liu, Jiebo Luo, and Hao Fei. 2025. Multimodal chain-of-thought reasoning: A comprehensive survey. arXiv preprint arXiv:2503.12605.

Yi Ru Wang, Jiafei Duan, Dieter Fox, and Siddhartha Srinivasa. 2023b. NEWTON: Are large language models capable of physical reasoning? arXiv preprint arXiv:2310.07018.

Zora Zhiruo Wang, Jiayuan Mao, Daniel Fried, and Graham Neubig. 2024b. Agent workflow memory. arXiv preprint arXiv:2409.07429.

Hao Wei, Jianing Qiu, Haibao Yu, and Wu Yuan. 2024. MEDCO: Medical education copilots based on a multi-agent framework. arXiv preprint arXiv:2408.12496.

Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, and Chi Wang. 2023. MathChat: Converse to tackle challenging math problems with LLM agents. arXiv preprint arXiv:2306.01337.

Zhiyu Wu, Xiaokang Chen, Zizheng Pan, Xingchao Liu, Wen Liu, Damai Dai, Huazuo Gao, Yiyang Ma, Chengyue Wu, Bingxuan Wang, Zhenda Xie, Yu Wu, Kai Hu, Jiawei Wang, Yaofeng Sun, Yukun Li, Yishi Piao, Kang Guan, Aixin Liu, and 8 others. 2024. DeepSeek-VL2: Mixture-of-experts vision-language models for advanced multimodal understanding.
Preprint, arXiv:2412.10302.

Songlin Xu, Xinyu Zhang, and Lianhui Qin. 2024. EduAgent: Generative student agents in learning. arXiv preprint arXiv:2404.07963.

Yiying Yang, Wei Cheng, Sijin Chen, Xianfang Zeng, Jiaxu Zhang, Liao Wang, Gang Yu, Xingjun Ma, and Yu-Gang Jiang. 2025. OmniSVG: A unified scalable vector graphics generation model. arXiv preprint arXiv:2504.06263.

Zhiyu Yang, Zihan Zhou, Shuo Wang, Xin Cong, Xu Han, Yukun Yan, Zhenghao Liu, Zhixing Tan, Pengyuan Liu, Dong Yu, and 1 others. 2024. MatplotAgent: Method and evaluation for LLM-based agentic scientific data visualization. arXiv preprint arXiv:2402.11453.

Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR).

Yuan Yao, Tianyu Yu, Ao Zhang, Chongyi Wang, Junbo Cui, Hongji Zhu, Tianchi Cai, Haoyu Li, Weilin Zhao, Zhihui He, and 1 others. 2024. MiniCPM-V: A GPT-4V level MLLM on your phone. arXiv preprint arXiv:2408.01800.

Yangyang Yu, Haohang Li, Zhi Chen, Yuechen Jiang, Yang Li, Denghui Zhang, Rong Liu, Jordan W. Suchow, and Khaldoun Khashanah. 2024. FinMem: A performance-enhanced LLM trading agent with layered memory and character design. In Proceedings of the AAAI Symposium Series, volume 3, pages 595–597.

Puzhen Yuan, Angyuan Ma, Yunchao Yao, Huaxiu Yao, Masayoshi Tomizuka, and Mingyu Ding. 2025. REMAC: Self-reflective and self-evolving multi-agent collaboration for long-horizon robot manipulation. arXiv preprint arXiv:2503.22122.

Di Zhang,
Wei Liu, Qian Tan, Jingdan Chen, Hang Yan, Yuliang Yan, Jiatong Li, Weiran Huang, Xiangyu Yue, Dongzhan Zhou, Shufei Zhang, Mao Su, Hansen Zhong, Yuqiang Li, and Wanli Ouyang. 2024a. ChemLLM: A chemical large language model. Preprint, arXiv:2402.06852.

Kechi Zhang, Jia Li, Ge Li, Xianjie Shi, and Zhi Jin. 2024b. CodeAgent: Enhancing code generation with tool-integrated agent systems for real-world repo-level coding challenges. arXiv preprint arXiv:2401.07339.

Mike Zhang, Amalie Pernille Dilling, Léon Gondelman, Niels Erik Ruan Lyngdorf, Euan D. Lindsay, and Johannes Bjerva. 2025. SEFL: Harnessing large language model agents to improve educational feedback systems. arXiv preprint arXiv:2502.12927.

Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, and 1 others. 2023. WebArena: A realistic web environment for building autonomous agents. arXiv preprint arXiv:2307.13854.

Yiyang Zhou, Zhaoyang Wang, Tianle Wang, Shangyu Xing, Peng Xia, Bo Li, Kaiyuan Zheng, Zijian Zhang, Zhaorun Chen, Wenhao Zheng, and 1 others. 2025. AnyPrefer: An agentic framework for preference data synthesis. arXiv preprint arXiv:2504.19276.

A Appendix

A.1 Visualization disciplines

Table 3 illustrates the disciplines and visualization types in our EduVisBench.

| Discipline | Common Visualization Types |
|---|---|
| Mathematics | Number lines, function graphs, and other formalized visual tools. |
| Physics | Diagrams involving levers, rigid-body motion, forces, and fields. |
| Chemistry | Molecular structures and schematic representations of standard laboratory apparatus. |

Table 3: Representative Visualization Types Across Academic Disciplines

A.2 Evaluation Metric

Visual Scenario Design Guidance. The category of "Visual Scenario Design Guidance" outlines different levels of visualizing mathematical concepts, progressing from basic text-only representations to highly integrated visual-text formats.
Through five defined levels, the framework demonstrates how visual elements can enhance students' understanding of and engagement with abstract ideas, guiding instructional designers to gradually enrich scenarios, add annotations, and strengthen contextual connections, ultimately achieving the goal of visually presenting the full flow and conceptual structure of the content. The five levels of Visual Scenario Design Guidance are as follows:

| Level | Description |
|---|---|
| 1 | The image contains no scenes or illustrations, presenting only text and formulas. It lacks contextual visual cues, failing to spark interest or connect the concepts to real-life situations. |
| 2 | The image includes a single static illustration or low-fidelity mockup with minimal labeling that does not highlight variables or key objects, offering limited context and poor immersion. |
| 3 | Multiple static schematic diagrams or sketch-style illustrations appear in the image, labeling core objects, variables, and simple steps, providing basic visual guidance but lacking layered coherence. |
| 4 | The image integrates scenario illustrations, storyboard panels, and infographics to present the process in multiple views and steps, with annotations and captions guiding students through mapping abstract concepts to context. |
| 5 | Storyboard-style illustrations and infographics are fused into a single image, including an overview, detailed close-ups, and key pathway diagrams with comprehensive annotations, allowing students to grasp the entire flow and conceptual network at a glance. |

Table 4: Five Levels of Visual Scenario Design Guidance

Visual Illustration Design. The category of "Visual Illustration Design" describes progressive levels of visual elements used to support students' systematic understanding of quantities and relationships. It ranges from
no visual aids to complex integrated dashboards that deeply connect data and model structures. Through five levels, the framework guides designers to improve the clarity, coherence, and contextual richness of visual illustrations, enhancing students' analytic and comparative abilities.

| Level | Description |
|---|---|
| 1 | The image contains no charts, axes, or flow diagrams, only text. Without embedded visual tools, students cannot systematically organize or analyze quantities and relationships. |
| 2 | The image presents a static number line and colored bar chart with complete scales and a legend, helping students gain a basic understanding of numerical changes. However, it lacks comparison and contextual layering. |
| 3 | The image presents a static number line and colored bar chart with complete scales and legends, helping students grasp basic numerical changes visually, though comparison and context layering are absent. |
| 4 | The image combines number lines, flowcharts, infographics, and arrow annotations; multiple visuals are juxtaposed or overlaid to show processes and variable changes for a coherent modeling view. |
| 5 | The image presents a dashboard-style visualization integrating axes, bar charts, flow diagrams, heatmaps, etc., with linked elements that deeply visualize data relationships and model structure. |

Table 5: Five Levels of Visual Illustration Design

Text–Illustration Coordination. The category of "Text–Illustration Coordination" describes levels of alignment and integration between textual content and visual elements within images. This progression ranges from complete disconnection to seamless fusion, enabling students to effectively map and synthesize text, formulas, and graphics. The framework guides designers in strengthening links between verbal and visual information to enhance comprehension and structural understanding.

| Level | Description |
|---|---|
| 1 | Text and illustrations in the image are completely disconnected, with no labels, legends, or connectors; students cannot use visuals to understand text or formulas. |
| 2 | Text occasionally prompts "see diagram" or "refer to the illustration," but the image lacks legends or clear labels, so mapping between text and graphics remains ambiguous. |
| 3 | Text descriptions and image elements share consistent numbering, color blocks, or arrows linked to a simple legend, explaining core symbols and variables to support initial mapping. |
| 4 | Text paragraphs are laid out alongside corresponding visuals within the same image, with detailed legends and color-coded annotations enabling simultaneous reading and mapping. |
| 5 | Text, formulas, and legends are fully integrated in one image, using consistent colors, numbering, and layered layout to achieve seamless text–graphic fusion for complete structural understanding. |

Table 6: Five Levels of Text–Illustration Coordination

Learning Thought Guidance. The category of "Learning Thought Guidance" describes the progressive inclusion of visualized problem-solving strategies and reflective cues in images. From presenting only problem statements to complex integrated dashboards, this framework guides designers to scaffold students' strategic thinking and metacognitive reflection through visual tools, enabling deeper reasoning and transfer of learning.

| Level | Description |
|---|---|
| 1 | The image offers no visualized problem-solving guidance, showing only the problem statement and formulas, leaving students without strategic cues or reflection prompts. |
| 2 | The image embeds a simple flowchart or two title-style hints (e.g., "Identify problem type," "Check result"), but the flowchart is overly simplistic and the hints lack hierarchical detail. |
| 3 | The image displays a step-by-step flowchart template with key thinking nodes and self-check checkpoints, leaving annotation space for students to visually record their reasoning. |
| 4 | The image combines a near-transfer exercise with a comparative thought diagram, visually highlighting strategy differences so students can apply existing reasoning to a new context. |
| 5 | The image fuses near- and far-transfer exercises, concept mind maps, and a reflection panel into a dashboard-style layout, allowing students to review and extend their problem-solving network visually. |

Table 7: Five Levels of Learning Thought Guidance

Interactivity and Personalized Support. The category of "Interactivity and Personalized Support" outlines levels of incorporating feedback, hints, and tailored assistance into images, evolving from static presentations to dynamic, student-responsive visual supports. This framework encourages designers to embed interactive elements that adapt to learner needs, promoting engagement and personalized problem-solving.

| Level | Description |
|---|---|
| 1 | The image includes no feedback or support components (only a static problem statement and answer field), offering no hints, examples, or error cues, and resulting in a nonresponsive visual. |
| 2 | The image shows fixed hint boxes (e.g., "Hint: draw a number line," "Hint: check rounding"), but the hints are not tailored to student responses, limiting personalized guidance. |
| 3 | The image integrates multiple static correction tips and example-solution modules (common mistakes and standard approaches), which students can reference visually but without intelligent recommendations. |
| 4 | The image presents example solution workflows, text hints, and a common-errors analysis section highlighted with color blocks and arrows, providing diverse visual support in a single layout. |
| 5 | The image displays a comprehensive visual support panel with difficulty suggestions, personalized hints, worked examples, and extension resource links, enabling students to select tailored guidance directly from the visual layout. |
Table 8: Five Levels of Interactivity and Personalized Support

A.3 Similarity between GPT and Human Evaluation

Metric               Chemistry  Math    Physics  Average
Cosine Similarity ↑  0.9742     0.9557  0.9666   0.9655
MSE ↓                0.3895     0.7093  0.6118   0.5702
Table 9: Cosine similarity and mean squared error across subjects. Math is the average of Math500 and IllustrativeMath, each with 50 samples.

We evaluated the similarity between GPT-based evaluations and human evaluations to assess the reliability of GPT’s scoring capabilities. Specifically, we selected 50 samples from each subject category (Chemistry, Math, and Physics) and had both GPT and human evaluators independently rate these samples. Our human evaluators were undergraduate students from top universities, ensuring high-quality and informed assessments. We measured agreement using two standard metrics: Cosine Similarity and Mean Squared Error (MSE). As shown in Table 9, the high Cosine Similarity scores (averaging 0.9655) and low MSE values (averaging 0.5702) across all subjects indicate that GPT’s scoring aligns closely with human judgment. These results demonstrate that GPT-based evaluation is highly reliable and sufficiently robust for practical usage, closely approximating human evaluative standards.

A.4 Evaluation Prompt

The instructional web page evaluation prompt is structured as follows:

Evaluation Prompt
As a professional evaluator of instructional web pages, your task is to determine whether the generated web page meets expectations across five specific categories.
Instructions:
• Assign an integer score from 0 to 5 for each of the five categories (1–5).
• 0 = completely missing or extremely poor; 5 = fully meets the highest standard.
• Evaluation should be based solely on the specified aspect: {category}. The definition of {category} is: {description}.
• Do not include any explanation, justification, or additional commentary. Refusing to provide a score is not allowed.
Evaluation Output Format
{{RATING: {"1": score, "2": score, "3": score, "4": score, "5": score}}}

A.5 Additional Related Works

LLM-based AI Agents. Recent advancements in LLM-based agents have led to the development of specialized architectures capable of long-horizon planning, tool use, and memory management across a range of real-world domains (Yao et al., 2023; Chan et al., 2024; Chen et al., 2024b, 2025a; Nie et al., 2025; Han et al., 2025; Zhou et al., 2025). In software engineering, agents like Devin (AI, 2024), CodeAgent (Zhang et al., 2024b), and SWE-agent (Jimenez et al., 2023) manage full development pipelines and perform iterative code debugging. In the domain of web automation, agents have been deployed for complex web navigation and interaction tasks (Wang et al., 2024b; Chen et al., 2025b; Zhou et al., 2023). LLM-based agents have also demonstrated effectiveness in embodied settings such as robotic manipulation, autonomous driving, and embodied navigation (Song et al., 2023; Shridhar et al., 2020; Mao et al., 2023; Yuan et al., 2025). Beyond these, specialized agents have emerged in domains including healthcare (Qiu et al., 2024), finance (Yu et al., 2024), and academic research (Starace et al., 2025). In the educational domain, AI agents such as EduAgent (Xu et al., 2024) and Teachtune (Jin et al., 2025) simulate student-teacher dialogues, collaborative learning activities, and task-oriented reasoning to enhance problem-solving instruction.
Agents like SEFL (Zhang et al., 2025) and PROF (Nair et al., 2024) generate on-demand feedback for large-scale educational settings, while domain-specific tools such as MathChat (Wu et al., 2023), NEWTON (Wang et al., 2023b), and MEDCO (Wei et al., 2024) provide textual explanations for scientific and medical learning.

Despite these advancements, few works explore collaborative, multi-agent designs tailored for educational reasoning and visualization. EduVisAgent is the first systematic multi-agent framework that coordinates specialized agents for instructional planning, reasoning decomposition, metacognitive prompting, and visualization design, offering a comprehensive approach to support step-by-step pedagogical problem-solving.
SimpleDeepSearcher: Deep Information Seeking via Web-Powered Reasoning Trajectory Synthesis

Shuang Sun1*, Huatong Song1*, Yuhao Wang1, Ruiyang Ren1, Jinhao Jiang1, Junjie Zhang1, Fei Bai1, Jia Deng1, Wayne Xin Zhao1†, Zheng Liu2, Lei Fang3, Zhongyuan Wang2, Ji-Rong Wen1
1Gaoling School of Artificial Intelligence, Renmin University of China
2Beijing Academy of Artificial Intelligence  3DataCanvas Alaya NeW
{sunshuanguns, batmanfly}@gmail.com

Abstract

Retrieval-augmented generation (RAG) systems have advanced large language models (LLMs) in complex deep search scenarios requiring multi-step reasoning and iterative information retrieval. However, existing approaches face critical limitations: they either lack high-quality training trajectories or suffer from distributional mismatches in simulated environments and prohibitive computational costs for real-world deployment. This paper introduces SimpleDeepSearcher, a lightweight yet effective framework that bridges this gap through strategic data engineering rather than complex training paradigms. Our approach synthesizes high-quality training data by simulating realistic user interactions in live web search environments, coupled with a multi-criteria curation strategy that optimizes the diversity and quality of both the input and output sides. Experiments on five benchmarks across diverse domains demonstrate that SFT on only 871 curated samples yields significant improvements over RL-based baselines. Our work establishes SFT as a viable pathway by systematically addressing the data-scarcity bottleneck, offering practical insights for efficient deep search systems. Our code is available at https://github.com/RUCAIBox/SimpleDeepSearcher.

1 Introduction

In recent years, retrieval-augmented generation (RAG) methods have significantly enhanced LLMs by incorporating external knowledge retrieval mechanisms (Zhao et al., 2024; Gao et al., 2024).
*Equal contributions. †Corresponding author.

Recent advancements have extended these capabilities to complex deep search scenarios that demand multi-step reasoning with iterative information retrieval and synthesis (Alzubi et al., 2025). To address the complex reasoning demands in deep search scenarios, early research explored prompt-based strategies that guide models to decompose questions, generate queries, and retrieve information iteratively (Jiang et al., 2024; Teng et al., 2025; Li et al., 2025a). Other studies have attempted to improve model performance through supervised fine-tuning (SFT) (Wang et al., 2025), but due to the lack of high-quality interaction trajectories for reasoning and retrieval, SFT methods often fail to fully activate the model’s autonomous search abilities (Schick et al., 2023). To further enhance the model’s autonomous search capabilities, reinforcement learning (RL) (Sutton et al., 1999) is considered a promising solution to train models through real-time interaction with the environment (Song et al., 2025; Jin et al., 2025; Zheng et al., 2025). However, most RL-based approaches operate within artificial environments using static document corpora, creating a distributional mismatch with real-world web dynamics. Moreover, the inherent computational intensity of RL training escalates exponentially when interfacing with live search APIs (Sun et al., 2025).

Given the overhead and complexity of RL-based training, we posit that SFT remains a viable pathway for building efficient deep search systems. While SFT offers a streamlined training process, it faces the critical challenge of lacking high-quality training data in deep search scenarios. On the input side, existing QA datasets often lack the diversity and complexity of questions and search-oriented purposes on the Web, which are essential for deep search training. On the output side, traditional answer annotations omit the critical reasoning traces (search operations, evidence synthesis, and efficient decision paths) required for teaching search-integrated reasoning strategies.

In this paper, we propose SimpleDeepSearcher, an efficient search-with-think framework that utilizes strategic data engineering rather than complex training paradigms. Our core methodology centers on a three-fold process for constructing high-quality training data. First, we develop a data synthesis framework grounded in real web search environments, simulating realistic user search behaviors to generate multi-turn reasoning trajectories. Second, we propose a diversity-aware query sampling strategy to optimize domain coverage, semantic complexity, and knowledge unit density. Moreover, we adopt a four-dimensional response curation that enforces format standardization, reasoning efficiency, question difficulty, and search effectiveness. By systematically addressing both query- and response-side quality through automated pipelines, SimpleDeepSearcher can obtain high-quality supervised signals based on real web search for complex reasoning to facilitate the SFT process.

arXiv:2505.16834v2 [cs.CL] 25 May 2025

Figure 1: Overall framework of our proposed SimpleDeepSearcher approach. p denotes the prompt, r denotes the reasoning content, q represents the search query, and d refers to the retrieved document after summarization. t_s and t_e are special tokens indicating the beginning and end of the search query, and a denotes the final answer.

Experimental results show that our SFT method significantly boosts model performance on five representative benchmarks with only 871 high-quality training samples. Compared to prompt-based methods, SimpleDeepSearcher achieves a 48.3% improvement, and compared to RL-based RAG methods it achieves a 24.9% improvement. This demonstrates that our framework effectively balances performance and
efficiency, providing a simple yet powerful approach to enhancing deep search capabilities. Furthermore, our framework is highly extensible: it can be combined with other types of training data, and it is also applicable to RL-based training.

Our main contributions are as follows:
• We propose a real web-based data synthesis framework that simulates realistic user search behaviors, generating multi-turn reasoning and search trajectories.
• We design a multi-criteria data curation strategy that jointly optimizes both input question selection and output response filtering through orthogonal filtering dimensions.
• Experimental results demonstrate that SFT on only 871 samples enables SimpleDeepSearcher to outperform strong baselines (especially RL-based baselines) on both in-domain and out-of-domain benchmarks.

2 Method

2.1 Overview

In this section, we present SimpleDeepSearcher, a framework for complex deep search tasks built on multi-stage data construction strategies. To address the resource-intensive limitations of existing deep search systems, SimpleDeepSearcher achieves intelligent search through efficient supervised fine-tuning (SFT) with minimal training data. For constructing high-quality SFT data, we establish a systematically designed data synthesis and curation pipeline, as illustrated in Figure 1.

First, we replace static document retrieval with real-time network interactions, simulating human search behavior through an iterative cycle of “reasoning-searching-summarizing-generating.” By directly processing raw HTML content via commercial search APIs, we capture diverse web information features, ranging from structured data snippets to unstructured narrative discourse.
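The “reasoning-searching-summarizing-generating” cycle described above can be sketched as a simple control loop. This is a minimal sketch under stated assumptions: the helper names (`call_llm`, `web_search`) and the `<ts>`/`<te>` token spellings are illustrative stand-ins, not the paper’s released implementation.

```python
# Hypothetical sketch of the iterative deep search cycle.
# `call_llm` and `web_search` stand in for the reasoning model and the
# commercial search API; neither name comes from the released code.
MAX_SEARCH_CALLS = 10   # limits reported in the implementation details
MAX_TURNS = 15

def extract_query(step, ts="<ts>", te="<te>"):
    """Return the text between the special search tokens, if any."""
    if ts in step and te in step:
        return step.split(ts, 1)[1].split(te, 1)[0].strip()
    return None

def deep_search(question, call_llm, web_search):
    """Iteratively reason, search, and summarize until an answer appears."""
    trajectory = f"Question: {question}\n"
    searches = 0
    for _ in range(MAX_TURNS):
        step = call_llm(trajectory)           # model reasons over context so far
        if "<answer>" in step:                # model decided it can answer
            return trajectory + step
        query = extract_query(step)           # query delimited by t_s / t_e
        if query is None or searches >= MAX_SEARCH_CALLS:
            return trajectory + call_llm(trajectory + "\nGive the final answer.")
        searches += 1
        pages = web_search(query)             # raw HTML of top-k web pages
        summary = call_llm(f"Summarize the evidence in:\n{pages}")
        trajectory += step + f"\n[search] {query}\n[docs] {summary}\n"
    return trajectory
```

Sampling multiple trajectories per query then amounts to calling `deep_search` several times with a nonzero decoding temperature.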
Based on this, we first filter input queries using domain heterogeneity, keyword diversity, and knowledge unit complexity to construct a maximally informative training foundation while ensuring selected queries align with real-world web search scenarios. Additionally, we apply a filtering mechanism to LLM-synthesized responses, implementing a four-dimensional quality filter that simultaneously optimizes format standardization, reasoning path control, question difficulty, and search effectiveness to guarantee response quality.

The framework’s modular design offers three distinctive advantages over conventional approaches. First, it exposes the model to authentic search artifacts and noise patterns through real web interactions. Second, our multidimensional filtering strategy enables state-of-the-art performance with remarkably small SFT datasets, eliminating dependency on resource-heavy RL training. Third, the decoupled architecture between data synthesis and model constraints provides exceptional flexibility: our curated datasets can enhance any LLM while maintaining compatibility with emerging reasoning architectures and alternative training paradigms, including RL. Since the searched content is not generated by the LLM, we mask out these tokens during the SFT process.

Our methodology achieves unprecedented efficiency in search-oriented model training, reducing computational demands while maintaining competitive performance through strategic data quality optimization rather than brute-force data quantity.

2.2 Data Synthesis in Real Web Environment

Typically, traditional retrieval-augmented generation (RAG) systems rely on closed and static knowledge corpora. Such knowledge corpora exhibit two primary limitations: firstly, the content they contain often consists of refined and condensed segments (Barnett et al., 2024); secondly, the information within these knowledge corpora lacks timeliness (Ouyang et al., 2025).
Consequently, RAG systems are limited in their ability to simulate authentic user search behaviors, as users typically search within
open, dynamic, and complex web environments, where information is not only diverse in format and varied in quality but is also frequently accompanied by redundancy and noise. In light of this, our data synthesis approach does not rely on curated document collections but is instead grounded in the real, open web environment. This authentic web environment also places greater demands on the model’s capabilities for information extraction, synthesis, and reasoning.

Building upon the widely adopted iterative deep search process (Li et al., 2025a) of reason-search-summarize-generate, we develop an automated pipeline for large-scale training data synthesis. For each query, at each iteration our framework (1) initiates web searches via commercial APIs, (2) extracts and processes raw HTML content, (3) applies an LLM to reason over multi-source evidence, and (4) continues to the next iteration or stops. By sampling multiple reasoning paths per query, we capture the nuanced decision-making processes inherent to real-world information synthesis.

Our data synthesis strategy is firmly rooted in real web scenarios, which substantially enriches the diversity and authenticity of training samples. It generates more practical and representative supervisory signals for SFT, thereby addressing critical limitations in conventional SFT paradigms.

2.3 Diversity-aware Query Sampling

To engineer a deep search architecture with advanced query comprehension and reasoning capabilities, we implement a strategic repurposing of open-domain question answering (QA) resources. These curated datasets offer natural language questions that inherently require multi-hop information retrieval operations, thereby exhibiting strong task alignment with the cognitive demands of deep search systems. Our selection protocol combines single-hop and multi-hop QA benchmarks through principled composition, ensuring coverage of both atomic and composite reasoning paradigms.
However, empirical evidence suggests that naive dataset scaling yields diminishing returns in SFT. The efficacy of such approaches fundamentally depends on the intrinsic diversity and informational entropy of training instances. While existing open-domain QA corpora provide substantial volume, systematic analysis reveals three critical limitations: (1) domain-specific overrepresentation creating skewed knowledge distributions, (2) repetitive syntactic patterns reducing linguistic variability, and (3) semantic simplicity thresholds below real-world query complexity. These factors collectively induce model brittleness and constrain cross-domain generalization potential. To address these critical limitations, we introduce a diversity-aware query sampling strategy that implements systematic data filtering through tripartite orthogonal criteria:

Domain Heterogeneity encompasses the systematic classification of query semantics across distinct knowledge domains (e.g., history, science, politics). This dimension ensures a balanced distribution of questions across different domains, thereby reducing domain-specific biases and enhancing generalization capabilities.

Keyword Diversity focuses on the distributional diversity of core semantic constituents. We ensure non-redundant exposure to low-frequency conceptual entities, multi-order relational dependencies, and contextually ambiguous referential expressions. Such systematic variation compels the model to transcend superficial lexical pattern matching, instead developing reasoning architectures essential for interpreting complex entity interactions.

Complexity of Knowledge Units captures the frequency of interrogative terms used in questions (e.g., what, when), which serve as indicators of syntactic and semantic complexity. Questions with greater inquiry potential are given priority, ensuring comprehensive modeling of implicit reasoning chains triggered by diverse question formulations.

We developed a systematic query selection framework incorporating these three complementary dimensions. First, we partition the dataset into domain-specific clusters using the LLM-generated semantic classifications. Within each domain cluster, queries are ranked by knowledge unit complexity scores derived from conceptual density analysis. Subsequently, we perform iterative selection using a greedy algorithm that maximizes keyword diversity while maintaining inter-domain balance. The detailed procedure for query sampling is presented in Algorithm 1.

Algorithm 1 Diversity-aware Query Sampling
Input: Annotated dataset D with domains d1, d2, ..., dm; target number of queries N
1:  N_d ← N / m
2:  S ← ∅                          ▷ Initialize the target set
3:  for i = 1 to m do
4:      D_di ← {x ∈ D | domain(x) = di}
5:      Sort D_di by descending number of interrogative words
6:      while |S_di| < N_d and D_di ≠ ∅ do
7:          K ← ∅                  ▷ Initialize the keyword set
8:          for each sample x in D_di do
9:              if |S_di| ≥ N_d then
10:                 break
11:             end if
12:             kw ← keywords(x)
13:             if x ∉ S and kw ∩ K = ∅ then
14:                 S ← S ∪ {x}
15:                 K ← K ∪ keywords(x)
16:                 D_di ← D_di \ {x}
17:             end if
18:         end for
19:     end while
20: end for
21: return S

2.4 Multi-Dimension Response Curation

Building upon the aforementioned data synthesis and query sampling strategies, we have successfully generated high-quality training data derived from real-world web environments. However, due to the inherent unpredictability of LLM reasoning, the quality of synthesized data exhibits considerable variability despite meticulous control over input and generation processes.
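For concreteness, the query-sampling procedure of Algorithm 1 (Section 2.3) can be rendered in Python roughly as follows. The per-sample fields (`domain`, `keywords`, `n_interrogatives`) are assumed annotation names, not identifiers from the released code.

```python
# Sketch of Algorithm 1: greedy, diversity-aware query sampling.
# Each sample is assumed to carry LLM-annotated `domain` and `keywords`
# fields plus an interrogative-word count used as the complexity score.
from collections import defaultdict

def sample_queries(dataset, n_target):
    """Greedily pick queries, balancing domains and keyword diversity."""
    by_domain = defaultdict(list)
    for x in dataset:
        by_domain[x["domain"]].append(x)
    per_domain = n_target // len(by_domain)   # N_d = N / m
    selected = []
    for pool in by_domain.values():
        # Rank by complexity: more interrogative words first.
        pool = sorted(pool, key=lambda x: x["n_interrogatives"], reverse=True)
        picked = []
        while len(picked) < per_domain and pool:
            used_keywords = set()             # K is reset on each pass
            remaining = []
            for x in pool:
                if len(picked) >= per_domain:
                    remaining.append(x)
                elif not set(x["keywords"]) & used_keywords:
                    picked.append(x)          # no keyword overlap: keep it
                    used_keywords |= set(x["keywords"])
                else:
                    remaining.append(x)       # retry in a later pass
            if len(remaining) == len(pool):   # no progress: stop this domain
                break
            pool = remaining
        selected.extend(picked)
    return selected
```

Each pass through a domain pool admits at most one query per keyword, so near-duplicate questions are deferred to later passes rather than discarded outright.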
Three primary issues are observed: (i) formatting irregularities, such as inconsistent reasoning languages, non-standard formats for search and reasoning steps, and heterogeneous answer formats; (ii) reasoning redundancy, including hypothesis overgeneration, fabricated retrieval content, and excessive validation loops; and (iii) inefficient search strategies, including redundant search exploration, contextual myopia, and failure to retrieve relevant information.

The presence of low-quality reasoning outputs not only compromises performance and transparency but also introduces noise into training signals, leading to inefficient use of computational resources. To address these challenges, we developed a systematic filtering protocol that selects optimal solutions through rigorous and comprehensive evaluation of multiple responses per query. We impose strict constraints on both the format and content of sampled responses, retaining only those that satisfy all predefined criteria. Our filtering strategy, structured around four pillars, ensures retention of high-quality reasoning data while promoting efficient search integration.

Format Standardization. Filter out responses with mixed reasoning languages or incorrect reasoning and search formats, and correct answers with formatting errors to ensure consistency and standardization across all responses. Responses exhibiting mixed languages, irregular reasoning structures, or formatting inconsistencies were excluded; automated correction aligned the remaining answers with standardized templates.

Reasoning Path Control. Strictly limit the use of reflection expressions (e.g., "alternatively," "wait") and control the length of reasoning to avoid unnecessary and redundant reasoning steps. Reasoning models tend to hypothesize, infer, and reflect based on internal knowledge, often resulting in delayed use of search tools and inefficient reasoning. By regulating the reasoning path, the model can learn to seamlessly integrate search into its inference process and adopt more efficient reasoning strategies.

Question Difficulty. Filter out questions with consistently high accuracy across multiple reasoning attempts and prioritize those with lower accuracy. Accuracy obtained from multiple samples can serve as a proxy for question difficulty; selecting more challenging questions helps enhance the model’s ability to handle complex queries.

Search Effectiveness. Among multiple candidate responses, prioritize those with fewer search steps and more diverse search content. This encourages the model not only to invoke search capabilities but also to learn how to formulate effective sub-queries based on the original question for efficient information retrieval.

Based on the above dimensions, we first collect metadata for each response, such as the number of search steps, reasoning length, and accuracy. Subsequently, responses are filtered sequentially based on format standardization and reasoning path control. Then, based on question difficulty, questions with high accuracy are removed. For each remaining question, we retain multiple high-quality responses that meet all constraints and sort them by search steps. According to search effectiveness, the response with the fewest search steps is selected as the final answer. Through this process, we ultimately obtained 871 high-quality question-answer pairs. This multi-criteria approach not only enhances model training efficiency but also provides insights into optimal human-AI reasoning patterns.

3 Experiments

3.1 Experimental Setup

Datasets. We sample training data from single-hop and multi-hop knowledge-intensive QA datasets to cover a wide range of domains and question difficulty. For single-hop questions, we use Natural Questions (Kwiatkowski et al., 2019) and SimpleQA (Wei et al., 2024).
For multi-hop questions, we use HotpotQA (Yang et al., 2018), 2WikiMultiHopQA (Ho et al., 2020), MuSiQue (Tang and Yang, 2024), and MultiHopRAG (Tang and Yang, 2024). To test the model’s performance on out-of-domain data, we select Bamboogle (Press et al., 2022), FRAMES (Krishna et al., 2024), and GAIA (Mialon et al., 2023) as extra test sets. These datasets are not used during training and help evaluate how well the model works on new domains. We evaluate our approach on 500 randomly sampled instances from the validation sets of HotpotQA, 2WikiMultiHopQA, and MuSiQue. For Bamboogle and FRAMES, we use their full test sets. For GAIA, we use 103 examples from the text-only validation subset (Li et al., 2025b).

Metrics. We report results using two metrics: F1 score and LLM-as-Judge (LasJ). The F1 score captures the word-level similarity between the predicted and golden answers, while LasJ leverages GPT-4o-mini to evaluate the correctness of the predicted response.

Baselines. We consider the following types of baselines. Naive Generation: direct generation of answers without retrieval. Standard RAG (Zhao et al., 2024): directly retrieve relevant documents by querying the original question. Search-o1 (Li et al., 2025a): encourages the model to perform self-initiated retrieval using prompts. RAG-RL: R1-Searcher (Song et al., 2025) and DeepResearcher (Zheng et al., 2025), open-source 7B models trained with reinforcement learning to enable self-initiated retrieval. We conduct experiments
using the following model backbones with an online search engine: Qwen-2.5-7B-Instruct, Qwen-2.5-32B-Instruct, Deepseek-Distilled-Qwen-2.5-32B, and QwQ-32B.

Implementation Details. Our experimental setup consists of four main components: query sampling, data synthesis, generation, and SFT. During query sampling, we used QwQ-32B to annotate each query with its corresponding domain and keywords. For data synthesis, we employed QwQ-32B as the reasoning model and the Google Search API as the search engine, with a maximum of 10 search calls and 15 reasoning turns per query. For each query, we sampled 10 candidate responses. For generation, all models are configured with a maximum sequence length of 20,480 tokens, temperature of 0.6, top-p of 0.95, and top-k of 40. In the SFT phase, we use a total batch size of 64 and train for 6 epochs with a learning rate of 1e-5, warmup ratio of 0.03, and a sequence length of 30,000 tokens. During fine-tuning, external retrieval documents are masked to avoid learning from noisy or spurious information. All prompts used in the experiments are provided in Appendix E.

Models     Method              2Wiki†       MuSiQue†     Bamboogle‡   Frames‡      GAIA‡
                               F1    LasJ   F1    LasJ   F1    LasJ   F1    LasJ   F1    LasJ
Qwen-7B    Directly Gen        27.7  26.8   9.6   6.2    18.2  17.6   12.6  10.1   13.6  6.8
           Standard RAG        34.8  34.8   17.2  14.6   31.5  31.2   13.9  13.5   -     -
           Search-o1           48.0  51.2   21.5  20.6   57.9  59.2   30.9  35.0   24.3  21.4
           R1-Searcher         63.4  66.4   29.0  26.8   68.2  68.8   34.4  40.3   24.1  20.4
           DeepResearcher      59.7* 66.6*  27.1* 29.3*  71.0* 72.8*  -     -      -     -
           SimpleDeepSearcher  70.6  79.8   28.2  29.4   74.5  76.8   44.9  55.3   39.3  36.9
Qwen-32B   Directly Gen        31.7  31.2   13.3  12.4   25.7  25.6   15.6  14.2   18.6  13.9
           Standard RAG        43.7  45.0   19.5  16.8   40.8  40.8   19.4  19.4   -     -
           Search-o1           64.9  74.8   29.1  30.6   74.4  78.4   47.2  56.8   36.5  34.0
           SimpleDeepSearcher  71.9  81.2   30.6  33.0   78.1  80.0   50.1  60.8   42.1  40.8
DDQ-32B    Directly Gen        36.9  36.2   19.6  16.0   32.6  32.8   27.8  29.2   14.8  9.7
           Standard RAG        48.1  50.0   24.0  21.6   42.6  46.4   26.5  28.9   -     -
           Search-o1           49.6  55.2   25.4  23.8   65.7  68.0   32.2  38.7   23.2  24.3
           SimpleDeepSearcher  69.0  77.4   32.9  33.6   80.5  83.2   52.2  63.8   42.0  41.7
QwQ-32B    Directly Gen        39.6  39.8   18.9  17.4   29.6  29.6   28.1  31.3   16.8  11.7
           Standard RAG        48.4  50.6   21.8  19.4   42.5  46.4   27.4  31.6   -     -
           Search-o1           69.4  78.0   34.3  36.4   78.7  78.4   51.6  64.4   38.3  37.9
           SimpleDeepSearcher  75.6  84.4   34.8  37.4   83.4  88.0   56.8  68.8   48.9  50.5
Table 1: Performance comparisons between SimpleDeepSearcher and the baselines on QA benchmarks. The best results are in bold and the second-best are underlined. †/‡ represents in-domain/out-of-domain datasets. Results marked with * are cited from their official paper or report. Qwen-7B, Qwen-32B, and DDQ-32B are abbreviations of Qwen-2.5-7B-Instruct, Qwen-2.5-32B-Instruct, and Deepseek-Distilled-Qwen-2.5-32B, respectively.

3.2 Main Results

Table 1 presents the main results of the proposed SimpleDeepSearcher and baselines across five representative datasets.

Firstly, SimpleDeepSearcher consistently outperforms all existing baseline methods across the five benchmark datasets. Specifically, it achieves the best performance not only on in-domain datasets (i.e., 2Wiki, MuSiQue) but also shows substantial improvements on out-of-domain datasets (i.e., Bamboogle, FRAMES, GAIA), demonstrating its strong generalization ability.

Besides, SimpleDeepSearcher consistently outperforms reinforcement learning-based methods such as R1-Searcher and DeepResearcher across most evaluation metrics. These approaches are trained on large-scale datasets using complex reinforcement learning algorithms. In contrast, our method relies on supervised fine-tuning with only 871 training examples. This demonstrates that our framework achieves strong performance while maintaining high data efficiency, offering a simple yet effective alternative for improving deep search capabilities.

Thirdly, SimpleDeepSearcher achieves stable and substantial performance improvements across models with diverse backbones and parameter scales, ranging from 7B to 32B. For instance, compared to Search-o1, it achieves relative improvements of 48.3%, 42.6%, and 11.5% on Qwen2.5-7B-Instruct, DeepSeek-R1-Distill-Qwen-2.5-32B, and QwQ-32B, respectively. This demonstrates the strong generalization ability of our distillation and self-distillation strategies, with the selected data consistently leading to performance gains across heterogeneous model architectures.

Category           Method      Bamboogle (F1/LasJ)  GAIA (F1/LasJ)
Ours               -           74.5 / 76.8          39.3 / 36.9
Query Sampling     w/o DH      69.7 / 70.4          35.6 / 35.8
                   w/o KD      73.2 / 76.0          32.9 / 31.1
                   w/o CKU     71.7 / 74.4          32.1 / 29.1
Environment        w/o Online  74.0 / 74.4          30.4 / 28.2
Response Curation  w/o FS      72.8 / 75.2          38.0 / 36.9
                   w/o RPC     71.7 / 74.4          31.6 / 30.1
                   w/o QD      67.1 / 70.4          32.9 / 32.0
                   w/o SE      72.6 / 73.6          37.7 / 35.0
Table 2: Results of variants of SimpleDeepSearcher on Bamboogle and GAIA.

Method            Bamboogle (F1/LasJ)  GAIA (F1/LasJ)
Distilled (Ours)  74.5 / 76.8          39.3 / 36.9
w. DPO            75.0 / 79.2          39.0 / 37.9
w. Reinforce++    73.8 / 75.8          29.4 / 24.3
Table 3: Evaluation results of RL-based methods.

Model        #Alternatively  #Search  Output Length
QwQ-32B      7.933           2.390    867.148
QwQ-32B-SFT  4.051           2.329    581.731
Table 4: Statistical analysis of model outputs.
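The word-level F1 used throughout the results tables is the standard token-overlap score between predicted and golden answers. A minimal version is sketched below; the normalization steps (lowercasing, whitespace tokenization) are assumptions, since the paper does not spell them out.

```python
# Token-level F1 between a predicted answer and a gold answer.
from collections import Counter

def f1_score(prediction, gold):
    """Harmonic mean of word-level precision and recall."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # multiset overlap
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# For example, f1_score("the chalk quarry", "chalk quarry") gives 0.8:
# precision 2/3, recall 1.
```

Dataset-level scores are then simple averages of this per-example value.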
4 Further Analysis

4.1 Ablation Study

To validate the effectiveness of the proposed SimpleDeepSearcher, we conduct a comprehensive ablation analysis using Qwen2.5-7B-Instruct on the Bamboogle and GAIA datasets. We conduct detailed ablation studies on three main aspects: (1) Query Sampling: w/o DH removes the domain heterogeneity filter, w/o KD removes the keyword diversity filter, w/o CKU removes the complexity-of-knowledge-units filter; (2) Environment: w/o Online uses local dense retrieval to synthesize training data; (3) Response Curation: w/o FR removes the format regularization filter, w/o RPC removes the reasoning path control filter, w/o QD removes the question difficulty filter, w/o SC removes the search count filter. As shown in Table 2, all ablated variants exhibit a decline in performance compared to our full method, underscoring the integral contribution of each component. Among them, w/o QD leads to the most significant performance drop, suggesting that question difficulty plays a crucial role in training. More challenging questions are more likely to stimulate the model's autonomous retrieval capabilities during reasoning.

Table 5: Proportion of instances containing the correct answer at each stage of the inference process (Planning, Search, and Summarization), before and after SFT.

Model        Plan.   Search   Summ.
Qwen-7B      0.416   0.455    0.363
Qwen-7B-SFT  0.590   0.677    0.584
QwQ-32B      0.623   0.680    0.594
QwQ-32B-SFT  0.629   0.713    0.624

4.2 Effect of Post-SFT RL

Recent studies have investigated the integration of RL and RAG (Song et al., 2025; Jin et al., 2025; Zheng et al., 2025). We further examine the advantages and limitations of applying RL after SFT. We apply DPO and REINFORCE++ to conduct offline
and online reinforcement learning, respectively. As shown in Table 3, the model trained with DPO achieves further improvements over the SFT baseline, demonstrating the effectiveness of offline preference optimization. In contrast, the model trained with REINFORCE++ produces significantly shorter responses (see Appendix C) and shows notable performance degradation on both the Bamboogle and GAIA benchmarks. This suggests that online RL mainly triggers retrieval behavior, but brings little benefit to models that are already good at retrieval. We hypothesize that the success of offline DPO stems from its ability to leverage high-quality trajectories generated by a strong LLM. These trajectories provide informative preference signals and stable supervision, allowing the model to refine its reasoning and search strategies.

4.3 Response Redundancy

In this part, we analyze how SFT impacts redundant reasoning and search behavior. Specifically, we focus on three indicators: (1) the frequency of the reflective word "alternatively", which signals hesitation or divergent reasoning; (2) the average length of reasoning chains, measured by output length; and (3) the number of search calls made during inference. Our analysis is based on the QwQ-32B model, evaluated on the 2Wiki, MuSiQue, and Bamboogle datasets. As shown in Table 4, the average use of "alternatively" and the overall output length are both significantly reduced after SFT. Moreover, the model issues fewer search queries. These results indicate that our self-distillation approach improves both the reasoning clarity and search efficiency of the model.
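The three redundancy indicators can be computed mechanically from raw model outputs. The sketch below is our own illustration: the search tag follows the reasoning prompt in Appendix E, and whitespace splitting stands in for the tokenizer-based output length reported in Table 4.

```python
import re

# Toy computation of the three redundancy indicators discussed above:
# reflective-word frequency, number of search calls, and output length.
SEARCH_TAG = re.compile(r"<\|begin_search_query\|>")  # tag format from Appendix E

def redundancy_stats(output: str) -> dict:
    """Count 'alternatively' mentions, search calls, and rough token length."""
    return {
        "alternatively": len(re.findall(r"\balternatively\b", output, re.IGNORECASE)),
        "search_calls": len(SEARCH_TAG.findall(output)),
        # Whitespace split is a crude stand-in for tokenizer-based length.
        "output_length": len(output.split()),
    }

sample = (
    "Alternatively, we could search first. "
    "<|begin_search_query|> capital of France <|end_search_query|> "
    "Alternatively, reason directly."
)
stats = redundancy_stats(sample)
print(stats)  # {'alternatively': 2, 'search_calls': 1, 'output_length': 13}
```

Averaging these statistics over a benchmark's outputs reproduces the shape of Table 4.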
This improvement can be attributed to the high-quality training data selected through our proposed method.

Table 6: Performance comparison across two benchmarks using different summarization models.

Model         Summarization Model   Bamboogle (F1 / LasJ)   GAIA (F1 / LasJ)
Qwen-7B-SFT   before training       70.8 / 71.2             28.0 / 26.2
              after training        67.5 / 68.8             23.9 / 21.4
              QwQ-32B               74.5 / 76.8             39.3 / 36.9
              GPT-4o-mini           70.9 / 76.8             33.7 / 32.0
QwQ-32B-SFT   before training       83.5 / 88.0             48.9 / 50.5
              after training        83.9 / 86.4             43.2 / 47.6
              GPT-4o-mini           80.0 / 80.8             40.5 / 44.7

Table 7: Results of SimpleDeepSearcher trained w/ and w/o reasoning data across three benchmarks.

Training Data   Bamboogle (F1 / LasJ)   GAIA (F1 / LasJ)   AIME (F1 / LasJ)
- Reasoning     74.5 / 76.8             39.3 / 36.9        13.3 / 13.3
+ Reasoning     76.9 / 80.8             37.2 / 37.9        20.0 / 20.0

4.4 Stage-wise Analysis

In this part, we analyze how training improves the performance of each sub-task in our approach, including iterative search, planning, and summarization. We evaluate the proportion of cases in which the final answer appears during each sub-process to quantify the efficiency of that stage. To eliminate interference from the summarization stage, all summarization models are kept identical during inference, with detailed settings provided in Appendix D. The results are shown in Table 5. We observe substantial improvements across all components, with the search component showing the most significant gain. This suggests that training effectively enhances the model's ability to generate more coherent reasoning and search trajectories, leading to more accurate information retrieval and improved overall model performance.

4.5 Effect of Summarization Model

This part investigates the impact of the summarization model on overall performance. We fix the reasoning model and conduct a comparative analysis of overall performance using different summarization models. As shown in Table 6, QwQ-32B demonstrates the strongest summarization capability and is therefore selected as the summarization model for all reasoning models. Furthermore, using fine-tuned models for summarization leads to performance degradation on downstream tasks compared to their pre-trained counterparts. This might be attributed to the reduced long-text summarization ability of the fine-tuned models, due to distributional shifts toward the limited task and domain of the training data. This decline is more pronounced for models with fewer parameters.

Figure 2: Average reasoning length across three benchmarks w/ and w/o reasoning data for training.

Table 8: Average search count across three benchmarks of the model trained w/ and w/o reasoning data.

Training Data   Bamboogle   GAIA    AIME
- Reasoning     1.552       1.757   0
+ Reasoning     1.672       1.845   0

4.6 Effect of Additional Reasoning Data

We further investigate the impact of incorporating complex mathematical reasoning data on Qwen2.5-7B-Instruct. As shown in Table 7, this leads to consistent performance gains across all benchmarks. Furthermore, Figure 2 and Table 8 reveal significant alterations in the model's behavioral patterns on two kinds of tasks: for tasks emphasizing complex reasoning (e.g., AIME, GAIA), the model generates longer and more in-depth reasoning outputs; for search tasks (e.g., Bamboogle), the model performs more searches and explores more thoroughly. These findings suggest that incorporating complex reasoning data helps the model learn to adapt its reasoning and search strategies to the specific demands of a task. This adaptability is critical for addressing complex and diverse queries.
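The stage-wise metric behind Table 5, the proportion of instances whose gold answer appears in each stage's output, can be sketched as follows. The record layout is an assumption of ours, and simple case-insensitive substring matching stands in for the paper's actual answer check.

```python
# Sketch of the stage-wise hit-rate metric: for each instance, check whether
# the gold answer string appears in the text produced at each stage
# (planning, search, summarization), then average over instances.

def stage_hit_rates(records):
    """records: list of dicts with 'answer' plus one text field per stage."""
    stages = ("planning", "search", "summarization")
    hits = {s: 0 for s in stages}
    for rec in records:
        gold = rec["answer"].lower()
        for s in stages:
            if gold in rec[s].lower():
                hits[s] += 1
    n = len(records)
    return {s: hits[s] / n for s in stages}

demo = [
    {"answer": "Paris", "planning": "need capital of France",
     "search": "Paris is the capital of France", "summarization": "Answer: Paris"},
    {"answer": "1969", "planning": "moon landing year?",
     "search": "Apollo 11 landed in 1969", "summarization": "The year was 1969."},
]
print(stage_hit_rates(demo))
```

On this toy data, the search and summarization stages hit on both instances while planning hits on neither, mirroring how the per-stage proportions in Table 5 are read.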
5 Conclusion

In this work, we present SimpleDeepSearcher, a lightweight yet effective framework for deep search tasks, addressing the limitations of existing RAG methods that rely heavily on complex training paradigms or suffer from distributional mismatches. By leveraging realistic web search simulations and a multi-criteria data curation strategy, we construct high-quality training trajectories that enable efficient supervised fine-tuning. Despite using only 871 curated samples, our method achieves substantial gains over RL-based baselines across diverse in-domain and out-of-domain benchmarks. Our results highlight the potential of strategic data engineering to empower deep search reasoning.

Limitation

Despite our substantial efforts, this work is subject to two limitations. First, due to limitations in training resources and hardware, we conducted distillation training on 7B and 32B models. In future work, we plan to train and evaluate our framework on larger-scale models (i.e., 72B) to further verify its generalization capability and robustness. Second, because of the inherent difficulty in synthesizing multi-hop data, the original data used for distillation primarily consisted of relatively simple multi-hop questions. If more realistic and challenging multi-hop queries can be synthesized in the future, applying our framework for filtering and training may yield even better performance.

References

Salaheddin Alzubi, Creston Brooks, Purva Chiniya, Edoardo Contente, Chiara von Gerlach, Lucas Irwin, Yihan Jiang, Arda Kaz, Windsor Nguyen, Sewoong Oh, and others. 2025. Open deep search: Democratizing search with open-source reasoning agents. arXiv preprint arXiv:2503.20201.

Akari Asai, Zeqiu Wu, Yizhong
Wang, Avirup Sil, and Hannaneh Hajishirzi. 2024. Self-rag: Learning to retrieve, generate, and critique through self-reflection. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.

Scott Barnett, Stefanus Kurniawan, Srikanth Thudumu, Zach Brannelly, and Mohamed Abdelrazek. 2024. Seven failure points when engineering a retrieval augmented generation system. In Proceedings of the IEEE/ACM 3rd International Conference on AI Engineering - Software Engineering for AI, pages 194–199.

Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Meng Wang, and Haofen Wang. 2024. Retrieval-augmented generation for large language models: A survey. Preprint, arXiv:2312.10997.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In International Conference on Machine Learning, pages 3929–3938. PMLR.

Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing a multi-hop QA dataset for comprehensive evaluation of reasoning steps. arXiv preprint arXiv:2011.01060.

Soyeong Jeong, Jinheon Baek, Sukmin Cho, Sung Ju Hwang, and Jong C. Park. 2024. Adaptive-rag: Learning to adapt retrieval-augmented large language models through question complexity. arXiv preprint arXiv:2403.14403.

Jinhao Jiang, Jiayi Chen, Junyi Li, Ruiyang Ren, Shijie Wang, Wayne Xin Zhao, Yang Song, and Tao Zhang. 2024. Rag-star: Enhancing deliberative reasoning with retrieval augmented verification and refinement. CoRR, abs/2412.12881.

Bowen Jin, Hansi Zeng, Zhenrui Yue, Dong Wang, Hamed Zamani, and Jiawei Han. 2025. Search-r1: Training LLMs to reason and leverage search engines with reinforcement learning. CoRR, abs/2503.09516.

Jaehyung Kim, Jaehyun Nam, Sangwoo Mo, Jongjin Park, Sang-Woo Lee, Minjoon Seo, Jung-Woo Ha, and Jinwoo Shin. 2024.
Sure: Summarizing retrievals using answer candidates for open-domain QA of LLMs. In The Twelfth International Conference on Learning Representations.

Satyapriya Krishna, Kalpesh Krishna, Anhad Mohananey, Steven Schwarcz, Adam Stambler, Shyam Upadhyay, and Manaal Faruqui. 2024. Fact, fetch, and reason: A unified evaluation of retrieval-augmented generation. arXiv preprint arXiv:2409.12941.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, and others. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466.

Xiaoxi Li, Guanting Dong, Jiajie Jin, Yuyao Zhang, Yujia Zhou, Yutao Zhu, Peitian Zhang, and Zhicheng Dou. 2025a. Search-o1: Agentic search-enhanced large reasoning models. CoRR, abs/2501.05366.

Xiaoxi Li, Jiajie Jin, Guanting Dong, Hongjin Qian, Yutao Zhu, Yongkang Wu, Ji-Rong Wen, and Zhicheng Dou. 2025b. Webthinker: Empowering large reasoning models with deep research capability. arXiv preprint arXiv:2504.21776.

Yucheng Li, Bo Dong, Frank Guerin, and Chenghua Lin. 2023. Compressing context to enhance inference efficiency of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6342–6353, Singapore. Association for Computational Linguistics.

Grégoire Mialon, Clémentine Fourrier, Thomas Wolf, Yann LeCun, and Thomas Scialom. 2023. Gaia: a benchmark for general AI assistants. In The Twelfth International Conference on Learning Representations.

Jie Ouyang, Tingyue Pan, Mingyue Cheng, Ruiran Yan, Yucong Luo, Jiaying Lin, and
Qi Liu. 2025. Hoh: A dynamic benchmark for evaluating the impact of outdated information on retrieval-augmented generation. arXiv preprint arXiv:2503.04800.

Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2022. Measuring and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350.

Ruiyang Ren, Yuhao Wang, Junyi Li, Jinhao Jiang, Wayne Xin Zhao, Wenjie Wang, and Tat-Seng Chua. 2025. Holistically guided Monte Carlo tree search for intricate information seeking. arXiv preprint arXiv:2502.04751.

Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023.

Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. 2023. Enhancing retrieval-augmented large language models with iterative retrieval-generation synergy. arXiv preprint arXiv:2305.15294.

Huatong Song, Jinhao Jiang, Yingqian Min, Jie Chen, Zhipeng Chen, Wayne Xin Zhao, Lei Fang, and Ji-Rong Wen. 2025. R1-searcher: Incentivizing the search capability in LLMs via reinforcement learning. CoRR, abs/2503.05592.

Hao Sun, Zile Qiao, Jiayan Guo, Xuanbo Fan, Yingyan Hou, Yong Jiang, Pengjun Xie, Fei Huang, and Yan Zhang. 2025. Zerosearch: Incentivize the search capability of LLMs without searching. arXiv preprint arXiv:2505.04588.

Richard S. Sutton, Andrew G. Barto, and others. 1999. Reinforcement learning. Journal of Cognitive Neuroscience, 11(1):126–134.

Yixuan Tang and Yi Yang. 2024. Multihop-rag: Benchmarking retrieval-augmented generation for multi-hop queries. arXiv preprint arXiv:2401.15391.

Fengwei Teng, Zhaoyang Yu, Quan Shi, Jiayi Zhang, Chenglin Wu, and Yuyu Luo. 2025.
Atom of thoughts for Markov LLM test-time scaling. arXiv preprint arXiv:2502.12018.

Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2023. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10014–10037.

Liang Wang, Haonan Chen, Nan Yang, Xiaolong Huang, Zhicheng Dou, and Furu Wei. 2025. Chain-of-retrieval augmented generation. CoRR, abs/2501.14342.

Jason Wei, Nguyen Karina, Hyung Won Chung, Yunxin Joy Jiao, Spencer Papay, Amelia Glaese, John Schulman, and William Fedus. 2024. Measuring short-form factuality in large language models. arXiv preprint arXiv:2411.04368.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, Denny Zhou, and others. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600.

Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, and Bin Cui. 2024. Retrieval-augmented generation for AI-generated content: A survey. CoRR, abs/2402.19473.

Yuxiang Zheng, Dayuan Fu, Xiangkun Hu, Xiaojie Cai, Lyumanshan Ye, Pengrui Lu, and Pengfei Liu. 2025. Deepresearcher: Scaling deep research
via reinforcement learning in real-world environments. arXiv preprint arXiv:2504.03160.

A Related Work

Retrieval-Augmented LLMs. To improve the factual precision of LLM-generated texts (Zhao et al., 2024), researchers enhance LLMs with retrieval-augmented generation (RAG) (Guu et al., 2020). Various approaches have been proposed, such as branching-based methods (Kim et al., 2024), summarization-based methods (Li et al., 2023), and adaptive retrieval techniques (Jeong et al., 2024). With the increase in model parameters, LLMs have demonstrated chain-of-thought reasoning capabilities (Wei et al., 2022), and many researchers have integrated such reasoning with RAG via prompt engineering (Shao et al., 2023; Trivedi et al., 2023). Other studies have attempted to distill retrieval abilities into smaller models through supervised fine-tuning (Asai et al., 2024). However, these approaches limit the model's capacity with a fixed reasoning path.

Enhancing LLMs with Search. Recently, several deep search frameworks have been proposed (Ren et al., 2025; Alzubi et al., 2025). They integrate large language models with search engines in a more flexible and dynamic manner. Search-o1 (Li et al., 2025a) simulates deep search in LLMs through prompt engineering, allowing them to retrieve information independently during multi-step reasoning. R1-Searcher (Song et al., 2025) and Search-R1 (Jin et al., 2025) equip large language models with retrieval tools and train them end-to-end using reinforcement learning. This approach effectively enhances the model's ability to interleave reasoning with retrieval during inference. However, due to the inherent complexity of RL and its high computational demands, conducting large-scale experiments on full-sized LLMs remains challenging. SimpleDeepSearcher synthesizes high-quality training data via broad query sampling and precise filtering, enabling strong deep search performance with minimal training cost.
B DPO Detailed Settings

Our objective was to identify answer trajectories that were both correct and demonstrated efficient reasoning and search paths. To this end, we construct preference pairs (Rw, Rl), where Rw denotes the preferred trajectory and Rl the rejected one. We repurpose our previously established pipeline for query sampling and data synthesis. During the data synthesis stage, we generate responses using the strongest SFT-trained model, SimpleDeepSearcher-QwQ-32B-SFT, and the target model to be optimized, SimpleDeepSearcher-Qwen-7B-SFT. Responses generated by SimpleDeepSearcher-QwQ-32B-SFT that pass both the formatting and reasoning path control checks are treated as chosen examples, while those generated by SimpleDeepSearcher-Qwen-7B-SFT that fail these checks are treated as rejected examples. Ultimately, we construct a dataset consisting of approximately 875 training pairs.

Figure 3: Changes in Sequence Length and Reward During REINFORCE++ Training.

For Direct Preference Optimization (DPO) training, we utilize a learning rate of 5 × 10−7, a β of 0.1, training for 5 epochs with a batch size of 256, a warm-up ratio of 0.1, and a maximum sequence length of 10,000.

C REINFORCE++ Detailed Settings

To construct the reinforcement learning (RL) dataset, we utilized the model that had been trained through SimpleDeepSearcher to perform rollout sampling on the training sets of 2Wiki and HotpotQA. For each question, eight candidate responses were generated. From this pool, we selected 2480 samples corresponding to questions with one to six correct answers, ensuring
diversity in the RL training data.

The reward function employed in REINFORCE++ consists of two components: an answer reward and a format penalty. The answer reward is calculated as the F1 score between the predicted answer and the reference answer, providing a direct measure of response accuracy. In addition, a discrete format penalty of −2 is applied if any of the following undesirable behaviors are detected:

• Self-Retrieved Content: The model fabricates content that is not retrieved from external sources.
• Contains Gibberish: The generated output contains nonsensical, irrelevant, or corrupted text segments.
• Excessive Analytical Markers: The response contains more than 5 occurrences of phrases such as Alternatively, Wait, or Hmm, which are treated as signals of incoherent reasoning.
• Lack of Boxed Answers or Excessive Reasoning Length: The model either executes more than 8 retrieval steps or the token length of the analytical content between any two retrievals exceeds 8,096 tokens.

If none of these conditions are met, no penalty is applied. To maintain on-policy training throughout the RL process, we adjusted the batch size to ensure that learning was based on the most recent policy rollouts. Figure 3 shows the variations in response length and reward values observed during the training process.

D Stage-wise Analysis Settings

We conduct a comparative analysis of Qwen2.5-7B-Instruct and QwQ-32B before and after training across the 2Wiki, MuSiQue, and Bamboogle benchmarks. During inference, we fix the summarization model to QwQ-32B across all comparisons to eliminate potential interference from the summarization component (the impact of the summarization model is discussed further in Section 4.5).

E Instruction Templates

Figure 4 shows the instruction for annotating the domain and keywords of questions. Figure 5 shows the instruction for LLM as a judge. Figure 6 shows the instruction for the reasoning model.
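The reward described above can be sketched as follows. Only the two mechanically checkable penalty conditions (analytical-marker count and retrieval-step count) are implemented here; detecting self-retrieved content or gibberish would need additional machinery, and the helper names are ours.

```python
from collections import Counter

# Sketch of the REINFORCE++ reward: token-level F1 as the answer reward,
# plus a flat -2 format penalty when undesirable behaviors are detected.

def token_f1(pred: str, ref: str) -> float:
    """Token-level F1 between predicted and reference answers."""
    p, r = pred.lower().split(), ref.lower().split()
    overlap = sum((Counter(p) & Counter(r)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(r)
    return 2 * precision * recall / (precision + recall)

def format_penalty(response: str) -> float:
    """-2 if markers exceed 5 or retrieval steps exceed 8, else 0."""
    markers = sum(response.count(w) for w in ("Alternatively", "Wait", "Hmm"))
    searches = response.count("<|begin_search_query|>")
    return -2.0 if markers > 5 or searches > 8 else 0.0

def reward(response: str, pred: str, ref: str) -> float:
    return token_f1(pred, ref) + format_penalty(response)
```

For example, `token_f1("Paris France", "Paris")` gives 2/3 (precision 0.5, recall 1.0), and a response repeating "Wait" six times incurs the flat −2 penalty.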
Figure 7 shows the instruction for the summarization model.

Instruction for Annotation

You are an advanced semantic analyzer. For the given question, perform the following tasks step by step:
1. **Domain Identification**:
- Determine the broad subject category (domain) this question belongs to.
- Examples: film, history, biology, geography, politics, technology, etc. (or any other suitable domain)
2. **Key Point Extraction**:
- Identify 2-4 core semantic components that are crucial for answering
- Include:
• Key entities (e.g., films, people, locations)
• Critical attributes (e.g., age, duration, population)
• Core relationships (e.g., comparison, causality)
• Measurement dimensions (e.g., time, quantity)
- Exclude filler words and non-essential descriptors

**Output Requirements**:
- Use JSON format: {{"domain": "...", "key_points": [...]}}
- Keep key_points concise (1-2 words each)
- Use lowercase for all outputs
- Separate multiple key_points with commas

**Examples**:
Question: "Which film whose director is younger, Charge It To Me or Danger: Diabolik?"
Output: {{"domain": "film", "key_points": ["director", "age"]}}

**Now process this question:** {{Question}}

Figure 4: Instruction for annotation.

Instruction for LLM as Judge

Given a Question and its Golden Answer, verify whether the Predicted Answer is correct. The prediction is correct if it fully aligns with the meaning and key information of the Golden
Answer. Respond with True if the prediction is correct and False otherwise. Golden Answer may have multiple options, and matching any one of them is considered correct.

Question: {question}
Golden Answer: {reference}
Predicted Answer: {prediction}

Figure 5: Instruction for LLM as a judge.

Instruction for Reasoning Model

You are a reasoning assistant with the ability to perform web searches to help you answer the user's question accurately. You have special tools:
- To perform a search: write <|begin_search_query|> your query here <|end_search_query|>.
Then, the system will search and analyze relevant web pages, then provide you with helpful information in the format <|begin_search_result|> ...search results... <|end_search_result|>.
Whenever you encounter a topic, fact, or piece of information you are uncertain about or need further details on, please perform a search to gather more accurate, up-to-date, or specific information. You can repeat the search process multiple times if necessary. The maximum number of search attempts is limited to {MAX_SEARCH_LIMIT}.
Once you have all the information you need, continue your reasoning.
Remember:
- Use <|begin_search_query|> to request a web search and end with <|end_search_query|>.
- When done searching, continue your reasoning.
- Do not generate <|begin_search_result|> and <|end_search_result|> tags yourself.
Please answer the following question. You should think step by step to solve it.
Provide your final answer in the format \boxed{YOUR_ANSWER}.
Question: {question}

Figure 6: Instruction for reasoning model.

Instruction for Summarization Model

**Task Instruction:**

You are tasked with reading and analyzing web pages based on the following inputs: **Previous Reasoning Steps**, **Current Search Query**, and **Searched Web Pages**.
Your objective is to extract relevant and helpful information for **Current Search Query** from the **Searched Web Pages** and seamlessly integrate this information into the **Previous Reasoning Steps** to continue reasoning for the original question.

**Guidelines:**
1. **Analyze the Searched Web Pages:**
- Carefully review the content of each searched web page.
- Identify factual information that is relevant to the **Current Search Query** and can aid in the reasoning process for the original question.
2. **Extract Relevant Information:**
- Select the information from the Searched Web Pages that directly contributes to advancing the **Previous Reasoning Steps**.
- Ensure that the extracted information is accurate and relevant.
3. **Output Format:**
- Present the helpful information for the current search query, beginning with `**Final Information**` as shown below.
**Final Information**
[Helpful information]

**Inputs:**
- **Previous Reasoning Steps:** {prev_reasoning}
- **Current Search Query:** {search_query}
- **Searched Web Pages:** {document}

Now you should analyze each web page and find helpful information based on the current search query "{search_query}" and previous reasoning steps.

Figure 7: Instruction for summarization model.
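Filling such templates programmatically is straightforward. The sketch below paraphrases the judge instruction of Figure 5 (the exact wording lives in that figure) and only illustrates how the {question}/{reference}/{prediction} slots are populated before the prompt is sent to the judge model.

```python
# Toy template fill for the LLM-as-judge prompt; the template text is a
# paraphrase of Figure 5, not the verbatim instruction.
JUDGE_TEMPLATE = (
    "Given a Question and its Golden Answer, verify whether the Predicted "
    "Answer is correct. Respond with True or False.\n"
    "Question: {question}\n"
    "Golden Answer: {reference}\n"
    "Predicted Answer: {prediction}"
)

prompt = JUDGE_TEMPLATE.format(
    question="Which film whose director is younger, Charge It To Me or Danger: Diabolik?",
    reference="Danger: Diabolik",
    prediction="Danger: Diabolik",
)
print(prompt.splitlines()[-1])  # Predicted Answer: Danger: Diabolik
```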
arXiv:2505.16838v1 [cs.CL] 22 May 2025

R1-Compress: Long Chain-of-Thought Compression via Chunk Compression and Search

Yibo Wang1, Li Shen2∗, Huanjin Yao1, Tiansheng Huang, Rui Liu3, Naiqiang Tan3, Jiaxing Huang4, Kai Zhang1, Dacheng Tao4
1Tsinghua University, 2Sun Yat-sen University, 3Didichuxing Co. Ltd, 4Nanyang Technological University

Abstract

Chain-of-Thought (CoT) reasoning enhances large language models (LLMs) by enabling step-by-step problem-solving, yet its extension to Long-CoT introduces substantial computational overhead due to increased token length. Existing compression approaches, instance-level and token-level, either sacrifice essential local reasoning signals like reflection or yield incoherent outputs. To address these limitations, we propose R1-Compress, a two-stage chunk-level compression framework that preserves both local information and coherence. Our method segments Long-CoT into manageable chunks, applies LLM-driven inner-chunk compression, and employs an inter-chunk search mechanism to select the short and coherent sequence. Experiments on Qwen2.5-Instruct models across MATH500, AIME24, and GPQA-Diamond demonstrate that R1-Compress significantly reduces token usage while maintaining comparable reasoning accuracy. On MATH500, R1-Compress achieves an accuracy of 92.4%, with only a 0.6% drop compared to the Long-CoT baseline, while reducing token usage by about 20%. Source code will be available at https://github.com/w-yibo/R1-Compress.

1 Introduction

Chain-of-Thought (CoT) reasoning [12, 29, 35] has recently emerged as a powerful technique that enables large language models (LLMs) to perform complex reasoning tasks, such as mathematical problem solving [8, 14] and code generation [2, 10], by decomposing the reasoning process into a sequence of intermediate steps. Recent advancements, including OpenAI's o1 [21] and DeepSeek-R1 [6], leverage reinforcement learning to scale to Long-CoT, further improving performance and enabling LLMs to tackle real-world tasks.
∗Corresponding Authors. Preprint.

However, the extended token length in Long-CoT incurs substantial computational overhead, leading to slower inference and a dramatic increase in KV cache memory usage [27, 24, 28]. These factors significantly hinder practical deployment and impose greater demands on hardware infrastructure. Therefore, developing efficient compression methods for Long-CoT that preserve its reasoning capabilities is of critical importance for enabling scalable and deployable reasoning systems.

Existing methods for CoT compression can be broadly categorized into two paradigms: instance-level compression and token-level compression. Instance-level compression includes C3oT [11] and CoT-Valve [17]. C3oT utilizes powerful LLMs like GPT-4 to directly compress entire CoT sequences. CoT-Valve compresses the length of CoT by identifying and manipulating a specific direction in the parameter space. These methods aim to retain the essential reasoning path while reducing the global token count. Token-level compression, such as TokenSkip [31], adopts a more fine-grained strategy by identifying and skipping unimportant tokens. This allows for a compressed representation that retains detailed local information.

However, our evaluation results show that instance-level compression can degrade local information by reducing the global token count: the amount of reflection in Long-CoT is reduced, leading to a decline in performance. As reflection is a crucial capability within Long-CoT that enables LLMs to self-reflect and explore the correct answer, it needs to be preserved through a more fine-grained compression approach. TokenSkip, as a token-level method, preserves local information such as reflection by skipping only unimportant tokens. However, through observation and analysis, we find that this direct token-skipping approach often leads to incoherent compressed CoT, creating
a gap from the natural language patterns typically used by LLMs.

Based on the above findings, it seems that effectively compressing Long-CoT cannot be achieved solely through instance-level or token-level methods. Therefore, we propose a chunk-level compression approach, which better preserves chunk-level local information and can be implemented via prompting LLMs, thus maintaining linguistic coherence. However, since each chunk is compressed independently, contextual connections between chunks are lost. A subsequent question follows: although coherence within each chunk can be ensured, how can coherence be ensured across chunks?

Driven by this question, we propose a chunk search mechanism that generates multiple compressed candidate chunks and employs a search model to select the most coherent one. Conditioned on the previously selected optimal chunk, the search model identifies the candidate with the highest likelihood of maintaining continuity, thereby enhancing coherence across the compressed reasoning process. To improve efficiency, we first filter each chunk's candidates to retain a smaller subset.

To this end, we propose R1-Compress (Figure 1), a two-stage method designed to compress Long-CoT at the chunk level: i) the original CoT is segmented into multiple chunks based on predefined length and formatting constraints, and within each chunk an LLM is prompted to perform local compression; ii) we generate multiple compressed candidates for each chunk and perform a chunk-level search to obtain the short and coherent one. By combining inner-chunk compression with inter-chunk search, our method yields a compressed yet consistent CoT, enabling efficient and coherent reasoning.

We evaluate our method with Qwen2.5-14B-Instruct and Qwen2.5-32B-Instruct [25] on a subset of the Open-Math-R1 dataset with responses generated by DeepSeek-R1.
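The inter-chunk search described above can be sketched greedily as follows. The `coherence` function is a toy lexical-overlap stand-in for the search model's continuation likelihood in the paper, and the tie-break toward shorter candidates mirrors the "short and coherent" selection criterion.

```python
# Greedy inter-chunk search sketch: for each chunk we have several compressed
# candidates; conditioned on the previously selected prefix, keep the candidate
# with the best (toy) coherence score, preferring shorter candidates on ties.

def coherence(prev: str, cand: str) -> float:
    """Toy coherence score: fraction of candidate words seen in the prefix."""
    prev_words = set(prev.lower().split())
    cand_words = cand.lower().split()
    if not cand_words:
        return 0.0
    return sum(w in prev_words for w in cand_words) / len(cand_words)

def search_chunks(candidate_lists):
    """Pick one candidate per chunk: max coherence, then min length."""
    selected, prefix = [], ""
    for candidates in candidate_lists:
        best = max(candidates, key=lambda c: (coherence(prefix, c), -len(c)))
        selected.append(best)
        prefix += " " + best
    return selected

chunks = [
    ["let x be bob's age", "we denote bob's age by the variable x"],
    ["then alice's age is 2x", "alice, being twice as old, is 2x years old"],
]
print(search_chunks(chunks))
```

In the paper, the score comes from a search model's likelihood of the candidate as a continuation of the selected prefix; any scorer with the same interface slots into `search_chunks` unchanged.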
Experiments are conducted on the MATH500 [8] and AIME24 [18] benchmarks for mathematical reasoning, and on GPQA [26] for out-of-distribution reasoning. Results show that our method consistently reduces inference token usage across model scales and datasets while maintaining comparable accuracy. Our method achieves 92.4% accuracy on MATH500, only 0.6% below the Long-CoT baseline (93.0%), with about a 20% reduction in valid token usage (from 2406 to 1949).

The contributions of this paper are as follows:

• We find that instance-level compression methods tend to overlook local information, such as reducing the number of reflections in Long-CoT, which negatively impacts performance. In addition, our analysis reveals that token-level methods often produce CoT lacking coherence.

• To preserve the local information of Long-CoT and generate coherent reasoning chains, we propose R1-Compress, a two-stage chunk-level approach. This method combines inner-chunk compression with inter-chunk search to produce CoT that are both efficient and coherent.

• Extensive results demonstrate that our method can effectively reduce the length of CoT while maintaining the model's reasoning performance across reasoning benchmarks.

2 Related Work

Chain-of-Thought. CoT prompting [30] has emerged as a powerful technique for improving the reasoning capabilities of large language models (LLMs). By encouraging the model to solve complex problems step by step, CoT significantly enhances the accuracy and interpretability of its outputs. It is particularly effective for tasks that require multiple solving steps, such as mathematical problem-solving and logical reasoning. Beyond the basic CoT paradigm, many innovative frameworks like Tree of Thought (ToT) [36]
and Graph of Thought (GoT) [1] expand upon the CoT architecture by investigating various reasoning trajectories or integrating network-based structures. Besides, chain-of-thought reasoning also enables humans to comprehend the model's decision-making pathway, rendering the reasoning process both transparent and credible.

[Figure 1 graphic: a worked example (Alice is twice as old as Bob) showing the Long-CoT passing through Chunk Segmentation, Inner-Chunk Compression into multiple candidates per chunk, and Inter-Chunk Search (max probability, min length) that concatenates the selected candidates into the Compressed CoT.]

Figure 1: Pipeline of our method.
The Long-CoT is segmented into chunks, multiple compressed candidates for each chunk are generated using an LLM, and a compressed CoT is then constructed chunk by chunk through inter-chunk search with length filtering and probability-based selection.

Efficient Reasoning. Some approaches adopt sampling-based and post-training techniques to fine-tune existing Long-CoT models for shorter outputs. For example, Overthinking [4] utilizes DPO and SimPO to construct preference datasets for concise reasoning and trains models accordingly. O1-Pruner [15] establishes baselines for CoT length and accuracy via sampling, then employs offline optimization to shorten CoT without degrading performance. Concise Reasoning [19] leverages simple fine-tuning strategies based on self-generated concise CoT obtained through best-of-N sampling and few-shot prompting. Other methods use different reasoning paradigms to enhance efficiency. For example, Speculative Thinking [34] enables large reasoning models to guide smaller ones during inference at
the reasoning level; LightThinker [37] dynamically compresses intermediate thoughts during reasoning; and Sleep-time Compute [13] allows models to "think" offline about contexts before queries are presented. Additionally, methods like COCONUT [7] and CCOT [5] enable reasoning in the latent space. Beyond these, other work [32, 22, 16, 23, 38, 33, 9, 20] also designs novel reasoning paradigms for efficiency.

Chain-of-Thought Compression. Several methods aim to directly compress the Chain-of-Thought (CoT) of large reasoning models. C3oT employs LLMs to compress CoT end-to-end. CoT-Valve [17] controls the parameter space to generate CoT samples with varying levels of compression for training models that output shorter reasoning paths. TokenSkip [31] selectively removes tokens based on their estimated importance within the CoT.

3 Revisiting Long-CoT Compression

3.1 Problem Setup

Long-CoT. Long-CoT approaches, such as OpenAI's o1 and DeepSeek-R1, exhibit the ability to identify and correct their own mistakes by decomposing complex reasoning steps into simpler subproblems. This iterative process significantly enhances the model's reasoning capability. Long-CoT typically consists of multiple steps. In this work, we adopt the responses generated by DeepSeek-R1 as representative Long-CoT.

SFT with Compressed Long-CoT. This work focuses on compressing the token length of Long-CoT by directly reducing the original Long-CoT into shorter reasoning chains. In our setup, the Long-CoT baseline refers to the model fine-tuned on the original Long-CoT responses, while a Long-CoT compression method denotes the model fine-tuned on the compressed versions of Long-CoT. The latter retains the reasoning capabilities of Long-CoT while reducing the number of output tokens.
3.2 Revisiting Instance-level Compression

In this section, we investigate existing instance-level compression methods and discuss their limitations in preserving local information, particularly the reflection steps within Long-CoT. Further experimental analysis reveals that this reduction in reflection leads to a decline in performance.

Reflection. Reflection is the model's ability to evaluate and revise its reasoning process during problem-solving. It enables the model to recognize and correct its own mistakes, decompose complex steps into simpler components, and adapt its strategy when the current approach proves ineffective. This iterative process of self-assessment and adjustment plays a crucial role in enhancing the model's overall reasoning capability.

[Figure 2 graphic: side-by-side excerpts of the LongCoT, CoT-Valve, and C3oT solutions to the equation $(16 \cdot 5^{2x-1} - 2 \cdot 5^{x-1} - 0.048)\lg(x^3 + 2x + 1) = 0$.]

Figure 2: Comparison of LongCoT, CoT-Valve, and C3oT. Red text indicates reflection-related phrases such as "Wait".

To compress the Long-CoT from DeepSeek-R1, we select C3oT, which simply prompts an LLM to obtain the compressed CoT, and CoT-Valve, which manipulates a specific direction in the parameter space to reduce the length of CoT. As shown in Figure 2, we find that the compressed CoT obtained through these two methods preserve certain key steps and reach the final answer. However, compared to the original Long-CoT, they omit a considerable number of intermediate steps and exploratory attempts, particularly the processes of reflection and strategy switching that are often essential for arriving at the correct solution.
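As a concrete illustration of how reflection can be measured (Sec. 5.3 lists the keyword set), counting reflection markers can be sketched as follows; `count_reflections` and the case-insensitive whole-word matching rule are our assumptions, not code from the paper:

```python
import re

# Reflection-related keywords from Sec. 5.3 of the paper.
REFLECTION_KEYWORDS = ("wait", "alternatively", "emm", "hmm")

def count_reflections(cot: str) -> int:
    """Count case-insensitive whole-word occurrences of reflection markers."""
    total = 0
    for kw in REFLECTION_KEYWORDS:
        # \b ensures "wait" does not match inside e.g. "waiter".
        total += len(re.findall(rf"\b{kw}\b", cot, flags=re.IGNORECASE))
    return total
```

Averaging this count over a sample of CoT traces yields the "Avg. Reflection" statistic reported below.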
Since reflection is a critical reasoning skill that models are expected to learn from Long-CoT supervision, the absence of this capability prompts an important question: does the reduced frequency of reflection in compressed Long-CoT adversely affect the performance of models fine-tuned on it?

To evaluate this, we count the occurrences of indicative reflection-related keywords (see Sec. 5.3 for more details), an approach also adopted by other methods [3]. We calculate the average number of reflection steps over 500 examples for both the original Long-CoT and the CoT compressed by the two methods above. We then evaluate the performance of models fine-tuned on these different CoT. As shown in Table 1, the quantitative results reveal that as the number of reflection steps decreases, the performance of the fine-tuned model deteriorates, indicating that the absence of reflection impairs the model's reasoning ability.

Table 1: Comparison of methods on average reflection and accuracy on MATH500.

Method     Avg. Reflection   Accuracy (%)
Long-CoT   18.68             88.0
C3oT       0.15              65.8
CoT-Valve  8.36              77.4

Conclusion. Instance-level compression methods operate from a global perspective and fail to preserve local information such as reflection, which is crucial for reasoning. Since the presence of reflection significantly affects the reasoning ability of fine-tuned models, a more fine-grained compression strategy is needed to effectively retain such local information.

3.3 Revisiting Token-level Compression

Token-level methods are inherently capable of preserving local information in Long-CoT, such as reflection. In this section, we explore existing token-level compression approaches and analyze the coherence issues observed in the resulting CoT. Furthermore, we perform a quantitative analysis using loss values derived from probabilistic predictions.

[Figure 3 graphic: TokenSkip-compressed and original CoT excerpts for the problem of finding $\lim_{n\to\infty} \ln a_n$, where $a_n$ maximizes $\int_x^{2x} e^{-t^n}\,dt$ for $x \ge 0$.]

Figure 3: Left: Example of TokenSkip CoT compression. Right: Token-level loss curves of Long-CoT and TokenSkip.
We select TokenSkip as a representative token-level method to compress Long-CoT generated by DeepSeek-R1. As shown in Figure 3 (left), although TokenSkip can identify and remove unimportant tokens, thus partially preserving the original semantic content, we observe that the compressed CoT often exhibit clear incoherence, for example, "of function" and ", can check if". We attribute this to a mismatch between the compressed outputs and the natural language patterns expected by LLMs. This gap not only results in incoherent outputs after supervised fine-tuning, but also affects the training dynamics by increasing the prediction loss on the unnatural input text.

We quantify coherence using token-level loss, computed as the negative log-likelihood of each token in the compressed CoT as predicted by the LLM (see more details in Sec. 5.4). As shown in Figure 3 (right), the token-level loss of TokenSkip is generally higher than that of the original, which further indicates a significant inconsistency between its output and the original content, making it less aligned with the typical output patterns of LLMs.

Conclusion. Token-level compression
methods often produce incoherent CoT, which can negatively impact the training process and lead to models that generate incoherent outputs. In contrast, instance-level methods, such as C3oT, compress global information by prompting LLMs, resulting in more coherent outputs.

3.4 Derived Insight

Based on the above analysis, we argue that effectively compressing Long-CoT cannot be achieved solely through instance-level or token-level methods. To address this, we propose compressing Long-CoT at the chunk level. Chunk-level compression allows for better preservation of local information within each chunk while maintaining stronger inner-chunk coherence. To ensure coherence across chunks, we introduce an inter-chunk search mechanism, which selects the most coherent sequence of chunks. Additionally, we incorporate a search over compression lengths to further enhance compression efficiency.

4 Method

4.1 Chunk Segmentation

We are given a dataset of problem-solution pairs, denoted as $\mathcal{D} = \{(x_k, y_k)\}_{k=1}^{N}$, where $x_k$ represents a problem and $y_k = [y_k^1, \ldots, y_k^{m_k}]$ denotes its corresponding solution generated by a large language model (LLM) parameterized by $\theta$, denoted as $\pi_\theta(\cdot \mid x_k)$. Each solution $y_k$ (CoT) is segmented into a sequence of $m_k$ constituent chunks:

$$y_k = [c_{k,1}, c_{k,2}, \ldots, c_{k,m_k}]$$

To obtain the chunks $c_{k,j}$ from the raw text $y_k$, we use the following segmentation strategy:

Minimum length requirement: a chunk must contain at least a predefined minimum number of characters or tokens (e.g., 50 words).

Double newline boundary: a chunk ends when two consecutive newline characters (`\n\n`) are encountered, provided that the current chunk has met the minimum length requirement.

This results in variable-length chunks that are semantically meaningful and structurally coherent, often corresponding to paragraphs or logical substeps in a solution.
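The two segmentation rules above can be sketched as follows; this is a minimal illustration under our own assumptions (a word-based length threshold, a short trailing remainder merged into the last chunk), and `segment_into_chunks` is a hypothetical helper, not the paper's implementation:

```python
def segment_into_chunks(cot: str, min_words: int = 50) -> list[str]:
    """Split a CoT at double-newline boundaries, closing a chunk only
    once it has reached the minimum length requirement."""
    chunks, current = [], []
    for paragraph in cot.split("\n\n"):
        current.append(paragraph)
        # Close the chunk at this boundary only if it is long enough.
        if sum(len(p.split()) for p in current) >= min_words:
            chunks.append("\n\n".join(current))
            current = []
    if current:  # attach any short remainder rather than emit a tiny chunk
        if chunks:
            chunks[-1] += "\n\n" + "\n\n".join(current)
        else:
            chunks.append("\n\n".join(current))
    return chunks
```

Because chunks only close at `\n\n` boundaries, rejoining them with `\n\n` reconstructs the original text, so the segmentation is lossless.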
This chunking strategy ensures that each $c_{k,j}$ captures a complete unit of reasoning or explanation, which is essential for later compression and search.

4.2 Inner-Chunk Compression

The simplification process for a given pair $(x, y)$ (dropping the index $k$ for clarity) involves the following steps. For each chunk $c_i$ in the solution $y = [c_1, c_2, \ldots, c_m]$, we utilize a separate LLM, parameterized by $\phi$ and denoted as $\pi_\phi$, to generate multiple simplified candidate versions. Given the original chunk $c_i$ and a suitable prompt $p$, we sample $M$ candidate simplified chunks from the conditional distribution $\pi_\phi(\cdot \mid p, c_i)$. These candidates for chunk $c_i$ are denoted as $\{\hat{c}_i^j\}_{j=1}^{M}$:

$$\hat{c}_i^j \sim \pi_\phi(\cdot \mid p, c_i), \quad \text{for } j = 1, \ldots, M \quad (1)$$

This process is applied independently to each chunk $c_i$ of the original solution $y$. The prompt $p$ is carefully designed to guide the LLM toward generating simplified and concise versions of the input chunk while preserving its original meaning. The full prompt used in our experiments is provided in the Appendix.

4.3 Inter-Chunk Search

After obtaining $M$ candidate simplified chunks $\{\hat{c}_i^j\}_{j=1}^{M}$ for each original chunk $c_i$ in $y$, we aim to construct a complete simplified solution sequence $y^* = [\hat{c}_1^*, \hat{c}_2^*, \ldots, \hat{c}_m^*]$ by selecting one optimal candidate $\hat{c}_i^*$ for each position $i$. The selection criteria prioritize brevity and a low "loss", where loss is inversely related to the probability assigned by the original LLM $\pi_\theta$ to the simplified sequence. We
employ a greedy search approach.

Length-based filtering: for each position $i$, we first filter the set of $M$ candidates $\{\hat{c}_i^j\}_{j=1}^{M}$. We discard the $\alpha \cdot M$ longest candidates, keeping the $(1-\alpha)M$ shortest ones, where $\alpha \in [0, 1)$ is a predetermined filtering ratio. Let the filtered set of candidates for position $i$ be $\tilde{C}_i \subseteq \{\hat{c}_i^j\}_{j=1}^{M}$.

Probability-based selection: we iteratively select the best simplified chunk for each position $i = 1, \ldots, m$. At position $i$, having selected the optimal simplified chunks $\hat{c}_1^*, \ldots, \hat{c}_{i-1}^*$ for the preceding positions, we choose the candidate $\hat{c}_i^* \in \tilde{C}_i$ that maximizes the conditional probability under the original LLM $\pi_\theta$, given the original problem $x$ and the previously selected simplified chunks:

$$\hat{c}_i^* = \arg\max_{\hat{c} \in \tilde{C}_i} \pi_\theta(\hat{c} \mid x, \hat{c}_1^*, \ldots, \hat{c}_{i-1}^*) \quad (2)$$

For the first chunk ($i = 1$), the selection is based solely on the probability conditioned on the problem $x$:

$$\hat{c}_1^* = \arg\max_{\hat{c} \in \tilde{C}_1} \pi_\theta(\hat{c} \mid x) \quad (3)$$

4.4 Compressed CoT

The final simplified solution $y^*$ for the problem $x$ is constructed by concatenating the sequence of optimally selected simplified chunks:

$$y^* = [\hat{c}_1^*, \hat{c}_2^*, \ldots, \hat{c}_m^*] \quad (4)$$

This entire process is applied to each $(x_k, y_k)$ pair in the dataset $\mathcal{D}$ to obtain a dataset $\mathcal{D}_{\text{compressed}}$ of simplified solutions.

4.5 Fine-tuning with Compressed CoT

After obtaining the compressed dataset $\mathcal{D}_{\text{compressed}} = \{(x_k, y_k^*)\}_{k=1}^{N}$, we perform standard supervised fine-tuning (SFT) on the base model $\pi_\theta$ to better align it with the simplified reasoning trajectories. The training objective is to maximize the log-likelihood of the compressed outputs given the input problems:

$$\mathcal{L}_{\text{SFT}}(\theta) = \sum_{k=1}^{N} \log \pi_\theta(y_k^* \mid x_k) \quad (5)$$

This fine-tuning step encourages the model to generate concise yet faithful reasoning chains.

5 Experiments

5.1 Experiment Settings

Dataset. For training, we use the OpenR1-Math-220k dataset, a large-scale benchmark for mathematical reasoning. It consists of 220k math problems, each with a response generated by DeepSeek-R1.
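The greedy search described in Sec. 4.3 (length-based filtering followed by probability-based selection) can be sketched as below. This is an illustrative outline under stated assumptions: `log_prob_fn` is a stand-in for scoring a candidate chunk given the problem plus the previously selected chunks under $\pi_\theta$ (in practice, a summed or averaged token-level log-likelihood from the search model), and the function and parameter names are ours:

```python
from typing import Callable

def inter_chunk_search(
    problem: str,
    candidates_per_chunk: list[list[str]],  # M candidates per position
    log_prob_fn: Callable[[str, str], float],  # log pi_theta(chunk | context)
    alpha: float = 0.5,  # fraction of longest candidates to discard
) -> list[str]:
    """Greedily pick one short, coherent candidate per chunk position."""
    selected: list[str] = []
    context = problem
    for candidates in candidates_per_chunk:
        # Length-based filtering: keep the (1 - alpha) * M shortest candidates.
        keep = max(1, int(len(candidates) * (1 - alpha)))
        filtered = sorted(candidates, key=len)[:keep]
        # Probability-based selection: maximize likelihood given the
        # problem and the previously selected chunks (Eqs. 2-3).
        best = max(filtered, key=lambda c: log_prob_fn(c, context))
        selected.append(best)
        context = context + "\n\n" + best
    return selected
```

Concatenating the returned chunks yields the compressed solution $y^*$ of Eq. (4); the same conditional scoring underlies the token-level loss used for the coherence evaluation in Sec. 5.4.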
For evaluation, we leverage two widely used mathematical reasoning benchmarks, MATH500 and AIME24, and use GPQA-Diamond as an out-of-distribution benchmark. More details are in Appendix A.2.

Baseline. We consider two primary baselines: CoT-Valve and TokenSkip. R1-Compress random is a variant of R1-Compress that randomly selects a candidate chunk during compression. Long-CoT refers to supervised fine-tuning (SFT) on the original DeepSeek-R1 responses without any compression. Base denotes the model without SFT. In our experiments, the base model is fine-tuned on the compressed CoT generated by each method. More details can be found in Appendix A.3.

Metric. We employ the following three metrics to evaluate the model's performance. Accuracy: for MATH500 and GPQA-Diamond, we report pass@1 accuracy; for AIME24, due to its small size, we report avg@10 accuracy. Token (token length): the average token length of generated responses. Valid (valid token length): the average token length of responses that are answered correctly.

Implementation Details. We primarily evaluate our method using the Qwen2.5-Instruct series (14B/32B). All evaluations are conducted using the lighteval framework, following the widely adopted Long-CoT evaluation setting, with a temperature of 0.6 and a top-p of 0.95. For supervised fine-tuning (SFT), we use a learning rate of 1e-5 and train for 4 epochs using the LlamaFactory library. For chunk
compression, we utilize LLaMA3.1-70B-Instruct with a sampling temperature of 0.75 and generate 8 candidate chunks. The chunk search is performed by the DeepSeek-R1-Distill-Qwen-14B model. More details are in Appendix A.1.

5.2 Main Results

For the Qwen2.5-14B-Instruct model in Table 2, we observe that R1-Compress achieves consistent improvements over the Long-CoT baseline by significantly reducing the average token length, ranging from a reduction of 412 tokens on MATH500 to 1056 tokens on GPQA-Diamond. Importantly, R1-Compress attains the highest accuracy and lowest token length across all three benchmarks compared with the other compression baselines, demonstrating its ability to effectively compress Long-CoT without compromising its reasoning effectiveness. Our method also performs well on the out-of-distribution benchmark GPQA-Diamond, highlighting its generalizability. Compared to R1-Compress random, our full method further improves both accuracy and token efficiency, validating the effectiveness of the proposed inter-chunk search in selecting shorter and more coherent CoT.

As we scale up to the Qwen2.5-32B-Instruct model, R1-Compress continues to outperform all baselines in terms of token length while achieving the best or comparable accuracy. On MATH500, our method achieves a strong accuracy of 92.4%, with only a 0.6% performance drop compared to the Long-CoT baseline (93.0%), while reducing the average token length by around 500 tokens. Furthermore, the valid token length is reduced by approximately 20% (from 2406 to 1949) under nearly equal numbers of correct responses. The consistent improvement over R1-Compress random on the larger model further supports the robustness and scalability of our proposed search strategy.

5.3 Reflection Evaluation

We conduct this analysis by counting the occurrences of reflection-related keywords: "wait", "alternatively", "emm", "hmm". These tokens indicate shifts in reasoning or self-reflection.

Table 2: Main experiment results.
We present the performance of two models and report accuracy (↑), average token length (Token) (↓), and valid token length (Valid) (↓) on three reasoning benchmarks.

Methods              MATH500                   AIME24                    GPQA-Diamond
                     Accuracy   Token (Valid)  Accuracy   Token (Valid)  Accuracy   Token (Valid)
Qwen2.5-14B-Instruct
Base                 79.8       -              11.00      -              47.97      -
Long-CoT             88.0       3781 (2601)    30.00      12101 (6402)   51.51      9600 (7830)
CoT-Valve            77.4       3733 (1343)    15.00      12972 (4186)   39.89      10257 (6704)
TokenSkip            82.8       4236 (2313)    17.66      13504 (4644)   33.83      11974 (8130)
R1-Compress random   81.2       3880 (2033)    24.00      12444 (6381)   48.48      9524 (7153)
R1-Compress          84.8       3369 (2074)    25.66      11369 (5575)   49.49      8544 (6962)
Qwen2.5-32B-Instruct
Base                 83.2       -              16.66      -              50.0       -
Long-CoT             93.0       3147 (2406)    50.66      10541 (5997)   61.11      8054 (6199)
CoT-Valve            91.0       2718 (1891)    39.33      11357 (5898)   54.04      9578 (6891)
TokenSkip            89.8       3004 (1871)    44.33      10881 (6000)   59.59      8505 (5877)
R1-Compress random   89.8       2899 (1965)    42.00      11135 (5705)   54.04      8335 (6510)
R1-Compress          92.4       2661 (1949)    43.33      10747 (5495)   59.09      6963 (5005)

As shown in Table 3, our method preserves significantly more reflection steps than the other baselines, on average six more than CoT-Valve, retaining approximately 78% of the reflections found in Long-CoT while achieving accuracy competitive with the original Long-CoT responses.

5.4 Coherence Evaluation

To quantitatively assess the coherence of compressed CoT, we compute the token-level log-likelihood loss using the DeepSeek-R1-Distill-Qwen-14B model. Specifically, given a compressed CoT
as input and the original uncompressed CoT as reference, we evaluate the average token-level loss for TokenSkip, R1-Compress random, and R1-Compress. Table 4 reports the average token-level loss for each method. The results indicate that both of our variants achieve lower token-level loss than TokenSkip, indicating better coherence between tokens in the compressed CoT. Moreover, R1-Compress achieves lower loss than R1-Compress random, demonstrating that the introduction of the search mechanism improves inter-chunk coherence. Enhanced coherence contributes to greater stability during training and enables the fine-tuned model to produce more semantically precise outputs.

To complement the quantitative results, we also perform a qualitative analysis by visualizing token-level loss across several representative examples in Figure 4a. We observe that TokenSkip frequently exhibits high-loss regions, particularly around intermediate conclusions and reflective reasoning steps. R1-Compress achieves lower loss than R1-Compress random by leveraging the search mechanism to identify chunks that are more coherent within the given context.

Table 3: Comparison of methods on average reflection and accuracy on MATH500.

Method       Avg. Reflection   Accuracy (%)
Long-CoT     18.68             88.0
C3oT         0.15              65.8
CoT-Valve    8.36              77.4
R1-Compress  14.59             84.8

Table 4: Coherence evaluation. Comparison of methods on token-level loss.

Method               Token-Level Loss
Long-CoT             0.41
TokenSkip            0.87
R1-Compress random   0.63
R1-Compress          0.59

Figure 4: Token-level loss visualization. (a) Token loss comparison on case 1. (b) Token loss comparison on case 2.

5.5 Ablation Study

Table 5: Ablation study on chunk size.

Chunk Size   MATH500       AIME24
1000         79.0 (4188)   21.66 (13074)
500          81.2 (3880)   24.00 (11369)

Chunk Size. Table 5 reports the results of using different chunk size constraints during chunk segmentation.
For a clearer ablation, we compare variants without the search mechanism, i.e., R1-Compress random. The results show that smaller chunk sizes yield higher-quality compressed CoT, as finer-grained chunks better preserve local information and reduce the compression difficulty for LLMs. In the limit where the chunk size becomes unbounded, the method effectively reduces to C3oT.

Table 6: Ablation study on search model. "w/o" denotes the absence of a search model, "w/ Qwen" uses Qwen2.5-14B-Instruct as the search model, and "w/ DeepSeek-Distill" uses DeepSeek-R1-Distill-Qwen-14B as the search model.

Qwen2.5-14B-Ins       MATH500
Methods               Accuracy   Token (Valid)
w/o                   81.2       3880 (2033)
w/ Qwen               83.0       3373 (1874)
w/ DeepSeek-Distill   84.8       3369 (2074)

Search Model. We investigate the impact of different models used in the search phase of reasoning compression. Specifically, we compare Qwen2.5-14B-Instruct (Qwen) and DeepSeek-R1-Distill-Qwen-14B (DeepSeek-Distill) as the search model. As shown in Table 6, we observe that both models, when used as the search model, improve accuracy compared to the variant without search. Specifically, DeepSeek-Distill tends to favor longer responses, resulting in a larger gain in accuracy, while Qwen prefers shorter responses, leading to a lower valid token length. Overall, both search models contribute to improved performance, and by selecting shorter yet coherent chunks, the search process ultimately leads to reduced total token usage.

6 Conclusion

In this paper, we propose R1-Compress, an effective framework for compressing long Chain-of-Thought (CoT) reasoning by
combining inner-chunk compression with an inter-chunk search mechanism. Unlike existing approaches that either compromise critical reasoning behaviors, such as reflection, or lead to incoherent outputs, R1-Compress effectively reduces token length while maintaining high reasoning quality. Experimental results across multiple benchmarks demonstrate the method's ability to preserve performance under significant compression. These findings underscore the potential of chunk-level CoT compression as a practical and scalable solution for enhancing the efficiency and deployability of large-scale reasoning models.

References

[1] M. Besta, N. Blach, A. Kubicek, R. Gerstenberger, M. Podstawski, L. Gianinazzi, J. Gajda, T. Lehmann, H. Niewiadomski, P. Nyczyk, and T. Hoefler. Graph of thoughts: Solving elaborate problems with large language models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16):17682–17690, Mar. 2024.

[2] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and W. Zaremba. Evaluating large language models trained on code, 2021.

[3] R. Chen, Z. Zhang, J. Hong, S. Kundu, and Z. Wang. SEAL: Steerable reasoning calibration of large language models for free. arXiv preprint arXiv:2504.07986, 2025.

[4] X. Chen, J. Xu, T. Liang, Z. He, J. Pang, D. Yu, L. Song, Q. Liu, M. Zhou, Z. Zhang, R. Wang, Z. Tu, H. Mi, and D. Yu.
Do not think that much for 2+3=? On the overthinking of o1-like LLMs, 2025.

[5] J. Cheng and B. V. Durme. Compressed chain of thought: Efficient reasoning through dense representations, 2024.

[6] DeepSeek. DeepSeek-R1-Lite-Preview: Unleashing supercharged reasoning power. https://api-docs.deepseek.com/news/news1120, 2024. Accessed: 2024-12-29.

[7] S. Hao, S. Sukhbaatar, D. Su, X. Li, Z. Hu, J. Weston, and Y. Tian. Training large language models to reason in a continuous latent space, 2024.

[8] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt. Measuring mathematical problem solving with the MATH dataset. NeurIPS, 2021.

[9] B. Hou, Y. Zhang, J. Ji, Y. Liu, K. Qian, J. Andreas, and S. Chang. ThinkPrune: Pruning long chain-of-thought of LLMs via reinforcement learning, 2025.

[10] J. Jiang, F. Wang, J. Shen, S. Kim, and S. Kim. A survey on large language models for code generation, 2024.

[11] Y. Kang, X. Sun, L. Chen, and W. Zou. C3oT: Generating shorter chain-of-thought without compromising effectiveness, 2024.

[12] Z.-Z. Li, D. Zhang, M.-L. Zhang, J. Zhang, Z. Liu, Y. Yao, H. Xu, J. Zheng, P.-J. Wang, X. Chen, Y. Zhang, F. Yin, J. Dong, Z. Li, B.-L. Bi, L.-R.
Mei, J. Fang, Z. Guo, L. Song, and C.-L. Liu. From system 1 to system 2: A survey of reasoning large language models, 2025.

[13] K. Lin, C. Snell, Y. Wang, C. Packer, S. Wooders, I. Stoica, and J. E. Gonzalez. Sleep-time compute: Beyond inference scaling at test-time, 2025.

[14] H. Luo, H. He, Y. Wang, J. Yang, R. Liu, N. Tan, X. Cao, D. Tao, and L. Shen. AdaR1: From long-CoT to hybrid-CoT via bi-level adaptive reasoning optimization, 2025.

[15] H. Luo, L. Shen, H. He, Y. Wang, S. Liu, W. Li, N. Tan, X. Cao, and D. Tao. O1-Pruner: Length-harmonizing fine-tuning for o1-like reasoning pruning, 2025.

[16] W. Ma, J. He, C. Snell, T. Griggs, S. Min, and M. Zaharia. Reasoning models can be effective without thinking, 2025.

[17] X. Ma, G. Wan, R. Yu, G. Fang, and X. Wang. CoT-Valve: Length-compressible chain-of-thought tuning, 2025.

[18] MAA. American Invitational Mathematics Examination - AIME. In American Invitational Mathematics Examination - AIME 2024, February 2024.

[19] T. Munkhbat, N. Ho, S. H. Kim, Y. Yang, Y. Kim, and S.-Y. Yun. Self-training elicits concise reasoning in large language models, 2025.

[20] Y. Ning, W. Li, J. Fang, N. Tan, and H. Liu. Not all thoughts are generated equal: Efficient LLM reasoning via multi-turn reinforcement learning, 2025.

[21] OpenAI. Learning to reason with LLMs. https://openai.com/index/learning-to-reason-with-llms/, 2024. Accessed: 19-09-2024.

[22] J. Pan, X. Li, L. Lian, C. Snell, Y. Zhou, A. Yala, T. Darrell, K. Keutzer, and A. Suhr. Learning adaptive parallel reasoning with language models, 2025.

[23] Z. Qiao, Y. Deng, J. Zeng, D. Wang, L. Wei, F. Meng, J. Zhou, J. Ren, and Y. Zhang. Concise: Confidence-guided compression in step-by-step efficient reasoning, 2025.

[24] X. Qu, Y. Li, Z. Su, W. Sun, J. Yan, D. Liu, G. Cui, D. Liu, S. Liang, J. He, P. Li, W. Wei, J. Shao, C. Lu, Y. Zhang, X.-S. Hua, B. Zhou, and Y. Cheng.
A More Details about Experiments

A.1 Implementation Details.

We perform compression using LLaMA3.1-70B-Instruct on 4×80GB GPUs. For model training, we conduct full-parameter fine-tuning of Qwen2.5-14B-Instruct on 8×80GB GPUs and of Qwen2.5-32B-Instruct on 16×80GB GPUs.
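As a point of reference for the schedule named in Table 7 (lr_scheduler_type = cosine, warmup_ratio = 0.1, learning_rate = 1.0e-5), the learning rate at a given optimizer step can be sketched in pure Python. This is an illustration of the standard cosine-with-warmup schedule under those settings, not the actual training code:

```python
import math

def lr_at_step(step: int, total_steps: int,
               base_lr: float = 1.0e-5, warmup_ratio: float = 0.1) -> float:
    """Cosine decay with linear warmup, mirroring Table 7's settings
    (learning_rate=1.0e-5, warmup_ratio=0.1, lr_scheduler_type=cosine)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # linear ramp from 0 up to base_lr over the warmup phase
        return base_lr * step / max(1, warmup_steps)
    # cosine decay from base_lr down to 0 over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

With total_steps = 1000, for example, the rate ramps up to 1.0e-5 at step 100 and decays back toward 0 by the final step.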
All fine-tuning procedures run for 2–4 hours. The hyperparameters used for full fine-tuning are summarized in Table 7.

Table 7: Hyperparameters for Qwen2.5-14B-Instruct and Qwen2.5-32B-Instruct.

Hyperparameter       Qwen2.5-14B-Instruct   Qwen2.5-32B-Instruct
cutoff_len           8192                   8192
batch_size           8                      2
learning_rate        1.0e-5                 1.0e-5
num_train_epochs     4.0                    4.0
lr_scheduler_type    cosine                 cosine
warmup_ratio         0.1                    0.1

A.2 Benchmark.

MATH500: A challenging math dataset comprising 500 problems from high school math competitions.
AIME24: A benchmark dataset consisting of 30 challenging mathematical problems from the 2024 American Invitational Mathematics Examination.
GPQA-Diamond: A high-difficulty subset of the GPQA benchmark, with 198 complex graduate-level multiple-choice questions across various scientific domains.

A.3 Baseline.

CoT-Valve: We adopt the Short-Long-Short CoT compression strategy proposed by CoT-Valve, which aligns with our experimental setting. We use an untrained model as the short model and a model fine-tuned on Long-CoT as the long model. By applying model merging, we obtain a Short-Long-Short model, following the setup introduced in CoT-Valve. Specifically, we perform linear interpolation with weights of (0.9, 0.1) and (0.8, 0.2) for the short
and long models, respectively, to create different variants of the Short-Long-Short model. These merged models are then used to sample and construct the MixChain of Short-Long-Short CoT dataset.
TokenSkip: This baseline directly applies token-level compression to Long-CoT to generate shortened CoT. We follow its setting and measure token importance with the LLMLingua-2 compressor. The control ratios we use are 0.9, 0.8, 0.7, and 0.6.
C3oT: In Table 1, we adopt the prompt template provided in the original implementation and use the same LLM (LLaMA3.1-70B-Instruct) as the compressor to ensure a fair comparison.

A.4 Metric.

Accuracy: For MATH500 and GPQA-Diamond, we report pass@1 accuracy, where the model is evaluated on a single response. For AIME24, due to its small size (30 problems), we report avg@10 accuracy, calculated as the average accuracy over 10 independent runs.
Token (Token Length): The average token length of all model-generated responses, used to evaluate overall compression effectiveness.
Valid (Valid Token Length): The average token length of correctly answered responses. This metric is introduced to better analyze the relationship between output length and successful reasoning.

A.5 Training Dataset.

We use the OpenR1-Math-220k dataset, a large-scale benchmark for mathematical reasoning. It consists of 220k math problems, each accompanied by two to four reasoning traces generated by DeepSeek-R1 for problems sourced from NuminaMath 1.5. All traces have been verified using Math Verify. We randomly sample 5,000 examples from it.

A.6 Filter Strategy.

Each response from DeepSeek-R1 is first segmented into multiple chunks using our chunk segmentation strategy. To ensure efficient downstream compression, we filter out samples with more than 30 chunks, reducing the initial 5k samples to 3.8k.
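The chunk-count filter just described can be sketched as follows. Note that `segment_into_chunks` is a hypothetical stand-in for the paper's chunk segmentation strategy (here, a simple split on blank lines); only the 30-chunk threshold comes from the text:

```python
MAX_CHUNKS = 30  # samples that segment into more chunks than this are dropped

def segment_into_chunks(response: str) -> list[str]:
    """Hypothetical stand-in for the chunk segmentation strategy:
    split the reasoning trace on blank lines between steps."""
    return [c for c in response.split("\n\n") if c.strip()]

def chunk_count_filter(samples: list[str]) -> list[str]:
    """Keep only samples whose responses yield at most MAX_CHUNKS chunks."""
    return [s for s in samples if len(segment_into_chunks(s)) <= MAX_CHUNKS]
```

Applied to the 5k sampled responses, a filter of this shape is what reduces the pool to 3.8k.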
We further refine the dataset by verifying has_valid_answer (whether the original R1 response contains an extractable answer) and has_same_answer (whether the answer extracted from the compressed CoT matches the original one). Additionally, we remove samples with excessively low or high compression ratios. After this filtering process, a total of 2,513 samples are retained for training.

B More Discussion

B.1 Necessity of Chunk.

C3oT compresses Long-CoT directly via LLMs. However, due to the extremely long context of Long-CoT, LLMs often struggle to follow instructions faithfully and preserve critical information. Specifically, we use the advanced model LLaMA3.1-70B-Instruct as the compressor. As shown in Table 8, after compressing 3,620 Long-CoT samples, only 442 of the resulting CoT retain answers consistent with the original responses. This outcome highlights the limitations of direct instance-level compression and underscores the necessity of our proposed chunk-level approach.

Table 8: Filtering statistics of C3oT-compressed data based on answer consistency. has_same_answer is introduced in Appendix A.6.

Stage                            Sample Count
Before has_same_answer filter    3,620
After has_same_answer filter     442

B.2 Main Results.

As shown in Table 2, our method achieves a substantial reduction of over 1,000 tokens on the GPQA-Diamond benchmark, with only a minimal performance drop (approximately 2%) compared to the Long-CoT baseline. The strong performance on this out-of-distribution benchmark suggests that models may inherit overly verbose reasoning patterns from Long-CoT supervision, which are then reflected in other tasks. This observation highlights the
practical significance of compressing Long-CoT. Additionally, we observe that on AIME24, the 32B model exhibits a noticeable reduction in valid token length, while the overall token length remains nearly unchanged. This is because AIME24 is a highly challenging task, and when the model produces incorrect answers, it tends to generate longer responses. Thus, although valid reasoning becomes more concise, the total output length does not decrease accordingly.

C Prompt Template

The compression prompt used for LLMs is provided in Table 9. The templates for dataset construction, mathematical evaluation, and GPQA evaluation are shown in Table 10.

Table 9: Compression prompt for LLMs.

Here is a reasoning piece excerpt from some math problem solving process (it is incomplete, but this doesn't matter):
{step}
Instructions: You need to simplify the wording of the given reasoning piece to get a concise reasoning piece. Notice:
1. Avoid omitting any reasoning steps. You should keep all the reflection, analysing, and checking steps, and even steps making mistakes. (Especially steps containing the words "wait", "hmm".)
2. Directly give me the simplified content without any additional words.
3. Do not add additional steps or continue the reasoning process.
4. Follow the format of the given reasoning piece.
Output format: <start> (simplified content) <end>

Table 10: Templates for dataset construction and evaluation.

Dataset construction template: {Question} Let's think step by step and output the final answer within \boxed{{}}. {Compressed_CoT}
MATH template: {Question} Let's think step by step and output the final answer within \boxed{{}}.
GPQA template: {Question} Answer the following multiple choice question. The last line of your response should be of the following format: 'Answer: $LETTER' (without quotes) where LETTER is one of ABCD. Let's think step by step before answering.
A) A  B) B  C) C  D) D

D Case Study

Case Example 1
Problem: In the school, there are 50 teachers, of whom 29 drink coffee, 28 drink tea, and 16 do not drink either coffee or tea. How many teachers drink only coffee, and how many drink only tea?
Long-CoT (partial): Okay, let's see. There's a problem about teachers who drink coffee or tea, and some who don't drink either. I need to find out how many drink only coffee and how many drink only tea. Hmm, this sounds like a problem where I can use sets and maybe a Venn diagram. Let me start by recalling some concepts. First, the total number of teachers is 50. Out of these, 29 drink coffee, 28 drink tea, and 16 don't drink either. Wait, so the number of teachers who drink at least one of the beverages should be the total minus those who don't drink either. Let me calculate that. Total teachers = 50. Teachers who don't drink coffee or tea = 16. So, teachers who drink coffee or tea or both = 50 - 16 = 34. Okay, so 34 teachers are in the union of coffee drinkers and tea drinkers. Now, I remember that the principle of inclusion-exclusion says that the number of people in either set is equal to the number in the first set plus the number in the
second set minus the number in both sets. In formula terms: Number of coffee drinkers + Number of tea drinkers - Number of both = Total in union. Plugging in the numbers I have: 29 (coffee) + 28 (tea) - Both = 34. So, 29 + 28 = 57. (1,173 characters)
R1-Compress (partial): I need to find out how many teachers drink only coffee and how many drink only tea. This sounds like a problem where I can use sets and maybe a Venn diagram. The total number of teachers is 50. Out of these, 29 drink coffee, 28 drink tea, and 16 don't drink either. Hmm, the number of teachers who drink at least one of the beverages should be the total minus those who don't drink either. Let me calculate that. Total teachers = 50. Teachers who don't drink coffee or tea = 16. So, teachers who drink coffee or tea or both = 50 - 16 = 34. Now, I recall the principle of inclusion-exclusion: Number of coffee drinkers + Number of tea drinkers - Number of both = Total in union. Plugging in the numbers: 29 (coffee) + 28 (tea) - Both = 34. So, 29 + 28 = 57. (758 characters)
As shown in Case 1, R1-Compress successfully compresses nearly 30% of the content while preserving the key computational steps and effectively retaining the reflection component (with "Wait" transformed into "Hmm"). Further analysis shows that the compressed content primarily consists of unnecessary discussions and redundant explanations.

Case Example 2
Problem: Two is 10% of x and 20% of y. What is x - y? (A) 1 (B) 2 (C) 5 (D) 10 (E) 20
Long-CoT (partial): Okay, let's see here. The problem says that two is 10% of x and 20% of y. Then we need to find x minus y. Hmm, percentages can sometimes trip me up, but let me take it step by step. First, let's parse the problem. Two is 10% of x. So, in math terms, that would be 2 = 10% of x. Similarly, two is also 20% of y, so 2 = 20% of y. Then, after finding x and y, subtract them to get x - y.
The answer choices are numbers from 1 to 20, so it's probably a straightforward calculation once we figure out x and y. Alright, starting with the first equation: 2 is 10% of x. To translate percentage to decimal, 10% is 0.1. So, 2 = 0.1 * x. (630 characters)
...... Wait, but let me make sure I didn't make any mistakes here. It's easy to mix up percentages. Let me go through the steps again just to confirm. First equation: 10% of x is 2. 10% is 0.1. So, 0.1x = 2. Solving for x: x = 2 / 0.1 = 20. Correct. Second equation: 20% of y is 2. 20% is 0.2. So, 0.2y = 2. Solving for y: y = 2 / 0.2 = 10. Correct. Subtracting them: 20 - 10 = 10. Answer choice D. Hmm, seems solid. But the answer choices include 10, which is D, and I wonder if there's a trick here. Sometimes problems might involve
something like the percentage of x equals the percentage of y but in different terms. Wait, no, in this case, it's straightforward: two is both 10% of x and 20% of y. So, x and y are different variables, each solved independently. Then subtract. Yeah, that's right. Alternatively, maybe someone might get confused and think it's the same number, but no, the problem clearly states two is 10% of x and 20% of y. So, x and y have different percentages leading to the same value, 2. Hence, solving for x and y gives different numbers, and their difference is 10. I don't see any mistakes here. Therefore, the answer should be D) 10. **Final Answer** D (1,193 characters)
R1-Compress (partial): Okay, let's see here. Two is 10% of x and 20% of y. We need to find x minus y. Hmm, percentages can trip me up, so I'll take it step by step. First, let's parse the problem. Two is 10% of x, so 2 = 10% of x. Similarly, two is 20% of y, so 2 = 20% of y. Then, we need to find x and y, and subtract them to get x - y. Starting with the first equation: 2 is 10% of x. 10% is 0.1, so 2 = 0.1 * x. (394 characters)
...... Wait, let me recheck my steps. First, 10% of x is 2, so 0.1x = 2 and x = 2 / 0.1 = 20. Correct. Next, 20% of y is 2, so 0.2y = 2 and y = 2 / 0.2 = 10. Correct. Subtracting them: 20 - 10 = 10. Answer choice D. Hmm, seems solid. But I wonder if there's a trick. Wait, no, two is both 10% of x and 20% of y, so x and y are different variables. Alternatively, maybe someone might get confused and think it's the same number, but the problem clearly states two is 10% of x and 20% of y, so x and y have different percentages leading to the same value, 2. Hence, solving for x and y gives different numbers, and their difference is 10. I don't see any mistakes.
So, the answer is D) 10. **Final Answer** D (713 characters)
As shown in Case 2, R1-Compress successfully compresses 236 characters during the problem-solving stage while retaining the key steps and calculations. Furthermore, it effectively preserves the reflection and verification phases, compressing approximately 500 characters in this stage. Overall, R1-Compress achieves high-quality compressed CoT by reducing redundant expressions and calculations while maintaining the essential computations and reflective verification.

E Limitation

Due to the inherent randomness in LLM-based compression (stemming from variations in prompts, different LLMs, and sampling parameters), the quality of each compressed instance cannot be fully guaranteed. Although we generate multiple candidate chunks and apply a search strategy to select high-quality compressed CoT, it is still possible to obtain outputs with contextual incoherence. This work provides insights into Long-CoT compression from the perspective of reflection and, through case studies, reveals that the removed tokens are primarily associated with redundant expressions and repetitive computational steps. Nonetheless, the compression of Long-CoT
Highlights

• Implementation of detailed guidelines and a structured process to ensure high-quality annotations of targeting language from the crowd, experts, and ChatGPT, in order to detect and analyze inappropriately targeting language in a comprehensive data set of conversations from banned subreddits
• Insights into toxic and non-toxic language
• Potential and limitations of OpenAI's GPT-3 (text-davinci-003) in the process of annotating inappropriately targeting language
• Examination of inter-annotator agreement challenges, including analysis of expert disagreements and performance comparison across the experts, crowd annotators, and ChatGPT

Understanding and Analyzing Inappropriately Targeting Language in Online Discourse: A Comparative Annotation Study

Baran Barbarestani, Isa Maks, Piek Vossen
Vrije Universiteit Amsterdam, De Boelelaan 1105, Amsterdam, 1081 HV, The Netherlands

Abstract

Attention: This paper includes instances of hateful content for research purposes.

This paper introduces a method for detecting inappropriately targeting language in online conversations by integrating crowd and expert annotations with ChatGPT. We focus on English conversation threads from Reddit, examining comments that target individuals or groups. Our approach involves a comprehensive annotation framework that labels a diverse data set for various target categories and specific target words within the conversational context. We perform a comparative analysis of annotations from human experts, crowd annotators, and ChatGPT, revealing the strengths and limitations of each method in recognizing both explicit hate speech and subtler discriminatory language. Our findings highlight the significant role of contextual factors in identifying hate speech and uncover new categories of targeting, such as social belief and body image.
We also address the challenges and subjective judgments involved in annotation and the limitations of ChatGPT in grasping nuanced language. This study provides insights for improving automated content moderation strategies to enhance online safety and inclusivity.

Keywords: Hateful Content Detection, Online Conversations, Targeting Language

∗Corresponding author. Email addresses: b.barbarestani@vu.nl (Baran Barbarestani), isa.maks@vu.nl (Isa Maks), p.t.j.m.vossen@vu.nl (Piek Vossen)

1. Introduction

The study of combating internet hate speech using Natural Language Processing (NLP) has gained significant attention due to its scalability and efficiency in alleviating the burden on human moderators [1]. Researchers are compiling extensive data sets from various sources to drive investigations in this field, comparing methodologies such as feature selection, machine learning (ML) techniques, and classification algorithms [2]. Despite this progress, less attention has been paid to how hate speech evolves within conversational contexts, a factor crucial for early detection and effective moderation on social media platforms. Understanding the early triggers of group targeting within online conversations is essential for identifying implicit hate speech and preventing it from escalating into explicit language. This paper aims to enhance our understanding of these dynamics, thereby contributing to the development of more effective content moderation strategies that foster inclusive and respectful online environments.

We introduce a novel dataset designed for benchmarking and analyzing targeting language in Reddit discussions. This dataset serves as a valuable resource for deepening our understanding of harmful online communication and evaluating the effectiveness of automated detection methods. Although the dataset is limited
https://arxiv.org/abs/2505.16847v1
in size, it provides a foundational resource for future research and model evaluation. The study emphasizes advancing knowledge in the field and supporting the development of improved content moderation systems, rather than serving as a direct tuning tool. Our analysis includes annotations from crowd sources, experts, and OpenAI's GPT-3 model, comparing these methods to highlight how contextual factors influence hate speech. We identify both explicit hate speech and subtler forms of discriminatory language and microaggressions, illustrating the need for nuanced approaches to online toxicity. For example, a comment like "Sunday Gunday: Self-Defense" could be interpreted as either neutral or targeting specific groups, depending on its context. This work contributes to automated content moderation by enhancing the understanding and detection of targeting language, aiming to foster a more respectful online environment. The paper is structured as follows: Section 3 details the comprehensive annotation framework involving expert annotators, crowd annotators, and ChatGPT. Section 4 discusses the challenges and opportunities in annotating targeting language, including inter-annotator agreement and the limitations of ChatGPT.

Main contributions:

• Comprehensive annotation framework: Development and implementation of a detailed annotation framework using expert annotators, crowd annotators, and ChatGPT to label a diverse Reddit data set for inappropriately targeting language.
• Comparative analysis: A thorough comparative analysis of annotations from the experts, crowd, and ChatGPT, highlighting each approach's strengths and weaknesses in detecting implicit hate speech and microaggressions.
• Identification of nuanced hate speech: Detection of both explicit hate speech and subtler forms of discriminatory language, emphasizing how contextual factors impact hate speech manifestation.
• Challenges and subjectivity in annotation: Examination of the challenges and subjectivity involved in annotating targeting language, as well as issues of inter-annotator agreement and ChatGPT's difficulties with contextual nuances.
• New target categories: Identification of new targeting categories such as social belief, body image, addiction, and socioeconomic status, expanding the scope of hate speech detection research.
• Addressing ChatGPT limitations: Highlighting the limitations of ChatGPT as an automated system in accurately identifying targeting language, emphasizing the need for advanced content moderation strategies that incorporate deeper contextual analysis and improve handling of nuanced language to enhance the effectiveness of AI-driven moderation tools.

2. Related Work

Recent research has seen a surge in efforts to detect and analyze targeting language on social media, driven by its importance in understanding online interactions and societal impacts. Various methods have been developed to identify and study such language. For instance, [3] introduced a 5000-pair hate speech/counter narrative (HS/CN) data set using a semi-automatic human-in-the-loop (HITL) approach with GPT-2 fine-tuning and iterative human review. This work offers diverse examples for NLP models combating online hate. Similarly, [4] created a data set from the Dutch LiLaH corpus, annotating 36,000 Facebook comments to identify hate speech types targeting groups such as religion, gender, migrants, and the LGBT community. This data set focused exclusively on the prevalence of hateful metaphors. Although [3] and [4] focus on specific
groups, such as race, religion, gender, migrants, and LGBT, their applicability to broader online hate speech contexts is limited. The data set from [5] includes categorized tweets gathered from various sources such as Hatebase, Twitter hashtags, and public hate speech data sets. Human annotators classified these tweets into hate speech categories such as ethnicity or religion. The data set was compared with general Twitter data for context and used in diverse analyses exploring linguistic features, psychological dimensions, and semantic frames of hate speech. [5] also identifies explicit hate speech tweets using keyword-based filtering and manual annotation, collecting details about hate instigators and targets to study interaction patterns in online hate speech on Twitter. The Offensive Language Identification Dataset (OLID) [6] was designed to categorize offensive content in social media, particularly tweets. OLID employs a three-level hierarchical annotation schema: Level A distinguishes offensive (OFF) from non-offensive (NOT) tweets; Level B categorizes offensive tweets as targeted insults/threats (TIN) or untargeted offenses (UNT); Level C identifies the target of offensive language, classifying it into individual (IND), group (GRP), or other (OTH) targets. Data collection involved retrieving tweets using specific keywords and annotating the data set through crowdsourcing with experienced annotators. OLID is used for training and evaluating offensive language detection models, focusing on insults, threats, and targeted groups, but may overlook subtler forms of offensiveness. The keyword-based filtering methodology employed in data set creation by [5], [7], and [6] could miss nuanced instances of hate speech that do not contain specific keywords. The data set used for abusive content classification by [8] consists of textual entries sourced from Reddit discussions accessed through the Pushshift API and Google's BigQuery.
It employs social scientific concepts to detect and classify abuse across the categories Neutral, Identity-directed abuse, Affiliation-directed abuse, Person-directed abuse, and Counter Speech. Trained annotators assigned labels to entries based on predefined categories, although inter-annotator agreement could be enhanced, particularly for challenging or ambiguous "edge case" content. The data set exhibits a skewed distribution towards the Neutral class, mirroring prevalent real-world patterns, with certain abusive categories appearing less frequently.

The reviewed studies have advanced our understanding of inappropriately targeting language online by creating specialized data sets and analytical methodologies. Each data set focuses on different aspects of hate speech, collectively revealing the complexities of online discourse. Building on this, our paper introduces a novel approach that broadens the analysis to include more contextual factors and targets, enhancing hate speech detection comprehensively. The aspects of our work that go beyond previous studies are the integration of ChatGPT for annotation, a more detailed annotation scheme focused specifically on targeting behaviors, a comparative analysis of ChatGPT-generated annotations against human annotations, a more nuanced approach to target category identification, and a discussion of the subtleties and variations in the interpretation of online discourse.

3. Methodology

3.1. Data Description

We use a data set of English conversation threads from banned subreddits on Reddit, as created
and selected by [9]. These subreddits were accessed via the Pushshift API and BigQuery. The data set includes 67,677 submissions and 1,168,546 comments, totaling 4,017,460 tokens. To ensure meaningful analysis, a structured approach inspired by [8] was employed to reconstruct conversation threads without overlap, starting each subthread with its initial comment. Subthreads were selected based on the following criteria: a minimum of 3 and a maximum of 17 comments, a token count of 51 to 1,276 tokens per subthread, and a cap of 38 tokens per comment. Additionally, subthreads were categorized by their toxicity level using three lexicons. The top 400 most toxic subthreads and 98 non-toxic subthreads were filtered out to provide a robust representation of both high and low toxicity content. Out of these 498 subthreads, 39 were selected as the gold data subset by [9]. This selection involved a stratified sampling method based on toxicity scores and comment counts, ensuring diverse representation. Text preprocessing steps included removing links and non-alphanumeric characters.

3.2. Annotation Process

Annotators identified inappropriate language targeting individuals or groups in comments or titles. They reviewed titles and comments, including associated context such as previous comments, the title, and the post text when available, with anonymized usernames guiding their assessments. Anonymization was performed according to [10]. Annotators exercised caution and discretion with potentially offensive content, adhering to the provided guidelines. Usernames in the examples presented in this paper are shown in brackets. The detailed annotation guidelines provided to annotators on the annotation platform, along with screenshots of the user interface that includes the instructions, can be found in Appendix C.

1. Annotators reviewed comments or titles to identify targeting towards individuals, groups, or broader categories.
Items without targeting were labeled "not targeting," and annotators then proceeded with those that exhibited targeting behavior.
Example 3.1. Targeting: Furries should be in the same mental institutions as tran**es. What in the f**k happened to this country.
2. Inside or outside of the conversation: Annotators determined whether comments or titles targeted people within or outside the conversation thread, with Examples 3.2 and 3.3 highlighting these distinctions through underlined targets to clarify the concept.
Example 3.2. Inside: go back to your fu**king estro weed subs my dude.
Example 3.3. Outside: How else you gonna know what these retards are saying?
A comment can target both a person inside the conversation and a person outside of the conversation, as in Example 3.4:
Example 3.4. Context: [anon 8Wbs0]: I can't, he's right. [anon 8mUPN]: Hey I see your c*ck fa**ot a** in the pic on the left. [anon 8Wbs0]: At least I'm not a racist loser, sweetie. Comment: Being a racist is far better than being a c*ck fa**ot.
In the context of Example 3.4, the speaker is targeting both the person they are addressing in the conversation and people with a particular sexual orientation in general outside of the conversation.
3. Identify target categories and target tokens: Annotators identified target categories, i.e., sexual orientation, gender, disability, age, race/ethnicity/nationality, religion, famous
individual, political affiliation, and other target category (none of the aforementioned). They marked all relevant words in the comment or title pertaining to these categories, primarily focusing on nouns. Additional examples were given to assist annotators, as shown in Example 3.5, Example 3.6, and Example 3.7, where target tokens are underlined. Example 3.5. Race/ ethnicity/ nationality: I don’t want to know. It’s usually dribble. I’ll stick with more conservative black people who value their education. Example 3.6. Disability: What kind of a**hole actually reports someone who is on their side politically? Fu**king moron Example 3.7. Famous individual: The President having one of them clippers would be sweet. However, I think Trump deserves no less than a B -52. 7 Furthermore, it is important to note that there could be more than one target category identified. For example, a comment can target both gender and dis - ability, as in Example 3.8, where ”c**t” refers to gender and ”dullard” refers to disability. Example 3.8. Not only a c**t but a dullard too? Is there no beginning to your talent? Navigating challenges in achieving high inter -annotator agreement (IAA) due to subjective interpretation and nuanced instructions, we refined guidelines and implemented feedback mechanisms to enhance agreement through multiple rounds of discussions among the authors of the paper. 3.3. Crowd Annotation All 498 subthreads were annotated by a crowd of five annotators. Annotator selection followed the method outlined by [11], involving pre -screening and post -screening to ensure quality and reliability. Detailed information about each annotator including their nationality, fluent languages, primary language, age, sex, ethnicity, country of residence, country of birth, employment status, and student status can be found in Appendix D. Crowd annotations were then adjudicated based on majority vote to create the AdjCrowd annotation set. 3.4. 
Expert Annotation
To evaluate the crowd annotation independently, three expert annotators (the authors) annotated the gold data. They underwent comprehensive training on the annotation platform for consistent guideline application. Annotations were adjudicated based on discussion and majority vote, forming the adjudicated annotation set referred to as "AdjExpert" for this study.

3.5. ChatGPT Annotation and Prompting
We integrated OpenAI's GPT-3 language model into our annotation process, using the text-davinci-003 engine to analyze and annotate textual data from the 498 subthreads. ChatGPT was tasked with generating outputs for each step separately, building on its previous outputs. Prompt development, detailed in Appendix E, aimed for consistent annotation guidelines. We refined prompts through iterative analysis of conversations, incorporating feedback to enhance clarity and effectiveness in capturing nuanced aspects of inappropriate language.

3.5.1. Targeting or Not, Inside vs. Outside of the Conversation
To assess targets in conversations (comments and titles), we designed a prompt providing contextual details for each. This prompt guides the model to identify whether the target is within or outside the conversation during targeting. The model's varied responses are categorized to provide insights into the nature of targeting instances.

3.5.2. Target Category Annotation
In target category annotation, we tailored a prompt to identify specific targeted characteristics in comments or titles. ChatGPT responses in various
formats were analyzed and categorized into predefined target categories based on the input from Section 3.5.1.

3.5.3. Target Tokens Annotation
We created a prompt directing the model to focus on tokens related to the identified target categories. These tokens, extracted from targeting comments or titles provided to ChatGPT along with their associated categories (as described in Section 3.5.2), offer additional context that enriches our analysis of targeting within the annotated data.

3.6. Evaluation and Statistical Analysis
We measured inter-annotator agreement by comparing levels of consistency among expert annotators, crowd annotators, and ChatGPT. Specifically, we analyzed the agreement between expert and ChatGPT annotations (AdjExpert vs. ChatGPT), between expert and crowd annotations (AdjExpert vs. AdjCrowd), as well as among expert annotations and among crowd annotations. For inter-annotator agreement analysis among both experts and crowd annotators, we computed average agreement scores across annotator pairs at both the comment level and the subthread level. The original annotations were made at the comment level. Comment-level analysis considered annotations for each individual comment, whereas subthread-level analysis compared aggregated comment labels, such that a whole subthread was labelled as a particular category if at least one comment in that subthread was labelled as such. Target token comparison was limited to the respective comments. The upper section of Table 1 presents Cohen's Kappa scores on targeting at the comment and subthread levels. The lower section focuses on agreement on target categories, inside/outside of the conversation thread, and target tokens at the comment and subthread levels, given agreement on targeting.
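The pairwise agreement computation and the one-comment-suffices subthread aggregation described above can be sketched as follows (a minimal stdlib sketch; the function names and the binary 0/1 "targeting" encoding are our own illustration, not the authors' code):

```python
from itertools import combinations

def cohens_kappa(a, b):
    """Cohen's kappa for two label sequences over the same items."""
    n = len(a)
    labels = set(a) | set(b)
    # Observed agreement.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

def to_subthread_level(comment_labels, subthread_ids):
    """A subthread counts as 'targeting' (1) if at least one of its comments does."""
    agg = {}
    for label, sid in zip(comment_labels, subthread_ids):
        agg[sid] = agg.get(sid, 0) | label
    return [agg[sid] for sid in sorted(agg)]

def average_pairwise_kappa(annotations):
    """Mean kappa over all annotator pairs (annotations: one label list per annotator)."""
    pairs = list(combinations(annotations, 2))
    return sum(cohens_kappa(a, b) for a, b in pairs) / len(pairs)
```

Averaging over all annotator pairs is exactly the "average agreement scores across annotator pairs" step; the same kappa function is then applied again to the subthread-level lists produced by `to_subthread_level`.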
Table 1: Cohen's Kappa scores on targeting, and Cohen's Kappa scores given agreement on being targeting (among the crowd, among the experts)

Kappa scores on targeting:
                       Expert   Crowd
Comment-level          0.58     0.36
Subthread-level        0.63     0.54

Kappa scores given agreement on being targeting:
Category               Expert (Comment)  Crowd (Comment)  Expert (Subthread)  Crowd (Subthread)
Inside                 0.65              0.14             0.8                 0.34
Outside                0.5               0.295            0.19                0.27
Sexual Orientation     0.68              0.56             0.66                0.6
Gender                 0.57              0.42             0.37                0.41
Disability             0.62              0.556            0.72                0.54
Age                    0.92              0.67             0.89                0.63
Race                   0.73              0.58             0.67                0.61
Religion               0.605             0.75             0.59                0.73
Famous Individual      0.45              0.455            0.50                0.45
Political Affiliation  0.62              0.5              0.64                0.48
Other Target Category  0.11              0.26             0.16                0.24
Target Tokens          0.58              0.47             -                   -

At the comment level, expert agreement on targeting achieved a moderate Cohen's Kappa score of 0.58, while the crowd scored lower at 0.36. This discrepancy reflects variations in expertise and experience, with experts demonstrating more consistent identification of targeting behaviors compared to the crowd. Subthread-level agreements improved (experts: 0.63, crowd: 0.54), indicating that aggregation over multiple comments at a broader level helps align interpretations. The higher agreement between expert and crowd annotations reflects a majority vote approach among crowd annotations versus expert labels, which tends to smooth out individual differences and align more closely with expert judgments. Conversely, the lower agreement among crowd annotators indicates greater variability in individual annotations, as it is based
on averaged scores from varying annotator pairs. This variability arises because the majority vote method for crowd annotations minimizes individual differences, enhancing apparent agreement with expert annotations, whereas direct comparison among crowd annotators reveals more of the inherent inconsistencies in their judgments due to their diverse interpretations and backgrounds. Agreement scores varied across categories, with some, like "Age" and "Race," showing high consensus, while others, like "Outside," "Gender," "Religion," and "Famous Individual," showed lower agreement, particularly among the crowd. "Other Target Category" exhibited consistently low agreement scores across all sources, indicating greater complexity and ambiguity. To further understand the inter-annotator agreement among experts, we analyzed the confusion matrices for targeting at both the comment and subthread levels, as presented in Tables 2 and 3. These matrices reveal that annotators reached higher agreement at the subthread level than at the comment level. This suggests that interpretations are more likely to align when aggregating at a broader level, thereby reducing the ambiguity associated with annotating individual comments. The high agreement at the subthread level indicates that, despite individual differences in annotating specific comments, annotators consistently converge on identifying the overall targeting behavior within a conversation thread. Table 4 displays Cohen's Kappa scores comparing the AdjExpert and AdjCrowd sets, as well as the AdjExpert and ChatGPT-generated sets. ChatGPT-generated annotations show moderate agreement with AdjExpert annotations at both the comment and subthread levels (0.4 and 0.37, respectively). In contrast, higher agreement exists between the AdjExpert and AdjCrowd sets (0.58 at the comment level and 0.53 at the subthread level).
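The smoothing effect of majority-vote adjudication described here can be illustrated with a small sketch (the labels are invented, and `majority_vote` and `percent_agreement` are our own illustrative helpers, not the authors' code):

```python
from collections import Counter

def majority_vote(rows):
    """Adjudicate per item: rows[i] holds every annotator's label for item i."""
    return [Counter(r).most_common(1)[0][0] for r in rows]

def percent_agreement(a, b):
    """Raw (not chance-corrected) agreement between two label lists."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Hypothetical expert reference and three noisy crowd annotators (1 = targeting).
expert = [1, 1, 1, 0, 0, 0]
crowd = [
    [1, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 1],
    [0, 1, 1, 0, 1, 0],
]
adjudicated = majority_vote(list(zip(*crowd)))
```

Each individual annotator here disagrees with the expert on one or two items, yet the majority vote recovers the expert labels exactly; this is why adjudicated-versus-expert agreement can exceed the average pairwise agreement among individual crowd annotators.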
This suggests closer alignment between AdjExpert and AdjCrowd compared to the ChatGPT-generated set.

Table 2: Confusion matrices of expert annotators on the annotation of targeting at the comment level. Note: "A" is an abbreviation of "Annotator".

A1 vs. A2            Not targeting by A1   Targeting by A1
Not targeting by A2  94                    15
Targeting by A2      34                    105

A1 vs. A3            Not targeting by A1   Targeting by A1
Not targeting by A3  80                    29
Targeting by A3      26                    113

A2 vs. A3            Not targeting by A2   Targeting by A2
Not targeting by A3  92                    36
Targeting by A3      14                    106

Table 3: Confusion matrices of expert annotators on the annotation of targeting at the subthread level. Note: "A" is an abbreviation of "Annotator".

A1 vs. A2            Not targeting by A1   Targeting by A1
Not targeting by A2  2                     2
Targeting by A2      2                     33

A1 vs. A3            Not targeting by A1   Targeting by A1
Not targeting by A3  3                     1
Targeting by A3      1                     34

A2 vs. A3            Not targeting by A2   Targeting by A2
Not targeting by A3  3                     1
Targeting by A3      1                     34

Table 4: Cohen's Kappa scores on targeting, and Cohen's Kappa scores given agreement on being targeting (AdjExpert vs. AdjCrowd, ChatGPT vs. AdjExpert)

Kappa scores on targeting:
                       AdjExpert vs. ChatGPT   AdjExpert vs. AdjCrowd
Comment-level          0.4                     0.58
Subthread-level        0.37                    0.53

Kappa scores given agreement on being targeting:
Category               ChatGPT/AdjExpert  AdjExpert/AdjCrowd  ChatGPT/AdjExpert  AdjExpert/AdjCrowd
                       (Comment)          (Comment)           (Subthread)        (Subthread)
Inside                 0.09               0.55                0.37               0.70
Outside                0.08               0.65                0                  0.46
Sexual Orientation     0.66               0.74                0.87               0.75
Gender                 0.06               0.52                0.48               0.66
Disability             -0.015             0.49                0.12               0.47
Age                    0.34               0.74                0.545              0.63
Race                   0.55               0.62                0.59               0.8
Religion               0.66               0.66                0.48               0.48
Famous Individual      0                  0.695               -0.05              0.77
Political Affiliation  0.49               0.7                 0.53               0.47
Other Target Category  0                  0.14                0.02               0.18
Target Tokens          0.68               0.57                -                  -

Some variation can be observed in agreement levels across different categories and annotation sources. For example, "Sexual Orientation" exhibits relatively high agreement between the ChatGPT-generated set and the AdjExpert set (comment: 0.66, subthread: 0.87) as well as between the AdjCrowd and AdjExpert sets (comment: 0.74, subthread: 0.75). Similarly, "Religion" shows moderately high agreement between the ChatGPT-generated set and the AdjExpert set (comment: 0.66, subthread: 0.48) and between the AdjCrowd and AdjExpert sets (comment: 0.66, subthread: 0.48). Conversely, between the ChatGPT-generated set and the AdjExpert set, "Famous Individual" displays very low agreement (comment: 0, subthread: -0.05), indicating challenges in consistently identifying targeting in these contexts. For "Famous Individual", ChatGPT identified only one case as such, and that case had been labeled by the experts as not targeting at all (Example 3.9).

Example 3.9. If you think Demi Lovato is ugly, you have probably never had sex with a human being before. Not even a gay guy would think she's ugly.

Example 3.10 is an example of "Famous Individual" not being detected correctly: ChatGPT labeled it as targeting "Other Target Category", while the experts labeled it as targeting "Famous Individual". In this example, the complexity of the language and the indirect reference to President Donald J.
Trump might have challenged ChatGPT's ability to connect the comment directly to a famous individual.

Example 3.10. Context: Title: WATCH PARTY: USA-NK Summit Coverage & Leader Arrivals [anon 2Uwug]: I ignore them. Make them irrelevant Comment: That's all well and good until he starts yelling stupid shit during a meeting of this magnitude

Overall, crowd annotations demonstrate higher agreement with expert annotations compared to ChatGPT-generated annotations. We also analyzed the confusion matrices to evaluate the performance and agreement levels between ChatGPT-generated annotations and AdjExpert, as well as between AdjCrowd and AdjExpert, at both the comment and subthread levels. Figure 1 indicates that ChatGPT tends to over-identify comments as targeting, demonstrating higher sensitivity but lower specificity compared to expert annotations. Figure 2 suggests better alignment between ChatGPT and AdjExpert annotations at the subthread level, with ChatGPT showing higher precision in identifying targeting behavior when aggregating at a broader level.

Figure 1: Confusion matrix of ChatGPT vs. AdjExpert on the annotation of targeting at the comment level

The discrepancy between the higher alignment indicated by the confusion matrices and the slightly lower Cohen's Kappa scores at the subthread level compared to the comment level can be attributed to several factors. Cohen's Kappa adjusts for chance agreement, which can differ
significantly depending on the prevalence of targeting labels and the number of instances being evaluated. At the comment level, with more data points, the absolute number of agreements and disagreements is higher, affecting the expected chance agreement. Additionally, at the subthread level, the aggregation of comments might reduce the penalization for misclassifications, but it still influences the overall Kappa calculation. Consequently, the apparently better raw alignment at the subthread level does not translate directly into a higher Kappa score, reflecting the nuanced nature of chance-adjusted agreement metrics.

Figure 2: Confusion matrix of ChatGPT vs. AdjExpert on the annotation of targeting at the subthread level

Figure 3 indicates that while there is good agreement between crowd and expert annotations, there are still notable discrepancies, likely due to varying levels of understanding and interpretation of targeting criteria among crowd annotators. Figure 4 suggests that aggregation at a broader level helps align crowd annotations more closely with expert annotations, reducing individual variations.

Figure 3: Confusion matrix of AdjExpert vs. AdjCrowd on the annotation of targeting at the comment level

Table 5 summarizes the number of annotations for targeting, per target category, and inside vs. outside conversation threads, at the comment and subthread levels, from expert annotators, crowd annotators, and ChatGPT. To ensure comparability, expert and crowd counts were normalized by dividing total annotations by the number of annotators. Expert annotators averaged 133.67 targeting comments annotated, slightly lower than the crowd's 141.4. ChatGPT generated notably more, totaling 188 targeting annotations at the comment level. At the subthread level, experts averaged 35 targeting annotations, the crowd 36, while ChatGPT annotated 38. Categories like "Religion" and "Famous Individual" received fewer annotations across all sources.
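The prevalence effect described above can be made concrete: two annotator pairs with identical raw agreement can receive very different Kappa scores once chance agreement is factored in. Below is a self-contained sketch with invented labels (`kappa` is our own helper, not the authors' code):

```python
def kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Balanced labels: 8/10 raw agreement, roughly even prevalence of 'targeting' (1).
bal_a = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
bal_b = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
# Skewed labels: also 8/10 raw agreement, but 'targeting' dominates both raters.
skew_a = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
skew_b = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
```

Both pairs agree on 8 of 10 items, yet the balanced pair scores kappa ≈ 0.6 while the skewed pair scores kappa ≈ -0.11, because near-uniform "targeting" labels make high agreement expected by chance alone. This is the sense in which label prevalence, not just raw alignment, drives the chance-adjusted scores.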
ChatGPT over-identified targeting language more than the experts and the crowd did, but struggled more than the human annotators to discern targeting inside the conversation.

Table 5: Number of annotations

Category               Comment-level               Subthread-level
                       Expert   Crowd   ChatGPT    Expert   Crowd   ChatGPT
Targeting              133.67   141.4   188        35       36      38
Inside                 70.33    78      9          20       22.4    6
Outside                59.67    61.8    179        28.33    28.2    38
Sexual Orientation     32.67    24      33         11.33    14.6    12
Gender                 24.33    19.2    25         14.67    11.8    9
Disability             20.67    12.2    1          10.67    6.6     1
Age                    4.33     5       8          3.33     3.8     7
Race                   16       19.8    32         6        8.4     10
Religion               1.67     4.2     3          1.67     3.2     3
Famous Individual      9.33     10.4    1          5.33     8       1
Political Affiliation  13.67    22      19         8.33     12.6    8
Other Target Category  26.33    50.8    69         15       22.4    30
Target Tokens          197.67   389.4   303        -        -       -

As already demonstrated in this section, the moderate agreement between expert and crowd annotators indicates consistent identification of targeting behaviors. Section 4 further explores discrepancies among the expert annotators. While ChatGPT handles large data volumes at a lower cost and faster than humans, its lower agreement scores reveal challenges with context and subtle targeting behaviors, including cultural nuances. This emphasizes the importance of refining AI-driven content moderation for better contextual comprehension and cultural sensitivity.

Figure 4: Confusion matrix of AdjExpert vs. AdjCrowd on the annotation of targeting at the subthread level

4. Analysis and Discussion

4.1. Disagreements Among the Experts
We concentrated on cases where expert annotators disagreed on comment-level targeting annotations, revealing discrepancies in which one annotator deviated from the consensus. We categorized these disagreements to explore their nuances and potential origins, presenting an error analysis with specific examples for each category.

1. Different criteria for inappropriate language (targeting but not necessarily targeting inappropriately):

Example 4.1. Let me give you all a hint. Hillary's statement the other day was not a spontaneous, meaningless slip up

Annotators may argue that the statement does not necessarily imply inappropriate targeting. They suggest it might be analyzing or discussing a public figure's actions, which does not always amount to inappropriate targeting. This discrepancy arises from differing interpretations of what qualifies as inappropriate targeting. In Example 4.1, while some see the reference to the famous individual as inappropriate targeting, others take into account nuanced factors like tone and context before deeming it as such, considering it targeting but not inappropriate.

2. Inherent complexity/ambiguity

Example 4.2. Sunday Gunday: Self-Defense

The varied interpretation of Example 4.2 sparks disagreement among annotators. It may target specific groups, like Sunday gun enthusiasts promoting self-defense, or merely describe activities without clearly targeting anyone; this complexity and ambiguity make unanimous agreement challenging.

3. Contextual understanding

Example 4.3. Context: Title: Me during fag pride month [anon Khiib]: Why the he** do You Care if guys like d**k instead of pu**y. Relax dude [anon 4OsvY]: They're degenerate f**ks. [anon BbKt2]: Dann, how long it take you to write that thesis? [anon yDQWE]: you don't need a thesis to see something so simple [anon BbKt2]: Then you should be able to explain easily: Why does it matter if a guy likes d**k instead of p***y?
[anon EzrXS]: How is it an accomplishment? Comment: Its not? Why is it degeneracy?

Disagreement over Example 4.3 stems from differing interpretations of "degeneracy." One annotator might view it as targeting sexual orientation, while another might see it as critiquing behavior or societal norms. This highlights the need to consider broader context and language nuances, urging discretion in annotator judgments, especially when interpretations vary with conversational context.

4. Subjective interpretation of the concept of targeting

Example 4.4. Me during fag pride month

The disagreement on Example 4.4 stems from differing interpretations of targeting. Some might view the term "fag" as derogatory, targeting individuals, especially during "pride month", while others might see it as self-referential or reclaimed, celebrating LGBTQ+ identities.

5. Insufficient contextual information

Example 4.5. Context: Interesting thing to say when you're objectively below average looking Comment: Cause you can get called a rapist and get convicted if the girl regrets sleeping with you.

The lack of explicit context or cues in Example 4.5 makes it difficult for annotators to accurately identify the intended
target. This highlights the importance of contextual information for precise annotation and interpretation.

6. Different interpretation of sarcasm

Example 4.6. Context: Title: SO TRUE!! [anon py8O2]: you're a c**t [anon aWz24]: You're 14 [anon ozqK8]: not only a c**t but a dullard too? is their no beginning to your talent? [anon FTutK]: 12, then. [anon ETMDg]: you weirdo liberal folks sure have an unhealthy interest in talking with young people, are you trying to groom me? Comment: Well, you tried.

Disagreements often arise over interpreting sarcasm in comments. For instance, in Example 4.6, one annotator might perceive the comment as sarcastic or dismissive, suggesting the person's effort was futile, while another might view it as supportive or encouraging, acknowledging effort despite the outcome. These differing interpretations highlight the subjective nature of assessing language nuances, especially when tone indicators are not clear. Additionally, we analyzed cases where expert annotators unanimously identified the "other target category" within comments and classified these instances into new categories to grasp their nuances. The new categories discovered in the data are social belief, body image, addiction, and socioeconomic status.

4.2. Analysis of ChatGPT's Identification of Targeting Language
Our examination of ChatGPT's annotations versus expert assessments of targeting language revealed 37 discrepancies where ChatGPT identifies comments as targeting but the experts unanimously disagree. Out of all the gold data, ChatGPT marks 75% of cases as targeting, while the experts (based on the majority vote) deem only 55% as such. A majority of the cases flagged by ChatGPT as targeting are actually neutral according to expert assessments (see Example 4.7). Additionally, some comments contain toxic language but do not meet the criteria for targeting.
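The over-identification gap described above (75% of cases flagged by ChatGPT versus 55% by the experts) amounts to comparing positive rates and inspecting the items only the model flags. A small illustrative sketch with toy labels (the helpers and the label lists are our own invention, not the authors' pipeline; the 75%/55% figures come from their gold data, not from this example):

```python
def positive_rate(labels):
    """Fraction of items labelled as targeting (1)."""
    return sum(labels) / len(labels)

def flagged_only_by(model, reference):
    """Indices the model marks as targeting but the reference does not."""
    return [i for i, (m, r) in enumerate(zip(model, reference)) if m == 1 and r == 0]

# Toy labels: the model over-flags relative to the expert reference.
chatgpt = [1, 1, 1, 0, 1, 1]
experts = [1, 0, 1, 0, 0, 1]
```

Items returned by `flagged_only_by` are exactly the discrepancy cases of interest here: comments the model calls targeting that the experts consider neutral or merely toxic without a target.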
For instance, comments deemed inappropriate or containing slurs do not necessarily imply targeting language directed at an individual or group (see Example 4.8). The findings suggest that ChatGPT's classification leans towards over-identification of targeting language, which can result in inaccurate annotations and potentially misleading interpretations. The system's approach lacks the nuanced understanding of contextual cues and the subtleties of language that human experts can readily discern. One notable observation is that ChatGPT's misclassifications often involve comments where humor, sarcasm, or colloquialisms are present. For instance, light-hearted jokes or playful banter may be mistakenly categorized as targeting due to ChatGPT's limited ability to grasp nuanced language (see Example 4.9 and Example 4.10). Another critical insight stems from ChatGPT's handling of comment structure. Notably, ChatGPT appears to interpret titles and first comments as non-targeting. This observation suggests that ChatGPT's comprehension of targeting versus non-targeting language may be influenced by structural elements.

Example 4.7. 747-8 is the newest generation. The current AF1s are 747-200s.
Example 4.8. Why the hell do You Care if guys like d**k instead of pu**y. Relax dude
Example 4.9. LOL remeber how triggered they were when he got the nomination?
Example 4.10. Ouch, epic burn, my friend. How would I ever recover

5. Conclusion
This paper explores hate speech detection in online conversations through crowd annotation, expert annotation, and ChatGPT integration. Our analysis uncovered explicit hate speech as well
as subtler discriminatory language and microaggressions. This study contributes a benchmark data set for analyzing targeting language and evaluating automated detection systems, facilitating research into harmful online communication and supporting the development of improved content moderation strategies. We observed moderate agreement between expert and crowd annotators, yet challenges remain in consistently identifying targeting behaviors, particularly in nuanced contexts such as distinguishing targets within or outside conversations and assessing subjective targeting language. ChatGPT tended to over-label, identifying more instances of targeting language than the human annotators did. This highlights the need for further refinement of automated moderation systems. By categorizing specific target categories like gender, political affiliation, and sexual orientation, this study enhances our understanding of harmful online communication and informs the development of more effective moderation strategies to promote inclusive and respectful online environments. Future research should focus on refining annotation guidelines, enhancing contextual understanding in NLP models, and developing scalable moderation methods. Additionally, efforts should be made to expand the data set to include a broader range of targeting instances and to improve its utility as a benchmarking resource.

6. Limitations
While our study offers valuable insights into detecting and analyzing inappropriately targeting language on online platforms, certain limitations must be considered. Firstly, the size and diversity of the data set may limit the generalizability of our findings, despite efforts to collect a varied range of conversation threads from Reddit. We acknowledge that the focus on banned subreddits may not fully capture the diversity of online hate speech across different platforms and communities.
Additionally, inherent biases in the annotation process, influenced by annotators' subjective interpretations and contextual understanding, may affect the reliability of the labels assigned to comments. The predefined target categories may not fully encompass the spectrum of inappropriately targeting language, and emerging forms of online harassment may not be adequately captured. While ChatGPT assisted in the annotation process, its performance varies with text complexity and ambiguity, introducing errors. Inter-annotator agreement challenges and the limited generalizability of the findings to other platforms further underscore the need to interpret the results cautiously and to guide future research directions effectively. Finally, our annotation and analysis are limited to English; this work should be expanded to more diverse languages and cultures.

7. Acknowledgments
This research was supported by Huawei Finland through the DreamsLab project. All content represents the opinions of the authors, which are not necessarily shared or endorsed by their respective employers and/or sponsors.

Appendix A. Annotation Platform and Annotator Recruitment
The annotation task was designed and implemented using the LingoTURK platform [12], which provided a user-friendly interface for annotators to complete the tasks efficiently. Annotators were recruited through the Prolific platform [13], which allowed for the selection of a diverse pool of participants to ensure comprehensive annotation coverage. This combination of annotation platform and recruitment strategy facilitated the collection of annotations from a broad range of annotators, contributing to the reliability and validity of the data set.
Appendix B. Ethical Considerations
This study adhered to ethical guidelines, ensuring the privacy and anonymity of Reddit users and annotators. The annotation task contained content warnings, and participants were informed about the nature of the task before engaging in the annotation process.

Appendix C. Annotation Guidelines and User Interface Screenshots
Figure C.5 shows the instructions page presented to the annotators on the annotation platform. Figure C.9 is an example of how the comments in a conversation were shown to the annotators for annotation, starting from the title text (Figure C.6), followed by comment 1 (Figure C.7) and comment 2 (Figure C.8). The real conversation is longer than what is presented here, which stops at the second comment.

Figure C.5: Instructions

Appendix D. Crowd Annotators' Information
Tables D.6, D.7, D.8, D.9, D.10, D.11, D.12, D.13, D.14, and D.15 provide statistical information on the crowd annotators' backgrounds.

Table D.6: Statistics for fluent languages. Note that the percentages are not mutually exclusive, as annotators may speak more than one language.
Fluent Language    Percentage
English            99.75%
Chinese            0.74%
Spanish            1.48%
French             1.73%
Afrikaans          0.25%
German             1.23%
Italian            0.25%
Turkish            0.25%
Dutch              0.25%
Japanese           0.25%
Hindi              0.49%
CONSENT REVOKED    0.25%
Thai               0.49%
Gaelic             0.25%
Romanian           0.25%
Urdu               0.49%
Bengali            0.25%
Greek              0.25%

Table D.7: Statistics for primary language. Note that the percentages are not mutually exclusive, as annotators may report more than one primary language.
Primary Language   Percentage
English            99.75%
Turkish            0.25%
Other              0.25%
CONSENT REVOKED    0.25%
Thai               0.49%
Urdu               0.49%
Esperanto          0.25%
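The note that these percentages are not mutually exclusive follows from how such tables are computed: each annotator may report several values, so the column can sum to more than 100%. A minimal sketch of that computation, using toy data and our own helper name:

```python
from collections import Counter

def attribute_percentages(annotator_attrs, total):
    """Percentage of annotators reporting each value. Because one annotator can
    report several values, the percentages need not sum to 100."""
    counts = Counter(v for attrs in annotator_attrs for v in attrs)
    return {v: round(100 * c / total, 2) for v, c in counts.items()}

# Toy data: 4 annotators, one of them bilingual.
langs = [["English"], ["English"], ["English", "French"], ["English"]]
```

With these toy lists, English comes out at 100% and French at 25%, so the column totals 125%: the same effect seen in Tables D.6 and D.7.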
Table D.8: Statistics for nationality
Nationality        Percentage
United Kingdom     99.75%
CONSENT REVOKED    0.25%

Table D.9: Statistics for age
Age Range   Percentage
31-40       31.11%
51-60       10.12%
41-50       16.79%
21-30       34.07%
11-20       1.98%
61-70       4.94%
120-130     0.25%
71-80       0.74%

Table D.10: Statistics for sex
Sex                Percentage
Female             66.91%
Male               32.84%
CONSENT REVOKED    0.25%

Table D.11: Statistics for ethnicity
Ethnicity          Percentage
White              85.68%
Asian              4.20%
Mixed              6.91%
Black              2.72%
CONSENT REVOKED    0.25%
Other              0.25%

Table D.12: Statistics for country of birth
Country of Birth   Percentage
United Kingdom     99.26%
CONSENT REVOKED    0.25%
United States      0.25%
Canada             0.25%

Table D.13: Statistics for student status
Student Status     Percentage
No                 72.84%
DATA EXPIRED       16.79%
Yes                10.12%
CONSENT REVOKED    0.25%

Table D.14: Statistics for employment status
Employment Status                              Percentage
DATA EXPIRED                                   20.49%
Full-Time                                      40.25%
Part-Time                                      14.81%
Not in paid work (e.g. homemaker)              14.32%
Retired or disabled                            14.32%
Other                                          3.95%
Due to start a new job within the next month   1.48%
Unemployed (and job seeking)                   4.44%
CONSENT REVOKED                                0.25%

Table D.15: Statistics for country of residence
Country of Residence   Percentage
United Kingdom         94.81%
Australia              0.74%
Spain                  0.49%
South Africa           0.49%
Netherlands            0.25%
Ireland                1.23%
United States          0.25%
CONSENT REVOKED        0.25%
Japan                  0.74%
Canada                 0.25%
France                 0.25%
New Zealand            0.25%

Figure C.6: Title text
Figure C.7: Comment 1
Figure C.8: Comment 2
Figure C.9: User interface

Appendix E. ChatGPT prompts

Figure E.10: Prompt for ChatGPT annotation of targeting and inside vs. outside of the conversation
You are a language model that helps annotate inappropriate words and harmful messages in comments. Given a CSV file,
First, you should read the following columns per row: "title text", "comment", "context", "user id", and "post text". "context" refers to the previous comments. You should read each "title text", "post text", and "context". Then, you should read "comment". If "comment" is not "-", you should produce your response based on "comment". If "comment" is "-", you should produce your response based on "title text".

Instructions
You will review comments/titles posted on Reddit that may contain inappropriate language. Your task is to identify if they target a specific person or group of people. You should first read the post text. Read the comment/title and assess whether it targets anyone. If the comment/title does not target someone, mark the comment/title as "not targeting". If the comment/title targets someone, you should decide whether the comment/title is targeting a person inside the conversation thread or outside of the conversation thread. Mark targeting a person inside the conversation thread as I. Mark targeting a person outside of the conversation thread as O. Mark not targeting as N.

Examples:
sexual orientation: Gay men refuse to f**k like any other person.
gender: This person is a cisgender man pretending to be a woman to sell his sh**ty blog.
disability: Why are you just sitting there like a mentally paralyzed person?
age: He still acts like a child.
race, ethnicity, or nationality: I don't give a f**k if you are Asian.
religion: I literally hate Christians so much.
famous individual: How embarrassing Serena Williams was at the US open when she lost to Osaka.
political affiliation: Good morning to everyone who isn't a liberal f**k!
other target category: We should send all refugees to concentration camps.
no target category: I hate you!

Note: If a comment/title targets multiple categories, you should mark all target categories.
Examples:
sexual orientation, famous individual: F**k Pitbull and Ne-Yo gay a**
other target category, religion: Unemployed people, particularly Muslims, are like parasites as they live off social welfare.

Figure E.11: Prompt for ChatGPT annotation of target categories

Figure E.12: Prompt for ChatGPT annotation of target tokens (Part 1)
You are a language model that helps annotate inappropriate words and harmful messages in comments. You should read each "Title", "Post Text", "Context", "User ID", "Comment", "Text", and "Target categories". Then, you should produce your response based on "Text". "Context" refers to the previous comments.

Instructions
You will review comments/titles posted on Reddit that target a person or group of people inappropriately due to certain characteristics known as target categories. Your task is to identify the tokens that make reference to those target categories.

List of target categories:
1. Sexual orientation
2. Gender
3. Disability
4. Age
5. Race, ethnicity, or nationality
6. Religion
7. Famous individual
8. Political affiliation

Examples:
Text: Gay men refuse to f**k like any other person.
Target categories: Sexual orientation
Target tokens: Gay, men

Text: This person is a cisgender man pretending to be a woman to sell his sh**ty blog.
Target categories: Gender
Target tokens: cisgender, man

Figure E.13:
https://arxiv.org/abs/2505.16847v1
Prompt for ChatGPT annotation of target tokens (Part 2)

Text: Why are you just sitting there like a mentally paralyzed person?
Target categories: Disability
Target tokens: mentally, paralyzed

Text: He still acts like a child.
Target categories: Age
Target tokens: child

Text: I don't give a f**k if you are Asian.
Target categories: Race, ethnicity, or nationality
Target tokens: Asian

Text: I literally hate Christians so much.
Target categories: Religion
Target tokens: Christians

Text: How embarrassing Serena Williams was at the US open when she lost to Osaka.
Target categories: Famous individual
Target tokens: Serena, Williams

Figures E.10, E.11, E.12, and E.13 demonstrate the designed prompts for ChatGPT. Please note that the slurs and inappropriate words that are obfuscated here are not so in the real prompts.
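The per-row reading rule from the prompts above (annotate "comment" unless it is "-", in which case fall back to "title text") can be sketched as a small helper. This is a minimal illustration; the function name and dictionary layout are not from the paper, only the field names and the fallback rule are.

```python
# Sketch of the row-handling rule from the annotation prompts:
# use the "comment" field unless it is "-" (possibly with stray whitespace,
# as in the ' -' variant mentioned in the prompt), else use "title text".
def text_to_annotate(row):
    comment = row.get("comment", "-").strip()
    return comment if comment != "-" else row["title text"]

rows = [
    {"title text": "Thread title", "comment": "He still acts like a child."},
    {"title text": "I literally hate Christians so much.", "comment": "-"},
]
texts = [text_to_annotate(r) for r in rows]
```

Each resulting text would then be inserted into the annotation prompt shown in the figures.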
arXiv:2505.16849v2 [cs.IR] 28 May 2025

Walk&Retrieve: Simple Yet Effective Zero-shot Retrieval-Augmented Generation via Knowledge Graph Walks

Martin Böckling, martin.boeckling@uni-mannheim.de, University of Mannheim, Mannheim, Germany
Heiko Paulheim, heiko.paulheim@uni-mannheim.de, University of Mannheim, Mannheim, Germany
Andreea Iana, andreea.iana@uni-mannheim.de, University of Mannheim, Mannheim, Germany

Abstract
Large Language Models (LLMs) have showcased impressive reasoning abilities, but often suffer from hallucinations or outdated knowledge. Knowledge Graph (KG)-based Retrieval-Augmented Generation (RAG) remedies these shortcomings by grounding LLM responses in structured external information from a knowledge base. However, many KG-based RAG approaches struggle with (i) aligning KG and textual representations, (ii) balancing retrieval accuracy and efficiency, and (iii) adapting to dynamically updated KGs. In this work, we introduce Walk&Retrieve, a simple yet effective KG-based framework that leverages walk-based graph traversal and knowledge verbalization for corpus generation for zero-shot RAG. Built around efficient KG walks, our method does not require fine-tuning on domain-specific data, enabling seamless adaptation to KG updates, reducing computational overhead, and allowing integration with any off-the-shelf backbone LLM. Despite its simplicity, Walk&Retrieve performs competitively, often outperforming existing RAG systems in response accuracy and hallucination reduction. Moreover, it demonstrates lower query latency and robust scalability to large KGs, highlighting the potential of lightweight retrieval strategies as strong baselines for future RAG research.

CCS Concepts
• Information systems → Information retrieval; Language models; Question answering.

Keywords
Knowledge Graph Retrieval-Augmented Generation, Graph Walks, Zero-Shot Retrieval, Question Answering

ACM Reference Format:
Martin Böckling, Heiko Paulheim, and Andreea Iana. 2025.
Walk&Retrieve: Simple Yet Effective Zero-shot Retrieval-Augmented Generation via Knowledge Graph Walks. In Proceedings of Information Retrieval's Role in RAG Systems (IR-RAG 2025). ACM, New York, NY, USA, 6 pages. https://doi.org/XXXXXXX.XXXXXXX

IR-RAG 2025, Padua, Italy. © 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM.

1 Introduction

Large Language Models (LLMs) are pivotal to question answering (QA) due to their strong language understanding and text generation capabilities [3,27,28,35,50,59]. However, LLMs often (i) struggle with outdated knowledge, (ii) lack interpretability due to their black-box nature [7], and (iii) can hallucinate convincingly yet factually inaccurate answers [18,19,40]. These issues are particularly pronounced in knowledge-intensive tasks [31], when dealing with domain-specific [47,49] or rapidly changing knowledge [51]. Retrieval-augmented generation (RAG) mitigates these limitations by grounding responses in relevant external information [9,11,24]. Yet, text-based RAG primarily relies on semantic similarity search of textual content [9], which fails to capture the relational knowledge necessary to integrate passages with large semantic distance from the query in
multi-step reasoning [6, 21, 23, 30, 37]. Consequently, several works leverage knowledge graphs (KGs) – structured knowledge bases representing real-world information as networks of entities and relations [15] – as external information sources to overcome standard RAG limitations [37]. Given a query, KG-based RAG systems retrieve relevant facts as nodes, triplets, paths, or subgraphs using graph search algorithms, or parametric retrievers based on graph neural networks or language models [37]. The retrieved graph data is then reformatted for the language model – via linearized triples [22], natural language descriptions [8,10,26,52,53], code-like forms [12], or node sequences [29,32,46] – and finally used by an LLM to generate the final response [37].

The existing body of work exhibits several drawbacks. First, augmenting a query with relevant KG triples [25,42,45] can lead to suboptimal retrieval performance due to the misalignment of structured graphs and the sequential token-based nature of the language model. Although converting KG data to an LLM-suitable tokenized format can help, naive triple linearization [2,14], which directly converts KG triples into plain text without considering context, coherence, or structural nuances, often produces semantically incoherent descriptions [54].1 Second, RAG systems that directly reason over KGs with LLMs perform a step-by-step graph traversal for fact retrieval [21,30,46]. This requires multiple LLM calls per query, significantly increasing complexity and latency. Third, KG-based RAG models often fine-tune retrievers [13,29,53] or generators [14,17,29,32,56] on task-specific data to better adapt to diverse KG structures and vocabularies.
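As a concrete illustration of naive triple linearization, the triples from the paper's own footnote example can be dumped verbatim into a prompt. A minimal sketch; the function name is illustrative:

```python
# Naive triple linearization: (head, relation, tail) triples are pasted
# into the prompt as-is, with no verbalization or context.
def linearize_triples(triples):
    facts = ", ".join(f"({h}, {r}, {t})" for h, r, t in triples)
    return f"These facts might be relevant to answer the question: {facts} [...]"

triples = [
    ("A Fistful of Dollars", "writtenBy", "Sergio Leone"),
    ("The Godfather Part II", "sequelOf", "The Godfather"),
]
prompt = linearize_triples(triples)
```

The resulting flat string ignores coherence and structural nuances, which is exactly the weakness the paper attributes to this approach.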
However, collecting high-quality instruction data is costly [4], and fine-tuning large models – even with parameter-efficient methods [5,16,39] – is expensive and limits generalization to dynamic KGs or unseen domains [25, 53].

1 Given the triples: A Fistful of Dollars →writtenBy→ Sergio Leone, and The Godfather Part II →sequelOf→ The Godfather, a prompt based on naive linearization would be: These facts might be relevant to answer the question: (A Fistful of Dollars, writtenBy, Sergio Leone), (The Godfather Part II, sequelOf, The Godfather) [...].

IR-RAG 2025, July 17, 2025, Padua, Italy — Böckling et al.

Figure 1: Overview of the Walk&Retrieve framework: (1) We combine walk-based graph traversal with knowledge verbalization for corpus generation; (2) The answer is generated with a prompt augmenting the query with the most similar verbalized walks.

Contributions. We propose Walk&Retrieve, a lightweight zero-shot KG-based RAG framework, designed as a simple yet competitive baseline to address these challenges. It combines efficient graph traversal, via random or breadth-first search walks, with verbalization of KG-derived information to build a contextual corpus of relevant facts for each KG entity. At inference, we retrieve the most similar nodes to the query, and their corresponding walks, respectively. We generate the final answer by prompting an LLM with the query, augmented with this relevant context. Unlike many existing KG-based RAG systems, Walk&Retrieve: (1) is adaptable to dynamic KGs – updates (e.g., node insertion or deletion) require no retraining, as new knowledge can be added by incrementally generating additional walks; (2) is more efficient, requiring no fine-tuning of the backbone LLM, and
only a single LLM call per query; (3) enables zero-shot RAG with any off-the-shelf LLM. We show that Walk&Retrieve consistently generates accurate responses, while minimizing hallucinations. Our findings render walk-based corpus generation as a promising approach for scalable KG-based RAG, and establish Walk&Retrieve as a strong baseline for future research.

2 Methodology

Fig. 1 illustrates our proposed framework, comprising two stages: corpus generation and knowledge-enhanced answer generation.

2.1 Corpus Generation

In the first stage, we leverage the knowledge stored in KGs to construct a corpus of relevant facts. A Knowledge Graph is defined as $G=(V,E,R)$, where $V$ denotes a set of nodes $v \in V$, and $E \subseteq V \times R \times V$ a set of directed edges labeled with relation types from the set $R$. For each node $v \in V$, we define its neighbor set as $N(v) := \{v' : \exists r \in R \mid (v,r,v') \in E\}$. Corpus generation consists of walk-based graph traversal, knowledge verbalization, and indexing.

Walk-based Graph Traversal. We extract relevant facts for all entities in the KG using two walk-based graph traversal approaches.

Random Walks (RW). In this method, we retrieve facts for a given vertex $v \in V$ by generating $n_w$ graph walks $\mathcal{W}_l$ of length $l$ rooted in $v$. A random walk is a stochastic process with variables $X_0, X_1, X_2, \ldots$, where each $X_t \in V$ denotes the vertex visited at time $t$ [38]. At each step, when the random walker is at vertex $v_i$, it chooses the next node uniformly at random from one of its neighbors $v_j \in N(v_i)$ according to the following transition probability:

$$P(X_{t+1}=j \mid X_t=i) = \begin{cases} \frac{1}{|N(v_i)|} & \text{if } (v_i, r, v_j) \in E \\ 0 & \text{otherwise,} \end{cases} \quad (1)$$

where $|N(v_i)|$ denotes the neighborhood size of $v_i$. Finally, the graph corpus is obtained by aggregating the $n_w$ random walks $\mathcal{W}_l = (X_0, r_i, X_1, \ldots, r_k, X_l)$, $r \in R$, per vertex, as $\mathcal{C}_{RW} = \bigcup_{i=1}^{|V|} \bigcup_{j=1}^{n_w} \mathcal{W}_l$.

Breadth-First Search (BFS) Walks. In this approach, we construct a spanning tree for each entity in $G$ using the BFS algorithm.
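Both traversal strategies of this section can be sketched in a few lines. This is a minimal illustration under stated assumptions: the toy adjacency-list graph, the function names, and the early stop at sink nodes are ours, not from the paper.

```python
import random

# Toy KG as adjacency lists: node -> list of (relation, neighbor) edges.
kg = {
    "A": [("r1", "B"), ("r2", "C")],
    "B": [("r3", "C")],
    "C": [],
}

def random_walk(kg, root, length, rng=random):
    """One walk (X_0, r, X_1, ..., X_l): at each step pick a neighbor
    uniformly at random, per Eq. (1); stop early at a sink node."""
    walk, node = [root], root
    for _ in range(length):
        if not kg[node]:
            break
        rel, node = rng.choice(kg[node])
        walk += [rel, node]
    return tuple(walk)

def bfs_layers(kg, root, depth):
    """Layers L_0..L_d of nodes grouped by shortest-path distance to the
    root, per Eq. (2); each vertex is visited at most once."""
    layers, seen = [{root}], {root}
    for _ in range(depth):
        nxt = {v for u in layers[-1] for _, v in kg[u]} - seen
        if not nxt:
            break
        layers.append(nxt)
        seen |= nxt
    return layers
```

Repeating `random_walk` $n_w$ times per vertex yields the RW corpus, while `bfs_layers` mirrors the duplicate-free BFS exploration.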
For a given root $v_r \in V$, we build walks by partitioning the reachable nodes $v_j$ into layers $L_i$ based on their shortest-path distance to the root [41]. Starting with $L_0 = \{v_r\}$, layers are recursively defined as

$$L_{i+1} = \{v_j \in V \setminus \textstyle\bigcup_{k=0}^{i} L_k : \exists v_i \in L_i \mid (v_i, r, v_j) \in E\} \quad (2)$$

for $i \in [0, d]$, where $d$ is the maximum depth (i.e., the maximum allowed shortest-path distance). This guarantees that each vertex is explored only once per search. Hence, the resulting corpus $\mathcal{C}_{BFS} = \bigcup_{i=1}^{|V|} L_i$ contains only non-duplicate walks for each vertex. We note that the maximum allowed shortest-path distance $d$ of the BFS walks is equivalent to the length $l$ of the randomly generated walks.

Knowledge Verbalization. As LLMs require textual inputs, we convert the extracted walks for each entity in $G$ into free-form textual descriptions, to enable knowledge-enhanced reasoning for answer generation. In contrast to recent works that fine-tune an LLM on question-answer pairs to learn a graph-to-text transformation [53], we directly prompt the LLM – using the prompt template shown in Fig. 2a – to provide a natural language representation of the walks, obtaining the verbalized corpus. This approach aligns the KG-derived information with the LLM's representation space, while preserving the order of the nodes and edges in the walks. Moreover, by not fine-tuning the LLM, we (i) eliminate the need for labeled graph-text pairs, (ii) improve generalization to unseen KGs, and (iii) enable the usage of any LLM in the knowledge verbalization step.

Indexing. Lastly, we index the graph for efficient retrieval. After knowledge verbalization, each walk $w^v_i$ of vertex $v$ is converted into a vector $\mathbf{w}^v_i$. Moreover,
we compute each node's global representation from the concatenation of its respective walks. We store the embeddings of all nodes and corresponding walks to facilitate efficient retrieval during inference. Crucially, our walk-based corpus generation renders Walk&Retrieve highly adaptable to dynamic KGs: updates (deletions, modifications, or additions of nodes and edges) require recomputing only the walks involving the changed graph elements – a much smaller subset than the entire corpus.

System: Please provide me from an extracted triple set of a Knowledge Graph a sentence. The triple set consists of one extracted random walk. Therefore, a logical order of the shown triples is present. Please consider this fact when constructing the sentence. Prevent introduction words.
Human: Please return only the constructed sentence from the following set of node and edge labels extracted from the Knowledge Graph: {triples}.
(a) Knowledge verbalization.

System: You are provided with context information from a RAG retrieval, which gives you the top k context information. Please use the provided context information to answer the question. If you are not able to answer the question based on the context information, please return the following sentence: "I do not know the answer".
Human: Please answer the following question: {question}. Use the following context information to answer the question: {context}.
(b) Knowledge-enhanced answer generation.

Figure 2: Prompt templates used for knowledge verbalization and answer generation.

2.2 Knowledge-enhanced Answer Generation

Given a query $q$, we encode it with the same LLM used for knowledge verbalization, so that the query and the retrieved facts share the same vector space.
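The retrieval step of this section (take the top-$k$ nodes by cosine similarity to the query, then the most similar verbalized walks of those nodes) can be sketched as follows. The toy embeddings and names are illustrative stand-ins for the LLM-produced vectors, not values from the paper:

```python
import math

def cos(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def argtopk(items, query, k):
    """Keys of `items` whose embeddings are most cosine-similar to `query`."""
    return sorted(items, key=lambda key: cos(query, items[key]), reverse=True)[:k]

# Toy embeddings standing in for LLM encodings of nodes and verbalized walks.
node_emb = {"n1": [1.0, 0.0], "n2": [0.0, 1.0], "n3": [0.7, 0.7]}
walk_emb = {"n1": {"w1": [1.0, 0.1], "w2": [0.2, 1.0]},
            "n2": {"w3": [0.1, 1.0]},
            "n3": {"w4": [0.6, 0.8]}}
q = [1.0, 0.2]

V_k = argtopk(node_emb, q, k=2)                               # top-k nodes
W_k = [w for v in V_k for w in argtopk(walk_emb[v], q, k=1)]  # their top walks
```

The retrieved walks `W_k` are then pasted into the answer-generation prompt (Fig. 2b) as the query's context.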
We then perform a $k$-nearest neighbor search to retrieve the $k$ most similar nodes in $G$ to $q$ and, for each node, the $k$ most relevant verbalized walks. Concretely, we define the sets of relevant nodes $V_k$ and corresponding walks $W_k$ based on the cosine similarity between the embeddings of the query $\mathbf{q}$ and each node $\mathbf{v}$ or walk $\mathbf{w}^v$, respectively. To this end, we compute:

$$V_k = \operatorname{argtopk}_{v \in V} \cos(\mathbf{q}, \mathbf{v}), \qquad W_k = \bigcup_{v_k \in V_k} \operatorname{argtopk}_{w^{v_k} \in \mathcal{C}} \cos(\mathbf{q}, \mathbf{w}^{v_k}), \quad (3)$$

where $\mathcal{C} \in \{\mathcal{C}_{RW}, \mathcal{C}_{BFS}\}$, and the argtopk operation retrieves the $k$ nodes with the highest cosine similarity to the query. For zero-shot inference, we design a prompt that integrates the query $q$ with the relevant context $W_k$, cf. template from Fig. 2b. Importantly, we instruct the LLM to refrain from responding if the context is insufficient, thereby grounding responses in the extracted structured knowledge, and reducing hallucinations. Finally, the prompt is fed into the previously used LLM to generate a response. By avoiding LLM fine-tuning, we reduce computational costs and eliminate the need for task-specific training data. Moreover, we reduce inference latency, as Walk&Retrieve uses a single call to the LLM per query.2

3 Experimental Setup

Baselines. We compare Walk&Retrieve against three kinds of baselines: standard LLM, text-based RAG, and KG-based RAG. With LLM only, we test whether the LLM can answer questions without external data. For Vanilla RAG, following [45], we uniformly sample 5 triples from all 1-hop facts of the question entities. We consider two KG-RAG models. SubgraphRAG [25]
retrieves subgraphs using an MLP and parallel-triple scoring; the LLM then reasons over the linearized triples of the subgraph to generate a response. RetrieveRewriteAnswer [53] uses constrained path search and relation path prediction for subgraph retrieval, which it then converts into free-form text to augment the prompt for response generation.

2 Note that the preprocessing step's computational overhead is a one-time cost, as subsequent graph changes require only incremental, inexpensive updates to the corpus.

Data. We conduct experiments on MetaQA [58] and CRAG [55]. MetaQA [58] is a knowledge base QA benchmark, with over 400K questions (single- and multi-hop), and a KG containing 43K entities and 9 relation types. We use all its 1-hop, 2-hop, and 3-hop subsets with the "vanilla" question version. CRAG [55] is a factual QA benchmark for RAG, featuring over 4.4K question-answer pairs across five domains and eight question categories. It provides mock KGs with 2.6 million entries.3 Table 1 summarizes their statistics.

Table 1: Statistics of MetaQA [58] and CRAG [55] test sets.

                    MetaQA                        CRAG
                    1-hop    2-hop    3-hop
# Question types    13       21       15         8
# Questions         9,947    14,872   14,274     1,335

Evaluation Metrics. We follow prior work [25, 43, 44] and use Hits@1 to measure if a response includes at least one correct entity. Additionally, we adopt the model-based evaluation setup of Yang et al. [55] to assess the quality of the generated answers using a three-way scoring system: accurate (1), incorrect (-1), or missing (0). Exact matches are labeled accurate; all others are evaluated with two LLMs, gpt-4-0125-preview [33] and Llama-3.1-70B-instruct [1], to mitigate self-preference [36]. We report averages of accurate, hallucinated, and missing responses, and the overall truthfulness (i.e., accuracy minus hallucination) from the LLM evaluators.

Implementation Details.
We retrieve $k=3$ similar nodes and walks, respectively, for answer generation.4 Our main experiments use Llama-3.1-70B-instruct [1] with temperature $t=0$ and speculative decoding for all models. We perform 60 walks for random-walk corpus generation. For both Walk&Retrieve model variants, we use walks of depth 4 on MetaQA and 3 on CRAG. We train and evaluate the baselines using their official implementations, and conduct all experiments on two NVIDIA A6000 48 GB GPUs.5

3 In our experiments, we use the public test set of CRAG.
4 In preliminary experiments with $k \in [1,5]$, we found $k=3$ to be the optimal value that balances accuracy and hallucination.
5 Code available at https://github.com/MartinBoeckling/KGRag

4 Results and Discussion

Table 2 summarizes the QA performance of Walk&Retrieve and the baselines with Llama-3.1.

Table 2: Question-answering performance. We report numbers in percentage, and the query runtime in seconds. For MetaQA, we average results over its k-hop subsets. The best results per column are highlighted in bold, the second best underlined.

                                               MetaQA                                             CRAG
Baseline Type   Model                  Hits@1  Accuracy  Hallucination  Missing  Time (s)  Hits@1  Accuracy  Hallucination  Missing  Time (s)
LLM only        Direct                 30.37   31.79     18.86          61.89    13.03     11.05   9.31      23.95          67.49    14.14
Text-based RAG  Vanilla RAG            25.08   14.73     13.52          65.70    22.11     15.21   16.94     19.53          51.39    26.01
KG-based RAG    SubgraphRAG            43.88   41.17     18.08          32.53    23.12     –       –         –              –        –
                RetrieveRewriteAnswer  47.49   34.01     22.92          32.12    22.37     –       –         –              –        –
                Walk&Retrieve-RW       55.60   41.11     15.31          37.13    22.12     19.31   19.40     19.64          51.94    22.15
                Walk&Retrieve-BFS      67.99   57.08     12.74          28.27    21.31     21.31   21.53     23.01          53.40    23.34

Figure 3: Missing vs. truthfulness rates over MetaQA subsets.

On MetaQA, Walk&Retrieve-BFS consistently outperforms all other models in answer accuracy and Hits@1, achieving a relative improvement of 38.64% over the best baseline (SubgraphRAG). While other KG-based RAG systems yield high accuracy, they tend to hallucinate more than the simpler LLM-only and Vanilla RAG systems, which often produce no answer rather than an incorrect one. In contrast, Walk&Retrieve-BFS minimizes both hallucinations and missing responses. Although LLM-only has the lowest query latency due to the absence of a retrieval step, Walk&Retrieve achieves the fastest inference time per query among all RAG approaches, underscoring its efficiency.

Fig. 3 breaks down MetaQA performance by number of hops. LLM-only and Vanilla RAG fail to answer over 60% of 2- and 3-hop questions. Both SubgraphRAG and RetrieveRewriteAnswer lower the missing rate below 35% across hops, although truthfulness remains under 25%. Conversely, Walk&Retrieve-BFS better trades off accuracy and hallucination (55%+ truthfulness for 1-hop and 37%+ for 2- and 3-hop questions), while greatly reducing non-responses.

On CRAG, both Walk&Retrieve variants outperform LLM-only and Vanilla RAG in answer accuracy, while matching them in hallucination and missing rates.
Note that SubgraphRAG and RetrieveRewriteAnswer could not be evaluated on CRAG due to scalability and computational constraints.6 These results highlight the scalability of our walk-based corpus generation approach, which limits traversal to small-hop neighborhoods rather than the full graph. While performance drops on CRAG, likely due to its greater complexity (i.e., MetaQA expects only entity answers) and focus on holistic RAG performance, Walk&Retrieve remains robust. Even though the findings are promising, we plan to further evaluate Walk&Retrieve on larger KGs and other challenging benchmarks (e.g., WebQSP [57], CWQ [48]) to fully showcase its capabilities.

6 SubgraphRAG fails to scale to CRAG's KG (over 1 million edges), and RetrieveRewriteAnswer requires fine-tuning the backbone LLM beyond our available resources.

Ablation of Walk Approach. The graph traversal strategy and its hyperparameters define a node's relevant context, directly impacting corpus quality and, consequently, retrieval accuracy in RAG systems.

Figure 4: Truthfulness rates for different (i) walk approaches and (ii) backbone LLMs, over the MetaQA subsets.

The left graph in Fig. 4 shows MetaQA results for Walk&Retrieve with walk depths ranging from 1 to 6.7 We find that a walk depth of 4 offers the best trade-off between answer accuracy and hallucination. Notably, regardless of walk length, Walk&Retrieve-BFS consistently yields higher truthfulness than Walk&Retrieve-RW, likely due to its systematic graph exploration, which avoids duplicate walks (cf. §2). In contrast, random walks tend to produce noisier context
and fewer unique paths, thus capturing less relevant information from the KG.8 While they may be more efficient on large-scale KGs, as they do not compute full neighborhoods, this efficiency comes at the cost of increased noise.9

7 For Walk&Retrieve-RW, we also ablate $n_w \in [10,100]$ (step of 10); for brevity, we report results for $n_w = 60$, as other values perform comparably.
8 On average, each node yields 60 duplicated and 8.74 unique random walks, whereas BFS generates 9.41 unique walks. Although RW could be modified to avoid duplicates, our current setup spans the full spectrum from randomness (RW) to structure (BFS).
9 The time complexity of BFS is $O(|V|+|E|)$, whereas that of RW varies between $O(|V|\log|V|)$ and $O(|V|^3)$.

Robustness to Backbone LLMs. Lastly, we evaluate model robustness using different LLMs (see right graph of Fig. 4), including Mixtral-8x7B-Instruct [20] and GPT-4o [34]. Mixtral improves answer truthfulness over Llama-3.1 on 2- and 3-hop questions, while GPT-4o yields the highest truthfulness across all types of questions. The RW approach exhibits considerably higher variance across LLMs compared to the BFS-based model, which we attribute to the noisier and less relevant information in its generated corpus.

5 Conclusion

Current KG-based RAG faces challenges in aligning structured and textual representations, balancing accuracy with efficiency, and adapting to dynamic KGs. We proposed Walk&Retrieve, a simple yet effective KG-based framework for zero-shot RAG. It leverages walk-based graph traversal and LLM-driven knowledge verbalization for corpus generation. At inference time, the LLM is prompted with the query augmented by relevant verbalized walks for enhanced reasoning. Its efficient retrieval mechanism supports seamless adaptation to evolving KGs through incremental generation of new walks.
Walk&Retrieve is compatible with any off-the-shelf LLM, and reduces computational overhead by avoiding fine-tuning of the backbone LLM. Despite its simplicity, Walk&Retrieve outperforms existing RAG approaches in answer accuracy and in the reduction of hallucinated or missing responses, while maintaining low query latency. Our results highlight walk-based corpus generation as a promising strategy for scaling to large-size KGs. These findings establish Walk&Retrieve as a simple, yet strong baseline for KG-based RAG, and we hope they inspire further research into adaptable and scalable RAG systems.

References
[1] AI@Meta. 2024. Llama 3 Model Card. https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md
[2] Jinheon Baek, Alham Fikri Aji, and Amir Saffari. 2023. Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering. In Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE). 78–106. doi:10.18653/v1/2023.nlrse-1.7
[3] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020), 1877–1901.
[4] Yihan Cao, Yanbin Kang, Chi Wang, and Lichao Sun. 2023. Instruction Mining: Instruction Data Selection for Tuning Large Language Models. arXiv preprint arXiv:2307.06290 (2023). doi:10.48550/arXiv.2307.06290
[5] Ziwei Chai, Tianjie Zhang, Liang Wu, Kaiqiao Han, Xiaohai Hu, Xuanwen Huang, and Yang Yang. 2023. GraphLLM: Boosting graph reasoning ability of large language model. arXiv preprint arXiv:2310.05845
(2023). doi:10.48550/arXiv.2310.05845
[6] Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. 2024. Benchmarking large language models in retrieval-augmented generation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38. 17754–17762. doi:10.1609/aaai.v38i16.29728
[7] Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A Survey of the State of Explainable AI for Natural Language Processing. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. 447–459. doi:10.18653/v1/2020.aacl-main.46
[8] Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven Truitt, and Jonathan Larson. 2024. From local to global: A graph RAG approach to query-focused summarization. arXiv preprint arXiv:2404.16130 (2024). doi:10.48550/arXiv.2404.16130
[9] Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing Li. 2024. A survey on RAG meeting LLMs: Towards retrieval-augmented large language models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 6491–6501. doi:10.1145/3637528.3671470
[10] Bahare Fatemi, Jonathan Halcrow, and Bryan Perozzi. 2024. Talk like a Graph: Encoding Graphs for Large Language Models. In The Twelfth International Conference on Learning Representations. https://openreview.net/forum?id=IuXR1CCrSi
[11] Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, and Haofen Wang. 2023. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997 (2023). doi:10.48550/arXiv.2312.10997
[12] Jiayan Guo, Lun Du, Hengyu Liu, Mengyu Zhou, Xinyi He, and Shi Han. 2023. GPT4Graph: Can large language models understand graph structured data? An empirical evaluation and benchmarking.
arXiv preprint arXiv:2305.15066 (2023). doi:10.48550/arXiv.2305.15066
[13] Tiezheng Guo, Qingwen Yang, Chen Wang, Yanyi Liu, Pan Li, Jiawei Tang, Dapeng Li, and Yingyou Wen. 2024. KnowledgeNavigator: Leveraging large language models for enhanced reasoning over knowledge graph. Complex & Intelligent Systems 10, 5 (2024), 7063–7076.
[14] Xiaoxin He, Yijun Tian, Yifei Sun, Nitesh V Chawla, Thomas Laurent, Yann LeCun, Xavier Bresson, and Bryan Hooi. 2024. G-Retriever: Retrieval-augmented generation for textual graph understanding and question answering. arXiv preprint arXiv:2402.07630 (2024). doi:10.48550/arXiv.2402.07630
[15] Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d'Amato, Gerard De Melo, Claudio Gutierrez, Sabrina Kirrane, José Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, et al. 2021. Knowledge graphs. ACM Computing Surveys (CSUR) 54, 4 (2021), 1–37.
[16] Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2022. LoRA: Low-Rank Adaptation of Large Language Models. In International Conference on Learning Representations.
[17] Yuntong Hu, Zhihan Lei, Zheng Zhang, Bo Pan, Chen Ling, and Liang Zhao. 2024. GRAG: Graph Retrieval-Augmented Generation. arXiv preprint arXiv:2405.16506 (2024). doi:10.48550/arXiv.2405.16506
[18] Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, et al. 2024. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. ACM Transactions on Information Systems (2024). doi:10.1145/3703155
[19] Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of Hallucination in Natural Language
https://arxiv.org/abs/2505.16849v2
Generation. Comput. Surveys 55, 12 (2023), 1–38. doi:10.1145/ 3571730 [20] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, De- vendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al .2023. Mistral 7B. arXiv preprint arXiv:2310.06825 (2023). doi:10.48550/arXiv.2310.06825 [21] Bowen Jin, Chulin Xie, Jiawei Zhang, Kashob Kumar Roy, Yu Zhang, Zheng Li, Ruirui Li, Xianfeng Tang, Suhang Wang, Yu Meng, et al .2024. Graph Chain-of- Thought: Augmenting Large Language Models by Reasoning on Graphs. arXiv preprint arXiv:2404.07103 (2024). doi:10.48550/arXiv.2404.07103 [22] Jiho Kim, Yeonsu Kwon, Yohan Jo, and Edward Choi. 2023. KG-GPT: A General Framework for Reasoning on Knowledge Graphs Using Large Language Models. InFindings of the Association for Computational Linguistics: EMNLP 2023 . 9410– 9421. doi:10.18653/v1/2023.findings-emnlp.631 [23] Jonathan Larson and Steven Truitt. 2024. GraphRAG: Unlocking LLM discovery on narrative private data . Retrieved 2025-01-27 from https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm- discovery-on-narrative-private-data/ [24] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al.2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems 33 (2020), 9459–9474. [25] Mufei Li, Siqi Miao, and Pan Li. 2025. Simple is effective: The roles of graphs and large language models in knowledge-graph-based retrieval-augmented generation. In International Conference on Learning Representations . https: //openreview.net/pdf?id=JvkuZZ04O7 [26] Shiyang Li, Yifan Gao, Haoming Jiang, Qingyu Yin, Zheng Li, Xifeng Yan, Chao Zhang, and Bing Yin. 2023. Graph Reasoning for Question Answering with Triplet Retrieval. In Findings of the Association for Computational Linguistics: ACL 2023. 3366–3375. 
doi:10.18653/v1/2023.findings-acl.208 [27] Valentin Liévin, Christoffer Egeberg Hother, Andreas Geert Motzfeldt, and Ole Winther. 2024. Can large language models reason about medical questions? Patterns 5, 3 (2024). [28] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. Comput. Surveys 55, 9 (2023), 1–35. doi:10.1145/3560815 [29] Linhao Luo, Yuan-Fang Li, Reza Haf, and Shirui Pan. 2024. Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning. In The Twelfth International Conference on Learning Representations . [30] Shengjie Ma, Chengjin Xu, Xuhui Jiang, Muzhi Li, Huaren Qu, Cehao Yang, Jiaxin Mao, and Jian Guo. 2024. Think-on-Graph 2.0: Deep and Faithful Large Language Model Reasoning with Knowledge-guided Retrieval Augmented Generation. arXiv preprint arXiv:2407.10805 (2024). doi:10.48550/arXiv.2407.10805 [31] Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) . 9802–9822. doi:10.18653/v1/2023.acl-long.546 [32] Costas Mavromatis and George Karypis. 2024. GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning. arXiv preprint arXiv:2405.20139 (2024). doi:10.48550/arXiv.2405.20139 [33] OpenAI. 2023. ChatGPT . Retrieved 2025-02-14 from https://openai.com/index/ chatgpt/ [34] OpenAI. 2023. GPT-4 Technical Report. arXiv preprint arXiv:2303.08774 (2023). doi:10.48550/arXiv.2303.08774 [35] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al .2022. Training language models to follow instructions with human feedback. 
Advances in neural information processing systems
https://arxiv.org/abs/2505.16849v2
35 (2022), 27730–27744. [36] Arjun Panickssery, Samuel R Bowman, and Shi Feng. 2024. Llm evaluators recognize and favor their own generations. arXiv preprint arXiv:2404.13076 (2024). doi:10.48550/arXiv.2404.13076 IR-RAG 2025, July 17, 2025, Padua, Italy Böckling et al. [37] Boci Peng, Yun Zhu, Yongchao Liu, Xiaohe Bo, Haizhou Shi, Chuntao Hong, Yan Zhang, and Siliang Tang. 2024. Graph retrieval-augmented generation: A survey. arXiv preprint arXiv:2408.08921 (2024). [38] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining . 701–710. doi:10.1145/2623330. 2623732 [39] Bryan Perozzi, Bahare Fatemi, Dustin Zelle, Anton Tsitsulin, Mehran Kazemi, Rami Al-Rfou, and Jonathan Halcrow. 2024. Let your graph do the talking: Encoding structured data for llms. arXiv preprint arXiv:2402.05862 (2024). [40] Vipula Rawte, Swagata Chakraborty, Agnibh Pathak, Anubhav Sarkar, SM Towhidul Islam Tonmoy, Aman Chadha, Amit Sheth, and Amitava Das. 2023. The Troubling Emergence of Hallucination in Large Language Models-An Extensive Definition, Quantification, and Prescriptive Remediations. In Proceed- ings of the 2023 Conference on Empirical Methods in Natural Language Processing . 2541–2573. doi:10.18653/v1/2023.emnlp-main.155 [41] Petar Ristoski and Heiko Paulheim. 2016. Rdf2vec: Rdf graph embeddings for data mining. In International semantic web conference . Springer, 498–514. [42] Ahmmad O. M. Saleh, Gokhan Tur, and Yucel Saygin. 2024. SG-RAG: Multi- Hop Question Answering With Large Language Models Through Knowledge Graphs. In Proceedings of the 7th International Conference on Natural Language and Speech Processing (ICNLSP 2024) , Mourad Abbas and Abed Alhakim Freihat (Eds.). Association for Computational Linguistics, Trento, 439–448. https:// aclanthology.org/2024.icnlsp-1.45/ [43] Apoorv Saxena, Adrian Kochsiek, and Rainer Gemulla. 
2022. Sequence-to- Sequence Knowledge Graph Completion and Question Answering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) . 2814–2828. doi:10.18653/v1/2022.acl-long.201 [44] Priyanka Sen, Alham Fikri Aji, and Amir Saffari. 2022. Mintaka: A Complex, Natu- ral, and Multilingual Dataset for End-to-End Question Answering. In Proceedings of the 29th International Conference on Computational Linguistics . 1604–1619. [45] Priyanka Sen, Sandeep Mavadia, and Amir Saffari. 2023. Knowledge graph- augmented language models for complex question answering. In Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE) . 1–8. doi:10.18653/v1/2023.nlrse-1.1 [46] Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Lionel Ni, Heung-Yeung Shum, and Jian Guo. 2024. Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph. In The Twelfth International Conference on Learning Representations . [47] Kai Sun, Yifan Xu, Hanwen Zha, Yue Liu, and Xin Luna Dong. 2024. Head- to-Tail: How Knowledgeable are Large Language Models (LLMs)? AKA Will LLMs Replace Knowledge Graphs?. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) . 311–325. doi:10.18653/v1/2024. naacl-long.18 [48] Alon Talmor and Jonathan Berant. 2018. The Web as a Knowledge-Base for Answering Complex Questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: HumanLanguage Technologies, Volume 1 (Long Papers) . 641–651. doi:10.18653/v1/N18- 1059 [49] SM Tonmoy, SM Zaman,
https://arxiv.org/abs/2505.16849v2
Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, and Amitava Das. 2024. A comprehensive survey of hallucination mitigation techniques in large language models. arXiv preprint arXiv:2401.01313 (2024). doi:10.48550/arXiv.2401.01313 [50] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yas- mine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhos- ale, et al .2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023). doi:10.48550/arXiv.2307.09288 [51] Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, and Thang Luong. 2024. FreshLLMs: Re- freshing Large Language Models with Search Engine Augmentation. In Findings of the Association for Computational Linguistics: ACL 2024 , Lun-Wei Ku, Andre Martins, and Vivek Srikumar (Eds.). Association for Computational Linguistics, Bangkok, Thailand, 13697–13720. doi:10.18653/v1/2024.findings-acl.813 [52] Yilin Wen, Zifeng Wang, and Jimeng Sun. 2023. Mindmap: Knowledge graph prompting sparks graph of thoughts in large language models. arXiv preprint arXiv:2308.09729 (2023). doi:10.48550/arXiv.2308.09729 [53] Yike Wu, Nan Hu, Sheng Bi, Guilin Qi, Jie Ren, Anhuan Xie, and Wei Song. 2023. Retrieve-rewrite-answer: A kg-to-text enhanced llms framework for knowledge graph question answering. arXiv preprint arXiv:2309.11206 (2023). doi:10.48550/ arXiv.2309.11206 [54] Yike Wu, Yi Huang, Nan Hu, Yuncheng Hua, Guilin Qi, Jiaoyan Chen, and Jeff Pan. 2024. CoTKR: Chain-of-Thought Enhanced Knowledge Rewriting for Complex Knowledge Graph Question Answering. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing . 3501–3520. doi:10.18653/v1/2024.emnlp-main.205 [55] Xiao Yang, Kai Sun, Hao Xin, Yushi Sun, Nikita Bhalla, Xiangsen Chen, Sa- jal Choudhary, Rongze Daniel Gui, Ziran Will Jiang, Ziyu Jiang, et al .2024. CRAG–Comprehensive RAG Benchmark. 
38th Conference on Neural Information Processing Systems (NeurIPS 2024), Track on Datasets and Benchmarks (2024). [56] Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies . 535–546. doi:10.18653/v1/2021.naacl-main.45 [57] Wen-tau Yih, Matthew Richardson, Christopher Meek, Ming-Wei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers) . 201–206. doi:10.18653/v1/P16-2033 [58] Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander Smola, and Le Song. 2018. Variational reasoning for question answering with knowledge graph. In Proceedings of the AAAI conference on artificial intelligence , Vol. 32. [59] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al .2023. A survey of large language models. arXiv preprint arXiv:2303.18223 (2023). doi:10.48550/ arXiv.2303.18223
https://arxiv.org/abs/2505.16849v2
arXiv:2505.16850v1 [cs.LG] 22 May 2025

ATR-Bench: A Federated Learning Benchmark for Adaptation, Trust, and Reasoning

Tajamul Ashraf1, Mohammed Mohsen Peerzada5, Moloud Abdar2, Yutong Xie1, Yuyin Zhou3, Xiaofeng Liu4, Iqra Altaf Gillani∗5, Janibul Bashir∗5
∗Equal senior authorship.
1Department of Computer Vision, MBZUAI; 2University of Queensland; 3University of California, Santa Cruz; 4Yale University; 5Gaash Lab, Department of IT, NIT Srinagar

Abstract

Federated Learning (FL) has emerged as a promising paradigm for collaborative model training while preserving data privacy across decentralized participants. As FL adoption grows, numerous techniques have been proposed to tackle its practical challenges. However, the lack of standardized evaluation across key dimensions hampers systematic progress and fair comparison of FL methods. In this work, we introduce ATR-Bench, a unified framework for analyzing federated learning through three foundational dimensions: Adaptation, Trust, and Reasoning. We provide an in-depth examination of the conceptual foundations, task formulations, and open research challenges associated with each theme. We extensively benchmark representative methods and datasets for adaptation to heterogeneous clients and for trustworthiness in adversarial or unreliable environments. Because reliable metrics and models for reasoning in FL are still lacking, we provide only literature-driven insights for this dimension. ATR-Bench lays the groundwork for systematic and holistic evaluation of federated learning with real-world relevance. We will make our complete codebase1 publicly accessible, along with a curated repository that continuously tracks new developments and research in the FL literature.

1 Introduction

Deep learning has revolutionized numerous fields, leading to groundbreaking advancements across various scientific domains, and has increasingly permeated industrial and societal applications.
This transformation is especially evident in areas such as computer vision [28, 155, 32, 53, 196, 59, 213], natural language processing [179, 30], multi-modal learning [149, 97, 232], and medical analysis [14]. With increasing concerns around data sensitivity and privacy, several regulatory frameworks have been introduced to regulate how data is collected and used [116, 135, 182, 137]. As a result, traditional centralized training approaches, which rely on aggregating raw data from multiple sources, face significant deployment challenges in real-world applications. To address these constraints, federated learning (FL) [82, 81, 118, 210, 166, 57, 209] has gained traction as an effective paradigm for privacy-aware collaborative learning. FL allows multiple participants to collaboratively train a shared model without sharing their data. Clients locally update the model using their data, and only the learned updates are sent to a central server, which aggregates them into a global model for further refinement. This decentralized approach allows FL to support secure and privacy-preserving learning across distributed data silos. Despite notable progress in FL research [51, 73, 236, 49, 102, 189, 138, 13], the field still faces several open challenges. Three primary areas of concern are:

1 https://github.com/Tajamul21/ATR-Bench

Preprint. Under review.

Table 1: Summary of existing works. Additional information can be found in § C. [Table comparing prior surveys and benchmarks ([226], [210], [180], [114], [201], [129], [215], [66], [131], [181], [175], [24], [140], [144], [108], [72], [109], [198], [71], [219], [62], [230], [113], [169]) against Ours on coverage of Adaptation (Generalization), Trust (Robustness, Fairness), Reasoning, and Benchmark; only Ours covers all six attributes.]
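The FL training loop described above, local client updates aggregated into a global model by a central server, can be sketched as one minimal FedAvg-style round. The linear model, loss, synthetic data, and hyperparameters below are illustrative assumptions, not anything specified in the paper.

```python
import numpy as np

def client_update(global_w, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a linear
    least-squares model, starting from the current global weights."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def server_aggregate(client_weights, client_sizes):
    """FedAvg-style aggregation: average client models weighted by their
    local sample counts."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# One federated round over three clients holding different amounts of data.
rng = np.random.default_rng(0)
global_w = np.zeros(5)
clients = [(rng.normal(size=(n, 5)), rng.normal(size=n)) for n in (20, 50, 100)]

local_models = [client_update(global_w, X, y) for X, y in clients]
global_w = server_aggregate(local_models, [len(y) for _, y in clients])
print(global_w.shape)  # (5,)
```

Only `local_models` (the learned updates) cross the network in this sketch; the raw `(X, y)` pairs never leave their client, which is the privacy property the text describes.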
•Adaptation. In federated learning, data is often generated across diverse sources, naturally resulting in non-independent and non-identically distributed (Non-IID) characteristics [134, 133, 79, 173, 197, 19, 221]. These discrepancies introduce two primary types of distribution shifts: i) Cross-Client Distribution Shift: Each client typically operates on data with a distinct distribution, leading to significant heterogeneity between participants. As a result, clients tend to optimize their local models toward different empirical minima, which may conflict with one another [142, 125, 115, 212, 238]. This misalignment in optimization trajectories can hinder convergence and reduce the effectiveness of the aggregated global model. ii) Out-of-Client Distribution Shift: Federated models are trained solely on data from participating clients, and thus are biased toward the distributions present during training. When deployed in unseen environments or encountering new clients (i.e., external domains), these models often underperform due to their inability to generalize beyond the observed training distributions [65, 204, 146, 123, 107]. This issue limits the model's robustness in real-world scenarios.
•Trust. Although FL preserves privacy, its decentralized structure makes it vulnerable: a few compromised clients can poison local updates and skew global training. i) Byzantine Attacks: Clients may send malicious updates by poisoning local data (data poisoning [9, 4]) or tampering with model weights (model poisoning [44, 23, 121]), degrading model accuracy. ii) Backdoor Attacks: Adversaries embed triggers in their updates so the global model misclassifies specific inputs while appearing normal otherwise [193, 17, 170]. Distributed trigger schemes further evade detection by splitting patterns across clients [21, 200].
In high-stakes applications such as medical imaging [131], autonomous driving [3], and fraud detection [190], these threats demand robust defenses and fair reward mechanisms to ensure long-term collaboration. iii) Privacy-Preserving Adaptation: Adapting pretrained models to local tasks (e.g., via federated fine-tuning methods such as LoRA [58]) must preserve data privacy while maintaining robustness under heterogeneous client objectives [93].
•Reasoning. The rapid emergence of large language models (LLMs) and foundation models has made reasoning a central capability in modern AI systems. These models have demonstrated remarkable abilities in structured inference, commonsense reasoning, and multi-hop decision-making across modalities. As such, enabling robust reasoning is now a critical goal for federated learning (FL), especially in vision-centric tasks involving multimodal inputs [187, 56]. A major challenge arises: Distributed Reasoning. FL must support structured, multi-step inference across clients with Non-IID data and limited communication, without overfitting to any single client's distribution [119, 94, 90].
Despite growing interest in adaptation, trust, and reasoning, the absence of a unified evaluation framework limits systematic progress. We address this by introducing a structured benchmark that consolidates these challenges for robust, comparative assessment. As shown in Table 1, prior works often focus on isolated FL challenges, e.g., generalization [226], robustness [108], or fairness [219], without offering unified perspectives. Recent benchmarks [71, 230] also overlook reasoning and fairness trade-offs. In contrast, our benchmark holistically evaluates generalization, adaptation, robustness, fairness, and reasoning, making our contributions threefold:
•We introduce ATR-Bench, a unified benchmark that not only categorizes key federated learning challenges across Adaptation,
Trustworthiness, and Reasoning, but also formalizes task settings, evaluation criteria, and research gaps in current literature.

Table 2: Overview of Key Attributes in Reviewed Techniques for Cross Calibration (see § 2).

Global Neural Network (drawback: linear growth in local computational load):
FedProx [173] (MLSys'20): ℓ2-based constraint on updates
SCAFFOLD [78] (ICML'20): gradient correction via control variates
MOON [142] (CVPR'21): contrastive learning in feature space
FedNTD [45] (NeurIPS'22): decoupled approach to knowledge transfer
FedSeg [122] (CVPR'23): contrastive strategy at pixel-level granularity
GeFL [77] (arXiv'24): aggregates global knowledge across users

Global Statistical Cues (drawback: heavily dependent on comprehensive data diversity):
FedProc [202] (arXiv'21): use of prototype similarity for contrast
HarmoFL [124] (AAAI'22): employs signal amplitude normalization
FedFA [178] (ICLR'23): data augmentation via Gaussian modeling
FPL [61] (CVPR'23): prototype refinement using clustering
FedSB [164] (ICASSP'25): utilizes label smoothing to prevent overfitting

Augmented Architectures (drawback: introduces integration issues and added overhead):
FedMLB [69] (ICML'22): multi-branch architecture for flexibility
FedCGAN [225] (IJCAI'22): GAN-based synthetic data generation
ADCOL [143] (ICML'23): generator that learns client representations
DaFKD [64] (CVPR'23): introduces a discriminator for distillation
CAFA [84] (FGCS'25): leverages computational capacities for local training

Self-Regulated Learning (drawback: hyperparameter tuning instability, risk of forgetting):
FedRS [96] (KDD'21): limits softmax confidence levels
FedAlign [126] (CVPR'22): ensures final-layer stability via Lipschitz constraints
FedSAM [238] (ICML'22): applies sharpness-aware optimization
FedLC [229] (ICML'22): adjusts logits using class-wise probability
FedDecorr [220] (ICLR'23): reduces inter-feature redundancy
FedVR-AL [171] (arXiv'24): variance reduction and adaptation for non-convex optimization

Collaborative Data Sharing (drawback: assumes prior availability of suitable external data):
DC-Adam [141] (CS'21): initial warm-up using pre-distributed data
FEDAUX [41] (TNNLS'21): auxiliary data for pretraining and distillation
ProxyFL [76] (NatureComms'23): shares proxy models across clients
ShareFL [160] (arXiv'23): review on collaborative data sharing in FL
FedSPD [98] (arXiv'24): clustering-based framework enabling consensus for distinct data clusters

Data Augmentation for FL (drawback: may reduce data variety, can cause privacy issues):
FedMix [177] (ICLR'21): mixup of averaged samples across clients
FEDGEN [241] (ICML'21): uses ensemble generators for diversity
FedInverse [188] (ICLR'24): investigates inversion attacks and defenses
FLea [194] (KDD'24): privacy-preserving feature augmentation techniques

Sample Filtering in FL (drawback: risk of unfair exclusion at client/data level):
FedACS [185] (IWQOS'21): detects and excludes poisoned data via clustering
Safe [206] (TII'22): prefers clients with lower distributional skew
FedBalancer [163] (MobiSys'22): prioritizes fair data sampling across devices
Fedrtid [211] (Cybersecurity'24): introduces random client participation and adaptive time constraints

Aggregation Reweighting at Server (drawback: requires thorough dataset quality evaluation):
FEDBE [11] (ICLR'21): uses Bayesian ensembles for aggregation
Elastic [29] (CVPR'23): aggregates via parameter sensitivity interpolation
FFA [31] (arXiv'24): novel metrics that consider client participation and aggregation methods

Server-Side Adaptive Methods (drawback: needs auxiliary data and aligned training objectives):
FedMD [89] (NeurIPS'19): distills from local classifiers on proxy data
FedDF [99] (NeurIPS'20): combines knowledge from diverse client models
FedGKT [52] (NeurIPS'20): shares group knowledge across clients
FedOPT [152] (ICLR'21): adaptive optimization on central server
FCCL [60] (CVPR'22): cross-correlation for representation alignment

•We conduct extensive empirical evaluations covering generalization, adaptation, robustness, and fairness across diverse FL settings. For reasoning, we provide a conceptual outline and literature insights, identifying open challenges due to the absence of mature models and evaluation metrics.
•We highlight
future research directions and consolidate key datasets, tasks, and method trends to guide actionable progress in federated learning deployments.

2 Adaptive Federated Learning

Adaptive Federated Learning tackles generalization and personalization across diverse clients. It balances global performance with client-specific adaptation using techniques like meta-learning and fine-tuning, enabling effective deployment in Non-IID settings such as healthcare and cross-device systems.
Cross Calibration. In the case of the Cross-Client Shift challenge, client data is often distributed in a highly skewed manner, which results in inconsistencies between local training goals. Consequently, each client updates its model based on a distinct local optimum, resulting in divergence of optimization directions. Existing approaches primarily aim to mitigate this divergence by adjusting client updates from three key perspectives, as shown in Table 2.
Client Regularization. Federated methods that seek to align client updates with a shared global objective can be broadly classified into four categories. First, global neural network guidance directly incorporates the aggregated model into each client's local update, either via parameter-sensitivity constraints (e.g., FedProx [173], FedCurv [134], FedDyn [1]) or by penalizing divergence from global predictions (e.g., MOON [142], FedUFO [115]), at the cost of increased computation that scales with model size. Second, global statistical cues approaches construct class-wise summaries (e.g., prototypes [202], Gaussian descriptors [125], spectral signatures [124]) or aggregate feature representations [139] to provide finer-grained guidance, though their reliability depends on the diversity and richness of client data.
Third, augmented architectures introduce supplementary modules such as GAN-based generators [241, 64] or parallel "global" branches [52, 69] to counter client drift, but these often require architectural compatibility and increase communication overhead. Finally, self-regulated learning leverages self-distillation [222] or reweighted loss functions [96, 220] to stabilize local training without extra communication, though their effectiveness can be highly sensitive to hyperparameters, especially under extreme data heterogeneity.
Client Augmentation. To mitigate client data heterogeneity, FL methods can be broadly grouped into three strategies. First, collaborative data sharing exchanges labeled or unlabeled examples or models among clients to promote knowledge transfer. Approaches like DC-Adam [141] and FEDAUX [41] use warm-up phases or auxiliary pretraining, while others like ProxyFL [76] share proxy models to enable indirect data knowledge exchange. ShareFL [160] provides a comprehensive review, and FedSPD [98] enables inter-client clustering to reach consensus among data-similar clients. However, these strategies assume the availability of meaningful and appropriately matched auxiliary data, which may not always be feasible. Second, data augmentation enhances local datasets to simulate more diverse conditions. Methods like FedMix [177] mix local data representations across clients, FEDGEN [241] employs ensemble generators to synthesize informative samples, and FedInverse [188] explores the privacy implications of such augmentations. FLea [194] applies privacy-preserving feature augmentation techniques. While useful, these methods can reduce diversity or inadvertently leak private data through reconstruction or overfitting. Third, sample filtering avoids direct data sharing or augmentation by selecting clients or samples deemed more trustworthy. For example, FedACS [185] and Safe [206] cluster data or prioritize lower-skew clients, respectively. FedBalancer [163] balances fairness by allocating sampling quotas, and Fedrtid [211] introduces random client participation with adaptive timing to reduce resource burden and enhance robustness. However, these methods risk marginalizing clients with less "mainstream" data, undermining fairness.
Server Operation. To better handle heterogeneous client updates, federated learning can adapt aggregation dynamics at the server. One direction is aggregation reweighting, where clients are weighted based on factors beyond static proportions. For instance, FEDBE [11] uses Bayesian ensembling, Elastic [29] reweights updates using gradient sensitivity, and FFA [31] introduces fairness-aware metrics to evaluate participation and aggregation impacts. While these improve personalization and convergence, they rely on costly evaluations of data quality or model variance.

Table 3: Overview of key properties of the evaluated methods for Unknown Generalization (see § 2). The symbols ⋆ and ◦ indicate possible privacy exposure and modifications to the model architecture, respectively.

Federated Domain Adaptation:
FADA [204] (ICLR'20): adversarial alignment; limitation ◦: uses GAN [48]
COPA [50] (ICCV'21): shared encoder, task heads; limitation ◦: needs IBN [203]
AEGR [46] (ICME'23): pseudo-label tuning; limitation ⋆: exposed to PGD [2]
FedGP [25] (ICLR'24): gradient projection aggregation; limitation: requires projection tuning
FedRF-TCA [40] (TKDE'25): random features for efficiency; limitation: may underperform on complex domains

Federated Domain Generalization:
FedDG [146] (CVPR'21): frequency-based sharing; limitation ⋆: reveals amplitude
CCST [12] (WACV'23): client-wise style mixing; limitation ⋆: leaks style cues
CSAC [74] (TKDE'23): semantic layer fusion; limitation ◦: adds attention
FedSB [164] (ICASSP'25): label smoothing and balanced training; limitation: careful tuning of smoothing parameters
FedCGA [105] (ICME'24): global consistent augmentation; limitation: assumes availability of diverse styles
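The client-regularization idea of constraining local updates toward the global model, i.e., the ℓ2 proximal term that FedProx-style methods add to the local objective, can be sketched as follows. The linear model, synthetic data, and hyperparameter values are illustrative assumptions, not the published implementation.

```python
import numpy as np

def proximal_client_update(global_w, X, y, mu=0.0, lr=0.1, steps=10):
    """Local gradient descent with an l2 proximal term: the extra
    mu * (w - global_w) gradient penalizes drift away from the shared
    global model, in the spirit of FedProx-style client regularization."""
    w = global_w.copy()
    for _ in range(steps):
        task_grad = X.T @ (X @ w - y) / len(y)  # local objective gradient
        prox_grad = mu * (w - global_w)         # pull back toward global weights
        w -= lr * (task_grad + prox_grad)
    return w

rng = np.random.default_rng(1)
X, y = rng.normal(size=(40, 4)), rng.normal(size=40)
global_w = np.zeros(4)

w_plain = proximal_client_update(global_w, X, y, mu=0.0)  # unconstrained drift
w_prox = proximal_client_update(global_w, X, y, mu=1.0)   # constrained drift
# A larger mu keeps the local model closer to the global weights.
print(np.linalg.norm(w_prox - global_w) < np.linalg.norm(w_plain - global_w))
```

Raising `mu` trades local fit for alignment with the global objective, which is exactly the divergence-versus-computation trade-off the text attributes to this family of methods.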
A complementary direction is server-side adaptive optimization, where the central model is refined using external data or tailored learning rules. Methods like FedMD [89], FedDF [99], and FedGKT [52] distill knowledge across clients using proxy data. FedOPT [152] adapts server-side optimization rules, while FCCL [60] aligns representations using cross-correlation signals. Though effective, such approaches often require additional datasets and tuned objectives, which may complicate real-world deployment.
Unknown Generalization. Prior studies have shown that deep neural networks often overfit their training data and produce overly confident outputs [10, 6]. We summarize the essential characteristics of various solutions addressing Unknown Generalization in Table 3. Such overconfidence can prove detrimental in practice [27], as even slight distributional shifts between training and deployment data may lead to substantial performance degradation [7, 218]. In federated learning, the majority of the work concentrates on boosting in-distribution accuracy across clients, with limited attention paid to how models generalize to novel, out-of-federation domains [33, 204, 146, 65]. Approaches addressing this gap can be categorized according to when they gain access to out-of-distribution data: Federated Domain Adaptation (FDA) and Federated Domain Generalization (FDG). FDA methods incorporate unlabeled target-domain samples during training to reduce distribution shift, and can be broadly categorized into alignment-based approaches, which enforce feature consistency through contrastive losses [223, 224], knowledge-distillation alignment [55, 237, 199], adversarial adaptation [46], or gradient matching [235, 228], and disentanglement-based methods, which split the model into shared and domain-specific components via adversarial losses [204, 88], multi-expert gating [227], or separate classifiers [
50]. In contrast, FDG seeks to train on heterogeneous client data and generalize directly to unseen domains, using either invariant optimization techniques, such as spectrum alignment [146], style normalization [12], barycenter-based feature fusion [233], or specialized architectural blocks (GANs [114], AdaIN [12], IBN [50]), or invariant aggregation schemes that reweight or calibrate server-side model fusion to balance domain performance [157, 34, 74].

3 Trustworthy Federated Learning

Trustworthy Federated Learning centers on robustness and fairness. Robustness addresses threats from adversarial clients or corrupted updates, while fairness ensures equitable performance across heterogeneous users. Together, they define the trust boundary essential for FL deployment in sensitive domains like healthcare and finance.
Byzantine Tolerance. To guard against Byzantine clients, robust aggregation methods can be grouped into three families: distance-based tolerance, which detects and discards updates that deviate strongly from the group consensus (e.g., Krum [8], FoolsGold [42], FABA [148]); statistical-based tolerance, which applies robust estimators such as the geometric median or trimmed means to filter outliers without tracking individual contributions (e.g., RFA [85], Bulyan [153]); and proxy-based tolerance, which uses a small, clean auxiliary dataset to score and weight client updates by their performance on trusted samples (e.g., Sageflow [70], FLTrust [192]).
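As a concrete instance of the statistical family, a coordinate-wise trimmed mean can be sketched as below. This is a generic illustration in the spirit of the trimmed-mean estimators cited above, not any single paper's exact algorithm; the client values and trim level are made up.

```python
import numpy as np

def trimmed_mean_aggregate(updates, trim_k=1):
    """Per coordinate, drop the trim_k smallest and largest client values,
    then average the remainder: extreme values from Byzantine clients are
    discarded without tracking which client sent them."""
    U = np.sort(np.stack(updates), axis=0)  # shape: (num_clients, num_params)
    return U[trim_k:len(updates) - trim_k].mean(axis=0)

# Four honest clients agree on parameters near 1.0; one Byzantine client
# submits a wildly scaled update to poison the average.
honest = [np.full(3, 1.0 + 0.01 * i) for i in range(4)]
byzantine = np.full(3, 1000.0)
updates = honest + [byzantine]

plain = np.mean(updates, axis=0)          # dragged far from 1.0
robust = trimmed_mean_aggregate(updates)  # stays near 1.0
print(plain[0], robust[0])
```

With `trim_k = 1` and five clients, the aggregate tolerates one arbitrary outlier per coordinate; in general, trimming `k` values per side tolerates up to `k` Byzantine clients, at the cost of discarding some honest signal.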
Similarly, mitigating backdoor attacks has led to three main defense paradigms: post-hoc model sanitization, where the aggregated model is fine-tuned or distilled on clean data to erase backdoors (e.g., FedPurning [20], FedDF [99]); aggregation-time filtering, which extends Byzantine defenses to remove poisoned updates during server aggregation (e.g., DimKrum [240], RLR [136]); and certified defenses, which construct provable guarantees by maintaining multiple model variants or applying randomized smoothing so that small client perturbations cannot alter predictions (e.g., ProvableFL [191], CRFL [22]). Each category trades off different assumptions, computational costs, and requirements for auxiliary data or statistical priors, and their effectiveness can degrade significantly under real-world heterogeneity. Table 4 summarizes the essential characteristics of the Byzantine Tolerance solutions discussed above. Table 4: Key characteristics of the reviewed Byzantine Tolerance solutions as discussed in (§ 3).
Distance-Based Tolerance (Limitation: poor handling of data heterogeneity)
•Multi Krum [8] (NeurIPS'17): selects gradients using the Krum rule.
•FoolsGold [42] (arXiv'18): detects sybils via similarity scores.
•DnC [162] (NDSS'21): uses SVD to isolate abnormal updates.
•RED-FL [54] (GlobConET'23): distance-based weighting of client updates.
•FedWad [151] (ICLR'24): computes Wasserstein distances.
Statistics-Distribution Tolerance (Limitation: depends on strong mathematical assumptions)
•Trim Median [35] (ICML'18): applies a trimmed mean per dimension.
•Bulyan [153] (ICML'18): selects top vectors, aggregates per axis.
•RFA [85] (TSP'22): iterative median via the Weiszfeld approach.
•OPDS-FL [106] (NeurIPS'23): measures data heterogeneity across clients.
•DFL-FS [16] (ICME'24): addresses long-tailed and non-IID data distributions.
•FD-PerFL [117] (NeurIPS'24): feature distributions for personalized federated learning.
Proxy-Dataset Tolerance (Limitation: needs trusted data and client similarity)
•FLTrust [192] (NDSS'21): uses a trusted seed and ReLU score.
•Sageflow [70] (NeurIPS'21): adjusts weights via entropy and loss.
•ProxyZKP [92] (ScientificReports'24): zero-knowledge proofs with polynomial proxy models.
Collaboration Fairness. In
federated learning, fair contribution evaluation is critical to reward clients in proportion to their inputs while respecting data privacy [112, 110]. A common strategy is individualized evaluation, where each client's score is derived from locally available signals such as data acquisition cost [75], economic incentives (e.g., contract theory [68], Stackelberg models [128]), compute bids [172], or performance-based reputations computed via local validation [111] or update divergence from the global model [90]. However, this approach assumes honest reporting and can penalize clients with non-IID or smaller datasets. An alternative is marginal contribution estimation via cooperative game theory, notably Shapley value approximations [161, 43, 207]. Methods like Cosine-Gradient Shapley (CGSV) [207] and FEDCE [67] evaluate each client's impact on model performance, but suffer from exponential complexity and often require auxiliary validation data, limiting their scalability in large-scale federations. Performance Fairness. Performance imbalance in federated learning arises when the global model disproportionately favors clients with abundant or homogeneous data, leaving underrepresented participants with subpar accuracy. To mitigate this, two main classes of methods have emerged: (i) fairness-aware optimization, which embeds fairness constraints directly into each client's local loss, for example min-max formulations such as AFL [127], loss-penalizing schemes like qFFL [174], or multi-objective descent approaches such as FedMGDA [234] and FCFL [18] that uplift the worst-performing clients; and (ii) fair aggregation reweighting, which dynamically adjusts server-side combination weights based on client-level signals (e.g., gradient conflict in FedFV [239] or variance of generalization gaps in FedCE [67, 37]).
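The exponential cost of exact Shapley values, and the sampling workaround that contribution-evaluation methods build on, is easier to see with a small sketch. This Monte Carlo permutation estimator is a generic illustration, not the CGSV or FEDCE implementation; the utility function and per-client values are made up:

```python
import random

def shapley_mc(clients, utility, n_perm=200, seed=0):
    """Monte Carlo Shapley estimate: average each client's marginal utility
    gain over random permutations. Exact Shapley needs all 2^n coalitions;
    sampling permutations sidesteps that exponential cost."""
    rng = random.Random(seed)
    phi = {c: 0.0 for c in clients}
    for _ in range(n_perm):
        perm = clients[:]
        rng.shuffle(perm)
        coalition, prev = set(), utility(frozenset())
        for c in perm:
            coalition.add(c)
            cur = utility(frozenset(coalition))
            phi[c] += cur - prev          # marginal contribution of c
            prev = cur
    return {c: v / n_perm for c, v in phi.items()}

# Hypothetical additive utility: coalition "accuracy" is the sum of fixed
# per-client data values, so the true Shapley value of each client is exactly
# its own value.
value = {"a": 3.0, "b": 1.0, "c": 0.0}
util = lambda S: sum(value[c] for c in S)
phi = shapley_mc(["a", "b", "c"], util)
```

In real federations the utility is validation accuracy of a retrained model, which is why even sampled estimates remain expensive and often need auxiliary data.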
While optimization-based strategies can improve tail accuracy, they often assume honest reporting and can degrade overall utility; reweighting methods reduce skew via stale or auxiliary risk estimates, but incur extra synchronization overhead and may require validation data. 4 Reasonable Federated Learning Reasoning in FL extends beyond explainability, requiring structured, privacy-aware inference across decentralized clients. As FL expands to high-stakes domains (e.g., healthcare, autonomous vehicles), it must evolve from black-box prediction to reasoning-driven learning. Traditional methods like FedAvg [119] struggle with heterogeneity and lack interpretability. To address this, FL must adopt reasoning frameworks that generalize across clients and maintain privacy. Figure 1: Test accuracy over 100 communication rounds on Cifar-10, Cifar-100, MNIST, and Fashion-MNIST datasets under a Dirichlet distribution with β = 0.5. Reasonable and Distributed FL Approaches. We categorize recent advances in reasonable FL into three primary strands: (1) Distillation with Interpretability: this approach augments knowledge distillation by transmitting intermediate representations such as attention maps or saliency scores alongside logits [205]. These explanations align not only outputs but also rationales across clients, improving both interpretability and robustness in the presence of partial, non-IID data.
(2) Explanation-Guided Aggregation: rather than uniformly averaging models, this strategy reweights client updates based on the quality or fidelity of their explanations. For example, rule-based FL systems prioritize IF-THEN rules that remain consistent during global validation, thus preserving diverse reasoning pathways [165], a step toward distributed, structured inference. (3) Symbolic and Graph-Based Models:
These inherently interpretable approaches, such as decision trees, fuzzy-rule systems, and knowledge graph embeddings, allow each client to build localized reasoning systems. The server merges symbolic components (e.g., aligned rules or entity vectors), resulting in global models that are both transparent and adaptive across tasks. Distributed and Privacy-Preserving Reasoning Mechanisms. Beyond explanation, recent approaches aim to enable true reasoning capabilities within the constraints of FL, tackling the challenges of partial observability, communication bottlenecks, and data privacy. (1) Chain-of-Thought Distillation: inspired by LLMs, clients generate multi-step rationales (e.g., token-level traces or logic chains), which are abstracted and aggregated server-side to form unified reasoning processes [47]. This supports distributed inference across diverse clients while avoiding overfitting to any single logic path. (2) Knowledge-Driven Aggregation: clients encode relevant subgraphs from external knowledge bases alongside local data, producing semantically enriched representations. Server-side schema alignment ensures logically valid global inference while enabling domain-adaptive reasoning in heterogeneous environments [15, 231]. (3) Symbolic-Neural Hybrids: these systems co-train interpretable symbolic modules (e.g., rule sets, decision trees) with neural networks. Clients train local hybrids that preserve explicit logic under local constraints, and the server integrates them into global models capable of both symbolic inference and gradient-based learning [172]. (4) Privacy-Preserving Adaptation: many FL scenarios require local adaptation (e.g., via LoRA or adapters) without compromising privacy. Clients fine-tune reasoning behaviors, such as domain-specific CoT or task-specific logic, without revealing raw data [58]. These adaptations are then distilled or merged into the global model, balancing personalization and privacy [93].
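The server-side merging step of privacy-preserving adaptation can be sketched concretely. The following is a minimal numpy sketch of averaging client LoRA factors, assuming the convention ΔW = A·B with A: d×r and B: r×k; real systems may instead average the products or distill them (cf. [58, 93]), and the function name and shapes are illustrative:

```python
import numpy as np

def merge_lora_adapters(adapters, weights=None):
    """Server-side fusion of client LoRA adapters, given as (A, B) pairs of
    low-rank factors. Here we simply take a (optionally weighted) average of
    each factor; only the small adapter matrices ever leave the clients."""
    n = len(adapters)
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, float)
    w = w / w.sum()                               # normalize mixing weights
    A = sum(wi * a for wi, (a, _) in zip(w, adapters))
    B = sum(wi * b for wi, (_, b) in zip(w, adapters))
    return A, B

# Hypothetical round: two clients with rank-2 adapters for a 4x3 weight matrix.
rng = np.random.default_rng(0)
client_adapters = [(rng.normal(size=(4, 2)), rng.normal(size=(2, 3)))
                   for _ in range(2)]
A_glob, B_glob = merge_lora_adapters(client_adapters)
delta_W = A_glob @ B_glob    # low-rank update applied to the frozen base model
```

Averaging factors rather than full weight matrices is what keeps communication low, at the cost of the average of products differing from the product of averages.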
Integrating LLMs into FL enables structured, multimodal reasoning by allowing clients to generate personalized reasoning traces (e.g., domain-specific dialogue or causal inference) via local prompt tuning or fine-tuning. These can be aggregated through federated prompt tuning, adapter fusion, or CoT distillation, enhancing the global model without exposing raw data. However, challenges remain in aligning diverse client reasoning styles, avoiding leakage of sensitive logic, and ensuring consistent global behavior across symbolic, neural, and language-based modules. Models should also avoid exposing full reasoning chains during inference, as revealing these can create data privacy risks. 5 Benchmark Setup Label Skew Datasets. A common approach in current studies to emulate Label Skew scenarios is to partition data with the Dirichlet distribution, denoted Dir(β) (§ A.2.1) [94, 90]. Here, β > 0 is a concentration parameter that dictates the extent of class imbalance: smaller values of β cause a sharper disparity between local and global class distributions, intensifying data heterogeneity among clients. •Cifar-10 [86] contains 50,000 training images and 10,000 validation images of size 32×32 across 10 categories. •Cifar-100 [86] is a well-known image classification dataset of 32×32 images in 100 categories, with 50,000 training and 10,000 validation images. •Tiny-ImageNet [156] is a subset of ImageNet
Table 5: Performance under Label Skew conditions on Cifar-10, Cifar-100, MNIST, and Fashion-MNIST datasets, measured using AU and E (with β = 0.5) as defined in § A.2.1. Bold indicates the highest value, underline marks the second-best, and "/" denotes zero or NaN. See § E.1 for
metric details and § E.3 for further insights.
| Methods | Cifar-10 β=1.0, 0.5, 0.3, 0.1 (E) | Cifar-100 β=1.0, 0.5, 0.3, 0.1 (E) | MNIST β=1.0, 0.5, 0.3, 0.1 (E) | Fashion-MNIST β=1.0, 0.5, 0.3, 0.1 (E) |
| FedAvg [118] | 70.64, 66.96, 63.92, 60.43 (0.354) | 68.47, 69.72, 69.21, 68.92 (0.213) | 99.44, 99.37, 99.13, 98.76 (0.602) | 89.94, 89.87, 83.82, 90.15 (0.462) |
| FedProx [173] | 71.22, 67.16, 64.88, 61.03 (0.423) | 72.37, 70.19, 63.48, 67.4 (0.773) | 99.15, 99.41, 99.32, 98.73 (0.114) | 89.87, 89.97, 88.69, 83.57 (0.524) |
| SCAFFOLD [78] | 70.77, 68.33, 68.34, 60.83 (/) | 71.91, 72.76, 69.82, 68.24 (/) | 99.41, 99.12, 98.95, 96.95 (/) | 89.83, 89.73, 88.32, 81.27 (/) |
| FedNova [184] | 70.94, 67.06, 66.42, 64.05 (/) | 70.12, 67.11, 63.86, 27.91 (/) | 99.42, 99.29, 99.22, 99.88 (/) | 90.20, 89.81, 89.03, 84.39 (/) |
| MOON [142] | 69.73, 68.07, 66.48, 61.71 (0.063) | 71.47, 69.51, 69.09, 65.53 (0.412) | 99.51, 99.36, 99.17, 98.02 (0.324) | 90.52, 90.11, 88.95, 82.92 (0.614) |
| FedRS [96] | 70.14, 66.036, 63.89, 59.47 (0.184) | 69.81, 68.53, 67.32, 67.16 (0.637) | 99.34, 99.33, 99.23, 98.93 (0.333) | 90.01, 89.40, 88.47, 77.54 (0.579) |
| FedDyn [1] | 70.59, 67.80, 64.39, 60.52 (0.488) | 71.48, 71.25, 70.28, 66.81 (0.583) | 99.48, 99.31, 99.10, 98.71 (0.059) | 90.24, 89.97, 88.59, 82.92 (0.533) |
| FedOPT [152] | 70.44, 66.70, 65.95, 63.10 (/) | 69.40, 68.52, 67.57, 67.26 (/) | 99.32, 99.11, 98.92, 98.13 (/) | 90.06, 89.65, 88.79, 83.41 (/) |
| FedProto [168] | 69.75, 65.05, 56.45, 48.74 (0.319) | 70.07, 70.83, 68.32, 67.36 (0.759) | 99.44, 99.26, 99.12, 98.69 (0.323) | 90.17, 90.07, 88.73, 83.26 (0.444) |
| FedNTD [45] | 51.43, 35.06, 37.37, 22.18 (0.647) | 32.48, 28.92, 24.36, 21.21 (0.492) | 85.47, 31.41, 78.87, 30.18 (0.930) | 83.67, 79.23, 70.12, 52.04 (0.782) |
Table 6: Quantitative Domain Skew results in terms of AU, Au, E, and V (defined in § E.3) on Digits, Office Caltech, and PACS. Refer to § 5.1.
Digits:
| Methods | M | U | Svz | Sy | AU | E | V |
| FedAvg [118] | 90.40 | 60.30 | 34.68 | 46.99 | 58.09 | 0.024 | 4.35 |
| FedProx [173] | 95.03 | 63.25 | 34.50 | 44.60 | 59.34 | 0.059 | 5.44 |
| SCAFFOLD [78] | 97.79 | 94.45 | 26.64 | 90.69 | 77.39 | / | 8.93 |
| MOON [142] | 92.78 | 68.11 | 33.36 | 39.28 | 58.36 | 0.287 | 5.72 |
| FedDyn [1] | 88.91 | 60.34 | 34.57 | 50.72 | 58.65 | 0.161 | 4.06 |
| FedOPT [152] | 92.71 | 87.62 | 31.32 | 87.92 | 74.89 | / | 6.37 |
| FedProto [168] | 90.54 | 89.54 | 34.61 | 58.00 | 68.18 | 0.558 | 5.47 |
| FedNTD [45] | 52.31 | 58.07 | 18.03 | 97.29 | 56.43 | 0.800 | 7.90 |
| AFL [127] | 96.58 | 90.72 | 32.90 | 87.56 | 76.94 | 0.64 | 6.57 |
Office Caltech:
| Methods | Am | Ca | D | W | AU | E | V |
| FedAvg [118] | 81.99 | 73.21 | 79.37 | 67.93 | 75.62 | 0.653 | 0.379 |
| FedProx [173] | 85.26 | 75.08 | 84.67 | 75.17 | 80.23 | 0.717 | 0.273 |
| SCAFFOLD [78] | 39.79 | 42.50 | 78.02 | 70.69 | 57.75 | / | 0.281 |
| MOON [142] | 84.42 | 75.98 | 84.67 | 68.97 | 78.51 | 0.678 | 0.539 |
| FedDyn [1] | 84.02 | 72.59 | 77.34 | 68.97 | 75.72 | 0.824 | 0.430 |
| FedOPT [152] | 79.05 | 71.96 | 89.34 | 74.48 | 78.71 | / | 0.480 |
| FedProto [168] | 87.79 | 75.98 | 90.0 | 79.31 | 83.27 | 0.556 | 0.410 |
| FedNTD [45] | 10.95 | 10.89 | 14.67 | 10.34 | 11.71 | 0.911 | 0.601 |
| AFL [127] | 85.33 | 73.79 | 80.21 | 68.93 | 77.06 | 0.775 | 0.517 |
PACS:
| Methods | P | AP | Ct | Sk | AU | E | V |
| FedAvg [118] | 76.09 | 64.19 | 83.50 | 89.40 | 78.30 | 0.279 | 0.911 |
| FedProx [173] | 79.26 | 69.86 | 80.51 | 90.82 | 80.19 | 0.170 | 0.612 |
| SCAFFOLD [78] | 61.95 | 45.44 | 58.87 | 54.64 | 55.25 | / | 0.383 |
| MOON [142] | 74.44 | 64.19 | 83.92 | 89.17 | 77.93 | 0.321 | 0.924 |
| FedDyn [1] | 78.17 | 64.29 | 82.27 | 89.93 | 78.66 | 0.129 | 0.881 |
| FedOPT [152] | 78.66 | 67.66 | 82.41 | 83.68 | 78.12 | / | 0.410 |
| FedProto [168] | 85.63 | 73.69 | 83.57 | 91.14 | 83.51 | 0.540 | 0.411 |
| FedNTD [45] | 16.77 | 18.23 | 28.47 | 93.18 | 39.16 | 0.642 | 9.932 |
| AFL [127] | 85.76 | 72.92 | 83.16 | 87.08 | 82.23 | 0.90 | 0.329 |
The AFL [127] rows correspond to the Performance Fairness framework of § 3.
with 100K images of size 64×64 and 200 classes. •Fashion-MNIST [195] includes 70,000 28×28 grayscale fashion product images in ten categories. Figure 1 illustrates test
accuracy over 100 communication rounds for various federated learning methods on Cifar-10, Cifar-100, MNIST, and Fashion-MNIST under a Dirichlet distribution with β = 0.5. Domain Skew & Out-Client Shift Datasets. Both Domain Skew and Out-Client Shift scenarios involve datasets originating from different domains; the main distinction lies in how evaluation is conducted. In Domain Skew, each client has domain-specific feature variations, as described in § A.2.1. In contrast, Out-Client Shift adopts a leave-one-domain-out evaluation strategy, where one domain is treated as the unseen target client and the remaining domains are used collectively as sources for training. Examples from federated domain datasets are illustrated in Figure 2. •Office Caltech combines samples from the Office dataset and Caltech256 [39], focusing on 10 shared categories across four domains: Amazon (Am), Caltech (Ca), DSLR (D), and Webcam (W). •Digits features handwritten and synthetic digit recognition across four domains: MNIST (M) [214], USPS (U) [63], SVHN (Svz) [130], and SYN (Sy) [80], each with ten digit classes. •Office31 [158] includes 31 object categories commonly seen in office environments, such as monitors, keyboards, and filing cabinets, spread across three domains (Am, D, and W). •PACS comprises four stylistically varied domains: Photo (P), Art Painting (AP), Cartoon (Ct), and Sketch (Sk). 5.1 Adaptation Benchmark Evaluation Metrics. The metric AU, known as Cross-Client Accuracy, evaluates performance in Cross-Client Shift scenarios, covering both Label and Domain Skew settings. We further denote by AO the Out-Client Accuracy under Out-Client Shift, used to evaluate generalizable performance. Results: Federated learning has been extensively explored in various settings, including Label Skew, Domain Skew, and Out-Client Shift. For the Label Skew scenario, we consider four widely used datasets: Cifar-10 [86], Cifar-100 [86], MNIST [214], and Fashion-MNIST [195].
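The Dir(β) label-skew protocol described above can be sketched concretely. This is a hypothetical numpy partitioner in the common style, not the benchmark's exact splitting code; the toy labels stand in for Cifar-10 targets:

```python
import numpy as np

def dirichlet_partition(labels, n_clients, beta, seed=0):
    """Split sample indices across clients with per-class proportions drawn
    from Dir(beta): smaller beta gives each client a more peaked label
    distribution, i.e. stronger Label Skew."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        p = rng.dirichlet(np.full(n_clients, beta))   # class-c share per client
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_idx[client].extend(part.tolist())
    return client_idx

# 10 classes with 100 samples each, split over 5 clients with beta = 0.5,
# matching the heterogeneity level used in Table 5 and Figure 1.
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, n_clients=5, beta=0.5)
```

Every sample lands on exactly one client, and sweeping β in {1.0, 0.5, 0.3, 0.1} reproduces the increasingly skewed settings of the benchmark.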
The performance of ten methods on these datasets is summarized in Table 5, ranging from the foundational FedAvg [118], introduced in 2017, to more recent and sophisticated solutions [45]. For a more detailed comparison, we also provide a visualization of the training curves, illustrating test accuracy trends during training under β = 0.5. For the Domain Skew scenario, we leverage three widely used federated benchmarks: Digits [214, 63, 216, 130], Office Caltech [39, 158], and PACS. As shown in Table 6, methods like SCAFFOLD [78] and FedProto [168] demonstrate relatively competitive
Table 7: Quantitative Byzantine Attack results in terms of AU, AU Byz, and I (§ E.2) on Cifar-10, Fashion-MNIST, MNIST, and USPS. FLTrust and Sageflow utilize SVHN as the proxy dataset. The local optimizer is FedProx [173] with µ = 0.01. See the Byzantine Tolerance comparison in § 5.2. For each dataset (Cifar-10, Fashion-MNIST, MNIST, USPS) and each β ∈ {0.5, 0.3}, the columns report AU Byz at Υ = 0.2, AU Byz at Υ = 0.4, and I.
FedProx [173], benign AU per dataset at β = 0.5 and β = 0.3: Cifar-10 67.16, 64.88; Fashion-MNIST 89.97, 88.69; MNIST 99.41, 99.32; USPS 96.70, 96.69.
Pair Flipping
Multi Krum [8] 50.21 46.85 20.31 46.99 43.91 20.82 82.20 47.59 42.38 80.79 82.51 6.18 10.18 11.35 88.06 10.43 11.35 87.97 50.83 93.52
3.18 93.41 51.11 45.58 Bulyan [153] 46.88 44.06 20.68 10.00 10.00 54.88 82.62 80.76 9.21 78.00 73.57 15.12 97.01 98.18 1.23 93.21 92.13 7.19 93.21 92.13 4.57 86.04 87.20 9.49 Trim Median [35] 51.70 45.77 21.39 19.94 10.67 54.21 84.18 78.09 11.88 81.76 77.89 10.8 98.57 94.62 4.79 93.25 92.90 6.42 94.85 94.33 2.37 91.72 92.05 0.64 FoolsGold [42] 60.09 56.80 10.36 50.81 57.98 6.90 86.97 86.07 3.90 85.65 81.50 7.19 97.25 97.80 1.61 98.05 97.22 2.10 77.69 91.77 4.93 87.90 77.23 19.46 DnC [162] 62.67 58.38 8.78 60.41 59.96 4.92 87.54 87.76 2.21 87.22 88.24 0.45 99.33 99.07 0.34 98.85 98.70 0.62 95.94 95.16 1.54 95.07 95.08 1.61 FLTrust [192] / / / / / / / / / / / / 11.35 11.35 88.06 11.35 78.68 20.64 13.15 13.15 83.55 13.15 13.15 83.54 Sageflow [70] / / / / / / / / / / / / 99.28 99.03 0.38 99.02 98.73 0.59 95.36 94.34 2.36 96.15 95.37 1.32 RFA [85] 66.84 66.31 0.85 62.28 61.54 3.34 89.67 89.73 0.24 88.18 88.73 -0.04 99.12 99.10 0.31 98.97 98.91 0.41 96.12 95.56 1.14 96.30 96.08 0.61 Symmetry Flipping Multi Krum [8] 52.18 46.48 20.68 49.03 50.56 14.32 81.87 85.52 4.45 82.14 81.76 6.93 10.02 91.76 7.65 11.35 92.72 6.60 81.20 93.06 3.64 84.12 93.79 2.90 Bulyan [153] 50.73 38.38 28.78 14.55 27.01 37.87 84.15 82.15 7.82 79.51 74.93 13.76 97.16 97.52 1.89 87.10 91.66 7.66 91.46 89.71 6.99 89.94 87.93 8.76 Trim Median [35] 53.24 49.82 17.34 34.46 39.24 25.64 84.61 84.39 5.58 80.49 81.48 7.21 98.50 98.08 1.33 92.16 96.25 3.07 93.46 92.23 4.47 93.32 93.70 2.99 FoolsGold [42] 61.37 59.34 7.82 58.35 54.97 9.91 69.15 86.30 3.67 82.34 84.27 4.42 98.46 97.77 1.64 95.90 90.45 8.87 83.02 78.07 18.63 75.72 73.92 22.77 DnC [162] 62.57 58.12 9.04 61.94 59.51 5.37 88.15 87.23 12.74 86.33 87.83 0.86 99.31 98.99 0.42 98.63 98.63 0.69 95.86 94.70 2.00 94.98 93.64 3.05 FLTrust [192] / / / / / / / / / / / / 11.35 70.09 29.32 11.35 67.29 32.03 60.41 52.83 43.87 59.31 13.15 83.54 Sageflow [70] / / / / / / / / / / / / 98.86 98.75 0.66 98.51 98.31 1.01 94.08 92.32 4.38 95.33 92.93 3.76 RFA [85] 
63.43 61.67 5.49 62.78 60.13 4.75 89.44 88.30 11.67 87.73 87.49 1.20 99.00 99.06 0.35 98.78 98.65 0.67 95.80 94.57 2.13 95.98 95.47 1.22 Random Noise Multi Krum [8] 10.00 13.06 54.1 29.25 14.11 50.77 10.00 21.71 68.26 75.55 25.60 63.09 11.35 13.42 85.99 11.35 21.04 78.28 89.25 15.07 81.63 13.15 26.79 69.90 Bulyan [153] 51.04 51.34 15.82 42.09 49.29 15.59 82.70 87.24 2.73 81.70 86.43 2.26 98.74 98.63 0.78 91.95 98.32 1.00 94.27 94.51 2.19 92.59 95.34 1.35 Trim Median [35] 53.87 51.92 15.24 50.24 50.21 14.67 85.94 85.66 4.31 82.32 85.61 3.08 98.86 98.85 0.56 94.36 98.18 1.14 94.80 13.15 83.55 95.66 95.59 1.10 FoolsGold [42] 50.01 32.85 34.31 49.60 27.45 37.43 85.98 35.82 54.15 76.86 83.58 5.11 98.46 37.62 61.79 87.91 78.90 20.42 85.36 22.55 74.15 54.10 55.92 40.77 DnC [162] 59.64 56.95 10.21
60.00 56.45 8.43 87.81 87.72 2.25 87.26 87.66 1.03 99.31 98.97 0.44 98.78 98.85 0.47 95.73 94.60 2.10 95.31 94.28 2.41 FLTrust [192] / / / / / / / / / / / / 11.35 11.35 88.06 11.35 11.35 87.97 36.53 13.15 83.55 13.15 13.15 83.54 Sageflow [70] / / / / / / / / / / / / 98.76 96.75 2.66 93.14 89.85 9.47 92.40 78.20 18.50 86.02 75.63 21.06 RFA [85] 56.37 10.64 56.52 55.88 15.45 49.43 87.11 64.10 25.87 85.32 72.30 16.39 99.15 95.40 4.01 98.26 94.01 5.31 94.67 67.49 29.21 95.35 53.08 43.61 Min-Sum Multi Krum [8] 10.00 10.90 56.26 42.20 10.02 54.86 10.00 11.02 78.95 80.78 10.00 78.69 11.35 23.17 76.24 10.43 11.35 87.97 13.15 15.96 80.74 13.15 13.15 83.54 Bulyan [153] 51.49 51.00 16.16 42.99 40.07 24.81 84.64 85.84 4.13 80.23 84.21 4.48 98.60 94.38 5.03 92.40 90.14 9.18 94.88 85.91 10.79 92.91 93.36 3.33 Trim Median [35] 53.62 53.71 13.45 49.58 51.76 13.12 84.64 85.71 4.26 83.24 85.41 3.28 98.77 98.76 0.65 96.80 92.90 6.42 95.12 95.75 0.95 94.22 95.45 1.24 FoolsGold [42] 52.26 10.00 57.16 47.83 10.00 54.88 80.58 14.80 75.17 80.20 19.36 69.33 97.18 16.87 82.54 98.71 97.22 2.10 69.49 15.04 81.66 64.16 13.12 83.57 DnC [162] 61.11 55.52 11.84 60.29 55.83 9.05 87.63 87.80 2.17 87.25 88.01 0.68 99.19 99.20 0.21 98.80 98.70 0.62 95.34 94.51 2.19 94.93 95.35 1.34 FLTrust [192] / / / / / / / / / / / / 61.57 12.99 86.42 11.35 11.35 87.97 13.15 15.04 81.66 13.15 14.09 82.60 Sageflow [70] / / / / / / / / / / / / 98.59 92.85 6.56 92.30 85.01 14.31 87.07 14.09 82.61 81.95 50.59 46.1 RFA [85] 51.90 11.40 55.76 60.29 14.22 50.66 87.40 22.83 67.14 85.71 61.18 27.51 99.05 94.39 5.02 98.80 98.91 0.41 94.65 71.23 25.47 94.93 57.83 38.86 performance across these datasets. In the Out-Client Shift setting, we evaluate Federated Domain Adaptation(FDA) and Federated Domain Generalization paradigms. FDA leverages unlabeled target distributions during training, improving Out-Client Accuracy . 
For example, KD3A achieves 67.16 accuracy on Office Caltech, demonstrating strong generalization to unseen domains. 5.2 Trustworthiness Benchmark Evaluation Metrics for Robustness. AU Byz denotes the test accuracy under Byzantine Attack conditions, and the Accuracy Decline Impact I quantifies the drop in performance relative to standard (benign) federated learning. Likewise, the Attack Success Rate Ru measures model behavior on datasets affected by backdoor attacks. Results: Table 7 summarizes the experimental outcomes for various Byzantine Tolerance strategies under Byzantine Attack scenarios. The evaluation covers four widely used datasets: Cifar-10, Fashion-MNIST, MNIST, and USPS. We examine two categories of data poisoning attacks, specifically Data-Based Byzantine Attack techniques: Pair Flipping and Symmetry Flipping. Additionally, we investigate two model poisoning approaches under Model-Based Byzantine Attack, namely Random Noise and Min-Sum. The selected Byzantine Tolerance approaches fall into three categories: Distance-Based Tolerance, Statistics-Distribution Tolerance, and Proxy-Dataset Tolerance. Among them, DnC demonstrates comparatively strong resilience across all attack types. In contrast, methods under the Proxy-Dataset Tolerance category exhibit notable limitations,