[3] L. Chen, J. Li, X. Dong, P. Zhang, C. He, J. Wang, F. Zhao, and D. Lin, “Sharegpt4v: Improving large multi-modal models with better captions,” in European Conference on Computer Vision. Springer, 2024, pp. 370–387.
[4] H. Liu, C. Li, Q. Wu, and Y. J. Lee, “Visual instruction tuning,” Advances in Neural Information Processing Systems, vol. 36, pp. 34892–34916, 2023.
[5] H. Liu, C. Li, Y. Li, and Y. J. Lee, “Improved baselines with visual instruction tuning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 26296–26306.
[6] W. Dai, J. Li, D. Li, A. M. H. Tiong, J. Zhao, W. Wang, B. Li, P. Fung, and S. Hoi, “Instructblip: Towards general-purpose vision-language models with instruction tuning,” 2023. [Online]. Available: https://arxiv.org/abs/2305.06500
[7] J. Li, D. Li, S. Savarese, and S. Hoi, “Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models,” in International Conference on Machine Learning. PMLR, 2023, pp. 19730–19742.
[8] J. Bai, S. Bai, Y. Chu, Z. Cui, K. Dang, X. Deng, Y. Fan, W. Ge, Y. Han, F. Huang et al., “Qwen technical report,” arXiv preprint arXiv:2309.16609, 2023.
[9] P. Wang, S. Bai, S. Tan, S. Wang, Z. Fan, J. Bai, K. Chen, X. Liu, J. Wang, W. Ge et al., “Qwen2-vl: Enhancing vision-language model’s perception of the world at any resolution,” arXiv preprint arXiv:2409.12191, 2024.
[10] Y. Li, Y. Du, K. Zhou, J. Wang, W. X. Zhao, and J.-R. Wen, “Evaluating object hallucination in large vision-language models,” arXiv preprint arXiv:2305.10355, 2023.
[11] Z. Bai, P. Wang, T. Xiao, T. He, Z. Han, Z. Zhang, and M. Z. Shou, “Hallucination of multimodal large language models: A survey,” arXiv preprint arXiv:2404.18930, 2024.
[12] H. Liu, W. Xue, Y. Chen, D. Chen, X. Zhao, K. Wang, L. Hou, R. Li, and W. Peng, “A survey on hallucination in large vision-language models,” arXiv preprint arXiv:2402.00253, 2024.
[13] Y. Zhou, C. Cui, J. Yoon, L. Zhang, Z. Deng, C. Finn, M. Bansal, and H. Yao, “Analyzing and mitigating object hallucination in large vision-language models,” arXiv preprint arXiv:2310.00754, 2023.
[14] R. Rafailov, A. Sharma, E. Mitchell, C. D. Manning, S. Ermon, and C. Finn, “Direct preference optimization: Your language model is secretly a reward model,” in Advances in Neural Information Processing Systems, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, Eds., vol. 36. Curran Associates, Inc., 2023, pp. 53728–53741. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2023/file/a85b405ed65c6477a4fe8302b5e06ce7-Paper-Conference.pdf
[15] T. Yu, Y. Yao, H. Zhang, T. He, Y. Han, G. Cui, J. Hu, Z. Liu, H.-T. Zheng, M. Sun et al., “Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 13807–13816.
[16] T. Yu, H. Zhang, Y. Yao, Y. Dang, D. Chen, X. Lu, G. Cui, T. He, Z. Liu, T.-S. Chua et al., “Rlaif-v: Aligning mllms through open-source ai feedback for super gpt-4v trustworthiness,” arXiv preprint arXiv:2405.17220, 2024.
[17] Y. Xie, G. Li, X. Xu, and M.-Y. Kan, “V-dpo: Mitigating hallucination in large vision language models via vision-guided direct preference optimization,” arXiv preprint arXiv:2411.02712, 2024.
[18] P. Sarkar, S. Ebrahimi, A. Etemad, A. Beirami, S. Ö. Arık, and T. Pfister, “Data-augmented phrase-level alignment for mitigating object hallucination,” arXiv preprint arXiv:2405.18654, 2024.
[19] F. Wang, W. Zhou, J. Y. Huang, N. Xu, S. Zhang, H. Poon, and M. Chen, “mdpo: Conditional preference optimization for multimodal large language models,” in Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024, pp. 8078–8088.
[20] S. Wu, F.-Y. Sun, K. Wen, and N. Haber, “Symmetrical visual contrastive optimization: Aligning vision-language models with minimal contrastive images,” arXiv preprint arXiv:2502.13928, 2025.
[21] J. Fu, S. Huangfu, H. Fei, X. Shen, B. Hooi, X. Qiu, and S.-K. Ng, “Chip: Cross-modal hierarchical direct preference optimization for multimodal llms,” 2025. [Online]. Available: https://arxiv.org/abs/2501.16629
[22] Y. Zhou, C. Cui, R. Rafailov, C. Finn, and H. Yao, “Aligning modalities in vision large language models via preference fine-tuning,” arXiv preprint arXiv:2402.11411, 2024.
[23] J. Lu, J. Li, Y. Gao, J. Wu, J. Wu, X. Wang, and X. He, “Adavip: Aligning multi-modal llms via adaptive vision-enhanced preference optimization,” arXiv preprint arXiv:2504.15619, 2025.
[24] S. Guo, B. Zhang, T. Liu, T. Liu, M. Khalman, F. Llinares, A. Rame, T. Mesnard, Y. Zhao, B. Piot et al., “Direct language model alignment from online ai feedback,” arXiv preprint arXiv:2402.04792, 2024.
[25] C. Zhang, C. Zhang, M. Zhang, and I. S. Kweon, “Text-to-image diffusion models in generative ai: A survey,” arXiv preprint arXiv:2303.07909, 2023.
[26] Q. Yu, Z. Zhang, R. Zhu, Y. Yuan, X. Zuo, Y. Yue, T. Fan, G. Liu, L. Liu, X. Liu et al., “Dapo: An open-source llm reinforcement learning system at scale,” arXiv preprint arXiv:2503.14476, 2025.
[27] J. Yan, Y. Li, Z. Hu, Z. Wang, G. Cui, X. Qu, Y. Cheng, and Y. Zhang, “Learning to reason under off-policy guidance,” arXiv preprint arXiv:2504.14945, 2025.
[28] Z. Yang, X. Luo, D. Han, Y. Xu, and D. Li, “Mitigating hallucinations in large vision-language models via dpo: On-policy data hold the key,” arXiv preprint arXiv:2501.09695, 2025.
[29] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, W. Chen et al., “Lora: Low-rank adaptation of large language models,” ICLR, vol. 1, no. 2, p. 3, 2022.
[30] Z. Shao, P. Wang, Q. Zhu, R. Xu, J. Song, X. Bi, H. Zhang, M. Zhang, Y. Li, Y. Wu et al., “Deepseekmath: Pushing the limits of mathematical reasoning in open language models,” arXiv preprint arXiv:2402.03300, 2024.
[31] Z. Zhao, B. Wang, L. Ouyang, X. Dong, J. Wang, and C. He, “Beyond hallucinations: Enhancing lvlms through hallucination-aware direct preference optimization,” arXiv preprint arXiv:2311.16839, 2023.
[32] Z. Sun, S. Shen, S. Cao, H. Liu, C. Li, Y. Shen, C. Gan, L. Gui, Y.-X. Wang, Y. Yang et al., “Aligning large multimodal models with factually augmented rlhf,” in Findings of the Association for Computational Linguistics: ACL 2024, 2024, pp. 13088–13110.
[33] E. Amirloo, J.-P. Fauconnier, C. Roesmann, C. Kerl, R. Boney, Y. Qian, Z. Wang, A. Dehghan, Y. Yang, Z. Gan et al., “Understanding alignment in multimodal llms: A comprehensive study,” arXiv preprint arXiv:2407.02477, 2024.
[34] J. Wang, Y. Wang, G. Xu, J. Zhang, Y. Gu, H. Jia, J. Wang, H. Xu, M. Yan, J. Zhang et al., “Amber: An llm-free multi-dimensional benchmark for mllms hallucination evaluation,” arXiv preprint arXiv:2311.07397, 2023.
[35] A. Rohrbach, L. A. Hendricks, K. Burns, T. Darrell, and K. Saenko, “Object hallucination in image captioning,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018, pp. 4035–4045.
[36] xAI, “Grok-1.5 vision preview,” April 2024, accessed: 2024-12-12. [Online]. Available: https://x.ai/blog/grok-1.5v
[37] A. Singh, V. Natarajan, M. Shah, Y. Jiang, X. Chen, D. Batra, D. Parikh, and M. Rohrbach, “Towards vqa models that can read,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 8317–8326.
[38] S. Tong, E. Brown, P. Wu, S. Woo, M. Middepogu, S. C. Akula, J. Yang, S. Yang, A. Iyer, X. Pan, A. Wang, R. Fergus, Y. LeCun, and S. Xie, “Cambrian-1: A fully open, vision-centric exploration of multimodal llms,” in Advances in Neural Information Processing Systems, A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, Eds., vol. 37. Curran Associates, Inc., 2024, pp. 87310–87356. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2024/file/9ee3a664ccfeabc0da16ac6f1f1cfe59-Paper-Conference.pdf
[39] L. Chen, J. Li, X. Dong, P. Zhang, Y. Zang, Z. Chen, H. Duan, J. Wang, Y. Qiao, D. Lin et al., “Are we on the right way for evaluating large vision-language models?” arXiv preprint arXiv:2403.20330, 2024.
[40] W. Xiao, Z. Huang, L. Gan, W. He, H. Li, Z. Yu, F. Shu, H. Jiang, and L. Zhu, “Detecting and mitigating hallucination in large vision language models via fine-grained ai feedback,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, no. 24, 2025, pp. 25543–25551.
[41] H. Duan, J. Yang, Y. Qiao, X. Fang, L. Chen, Y. Liu, X. Dong, Y. Zang, P. Zhang, J. Wang et al., “Vlmevalkit: An open-source toolkit for evaluating large multi-modality models,” in Proceedings of the 32nd ACM International Conference on Multimedia, 2024, pp. 11198–11201.
[42] T. Guan, F. Liu, X. Wu, R. Xian, Z. Li, X. Liu, X. Wang, L. Chen, F. Huang, Y. Yacoob et al., “Hallusionbench: An advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 14375–14385.
[43] Z. Liu, H. Liang, X. Huang, W. Xiong, Q. Yu, L. Sun, C. Chen, C. He, B. Cui, and W. Zhang, “Synthvlm: High-efficiency and high-quality synthetic data for vision language models,” arXiv preprint arXiv:2407.20756, 2024.
[44] S. Sharifzadeh, C. Kaplanis, S. Pathak, D. Kumaran, A. Ilic, J. Mitrovic, C. Blundell, and A. Banino, “Synth2: Boosting visual-language models with synthetic captions and image embeddings,” arXiv preprint arXiv:2403.07750, 2024.
[45] Z. Li, R. Luo, J. Zhang, M. Qiu, X. Huang, and Z. Wei, “Vocot: Unleashing visually grounded multi-step reasoning in large multi-modal models,” arXiv preprint arXiv:2405.16919, 2024.
[46] B. Safaei and V. M. Patel, “Active learning for vision-language models,” in 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). IEEE, 2025, pp. 4902–4912.
[47] Y. Ouali, A. Bulat, B. Martinez, and G. Tzimiropoulos, “Clip-dpo: Vision-language models as a source of preference for fixing hallucinations in lvlms,” in European Conference on Computer Vision. Springer, 2024, pp. 395–413.
[48] S. Yan, M. Bai, W. Chen, X. Zhou, Q. Huang, and L. E. Li, “Vigor: Improving visual grounding of large vision language models with fine-grained reward modeling,” in European Conference on Computer Vision. Springer, 2024, pp. 37–53.
[49] M. Kim, M. Kim, J. Bae, S. Choi, S. Kim, and B. Chang, “Exploiting semantic reconstruction to mitigate hallucinations in vision-language models,” in European Conference on Computer Vision. Springer, 2024, pp. 236–252.

A Evaluation

A.1 Hallucination Reduction Index

A.1.1 Metric Design

HRI is an aggregate improvement metric across five different benchmarks. Simply summing the raw scores from each benchmark would not be a reasonable or rigorous approach, as the metrics are not directly comparable. Therefore, we calculate the improvement ratio for each benchmark based on its potential improvement range, effectively converting the raw metric gains into an additive proportion of improvement. Furthermore, we employ a conservative aggregation method to avoid overestimating the effectiveness of our approach.

Let $a_i$, $i \in \{1,2,3,4,5\}$, denote $F1_{\text{AMB-gen}}$, $\text{Score}_{\text{MMHal}}$, $F1_{\text{ObjectHal}}$, $LV_{\text{score}}$, and $F1_{\text{AMB-dis}}$, respectively, i.e., the results on each benchmark; the superscript "base" denotes the performance of the baseline model, and "ref" denotes the chosen reference performance. HRI is then calculated as:

\[ \mathrm{HRI} = 2 \times \sum_{i=1}^{5} \frac{a_i - a_i^{\text{base}}}{a_i^{\text{ref}} - a_i^{\text{base}}} \tag{7} \]
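As a concrete illustration, the following is a minimal Python sketch of Eq. (7); the function and the benchmark keys are our own naming rather than released code, and the example values are the 7B baseline/reference numbers listed in Section A.1.2 below.

```python
def hri(scores, base, ref):
    """Hallucination Reduction Index (Eq. 7).

    scores, base, ref: dicts mapping benchmark -> metric value, where
    `base` holds the baseline model's results and `ref` the reference
    results that define each benchmark's improvement range.
    """
    return 2 * sum((scores[k] - base[k]) / (ref[k] - base[k]) for k in base)

# 7B baseline and reference values from Section A.1.2:
base = {"F1_AMB-gen": 65.01, "Score_MMHal": 1.90, "F1_ObjectHal": 72.40,
        "LV_score": 57.20, "F1_AMB-dis": 85.5}
ref = {"F1_AMB-gen": 67.12, "Score_MMHal": 2.65, "F1_ObjectHal": 74.18,
       "LV_score": 60.90, "F1_AMB-dis": 87.4}
print(hri(ref, base, ref))  # a model that matches every reference scores 10.0
```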
A.1.2 Main Results

For the 7B model, we set the reference performance to OViP$_{2\text{ep}}$, which gives:

\[ \mathrm{HRI} = 2 \times \left( \frac{a_1 - 65.01}{67.12 - 65.01} + \frac{a_2 - 1.90}{2.65 - 1.90} + \frac{a_3 - 72.40}{74.18 - 72.40} + \frac{a_4 - 57.20}{60.90 - 57.20} + \frac{a_5 - 85.5}{87.4 - 85.5} \right) \]

For the 13B model, we also use OViP$_{2\text{ep}}$ as the reference, except for the ObjectHal benchmark, which almost all methods fail to improve; there we set the reference performance to 79.0:

\[ \mathrm{HRI} = 2 \times \left( \frac{a_1 - 65.99}{68.98 - 65.99} + \frac{a_2 - 2.24}{2.57 - 2.24} + \frac{a_3 - 76.73}{79.00 - 76.73} + \frac{a_4 - 62.60}{67.90 - 62.60} + \frac{a_5 - 89.1}{90.2 - 89.1} \right) \]

A.1.3 Ablation Study: Loss Functions

No single method surpasses the others significantly, so we take the best performance on each benchmark as its reference performance:

\[ \mathrm{HRI} = 2 \times \left( \frac{a_1 - 65.01}{68.57 - 65.01} + \frac{a_2 - 1.90}{2.70 - 1.90} + \frac{a_3 - 72.40}{74.14 - 72.40} + \frac{a_4 - 57.20}{64.10 - 57.20} + \frac{a_5 - 85.5}{87.20 - 85.5} \right) \]

A.1.4 Ablation Study: Online and Offline

Same as the main results; the 7B formula from Section A.1.2 applies.

A.1.5 Further Study

Same as the main results; the 7B formula from Section A.1.2 applies.

A.2 Bad Cases

A.2.1 MMHal

As shown in Figure 7, the original evaluation protocol uses the text-only gpt-4-turbo-2024-04-09 to evaluate model responses. Since this judge has no access to the input image and can only infer from the given image contents and ground truth, it produces many incorrect judgments. We replace it with gpt-4o-2024-05-13, which yields more accurate assessments.
A.2.2 AMBER-generative & ObjectHal

AMBER uses an automatic method for detecting hallucinated entities that relies primarily on pre-defined hallucination words. ObjectHal introduces an LLM to extract the mentioned entities; its metrics are essentially the same as AMBER's. Figure 8 illustrates several cases of misjudgment in AMBER. Since the score is determined solely by the presence of specific predefined words rather than by actual semantic correctness, the hallucination rate (CHAIR score) is often overestimated. Moreover, this issue becomes more pronounced as the diversity and informativeness of model responses increase. Many methods achieve large improvements in the CHAIR score (entity-wise hallucination rate), but often at the cost of a significant decrease in the cover rate (completeness and informativeness). Figure 9 provides an example of this information-deficit phenomenon, which should also be considered when evaluating model performance.

Figure 7: A text-only LLM cannot correctly judge the response.
Question: How many people are there in the image?
Model response: There are four people in the image: a man, a woman, and two children.
Ground truth: There are four people in the image.
gpt-4-turbo-2024-04-09: The LMM's response claims that there are four people in the image, which matches the standard human-generated answer. However, the LMM further specifies the composition of these four people as "a man, a woman, and two children." This additional detail is not supported by the image contents provided, which only mention a "Girl" and do not specify any other individuals or their roles/ages. Therefore, the LMM's response includes information that is not present or implied in the image, leading to a hallucination. Explanation: The response is informative in terms of the number of people, but it hallucinates details about their specific identities (man, woman, two children) which are not mentioned in the image contents. Rating: 1, somewhat informative, with hallucination.
gpt-4o-2024-05-13: The LMM's response correctly identifies the number of people in the image, which is four. It also provides additional information by specifying the likely roles of the individuals (a man, a woman, and two children). This additional detail is accurate and does not introduce any false claims or hallucinations. The response is informative and provides a good analysis without any hallucination. Rating: 6, very informative with good analysis or reasoning, no hallucination.

Figure 8: Rule-based extraction leads to misjudgments to some extent.
Case 1 model description: The image features a dog running on a sandy beach near the water. The dog is positioned towards the center of the scene, with its legs spread out as it moves across the sand. The beach appears to be relatively empty, with no other people or animals visible in the image. The sky above the beach is cloudy, adding a sense of depth to the scene.
Annotated correct and wrong words. Truth: sky, cloud, sea, beach, dog, mountain. Hallucination: lounge, umbrella, tree, person, seagull.
Case 2 model description: The image captures a woman running on a sandy beach. She is wearing a white top and appears to be enjoying her run. The woman's shadow is visible on the sand, indicating that the sun is shining and casting light on the scene. The beach setting provides a serene and picturesque backdrop for her activity.
Annotated correct and wrong words. Truth: woman, beach. Hallucination: sky, sun, sea, dog, lounge.

Figure 9: OPA-DPO fails to mention the man, a deficiency that is captured by the "Cover" metric but often overlooked in previous evaluations; "vehicles" is incorrectly identified as a hallucination word. Failing to mention the main entity in the generated description is also a form of hallucination.
OPA-DPO: The image features a stop sign prominently displayed against a backdrop of a blue wall. The stop sign is positioned on the left side of the frame, drawing attention to it as the main subject. The blue wall behind the sign adds a contrasting element to the scene, creating visual interest. The overall composition of the image suggests it might be capturing a moment in an urban or suburban setting where traffic regulations are important.
OViP: The image features a man walking down a street, passing by a stop sign. The stop sign is positioned on the left side of the scene, and the man is walking towards it. The street appears to be empty, with no other people or vehicles visible in the image. The man is wearing a striped shirt, and his pants are rolled up. The scene is set against a blue wall, which adds a vibrant touch to the otherwise quiet street.
B Experiments

B.1 Loss Functions

The base image loss $\mathcal{L}^{\text{base}}_{\text{Image}}$ is analogous to the DPO loss, with the response pair replaced by an image pair:

\[ \mathcal{L}^{\text{base}}_{\text{Image}}(I^+, I^-; Q, A^+) = -\log\sigma\left(\beta \cdot \left(\log\frac{\pi_\theta(A^+ \mid I^+, Q)}{\pi_{\text{ref}}(A^+ \mid I^+, Q)} - \log\frac{\pi_\theta(A^+ \mid I^-, Q)}{\pi_{\text{ref}}(A^+ \mid I^-, Q)}\right)\right) \]

The symmetrical image loss $\mathcal{L}_{\text{Image-Sym}}$ treats the negative image and the negative response as a correct pair, and then computes the image loss with the negative response and image taking the positive role:

\[ \begin{aligned}
\mathcal{L}_{\text{Image-Sym}}(I^+, I^-, A^+, A^-; Q) ={}& \mathcal{L}_{\text{Image}}(I^+, I^-; Q, A^+) + \mathcal{L}_{\text{Image}}(I^-, I^+; Q, A^-) \\
={}& -\log\sigma\left(\beta_1\left(\log\frac{\pi_\theta(A^+ \mid I^+, Q)}{\pi_{\text{ref}}(A^+ \mid I^+, Q)} - \log\frac{\pi_\theta(A^+ \mid Q)}{\pi_{\text{ref}}(A^+ \mid Q)}\right) + \beta_2\left(\log\frac{\pi_\theta(A^+ \mid Q)}{\pi_{\text{ref}}(A^+ \mid Q)} - \log\frac{\pi_\theta(A^+ \mid I^-, Q)}{\pi_{\text{ref}}(A^+ \mid I^-, Q)}\right)\right) \\
&-\log\sigma\left(\beta_1\left(\log\frac{\pi_\theta(A^- \mid I^-, Q)}{\pi_{\text{ref}}(A^- \mid I^-, Q)} - \log\frac{\pi_\theta(A^- \mid Q)}{\pi_{\text{ref}}(A^- \mid Q)}\right) + \beta_2\left(\log\frac{\pi_\theta(A^- \mid Q)}{\pi_{\text{ref}}(A^- \mid Q)} - \log\frac{\pi_\theta(A^- \mid I^+, Q)}{\pi_{\text{ref}}(A^- \mid I^+, Q)}\right)\right)
\end{aligned} \]

The anchor loss $\mathcal{L}_{\text{Anchor}}$ directly enforces a higher probability for the positive response, for intuitively better optimization results:

\[ \mathcal{L}_{\text{Anchor}}(A^+, A^-; Q, I^+) = -\log\sigma\left(\beta \cdot \log\frac{\pi_\theta(A^+ \mid I^+, Q)}{\pi_{\text{ref}}(A^+ \mid I^+, Q)}\right) \]

The bi-directional anchor loss $\mathcal{L}_{\text{Bi-Anchor}}$ not only supervises the positive response, but also pushes the probability of the negative response lower:

\[ \mathcal{L}_{\text{Bi-Anchor}}(A^+, A^-; Q, I^+) = -\log\sigma\left(\beta \cdot \log\frac{\pi_\theta(A^+ \mid I^+, Q)}{\pi_{\text{ref}}(A^+ \mid I^+, Q)}\right) + \log\sigma\left(\beta \cdot \log\frac{\pi_\theta(A^- \mid Q)}{\pi_{\text{ref}}(A^- \mid Q)}\right) \]
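For concreteness, here is a minimal PyTorch-style sketch of three of these objectives, assuming each argument is a per-sequence log-probability (summed over response tokens) precomputed under the policy and the frozen reference model; the function and variable names are ours, not from a released implementation.

```python
import torch.nn.functional as F

def base_image_loss(pi_pos, ref_pos, pi_neg, ref_neg, beta=0.1):
    """DPO-style loss over an image pair (I+, I-) for a fixed response A+.

    Each argument is log p(A+ | image, Q) under the policy (pi_*)
    or the frozen reference model (ref_*).
    """
    margin = (pi_pos - ref_pos) - (pi_neg - ref_neg)
    return -F.logsigmoid(beta * margin)

def anchor_loss(pi_pos, ref_pos, beta=0.1):
    """Directly raise the probability of the positive response given I+."""
    return -F.logsigmoid(beta * (pi_pos - ref_pos))

def bi_anchor_loss(pi_pos, ref_pos, pi_neg_noimg, ref_neg_noimg, beta=0.1):
    """Additionally push down the negative response's (image-free) probability."""
    return (-F.logsigmoid(beta * (pi_pos - ref_pos))
            + F.logsigmoid(beta * (pi_neg_noimg - ref_neg_noimg)))
```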
B.2 Settings

By default, we use the following settings.

Software infrastructure. In our implementation, we deploy the non-training LLM and diffusion models as services using FastAPI. During training, the system interacts with these services via API calls to obtain feedback, image prompts, and the paths to generated images.

Models. The LLM used for judging responses and producing image-generation prompts is Qwen2.5-7B-Instruct (https://huggingface.co/Qwen/Qwen2.5-7B-Instruct). The diffusion model for image generation is FLUX.1-dev (https://huggingface.co/black-forest-labs/FLUX.1-dev).

Sampling and Filter. Scores range from 0 to 10, where 10 means a perfect response and 0 a totally incorrect response. We sample 16 responses per query and set the lower-bound margin δ to 3. Moreover, the quality-criterion coefficients are τ_pos = τ_neg = 5, which means the score of a positive response must be at least 6 and that of a negative response at most 4. The temperature of the LLM scorer is 0.1. The pair-construction rule implied by these thresholds is sketched after the settings below.

Image Generation. For image-prompt generation, we set the model's temperature to 0.1, top_p to 0.9, and max_new_tokens to 128. We generate a 384×384 image from the prompt with num_inference_steps = 40 and guidance_scale = 7.5.
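The following is a minimal sketch of the sampling-and-filter rule described above and used in the pseudocode of Table 4; the greedy pairing strategy here is our simplified illustration under the stated thresholds, not the exact released implementation.

```python
import statistics

def build_pairs(responses, scores, delta=3.0, tau_pos=5.0, tau_neg=5.0):
    """Form contrastive (positive, negative) response pairs from k samples.

    A pair is kept when the score gap exceeds the adaptive margin and both
    sides satisfy the quality criteria (score > tau_pos for positives,
    score < tau_neg for negatives). With the defaults, integer scores must
    be >= 6 for positives and <= 4 for negatives.
    """
    sigma = statistics.pstdev(scores)
    margin = max(delta, 2 * sigma)  # per-query adaptive margin
    pool = sorted(zip(scores, responses), reverse=True)
    pairs = []
    while len(pool) >= 2:
        (r_pos, a_pos), (r_neg, a_neg) = pool[0], pool[-1]
        if r_pos - r_neg > margin and r_pos > tau_pos and r_neg < tau_neg:
            pairs.append((a_pos, a_neg))
            pool = pool[1:-1]  # remove both from the candidate pool
        else:
            break
    return pairs
```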
Table 4: OViP pseudocode.

Algorithm 1: OViP
Input: training dataset D = {(I⁺, Q, A*)}; target model π; reward model G_r; prompt generator G_diff; diffusion model diff
Initialize: experience buffer B ← ∅
Output: optimized model π
for each (I⁺, Q, A*) ∈ D do
    Sample candidate responses {A_i}, i = 1..k, from π(· | I⁺, Q)
    Compute reward scores r_i = G_r(A_i, A*)
    Compute the standard deviation σ_r of {r_i}
    Initialize temporary pair list T ← ∅
    while ∃ (A⁺, A⁻) satisfying |r⁺ − r⁻| > max(δ, 2σ_r), r⁺ > τ_pos, r⁻ < τ_neg do
        Add (A⁺, A⁻) to T and remove both from the candidate pool
    end while
    if T = ∅ and min_i r_i < τ_neg then
        Let A⁻ be the lowest-scoring response
        Add (A*, A⁻) to T
    end if
    for each (A⁺, A⁻) ∈ T do
        Generate prompt T⁻ = G_diff(A⁺, A⁻)
        Synthesize image I⁻ = diff(T⁻)
        Add (I⁺, I⁻, Q, A⁺, A⁻) to buffer B
    end for
    if |B| ≥ N then
        Sample N examples from B for training
        Compute the total loss L_OViP
        Update π ← π − η∇_π L_OViP
    end if
end for

Training. We list the training setups in Section 3.1. We perform the ablation and further studies using LLaVA-1.5-7B. The following describes the relevant experimental settings.

B.2.1 Ablation on Loss Functions

We fine-tune the model for one epoch using data generated by the model itself immediately before training, following the OViP data-construction pipeline.

For iterative training, we first fine-tune the base model on the original dataset using DPO to obtain a stronger initialization. We then sample and filter 4,730 instances as the second-stage contrastive dataset, which remains fixed across all variants. To improve supervision quality, model responses are annotated using DeepSeek-V3 for more accurate reward estimation.

B.2.2 Ablation on Online Learning

Although online methods can continue to improve when trained for another epoch, we conduct the experiment with one epoch for both online and offline methods.

B.2.3 Further Study

We save several checkpoints during training and evaluate them to gain a complete picture of the training process. We select 227 instances from the original OPA-DPO dataset—distinct from the training set—for analyzing changes in token-wise log-probability and output-quality distributions. For the token-wise log-probability analysis, we treat the original negative samples from the dataset as out-of-distribution (OOD) responses and compute their log-probabilities. For in-distribution (IND) responses, we sample 16 responses per query with our model at a temperature of 0.2 and compute the average log-probability across these samples. To examine output-distribution shifts, we sample 16 responses per query at a higher temperature of 1.2, assign scores to each response, and analyze the distribution of these scores.

For the comparison of different image-generation strategies, we implement two other representative methods. 1. Random cropping (R.Crop): we randomly crop 0–20% of the image area to form the negative image; this method is from mDPO. 2. Offline construction: we use an LLM to generate an image description that deviates in certain details from the positive image, and then use the diffusion model to generate an image from that description. The prompt is shown in Table 8.

C Algorithm

The pseudocode is shown in Table 4.

D Efficiency and Time Consumption

OViP training takes approximately 17 hours on 7×A800 (40G) GPUs: 4 GPUs are allocated to VLM training, 1 GPU to LLM deployment, and 2 GPUs to diffusion-model deployment. We divide each training step into six stages: sampling (response generation), scoring (response evaluation), description (image-prompt construction), negative image (counterfactual image generation), forward (model inference), and post-processing. Figure 10 shows the proportion of time spent in each stage, where post-processing refers to the period after forward propagation and before the next training step begins, including gradient accumulation, backpropagation, optimizer updates, and other related operations.

Excluding post-processing, the most time-consuming component is the sampling stage, as in reinforcement learning, because it requires autoregressive generation of 16 responses, one token at a time. The second most expensive stage is negative-image generation. To reduce latency, we parallelize this process by assigning two diffusion models to handle image-generation requests from four sampling subprocesses. Additionally, since the experience buffer is implemented independently in our system, repeated sampling by one subprocess may block others due to synchronization constraints. This can indirectly slow down the forward and post-processing stages as some processes await completion.

Figure 10: Time consumption for each stage during training (sampling 29.7%, post-processing 23.9%, negative image 18.3%, forward 12.9%, scoring 12.8%, description 2.4%).

E Limitations

This work introduces an online training framework that integrates dual contrastive learning across vision and language. While our loss function follows the DPO formulation, we do not explore how existing reinforcement-learning algorithms—such as PPO or GRPO—could be effectively combined with image-level contrastive objectives. In terms of evaluation, although we identify and discuss several limitations of prior protocols and propose improved metrics and procedures, the current benchmarks still fall short of fully capturing model capability. We manually identified a subset of erroneous cases through inspection but did not conduct a comprehensive correction. Lastly, our data-filtering strategy during sampling has not been carefully tuned, and a more refined design could potentially lead to better training dynamics and model performance.

F Broader Impacts

This work focuses on improving the factual reliability of vision-language models by reducing hallucination. While it does not directly engage with societal applications, it contributes to the broader goal of building more trustworthy and robust AI systems. Although the method itself does not pose obvious risks, we note that even improved generation quality does not eliminate the possibility of misuse, such as producing misleading content. Responsible deployment and proper safeguards remain necessary when integrating such models into real-world applications.

G Prompts for Judgment and Negative Image Generation
Prompt for Quality Judgment

# Task
Your role is as a discerning assistant tasked with evaluating model responses for multimodal tasks (though you have no access with the image). Upon being presented with a question that requires the interpretation of both text and images, you will receive two distinct responses. The first is crafted by our sophisticated multimodal model, while the second represents an approximate ideal answer–it may be incomplete. Your objective is to meticulously and precisely assess the model-generated response (the former) based on the provided reference answer (the latter).
- Here's how you should approach the assessment process:
1. The quality of the response depends on its accuracy and the degree of adherence to the correct answer. Therefore, if the response is much more detailed than the reference answer, it should not be considered a very good response (although it may still be considered a good one).
2. Directly provide the score of the response, with a full score of 10. Your response should follow this format: "Score: [x]\n", where "[x]" represents the score you give, and "\n" is a line break.
3. Please do not provide additional reasoning, just give the score directly.
# Question
{question}
# Response
{response}
# Correct answer
{answer}

Table 5: The prompt for judging a model response given the answer.
Prompt for Image Generation

# Task
Given an unknown image-related question, a correct answer, and an inaccurate response, carefully analyze the differences between the response and the answer. Then, provide a brief description of the image so that it aligns with the correct answer and differs from the incorrect responses. In other words, infer the content of the image.
# Example
[Example 1]
**<Question>** What is on the sandwich?
**<Answer>** The sandwich has tomatoes and lettuce on it.
**<Response>** The sandwich has a slice of egg and tomato on it.
**<Output Description>** A sandwich with only tomatoes and lettuce on it.
**Explanation**: The answer mentions lettuce and tomato, while the incorrect response mentions tomato and egg. So there is no egg on the sandwich.
[Example 2]
**<Question>** Can you point out the details that make this image unique?
**<Answer>** In the image, there is a plate with a slice of pizza topped with tomatoes, herbs, and cheese. The distinctive detail about the image is that the pizza is missing two slices, leaving just one slice remaining on the plate. This suggests that someone has already started enjoying the pizza. The slice appears to be well-cooked and freshly served, creating an appetizing and mouthwatering scene for the viewer.
**<Response>** In the image, there is a slice of pizza on a plate with tomatoes and cheese. The pizza appears to be homemade and has been cut into two pieces. The tomatoes are sliced in half, revealing their juicy interior. The cheese on top of the pizza is melted, creating a delicious-looking dish. Additionally, there is a fork nearby, suggesting that someone might be planning to enjoy this pizza soon.
**<Output Description>** A plate with a one-third remaining piece of pizza, topped with herbs, cheese, and tomatoes; someone has finished eating and left.
**Explanation**: The answer mentions that only one-third of the pizza remains and that someone has just finished eating and left, which is inconsistent with the response. Therefore, the image should include these two features.
[Example 3]
**<Question>** Bird or cow?
**<Answer>** Bird
**<Response>** The bird in the image is a small, brown and white bird with a distinctive head shape and coloration. It is not a cow. The bird is perched on a branch, which is situated in front of a white building.
**<Output Description>** A big, blue bird perched on a branch in front of a black building.
**Explanation**: Both the answer and the response mention the bird, but the response is more detailed. So the description should be contrastive to the features of the bird in the response.
# Requirements
- The description should be brief but precise.
- If both the answer and the response are long, focus on describing the one or two most significant differences.
- Do not provide any analysis or explanation; only describe the image.
- A common approach is to describe what is present in the image and what is missing.
**<Question>** {question}
**<Answer>** {answer}
**<Response>** {response}
**<Output Description>**

Table 6: The prompt for the image-generation instruction.
Prompt for Image Distortion (Part 1)

# Task
Given an unknown image-related question, a correct answer, and an inaccurate response, carefully analyze the differences between the response and the answer. Then, provide a brief description of the image so that it aligns with the correct answer and differs from the incorrect responses. In other words, infer the content of the image.
# Example
[Example 1]
**<Question>** What is on the sandwich?
**<Answer>** The sandwich has tomatoes and lettuce on it.
**<Response>** The sandwich has a slice of egg and tomato on it.
**<Output Description>** A sandwich with only tomatoes and lettuce on it.
**Explanation**: The answer mentions lettuce and tomato, while the incorrect response mentions tomato and egg. So there is no egg on the sandwich.
[Example 2]
**<Question>** Can you point out the details that make this image unique?
**<Answer>** In the image, there is a plate with a slice of pizza topped with tomatoes, herbs, and cheese. The distinctive detail about the image is that the pizza is missing two slices, leaving just one slice remaining on the plate. This suggests that someone has already started enjoying the pizza. The slice appears to be well-cooked and freshly served, creating an appetizing and mouthwatering scene for the viewer.
**<Response>** In the image, there is a slice of pizza on a plate with tomatoes and cheese. The pizza appears to be homemade and has been cut into two pieces. The tomatoes are sliced in half, revealing their juicy interior. The cheese on top of the pizza is melted, creating a delicious-looking dish. Additionally, there is a fork nearby, suggesting that someone might be planning to enjoy this pizza soon.
**<Output Description>** A plate with a one-third remaining piece of pizza, topped with herbs, cheese, and tomatoes; someone has finished eating and left.
**Explanation**: The answer mentions that only one-third of the pizza remains and that someone has just finished eating and left, which is inconsistent with the response. Therefore, the image should include these two features.

Table 7: The prompt for the distorted-image-generation instruction (part 1).
Prompt for Image Distortion (Part 2)

[Example 3]
**<Question>** Bird or cow?
**<Answer>** Bird
**<Response>** The bird in the image is a small, brown and white bird with a distinctive head shape and coloration. It is not a cow. The bird is perched on a branch, which is situated in front of a white building.
**<Output Description>** A big, blue bird perched on a branch in front of a black building.
**Explanation**: Both the answer and the response mention the bird, but the response is more detailed. So the description should be contrastive to the features of the bird in the response.
# Requirements
- The description should be brief but precise.
- If both the answer and the response are long, focus on describing the one or two most significant differences.
- Do not provide any analysis or explanation; only describe the image.
- A common approach is to describe what is present in the image and what is missing.
**<Question>** {question}
**<Answer>** {answer}
**<Response>** {response}
**<Output Description>**

Table 8: The prompt for the distorted-image-generation instruction (part 2).
Pixel Reasoner: Incentivizing Pixel-Space Reasoning with Curiosity-Driven Reinforcement Learning

Alex Su♠♡∗, Haozhe Wang♢♡∗, Weiming Ren♡†, Fangzhen Lin♢, Wenhu Chen♡†
University of Waterloo♡, HKUST♢, USTC♠, Vector Institute†
Project Page: https://tiger-ai-lab.github.io/Pixel-Reasoner/

∗These authors contributed equally and are listed alphabetically. Haozhe as Project Lead. Corresponding to: jasper.whz@outlook.com, dlwlrma314516@gmail.com, wenhuchen@uwaterloo.ca
Preprint. Under review.

Abstract

Chain-of-thought reasoning has significantly improved the performance of Large Language Models (LLMs) across various domains. However, this reasoning process has been confined exclusively to textual space, limiting its effectiveness in visually intensive tasks. To address this limitation, we introduce the concept of reasoning in the pixel space. Within this novel framework, Vision-Language Models (VLMs) are equipped with a suite of visual reasoning operations, such as zoom-in and select-frame. These operations enable VLMs to directly inspect, interrogate, and infer from visual evidence, thereby enhancing reasoning fidelity for visual tasks. Cultivating such pixel-space reasoning capabilities in VLMs presents notable challenges, including the model's initially imbalanced competence and its reluctance to adopt the newly introduced pixel-space operations. We address these challenges through a two-phase training approach. The first phase employs instruction tuning on synthesized reasoning traces to familiarize the model with the novel visual operations. Following this, a reinforcement learning (RL) phase leverages a curiosity-driven reward scheme to balance exploration between pixel-space reasoning and textual reasoning. With these visual operations, VLMs can interact with complex visual inputs, such as information-rich images or videos, to proactively gather necessary information. We demonstrate that this approach significantly improves VLM performance across diverse visual reasoning benchmarks. Our 7B model, Pixel-Reasoner, achieves 84% on V* Bench, 74% on TallyQA-Complex, and 84% on InfographicsVQA, marking the highest accuracy achieved by any open-source model to date. These results highlight the importance of pixel-space reasoning and the effectiveness of our framework.

1 Introduction

Recent advancements have demonstrated remarkable progress in developing complex reasoning abilities in Vision-Language Models (VLMs). Leading models, such as OpenAI GPT-4o/o1 [Hurst et al., 2024, Jaech et al., 2024a], Gemini-2.5 [Team et al., 2023], and VL-Rethinker [Wang et al., 2025], achieve superior performance on various multimodal reasoning benchmarks like MathVista [Lu et al., 2023], MMMU [Yue et al., 2024], and MEGA-Bench [Chen et al., 2024]. A common paradigm underpinning these state-of-the-art VLMs involves processing multimodal queries to extract relevant cues, followed by a reasoning process (CoT [Wei et al., 2022]) conducted purely in textual format.

Despite their success, the prevailing textual reasoning paradigm faces an inherent limitation: relying solely on text tokens to express intermediate reasoning steps can constrain the depth and accuracy achievable by VLMs on visually intensive tasks. The lack of direct interaction with visual inputs—such as drawing lines/marks, highlighting regions, or zooming in—hinders the model's ability to engage with information-rich images. As a result, VLMs often struggle to capture fine-grained visual details, including tiny objects, subtle spatial relationships, small embedded text, and nuanced actions in videos.
Figure 1: Illustration of Pixel Reasoner. When asked a visually rich question, Pixel-Reasoner first inspects the visual inputs; it then iteratively refines its understanding and evolves its reasoning by leveraging visual operations, such as ZOOM-IN for images and SELECT-FRAMES for videos, ultimately arriving at a conclusion. In the depicted example ("Where to buy a mug like this based on its logo?"), the model describes the desk setup, infers that the mug is likely on the table, zooms in on the mug in the upper-right corner and then on its logo, and answers "This mug is from \boxed{Starbucks}."

Figure 2: The Learning Trap. Standard RL renders the policy prone to bypassing pixel-space reasoning; warm-start instruction tuning lays the foundation for effectively using visual operations, and curiosity-driven RL encourages the exploration and practice of pixel-space reasoning. Our approach combines these two phases to overcome the learning trap.

These limitations motivate a fundamental rethinking of how VLMs engage with the visual modality during reasoning. This leads us to pose the research question: Can VLMs perform reasoning steps more directly within the visual modality itself, leveraging computational visual manipulations as actions to guide reasoning?

We introduce the concept of pixel-space reasoning, proposing a novel paradigm where reasoning is not exclusively confined to a verbalized format but actively incorporates operations applied directly to the visual inputs. Rather than solely translating visual observations into textual cues, the model can actively manipulate and interact with the visual information throughout the reasoning process, employing operations like ZOOM-IN or SELECT-FRAME. These visual operations serve as integral steps within its reasoning chain, empowering the model to inspect, interrogate, and infer from visual evidence with enhanced fidelity. We frame this problem as developing a VLM endowed with a suite of visual operations. This novel framework involves strategically selecting and applying appropriate visual operations to the visual inputs, progressively refining the model's understanding, evolving its reasoning, and ultimately arriving at a conclusion.

To instill this novel capability of pixel-space reasoning, we follow the common post-training paradigm of instruction tuning and reinforcement learning (RL). However, cultivating pixel-space reasoning presents significant challenges. Firstly, existing VLMs exhibit limited zero-shot proficiency in executing pre-defined visual operations, thus requiring meticulous instruction tuning to establish a foundational understanding of these new visual operations. This initial training phase must also preserve the model's inherent self-correction abilities, thereby preparing for trial-and-error in the subsequent RL phase. Secondly, the warm-started model exhibits a significant disparity in proficiency between its well-established textual reasoning and its emergent pixel-space reasoning capabilities, which creates a "learning trap" that impedes the effective acquisition of pixel-space reasoning. On one hand, the model's initial incompetence in visual operations garners more negative feedback than textual reasoning. On the other hand, a significant portion of training queries may not strictly necessitate visual operations, allowing the model to bypass these under-developed skills.
These factors trap the cultivation of pixel-space reasoning, causing the premature cessation of efforts to utilize visual operations and improve pixel-space reasoning.

To address these challenges, our approach combines a warm-start instruction tuning phase and a reinforcement learning phase. For instruction tuning, we synthesize 7,500 reasoning traces that facilitate the cultivation of both mastery over visual operations and self-correction capabilities. Following this meticulous warm-start instruction tuning, our RL approach leverages a curiosity-driven reward scheme that balances the exploration and exploitation of pixel-space reasoning. The RL phase collects another 7,500 examples from several public image and video datasets [Feng et al., 2025, Xu et al., 2025]. Our final model, Pixel-Reasoner, built on top of Qwen2.5-VL-7B [Bai et al., 2025], shows significant improvement across several visual reasoning benchmarks with information-rich images and videos, including V* [Wu and Xie, 2024], TallyQA [Acharya et al., 2019], MVBench [Li et al., 2024a], and InfographicsVQA [Mathew et al., 2021]. On these benchmarks, Pixel-Reasoner achieves the best known open-source performance and even exceeds proprietary models like Gemini-2.5-Pro [Team et al., 2024a] and GPT-4o [Hurst et al., 2024]. We further conduct comprehensive ablation studies to provide insights into how our framework effectively cultivates pixel-space reasoning.

Our contributions are as follows:
1. We introduce the concept of pixel-space reasoning for the first time.
2. We identify a learning trap that arises when cultivating this novel reasoning ability.
3. We propose a novel two-stage post-training approach, featuring a meticulous instruction tuning stage and a curiosity-driven RL stage.
4. We achieve state-of-the-art results on visually intensive benchmarks with pixel-space reasoning.

2 Problem Formulation

We introduce pixel-space reasoning, a novel paradigm enabling models to integrate operations directly applied to visual inputs, rather than relying solely on textual reasoning. Formally, consider a vision-language query x = [V, L], where V represents visual inputs (e.g., images or videos) and L is the textual query. A model πθ constructs a solution y = [y_1, ..., y_n] via an iterative reasoning process in both pixel and textual space. At each step t, the model generates a reasoning segment y_t ∼ πθ(· | x, y^{t−1}) conditioned on the initial query x and the set of all preceding reasoning steps y^{t−1} = [y_1, ..., y_{t−1}]. Unlike the predominant textual reasoning paradigm, pixel-space reasoning allows each reasoning step y_t to be one of two types:

- Textual Thinking: steps that involve reasoning purely within the textual domain, such as calculating an equation or using domain knowledge to derive a conclusion.
- Visual Operations: steps that activate visual operations to directly manipulate or extract information from the visual inputs. A visual operation y_t involves invoking a predefined function f, yielding an execution outcome e_t = f(y_t). For instance, a model might generate y_t to trigger a select_frame operation, f_SF, with specified arguments (e.g., "target_frame") in y_t to retrieve visual tokens e_t for a particular frame. The reasoning step is then updated to y_t ← concat(y_t, e_t), incorporating the execution outcome e_t for subsequent reasoning.

This iterative reasoning process concludes when a designated end token is generated.

We aim to cultivate pixel-space reasoning via reinforcement learning (RL), where the objective is to optimize a Vision-Language Model (VLM) policy πθ that maximizes the expected reward over a dataset D:

\[ \max_\theta \; \mathbb{E}\big[\, r(\mathbf{x}, \mathbf{y}) \;\big|\; \mathbf{x} \sim \mathcal{D},\; \mathbf{y} \sim \pi_\theta(\mathbf{y} \mid \mathbf{x}) \,\big] \]

A common approach for r(x, y) is to adopt a binary correctness reward, which assesses the correctness of the generated solution y for a given query x [DeepSeek-AI et al., 2025, Liu et al., 2025]:

\[ r(\mathbf{x}, \mathbf{y}) = \begin{cases} 1 & \text{if the solution } \mathbf{y} \text{ contains the correct answer to query } \mathbf{x}, \\ 0 & \text{otherwise.} \end{cases} \]
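To make the interleaved loop concrete, here is a minimal Python sketch of a rollout under this formulation. The <visual_operation> JSON format follows the call shown in Figure 3; `policy.generate_segment`, `visual_inputs.zoom_in`, and `visual_inputs.select_frames` are assumed interfaces of ours, and in a real system e_t would be new visual tokens rather than a string.

```python
import json
import re

OP_TAG = re.compile(r"<visual_operation>(.*?)</visual_operation>", re.S)

def execute_operation(call, visual_inputs):
    """Dispatch a parsed call to the operation protocols of Section 2."""
    if call["name"] == "zoom-in":
        return visual_inputs.zoom_in(call["arguments"]["bbox"])
    if call["name"] == "select-frame":
        return visual_inputs.select_frames(call["arguments"]["frames"])
    raise ValueError(f"unknown operation: {call['name']}")

def rollout(policy, query, visual_inputs, max_steps=16):
    """Interleaved pixel-space reasoning: y_t ~ pi_theta(. | x, y^{t-1})."""
    segments = []
    for _ in range(max_steps):
        y_t = policy.generate_segment(query, visual_inputs, segments)
        match = OP_TAG.search(y_t)
        if match:  # this step is a visual operation
            e_t = execute_operation(json.loads(match.group(1)), visual_inputs)
            y_t = y_t + str(e_t)  # y_t <- concat(y_t, e_t)
        segments.append(y_t)
        if "\\boxed{" in y_t:  # the final answer ends the rollout
            break
    return segments
```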
In this work, we focus on two types of visual inputs: images and videos. We specifically consider two types of visual operations: ZOOM-IN, for inspecting details within a specified region of a target image, and SELECT-FRAME, for analyzing specific frames in a video sequence. Detailed protocols for these visual operations are provided in the appendix.
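The detailed protocols live in the paper's appendix and are not reproduced here; the following is a plausible minimal implementation of the two operations using Pillow, under the assumption that ZOOM-IN crops a bounding box from the image and SELECT-FRAME indexes into a pre-decoded frame list.

```python
from PIL import Image

def zoom_in(image: Image.Image, bbox: tuple[int, int, int, int]) -> Image.Image:
    """Crop the region given by bbox = (left, top, right, bottom).

    The crop is re-encoded as new visual tokens and appended to the
    reasoning step, letting the model inspect fine-grained details.
    """
    return image.crop(bbox)

def select_frames(frames: list[Image.Image], indices: list[int]) -> list[Image.Image]:
    """Return the requested frames from a pre-decoded video sequence."""
    return [frames[i] for i in indices]

# Example mirroring Figure 1: zoom into the mug region of a desk photo.
desk = Image.open("desk.png")  # hypothetical input path
mug_patch = zoom_in(desk, (600, 400, 700, 500))
```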
3 Warm-Start Instruction Tuning

We aim to cultivate a novel pixel-space reasoning paradigm leveraging existing Vision-Language Models (VLMs). However, instruction-tuned models such as Qwen2.5-VL-Instruct exhibit limited zero-shot proficiency in executing novel visual operations (as shown in the analysis in the appendix), likely due to their absence in standard training data. To lay the groundwork for utilizing visual operations in subsequent reinforcement learning, this section describes our approach to data curation and instruction tuning.

Collect Seed Datasets. Our data curation pipeline is designed to collect high-quality pixel-space reasoning trajectories. These trajectories are intended to serve as expert demonstrations for our policy, showcasing the effective utilization of visual operations. Therefore, we first select three datasets: SA1B [Kirillov et al., 2023], FineWeb [Ma et al., 2024], and STARQA [Wu et al., 2024]. These datasets offer diverse modalities and contents, spanning natural scenes with extensive segmentation masks, diverse web pages, and real-world videos requiring situated reasoning. Across all three datasets, high visual complexity provides rich visual information for fine-grained analysis, and explicit annotations serve as crucial reference visual cues for trajectory synthesis. A detailed description of these datasets can be found in the appendix.

Figure 3: Direct distillation from GPT-4o may generate "bypassing trajectories" in which the model ignores the visual operations and performs textual reasoning (e.g., after a zoom-in on the wrong region: "I don't see the restaurant from zoom-in. Let's look at the original image; I think the restaurant sells tea."). We thus adopt a template-based synthesis strategy for complete control: analysis of the whole image, a visual operation (e.g., <visual_operation>{"name": "zoom-in", "arguments": {"bbox": [600,400,700,500]}}</visual_operation>) targeting the reference visual cue, analysis of the local visual cues, and the final answer.

Figure 4: We synthesize self-correction trajectories by inserting an error-induced reasoning segment, i.e., a visual operation on a distractor visual cue and its analysis, before the visual operation on the reference visual cue.

Localize Reference Visual Cues. To ensure that the visual operations are genuinely necessary for resolving the vision-language queries, we selected or synthesized queries that specifically require the localization of fine-grained visual cues within rich visual information. The FineWeb and STARQA datasets already provide vision-language queries paired with reference visual cues for answers. For the SA1B dataset, we first leveraged GPT-4o to identify specific target visual details within an image, such as small objects or particular attributes. Subsequently, we prompted GPT-4o to generate a natural language query based on the identified detail and the corresponding image, formulating a fine-grained visual question that necessitates locating that specific cue.
Synthesize Expert Trajectories. Based on the curated vision-language queries requiring fine-grained visual analysis, we then synthesize expert trajectories using GPT-4o. As illustrated in Fig. 3(a), we observed that direct distillation from GPT-4o sometimes resulted in "bypassing trajectories": GPT-4o could occasionally bypass erroneous visual operations and arrive at the correct final answer solely through its textual reasoning capabilities. Such trajectories risk misleading the policy into ignoring the problematic outcomes of executing visual operations. To mitigate this issue and ensure complete control over the synthesized trajectories, we employ a template-based synthesis approach. As shown in Fig. 3(b), this template structures a pixel-space reasoning trajectory as a sequence: initial analysis of the entire visual input, followed by triggering specific visual operations to extract fine-grained details, subsequent analysis of these detailed visual cues, and ultimately the final answer. To synthesize a trajectory according to this template, we utilize the reference visual cue associated with each vision-language query. We first prompt GPT-4o to generate a textual description summarizing the entire visual input. Then, leveraging the reference visual cue, we prompt GPT-4o for a more detailed textual analysis focusing specifically on that cue. By composing these textual thinking segments and incorporating the visual operation targeting the reference visual cue, we obtain a pixel-space reasoning trajectory that effectively interleaves textual reasoning with the required visual operations.

In addition to these basic single-pass trajectories, which help the policy understand the effective utilization of visual operations, we also synthesize error-induced self-correction trajectories. These are designed to preserve and foster the policy's ability to react properly to unexpected inputs or errors during execution. As illustrated in Fig. 4, we synthesize such trajectories by deliberately choosing incorrect or improper visual cues, such as an irrelevant video frame or overly large image regions, for reaching the correct answer. We then insert the visual operations and textual thinking segments for these distracting visual cues before introducing the correct reference visual cues, thus simulating self-correction behaviors in error-induced trajectories.

Warm-Start Instruction Tuning. We include two primary types of pixel-space reasoning trajectories in our training data: single-pass and error-induced self-correction trajectories. We also include textual reasoning trajectories for vision-language queries that do not necessitate fine-grained visual analysis. This mixed data composition allows the policy to adaptively employ pixel-space reasoning only when necessary. We employ the standard Supervised Fine-Tuning (SFT) loss for training. However, we apply loss masks to tokens that represent either execution outputs from visual operations or the specifically designated erroneous visual operations within the self-correction trajectories. Masking the erroneous operations prevents the policy from learning to execute the incorrect actions.
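A minimal sketch of this masking rule, assuming each trajectory is tokenized with per-token tags marking execution outputs and the deliberately inserted erroneous operations; the tag constants and helper are our naming, not the released training code.

```python
import torch
import torch.nn.functional as F

TEXT, EXEC_OUTPUT, ERRONEOUS_OP = 0, 1, 2  # per-token tags (our convention)

def sft_loss(logits, labels, token_tags, ignore_index=-100):
    """Standard SFT cross-entropy with loss masks.

    `token_tags` labels each target token as ordinary text, an execution
    output returned by a visual operation, or part of a deliberately
    inserted erroneous operation in a self-correction trajectory.
    Masked tokens stay in the context but contribute no loss, so the
    policy conditions on them without learning to emit them.
    """
    masked = labels.clone()
    masked[(token_tags == EXEC_OUTPUT) | (token_tags == ERRONEOUS_OP)] = ignore_index
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)), masked.view(-1),
        ignore_index=ignore_index)
```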
Figure 5: RL requires incentives to explore pixel-space reasoning. Without proper incentives, the policy learns to bypass the nascent pixel-space reasoning, resulting in a declining rate of pixel-space reasoning (RaPR) over training steps, whereas Pixel-Reasoner's RaPR keeps rising.

Figure 6: The training trend of our curiosity-driven reward scheme. We leverage the curiosity bonus to encourage exploration and the efficiency penalty to punish excessive visual operations; over training steps, the averaged curiosity bonus diminishes and the averaged efficiency penalty stays small.
4 Curiosity-Driven Reinforcement Learning

The warm-started model typically suffers from a disparity in its capabilities: proficient textual reasoning versus nascent pixel-space reasoning. This inherent imbalance creates a "learning trap" that impedes the development of pixel-space reasoning, stemming from two synergistic issues. Firstly, the model's initially limited mastery over visual operations frequently leads to failures or incorrect outputs, resulting in a higher incidence of negative feedback compared to text-mediated reasoning. Secondly, a significant portion of training queries does not rigorously demand visual processing for a correct response, allowing the model to ignore the outcomes of visual operations or default to its stronger textual reasoning. This interplay fosters a detrimental cycle in which initial failures discourage further attempts, leading to the premature abandonment of exploring and mastering visual operations. As shown in Fig. 5, when training the warm-start model with standard RL [DeepSeek-AI et al., 2025, Wang et al., 2025] without proper incentives, the policy learns to bypass the nascent visual operations.

To break this cycle, we propose a curiosity-driven reward scheme to incentivize sustained exploration of pixel-space reasoning, inspired by curiosity-driven exploration in conventional RL [Pathak et al., 2017]. Instead of relying solely on extrinsic rewards for correctness, this curiosity bonus specifically incentivizes the act of attempting pixel-space operations. By intrinsically rewarding such active practice, we aim to bolster the model's nascent visual skills and counteract the discouragement of exploration that arises from early operational failures and the associated negative feedback. This mirrors how a child, driven by curiosity, might repeatedly attempt a difficult motor task, learning from each attempt, rather than immediately defaulting to an easier, already-mastered skill.

Specifically, we formalize this objective as a constrained optimization problem. Let 1_PR(y) denote the indicator function of response y utilizing pixel-space reasoning, and n_vo(y) the number of visual operations. The goal is to maximize the expected correctness outcome, subject to two critical constraints meticulously designed to cultivate pixel-space reasoning:

\[ \max_\theta \; \mathbb{E}\big[\, r(\mathbf{x}, \mathbf{y}) \;\big|\; \mathbf{x} \sim \mathcal{D},\; \mathbf{y} \sim \pi_\theta(\mathbf{y} \mid \mathbf{x}) \,\big] \tag{1} \]
\[ \text{subject to}\quad \mathrm{RaPR}(\mathbf{x}) \doteq \mathbb{E}\big[\, \mathbb{1}_{\mathrm{PR}}(\mathbf{y}) \;\big|\; \mathbf{y} \sim \pi_\theta(\mathbf{y} \mid \mathbf{x}) \,\big] \geq H, \qquad n_{\mathrm{vo}}(\mathbf{y}) \leq N \tag{2} \]

Here, the first constraint concerns the Rate of Pixel-space Reasoning (RaPR, pronounced "rapper") triggered for a query x. We mandate that this rate, averaged over rollouts y of query x, be no less than a predefined threshold H. This encourages the policy to consistently attempt pixel-space reasoning across a significant proportion of queries, acting as a directive to explore this less familiar reasoning path. The second constraint imposes an upper bound N on the number of visual operations used in any individual response. This ensures that while exploration is encouraged, it remains computationally efficient and does not lead to overly complex or protracted visual processing for individual responses.
This constrained optimization problem can be transformed into an unconstrained problem via Lagrangian Relaxation [Lemaréchal, 2001], resulting in a single reward function. This technique is commonly employed in constrained RL [Achiam et al., 2017, Wang et al., 2022, 2023]. The transformation
yields the following modified reward function $r'(x,y)$, detailed in the appendix:

$$r'(x,y) = r(x,y) + \alpha \cdot r_{\mathrm{curiosity}}(x,y) + \beta \cdot r_{\mathrm{penalty}}(y), \quad (3)$$

where

$$r_{\mathrm{curiosity}}(x,y) = \max\big(H - \mathrm{RaPR}(x),\, 0\big) \cdot \mathbb{1}_{\mathrm{PR}}(y) \quad (4)$$

$$r_{\mathrm{penalty}}(y) = \min\big(N - n_{\mathrm{vo}}(y),\, 0\big) \quad (5)$$

The modified reward incorporates two additional terms. The first term, $r_{\mathrm{curiosity}}(x,y)$, serves as the core of our curiosity mechanism. It provides an intrinsic reward that directly encourages the model to satisfy its "curiosity" about pixel-space operations, especially for queries where it has a low history of attempting them. Akin to infants curious about and exploring unseen environments or novel interactions, this term credits response $y$ with a bonus for employing pixel-space reasoning when the adoption of pixel-space reasoning, $\mathrm{RaPR}(x)$, is below a target threshold $H$. This curiosity bonus effectively lowers the activation energy for trying the visual operations, making the model more "inquisitive" and willing to venture into less certain reasoning paths. The second term, $r_{\mathrm{penalty}}(y)$, acts as an efficiency penalty at the response level, penalizing redundancy in visual operations by considering the number of visual operations performed, $n_{\mathrm{vo}}(y)$, relative to a desired maximum $N$. The coefficients $\alpha \geq 0$ and $\beta \geq 0$ are non-negative Lagrangian multipliers. These multipliers can be tuned automatically, for instance via dual gradient descent [Bishop and Nasrabadi, 2006, Wang et al., 2020], or set as pre-defined hyperparameters [Wang et al., 2022]. In our experiments, we adopt the latter approach for simplicity. We provide a concrete example in the appendix to illustrate how these hyperparameters reflect our desired properties of the policy.

This scheme offers a dynamic reward mechanism that automatically tunes the exploration bonus as training proceeds. As illustrated in Fig. 6, the curiosity bonus naturally diminishes as the policy explores pixel-space reasoning more. This prevents the policy from reward hacking, i.e., overly relying on the exploration bonus regardless of final correctness.
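To make the reward computation concrete, here is a minimal sketch of Eqs. (3)-(5) for one group of rollouts sampled for a single query. The `Rollout` container and function names are our own, and estimating RaPR(x) as the in-group fraction of rollouts that trigger pixel-space reasoning is an assumption consistent with the paper's GRPO-style group sampling (8 rollouts per query); this is not the authors' released implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Rollout:
    correct: bool         # extrinsic correctness outcome r(x, y) in {0, 1}
    used_pixel_ops: bool  # 1_PR(y): did the response trigger pixel-space reasoning?
    n_visual_ops: int     # n_vo(y): number of visual operations in the response

def curiosity_driven_rewards(group: List[Rollout],
                             alpha: float = 0.5,
                             beta: float = 0.05,
                             H: float = 0.3,
                             N: int = 1) -> List[float]:
    """Compute r'(x, y) = r + alpha * r_curiosity + beta * r_penalty
    (Eqs. 3-5) for a group of rollouts sampled for the same query x."""
    # RaPR(x) estimated as the fraction of in-group rollouts using pixel ops.
    rapr = sum(r.used_pixel_ops for r in group) / len(group)
    rewards = []
    for r in group:
        extrinsic = 1.0 if r.correct else 0.0
        # Eq. 4: bonus is active only while RaPR(x) is below the target H,
        # and only for responses that actually attempt pixel-space reasoning.
        curiosity = max(H - rapr, 0.0) * (1.0 if r.used_pixel_ops else 0.0)
        # Eq. 5: non-positive term penalizing operations beyond the budget N.
        penalty = min(N - r.n_visual_ops, 0)
        rewards.append(extrinsic + alpha * curiosity + beta * penalty)
    return rewards
```

As a sanity check under the paper's settings (alpha = 0.5, H = 0.3, 8 rollouts), a single pixel-space rollout in a group gives RaPR = 1/8, so its bonus is 0.5 × (0.3 − 0.125) = 0.0875, matching the worked example in the appendix; with beta = 0.05 and N = 1, each extra visual operation beyond the first costs 0.05.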
5 Experiments

In this section, we first outline the training and evaluation settings. We then examine the effectiveness of pixel-space reasoning, and study the key factors for cultivating pixel-space reasoning.

Training Data and Evaluation Settings. Utilizing the data curation pipeline outlined in Section 3, we assembled a dataset of 7,500 trajectories for warm-start instruction tuning. This dataset includes 5,500 pixel-space reasoning trajectories synthesized using GPT-4o, spanning domains such as images, webpages, and videos. We also include 2,000 text-space reasoning trajectories to balance the use of visual operations. During RL, we construct 15,000 queries from our SFT dataset, InfographicVQA [Mathew et al., 2021], and publicly available datasets [Xu et al., 2025, Wu et al., 2024]. Refer to the appendix for a comprehensive view of the dataset compositions. We evaluated our model and other baselines on four representative multimodal benchmarks using greedy decoding: TallyQA, V*, InfographicVQA, and MVBench. This selection offers a wide spectrum of visual understanding tasks, from fine-grained object recognition to high-level reasoning in both static and dynamic scenarios. Specifically, V* (V-Star) [Wu and Xie, 2024] evaluates multimodal large language models (MLLMs) on their ability to process high-resolution, visually complex images and focus on fine-grained visual details. TallyQA [Acharya et al., 2019] consists of questions that require reasoning over object quantities, often demanding the
model to locate, differentiate, and tally objects across complex scenes. MVBench [Li et al., 2024a] is a comprehensive benchmark designed to evaluate MLLMs on their temporal understanding capabilities across 20 challenging video tasks, necessitating reasoning beyond static image analysis. InfographicVQA [Mathew et al., 2021] evaluates the model's ability to understand complex infographic images that blend textual and visual content, including charts, diagrams, and annotated images. Success on this benchmark requires parsing layout, reading embedded text, and linking visual elements with semantic meaning.

Table 1: Our main results on the four evaluated benchmarks.

| Model | Size | V* Bench (Acc) | TallyQA-complex (Acc) | MVBench-test (Acc) | InfoVQA-test (ANLS) |
|---|---|---|---|---|---|
| Models w/o Tools | | | | | |
| GPT-4o | - | 62.8 | 73.0 | 64.6 | 80.7 |
| Gemini-2.0-Flash | - | 73.2 | 73.8 | - | 86.5 |
| Gemini-2.5-Pro | - | 79.2 | 74.0 | - | 84.0 |
| Qwen2.5-VL | 7B | 70.4 | 68.6 | 63.8 | 80.7 |
| Video-R1 | 7B | 51.2 | 42.6 | 63.9 | 67.9 |
| LongLLaVA | 13B | 68.5 | 64.6 | 54.6 | 65.4 |
| Gemma3 | 27B | 62.3 | 54.3 | 56.8 | 59.4 |
| Models with Tools | | | | | |
| Visual Sketchpad (GPT-4o) | - | 80.4 | - | - | - |
| IVM-Enhance (GPT-4V) | - | 81.2 | - | - | - |
| PaLI-3-VPD | 5B | 70.9 | - | - | - |
| SEAL | 7B | 74.8 | - | - | - |
| PaLI-X-VPD | 55B | 76.6 | - | - | - |
| Ours (initialized from Qwen2.5-VL-7B) | | | | | |
| Pixel-Reasoner | 7B | 84.3 | 73.8 | 67.8 | 84.0 |
| Ablation baselines (ablated from Pixel-Reasoner) | | | | | |
| Warm-Start Model (w/o RL) | 7B | 79.0 | 67.9 | 59.0 | 74.3 |
| RL w/o Curiosity | 7B | 81.1 | 71.8 | 66.4 | 80.7 |
| RL w/o Warm-Start | 7B | 81.7 | 72.2 | 65.6 | 81.2 |
| RL w/o Correction-Data | 7B | 80.1 | 69.8 | 63.6 | 78.2 |

Compared Models and Implementation. We compare against a wide range of models.
• Models without Tools: We include GPT-4o [Hurst et al., 2024], Gemini-2.0-Flash [Team et al., 2024b], and Gemini-2.5-Pro [Team et al., 2024a]. These models do not have access to tools and simply answer with chain-of-thought. We include Qwen2.5-VL [Bai et al., 2025] and Gemma3 [Team et al., 2025] to show the performance of general VLMs. We also compare with the RL-based VLM Video-R1 [Feng et al., 2025] due to its similar algorithm. We further include LongLLaVA [Wang et al., 2024] because it aims to scale up image input to deal with high-resolution images (V*) and long video sequences (MVBench).
• Models with Tools: We include Visual Sketchpad [Hu et al., 2024a], which empowers GPT-4o to use different tools such as zoom-in and depth. We also include Instruction-Guided Visual Masking [Zheng et al., 2024], which highlights the desired region in a given image. Finally, we add Visual-Program-Distillation (VPD) [Hu et al., 2024b], which aims to distill tool reasoning into closed-source VLMs like PaLI. These models are specialized in V* Bench. We include SEAL [Wu and Xie, 2024] from the original V* Bench paper, which utilizes a visual guided search tool to augment high-resolution image understanding.

Pixel-Reasoner was trained on 8×A800 (80G) GPUs, using Open-R1 and OpenRLHF for instruction tuning and reinforcement learning, respectively. We adopt GRPO [DeepSeek-AI et al., 2025] with selective sample replay due to vanishing advantages [Wang et al., 2025]. We include training details in the appendix, and will release code, models, and data to support reproducibility.
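The following sketch illustrates the two ingredients just mentioned: GRPO's group-normalized advantages, and a simplified form of selective sample replay that skips reward-uniform groups (whose advantages vanish) and tops the batch up with previously cached samples. The buffer policy, thresholds, and names here are our assumptions; see VL-Rethinker [Wang et al., 2025] for the actual SSR mechanism.

```python
import random
from typing import List, Tuple

def grpo_advantages(rewards: List[float], eps: float = 1e-6) -> List[float]:
    """GRPO-style group-normalized advantages for the rollouts of one query:
    A_i = (r_i - mean(r)) / (std(r) + eps)."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

def build_batch(groups: List[Tuple[str, List[str], List[float]]],
                replay_buffer: List[Tuple[str, str, float]],
                batch_size: int = 256) -> List[Tuple[str, str, float]]:
    """Assemble a training batch of (query, response, advantage) triples.
    Reward-uniform groups carry no gradient signal and are skipped; samples
    from informative groups are cached and replayed to fill the batch."""
    batch = []
    for query, responses, rewards in groups:
        if max(rewards) - min(rewards) < 1e-8:
            continue  # uniform rewards -> all-zero advantages, skip the group
        for response, adv in zip(responses, grpo_advantages(rewards)):
            sample = (query, response, adv)
            batch.append(sample)
            replay_buffer.append(sample)  # cache for later replay
    while len(batch) < batch_size and replay_buffer:
        batch.append(random.choice(replay_buffer))  # top up from the buffer
    return batch[:batch_size]
```

Consistent with the training details in the appendix, such a buffer would persist for one episode (512 queries) before being cleared.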
5.1 Main Results

Table 1 shows that Pixel-Reasoner achieves the highest open-source results across all four benchmarks. Remarkably, Pixel-Reasoner, at a mere 7B parameters, not only surpasses substantially larger open-source models like the 27B Gemma3 across all benchmarks, but also outperforms specialized models that depend on external tools, such as IVM-Enhance (GPT-4V). Furthermore, Pixel-Reasoner's capabilities extend to outperforming leading proprietary models, evidenced by its 5.1-percentage-point lead over Gemini-2.5-Pro on V* Bench (84.3 vs. 79.2) and the overall highest scores among all models listed.

[Figure 7: Training dynamics of ablation baselines. During RL training, different baselines show different trends in triggering pixel-space reasoning (Left) and in the error rate of utilizing visual operations (Middle). Our curiosity-driven reward scheme effectively cultivates pixel-space reasoning by actively practicing and enhancing this nascent ability, as evidenced by the narrowed gap in return between the two reasoning paradigms (Right).]

We observe that RL training is crucial to these strong results. Our ablation model "Warm-Start Model (w/o RL)" performs below the original checkpoint Qwen2.5-VL on many benchmarks; after RL training, performance rises to state-of-the-art levels. This reflects the necessity of RL training to cultivate pixel-space reasoning. The performance underscores the significant potential of our proposed pixel-space reasoning paradigm. This potential is further highlighted when comparing Pixel-Reasoner with the ablation baselines "RL w/o Curiosity" and "RL w/o Warm-Start". Due to insufficient incentives and limited proficiency in utilizing visual operations, these baselines ultimately default to text-space reasoning, resulting in a significant performance drop of 2.5 points on average across benchmarks. These empirical gains verify the effectiveness of pixel-space reasoning: by enabling the model to proactively engage with visual operations, this new reasoning paradigm facilitates a more precise visual understanding and, consequently, a stronger reasoning capability.

5.2 Key Factors for Cultivating Pixel-Space Reasoning

This section investigates two critical factors in fostering pixel-space reasoning through RL: first, the policy's proficiency in utilizing visual operations lays the foundation for RL; and second, the role of incentives in encouraging the adoption of pixel-space reasoning during RL. To gain deeper insight into the results in Tab. 1, we analyze the training dynamics of various baselines, shown in Fig. 7. Specifically, the left panel shows the proportion of training rollouts that employ pixel-space reasoning strategies, while the middle panel indicates the error rate associated with the execution of visual operations. The right panel compares the expected correctness between the two reasoning paradigms, highlighting the disparity in the two capabilities over training time.
In addition, the appendix provides concrete examples of trajectories that bypass pixel-space reasoning, illustrating "the learning trap."

Effective Utilization of
Visual Operations Requires Instruction Tuning. A crucial finding is that meticulous warm-start instruction tuning is essential for enhancing the policy's mastery of visual operations and its capacity for self-correction. To demonstrate this, we analyze the RL training dynamics originating from three distinct instruction-tuned models: (a) the Warm-Start Model, which undergoes the proposed warm-start instruction tuning phase; (b) the No-Correction Model, which is tuned using single-pass expert trajectories but without the error-induced self-corrective trajectories; and (c) the Zero-Shot Model, which is Qwen2.5-VL-Instruct with zero-shot prompts of the available visual operations. The training dynamics reveal distinct outcomes:
• The Zero-Shot Model (orange lines) commences with a low RaPR of approximately 20%, which progressively declines during RL and reaches zero. This initially low propensity to trigger visual operations provides insufficient practice on visual operations. Consequently, the model receives lower expected returns from its nascent pixel-space reasoning compared to its more established textual reasoning, leading to a diminishing RaPR. This illustrates how limited initial proficiency in visual operations can create a detrimental cycle, hindering the development of pixel-space reasoning. Its error rate for visual operations also remains low due to their minimal usage.
• The No-Correction Model (blue lines), trained solely on single-pass expert trajectories, initially exhibits an increase in RaPR, suggesting a propensity for attempting visual operations. However, this trend is quickly overshadowed by a significant and persistent rise in the failure rate of these operations. This elevated error rate points to the model's inability to effectively respond to unexpected or erroneous outcomes from visual tasks. This deficiency stems directly from the absence of error-induced self-correction trajectories during its instruction-tuning phase. Consequently, the policy increasingly triggers pixel-space reasoning while simultaneously ignoring the outcomes of visual operations and favoring textual reasoning. Interestingly, we observe that the resulting reasoning trajectories involve error messages from visual operations but can still arrive at a correct answer. This indicates reward hacking: the policy earns the curiosity bonus by superficially executing visual operations, while it also earns the correctness reward by essentially relying on textual reasoning to arrive at the final answer.
• The Warm-Start Model (purple lines, "PixelReasoner"), in contrast, serves as the foundation for RL; when appropriately incentivized, it enables the successful cultivation of pixel-space reasoning without exhibiting excessive error rates in visual operations. This underscores the importance of the comprehensive instruction tuning provided by the warm-start phase.

Cultivation of Novel Reasoning Capabilities Requires Incentives. To evaluate the impact of incentives, we compare the RL training dynamics starting from the Warm-Start Model, both with and without curiosity-driven incentives.
• Standard RL without curiosity (grey lines). The curve shows a consistent decrease in the utilization of visual operations (RaPR), from around 0.55 to 0 within 240 gradient steps. This decline occurs because, without a specific impetus to explore, the policy favors its more developed textual reasoning over the initially less competent pixel-space reasoning. The failure rate of visual operations remains low as their usage diminishes.
• Our Model (purple lines), which also starts from the Warm-Start Model but incorporates a curiosity-driven exploration bonus, demonstrates a more complex and ultimately successful trajectory. Initially, Pixel-Reasoner exhibits a decrease
in RaPR over the first 50 gradient steps, which then plateaus for around 150 steps. During this stage, the policy is compelled by the curiosity bonus to continue exploring pixel-space reasoning, despite its relative inferiority compared to textual reasoning (as shown in Fig. 7, Right). Not until around 200 gradient steps does the policy start to effectively leverage the benefits of pixel-space reasoning. Its RaPR then increases proactively and substantially, accompanied by a low and stable failure rate for visual operations. This indicates that the combination of robust warm-start instruction tuning and the curiosity-driven incentive allows the policy to not only explore but also master the new pixel-space reasoning capability. Also note that Pixel-Reasoner exhibits a relatively high RaPR of 80% due to the high proportion of visually intensive tasks in the training queries. We provide curves on test sets in the appendix.

6 Related Work

Post-Training for Vision-Language Models. Post-training techniques, such as instruction tuning and reinforcement learning, are critical for adapting large Vision-Language Models (VLMs) to complex tasks beyond initial pre-training. LLaVA [Liu et al., 2023], LLaVA-OV [Li et al., 2024b], Infinity-MM [Gu et al., 2024], and MAmmoTH-VL [Guo et al., 2024] have shown that scaling instruction tuning datasets and increasing task diversity significantly enhances VLM generalization across various multimodal benchmarks. Recently, a growing body of work applies RL to the multimodal domain [Deng et al., 2025, Huang et al., 2025, Feng et al., 2025]. These approaches typically employ multi-stage pipelines, starting with SFT on costly distilled data and then applying RL to further refine the model's reasoning capabilities. VL-Rethinker [Wang et al., 2025] investigates more direct RL approaches to foster slow thinking in VLMs, and introduced selective sample replay (SSR) to counteract the vanishing-advantages problem in GRPO.

Vision-Language Models with Tools. Recent research has explored augmenting VLMs with external tools or enabling them to perform pixel-level operations on inputs. Chain-of-Manipulation [Qi et al., 2025] and Visual-Program-Distillation (VPD) [Hu et al., 2024b] focus on training models to effectively utilize tools or distill tool-based reasoning. Visual Sketchpad [Hu et al., 2024a] equips models such as GPT-4o with tools like depth perception and Python plotting. Models like o3 [Jaech et al., 2024b] demonstrate an ability to "think with images" by dynamically applying operations like zooming or flipping to improve visual understanding. Specific tools such as Instruction-Guided Visual Masking [Zheng et al., 2024] and visual guided search [Wu and Xie, 2024] have been integrated into these frameworks.

7 Conclusion

In this paper, we show for the first time how to incentivize pixel-space reasoning in an existing vision-language model. Our warm-start instruction tuning and curiosity-driven RL are both essential to achieving the state-of-the-art performance. However, our work is currently limited to two primary operations, which is insufficient for broader tasks. Our framework is easily extensible to other operations such as depth maps and image search. In the future, the community can work together to enrich the visual operations to enhance pixel-space reasoning in VLMs.

References

Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan
Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024.

Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024a.

Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.

Haozhe Wang, Chao Qu, Zuming Huang, Wei Chu, Fangzhen Lin, and Wenhu Chen. Vl-rethinker: Incentivizing self-reflection of vision-language models with reinforcement learning. arXiv preprint arXiv:2504.08837, 2025.

Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023.

Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9556–9567, 2024.

Jiacheng Chen, Tianhao Liang, Sherman Siu, Zhengqing Wang, Kai Wang, Yubo Wang, Yuansheng Ni, Wang Zhu, Ziyan Jiang, Bohan Lyu, et al. Mega-bench: Scaling multimodal evaluation to over 500 real-world tasks. arXiv preprint arXiv:2410.10563, 2024.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837, 2022.

Kaituo Feng, Kaixiong Gong, Bohao Li, Zonghao Guo, Yibing Wang, Tianshuo Peng, Benyou Wang, and Xiangyu Yue. Video-r1: Reinforcing video reasoning in mllms. arXiv preprint arXiv:2503.21776, 2025.

Guowei Xu, Peng Jin, Hao Li, Yibing Song, Lichao Sun, and Li Yuan. Llava-cot: Let vision language models reason step-by-step, 2025. URL https://arxiv.org/abs/2411.10440.

Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2.5-VL technical report. arXiv preprint arXiv:2502.13923, 2025.

Penghao Wu and Saining Xie. V*: Guided visual search as a core mechanism in multimodal llms. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13084–13094, 2024.

Manoj Acharya, Kushal Kafle, and Christopher Kanan. Tallyqa: Answering complex counting questions. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 8076–8084, 2019.

Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, et al. Mvbench: A comprehensive multi-modal video understanding benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22195–22206, 2024a.

Minesh Mathew, Viraj Bagal, Rubèn Pérez Tito, Dimosthenis Karatzas, Ernest Valveny, and CV Jawahar. Infographicvqa. In 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 2582–2591, 2022.

Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. Gemini 1.5: Unlocking
multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024a.

DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948.

Zichen Liu, Changyu Chen, Wenjun Li, Penghui Qi, Tianyu Pang, Chao Du, Wee Sun Lee, and Min Lin. Understanding r1-zero-like training: A critical perspective. arXiv preprint arXiv:2503.20783, 2025.

Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything, 2023. URL https://arxiv.org/abs/2304.02643.

Xueguang Ma, Shengyao Zhuang, Bevan Koopman, Guido Zuccon, Wenhu
Chen, and Jimmy Lin. Visa: Retrieval augmented generation with visual source attribution, 2024. URL https://arxiv.org/abs/2412.14457.

Bo Wu, Shoubin Yu, Zhenfang Chen, Joshua B Tenenbaum, and Chuang Gan. Star: A benchmark for situated reasoning in real-world videos, 2024. URL https://arxiv.org/abs/2405.09711.

Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In International conference on machine learning, pages 2778–2787. PMLR, 2017.

Claude Lemaréchal. Lagrangian relaxation. Computational combinatorial optimization: optimal or provably near-optimal solutions, pages 112–156, 2001.

Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. Constrained policy optimization. In International conference on machine learning, pages 22–31. PMLR, 2017.

Haozhe Wang, Chao Du, Panyan Fang, Shuo Yuan, Xuming He, Liang Wang, and Bo Zheng. Roi-constrained bidding via curriculum-guided bayesian reinforcement learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 4021–4031, 2022.

Haozhe Wang, Chao Du, Panyan Fang, Li He, Liang Wang, and Bo Zheng. Adversarial constrained bidding via minimax regret optimization with causality-aware reinforcement learning. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2314–2325, 2023.

Christopher M Bishop and Nasser M Nasrabadi. Pattern recognition and machine learning, volume 4. Springer, 2006.

Haozhe Wang, Jiale Zhou, and Xuming He. Learning context-aware task reasoning for efficient meta-reinforcement learning. arXiv preprint arXiv:2003.01373, 2020.

Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024b.

Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, et al. Gemma 3 technical report. arXiv preprint arXiv:2503.19786, 2025.

Xidong Wang, Dingjie Song, Shunian Chen, Chen Zhang, and Benyou Wang. Longllava: Scaling multi-modal llms to 1000 images efficiently via a hybrid architecture. arXiv preprint arXiv:2409.02889, 2024.

Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, and Ranjay Krishna. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024a.

Jinliang Zheng, Jianxiong Li, Sijie Cheng, Yinan Zheng, Jiaming Li, Jihao Liu, Yu Liu, Jingjing Liu, and Xianyuan Zhan. Instruction-guided visual masking. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.

Yushi Hu, Otilia Stretcu, Chun-Ta Lu, Krishnamurthy Viswanathan, Kenji Hata, Enming Luo, Ranjay Krishna, and Ariel Fuxman. Visual program distillation: Distilling tools and programmatic reasoning into vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9590–9601, 2024b.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36:34892–34916, 2023.

Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al.
Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326, 2024b.

Shuhao Gu, Jialing Zhang, Siyuan Zhou, Kevin Yu, Zhaohu
Xing, Liangdong Wang, Zhou Cao, Jintao Jia, Zhuoyi Zhang, Yixuan Wang, et al. Infinity-mm: Scaling multimodal performance with large-scale and high-quality instruction data. arXiv preprint arXiv:2410.18558, 2024.

Jarvis Guo, Tuney Zheng, Yuelin Bai, Bo Li, Yubo Wang, King Zhu, Yizhi Li, Graham Neubig, Wenhu Chen, and Xiang Yue. Mammoth-vl: Eliciting multimodal reasoning with instruction tuning at scale. arXiv preprint arXiv:2412.05237, 2024.

Yihe Deng, Hritik Bansal, Fan Yin, Nanyun Peng, Wei Wang, and Kai-Wei Chang. Openvlthinker: An early exploration to complex vision-language reasoning via iterative self-improvement. arXiv preprint arXiv:2503.17352, 2025.

Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Zhe Xu, Yao Hu, and Shaohui Lin. Vision-r1: Incentivizing reasoning capability in multimodal large language models, 2025. URL https://arxiv.org/abs/2503.06749.

Ji Qi, Ming Ding, Weihan Wang, Yushi Bai, Qingsong Lv, Wenyi Hong, Bin Xu, Lei Hou, Juanzi Li, Yuxiao Dong, and Jie Tang. Cogcom: A visual language model with chain-of-manipulations reasoning, 2025. URL https://arxiv.org/abs/2402.04236.

Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024b.

Appendix

A Limitations

In this work, we pose a fundamental rethinking of vision-language reasoning and introduce the concept of pixel-space reasoning. While we show the effectiveness of our approach to cultivating pixel-space reasoning, this improvement is still bottlenecked by limited data spanning diverse tasks and contents. In addition, we focus on two specific visual operations to handle the primary media formats of images and videos. In the future, we endeavor to include more visual operations and examine the effectiveness of pixel-space reasoning on more diverse collections of tasks.

B Derivations of Curiosity-Driven Reward

The primary objective is to maximize the expected correctness outcome, formalized as a constrained optimization problem. Let $r(x,y)$ be the original correctness reward for a query $x$ and response $y$. The policy generating responses is denoted by $\pi_{\theta}(y \mid x)$. The optimization problem is:

$$\max_{\theta} \; \mathbb{E}\big[\, r(x,y) \;\big|\; x \sim \mathcal{D},\; y \sim \pi_{\theta}(y \mid x) \,\big] \quad (6)$$

$$\text{subject to} \quad C_1(\theta; x) \equiv \mathrm{RaPR}(x) - H \geq 0 \quad (7)$$

$$C_2(y) \equiv N - n_{\mathrm{vo}}(y) \geq 0 \quad (8)$$

where:
• $\mathrm{RaPR}(x) \doteq \mathbb{E}\big[\, \mathbb{1}_{\mathrm{PR}}(y) \;\big|\; y \sim \pi_{\theta}(y \mid x) \,\big]$ is the Rate of Pixel-space Reasoning for query $x$.
• $H$ is a predefined minimum threshold for $\mathrm{RaPR}(x)$.
• $n_{\mathrm{vo}}(y)$ is the number of visual operations in response $y$.
• $N$ is a predefined upper bound on $n_{\mathrm{vo}}(y)$.

Constraint (7) is an expectation-level constraint for a given query $x$, while constraint (8) applies to each individual response $y$. To incorporate these constraints into the objective, a common technique is the method of Lagrangian Relaxation. For a maximization problem, this typically involves subtracting terms proportional to the constraint violations (when constraints are written as $g(x) \leq 0$) from the original objective function $r(x,y)$. If we rewrite our constraints as $g_1(\theta; x) \equiv H - \mathrm{RaPR}(x) \leq 0$ and $g_2(y) \equiv n_{\mathrm{vo}}(y) - N \leq 0$, the standard Lagrangian modification to the per-instance reward would be:

$$r_{\mathrm{Lagrangian}}(x,y;\theta) = r(x,y) - \lambda_1 \big(H - \mathrm{RaPR}(x)\big) - \lambda_2 \big(n_{\mathrm{vo}}(y) - N\big) \quad (9)$$

where $\lambda_1, \lambda_2 \geq 0$ are Lagrange multipliers. The overall optimization objective would then be to maximize $\mathbb{E}[\,r_{\mathrm{Lagrangian}}(x,y;\theta)\,]$ with respect to $\theta$, and to minimize it with respect to the multipliers. However, directly applying this standard formulation has two problems.
Firstly, this formulation has an over-satisfaction issue. The term $-\lambda_2 \big(n_{\mathrm{vo}}(y) - N\big)$ would provide a positive reward if $n_{\mathrm{vo}}(y) < N$
(i.e., the constraint is "over-satisfied"), potentially encouraging the policy to use far fewer visual operations than necessary. Secondly, the term $-\lambda_1 \big(H - \mathrm{RaPR}(x)\big)$ operates at the expectation level and does not properly reward individual responses $y \sim \pi_{\theta}$. Therefore, we adopt the following modified reward function:

$$r'(x,y) = r(x,y) + \alpha \cdot \max\big(H - \mathrm{RaPR}(x),\, 0\big) \cdot \mathbb{1}_{\mathrm{PR}}(y) + \beta \cdot \min\big(N - n_{\mathrm{vo}}(y),\, 0\big) \quad (10)$$

where $\alpha \geq 0, \beta \geq 0$ are fixed hyperparameters. This formulation offers several benefits. Firstly, the clipping mechanism addresses the over-satisfaction issue while preserving equivalence to the original constrained objective [Wang et al., 2022]. The clipping ensures the penalties are active only when the respective constraints are violated; otherwise the penalties are zero, thus avoiding over-satisfaction.

Secondly, this structure allows $\alpha, \beta$ to be treated as fixed hyperparameters. In standard Lagrangian methods (Eq. 9), multipliers are often dynamically adjusted; for example, Karush-Kuhn-Tucker (KKT) conditions imply that multipliers for inactive constraints (those satisfied with slack) are zero. The clipping zeros out the penalties when constraints are satisfied, thereby obviating the need for dynamic adjustment of $\alpha, \beta$ based on constraint satisfaction levels.

In addition, the inclusion of the indicator $\mathbb{1}_{\mathrm{PR}}(y)$ converts the query-level expectation constraint into a response-level reward. Intuitively, this term acts as a targeted incentive: it rewards the specific behavior of engaging in pixel-space reasoning precisely when the average rate of such reasoning is below the desired threshold. The multiplier $\alpha \geq 0$ scales this incentive. It provides an implicit penalty for missing out on the potential bonuses the policy could have earned by employing pixel-space reasoning.

C Data and Training Details

[Figure 8: A detailed illustration of our data generation pipeline.]

C.1 Protocols of Visual Operations

We include two primary visual operations: cropping an image and selecting frames from a video.

CropImage This operation allows the model to zoom in on a specific region of an image by providing a bounding box. The input includes a two-dimensional bounding box bbox_2d, a list of numeric coordinates [x1, y1, x2, y2] constrained within the image dimensions, and a target_image index indicating which image to operate on (indexed from 1, where 1 refers to the original image). This operation helps the model focus on fine-grained details.

SelectFrames This operation enables the model to select a subset of frames from a video. The input target_frames is a list of integer indices specifying which frames to extract from a 16-frame sequence, with a limit of no more than 8 frames. This allows the model to focus on key temporal moments relevant to the query.
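A minimal sketch of how an execution backend for these two protocols might look. The function names mirror the operation names; the clamping behavior, error strings, and 0-indexed frame positions are our assumptions (the protocol only specifies that bbox_2d is constrained to the image dimensions, that target_image is 1-indexed, and that at most 8 of 16 frames may be selected).

```python
from typing import List
from PIL import Image

def crop_image(images: List[Image.Image], bbox_2d: List[float],
               target_image: int) -> Image.Image:
    """CropImage: zoom in on a region of a previously produced image.
    `target_image` is 1-indexed; index 1 refers to the original image."""
    img = images[target_image - 1]
    x1, y1, x2, y2 = bbox_2d
    # Clamp the box to the image bounds, mirroring the protocol's constraint.
    x1, y1 = max(0.0, x1), max(0.0, y1)
    x2, y2 = min(float(img.width), x2), min(float(img.height), y2)
    if x2 <= x1 or y2 <= y1:
        raise ValueError("Execution error: empty crop region")
    return img.crop((x1, y1, x2, y2))

def select_frames(frames: List[Image.Image],
                  target_frames: List[int]) -> List[Image.Image]:
    """SelectFrames: pick at most 8 frames from the 16-frame sequence.
    Frame indices are assumed to be 0-indexed here."""
    if len(target_frames) > 8:
        raise ValueError("Execution error: at most 8 frames may be selected")
    if not all(0 <= i < len(frames) for i in target_frames):
        raise ValueError("Execution error: frame index out of range")
    return [frames[i] for i in target_frames]
```

Raising explicit errors matters in this framework: as the case analysis in Appendix D shows, the policy is trained (via self-correction trajectories) to react to such error messages rather than ignore them.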
C.2 Instruction Tuning Data

Details of Seed Datasets We selected datasets based on two key attributes: high visual complexity requiring fine-grained analysis, and the presence of explicit annotations that can serve as targets or anchors for visual operations. Based on these criteria, our data sources include:
• SA1B [Kirillov et al., 2023]: A large-scale dataset of high-resolution natural scenes offering rich visual detail and complexity.
• FineWeb [Ma et al., 2024]: Consists of webpage screenshots paired with Question-Answering (QA) instances and precise bounding box annotations for answer regions, offering explicit spatial targets for visual analysis.
• STARQA [Wu et al., 2024]: Provides video data with QA pairs and annotated temporal windows indicating relevant visual contents for answers, offering both visual and temporal context for potential video-specific operations.
Detailed Data Pipeline Illustration. As Fig. 8 depicts, after we obtain reference visual cues from the seed data, we input both the whole high-resolution image (or video) and the corresponding localized reference visual cues to GPT-4o. We then use a template-based method to extract the whole-input analysis and the local detailed analysis, before concatenating the whole-input analysis, the localized reference visual cue, and the partial analysis to form the single-pass trajectory. We utilize the reference visual cue to generate wrong visual cues, which we insert into the obtained single-pass trajectories to create self-correction trajectories.

Single-pass and Self-correction Data Synthesis Details

Table 2: Self-correction trajectory types and corresponding proportions.

| Category | Trajectory Type | Proportion |
|---|---|---|
| Image | single-pass | 30% |
| Image | Recrop once | 20% |
| Image | Recrop twice | 20% |
| Image | Further zoom-in | 30% |
| Video | single-pass | 90% |
| Video | Reselect | 10% |

Here, single-pass means no error is inserted in the trajectory. Recrop once means we randomly select a bbox that has no intersection with the reference visual cue and insert it before the correct visual operation. Recrop twice means we randomly select two bboxes that have no intersection with the reference visual cue and insert them sequentially before the correct visual operation. Further zoom-in means we select an inaccurate bbox that contains the reference visual cue but is excessively larger than it, and insert it before the correct visual operation. Reselect means we sample frame indices that have no intersection with the reference visual cue's frame indices, and insert them before the correct visual operation.
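To illustrate the error-insertion step behind the "Recrop" and "Further zoom-in" trajectory types, here is a hedged sketch of sampling a distractor bounding box that does not intersect the reference visual cue, and of inflating a box that still contains it. The minimum box size, scale factor, and retry budget are illustrative assumptions, not values from the paper.

```python
import random
from typing import List, Optional

def boxes_intersect(a: List[int], b: List[int]) -> bool:
    """Axis-aligned [x1, y1, x2, y2] intersection test."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return not (ax2 <= bx1 or bx2 <= ax1 or ay2 <= by1 or by2 <= ay1)

def sample_wrong_crop(ref_bbox: List[int], width: int, height: int,
                      min_size: int = 50,
                      max_tries: int = 100) -> Optional[List[int]]:
    """'Recrop' distractor: a random bbox with no intersection with the
    reference visual cue. Returns None if none is found in max_tries."""
    for _ in range(max_tries):
        x1 = random.randint(0, width - min_size)
        y1 = random.randint(0, height - min_size)
        x2 = random.randint(x1 + min_size, width)
        y2 = random.randint(y1 + min_size, height)
        candidate = [x1, y1, x2, y2]
        if not boxes_intersect(candidate, ref_bbox):
            return candidate
    return None

def enlarge_bbox(ref_bbox: List[int], width: int, height: int,
                 scale: float = 4.0) -> List[int]:
    """'Further zoom-in' distractor: a box that contains the reference cue
    but is excessively larger than it."""
    x1, y1, x2, y2 = ref_bbox
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * scale, (y2 - y1) * scale
    return [max(0, int(cx - w / 2)), max(0, int(cy - h / 2)),
            min(width, int(cx + w / 2)), min(height, int(cy + h / 2))]
```

The sampled distractor is inserted before the correct visual operation in the single-pass trajectory, together with the appropriate corrective follow-up, to form a self-correction trajectory.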
C.3 Training Details

Implementation Details. For instruction tuning, we adapt the Open-R1 code to implement the SFT loss with loss masks. For RL, we implement our method on top of OpenRLHF. We adopt GRPO [DeepSeek-AI et al., 2025] with selective sample replay [Wang et al., 2025], because we witness significant issues of vanishing advantages. As shown in Fig. 9, our reward scheme incorporates the curiosity bonus and the efficiency penalty in addition to correctness rewards, which provides more variance in rewards. However, the ratio of queries that suffer from reward uniformity steadily increases to 90% as training progresses, leading to a drastic plunge in performance, evidenced by the ratio of "response-all-incorrect" queries. During RL training, we employed a near on-policy RL paradigm, where the behavior policy was synchronized with the improvement policy after every 512 queries, which we define as an episode. The replay buffer for SSR persisted for the duration of each episode before being cleared. For each query, we sampled 8 responses. The training batch size was set to 256 query-response pairs. Our 7B model is trained on 4×8 A800 (80G) GPUs for 20 hours.

Training Hyperparameters. For instruction tuning, we use a batch size of 128. The learning rate is 1e-6 with 10% warm-up steps. For RL, we employ a cosine learning rate schedule with an initial learning rate of 1e-6 and 3% warm-up iterations. During RL training, we sample 8 trajectories per training query and set the hyperparameters to α = 0.5, β = 0.05, H = 0.3, and N = 1. This configuration reflects our objectives: the threshold H = 0.3 encourages the policy to utilize pixel-space reasoning in approximately 30% of the responses generated for a given query, while N = 1 promotes efficiency by favoring responses that require at most one visual operation. Under these parameters, a response can receive a maximum exploration bonus of approximately 0.5 × (0.3 − 1/8) ≈ 0.0875, while each additional visual operation beyond the first incurs a penalty of −0.05.

[Figure 9: Training dynamics of RL without SSR. The ratio of reward uniformity steadily saturates to 90%. Panels track the reward-uniformity ratio, the ratios of response-all-correct and response-all-incorrect queries, correctness, curiosity, and penalty over training steps.]

[Figure 10: Pixel-Reasoner effectively captures key frames that are most informative for solving the query.]

D Additional Analysis

D.1 Statistics

Qwen2.5-VL-Instruct shows limited zero-shot ability in utilizing novel visual operations. We include the protocols of visual operations in the system prompt for Qwen2.5-VL-Instruct. Initially, it invokes visual operations in 20.2% of training rollouts; 40.6% of these invocations incur errors and 36.2% lead to incorrect answers. This results in an average accuracy of 23.2% when utilizing pixel-space reasoning, in contrast to 49.5% when utilizing textual reasoning.

RaPR of Pixel-Reasoner on the Evaluation Benchmarks. Our Pixel-Reasoner adaptively triggers pixel-space reasoning in 78.53% of cases on V* Bench, 57.78% on TallyQA-Complex, 58.95% on InfographicVQA, and 66.95% on MVBench.

D.2 Case Analysis

We show successful trajectories in Fig. 11 and Fig. 10. We also provide concrete examples of the failure modes that arise when the policy suffers from "the learning trap".

[Figure 11: Pixel-Reasoner effectively identifies the relevant region within the infographic and performs targeted analysis, either by refining the crop or directly answering the query.]

Failure mode 1: Hallucination. In the example below, the policy hallucinates as if the operations were successfully executed.

The video shows a table with various items, including vegetables and a bowl. The person is seen interacting with the vegetables, specifically placing them into the bowl. At one point, the person is seen shuffling some letters on the table. The video does not clearly show the sequence or order of the letters throughout the video. Now I will select some frames to look clearer at the sequence of the letters.
Execution error: max() arg is an empty sequence
The cropped video frames show the person continuing to prepare the salad and shuffle the letters. The letters appear to be in a different arrangement compared to the beginning of the video.\n\n\boxed{B}

Failure mode 2: No-Reaction. In the example below, the policy ignores the error message and proceeds with textual reasoning.

The video shows a person in a kitchen. The person interacts with objects on the stove, including a pot. At one point, the person appears to handle an object on the counter and then moves away from the stove area. The sequence of actions suggests the person might
have placed an item down or completed an action involving the counter.\n\nNow I will select some frames to look clearer at which object was put down by the person.
Execution error: max() arg is an empty sequence
It seems there was an issue with selecting frames. Let me describe the scene instead:\n\nThe person in the video is standing near the stove with a pot on it. After interacting with the pot, the person moves to the counter area. It appears the person may have placed an object (like a box or a paper/notebook) down on the counter before moving away. The object is not visible in the provided frames, but the sequence of actions suggests it was placed there.\n\n\boxed{C}

E Prompts

E.1 Prompts Used in RL Training

During RL training, we append the following prompt after the user query.

{question} \n\nGuidelines: Understand the given visual information and the user query. Determine if it is beneficial to employ the given visual operations (tools). For a video, we can look closer by `select_frames`. For an image, we can look closer by `crop_image`. Reason with the visual information step by step, and put your final answer within \boxed{}.

E.2 Prompts Used in Data Synthesis

E.2.1 Prompt for Question-Answer Pair Generation for SA1B

Since SA1B lacks question-answer pairs and corresponding annotations, and some pictures in SA1B have little content, we prompt GPT-4o to first determine whether the image is information-rich. If it is, GPT-4o needs to use the zoom-in tool to first crop a small part of the image, and then ask a question about objects in the small region. Otherwise, GPT-4o should reply 'Not valid'. Here is the prompt for GPT-4o:

You are an expert in generating questions about small details in a image. You will be given a HR image. First determin if the image is an information-rich image. If it is not, return 'Not valid'. If it is, choose a small region and use crop image tool to zoom in. According to both cropped image and whole image. Generate a question about objects in the small region. The question should be about the small object or its color, material. Also generate 4 choices. One of them is the correct answer. Others are wrong. It should not be ambiguous. For example if you ask about the color of a person's shoes, there should either be only one person or you specify which person you are referring to. Please make sure the object is small. Don't ask about questions related to the cropped image. For example, don't ask 'What is the color of the frame in the cropped image?' because the cropped image will not be provided. Put the question in the following format:
<question>
QUESTION HERE
</question>
Here is an example question:
<question>
question: What is the color of the person's shoes?
choices: A: Red B: Blue C: Green D: Yellow
correct_answer: A
</question>
<question>
question: What is the child on the crosswalk holding?
choices: A: Ice cream B: Ball C: Book D: None
correct_answer: C
</question>
Here is
the tool description {tool_description}. For each tool call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{{"name": <function-name>, "arguments": <args-json-object>}}
</tool_call>
Stop generating after you call a tool. Here is the image.

E.2.2 Prompts for Expert Trajectory Synthesis

For the SA1B dataset:

You are an expert in generating trajectories involving image cropping and answering questions. You will be given an image and one cropped part of it and a question. First, you need to briefly analyze the whole image, then generate: "Now I will zoom in to look clearer at 'query object or text'." Then you need to analyze the cropped part and answer the question. Put your answer choice in \boxed{}.
Here is an example:
question: What is the price mentioned for renting the single house?
choices: A: 9,000 Baht B: 10,000 Baht C: 8,500 Baht D: 12,000 Baht
Analyzing the whole image: The image shows a lively street scene with people celebrating, possibly during a festival. There is a pickup truck with people on it, and others walking around. A signboard with text is visible in the background, which seems to contain information about renting or selling a house.
Now I will zoom in to look clearer at the text on the signboard.
Analyzing the cropped part: The cropped image focuses on the signboard. The text on the signboard mentions "SALE / RENT SINGLE HOUSE" and specifies the price for renting as **9,000 Baht**.
\boxed{A}
Here is the question, image and cropped part: {text}

For the Fineweb dataset:

You are an expert in generating trajectories involving image cropping and answering questions. You will be given an image and one cropped part of it and a question. First you need to briefly analyze the whole image, then generate: "Now I will zoom in to look clearer at the part about 'query'." Then you need to analyze the cropped part and answer the question. Put your answer in \boxed{}. Final answer should be text from article. Don't change the original text or include irrelevant text from the article. The answer should be in one sentence.
Here are some examples:
question: What are the key responsibilities of a leader?
Analyzing the whole image: The document appears to be an article titled "Top 7 Skills a Leadership Training Should Teach Managers." It discusses various aspects of leadership training, including leadership essentials, change management, performance coaching, and conflict management. The article emphasizes the importance of leadership skills in managing teams effectively.
Now I will zoom in to look clearer at the part about "key responsibilities of a leader."
Analyzing the cropped part: The cropped part focuses on "Leadership Essentials," which outlines the basics of leadership, including understanding the role of a leader and the key responsibilities of a leader.
\boxed{building relationships, setting expectations, delegation, and developing a goal-oriented approach.}
question: Who won the first SEC championship in football?
Analyzing the whole image: The document is a Wikipedia article titled "SEC Championship Game." It provides an overview of the Southeastern Conference (SEC) Football Championship Game, including its history,
format, results, and notable moments. The article also includes a table summarizing the results of all SEC Championship games since its inception in 1992.
Now I will zoom in to look clearer at the part about "who won the first SEC championship in football."
Analyzing the cropped part: The cropped section includes a table of results from all SEC Championship games. The first game, held in 1992, lists #2 Alabama defeating #12 Florida with a score of 28-21 at Legion Field in Birmingham, Alabama.
\boxed{Alabama}
Here is the question and image: {text}

For the STARQA dataset:

You are an expert in generating trajectories involving frame selection and answering questions. You will be given 16 images (video frames) in chronological order and several selected frames from them and a question. First you need to briefly analyze the whole video, then generate: "Now I will select some frames to look clearer at 'query object or text'." Then you need to analyze the selected frames and answer the question. Put your answer choice in \boxed{}.
Here are some examples:
question: why did the woman take the measuring spoons away from the boy?
choices: A: do not need it anymore B: feeding C: finish eating the piece D: so can take picture E: wants to play with it
Analyzing the video: The video shows a woman and a boy in a kitchen setting. The boy is sitting on the counter, holding measuring spoons, while the woman appears to be engaged in a baking or cooking activity. The woman interacts with the boy, guiding him as they work with ingredients like flour and eggs. Toward the end, the woman takes the measuring spoons away from the boy.
Now I will select some frames to look clearer at why the woman took the measuring spoons away from the boy.
Analyzing the selected frames: In the selected frames, the woman is seen taking the measuring spoons from the boy. The boy appears to have finished using the spoons to add ingredients to the bowl. The woman likely takes the spoons to proceed with the next step in the cooking process.
\boxed{A}
question: Which object was put down by the person?
choices: A: The cup/glass/bottle. B: The clothes. C: The bag. D: The book
Analyzing the video: The video shows a person entering a room and sitting at a table. The person appears to be holding a sandwich and a book. She places the book on the table, eats the sandwich, and then picks up the book again to read. Toward the end of the video, the person leaves the table, leaving the book behind.
Now I will select some frames to look clearer at which object was put down by the person.
Analyzing the selected frames: In the selected frames, the person is seen entering the room holding a sandwich and a book. She places the book on the table before eating the sandwich. The book remains on the table as the person continues her activity and eventually leaves the room.
\boxed{D}
Here is the question and
Explaining Puzzle Solutions in Natural Language: An Exploratory Study on 6×6 Sudoku

Anirudh Maiya, Razan Alghamdi, Maria Leonor Pacheco, Ashutosh Trivedi, Fabio Somenzi
University of Colorado Boulder
{first.last}@colorado.edu

Abstract

The success of Large Language Models (LLMs) in human-AI collaborative decision-making hinges on their ability to provide trustworthy, gradual, and tailored explanations. Solving complex puzzles, such as Sudoku, offers a canonical example of this collaboration, where clear and customized explanations often hold greater importance than the final solution. In this study, we evaluate the performance of five LLMs in solving and explaining 6×6 Sudoku puzzles. While one LLM demonstrates limited success in solving puzzles, none can explain the solution process in a manner that reflects strategic reasoning or intuitive problem-solving. These findings underscore significant challenges that must be addressed before LLMs can become effective partners in human-AI collaborative decision-making.

1 Introduction

The recent success of Large Language Models (LLMs) in automating reasoning tasks using natural language highlights their potential in human-AI collaborative decision-support systems. However, their effectiveness in critical domains, such as safety-critical systems, legally sensitive decision-making, medical support systems, and business-critical processes, depends on their ability to provide step-by-step, consistent, and trustworthy explanations tailored to the prior knowledge and current capabilities of the human partner. Solving complex, multi-step puzzles like Sudoku offers a meaningful example of such decision-making scenarios. These puzzles share several key challenges, including step-by-step reasoning, the need for tailored explanations, and the importance of maintaining consistency, but without the risks associated with making incorrect critical decisions. Sudoku thus provides a controlled environment to study the capabilities of LLMs in addressing these challenges while isolating them from high-stakes consequences. This paper explores whether LLMs can effectively navigate these decision-making challenges by focusing on their performance in solving and explaining Sudoku problems.

The ability of LLMs to reason and solve problems has become a key area of interest in the field of artificial intelligence (Huang and Chang, 2023; Yin et al., 2024). Although LLMs have shown remarkable performance in tasks involving language generation, translation, and comprehension (Brown et al., 2020; Wei et al., 2024), their ability to handle more structured reasoning tasks is less understood (Mirzadeh et al., 2024). In the last year, several studies have attempted to test the abilities of LLMs to solve complex puzzles (Chia et al., 2024; Li et al., 2024; Giadikiaroglou et al., 2024a). While these models are increasingly capable of generating correct answers, the explanations they provide for their reasoning process often fall short (Turpin et al., 2024). We argue that good reasoners should be able to clearly expose their deductive processes. Therefore, reasoning explanations should be faithful, provide enough support for individual decisions, and provide insight into the process they took to arrive at any given solution.
While we can obtain correct answers and faithful reasoning steps for many puzzles with symbolic reasoners (de Moura and Bjørner, 2011), it can be hard for humans to make sense of the explanations provided by
these tools. These explanations usually come in the form of fine-grained proofs that require a good grasp of proof terminology and are often overwhelmingly detailed. They often refer to encodings of the problem that are cumbersome to relate to the original presentation of the puzzle. To be useful in practical scenarios, explanations need to be easily understandable to humans (Gilpin et al., 2018). Given that LLMs excel at producing human-like output (Ouyang et al., 2022), we are interested in studying the challenges and opportunities of LLMs for explaining puzzle solutions.

Research Questions and Findings We investigate the explanation capabilities of LLMs in the context of 6×6 Sudoku puzzles. Sudoku is a widely recognized logic-based puzzle (Rosenhouse and Taalman, 2011), which provides an ideal benchmark to explore these facets of LLMs. Solving Sudoku requires deductive reasoning and rule-based elimination, skills that go beyond simple pattern recognition. We choose 6×6 Sudoku because, while not being overly difficult, it often requires a few nontrivial reasoning steps to be solved. With our study, we aim to answer the following two research questions: (1) Can LLMs solve 6×6 Sudoku puzzles? (2) Can LLMs successfully explain the steps needed to arrive at a given solution? To answer these questions, we construct a dataset of 2,293 6×6 Sudoku puzzles with their solutions, and show that open-source models can correctly solve less than 1% of the puzzles, while the latest OpenAI model can solve 65% of them. We then select a subset of 20 puzzles spanning different difficulty levels to evaluate explanation capabilities using a structured survey, and show that explanations fall short even for the best-performing models.

Contributions This study contributes to a deeper understanding of the (in)ability of LLMs to articulate the reasoning steps needed to explain a particular solution to multi-step reasoning tasks. We show that while these tools show promise (as evidenced by the increased performance of newer models), they still face significant challenges in exposing reasoning steps in a way that is both faithful and insightful. Finally, we outline a research agenda for combining LLMs with logic-based reasoning to address some of these challenges.¹

¹All data and code will be released.

2 Related Work

The challenges that LLMs face in solving puzzles that require complex reasoning have been well documented (Giadikiaroglou et al., 2024b), with most methods struggling to produce consistent results. In response to these challenges, methods that combine symbolic reasoning and LLMs have recently received attention with varying success (Mittal et al., 2024; Jiang et al., 2022; Xin et al., 2024). PuzzleLM (Mittal et al., 2024) first prompts an LLM to generate an instance-agnostic program capable of handling various problem classes, such as Sudoku, N-Queens, graph coloring, etc. The program then converts the problem instance into SMT constraints and passes them to an external solver like Z3. The solver processes these constraints and returns a solution, which the program then converts to the desired output format. Xin et al. (2024) introduce a method for generating large-scale formal proof data
It fine-tunes an LLM, translates informal problems into formal Lean 4 statements, filters low-quality formalizations using a chain-of-thought evaluation, and attempts to prove both the formal statement and its negation. This iterative process refines the model's performance by incorporating newly generated data to enhance subsequent rounds of statement generation. However, significantly less attention has been given to the question of whether LLMs can produce useful explanations (in the form of reasoning steps) for the solutions that they provide. In this work, we study this question in the context of 6×6 Sudoku and show that, while LLMs have improved in their capacity to produce correct solutions, generating explanations continues to be challenging.

3 Task Formulation
This study assesses the problem-solving capabilities of LLMs using 6×6 Sudoku puzzles. We evaluate two key aspects: first, the ability of LLMs to generate correct solutions for puzzles of varying difficulty, and second, their capability to provide meaningful explanations for how these solutions are derived. These complementary tasks allow us to assess both the computational accuracy and reasoning transparency of LLMs in structured problem-solving scenarios.

Dataset
We generated 2,293 6×6 Sudoku puzzles using the Z3 solver. Z3 decides the satisfiability of sentences of many decidable fragments of first-order logic. It combines the CDCL algorithm for propositional satisfiability with solvers for conjunctions of literals over background theories like linear arithmetic.

Puzzle Generation Process
In our Sudoku generation process, each puzzle is modeled as a grid of cells, where every row, column, and 2×3 box must contain the numbers 1 to 6 without repetition. To do this, the generator initially evaluates random seeds, that is, partially filled grids with 11 givens, until Z3 confirms the existence of a solution (not necessarily unique). Once a solved grid is found, givens are removed from it in random order until none can be removed without losing the uniqueness of the solution. Z3 is used to check that the solution remains unique. Namely, if x is the value in Row y, Column z that is a candidate for removal, the constraint asserting r_y c_z = x is replaced by r_y c_z ≠ x. If the resulting constraints are unsatisfiable, the candidate can be dropped. This results in a dataset in which every puzzle has a unique solution and is minimal, in the sense that removal of any clue introduces additional solutions. The minimum number of givens for a 6×6 puzzle is 8 (McGuire et al., 2014). While our generation process does not guarantee uniform sampling from all 28,200,960 valid 6×6 grids, we get essentially the same statistics for the number of givens when we start from a uniform random sample of grids. (We do not sample from the uniform distribution of grids in our generator because the approach is computationally infeasible for grids larger than 6×6.) Consequently, approximately 96% of our resulting puzzles have between 9 and 11 givens. | https://arxiv.org/abs/2505.15993v1 |
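This uniqueness check maps directly onto Z3's Python API. The following is a minimal sketch assuming the z3-solver package; the grid encoding and function names are ours, since the paper does not publish its generator code:

```python
# Minimal sketch of the uniqueness check described above (z3-solver package).
# The one-Int-per-cell encoding is an assumption, not the authors' code.
from z3 import Solver, Int, Distinct, And, unsat

def sudoku_constraints(cells):
    """Structural 6x6 Sudoku constraints: values 1..6, rows, columns, 2x3 boxes."""
    cs = [And(1 <= cells[r][c], cells[r][c] <= 6) for r in range(6) for c in range(6)]
    cs += [Distinct(cells[r]) for r in range(6)]                         # rows
    cs += [Distinct([cells[r][c] for r in range(6)]) for c in range(6)]  # columns
    for br in range(0, 6, 2):
        for bc in range(0, 6, 3):
            cs.append(Distinct([cells[br + i][bc + j] for i in range(2) for j in range(3)]))
    return cs

def can_drop_given(givens, row, col):
    """A given at (row, col) with value x can be dropped iff replacing the
    constraint r_y c_z = x with r_y c_z != x is unsatisfiable, i.e., every
    solution of the remaining puzzle still forces that cell to x."""
    cells = [[Int(f"r{i}c{j}") for j in range(6)] for i in range(6)]
    s = Solver()
    s.add(sudoku_constraints(cells))
    for (i, j), v in givens.items():
        s.add(cells[i][j] != v if (i, j) == (row, col) else cells[i][j] == v)
    return s.check() == unsat
```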
To assess the representativeness of our sample, we compare with publicly available benchmarks², which show the following distribution for a sample of 323,231 symmetrical 6×6 puzzles: 0.2% of puzzles have 8 givens, approximately 88.7% have 10 or 12 givens, and 11.1% have 14 or more givens. Symmetrical puzzles have an even number of givens. Moreover, the puzzles in the Menneske data set with more than 8 givens, unlike ours, have redundant givens. Both differences contribute to the higher average number of givens when compared to our sample.
²https://www.menneske.no/sudoku/2x3/eng/showstartcount.html

Assessing puzzle difficulty
To assess puzzle difficulty, we employ the widely accepted Explainer Rating (ER) system, which rates puzzles based on the most complex solving technique required for their solution. As shown in Figure 2, the ER scale ranges from 1.0 (basic techniques) to above 6.2 (advanced techniques), with each solving technique assigned a specific rating value. Our puzzles fall into five distinct difficulty categories: Easy (ER 1.0-1.2), Medium (ER 1.5), Hard (ER 1.7-2.5), Fiendish (ER 2.6-6.0), and Diabolical (ER 6.2+). Further details can be found in App. A. This distribution ensures our dataset spans a wide range of difficulty levels, with 93.5% of puzzles being Easy or Medium difficulty, and 6.5% requiring more advanced solving techniques. This variety allows us to evaluate both basic solving capabilities and more sophisticated reasoning strategies. To assess the reliability of this metric, 20 puzzles with varying numbers of back-jumps were solved and rated by two authors of the paper. The human solvers rated the puzzles on a scale from 1 to 5, with 1 denoting the easiest and 5 the hardest among the set. Comparing the average rating of the two human solvers against the ER for the same puzzles resulted in a Spearman's ranking correlation coefficient (ρ) of 0.86994. To put this in perspective, the two individual human rankings have a lower ρ of 0.8042. This ensured that the ER is reflective of the human perception of a puzzle's difficulty, as seen in Fig. 1.

[Figure 1: The difficulty ratings for 20 puzzles given by two human solvers and a Sudoku Explainer program; a higher rating denotes higher difficulty. Plotted per puzzle ID: Human Rating #1, Human Rating #2, Average Human Rating, and Explainer Rating (ER).]

4 Experiments
Can LLMs Solve Sudoku Puzzles? We tested the performance of several state-of-the-art LLMs on our 6×6 Sudoku puzzles. The open-source models Llama-3.1-70B-Instruct and Llama-3.1-8B-Instruct (Dubey et al., 2024), Gemma-2-9B-Instruct (Rivière et al., 2024), and Mistral-7B-Instruct-v0.3 (Jiang et al., 2023) were evaluated on our full dataset of 2,293 puzzles. Due to the access limitations and rate restrictions of closed proprietary models, we evaluated OpenAI's o1-preview model on a randomly selected subset of 100 puzzles. Each model was tasked with generating solutions, and the correctness of its outputs was evaluated at multiple levels: cell-wise, row-wise, column-wise, box-wise, and full accuracy. The prompt used can be found in App. B. | https://arxiv.org/abs/2505.15993v1 |
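The paper does not define these levels formally; the sketch below is one natural reading, scoring each unit against the puzzle's unique solution. Whether "row-wise" means matching the solution or merely satisfying the all-different constraint is our assumption (with a unique solution, a fully correct row satisfies both):

```python
def sudoku_accuracies(pred, gold):
    """pred, gold: 6x6 lists of ints in 1..6 (gold is the unique solution).
    Returns the fraction of correct units at each evaluation level."""
    cell = sum(pred[r][c] == gold[r][c] for r in range(6) for c in range(6)) / 36
    row = sum(pred[r] == gold[r] for r in range(6)) / 6
    col = sum(all(pred[r][c] == gold[r][c] for r in range(6)) for c in range(6)) / 6
    boxes = [[(br + i, bc + j) for i in range(2) for j in range(3)]
             for br in range(0, 6, 2) for bc in range(0, 6, 3)]   # six 2x3 boxes
    box = sum(all(pred[r][c] == gold[r][c] for r, c in b) for b in boxes) / 6
    return {"cell": cell, "row": row, "column": col, "box": box,
            "full": float(pred == gold)}
```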
Table 1 shows the results split into two sections: performance on the full dataset (2,293 puzzles) for open-source models, and performance on the 100-puzzle subset for all models including o1-preview.

Table 1: Sudoku solving performance across models and dataset sizes. The upper section shows results for open-source models evaluated on the full dataset (2,293 puzzles); the lower section shows results for all models, including o1-preview, evaluated on a subset of 100 puzzles.

                                        Accuracy (%)
LLM                        Cell-wise  Row-wise  Column-wise  Box-wise  Fully Correct
Mistral-7B-Instruct-v0.3     21.96      0.59       0.17        0.17        0.00
Llama-3.1-8B-Instruct        39.65      2.22       2.01        1.36        0.00
Llama-3.1-70B-Instruct       45.93      4.59       6.29        2.70        0.04
Gemma-2-9B-Instruct          50.12      6.74       6.53        3.87        0.04
Mistral-7B-Instruct-v0.3     22.42      0.67       0.17        0.17        0.00
Llama-3.1-8B-Instruct        39.44      1.67       1.83        0.50        0.00
Llama-3.1-70B-Instruct       44.92      2.50       4.67        1.83        0.00
Gemma-2-9B-Instruct          49.47      5.33       6.00        2.33        0.00
OpenAI o1-preview            83.53     67.50      67.00       66.83       59.00
Humans                      100.00    100.00     100.00      100.00      100.00

Open-source models managed at most 0.04% fully correct solutions, regardless of their size, showing that they struggle with integrating all constraints across the grid. For these models, column-wise and row-wise constraints were prioritized over box-wise constraints. Even then, they struggled to enforce constraints in general, as exemplified by correctly placing 40-50% of cells while satisfying only 7% of the row/column constraints. The results on the 100-puzzle subset revealed that OpenAI o1-preview considerably outperforms all open-source models, achieving 59% accuracy for fully correct solutions compared to near-zero performance from the other models. This significant performance gap suggests that proprietary models have made substantial progress in logical reasoning capabilities. Performance analysis across difficulty levels reveals that o1-preview achieves 100% accuracy for Easy (ER 1.0-1.2) and Medium (ER 1.5) puzzles, but shows significant degradation as complexity increases. For Diabolical puzzles (ER 6.2+), cell-wise accuracy drops to 57.2% and complete-solution accuracy to 40%, suggesting the model struggles with integrating multiple constraints as puzzle complexity increases (detailed analysis in App. E).

Can LLMs Provide Good Explanations for Solutions? We conducted a detailed evaluation of o1-preview's explanation capabilities on a subset of 20 puzzles (difficulty distribution of these puzzles in App. D, performance analysis in App. E). Three expert evaluators (details on survey participants in App. F) assessed the quality of o1's explanations using a structured survey, with each explanation being evaluated by two assessors. The evaluation surveyed the correctness of the final solution, comments on the overall explanation, and a three-point Likert scale (Yes, Maybe, No) assessing three key dimensions: (1) Justification (the explanation provides justification for the solution), (2) Clarity (the explanation is easy to follow and detailed), and (3) Educational Value (the explanation promotes understanding of Sudoku solving strategies). For each of the 20 puzzles, two evaluators independently rated o1's explanations across these three dimensions. This resulted in a total of 120 individual assessments (20 puzzles × 3 questions × 2 evaluators). We measured inter-annotator agreement using weighted Kappa and obtained 0.6 for educational value, 0.6 for justification, and 0.4 for clarity, which suggests moderate to good agreement.
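These agreement figures can be reproduced with scikit-learn's implementation of Cohen's kappa. The paper says only "weighted Kappa"; quadratic weights over the ordinal No/Maybe/No scale and the integer encoding below are our assumptions:

```python
# Sketch of the inter-annotator agreement computation; weighting scheme and
# label encoding are assumptions (the paper does not specify them).
from sklearn.metrics import cohen_kappa_score

LIKERT = {"No": 0, "Maybe": 1, "Yes": 2}

def weighted_kappa(rater_a, rater_b):
    """rater_a, rater_b: parallel lists of 'Yes'/'Maybe'/'No' ratings."""
    a = [LIKERT[r] for r in rater_a]
    b = [LIKERT[r] for r in rater_b]
    return cohen_kappa_score(a, b, labels=[0, 1, 2], weights="quadratic")
```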
While o1-preview achieved correct solutions in 65% of cases, the evaluation of its explanations across the 20 puzzles revealed significant limitations. Only 5% of the responses received a "Yes" rating indicating that the explanation indeed justifies the solution (52.5% "Maybe", 42.5% "No"); while the model identified correct digit placements, it rarely justified why those placements were chosen. The clarity dimension scored poorly, with 7.5% "Yes" responses (32.5% "Maybe", 60% "No") and reported issues regarding logical progression and inconsistent terminology. Educational value proved to be the weakest dimension, with just 2.5% "Yes" responses (52.5% "Maybe", 45% "No"), as explanations failed to convey generalizable solving strategies. The qualitative feedback reinforced these findings: evaluators consistently noted that explanations lacked systematic progression, failed to justify moves, misused basic terminology, and poorly articulated the solution path. These results highlight a stark contrast between o1-preview's ability to solve puzzles and its capacity to explain its reasoning process effectively.

5 Conclusions and Future Work
Our analysis of the performance of five LLMs has found that, with the exception of OpenAI o1-preview, they are not currently capable of solving 6×6 Sudoku puzzles. OpenAI's LLM fares much better than the other LLMs as a solver, but does not provide explanations that may shed light on what deductions may be used to solve a puzzle. We argue that improving explanation generation for multi-step reasoning processes is a promising area of research, where the ability of LLMs to generate easy-to-understand language has great potential. The ability to explain multi-step reasoning is of even greater value for decision processes that have unique sets of constraints and are much harder to solve and explain. It is therefore justified to expect that LLMs may need the help of tools based on different forms of reasoning. We are particularly interested in exploring approaches that combine LLMs with logic-based reasoning, as implemented in SMT solvers (e.g., Z3 or CVC5 (Barbosa et al., 2022)) and proof assistants like Lean (de Moura et al., 2015) or Isabelle/HOL (Nipkow et al., 2024). Explanations produced by these tools are usually reliable and detailed, but are notoriously hard for humans to parse and digest. We expect LLMs to be able to interpret those explanations in ways accessible to the non-specialist and with levels of detail that match the needs of the human user.

6 Limitations
Although this study examines the capabilities of LLMs in solving and explaining 6×6 Sudoku puzzles, there are several limitations.
1. Dataset Scope: Although we generated 2,293 puzzles using the Z3 solver, the subset used for human and o1-preview comparison was limited to 100 puzzles for solutions and 20 puzzles for explanations. This small experiment was enough to appreciate the inability of o1 to come up with satisfying explanations for its solutions. Given that our subsets contained puzzles of varying difficulty and that results were consistently deficient, we believe that we have enough evidence to assert that this is an area where cutting-edge LLMs struggle. However, a larger-scale study could strengthen these findings and provide a more comprehensive view of the models' overall capabilities and performance under different difficulty levels.
2. Puzzle Complexity: We evaluated the models on 6×6 Sudoku puzzles, which, while requiring some non-trivial reasoning, are simpler than the more commonly studied 9×9 puzzles. | https://arxiv.org/abs/2505.15993v1 |
The limited complexity of the 6×6 grid may not fully capture the range of reasoning challenges that LLMs might encounter in more difficult puzzles, potentially underestimating the models' limitations in handling more complex tasks.
3. Impact of Fine-Tuning: Our study was focused on evaluating the capacity of LLMs to produce solutions and explanations out of the box, without any fine-tuning. Of particular interest was to study the disconnect between o1's ability to solve puzzles vs. its inability to provide consistent, faithful explanations for its reasoning process. While fine-tuning models was out of scope for this work, it is possible that conclusions could differ if any training were to be performed.

Acknowledgments
Ashutosh Trivedi holds the position of Royal Society Wolfson Visiting Fellow and acknowledges the support of the Wolfson Foundation and the Royal Society for this fellowship.

References
Haniel Barbosa, Clark W. Barrett, Martin Brain, Gereon Kremer, Hanna Lachnitt, Makai Mann, Abdalrhman Mohamed, Mudathir Mohamed, Aina Niemetz, Andres Nötzli, Alex Ozdemir, Mathias Preiner, Andrew Reynolds, Ying Sheng, Cesare Tinelli, and Yoni Zohar. 2022. cvc5: A versatile and industrial-strength SMT solver. In Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2022), pages 415–442. LNCS 13243.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, and 12 others. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.
Yew Ken Chia, Vernon Toh, Deepanway Ghosal, Lidong Bing, and Soujanya Poria. 2024. PuzzleVQA: Diagnosing multimodal reasoning challenges of language models with abstract visual patterns. In Findings of the Association for Computational Linguistics: ACL 2024, pages 16259–16273, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Leonardo de Moura and Nikolaj Bjørner. 2011. Satisfiability modulo theories: Introduction and applications. Communications of the ACM, 54(9):69–77.
Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris van Doorn, and Jakob von Raumer. 2015. The Lean theorem prover (system description). In Conference on Automated Deduction (CADE-25), pages 378–388.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, and 82 others. 2024. The llama 3 herd of models. CoRR, abs/2407.21783.
Panagiotis Giadikiaroglou, Maria Lymperaiou, Giorgos Filandrianos, and Giorgos Stamou. 2024a. Puzzle solving using reasoning of large language models: A survey. Preprint, arXiv:2402.11291.
Panagiotis Giadikiaroglou, Maria Lymperaiou, Giorgos Filandrianos, and Giorgos Stamou. 2024b. Puzzle solving using reasoning of large language models: A survey. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 11574–11591, Miami, Florida, USA. Association for Computational Linguistics.
Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. 2018. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pages 80–89.
Jie Huang and Kevin Chen-Chuan Chang. 2023. Towards reasoning in large language models: A survey. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1049–1065, Toronto, Canada. Association for Computational Linguistics.
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. CoRR, abs/2310.06825.
Albert Qiaochu Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, and Guillaume Lample. 2022. Draft, sketch, and prove: Guiding formal theorem provers with informal proofs. ArXiv, abs/2210.12283.
Yinghao Li, Haorui Wang, and Chao Zhang. 2024. Assessing logical puzzle solving in large language models: Insights from a minesweeper case study. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 59–81, Mexico City, Mexico. Association for Computational Linguistics.
Gary McGuire, Bastian Tugemann, and Gilles Civario. 2014. There is no 16-clue sudoku: Solving the sudoku minimum number of clues problem via hitting set enumeration. Experimental Mathematics, 23(2):190–217.
Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, and Mehrdad Farajtabar. 2024. Gsm-symbolic: Understanding the limitations of mathematical reasoning in large language models. Preprint, arXiv:2410.05229.
Chinmay Mittal, Krishna Kartik, Parag Singla, and 1 others. 2024. Puzzlebench: Can llms solve challenging first-order combinatorial reasoning problems? arXiv preprint arXiv:2402.02611.
Tobias Nipkow, Lawrence C. Paulson, and Markus Wenzel. 2024. Isabelle/HOL: A Proof Assistant for Higher-Order Logic. Springer.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730–27744. Curran Associates, Inc.
Morgane Rivière, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, Anton Tsitsulin, and 80 others. 2024. Gemma 2: Improving open language models at a practical size. CoRR, abs/2408.00118.
J. Rosenhouse and L. Taalman. 2011. Taking Sudoku Seriously: The Math Behind the World's Most Popular Pencil Puzzle. Oxford University Press.
Miles Turpin, Julian Michael, Ethan Perez, and Samuel R. Bowman. 2024. Language models don't always say what they think: unfaithful explanations in chain-of-thought prompting. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA. Curran Associates Inc. | https://arxiv.org/abs/2505.15993v1 |
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2024. Chain-of-thought prompting elicits reasoning in large language models. In Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Red Hook, NY, USA. Curran Associates Inc.
Huajian Xin, Daya Guo, Zhihong Shao, Zhizhou Ren, Qihao Zhu, Bo Liu (Benjamin Liu), Chong Ruan, Wenda Li, and Xiaodan Liang. 2024. Deepseek-prover: Advancing theorem proving in llms through large-scale synthetic data. ArXiv, abs/2405.14333.
Zhangyue Yin, Qiushi Sun, Qipeng Guo, Zhiyuan Zeng, Xiaonan Li, Junqi Dai, Qinyuan Cheng, Xuanjing Huang, and Xipeng Qiu. 2024. Reasoning in flux: Enhancing large language models reasoning through uncertainty-aware adaptive guidance. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2401–2416, Bangkok, Thailand. Association for Computational Linguistics.

A Puzzle difficulty rubric
1. Easy (ER 1.0-1.2): Requires only hidden singles in blocks, making up 73.4% of the dataset. These puzzles can be solved using basic scanning techniques.
2. Medium (ER 1.5): Requires hidden singles in rows/columns, comprising 20.1% of puzzles. These puzzles need slightly more advanced pattern recognition.
3. Hard (ER 1.7-2.5): No candidates needed but requires more complex deduction, representing 1.6% of the dataset.
4. Fiendish (ER 2.6-6.0): Requires writing down candidates and advanced techniques, making up 3.4% of puzzles.
5. Diabolical (ER 6.2+): Requires forcing chains and sophisticated solving strategies, comprising 1.5% of the dataset.

[Figure 2: Distribution of Sudoku puzzle difficulty across the dataset (N = 2,293). Each category corresponds to specific solving techniques required, from basic hidden singles to complex forcing chains.]

B Prompt Details
Task Description: You are given an incomplete 6×6 Sudoku grid. The objective is to fill in the empty cells (represented by 0s) with numbers from 1 to 6 while following these rules:
Row Constraint: Each number from 1 to 6 must appear exactly once in every row.
Column Constraint: Each number from 1 to 6 must appear exactly once in every column.
Box Constraint: Each number from 1 to 6 must appear exactly once in each of the six 2×3 boxes.
Important: Do not use any code to solve this puzzle. Solve this puzzle using logical reasoning and only provide the final grid.
Input Format: A 6×6 grid is represented by six lines of input. Each line contains six space-separated integers. The number 0 indicates an empty cell that you need to fill, while numbers 1 to 6 represent pre-filled cells.
Output Format: Output the completed 6×6 grid, replacing the 0s with the correct numbers from 1 to 6, such that all the constraints are satisfied.
Problem to Solve:
0 0 1 2 0 0
0 0 0 0 0 0
0 0 2 0 0 4
0 0 0 6 0 5
0 0 0 0 3 0
4 0 0 0 0 6
Provide the completed 6×6 grid only

C Experiment Details
The experiments were conducted using the following configurations:
1. Zero-shot Prompting: For all experiments, we employed zero-shot prompting to evaluate the models' ability to solve 6×6 Sudoku puzzles without additional task-specific training or prior context.
2. Temperature and top-p: All models were run with a fixed temperature of 0 to ensure deterministic outputs across multiple runs. Additionally, we use a top-p sampling value of 0.5 for all models (see the sketch after this list).
3. Infrastructure: Gemma-2-9B-Instruct was run on GroqCloud. The Llama-3.1-70B-Instruct, Llama-3.1-8B-Instruct, and Mistral-7B-Instruct-v0.3 models were accessed and run via the Hugging Face Chat API. o1-preview was used via OpenAI's user interface.
4. Accuracy Evaluation: The number of pre-filled cells (givens) was also considered when calculating row-, column-, box-, and cell-wise accuracy. | https://arxiv.org/abs/2505.15993v1 |
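As a hedged illustration, items 1-2 correspond to a generation call along the following lines, shown here as a local Hugging Face transformers invocation. In the paper most models were queried through hosted services (GroqCloud, the Hugging Face Chat API, OpenAI's UI), so this local form is an assumption, and `puzzle_prompt` stands in for the App. B text:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

puzzle_prompt = "..."  # placeholder: the App. B task description plus the grid
chat = tok.apply_chat_template([{"role": "user", "content": puzzle_prompt}],
                               tokenize=False, add_generation_prompt=True)
inputs = tok(chat, return_tensors="pt").to(model.device)
# Temperature 0 corresponds to greedy decoding (do_sample=False); with
# sampling disabled, the stated top-p of 0.5 has no effect on the output.
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```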
D Difficulty Distribution in 20-Puzzle Subset
For our detailed explanation analysis, we selected 20 puzzles from the larger dataset, ensuring representation across different difficulty levels. ?? shows the distribution of these puzzles across the Explainer Rating (ER) categories. This distribution was intentionally designed to be more balanced than the original dataset to ensure sufficient coverage of higher difficulty levels while maintaining representation of common puzzle types. The selection allows us to evaluate o1-preview's explanation capabilities across the full spectrum of puzzle complexity.

E o1-preview Performance Analysis by Difficulty Level and Evaluation Metric
Figure 3 presents a detailed heatmap analysis of o1-preview's solving performance across the surveyed puzzles, revealing three key patterns:
• Perfect performance on simpler puzzles: For Easy (ER 1.0-1.2) and Medium (ER 1.5) difficulty levels, o1-preview achieves 100% accuracy across all metrics.
• Progressive degradation with difficulty: Performance declines notably for harder puzzles, with success rates dropping to 40% for complete solutions in Diabolical puzzles (ER 6.2+).
• Consistent metric hierarchy: Cell-wise accuracy remains highest across all difficulty levels (57-100%), while complete solution rates show the steepest decline (40-100%), suggesting the model struggles more with global constraints than local patterns.

[Figure 3: Heatmap visualization of o1-preview's performance across different difficulty levels and evaluation metrics for the 20-puzzle subset. Success rate (%) by metric and difficulty, recovered from the heatmap:]

                 Easy      Medium   Hard       Fiendish    Diabolical
                 (1.0-1.2) (1.5)    (1.7-2.5)  (2.6-6.0)   (6.2+)
                 (N=4)     (N=2)    (N=2)      (N=7)       (N=5)
Cell-wise        100.0     100.0    76.4       71.4        57.2
Row-wise         100.0     100.0    58.3       59.5        40.0
Column-wise      100.0     100.0    50.0       57.1        40.0
Box-wise         100.0     100.0    58.3       57.1        40.0
Full Solution    100.0     100.0    50.0       57.1        40.0

F Survey Participants and Expertise
Our Likert survey involved three participants from our institution: one faculty member and two graduate students, all of whom were connected to the research project. While their experience with Sudoku varied, all participants had a solid understanding of Sudoku rules and demonstrated experience in solving puzzles across different difficulty levels. | https://arxiv.org/abs/2505.15993v1 |
arXiv:2505.16000v2 [cs.CL] 24 May 2025

Leveraging Online Data to Enhance Medical Knowledge in a Small Persian Language Model

1st Mehrdad Ghassabi, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran, m.ghassabi@eng.ui.ac.ir
2nd Pedram Rostami, School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran, pedram.rostami@ut.ac.ir
3rd Hamidreza Baradaran Kashani, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran, hrb.kashani@eng.ui.ac.ir
4th Amirhossein Poursina, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran, Amirhosseinpoorsina9@gmail.com
5th Zahra Kazemi, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran, zhrakazemi@mehr.ui.ac.ir
6th Milad Tavakoli, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran, m.tavakoli@mehr.ui.ac.ir

Abstract—The rapid advancement of language models has demonstrated the potential of artificial intelligence in the healthcare industry. However, small language models struggle with specialized domains in low-resource languages like Persian. While numerous medical-domain websites exist in Persian, no curated dataset or corpus has been available—making ours the first of its kind. This study explores the enhancement of medical knowledge in a small language model by leveraging accessible online data, including a crawled corpus from medical magazines and a dataset of real doctor-patient Q&A pairs. We fine-tuned a baseline model using our curated data to improve its medical knowledge. Benchmark evaluations demonstrate that the fine-tuned model achieves improved accuracy in medical question answering and provides better responses compared to its baseline. This work highlights the potential of leveraging open-access online data to enrich small language models in medical fields, providing a novel solution for Persian medical AI applications suitable for resource-constrained environments.

Index Terms—Persian medical question answering, small language model, medical language models, data crawling

I. INTRODUCTION
The advent of the transformer architecture, as introduced in the groundbreaking paper "Attention is All You Need" [1], has catalyzed a rapid evolution in the field of natural language processing (NLP). This innovation has led to the development of increasingly sophisticated language models that leverage attention mechanisms to understand and generate human language with remarkable accuracy. As a result, the integration of artificial intelligence (AI) into various domains has surged, particularly in the medical field, where AI-driven solutions are being employed to enhance diagnostic accuracy, patient care, and administrative efficiency.
Despite the vast amount of research and development dedicated to English medical language models, such as Med-PaLM [2] [3] and others, there remains a significant disparity in resources available for non-English languages, particularly Persian. To the best of our knowledge, the only existing Persian medical language model, Sina-BERT [4], is a closed-source solution, limiting its accessibility and adaptability for further research and application. This gap underscores the urgent need for open-source resources that cater specifically to the Persian-speaking medical community. However, this gap stems from underutilization rather than a lack of raw material.
Persian-language medical forums (e.g., drhast, doctor-yab) and authoritative online magazines (e.g., hidoctor, niniban) host vast amounts of expert-curated content and real-world patient-doctor interactions. These sources—if systematically crawled, cleaned, and structured—could serve as valuable resources for training a domain-specific Persian language model. Moreover, the development of small language models is particularly crucial in the medical domain due to privacy concerns. These models can be optimized to run on local devices, ensuring that sensitive patient data remains secure and confidential, which is a paramount consideration in healthcare settings. | https://arxiv.org/abs/2505.16000v2 |
However, the unavailability of appropriate medical corpora and datasets in Persian has hindered progress in this area, impeding the creation of robust language models that can effectively address the linguistic and cultural nuances of the Persian-speaking population. In response to these challenges, we present a novel approach with our model, Gaokerena¹, which fine-tunes a baseline model, aya-expanse-8b [5], on crawled data comprising a Persian medical corpus and a free-form Farsi medical question-answering dataset. Importantly, our model, corpus, and datasets are all open-source, promoting transparency and collaboration within the research community. This development aims to enhance access to Persian medical information and support secure, efficient interactions within the healthcare environment. By bridging the existing gaps in resources and leveraging advancements in NLP, our work contributes to the growing landscape of AI in medicine, particularly for Persian-speaking users.
¹Our language model is named after Gaokerena, an ancient Persian mythological tree believed to possess healing properties and grant immortality to those who consume its fruit.
Our contributions in this work are as follows:
• Introducing the first open-source² Persian medical language model, achieving state-of-the-art results in comparison to other alternatives runnable on home devices.
• Introducing a Persian medical corpus obtained by crawling different websites.
• Introducing the first Persian free-form medical question-answering dataset obtained by crawling different websites.
• Translating the medical portion of the MMLU benchmark, which can be used to evaluate any Persian medical language model.
²We have published our model and datasets at https://huggingface.co/gaokerena

II. RELATED WORK
A. Related Works in English
Several notable projects have contributed to the development of medical language models, employing various strategies to enhance their performance and applicability in healthcare. ChatDoctor [6], which is the most similar work to ours, represents a notable initiative focused on developing a medical language model. The team behind ChatDoctor sourced its training data from HealthcareMagic and its test data from iCliniq, compiling a total of more than 200,000 free-form question-answering pairs from these online platforms. They then curated the dataset by filtering answers based on their length, resulting in a final collection of 100,000 high-quality pairs. Using this dataset, they fine-tuned a LLaMA model [7] to create a system capable of delivering accurate and contextually relevant medical information. Furthermore, ChatDoctor leveraged a retrieval-augmented generation (RAG) approach, which allowed the model to access and integrate external knowledge more effectively, thereby enhancing its overall performance.
Meerkat [8] is another significant contribution in this field. This project involved extracting chains of thought from medical textbooks and fine-tuning a language model using this data, alongside other supplementary datasets. By focusing on the reasoning processes involved in medical decision-making, Meerkat aimed to create a model that not only provides information but also mimics the cognitive processes of healthcare professionals, thereby supporting more nuanced and informed interactions.
MedMobile [9] represents yet another advancement in the realm of small medical language models. This work fine-tuned the Phi-3-mini model [10] using a combination of synthetic and human-generated datasets, enabling it to achieve optimal performance tailored for mobile applications in the medical domain. By focusing on the specific requirements of mobile users, MedMobile sought to deliver a model that is both efficient and effective, ensuring accessibility to high-quality medical information on the go.

B. Related Works in Persian
As previously mentioned, there has been limited research focused on Persian medical language models, highlighting a significant gap in resources for the Persian-speaking medical community. Furthermore, existing works on Persian medical question-answering systems are entirely closed-source regarding their datasets, models, and codebases. This lack of public resources leaves the field largely underexplored, presenting researchers with an almost blank slate to build upon. On the other hand, all of these efforts have primarily concentrated on extractive solutions, which aim to retrieve relevant information from predefined sources, rather than employing generative approaches capable of producing context-aware responses.
Perhaps the most notable effort in this area is Sina-BERT [4], which involved training a BERT model [11] using a crawled corpus alongside Persian annotated datasets specifically developed for various tasks, including medical question answering, medical sentiment analysis, and medical question retrieval. Sina-BERT is the most similar work to ours among Persian-language-focused efforts; however, it differs in that it uses a BERT model—an encoder-based language model—as its baseline. This choice limits its capability for generative AI tasks, as BERT is primarily designed for understanding and extracting information rather than generating answers.
Another notable work is the Persian Medical Question Answering System developed by H. Veisi et al. [12]. Their system is structured around three main modules: question processing, document retrieval, and answer extraction. The question processing module is responsible for analyzing and refining user queries, the document retrieval module locates relevant medical documents from predefined data, and the answer extraction module identifies and extracts the most suitable answers from the retrieved content.
Similar to these two works, L. Darabi [13] used models like Pars-BERT [14] to retrieve relevant answers. Her approach involves finding similar questions to handle repeated queries and employs strict and lenient evaluation strategies for accurate or approximate answers. Additionally, classification methods and Named Entity Recognition (NER) are used to improve answer relevance by categorizing questions and identifying medical entities like drug and disease names.

III. BASELINE MODEL
We have chosen aya-expanse-8b as our baseline model primarily due to the lack of open-source Persian medical language models, which necessitates the use of a general-purpose language model. While there are several multilingual options available, including aya-expanse [5], Gemma2 [15], Qwen2 [16], and PersianMind [17], we have determined that aya-expanse is the most suitable choice for our needs. One key reason is that the training data for the other models predominantly consists of non-Persian languages, leading to biases that may result in the generation of non-Persian characters, even when we explicitly instruct the model to use only Persian. In contrast, aya-expanse demonstrates a robust understanding of Persian grammar and produces grammatically rich Persian text, making it a more reliable option for our research. | https://arxiv.org/abs/2505.16000v2 |
Furthermore, if we merge our updated parameters into aya-vision [18], another model of the aya family, instead of aya-expanse, we gain the capability to incorporate medical images such as MRIs and CT scans as inputs, thereby enhancing our model's applicability to the medical domain.

IV. DATA
A. Corpus
As previously mentioned, there is a notable absence of publicly available Persian medical corpora specifically collected for training machine learning models. This lack of a dedicated Persian medical corpus poses a significant challenge for researchers and developers aiming to create effective models for medical applications in the Persian language. Without the high-quality, domain-specific textual data necessary for training, these efforts may be hindered, ultimately impacting the development of advanced medical technologies and solutions tailored for Persian-speaking populations. To provide further insight into this issue, we have compiled a comprehensive corpus containing approximately 90 million tokens and about 100,000 articles. I. García Ferrero et al. [19] collected medical corpora dedicated to four languages (English, French, Spanish, and Italian), which can be compared to ours, as shown in Table I. The accompanying Figure 1 illustrates the share of each magazine within our corpus, effectively highlighting the diversity of sources and underscoring the need to address gaps in available resources to foster innovation and improve health-related applications.

TABLE I: Comparison of our corpus with corpora collected by I. García Ferrero et al.

Language   No. tokens   Collected by
English    1.1B         I. García Ferrero et al.
Spanish    950M         I. García Ferrero et al.
French     675M         I. García Ferrero et al.
Italian    143M         I. García Ferrero et al.
Persian    90M          us

[Fig. 1: Our corpus resources]

B. Dataset
The collection of a real-world doctor-patient question-answering dataset is crucial for enhancing the capabilities of language models in the healthcare domain. Such a dataset allows models to learn valuable information derived from authentic interactions between healthcare providers and patients. By analyzing these real-world exchanges, language models can grasp the nuances of medical terminology, patient concerns, and the context surrounding healthcare inquiries. Furthermore, this dataset equips models with the ability to learn not just the factual content of responses but also the appropriate structure and tone for answering questions. This dual learning process is essential, as it enables the model to generate accurate, empathetic, and contextually relevant responses, ultimately improving patient communication and support in medical environments. In this context, Yang Liu [20] highlights several real-world doctor-patient question-answering datasets in his survey; a comparison of these datasets with ours can be found in Table II. In an era where technology increasingly aids healthcare, a robust doctor-patient dataset stands as a foundational element in training models that can effectively contribute to better healthcare delivery.

TABLE II: Comparison of our dataset with others

Dataset name            Language            No. records   Collected by
ChatDoctor              English             100K          Yunxiang Li et al. [6]
CMtMedQA                Chinese             68K           Songhua Yang et al. [21]
DISC-Med-SFT            Chinese             465K          Zhijie Bao et al. [22]
HuatuoGPT-sft-data-v1   Chinese             226K          Hongbo Zhang et al. [23]
Huatuo-26M              Chinese             26M           Jianquan Li et al. [24]
MedDialog               Chinese & English   3.66M         Guangtao Zeng et al. [25]
Medical Meadow          English             160K          Tianyu Han et al. [26]
MF3QA                   Persian             20K           us

In our research, we crawled more than 180,000 question-answer pairs from Persian medical forums, employing both manual and automatic filtering methods to refine the dataset through a laborious cleaning process. This approach is similar to the work done by Yunxiang Li et al. [6] in their article on the ChatDoctor medical language model, where they extracted data from English medical forums. Notably, Yunxiang Li discarded about half of the question-answer pairs based on the length of the answers, as shorter responses are generally inadequate for training a model and can lead the model to learn to provide brief answers. However, we faced a greater challenge: Persian doctors tend to provide much shorter answers compared to their English counterparts, so we had to discard over 80% of our question-answer records to ensure quality and relevance for our training purposes. As shown in Figure 2, to create our dataset we utilized patient-doctor interactions from the drhast and niniban platforms for the training split. It is important to note that drhast does not provide all of its doctor-patient interaction records on its site; it only offers access to the last 2,000 records. Additionally, each record is linked to 100 related records, complicating the crawling process. To address this challenge, we treated their data as a graph and performed a breadth-first search, which took about two weeks to extract 120,000 records out of a total of 200,000. For the test set, we used the doctor-yab and isovisit sites, ensuring diversity by translating the K-QA question-answering dataset [27] and appending it to our test split. This comprehensive approach not only enriched our dataset but also underscored the importance of real-world doctor-patient interactions in training effective language models.

[Fig. 2: MF3QA resources] | https://arxiv.org/abs/2505.16000v2 |
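The graph-style crawl just described lends itself to a textbook breadth-first search. The sketch below is illustrative only: the seed URLs and CSS selectors are hypothetical placeholders (the forum's real markup is not documented in the paper), and rate limiting and error handling are omitted:

```python
from collections import deque
import requests
from bs4 import BeautifulSoup

def extract_qa_pair(page):
    q = page.select_one(".question")        # hypothetical selector
    a = page.select_one(".doctor-answer")   # hypothetical selector
    return (q.get_text(strip=True) if q else "",
            a.get_text(strip=True) if a else "")

def extract_related_links(page):
    return [a["href"] for a in page.select("a.related[href]")]  # hypothetical

def bfs_crawl(seed_urls, max_records=120_000):
    """Traverse record pages as a graph: the last ~2,000 records seed the
    queue, and each page links to up to ~100 related records."""
    seen, queue, records = set(seed_urls), deque(seed_urls), []
    while queue and len(records) < max_records:
        url = queue.popleft()
        page = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
        records.append(extract_qa_pair(page))
        for link in extract_related_links(page):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return records
```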
V. TRAINING
A. Fine tuning
We fine-tuned the 8-billion-parameter checkpoint of the aya-expanse model on 60% of our corpus, focusing on minimizing resource usage. To ensure an efficient fine-tuning process, we employed gradient checkpointing and a small batch size of 2, reducing memory requirements during training. Additionally, we used 16 gradient-accumulation steps, effectively increasing the overall batch size to 32 and enabling stable training dynamics. To further reduce the memory usage of our fine-tuning process, we leveraged Low-Rank Adaptation (LoRA) [28] to significantly reduce the number of trainable parameters. Specifically, we implemented a rank of 8, an alpha value of 16, and a dropout rate of 5%, applying the LoRA weights to all trainable parameters of each transformer layer. To further optimize the process, we employed efficient tokenization and memory-aware training techniques. The tokenization process divided the input text into manageable token sequences, ensuring consistent input and label structures by truncating, padding, and handling overflowed tokens to maintain contextual integrity within a fixed context length. This streamlined preparation, coupled with the LoRA-based fine-tuning, was further enhanced by Flash Attention 2 [29]. By minimizing memory overhead, Flash Attention 2 allowed us to handle longer context lengths and larger batch sizes efficiently, enabling effective fine-tuning for the next-token prediction objective while balancing computational efficiency with model performance. | https://arxiv.org/abs/2505.16000v2 |
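These hyperparameters map directly onto the Hugging Face PEFT and transformers APIs. A minimal sketch follows, assuming the aya-expanse-8b checkpoint on the Hub and a conventional set of target modules (the paper applies LoRA to "all trainable parameters of each transformer layer" without naming modules, so the list below is our assumption):

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "CohereForAI/aya-expanse-8b",               # assumed Hub id
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",    # requires the flash-attn package
)
model.gradient_checkpointing_enable()

lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

args = TrainingArguments(
    output_dir="gaokerena-ft",       # placeholder
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,  # effective batch size of 32
    weight_decay=0.1,                # raised to 0.5 for instruction tuning
    bf16=True,
)
```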
B. Instruction tuning
Following the fine-tuning stage, we performed instruction tuning on the fine-tuned model using our crawled free-form Farsi question-answering (MF3QA) dataset. This stage utilized the default template of the aya-expanse model and retained the same techniques and almost the same hyperparameters as the fine-tuning stage, with a few adjustments. Specifically, we employed the LoRA method with a rank of 2, an alpha value of 2, and a dropout rate of 0.4, and increased the weight decay to 0.5 instead of 0.1. The instruction tuning was conducted for a single epoch, enabling the model to better understand and generate responses tailored to Farsi question-answering tasks. This targeted optimization further refined the model's capabilities, enhancing its effectiveness on our specific dataset.

C. Carbon Footprint
The carbon footprint of our model optimization—including both fine-tuning and instruction tuning—was estimated based on hardware specifications and operational duration. The process ran for a combined total of 19 hours on a single NVIDIA A100 PCIe 40GB GPU hosted in Google Cloud Platform's asia-east1 region. Assuming a typical power consumption of 250 watts per GPU, the total energy used was 4.75 kWh (250 watts × 19 hours). Using the carbon intensity factor of the asia-east1 grid (0.56 kilograms of CO2 equivalent per kWh), this translates to 2.66 kilograms of CO2 equivalent emitted during the tuning process [30]. | https://arxiv.org/abs/2505.16000v2 |
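The estimate is simple arithmetic; spelled out here for reproducibility, using the same stated assumptions:

```python
gpu_power_kw = 0.250          # assumed draw of one A100 PCIe 40GB, in kW
hours = 19                    # fine-tuning + instruction tuning combined
kg_co2_per_kwh = 0.56         # GCP asia-east1 grid carbon intensity

energy_kwh = gpu_power_kw * hours              # 4.75 kWh
emissions_kg = energy_kwh * kg_co2_per_kwh     # ~2.66 kg CO2-eq
print(f"{energy_kwh:.2f} kWh -> {emissions_kg:.2f} kg CO2-eq")
```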
VI. RESULTS
In the absence of a publicly available Persian medical language model, we opted to evaluate our model against general-purpose language models to establish a baseline for performance. This comparison allows us to assess the efficacy of our specialized model in handling medical-related queries in Persian. Importantly, all models used for comparison were selected based on their suitability for small, runnable environments on home devices, addressing privacy concerns prevalent in the medical domain. Additionally, we compared our model with a pipeline alternative in our evaluation, which consists of a series of processes: first, a translator converts the user's query from Persian to English; next, this English query is input into an English medical language model; and finally, the response generated by the English model is translated back from English to Persian. By contrasting our model with both general-purpose language models and this pipeline alternative, we aim to demonstrate the advantages and specific capabilities of our small Persian medical language model in addressing the unique challenges of medical language processing within the Persian language context.³
Moreover, we also address the significant challenge posed by the lack of available Persian benchmarks for medical language processing. To overcome this challenge, we translated the medical portion of the Massive Multitask Language Understanding (MMLU) dataset [31] into Persian and supplemented it with data from the Iranian Basic Medical Sciences Entrance Exam (IBMSEE).
³You can see the detailed results at https://github.com/Mehrdadghassabi/Gaokerena

A. Comparison with general-purpose language models
As you can see in Table III, our model achieved remarkable success by surpassing the passing score of the Iranian Basic Medical Sciences Entrance Exam, which stands at 36%, making it the first Persian language model with fewer than 8 billion parameters to pass this exam. Furthermore, our model demonstrated improvements on the translated MMLU dataset, achieving not only higher average scores but also excelling across most sub-categories, thereby showcasing its effectiveness in understanding and generating medical knowledge in the Persian language. Notably, the reason PersianMind has a low inference time, as shown in Table III, is that it tends to generate very short responses compared to other models, producing the end-of-sentence token much sooner.
In addition to our multiple-choice question-answering evaluation, we also utilized GPT-4o [32] as an evaluator for free-form question answering. We provided the test set from the MF3QA dataset to both the competing language model and our model. As shown in Figure 3, GPT-4o predominantly preferred the responses generated by our model over those from the other three language models tested. This indicates that our model delivers high-quality responses, as judged by an advanced language model.

TABLE III: Our model performance in comparison with general-purpose language models

Benchmark                         Gaokerena (ours)  aya-expanse-8b (baseline)  Qwen2.5  PersianMind
MMLU-anatomy (fa)                      48.14              40.74                 41.48      25.18
MMLU-medical-genetics (fa)             53.0               49.0                  52.0       34.0
MMLU-college-medicine (fa)             43.93              44.51                 43.35      20.23
MMLU-clinical-knowledge (fa)           55.47              52.07                 47.92      25.28
MMLU-professional-medicine (fa)        47.05              45.58                 43.01      23.89
MMLU-college-biology (fa)              47.22              45.14                 44.85      32.63
MMLU (avg)                             49.31              46.64                 45.17      25.89
IBMSEE Sept 2023                       38.69              34.52                 33.33      19.64
Number of parameters                   8B                 8B                    7.6B       6.8B
Inference time                         ≈10s               ≈8s                   ≈15s       ≈2s

[Fig. 3: Our model win rate against general-purpose language models]

B. Comparison with pipeline alternatives
As previously mentioned, one alternative to creating a Persian medical language model is a pipeline system. However, a major problem with pipeline systems is their speed; they exhibit high inference times because the output from one model must be fed into a second model, and then the output of the second model is processed again by the first model. This iterative process significantly hampers efficiency. To address the low speed of pipeline models, we have loaded all parameters—both those pertaining to the translators and the medical language model—simultaneously. Our experiments with models such as MedMobile [9] paired with gemma-2b-it [15] as translators, and MedMobile paired with parsinlu [33], [34] models, showed disappointing results, as evidenced by the poor performance displayed in Table IV.
Another significant issue with the pipeline alternative is its poor performance in accurately detecting and translating medical terms. This limitation poses a serious challenge, as precise terminology is crucial for effective communication in healthcare settings, where misunderstandings can have serious consequences for patient care and treatment outcomes. The underlying cause of this deficiency is likely that the translators employed in these pipeline systems have not been specifically developed for medical translation. Unlike general-purpose translation models, medical translation requires a nuanced understanding of specialized vocabulary, context, and the intricacies of medical language. Currently, there are no models tailored for medical translation in the Persian language, which means that existing systems are ill-equipped to handle the complexities of medical terminology. | https://arxiv.org/abs/2505.16000v2 |
As shown in Figure 4, these limitations have resulted in all pipeline alternatives achieving lower win rates against our model, Gaokerena. This performance gap highlights the inadequacies of current pipeline approaches in meeting the demands of medical translation, emphasizing the necessity of developing dedicated medical translator models that can effectively address these specific challenges.

[Fig. 4: Our model win rate against pipeline alternatives]

TABLE IV: Our model performance in comparison with pipeline alternatives

Benchmark                         Gaokerena (ours)  MedMobile+gemma2-2b-it  MedMobile+parsinlu
MMLU-anatomy (fa)                      48.14              14.07                 25.18
MMLU-medical-genetics (fa)             53.0               20.0                  35.0
MMLU-college-medicine (fa)             43.93              19.08                 27.17
MMLU-clinical-knowledge (fa)           55.47              27.54                 31.70
MMLU-professional-medicine (fa)        47.05              17.27                 33.82
MMLU-college-biology (fa)              47.22              18.75                 31.25
MMLU (avg)                             49.31              20.11                 30.99
IBMSEE Sept 2023                       38.69              24.40                 32.73
Number of parameters                   8B                 3.8B+2B               3.8B+1.2B+1.2B
Inference time                         ≈10s               ≈20s                  ≈30s

FUTURE RESEARCH
As mentioned in the baseline model section, we can merge our updated parameters into aya-vision instead of aya-expanse. This will allow us to incorporate medical images, such as MRIs and CT scans, as inputs to our model. However, it is important to note that our model's capabilities with multimodal queries are under-explored. Our future research plans involve exploring and enhancing our model's responses to queries that are accompanied by medical images.

REFERENCES
[1] Vaswani, Ashish, et al. "Attention is all you need." Advances in Neural Information Processing Systems 30 (2017).
[2] Singhal, Karan, et al. "Toward expert-level medical question answering with large language models." Nature Medicine (2025): 1-8.
[3] Singhal, Karan, et al. "Large language models encode clinical knowledge." Nature 620.7972 (2023): 172-180.
[4] Taghizadeh, Nasrin, et al. "SINA-BERT: a pre-trained language model for analysis of medical texts in Persian." arXiv preprint arXiv:2104.07613 (2021).
[5] Dang, John, et al. "Aya expanse: Combining research breakthroughs for a new multilingual frontier." arXiv preprint arXiv:2412.04261 (2024).
[6] Li, Yunxiang, et al. "Chatdoctor: A medical chat model fine-tuned on a large language model meta-ai (llama) using medical domain knowledge." Cureus 15.6 (2023).
[7] Touvron, Hugo, et al. "Llama: Open and efficient foundation language models." arXiv preprint arXiv:2302.13971 (2023).
[8] Kim, Hyunjae, et al. "Small language models learn enhanced reasoning skills from medical textbooks." arXiv preprint arXiv:2404.00376 (2024).
[9] Vishwanath, Krithik, et al. "MedMobile: A mobile-sized language model with expert-level clinical capabilities." arXiv preprint arXiv:2410.09019 (2024).
[10] Abdin, Marah, et al. "Phi-3 technical report: A highly capable language model locally on your phone." arXiv preprint arXiv:2404.14219 (2024).
[11] Koroteev, Mikhail V. "BERT: a review of applications in natural language processing and understanding." arXiv preprint arXiv:2103.11943 (2021).
[12] Veisi, Hadi, and Hamed Fakour Shandi. "A Persian medical question answering system." International Journal on Artificial Intelligence Tools 29.06 (2020): 2050019.
[13] Darabi, Leila. Medical Question Answering for Persian. Master's thesis, LIACS, Leiden University, 2024.
[14] Farahani, Mehrdad, et al. "Parsbert: Transformer-based model for Persian language understanding." Neural Processing Letters 53 (2021): 3831-3847.
[15] Team, Gemma, et al. "Gemma 2: Improving open language models at a practical size, 2024." URL https://arxiv.org/abs/2408.00118 (2024).
[16] Yang, An, et al. "Qwen2 Technical Report." arXiv preprint arXiv:2407.10671 (2024).
[17] Rostami, Pedram, Ali Salemi, and Mohammad Javad Dousti. "PersianMind: A cross-lingual Persian-English large language model." arXiv preprint arXiv:2401.06466 (2024).
[18] Dash, Saurabh, et al. "Aya Vision: Advancing the Frontier of Multilingual Multimodality." arXiv preprint arXiv:2505.08751 (2025).
[19] García-Ferrero, Iker, et al. "Medical mT5: an open-source multilingual text-to-text LLM for the medical domain." arXiv preprint arXiv:2404.07613 (2024).
[20] Liu, Yang, et al. "Datasets for large language models: A comprehensive survey." arXiv preprint arXiv:2402.18041 (2024).
[21] Yang, Songhua, et al. "Zhongjing: Enhancing the Chinese medical capabilities of large language model through expert feedback and real-world multi-turn dialogue." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 17. 2024.
[22] Bao, Zhijie, et al. "Disc-medllm: Bridging general large language models and real-world medical consultation." arXiv preprint arXiv:2308.14346 (2023).
[23] Zhang, Hongbo, et al. "Huatuogpt, towards taming language model to be a doctor." arXiv preprint arXiv:2305.15075 (2023).
[24] Wang, Xidong, et al. "Huatuo-26M, a Large-scale Chinese Medical QA Dataset." Findings of the Association for Computational Linguistics: NAACL 2025. 2025.
[25] Zeng, Guangtao, et al. "MedDialog: Large-scale medical dialogue datasets." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020.
[26] Han, Tianyu, et al. "MedAlpaca–an open-source collection of medical conversational AI models and training data." arXiv preprint arXiv:2304.08247 (2023).
[27] Manes, Itay, et al. "K-qa: A real-world medical q&a benchmark." arXiv preprint arXiv:2401.14493 (2024).
[28] Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models." ICLR 1.2 (2022): 3.
[29] Dao, Tri. "Flashattention-2: Faster attention with better parallelism and work partitioning, 2023." URL https://arxiv.org/abs/2307.08691 (2023).
[30] Lacoste, Alexandre, et al. "Quantifying the carbon emissions of machine learning." arXiv preprint arXiv:1910.09700 (2019).
[31] Hendrycks, Dan, et al. "Measuring massive multitask language understanding." arXiv preprint arXiv:2009.03300 (2020).
[32] Hurst, Aaron, et al. "Gpt-4o system card." arXiv preprint arXiv:2410.21276 (2024).
[33] Khashabi, Daniel, et al. "Parsinlu: a suite of language understanding challenges for Persian." Transactions of the Association for Computational Linguistics 9 (2021): 1147-1162.
[34] Kashefi, Omid. "MIZAN: a large Persian-English parallel corpus." arXiv preprint arXiv:1801.02107 (2018). | https://arxiv.org/abs/2505.16000v2 |
arXiv:2505.16002v1 [cs.CL] 21 May 2025

Causal Interventions Reveal Shared Structure Across English Filler–Gap Constructions
Sasha Boguraev¹, Christopher Potts², Kyle Mahowald¹
¹The University of Texas at Austin, ²Stanford University
{sasha.boguraev,kyle}@utexas.edu, cgpotts@stanford.edu

Abstract
Large Language Models (LLMs) have emerged as powerful sources of evidence for linguists seeking to develop theories of syntax. In this paper, we argue that causal interpretability methods, applied to LLMs, can greatly enhance the value of such evidence by helping us characterize the abstract mechanisms that LLMs learn to use. Our empirical focus is a set of English filler–gap dependency constructions (e.g., questions, relative clauses). Linguistic theories largely agree that these constructions share many properties. Using experiments based in Distributed Interchange Interventions, we show that LLMs converge on similar abstract analyses of these constructions. These analyses also reveal previously overlooked factors – relating to frequency, filler type, and surrounding context – that could motivate changes to standard linguistic theory. Overall, these results suggest that mechanistic, internal analyses of LLMs can push linguistic theory forward.

1 Introduction
Language models can generate and process utterances typically thought to require rich linguistic grammatical structure (Futrell et al., 2019; Wilcox et al., 2018; Manning et al., 2020; Hu et al., 2020), including much-studied syntactic constructions like long-distance filler–gap constructions (Wilcox et al., 2024). These results have been taken to challenge claims that these phenomena can be learned only with strong innate priors (Piantadosi, 2023; Futrell and Mahowald, 2025).
Despite the strong performance, questions remain as to whether models acquire syntax in ways that are posited by linguists to be human-like (e.g., acquiring rich grammatical abstraction and syntactic structure). Causal interpretability methods now make it possible to characterize the abstract mechanisms underlying neural networks (Vig et al., 2020; Finlayson et al., 2021; Geiger et al., 2021; Meng et al., 2022; Geiger et al., 2023; Wang et al., 2023). These methods have revealed that non-trivial linguistic syntactic structure is learned by models (Arora et al., 2024; Lasri et al., 2022; Finlayson et al., 2021; Mueller et al., 2022; Lakretz et al., 2019).
[Figure 1: Causal intervention overview. Panels show minimal pairs for an embedded wh-question (base: "I know that the man likes → him"; source: "I know who the man likes → him.") and a cleft (base: "It is clear the man likes → him"; source: "It is him the man likes → him."), along with a within-class intervention and evaluation (Embedded Wh-Q) and a cross-class intervention and evaluation (Cleft to Embedded Wh-Q), illustrating shared mechanisms. Here, we illustrate our methodology when we intervene within a class, transferring an embedded wh- filler–gap structure into a corresponding minimal pair that didn't previously have one. We then show intervening across classes, inserting a wh- filler–gap into a gap-less cleft sentence.]
But a key hypothesis in the history of linguistics is that seemingly different linguistic constructions can share underlying structure. | https://arxiv.org/abs/2505.16002v1 |
For instance, compare “I wonder what | https://arxiv.org/abs/2505.16002v1 |
the lion ate.” to “It was the gazelle that the lion ate.” The former is an embedded wh- clause and the latter is a cleft construction. These are distinct constructions but share something in common: both have a long-distance dependency with an extracted element, often specified with a linguistic trace: “I wonder [what]_t the lion ate ____t.” and “It was [the gazelle]_t that the lion ate ____t.” Thus, many linguistic theories predict common processing characteristics between these sentences (Fodor, 1989). On the other hand, there is also reason to expect wh- sentences to be quite different from clefts, since both wh- elements and clefts have idiosyncratic properties (Ross, 1967; Culicover, 1999; Ginzburg and Sag, 2001).
To tackle these questions, we take advantage of advances in large open-source models as well as advances in mechanistic interpretability, specifically the Causal Abstraction framework (Geiger et al., 2023) and Distributed Alignment Search (DAS; Geiger et al. 2024). Our resulting methodology gives us direct access to the abstract causal mechanisms learned by these models. By accessing these causal mechanisms, we can take a filler–gap mechanism learned on Construction A (e.g., wh- sentences), transfer it to Construction B (e.g., clefts), and see if we get predictable filler–gap behavior (see Figure 1). If we do, this would be strong evidence of underlying shared structure learned by the model.
Importantly, this method gives us a gradient measure of transfer. As such, we explore whether more similar constructions transfer more readily to each other; whether some constructions in general tend to serve as sources of transfer; whether mechanisms transfer across clauses; and whether transfer is greater when lexical items are shared across constructions (an effect predicted by the "lexical boost" in syntactic priming, whereby syntactic structures are primed more strongly when there is lexical overlap; Pickering and Branigan, 1998).
Ultimately, we find strong generalization in LMs across a range of filler–gap constructions, with effects observed at all positions within constructions. We observe a lexical boost: effects are stronger when lexical items match (e.g., the same animacy). Moreover, we identify source constructions whose underlying mechanisms generalize broadly, as well as sink constructions that consistently benefit from such transferred mechanisms. Finally, we provide evidence that such generalization does not seem to extend across clausal boundaries.
We claim these experiments make good on the promise that studying LMs can help us better understand linguistic structure and language learning in general by not just serving as proxies for data-driven learners, but by helping us develop linguistically interesting hypotheses (Futrell and Mahowald, 2025; Portelance and Jasbi, 2024; Potts, 2023).¹
¹Code will be released with the final version.

2 Filler–Gaps and Neural Models
Consider the following sentence:
(1) [The bagel]_t, I liked ____t.
The embedded clause, "I liked", seems incomplete, lacking an object. However, the sentence is grammatical, as the fronted entity "the bagel" is understood to be the object of the anteceding clause. Grammatical constructions of this nature are termed filler–gaps, due to constituents appearing as 'fillers' in non-canonical positions, | https://arxiv.org/abs/2505.16002v1 |
colloquially being said to leave a 'gap' at its canonical position. This grammatical family encompasses a wide range of common constructions including wh-questions, relative clauses, clefts, and more.
Filler–gap dependencies have long been a target of linguistic inquiry. They are believed to require sophisticated syntactic machinery, beyond simple surface statistics, since a word might appear linearly quite far from a word that it depends on for its meaning (Chomsky, 1957; Ross, 1967). They have been of interest in computational linguistics for the same reason: earlier models like n-gram models were fundamentally unable to handle structures over long distances.
Hence, filler–gaps have served as a common testbed for LMs' grammatical capacities. Wilcox et al. (2018) provided early positive evidence of RNNs' grammatical competence in English by comparing LMs' surprisals for gap and gapless continuations in the presence and absence of fillers. More recently, Ozaki et al. (2022) and Wilcox et al. (2024) have demonstrated LM sensitivities to linguistic constraints on these constructions. Kobzeva et al. (2023) found mixed results in Norwegian, a language known to have very different filler–gap structures and constraints than English.
There has been further work to measure the generalization capacities of LMs across filler–gap constructions. Lan et al. (2024) test models' knowledge of parasitic gaps and across-the-board movement, finding that unless the training data is supplemented with adequate examples, LMs struggle to learn these constructions from small corpora. Howitt et al. (2024) build on the methodology of Lan et al. (2024), training LSTMs on specific filler–gap constructions and evaluating LM performance on others, with results suggesting little generalization in LMs. Prasad et al. (2019) and Bhattacharya and van Schijndel (2020) further use a methodology based on psycholinguistic priming to explore filler–gap generalization in LMs, with the former finding evidence suggesting that LMs hierarchically organize relative clauses in representation space, and the latter finding general representations for filler–gaps which are shared across various constructions.

| Construction | Prefix | Filler | NC | Article | NP | Verb | Label |
|---|---|---|---|---|---|---|---|
| Emb. Wh-Question (Know-Class) | I know | who/that | | the | man | liked | ./him |
| Emb. Wh-Question (Wonder-Class) | I wonder | who/if | | the | man | liked | ./him |
| Matrix Wh-Question | | Who/"" | did | the | man | like | ?/him |
| Restrictive Relative Clause | The boy | who/and | | the | man | liked | was/him |
| Cleft | It was | the boy/clear | that | the | man | liked | ./the boy |
| Pseudo-Cleft | | Who/That | | the | man | liked | was/it |
| Topicalization | Actually, | the boy/"" | | the | man | liked | ./the boy |
| Subject-Verb Agreement | | The boy/boys | that | the | man | liked | is/are |
| Transitive/Intransitive Verbs | Last night/Yesterday | some/that | | | man/boy | ran/liked | ./him |

Table 1: Exemplar minimal pairs for each evaluated construction's single-clause, animate extraction variant. The filler/label combinations are used to evaluate whether the model is processing the construction correctly and whether our causal interventions are successful. The final two constructions are used as control conditions. For a full set of examples, including the multi-clause and inanimate extraction variants, see Appendix A.

These previous works show LMs can learn to process filler–gap constructions, but show more mixed results as to whether this processing | https://arxiv.org/abs/2505.16002v1 |
is shared across constructions. But most of this work has been behavioral, without exploring the model's underlying causal mechanisms. Our work fills this gap. We first uncover the causal mechanisms LMs learn to process various filler–gap dependencies, and then we measure to what extent these mechanisms generalize across different filler–gaps.

3 Methods
3.1 Data
Evaluated Constructions We focus our investigation on seven filler–gap constructions: embedded wh-questions with a finite complementizer (denoted as the know-class), embedded wh-questions with a non-finite complementizer (wonder-class), matrix-level wh-questions, restrictive relative clauses, clefts, pseudo-clefts, and topicalization. For each construction, we design sentential templates in the style of Arora et al. (2024), allowing us to sample a large number of minimal pairs differing in our targeted grammatical phenomenon.
We design four templates per construction, differing in the extracted object's animacy and in the number of clausal boundaries between the filler and the gap left by its extraction (one or two clauses). We manipulated animacy since changing animacy requires changing the key wh- element ("who" vs. "what"), but is not hypothesized to affect the sentence's structure. All our templates involve the extraction of a direct object from a verb phrase and all follow a general template, allowing cross-construction alignment by position. Our general template, as well as examples of animate extraction from a single-clause variant of each construction, are in Table 1.
Controls Our first control is the task of subject–verb number agreement (e.g., "The boy is", not "The boy are"). This task was selected because, relative to our constructions of interest, there is a similar distance between the subject and the verb. However, while subject–verb agreement can operate over long linear distances, it does not have the filler–gap property of our target constructions (as agreement is always between clausemate elements), and thus we hypothesize that it should not rely on the same mechanism.
The second control is the task of predicting a continuation after transitive or intransitive verbs. This task controls for the predicted label, ensuring that any generalization we find is meaningful, not merely due to heuristics related to the predicted labels. In order to maintain the distance between minimal contrast and prediction location, we have lexical items in faux-contrast at the FILLER, ARTICLE, and NP positions, such that there is no meaningful difference in the sampled items at those positions. | https://arxiv.org/abs/2505.16002v1 |
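As a concrete illustration of this templatic setup, here is a minimal Python sketch of sampling minimal pairs for one construction. The tiny lexicon and field layout are illustrative assumptions following the general PREFIX–FILLER–ARTICLE–NP–VERB scheme of Table 1, not the authors' actual generation code.

```python
import random

# Illustrative item lists; the paper's actual lexicon is not shown here.
NPS = ["man", "woman", "doctor", "nurse"]
VERBS = ["liked", "saw", "praised"]

def sample_know_class_pair(rng: random.Random) -> dict:
    """One minimal pair for the know-class embedded wh-question.

    base:   "I know that the man liked" -> expected continuation " him"
    source: "I know who the man liked"  -> expected continuation "."
    """
    np_, verb = rng.choice(NPS), rng.choice(VERBS)
    return {
        "base":         f"I know that the {np_} {verb}",
        "source":       f"I know who the {np_} {verb}",
        "base_label":   " him",
        "source_label": ".",
    }

pairs = [sample_know_class_pair(random.Random(i)) for i in range(200)]
```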
3.2 Distributed Alignment Search
To localize internal mechanisms used by LMs to process our constructions of interest, we use Distributed Alignment Search (DAS; Geiger et al. 2024; Wu et al. 2023). DAS is a supervised interpretability method that can be used to assess whether a given feature is encoded in a particular set of neural activations. We rely on the 1-dimensional variant of DAS used by Arora et al. (2024). The core intervention performed is

$$\mathbf{b} + (\mathbf{s}\mathbf{a}^{\top} - \mathbf{b}\mathbf{a}^{\top})\,\mathbf{a}$$

where $\mathbf{b} \in \mathbb{R}^{n}$ is a representation formed by the model when it processes a base example (right sides in Figure 1), and $\mathbf{s} \in \mathbb{R}^{n}$ is the corresponding representation formed when the model processes a source example (left sides in Figure 1). In our experiments, $\mathbf{b}$ and $\mathbf{s}$ are always the outputs of a Transformer block. Intuitively, this intervention defines a direction in the rotated feature space defined by the learned vector $\mathbf{a} \in \mathbb{R}^{n}$. This is a soft intervention targeting only the learned feature and preserving orthogonal dimensions of $\mathbf{b}$. In DAS, all LM parameters are kept frozen, and $\mathbf{a}$ is learned via a standard cross-entropy loss trained on interventions of the sort depicted in Figure 1. The goal of learning is to make the correct predictions under the intervention. For example, in the within-class intervention in Figure 1, we seek to learn an intervention that predicts a gap site (signaled by a period) even though the inputs correspond to a non-filler–gap case. The extent to which we can learn such an intervention provides the basis for assessing the hypothesis that the filler–gap dependency itself can be localized to the intervention site.
We chose to use DAS for two main reasons. First, Arora et al. (2024) demonstrate that, in a comparison among several interpretability methods, DAS consistently performed the best in finding causally efficacious features in syntactic tasks. Second, Wu et al. (2023) show that the feature alignments learned by DAS are robust and generalize strongly. | https://arxiv.org/abs/2505.16002v1 |
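To make the intervention concrete, here is a minimal PyTorch sketch of the 1-dimensional distributed interchange intervention described above. The actual experiments use the pyvene library (see Appendix B); the class and variable names here are illustrative, and the surrounding LM forward hooks and training loop are assumed elsewhere.

```python
import torch

class OneDimDAS(torch.nn.Module):
    """Soft intervention b + (s a^T - b a^T) a along a learned direction a."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.a = torch.nn.Parameter(torch.randn(hidden_size))

    def forward(self, b: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        # b, s: (..., hidden_size) outputs of the same Transformer block
        # on the base and source inputs, respectively.
        a = self.a / self.a.norm()  # keep the direction unit-length
        # Swap b's coordinate along a for s's, leaving the orthogonal
        # complement of a untouched.
        return b + ((s @ a) - (b @ a)).unsqueeze(-1) * a

# Training (sketch): freeze all LM parameters, patch the block output
# with intervened = OneDimDAS(d)(b, s), and minimize cross-entropy on
# the counterfactual (source-expected) label.
```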
Training We train interventions at each position from the FILLER onwards, and across every layer of our given LM. We use the pythia series of models (Biderman et al., 2023), a series of open-source, open-data LLMs. We run our experiments on the 1.4, 2.8, and 6.9 billion parameter models. We find qualitatively similar results for all sizes, reporting those of the 1.4b variant in the main text (results for the 2.8b and 6.9b variants are in Appendix H).
We evaluate two distinct categories of interventions: (1) single-source interventions, where for each of the $n$ constructions, $c_{i<n}$, the training dataset for DAS contains sentences sampled from the templates of $c_i$; and (2) leave-one-out construction interventions, where for each of the $n$ constructions, $c_{i<n}$, the training dataset contains sentences sampled from the templates of $c_{j \neq i}$ – that is, all constructions that are not 'left out'.
Evaluation For evaluation, we use the ODDS metric from Arora et al. (2024). This metric measures how much more likely a counterfactual label is after performing an intervention, with higher ODDS denoting a larger causal effect from the given intervention. Intuitively, it tells us: after intervention, how much more likely is the continuation expected based on the "source sentence" than the one naively expected based on the "base sentence". We measure the average ODDS at each position-layer pair across 400 sentences, sampled from the templates of each individual construction.
In cases of aggregation, we max-pool the average ODDS value across layers at each sentential position (we refer to this metric as MAX ODDS hereafter). We also normalize the MAX ODDS by the corresponding average MAX ODDS for the items present in the training set, with this normalization giving us a measure of how much the mechanisms used by a given set of constructions generalize to an evaluated construction, relative to how much they generalize to those they were trained on. We aggregate across layers by max-pooling ODDS because our causal methodology aims to localize syntactic features in the model, with the maximum value representing the most causally efficacious localization of the given features. | https://arxiv.org/abs/2505.16002v1 |
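A small sketch of how the ODDS metric and its MAX ODDS aggregation might be computed from next-token distributions is given below. The exact normalization is specified in Arora et al. (2024); this rendering of it, and the function names, are assumptions for illustration.

```python
import torch

@torch.no_grad()
def odds(base_logits, intervened_logits, y_source: int, y_base: int) -> float:
    """Odds of the source-expected label vs. the base-expected label,
    after the intervention relative to before it (one plausible reading
    of the metric; see Arora et al., 2024 for the exact definition)."""
    lp_base = torch.log_softmax(base_logits, dim=-1)
    lp_int = torch.log_softmax(intervened_logits, dim=-1)
    before = lp_base[y_source] - lp_base[y_base]
    after = lp_int[y_source] - lp_int[y_base]
    return (after - before).exp().item()

def max_odds(avg_odds_by_layer: list[float]) -> float:
    # Max-pool the average ODDS across layers at one sentential position.
    return max(avg_odds_by_layer)
```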
4 Exp. 1: Do LMs Share Filler–Gap Mechanisms Across Constructions?
Our first experiment investigates the extent to which language models employ common mechanisms for processing different filler–gaps.
Setup We measure the MAX ODDS for all trained interventions evaluated on every construction of the same clausal category (for a discussion of cross-clause generalization, see §6). We then group these values into six categories, depending on the relation between the set of constructions the interventions were trained on and those used to generate the evaluation set. These groups comprise (1) the same set of constructions in the training set and the evaluation set, with the same animacy – this is our reference group, as training and evaluation sentences are drawn from the same distribution; (2) the same set of constructions in the training set and evaluation set, with different animacy; (3) evaluation on the held-out constructions, but with the same animacy as the training set; (4) evaluation on the held-out constructions, and differing animacy from the training set; and (5–6) the two controls.
Hypothesis We hypothesize that the MAX ODDS for all our targeted evaluation groups will be greater than that of the controls. We further expect MAX ODDS to be higher when the evaluated constructions are in training or match in animacy.
[Figure 2: Average normalized MAX ODDS across positions, ±1 standard error. Corresponding multi-clause plots can be found in Appendix E. Note that normalization fixes the "Same Animacy, In Train Set" condition at 1.00.]
[Figure 3: For each source construction, we measure the ODDS at each position-layer pair, aggregating the values by evaluation group. Corresponding plots with control values and multi-clause variants are in Appendix E.]
Results Figure 2 shows the average MAX ODDS of the aforementioned groups at each position in our single-clause templates. In both these single-clause variants and the embedded-clause variants of our constructions (corresponding figure in Appendix E), we find consistently high MAX ODDS values for each of the aforementioned non-control groups. The controls show significantly less transfer. We run pairwise t-tests with a Holm-Bonferroni correction, finding the MAX ODDS of each of our test groups is significantly higher than both controls at every position in the single-clause templates and nearly every position in the multi-clause ones. These results strongly suggest shared internal representations across filler–gap constructions in the evaluated models.
To test our hypotheses regarding the effect of training and evaluation set overlap and matching animacy, we fit a linear mixed-effects regression model to our MAX ODDS data at each position. Our random effects comprise intervention training set and evaluation construction, and our fixed effects take the form of binary indicator variables for (1) whether the evaluated construction was in the training set and (2) whether the animacy condition of the evaluated construction matches that of the training set. We find significant, positive effects for overlap, matching animacy, and their interaction at the FILLER, THE, and NP positions, and for matching animacy at the VERB position. See Appendix C.1 for regression details. Thus, across positions, LM internal processing is sensitive to linguistically meaningful features, such as the animacy of the extracted item (possible evidence of "lexical boost").
While we broadly see generalization as fitting into held-out constructions (Figure 3), embedded wh-questions and restrictive relative clauses show noticeably less generalization than other constructions. We briefly offer two accounts for these peculiarities: (1) there is asymmetry in LM generalization between different filler–gap dependencies, or (2) these constructions are processed by largely different mechanisms than the other constructions. Clarifying which of these applies to each construction helps motivate our next experiment.

5 Exp. 2: What Factors Drive Filler–Gap Generalization in LMs?
Our previous experiment demonstrated significant overlap between the LM's abstract representations of various filler–gap constructions. However, we also observed notable variation in the strength of this generalization across positions and constructions. Here, we attempt to characterize the nature of this cross-construction generalization. In particular, we attempt to identify whether there exist constructions which serve as sources (their filler–gap properties transfer well to other constructions) or sinks (filler–gap properties from other constructions transfer well to them). We further investigate which features of natural language (e.g., distributional properties like construction frequency, or linguistic properties like the nature of the filler item) may drive this generalization.
Setup To characterize the degree to which a given construction is a source or sink, we perform the following procedure. First, we evaluate all single-source interventions on all constructions of the same clausal length, averaging the normalized MAX ODDS across the animacy conditions at each position, training construction, and evaluation construction triple. We take the resulting $n \times n$ matrix to be an adjacency matrix for a weighted, directed graph $G = (V, E)$ in which each vertex is a construction and each directed edge $E_{i,j}$ is the transfer from construction $i$ to construction $j$. We then calculate the out-degree centrality – the fraction of the graph's nodes that a given node's outgoing edges connect to – and the in-degree centrality – the fraction of nodes its incoming edges come from. We do this for nodes (constructions) across a range of edge thresholds – that is, the minimum edge weight retained in the graph. We measure each construction's area under the threshold-centrality curves (AUC). The resulting out- and in-degree AUCs serve as proxies for the degree to which a given construction is a source or sink, respectively. We provide a representative generalization network (for the THE position) in Figure 4. That figure shows particularly strong transfer into pseudoclefts, very little transfer into either control, strong within-construction transfer (dark recurrent arrows), and some non-random structure of transfer across constructions.
[Figure 4: Top: Generalization network at the single-clause THE position with an edge threshold of 1. Node size is proportional to in-degree; edge size and color are proportional to the ODDS of the source construction's interventions measured on the target construction. Bottom: In- and out-degree centrality AUCs against construction frequency.]
We also analyzed the effect of construction frequency on generalization capacity. We extracted estimates of each construction's prevalence in the English-EWT Universal Dependencies dataset (De Marneffe et al., 2021; Nivre et al., 2020; Silveira et al., 2014). See Appendix D for details.
We further investigate the effects of four parameters of linguistic variation across filler–gap constructions: the nature of the filler, whether the head daughter is inverted, the syntactic category of the mother (the word under which a construction is embedded), and the semantic/pragmatic nature of the construction (whether the fronted element is fronted by necessity or for discourse reasons). At each position, we fit a linear mixed-effects model with binary indicator variables denoting whether the source and evaluated construction match for each of the above posited parameters of variation as fixed effects, with random effects for training-source construction and evaluated construction. For regression details, see Appendix C.2.
Hypothesis We expect to see specific constructions serving as strong sources and others as strong sinks in the generalization network. We further expect a positive relationship between a given construction's frequency and the degree to which it is a source, and conversely, a negative relationship between its frequency and its sink-ness. Finally, we anticipate stronger generalization between linguistically similar constructions than dissimilar ones.
Results Figure 4 shows construction frequency against in-degree and out-degree AUCs, mean-pooled across sentence positions. Constructions are spread across the AUC axis, suggesting varying levels of generalization. These AUCs are consistent across both sentence position and clausal variant (single- and multi-clause AUCs, faceted by position, are available in Appendix F).
Figure 4 also shows a negative relationship between construction frequency and in-degree AUC and a (weak) positive relationship between construction frequency and out-degree AUC. There are some notable exceptions to these trends, such as the low-frequency topicalization construction having a surprisingly low in-degree AUC and the most frequent construction, restrictive relative clauses, having a low out-degree AUC. Below, we argue that these anomalies are linguistically explainable. | https://arxiv.org/abs/2505.16002v1 |
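The source/sink analysis described in the Setup above can be sketched as follows: treat the construction-by-construction transfer matrix as a weighted directed graph, threshold its edges, and integrate each node's degree centrality over the threshold grid. The threshold grid and the library choice (networkx) are assumptions; the paper does not specify them.

```python
import networkx as nx
import numpy as np

def centrality_aucs(transfer: np.ndarray, thresholds: np.ndarray):
    """transfer[i, j]: normalized MAX ODDS of construction i's
    intervention evaluated on construction j."""
    n = transfer.shape[0]
    in_curves, out_curves = [], []
    for t in thresholds:
        # Keep only edges whose weight meets the threshold.
        g = nx.from_numpy_array((transfer >= t) * transfer,
                                create_using=nx.DiGraph)
        in_curves.append([nx.in_degree_centrality(g)[i] for i in range(n)])
        out_curves.append([nx.out_degree_centrality(g)[i] for i in range(n)])
    # Area under each construction's threshold-centrality curve.
    in_auc = np.trapz(np.asarray(in_curves), thresholds, axis=0)
    out_auc = np.trapz(np.asarray(out_curves), thresholds, axis=0)
    return in_auc, out_auc
```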
We further find evidence supporting our hypothesis that linguistic similarity aids generalization between constructions. Our regression reveals significant, positive effects | https://arxiv.org/abs/2505.16002v1 |
for filler type at the FILLER and THE positions, inversion of the head daughter and the nature of the fronted element at the FILLER, THE, and NP positions, and syntactic category of the mother at all positions.
Discussion These results paint a clear picture of filler–gap generalization in LMs. Frequent constructions are encountered at a high enough rate during training to drive the development of robust mechanisms to process them. Less frequent constructions are not encountered enough for standalone, robust processing mechanisms to form. Instead, their processing relies on the mechanisms of more frequent, linguistically similar constructions.
These analyses shed light on the anomalous results. For instance, we observed a low in-degree AUC for the low-frequency construction topicalization. Topicalization is linguistically dissimilar to higher-frequency constructions, being the only construction with a phrasal element at its filler site, and it generally shares very few linguistic features with more frequent constructions. In this light, its low in-degree AUC is not surprising, especially when compared to pseudoclefts, which much more closely resemble higher-frequency constructions (especially wh-questions).
Similarly, restrictive relative clauses are the only constructions which are embedded under a noun phrase, possess a wh-item at the filler position, and have their filler item fronted out of syntactic necessity, not for discourse purposes. This makes them linguistically dissimilar to many of the lower-frequency constructions along the features found important by our mixed-effects model. As such, despite their high frequency, their mechanisms do not transfer broadly to these constructions, leading to a relatively low out-degree.
These results also answer the questions posed at the end of Experiment 1. Namely, embedded wh-questions and restrictive relative clauses show little generalization in the leave-one-out setting, as they are frequent enough to largely not rely on the processing mechanisms of other constructions. However, embedded wh-questions possess enough linguistic overlap with less frequent constructions to aid in their processing, whereas restrictive relative clauses are more isolated in the generalization network due to their linguistic dissimilarities.

6 Exp. 3: Do Language Models Generalize Across Clausal Boundaries?
Our first two experiments demonstrate that LMs share processing mechanisms across various filler– | https://arxiv.org/abs/2505.16002v1 |
That is, we expect the single-clause interventions to show above-chance | https://arxiv.org/abs/2505.16002v1 |
MAX ODDS when evaluated on both the matrix and embedded clause of our multi-clause variants.
Results Our results are displayed in Figure 5. Our single-clause mechanisms show above-chance MAX ODDS at the FILLER through NP_1 positions of the matrix clause, before dropping off at the VERB_1 through THE_2 positions, and then slowly rebounding as we move towards the final VERB_2.
These results make sense when we consider the relative sentential structures of single-clause and multi-clause sentences, and the auto-regressive nature of the LMs we study. Primarily, the first three positions of a multi-clause sentence – that is, FILLER, THE_1, and NP_1 – are indistinguishable from the first three positions of a single-clause sentence. As such, we would expect an auto-regressive LM, processing from left to right, not to be aware that it is processing an embedded clause until it reaches the VERB_1 position. Until then, it will use the same mechanisms it would to process a sentence with a single clause. This is reflected in the strong generalization through these first three positions.
In the VERB_1 position, however, single-clause and multi-clause sentences have verbs that sharply diverge in their semantic character and syntactic properties. Specifically, the verbs at this position in a multi-clause sentence must be ones which can embed a clause (e.g., say, know, and wonder, among others), whereas in a single-clause sentence this is not necessary. As such, upon encountering this position, the LM encounters a different set of verbs than it was trained on, leading to a drop in the single-clause intervention's MAX ODDS.
As the LM processes the next couple of positions (THAT, THE_2, and NP_2), we see the single-clause intervention's MAX ODDS steadily increasing, as the LM gets closer to a position where it can potentially discharge its filler. This process culminates at VERB_2, where we see clear, above-chance generalization from the single-clause mechanisms to the embedded clause.

7 Conclusion
Long-held views in linguistics suggest that there should be common processing characteristics across diverse filler–gap constructions. We found this to be the case: we were able to transfer the filler–gap property across neural representations of different filler–gap constructions, suggesting that neural models rely on similar representations across distinct constructions.
This transfer is not entirely uncomplicated, however: transfer was stronger when animacy matched and less strong when animacy did not match, even within a construction. This was true even though animacy is not a key part of the usual account of English filler–gap constructions. We also found that some constructions were stronger sources of filler–gap transfer than others, and others were stronger sinks. Finally, we found that transfer across main and embedded clauses was not strong. Taken together, these results show how mechanistic analysis of LLMs can provide novel insights into the nature of syntactic structures.

8 Limitations
Our work is primarily an attempt to show that LLMs can be useful tools for pushing linguistic theory forward. This brings with it specific theoretical presuppositions that are worth articulating to avoid a suggestion that there is scientific consensus | https://arxiv.org/abs/2505.16002v1 |
where there is not.
Our investigation is oriented toward finding evidence of modular structure in LLMs. However, it is not a settled question what constitutes rule-like or systematic linguistic behavior in neural systems (Geiger et al., 2024; Nefdt, 2023; Buckner, 2024; Futrell and Mahowald, 2025). How causally systematic should a syntactic behavior be for it to be rule-like? One reading of our results would be that our causal interventions capture human filler–gap behavior, but noisily (e.g., imperfect transfer across constructions, less transfer when animacy differs). This is possible, but another reasonable interpretation is that the relevant constructs are also fuzzy in humans. Despite a historical proclivity for rules, nearly all syntactic theories allow for numerous exceptions, and human behavior itself is variable and subject to errors. As such, the questions we ask regarding the rule-like nature of LLMs extend beyond such models, becoming broader questions about human processing and behavior. Our findings alone cannot adjudicate these questions, though.
We also note that our results are only in English. It would be valuable to extend to other languages, particularly those with typologically different filler–gap patterns.
We relied here on templatically generated sentences, which are known to differ in systematic ways from naturally occurring sentences. We would like to extend this work to naturalistic sentences, but doing so is challenging because of the strong constraint that we have matched pairs.

Acknowledgments
We would like to thank Qing Yao and, more broadly, the computational linguistics research group at UT Austin for their helpful conversations regarding this project. We further acknowledge funding from NSF CAREER grant 2339729 (to Kyle Mahowald).

References
Aryaman Arora, Dan Jurafsky, and Christopher Potts. 2024. CausalGym: Benchmarking causal interpretability methods on linguistic tasks. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14638–14663, Bangkok, Thailand. Association for Computational Linguistics.
Dale J. Barr, Roger Levy, Christoph Scheepers, and Harry J. Tily. 2013. Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3):255–278.
Debasmita Bhattacharya and Marten van Schijndel. 2020. Filler-gaps that neural networks fail to generalize. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 486–495, Online. Association for Computational Linguistics.
Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. Preprint, arXiv:2304.01373.
Cameron J Buckner. 2024. From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence. Oxford University Press.
Noam Chomsky. 1957. Syntactic Structures. Walter de Gruyter.
Peter W. Culicover. 1999. Syntactic Nuts: Hard Cases, Syntactic Theory, and Language Acquisition. Oxford University Press, Oxford.
Marie-Catherine De Marneffe, Christopher D Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal dependencies. Computational Linguistics, 47(2):255–308. | https://arxiv.org/abs/2505.16002v1 |
Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart Shieber, | https://arxiv.org/abs/2505.16002v1 |
Tal Linzen, and Yonatan Belinkov. 2021. Causal analysis of syntactic agreement mechanisms in neural language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1828–1843, Online. Association for Computational Linguistics.
Janet Dean Fodor. 1989. Empty categories in sentence processing. Language and Cognitive Processes, 4(3-4):SI155–SI209.
Richard Futrell and Kyle Mahowald. 2025. How linguistics learned to stop worrying and love the language models. arXiv preprint arXiv:2501.17047.
Richard Futrell, Peng Qian, Edward Gibson, Evelina Fedorenko, and Idan Blank. 2019. Syntactic dependencies correspond to word pairs with high mutual information. In Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019), pages 3–13, Paris, France. Association for Computational Linguistics.
Atticus Geiger, Duligur Ibeling, Amir Zur, Maheep Chaudhary, Sonakshi Chauhan, Jing Huang, Aryaman Arora, Zhengxuan Wu, Noah Goodman, Christopher Potts, et al. 2023. Causal abstraction: A theoretical foundation for mechanistic interpretability. arXiv preprint arXiv:2301.04709.
Atticus Geiger, Hanson Lu, Thomas Icard, and Christopher Potts. 2021. Causal abstractions of neural networks. In Advances in Neural Information Processing Systems, volume 34, pages 9574–9586.
Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas Icard, and Noah Goodman. 2024. Finding alignments between interpretable causal variables and distributed neural representations. In Causal Learning and Reasoning, pages 160–187. PMLR.
Jonathan Ginzburg and Ivan A. Sag. 2001. Interrogative Investigations: The Form, Meaning, and Use of English Interrogatives. CSLI, Stanford, CA.
Katherine Howitt, Sathvik Nair, Allison Dods, and Robert Melvin Hopkins. 2024. Generalizations across filler-gap dependencies in neural language models. In Proceedings of the 28th Conference on Computational Natural Language Learning, pages 269–279, Miami, FL, USA. Association for Computational Linguistics.
Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger Levy. 2020. A systematic assessment of syntactic generalization in neural language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1725–1744, Online. Association for Computational Linguistics.
Anastasia Kobzeva, Suhas Arehalli, Tal Linzen, and Dave Kush. 2023. Neural networks can learn patterns of island-insensitivity in Norwegian. In Proceedings of the Society for Computation in Linguistics 2023, pages 175–185, Amherst, MA. Association for Computational Linguistics.
Alexandra Kuznetsova, Per B Brockhoff, and Rune HB Christensen. 2017. lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82:1–26.
Yair Lakretz, German Kruszewski, Theo Desbordes, Dieuwke Hupkes, Stanislas Dehaene, and Marco Baroni. 2019. The emergence of number and syntax units in LSTM language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 11–20, Minneapolis, Minnesota. Association for Computational Linguistics.
Nur Lan, Emmanuel Chemla, and Roni Katzir. 2024. Large language models and the argument from the poverty of the stimulus. Linguistic Inquiry, pages 1–56. | https://arxiv.org/abs/2505.16002v1 |
Karim Lasri, Tiago Pimentel, Alessandro Lenci, Thierry Poibeau, and Ryan Cotterell. 2022. Probing for the usage of grammatical number. In Proceedings | https://arxiv.org/abs/2505.16002v1 |
of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8818–8831, Dublin, Ireland. Association for Computational Linguistics.
Christopher D Manning, Kevin Clark, John Hewitt, Urvashi Khandelwal, and Omer Levy. 2020. Emergent linguistic structure in artificial neural networks trained by self-supervision. Proceedings of the National Academy of Sciences, 117(48):30046–30054.
Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual associations in GPT. In Advances in Neural Information Processing Systems, volume 35, pages 17359–17372. Curran Associates, Inc.
Aaron Mueller, Yu Xia, and Tal Linzen. 2022. Causal analysis of syntactic agreement neurons in multilingual language models. In Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL), pages 95–109, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Ryan M Nefdt. 2023. Language, Science, and Structure: A Journey into the Philosophy of Linguistics. Oxford University Press.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajič, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4034–4043, Marseille, France. European Language Resources Association.
Satoru Ozaki, Dan Yurovsky, and Lori Levin. 2022. How well do LSTM language models learn filler-gap dependencies? In Proceedings of the Society for Computation in Linguistics 2022, pages 76–88.
Steven Piantadosi. 2023. Modern language models refute Chomsky's approach to language. Lingbuzz Preprint, lingbuzz, 7180.
Martin J Pickering and Holly P Branigan. 1998. The representation of verbs: Evidence from syntactic priming in language production. Journal of Memory and Language, 39(4):633–651.
Eva Portelance and Masoud Jasbi. 2024. The roles of neural networks in language acquisition. Language and Linguistics Compass, 18(6):e70001.
Christopher Potts. 2023. Characterizing English preposing in PP constructions. Journal of Linguistics, pages 1–39.
Grusha Prasad, Marten van Schijndel, and Tal Linzen. 2019. Using priming to uncover the organization of syntactic representations in neural language models. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 66–76, Hong Kong, China. Association for Computational Linguistics.
John Robert Ross. 1967. Constraints on Variables in Syntax. Ph.D. thesis, MIT, Cambridge, MA.
Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Chris Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 2897–2904, Reykjavik, Iceland. European Language Resources Association (ELRA).
Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. 2020. Causal mediation analysis for interpreting neural NLP: The case of gender bias. Preprint, arXiv:2004.12265.
Kevin Ro Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt. 2023. Interpretability in the wild: A circuit for indirect object identification in GPT-2 small. In The Eleventh International Conference on Learning Representations. | https://arxiv.org/abs/2505.16002v1 |
Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN | https://arxiv.org/abs/2505.16002v1 |
language models learn about filler–gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 211–221, Brussels, Belgium. Association for Computational Linguistics.
Ethan Gotlieb Wilcox, Richard Futrell, and Roger Levy. 2024. Using computational models to test syntactic learnability. Linguistic Inquiry, 55(4):805–848.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-the-art natural language processing. Preprint, arXiv:1910.03771.
Zhengxuan Wu, Atticus Geiger, Aryaman Arora, Jing Huang, Zheng Wang, Noah Goodman, Christopher Manning, and Christopher Potts. 2024. pyvene: A library for understanding and improving PyTorch models via interventions. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations), pages 158–165, Mexico City, Mexico. Association for Computational Linguistics.
Zhengxuan Wu, Atticus Geiger, Thomas Icard, Christopher Potts, and Noah Goodman. 2023. Interpretability at scale: Identifying causal mechanisms in Alpaca. In Advances in Neural Information Processing Systems, volume 36, pages 78205–78226. Curran Associates, Inc.

A Construction Templates
We provide templates and examples for our single-clause inanimate extraction (Table 3), multi-clause animate extraction (Table 4), and multi-clause inanimate extraction (Table 5). In these tables, we use the shorthand demonstrated in Table 2 to refer to our constructions.

| Full Construction | Shorthand |
|---|---|
| Emb. Wh-Question (Know-Class) | Emb. Wh-Q (K) |
| Emb. Wh-Question (Wonder-Class) | Emb. Wh-Q (W) |
| Matrix Wh-Question | Matrix Wh-Q |
| Restrictive Relative Clause | RRC |
| Pseudo-Cleft | PC |
| Topicalization | Topic |
| Subject-Verb Agreement | SVA |
| Transitive/Intransitive Verbs | T/I Verbs |

Table 2: Abbreviations for syntactic constructions in Tables 3 to 5.

B Training and Evaluation Details
We access the pythia models used in this study through the transformers Python package (Wolf et al., 2020). For each construction, we build out training sets as described by Arora et al. (2024), sampling 200 sentences to form the basis of our training set, before adding each sentence's minimal pair, resulting in perfectly balanced training sets. To train DAS, we use the pyvene library (Wu et al., 2024) and follow the hyperparameters used by Arora et al. (2024).
Our evaluation sets for the pythia-1.4b models used in the main text consist of 400 sentences, with ODDS at each position-layer pair averaged across all evaluation sentences. For the other model variants evaluated (pythia-2.8b and pythia-6.9b) we use evaluation sets of 96 sentences due to computational constraints, noting that this is still larger than the prescribed size of 50 evaluation sentences from Arora et al. (2024). We also ensure that the intersection of train sets and evaluation sets is empty, so as to not bias our evaluations. Our training and evaluation ran on 2 NVIDIA A40 GPUs. For one model size, training totaled ∼12 hours, and evaluation ∼250 hours. | https://arxiv.org/abs/2505.16002v1 |
| Construction | Prefix | Filler | NC | Article | NP | Verb | Label |
|---|---|---|---|---|---|---|---|
| Emb. Wh-Q (K) | I know | what/that | | the | man | built | ./it |
| Emb. Wh-Q (W) | I wonder | what/if | | the | man | built | ./it |
| Matrix Wh-Q | | What/"" | did | the | man | build | ?/it |
| RRC | The chair | which/and | | the | man | built | was/it |
| Cleft | It was | the chair/clear | that | the | man | built | ./the chair |
| PC | | What/That | | the | man | built | was/it |
| Topic. | Actually, | the chair/"" | | the | man | built | ./the chair |
| SVA | | The boy/boys | that | the | man | liked | is/are |
| T/I Verbs | Last night/Yesterday | some/that | | | man/boy | ran/built | ./it |

Table 3: Template and exemplar sentences for inanimate extraction from our single-clause construction variants.

| Construction | Prefix | Filler | NC | Article 1 | NP 1 | Verb 1 | that | Article 2 | NP 2 | Verb 2 | Label |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Emb. Wh-Q (K) | I know | who/that | | the | nurse | said | that | the | man | liked | ./it |
| Emb. Wh-Q (W) | I wonder | who/if | | the | nurse | said | that | the | man | liked | ./it |
| Matrix Wh-Q | | Who/"" | did | the | nurse | say | that | the | man | liked | ?/it |
| RRC | The boy | who/and | | the | nurse | said | that | the | man | liked | was/it |
| Cleft | It was | the boy/clear | that | the | nurse | said | that | the | man | liked | ./the chair |
| PC | | Who/That | | the | nurse | said | that | the | man | liked | was/it |
| Topic. | Actually, | the boy/"" | | the | nurse | said | that | the | man | liked | ./the chair |
| SVA | | The boy/boys | that | the | nurse | said | that | the | man | liked | is/are |
| T/I Verbs | Last night/Yesterday | some/that | | the | nurse | said | that | | man/boy | ran/liked | ./it |

Table 4: Template and exemplar sentences for animate extraction from our multi-clause construction variants.

| Construction | Prefix | Filler | NC | Article 1 | NP 1 | Verb 1 | that | Article 2 | NP 2 | Verb 2 | Label |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Emb. Wh-Q (K) | I know | what/that | | the | nurse | said | that | the | man | built | ./it |
| Emb. Wh-Q (W) | I wonder | what/if | | the | nurse | said | that | the | man | built | ./it |
| Matrix Wh-Q | | What/"" | did | the | nurse | say | that | the | man | built | ?/it |
| RRC | The chair | which/and | | the | nurse | said | that | the | man | built | was/it |
| Cleft | It was | the chair/clear | that | the | nurse | said | that | the | man | built | ./the chair |
| PC | | What/That | | the | nurse | said | that | the | man | built | was/it |
| Topic. | Actually, | the chair/"" | | the | nurse | said | that | the | man | built | ./the chair |
| SVA | | The boy/boys | that | the | nurse | said | that | the | man | liked | is/are |
| T/I Verbs | Last night/Yesterday | some/that | | the | nurse | said | that | | man/boy | ran/built | ./it |

Table 5: Template and exemplar sentences for inanimate extraction from our multi-clause construction variants.

model <- lmer(max_odds ~
    (1 + in_train_set * same_animacy | from) +
    (1 + in_train_set * same_animacy | to) +
    in_train_set * same_animacy)

Figure 6: Model formula used at each position for the linear mixed-effects regressions in Experiment 1.

model <- lmer(max_odds ~
    (1 + match_filler_class + match_inversion + match_embedded_under + match_discourse_fronted || from) +
    (1 + match_filler_class + match_inversion + match_embedded_under + match_discourse_fronted || to) +
    match_filler_class + match_inversion + match_embedded_under + match_discourse_fronted)

Figure 7: Model formula used at each position for the linear mixed-effects regressions in Experiment 2.

| Construction | Filler Class | Inverted | Clause Embedding Item | Fronting for Discourse |
|---|---|---|---|---|
| Embedded Wh-Q | Wh-Item | False | Verb Phrase | False |
| Matrix Wh-Q | Wh-Item | True | N/A | False |
| RRC | Wh-Item | False | Noun Phrase | False |
| Cleft | Null-Element | False | Verb Phrase | True |
| Pseudocleft | Wh-Item | False | N/A | True |
| Topicalization | Phrasal Element | False | N/A | True |

Table 6: Parameter values for each filler–gap construction.

C Regression Details
We perform all regressions with the lmerTest package in R (Kuznetsova et al., 2017).

C.1 Experiment 1 Regression
In the leave-one-out setting, we fit a linear mixed-effects model at each position with our dependent variable as the MAX ODDS at each training-set and evaluation-set pair. We treat the training-set and evaluation-set as random effects, and indicator variables for whether the evaluation-set comprises a construction in the training-set and whether the evaluation-set has the same animacy as the training-set as fixed effects. We also include a term to investigate their interaction. As per Barr et al. (2013), we include maximal random effect slope structures. Our full regression model is reported in Figure 6, which we fit to obtain the reported β coefficients and corresponding p-values. Indicator variables are codified such that if the evaluated construction is in the training-set, in_train_set = 1, with in_train_set = -1 otherwise. Similarly, if the evaluated construction's animacy matches that of the training conditions, same_animacy = 1, with same_animacy = -1 otherwise. The full results of this regression can be found in Table 7. Note: in this setting, the construction_from variable denotes the held-out construction.

C.2 Experiment 2 Regression
In the single-construction setting, we fit a linear mixed-effects model at each position with our dependent variable as the MAX ODDS at each training-set and evaluation-set pair. We treat the training-set and evaluation-set as random effects. Our mixed effects comprise indicator variables denoting whether the training construction and the evaluation construction match in our proposed filler–gap parameters of variation. A full breakdown of these parameters of variation and how they apply to our constructions of interest can be seen in Table 6. The resulting indicator variables take a value of 1 if the construction in the train set and the construction in the evaluation set match for that given parameter, and -1 otherwise. We include maximal random effect slope structures, excluding correlations to help convergence, as per Barr et al. (2013). Our resulting regression model is reported in Figure 7, which we fit to obtain the reported β coefficients and corresponding p-values (Table 8).

D Frequencies
To calculate frequencies, we use the English-EWT Universal Dependencies dataset (De Marneffe et al., 2021; Nivre et al., 2020; Silveira et al., 2014). It is sourced from the English Web Treebank, a corpus which totals 16,622 sentences scraped from the web. We parse the train, test, and dev conllu files, searching for dependency relations denoting each of our given constructions. We do not differentiate between our two classes of embedded wh-questions, as the lexically defined constraint would have likely yielded a non-exhaustive extraction of all possible sentences. Instead we calculate a generic total for embedded wh-questions and share this count among both of them. We present the final counts in Table 9.

| Construction Type | Total Count |
|---|---|
| Restrictive Relative Clauses | 504 |
| Embedded Wh-Questions | 308 |
| Matrix Wh-Questions | 82 |
| Clefts | 20 |
| Pseudocleft | 6 |
| Topicalization | 6 |
| Total Sentences | 16622 |

Table 9: Construction type counts.

| Term | β FILLER | β THE | β NP | β VERB |
|---|---|---|---|---|
| (Intercept) | 1.93*** | 2.70*** | 1.87*** | 9.06*** |
| in_train_set | 0.67*** | 0.56*** | 0.42** | 0.26 |
| same_animacy | 1.08*** | 0.51*** | 0.60*** | 2.13*** |
| in_train_set:same_animacy | 0.36** | 0.20** | 0.10* | 0.10 |

Table 7: Experiment 1 regression results. * denotes p < .05, ** denotes p < .01, and *** denotes p < .001.

| Term | β FILLER | β THE | β NP | β VERB |
|---|---|---|---|---|
| (Intercept) | 1.15*** | 1.96*** | 1.32*** | 7.12*** |
| match_filler_class | 0.75*** | 1.06** | 0.28 | 0.53 |
| match_inversion | 0.38** | 0.51** | 0.40** | 0.06 |
| match_embedded_under | 0.85*** | 1.05** | 0.54** | 2.06** |
| match_discourse_fronted | 0.30** | 0.36* | 0.34*** | 0.32 |

Table 8: Experiment 2 regression results. * denotes p < .05, ** denotes p < .01, and *** denotes p < .001.

E Experiment 1: Supplementary Information
A by-position aggregation figure for the multi-clause variant is in Figure 8, complementing Figure 2. An extended version of the mechanistic plots in Figure 3, including controls, appears in Figure 9, with a multi-clause counterpart shown in Figure 10.

F Experiment 2: Supplementary Information
We report raw bar charts for AUCs of in-degree and out-degree centrality across single- and multi-clause settings (Figures 11 to 14).

G Experiment 3: Supplementary Information
We also provide mechanistic heatmaps for our cross-clausal generalization experiments. They can be found in Figure 15.

H Duplication with Other Model Sizes
We duplicate these experiments with other model sizes, namely pythia-2.8b and pythia-6.9b. Below, we report these results.

H.1 Experiment 1
We provide the aggregation figures across positions – single (Figure 16) and multi-clause (Figure 17) variants. We note that we find significant differences in the same positions as with the pythia-1.4b models. We provide regression results in Table 10.

| Term | β FILLER | β THE | β NP | β VERB |
|---|---|---|---|---|
| pythia-2.8b (Intercept) | 1.95*** | 2.74*** | 1.83*** | 7.68*** |
| in_train_set | 0.67*** | 0.50*** | 0.37** | 0.48* |
| same_animacy | 1.08*** | 0.51*** | 0.51*** | 2.18*** |
| in_train_set:same_animacy | 0.45** | 0.19** | 0.09 | 0.13 |
| pythia-6.9b (Intercept) | 1.78*** | 2.59*** | 1.48*** | 9.15*** |
| in_train_set | 0.76*** | 0.59*** | 0.36** | 0.20 |
| same_animacy | 1.05*** | 0.47*** | 0.46*** | 2.45*** |
| in_train_set:same_animacy | 0.42** | 0.18** | 0.07 | 0.00 |

Table 10: Experiment 1 regression results for pythia-2.8b and pythia-6.9b. * denotes p < .05, ** denotes p < .01, and *** denotes p < .001.

H.2 Experiment 2
For experiment 2, we provide scatter plots in Figure 18 and regression results in Table 11.

| Term | β FILLER | β THE | β NP | β VERB |
|---|---|---|---|---|
| pythia-2.8b (Intercept) | 1.05*** | 1.99*** | 1.20*** | 6.20*** |
| match_filler_class | 0.68*** | 1.16** | 0.27 | 0.78** |
| match_inversion | 0.42** | 0.46** | 0.48*** | 0.29 |
| match_embedded_under | 0.82*** | 1.03** | 0.35*** | 1.95** |
| match_discourse_fronted | 0.30* | 0.37 | 0.58** | 0.32 |
| pythia-6.9b (Intercept) | 1.10*** | 1.84*** | 1.08*** | 7.61*** |
| match_filler_class | 0.62** | 1.10** | 0.31 | 0.28 |
| match_inversion | 0.36* | 0.60** | 0.48*** | 0.01 |
| match_embedded_under | 0.82** | 1.02** | 0.53** | 2.05** |
| match_discourse_fronted | 0.35* | 0.31 | 0.39* | 0.14 |

Table 11: Experiment 2 regression results for pythia-2.8b and pythia-6.9b. * denotes p < .05, ** denotes p < .01, and *** denotes p < .001.

H.3 Experiment 3
For experiment 3, we provide corollary figures to Figure 5 in Figure 19. | https://arxiv.org/abs/2505.16002v1 |
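The Appendix D counting procedure can be sketched in Python with the conllu package. The standard UD file names are used below; the exact dependency-relation queries the authors used per construction are not spelled out in the paper, so the "acl:relcl" query shown (standard UD for relative clauses) is an illustrative assumption.

```python
from collections import Counter
from conllu import parse_incr

counts = Counter()
for split in ("train", "dev", "test"):
    with open(f"en_ewt-ud-{split}.conllu", encoding="utf-8") as f:
        for sentence in parse_incr(f):
            counts["sentences"] += 1
            # Example query: restrictive relative clauses via acl:relcl.
            if any(tok["deprel"] == "acl:relcl" for tok in sentence):
                counts["restrictive_relative_clause"] += 1

print(counts)
```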
[Figure 8 plot: Average Max Log Odds Ratio by Position, over positions {filler} the_1 {np1} {verb1} that the_2 {np2} {verb2}; conditions: Same/Different Animacy × Left Out/In Train Set, Controls, Lexical Controls.]

Figure 8: | https://arxiv.org/abs/2505.16002v1 |
Multi-Clause Aggregation Values by Evaluation Group.

[Figure 9 plot: "Generalization Across Constructions" heatmaps of Odds Ratio over layer (0–20) × position ({filler} the {np} {verb}), faceted by evaluation group (In Train Set/Left Out × Same/Different Animacy, Control, Lexical Control) and construction (Cleft, Embedded Wh-Question (Know-Class), Embedded Wh-Question (Wonder-Class), Matrix Wh-Question, Pseudocleft, Restricted Relative Clause, Topicalization).]

Figure 9: Single-clause ODDS at each position-layer pair for each construction. Averaged across animacy conditions.

[Figure 10 plot: the multi-clause counterpart of Figure 9, over positions {filler} the_1 {np1} {verb1} that the_2 {np2} {verb2}.]

Figure 10: Multi-clause ODDS at each position-layer pair for each construction. Averaged across animacy conditions.

[Figures 11–14 plots: bar charts of AUC by position for each construction (Res. Rel. Clause, Emb. Wh-Q (Wonder), Emb. Wh-Q (Know), Matrix Wh-Q, Cleft, Pseudocleft, Topicalization), with a final facet for the average across positions.]

Figure 11: In-Degree AUC by position, single clause, with the final facet denoting the average across positions.

Figure 12: In-Degree AUC by position, embedded clause, with the final facet denoting the average across positions.

Figure 13: Out-Degree AUC by position, single clause, with the final facet denoting the average across positions.

Figure 14: Out-Degree AUC by position, embedded clause, with the final facet denoting the average across positions.

[Figure 15 plot: Odds Ratio heatmaps over layer × position, with panels Embedded→Embedded, Single→Embedded, Control, and Lexical Control.]

Figure 15: ODDS at each position-layer pair for each construction in the cross-clausal generalization experiment. | https://arxiv.org/abs/2505.16002v1 |
Averaged across animacy conditions and items in a given group.

[Figure 16 plot (a): pythia-2.8b average max odds by position ({filler} the {np} {verb}); conditions: Same/Different Animacy × In/Not in Train Set, Controls, Lexical Controls.]

(a) pythia-2.8b average normalized MAX ODDS. | https://arxiv.org/abs/2505.16002v1 |
[Figure 16 plot (b): pythia-6.9b average max odds by position, same conditions as panel (a).]

(b) pythia-6.9b average normalized MAX ODDS.

Figure 16: Top: pythia-2.8b and bottom: pythia-6.9b average normalized MAX ODDS across positions in the single-clause variants, ±1 standard error. Normalization fixes the "Same Animacy, In Train Set" condition at 1.00.

[Figure 17 plots: (a) pythia-2.8b and (b) pythia-6.9b average max log odds ratio by position over the multi-clause positions {filler} the_1 {np1} {verb1} that the_2 {np2} {verb2}; conditions: Same/Different Animacy × Left Out/In Train Set, Controls, Lexical Controls.]

Figure 17: Top: pythia-2.8b and bottom: pythia-6.9b average normalized MAX ODDS across positions in the multi-clause variants, ±1 standard error. Normalization fixes the "Same Animacy, In Train Set" condition at 1.00.

[Figure 18 plots: mean in-degree and out-degree AUC against construction frequency (log scale), with points for each construction, for (a) pythia-2.8b and (b) pythia-6.9b.]

Figure 18: Average in-degree centrality AUC and out-degree centrality AUC plotted against construction frequency.

[Figure 19 plots: (a) pythia-2.8b and (b) pythia-6.9b average max odds across the multi-clause positions, for Embedded→Embedded, Single→Embedded, Controls, and Lexical Controls.]

Figure 19: MAX ODDS ±1 standard error, by position, for interventions (1) trained and evaluated on multi-clause variants, (2) trained on single-clause variants and evaluated on multi-clause variants, and (3-4) controls. Evaluations are performed on sentences matching training conditions (i.e., same construction and same animacy). | https://arxiv.org/abs/2505.16002v1 |
arXiv:2505.16003v1 [cs.CL] 21 May 2025

SLMEval: Entropy-Based Calibration for Human-Aligned Evaluation of Large Language Models

Roland Daynauth, Christopher Clarke, Krisztian Flautner, Lingjia Tang, Jason Mars
University of Michigan
{daynauth, csclarke, manowar, lingjia, profmars}@umich.edu

Abstract

The LLM-as-a-Judge paradigm offers a scalable, reference-free approach for evaluating language models. Although several calibration techniques have been proposed to better align these evaluators with human judgment, prior studies focus primarily on narrow, well-structured benchmarks. As a result, it remains unclear whether such calibrations generalize to real-world, open-ended tasks. In this work, we show that SOTA calibrated evaluators often fail in these settings, exhibiting weak or even negative correlation with human judgments. To address this, we propose SLMEval, a novel and efficient calibration method based on entropy maximization over a small amount of human preference data. By estimating a latent distribution over model quality and reweighting evaluator scores accordingly, SLMEval achieves strong correlation with human evaluations across two real-world production use cases and a public benchmark. For example, on one such task, SLMEval achieves a Spearman correlation of 0.57 with human judgments, while G-Eval yields a negative correlation. In addition, SLMEval reduces evaluation costs by 5–30× compared to GPT-4-based calibrated evaluators such as G-Eval.

1 Introduction

Despite the rapid advancement of large language models (LLMs), reliably evaluating their outputs in a way that aligns with human judgment remains an open challenge. Traditional automated metrics like BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) offer efficiency and scalability, but they depend on reference outputs and often fail to capture the subjective and contextual nuances that characterize real-world human preferences. Human evaluation remains the gold standard for assessing LLM quality, but it is prohibitively expensive, time-consuming, and impractical for large-scale or iterative development cycles (Chiang et al., 2024). To address these limitations, the LLM-as-a-Judge paradigm has gained widespread adoption as a practical, low-cost alternative to human evaluation (Zheng et al., 2024; Liu et al., 2023a). However, despite its growing popularity, LLM-based evaluators have certain limitations. Their scores suffer from several well-documented biases and artifacts, including token length bias (Li et al., 2023), position bias (Wang et al., 2024), and repeated score patterns (Liu et al., 2023b), all of which can distort comparative judgments and reduce reliability. To mitigate these issues, a range of calibration techniques have been proposed to better align LLM-as-a-Judge evaluations with human judgments. Yet most prior studies validate these methods on narrow, well-structured benchmarks, such as summarization, translation, and question answering, where task definitions are clear and evaluation criteria are constrained (Yuan et al., 2023; Li et al., 2023; Liu et al., 2023a). As a result, it remains unclear whether these calibration strategies generalize to more subjective, open-ended tasks commonly encountered in real-world deployments. | https://arxiv.org/abs/2505.16003v1 |
In this work, we demonstrate that state-of-the-art calibrated LLM-based evaluators (Yuan et al., 2023; Li et al., 2023; Liu et al., 2023a) frequently fail on such open-ended tasks, often yielding weak or even negative correlations with human judgments. To | https://arxiv.org/abs/2505.16003v1 |
address this limitation, we introduce SLMEval, a novel and effective evaluation framework that uses entropy maximization to calibrate LLM-generated scores based on a small amount of human evaluation data. This calibration yields LLM-based judgments that correlate better with human evaluations. In addition, unlike prior methods that rely on computationally intensive techniques such as chain-of-thought prompting or multiple API calls, SLMEval operates using a single-pass evaluation with a small language model (SLM). This dramatically reduces evaluation costs and enables scalable deployment in real-world pipelines, without sacrificing alignment with human evaluations. We summarize the key contributions of this work as follows:

1. Improved Alignment with Human Judgments: We show that state-of-the-art evaluation methods, such as G-Eval (Liu et al., 2023a) and GPTScore (Fu et al., 2023), perform poorly on subjective real-world tasks. In contrast, our method substantially improves alignment with human judgments. For example, in our production use case, our approach achieves a Spearman correlation of 0.57, while G-Eval yields a negative correlation, highlighting a significant performance gap in this setting.

2. Reduced Computational Overhead: Using a small language model (SLM), our approach achieves strong performance while significantly reducing the number of required API calls. For example, on a public dataset, our method achieves comparable or better alignment with human judgments while reducing evaluation costs by up to 30× compared to GPT-based evaluators.

3. Evaluation in Real-World and Generalized Settings: We evaluate SLMEval on two real-world production tasks and a public benchmark to demonstrate its generalizability and practical relevance across diverse evaluation scenarios.

2 Related Work

Traditional Evaluation Methods. Embedding-based methods such as SBERT (Reimers and Gurevych, 2019) and BERTScore (Zhang et al., 2020) improved contextual evaluations, while BLEURT (Sellam et al., 2020) fine-tunes models on human-labeled data for better alignment. However, reliance on reference outputs limits their effectiveness in open-ended tasks (Liu et al., 2024).

LLM-as-a-Judge Paradigm. LLM-based evaluators (Liu et al., 2023a; Fu et al., 2024; Irugalbandara et al., 2024; Kocmi and Federmann, 2023; Chen et al., 2023) assess outputs without references, reducing the dependency on human annotation. Despite efficiency gains, these models suffer from position and verbosity biases, which impact reliability (Dubois et al., 2024).

Alignment with Human Preferences. Efforts to align LLM evaluations with human judgments include reinforcement learning from human feedback (RLHF) (Ji et al., 2024; Askell et al., 2021) and direct preference optimization (DPO) (Rafailov et al., 2024). However, these approaches require extensive labeled data and significant computational resources, which limits scalability.

Calibrating Evaluation with Human Preferences. Studies such as Liu et al. (2024) and Wang et al. (2024) explore human-in-the-loop calibration to improve the performance of LLM-based evaluators. These methods depend on expert judgments, which may not always be feasible, particularly in open-ended tasks.

3 SLMEval Framework

3.1 SLMEval Overview

Let $M = \{1, \dots, n\}$ be the set of models under evaluation. | https://arxiv.org/abs/2505.16003v1 |
For a given prompt $Q$, an LLM- or SLM-based evaluator $f$ compares two model outputs | https://arxiv.org/abs/2505.16003v1 |
$R_i$ and $R_j$ from models $i, j \in M$, where $i \neq j$. The evaluator $f$ returns scalar scores $S_{ij}, S_{ji} \in [1, 10]$ such that:

$$f(Q, R_i, R_j) = \{S_{ij}, S_{ji}\} \quad (1)$$

By default, model $i$ is said to beat $j$ if $S_{ij} > S_{ji}$. However, scores suffer from several well-documented biases and limitations, such as token length bias (Li et al., 2023), position bias (Wang et al., 2024), and repeated score patterns (Liu et al., 2023b), that can distort comparative judgments. To mitigate these biases, we introduce weights $p = \{p_i\}_{i=1}^{n}$, similar to G-Eval (Liu et al., 2023b), with the constraint

$$p_i > 0, \quad \sum_{i=1}^{n} p_i = 1 \quad (2)$$

Unlike G-Eval, which refines evaluator scores using token-level output probabilities, SLMEval assigns each model $i$ a latent strength parameter $p_i$ estimated from a distribution designed to reflect human preferences. These parameters are then used to calibrate the evaluator's raw scores. The win condition between models $i$ and $j$ is redefined as:

$$\text{model } i \text{ beats } j \iff p_i S_{ij} > p_j S_{ji} \quad (3)$$

3.2 Entropy-Based Weight Estimation

To obtain an unbiased estimate of $p$, we maximize the Shannon entropy using the principle of maximum entropy (Jaynes, 1957), selecting the distribution with the highest entropy among those consistent with observed human preferences:

$$\max_p H(p) = -\sum_{i=1}^{n} p_i \log p_i \quad (4)$$

This objective is optimized subject to constraints derived from observed human comparisons. Details of the optimization procedure are provided in Appendix A.5.

3.3 Human Preference Constraints

Let $\mathcal{D}_{\text{human}} \subseteq M \times M$ denote the set of model pairs for which we have a small amount of sampled human evaluation data for calibration purposes. For each $(i, j) \in \mathcal{D}_{\text{human}}$, we define

$$P(i > j) = \Pr[\text{human judges prefer } R_i \text{ over } R_j]$$

If preferences were perfectly consistent and followed a Bradley-Terry model, then $P(i > j) = \frac{p_i}{p_i + p_j}$ (Bradley and Terry, 1952). In practice, human data are noisy and may violate transitivity (Bradley and Terry, 1952; Daynauth et al., 2024). To ensure robustness, we use the relaxed constraint:

$$p_i \geq P(i > j)\,(p_i + p_j), \quad \forall (i, j) \in \mathcal{D}_{\text{human}} \quad (5)$$

3.4 Optimization and Final Ranking

SLMEval computes the calibrated weights $p$ by maximizing the entropy function $H(p)$, subject to the normalization constraint (Eq. 2) and the relaxed preference constraints (Eq. 5). This yields a probability distribution that reflects the relative strength of each model while remaining minimally biased. For each model $i \in M$, we calculate its win rate by aggregating the results of all calibrated pairwise comparisons against every other model $j \in M$, $j \neq i$. Specifically, the win rate is defined as

$$\text{win\_rate}(i) = \frac{1}{n-1} \sum_{\substack{j \in M \\ j \neq i}} \mathbb{1}\left[p_i S_{ij} > p_j S_{ji}\right]$$

The final ranking is obtained by sorting all models in descending order of $\text{win\_rate}(i)$. | https://arxiv.org/abs/2505.16003v1 |
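The calibration pipeline of Sections 3.2–3.4 can be sketched compactly. The entropy maximization below follows the SLSQP setup the authors describe in Appendix A.5; the preference probabilities and raw score matrix in the example are hypothetical placeholders, not the paper's data.

```python
# Minimal sketch of SLMEval's calibration (Eqs. 2-5) and win-rate ranking,
# using the SLSQP setup from Appendix A.5. Inputs here are hypothetical.
import numpy as np
from scipy.optimize import minimize

def calibrate_weights(pref, n, eps=1e-8):
    """Maximize Shannon entropy subject to the relaxed preference constraints.
    `pref` maps sampled pairs (i, j) -> empirical P(human prefers i over j)."""
    neg_entropy = lambda p: float(np.sum(p * np.log(p)))      # minimize -H(p)
    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]   # sum_i p_i = 1
    for (i, j), pij in pref.items():                          # p_i >= P(i>j)(p_i+p_j)
        cons.append({"type": "ineq",
                     "fun": lambda p, i=i, j=j, pij=pij: p[i] - pij * (p[i] + p[j])})
    res = minimize(neg_entropy, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(eps, 1.0)] * n, constraints=cons, tol=1e-8)
    return res.x

def win_rates(S, p):
    """Calibrated win rate: i beats j iff p_i * S_ij > p_j * S_ji (Eq. 3)."""
    n = len(p)
    wins = [[p[i] * S[i][j] > p[j] * S[j][i] for j in range(n) if j != i]
            for i in range(n)]
    return [sum(w) / (n - 1) for w in wins]

# Hypothetical example: three models, two human-annotated pairs.
pref = {(0, 1): 0.7, (1, 2): 0.6}
p = calibrate_weights(pref, n=3)
S = [[0, 6, 5], [7, 0, 8], [6, 4, 0]]   # raw evaluator scores S_ij in [1, 10]
print(p, win_rates(S, p))
```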
4 Experiments

The application scenario in this study involves the evaluation of a series of generative models (see Appendix A.4) on their responses to users of a personal task management and productivity application. Users create and manage their plans and tasks across all aspects of their lives, such as work, personal health, and finances, to stay organized, focused, and productive.

4.1 Production Use Cases

Our use cases focus on generating motivational and actionable advice for users of a to-do list application, which leverages a generative model in the backend to inspire and engage clients. The following two use cases are used:

• Daily Pep Talk (PT): At the beginning of each day, an encouraging message is presented to the user based on what they accomplished the previous day and their goals for the current day.
• Recommendation (RE): A personalized suggestion provided to users, helping them organize their tasks into manageable categories.

4.2 Baselines

Model responses were ranked using a range of reference-based and LLM-based evaluation metrics, including embedding-based, lexical, and GPT-4-based scorers. A full list of the metrics and corresponding references is provided in Table 4. We evaluated two versions of G-Eval, one with probabilistic refinement and one without, both using chain-of-thought reasoning.

4.3 Implementation Details

Dataset. We collect human preference data from 360 Clickworker annotators, each comparing paired model responses for randomly selected prompts from the two use cases. The tasks and model outputs were uniformly distributed to ensure fairness, yielding 360 evaluations per task. The prompts were sampled from an internal to-do list application.¹

Evaluator. For SLMEval, we utilized a 4-bit quantized version of LLaMA 3.1 (with temp = 1.0) as our evaluation model. This choice was driven by its low computational requirements while retaining much of the performance of its larger counterpart.

¹ https://www.myca.ai

| Evaluator | Peptalk ρ | Recommendation ρ |
|---|---|---|
| USE | 0.17 | -0.47 |
| TF-IDF | 0.12 | 0.21 |
| SBERT | 0.01 | -0.40 |
| BERTScore | -0.25 | -0.35 |
| BLEURTScore | 0.20 | -0.47 |
| GPTScorer | 0.35 | -0.15 |
| GPTScore | -0.14 | 0.39 |
| G-Eval (CoT) | 0.41 | -0.55 |
| G-Eval (Prob+CoT) | 0.19 | -0.53 |
| GPT-4 + BPC (k = 3) | -0.08 | -0.48 |
| SLMEval (Ours) | 0.48 | 0.57 |

Table 1: Spearman correlation (ρ) with human judgments on Peptalk and Recommendation tasks. Bold indicates the highest correlation per task.

5 Results and Discussion

Table 1 reports the performance of automated evaluators on each task, measured by the Spearman rank correlation (ρ). The complete results, including Spearman's ρ and Kendall's τ, are provided in Appendix A.5.4. | https://arxiv.org/abs/2505.16003v1 |
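For reference, the correlations reported in Tables 1 and 4 are standard Spearman and Kendall rank statistics; a minimal illustration with hypothetical model rankings (not the paper's data):

```python
# How the reported rank correlations are computed; the rankings are made up.
from scipy.stats import kendalltau, spearmanr

human     = [1, 2, 3, 4, 5, 6]   # human-derived ranking of six models
evaluator = [2, 1, 3, 6, 4, 5]   # ranking induced by an automated evaluator

rho, _ = spearmanr(human, evaluator)
tau, _ = kendalltau(human, evaluator)
print(f"Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")
```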
5.1 Performance on Production Use Cases

GPTScorer aligns reasonably well with human judgments on Peptalk, but it does not achieve meaningful alignment on Recommendation. In contrast, GPTScore (Fu et al., 2024) performs adequately on Recommendation but struggles on Peptalk. These discrepancies highlight the limitations of current evaluators and emphasize the need for human feedback on tasks that admit diverse perspectives. The negative correlation scores of most evaluators suggest that reference-based evaluation is poorly suited for tasks where multiple diverse responses may be equally valid and human preferences are more subjective. In contrast, SLMEval does not rely on predefined references and outperforms these methods, demonstrating stronger adaptability to real-world tasks with high variability in acceptable output.

5.2 Performance on Open Datasets

Although SLMEval is designed for real-world, application-specific tasks, we also evaluate its performance on a standardized open benchmark to assess generalizability. Specifically, we used the FairEval dataset from Wang et al. (2024), with MT-Bench (Zheng et al., 2024) serving as a human reference. We compare SLMEval's pairwise prediction accuracy with the existing evaluators listed in Table 1.

| Evaluator | Accuracy | API Cost Increase |
|---|---|---|
| GPTScorer | 52.7% | 1.0x |
| GPTScore | 40% | 2.0x |
| G-Eval | 45% | 2.0x |
| GPT-4 + BPC (k = 3) | 62.5% | 6x |
| SLMEval | 58.8% | 0.2x |

Table 2: Comparison of evaluation accuracy on the FairEval dataset. SLMEval, using a 4-bit quantized model, outperforms GPTScorer and GPTScore while approaching the performance of GPT-4 with BPC (k = 3).

As shown in Table 2, SLMEval achieves an accuracy of 58.8%, outperforming GPTScorer, GPTScore, and G-Eval, while approaching GPT-4 + BPC (k = 3, 62.5%), despite using a smaller quantized model. This result reinforces a broader trend: many evaluators that perform well on structured benchmarks struggle with open-ended tasks, even when drawn from standardized datasets. SLMEval, on the contrary, maintains strong performance in these settings, supporting its robustness beyond application-specific use. A key advantage of SLMEval is its cost-effectiveness. Although GPT-4 + BPC achieves the highest accuracy, it incurs an API cost approximately six times that of GPTScorer. In contrast, SLMEval uses a 4-bit quantized model served locally via Ollama. When deployed in-house, its operational cost is negligible, as it runs efficiently on a standard laptop. For fair comparison, we estimate its cloud cost using AWS pricing, following the methodology from Irugalbandara et al. (2024).

6 Conclusion

Evaluating the performance of large language models (LLMs) on subjective and application-specific tasks presents unique challenges that traditional automated evaluators often do not address. This paper introduced SLMEval, a scalable and efficient framework designed to bridge the gap between automated evaluation and human preferences. SLMEval represents a practical and scalable solution to the challenges of evaluating LLMs on subjective tasks, achieving a balance between the reliability of human judgment and the efficiency of automated methods. This improvement is crucial for applications where user satisfaction and perceived utility are paramount, such as providing motivation, advice, or personalized recommendations.

7 Limitations

Our research focuses on a narrow application use case, highlighting the limitations of current auto-evaluation techniques. This narrow scope was necessary due to the significant time and monetary investments required to gather human feedback data. Future research will aim to extend this work to more diverse use cases and incorporate traditional benchmarks such as MT-Bench and Chatbot Arena. Furthermore, future research will explore dynamic recalibration techniques that can adjust evaluation criteria in real time based on ongoing performance metrics and evolving human preferences. These advances have the potential to further enhance the adaptability and responsiveness of SLMEval in a wider range of application scenarios, improving its utility in both subjective and traditional evaluation contexts.

References

Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. 2021. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861.

Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345.

Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175. | https://arxiv.org/abs/2505.16003v1 |
Yi Chen, Rui Wang, Haiyun Jiang, Shuming Shi, and Ruifeng Xu. 2023. Exploring the use of large language models for reference-free text quality evaluation: An empirical study. In Findings of the Association for Computational Linguistics: IJCNLP-AACL 2023 (Findings), pages 361–374.

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality.

Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E. Gonzalez, et al. 2024. Chatbot Arena: An open platform for evaluating LLMs by human preference. arXiv preprint arXiv:2403.04132.

Roland Daynauth, Christopher Clarke, Krisztian Flautner, Lingjia Tang, and Jason Mars. 2024. Ranking unraveled: Recipes for LLM rankings in head-to-head AI combat. arXiv preprint arXiv:2411.14483.

Yann Dubois, Balázs Galambosi, Percy Liang, and Tatsunori B Hashimoto. 2024. Length-controlled AlpacaEval: A simple way to debias automatic evaluators. arXiv preprint arXiv:2404.04475.

Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. GPTScore: Evaluate as you desire. arXiv preprint arXiv:2302.04166.

Jinlan Fu, See Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2024. GPTScore: Evaluate as you desire. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 6556–6576.

Chandra Irugalbandara, Ashish Mahendra, Roland Daynauth, Tharuka Kasthuri Arachchige, Jayanaka Dantanarayana, Krisztian Flautner, Lingjia Tang, Yiping Kang, and Jason Mars. 2024. Scaling down to scale up: A cost-benefit analysis of replacing OpenAI's LLM with open source SLMs in production. In 2024 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), pages 280–291. IEEE.

E. T. Jaynes. 1957. Information theory and statistical mechanics. Phys. Rev. (2), 106:620–630.

Jiaming Ji, Donghai Hong, Borong Zhang, Boyuan Chen, Josef Dai, Boren Zheng, Tianyi Qiu, Boxun Li, and Yaodong Yang. 2024. PKU-SafeRLHF: Towards multi-level safety alignment for LLMs with human preference. arXiv preprint arXiv:2406.15513.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. Preprint, arXiv:2310.06825.

Tom Kocmi and Christian Federmann. 2023. Large language models are state-of-the-art evaluators of translation quality. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, pages 193–203.

Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. AlpacaEval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81.

Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023a. G-Eval: NLG evaluation using GPT-4 with better human alignment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2511–2522, Singapore. Association for Computational Linguistics.

Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023b. G-Eval: NLG evaluation using GPT-4 with better human alignment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2511–2522.

Yuxuan Liu, Tianchi Yang, Shaohan Huang, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, and Qi Zhang. 2024. Calibrating LLM-based evaluator. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 2638–2656.

Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andres Codas, Clarisse Simoes, Sahaj Agarwal, Xuxi Chen, Anastasia Razdaibiedina, Erik Jones, Kriti Aggarwal, Hamid Palangi, Guoqing Zheng, Corby Rosset, Hamed Khanpour, and Ahmed Awadallah. 2023. Orca 2: Teaching small language models how to reason. Preprint, arXiv:2311.11045.

Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. 2023. Orca: Progressive learning from complex explanation traces of GPT-4. Preprint, arXiv:2306.02707.

OpenAI. 2024. GPT-4. Accessed: 2024-10-24.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36.

Anand Rajaraman and Jeffrey David Ullman. 2011. Mining of Massive Datasets. Cambridge University Press.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. arXiv preprint arXiv:1908.10084.

Thibault Sellam, Dipanjan Das, and Ankur P. Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.

Intel Analytics Software. 2023. The practice of supervised finetuning and direct preference optimization on Habana Gaudi2. Accessed: 2024-10-24.

Stability AI. 2024. StableLM Zephyr 3B: Stability's latest LLM. Accessed: 2024-10-24.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, and Thomas Wolf. 2023. Zephyr: Direct distillation of LM alignment. Preprint, arXiv:2310.16944.

Guan Wang, Sijie Cheng, Qiying Yu, and Changling Liu. 2023. OpenLLMs: Less is more for open-source models. URL https://github.com/imoneoi/openchat.

Peiyi Wang, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Lingpeng Kong, Qi Liu, Tianyu Liu, and Zhifang Sui. 2024. Large language models are not fair evaluators. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9440–9450, Bangkok, Thailand. Association for Computational Linguistics. | https://arxiv.org/abs/2505.16003v1 |
Peiwen Yuan, Shaoxiong Feng, Yiwei Li, Xinglin Wang, Boyuan Pan, Heda Wang, and Kan Li. 2023. BatchEval: Towards human-like text evaluation. arXiv e-prints, pages arXiv–2401.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2024. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. In Proceedings of the 37th International Conference on Neural Information Processing Systems, NIPS '23, Red Hook, NY, USA. Curran Associates Inc.

Banghua Zhu, Evan Frick, Tianhao Wu, Hanlin Zhu, and Jiantao Jiao. 2023. Starling-7B: Improving LLM helpfulness and harmlessness with RLAIF.

A Appendix

A.1 Prompts

The prompt consists of two parts: an initial context specific to the use case (see Appendix A.2 for the Peptalk context) and detailed instructions on how scoring should be performed (see Appendix A.3 for the evaluation prompt).

SLMEval Prompt Template

[User Question]
<prompt>
[The Start of Assistant A's Answer]
Model A's Response
[The End of Assistant A's Answer]
[The Start of Assistant B's Answer]
Model B's Response
[The End of Assistant B's Answer]
[Evaluator Instructions]

A.2 Peptalk Use-Case Prompt

Model Prompt: Imagine you are my personal assistant; generate a short briefing for me at the start of my day. In the briefing, summarize what I completed the previous day and then give me a preview of the key activities for today. In this briefing, consider my goals for this week and tell me if my focused tasks and rituals are aligned with those goals. Carefully evaluate the associations between the tasks and goals and describe the tasks based on how related you think they are. Note that it is possible that a task is not directly associated with any goals. Reference the specific tasks mentioned in the context and generate this briefing in a single, naturally flowing narrative. Avoid simply listing out tasks one by one. Use a motivating and encouraging tone.

A.3 SLMEval Prompt

Evaluator Instructions: Please act as an impartial judge and evaluate the quality of the responses provided by two AI assistants to the user question displayed below. You should choose the assistant that follows the user's instructions and answers the user's question better. Your evaluation should consider factors such as the clarity, intelligence, likability, trustworthiness, and level of detail of their responses. Begin your evaluation by comparing the two responses and provide a short explanation. Avoid any position biases and ensure that the order in which the responses were presented does not influence your decision. Do not allow the length of the responses to influence your evaluation. Do not favor certain names of the assistants. Be as objective as possible. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates a better response.

A.4 LLM/SLM Backend

The use cases are evaluated using the outputs of a diverse set of small language models (Table 3) | https://arxiv.org/abs/2505.16003v1 |
together with GPT-4 (OpenAI, 2024).

| Model | Parameter Size | Quantization |
|---|---|---|
| Zephyr (Tunstall et al., 2023) | 7B | 4-bit |
| Mistral (Jiang et al., 2023) | 7B | 4-bit |
| StableLM-Zephyr (Stability AI, 2024) | 3B | 4-bit |
| Starling-LM (Zhu et al., 2023) | 7B | 4-bit |
| Orca2 (Mitra et al., 2023) | 7B | 4-bit |
| OpenChat (Wang et al., 2023) | 7B | 4-bit |
| LLaMA2 (Touvron et al., 2023) | 7B | 4-bit |
| Neural-Chat (Software, 2023) | 7B | 4-bit |
| Vicuna (Chiang et al., 2023) | 7B | 4-bit |
| Orca-Mini (Mukherjee et al., 2023) | 3B | 4-bit |

Table 3: List of evaluated models with their parameter sizes and quantization levels.

A.5 Entropy Maximization via the Principle of Maximum Entropy

We estimate the latent model-strength distribution $p = \{p_1, \dots, p_n\}$ using the principle of maximum entropy, which selects the least biased distribution consistent with observed constraints. Specifically, we solve for the $p$ that maximizes the Shannon entropy

$$H(p) = -\sum_{i=1}^{n} p_i \log p_i$$

subject to:

1. Normalization: $\sum_{i=1}^{n} p_i = 1$
2. Empirical preference constraints: $p_i \geq P(i > j)\,(p_i + p_j), \ \forall (i, j) \in \mathcal{D}_{\text{human}}$

where $P(i > j)$ is the empirical probability that model $i$ is preferred over model $j$ based on the human-labeled dataset $\mathcal{D}_{\text{human}}$.

A.5.1 Optimization Procedure

We solve the above as a constrained optimization problem using the SLSQP method in scipy.optimize.minimize. Since minimize performs minimization, we instead minimize the negative entropy

$$\min_p \sum_{i=1}^{n} p_i \log p_i$$

subject to:

• Equality constraint: $\sum_{i=1}^{n} p_i = 1$
• Inequality constraints: for each $(i, j) \in \mathcal{D}_{\text{human}}$, $p_i - P(i > j)\,(p_i + p_j) \geq 0$
• Bound constraints: $p_i \geq \epsilon$ for a small $\epsilon > 0$ (e.g., $10^{-8}$) to avoid numerical instability.

A.5.2 Implementation Notes

• The entropy function is strictly concave, and the constraint set is convex, making the problem well suited to gradient-based solvers.
• We initialize $p$ with the uniform distribution: $p_i = 1/n$ for all $i$.
• Constraints are enforced with tight numerical tolerances (e.g., tol=1e-8) to maintain solution validity.

A.5.3 Stability Considerations

To ensure numerical stability:

• All $p_i$ are constrained to be strictly positive.
• Logarithmic terms $\log p_i$ are safe from domain errors due to the positivity constraint.

A.5.4 Full Results

| Evaluator | Peptalk ρ | Peptalk τ | Recommendation ρ | Recommendation τ |
|---|---|---|---|---|
| USE (Cer et al., 2018) | 0.17 | 0.13 | -0.47 | -0.36 |
| TF-IDF (Rajaraman and Ullman, 2011) | 0.12 | 0.05 | 0.21 | 0.18 |
| SBERT (Reimers and Gurevych, 2019) | 0.01 | -0.02 | -0.40 | -0.26 |
| BERTScore (Zhang et al., 2020) | -0.25 | -0.13 | -0.35 | -0.26 |
| BLEURTScore (Sellam et al., 2020) | 0.20 | 0.16 | -0.47 | -0.33 |
| GPTScorer (Irugalbandara et al., 2024) | 0.35 | 0.28 | -0.15 | -0.10 |
| GPTScore (Fu et al., 2024) | -0.14 | -0.07 | 0.39 | 0.29 |
| G-Eval (CoT) (Liu et al., 2023b) | 0.41 | 0.35 | -0.55 | -0.40 |
| G-Eval (probs + CoT) | 0.19 | 0.10 | -0.53 | -0.37 |
| GPT-4 + BPC (k = 3) (Wang et al., 2024) | -0.08 | -0.10 | -0.48 | -0.56 |
| SLMEval (Ours) | 0.48 | 0.34 | 0.57 | 0.42 |

Table 4: Comparison of correlation scores (ρ: Spearman, τ: Kendall) for auto-evaluators on Peptalk and Recommendation tasks. Bold values indicate the highest correlation scores for each task. | https://arxiv.org/abs/2505.16003v1 |
Interpretability Illusions with Sparse Autoencoders: Evaluating Robustness of Concept Representations

Aaron J. Li¹, Suraj Srinivas², Usha Bhalla¹, and Himabindu Lakkaraju¹
¹Harvard University ²Bosch Research

Abstract

Sparse autoencoders (SAEs) are commonly used to interpret the internal activations of large language models (LLMs) by mapping them to human-interpretable concept representations. While existing evaluations of SAEs focus on metrics such as the reconstruction-sparsity tradeoff, human (auto-)interpretability, and feature disentanglement, they overlook a critical aspect: the robustness of concept representations to input perturbations. We argue that robustness must be a fundamental consideration for concept representations, reflecting the fidelity of concept labeling. To this end, we formulate robustness quantification as input-space optimization problems and develop a comprehensive evaluation framework featuring realistic scenarios in which adversarial perturbations are crafted to manipulate SAE representations. Empirically, we find that tiny adversarial input perturbations can effectively manipulate concept-based interpretations in most scenarios without notably affecting the outputs of the base LLMs themselves. Overall, our results suggest that SAE concept representations are fragile and may be ill-suited for applications in model monitoring and oversight. The code for our experiments is available at https://github.com/AI4LIFE-GROUP/sae_robustness.

1. Introduction

As large language models (LLMs) become widely used across diverse applications, the need to monitor and summarize their internal representations is critical for both interpretability and reliability. Sparse autoencoders (SAEs) [Cunningham et al., 2023] have shown promise as an unsupervised approach to map LLM embeddings to sparse, interpretable concept embeddings via dictionary learning, where each neuron's activation can be associated with specific, human-understandable concepts. Besides the reconstruction-sparsity Pareto frontier [Gao et al., 2024] and the human-understandability of the learned SAE latents [Paulo et al., 2024], a growing number of recent works have considered SAEs' feature disentanglement and concept detection capabilities [Karvonen et al., 2024a, 2025] as important components of SAE evaluation. However, while existing works show promise for the usage of SAEs in co-operative contexts, where both the explanation provider and the user share similar incentives, their applicability to adversarial contexts remains underexplored. We borrow the nomenclature of "co-operative" and "adversarial" contexts from Bordt et al. [2022], who define an adversarial context as one where the model explainer has conflicting goals with the consumer of the explanation. For example, an adversarial user may craft prompts that manipulate SAE activations to bypass refusal mechanisms or produce benign-looking interpretations, thereby evading safety systems built on top of the model's SAE representations. More broadly, if SAE-derived latent spaces are vulnerable to minimal input perturbations, adversaries could exploit this to conceal harmful, deceptive, or biased model outputs from downstream users. | https://arxiv.org/abs/2505.16004v1 |
Conversely, if minor variations in inputs lead semantically unrelated prompts to yield similar SAE representations, it is challenging to assign precise, human-interpretable labels to individual concepts and to reason systematically about the effects of representation editing, raising questions about the fragility of concept labeling. These vulnerabilities extend beyond what prior evaluations of monosemanticity [Bricken et al., 2023, Minegishi et al., 2025], which assume static, coherent, and unperturbed inputs, could uncover, raising deeper concerns about the robustness and reliability of SAE interpretations in trustworthy AI contexts. In this work, we propose | https://arxiv.org/abs/2505.16004v1 |
incorporating robustness as a core component of the SAE evaluation agenda, and of concept-based explanations in general. We begin by defining SAE robustness under input-level perturbations, followed by the specification of adversarial settings for a comprehensive robustness evaluation. Each setting is characterized by two key objectives and a perturbation mode, which determines how the input-level attack is applied. First, the semantic goal: the perturbation could be either untargeted, aiming to alter the SAE activations from the original, or targeted, aiming to match the perturbed activations to those of another, semantically unrelated input. Second, the activation goal: either at the population level, where the objective is to manipulate all SAE neurons simultaneously to alter the overall activation pattern, or at the individual level, where the goal is to activate a specific feature independently. Additionally, we consider two basic attack modes in this work: suffix-based attacks, where new tokens are appended to the input, and replacement-based attacks, where original tokens are allowed to be substituted with adversarial ones. Putting these together, we define eight different scenarios to evaluate SAE concept robustness against adversarial perturbations. To search for adversarial inputs that manipulate SAE interpretations, we adapt Greedy Coordinate Gradient (GCG) [Zou et al., 2023], a widely used input-space search algorithm originally developed to generate adversarial prompts that induce harmful outputs in LLMs, to our SAE settings. Our findings reveal that current SAEs are not sufficiently robust to serve as reliable tools for interpreting and monitoring LLM behaviors, nor as trustworthy components in downstream applications that depend on stable, concept-level representations. We further believe that this vulnerability to adversarial input manipulation could generalize to other SAE-inspired approaches such as transcoders [Dunefsky et al., 2024] and crosscoders [Lindsey et al., 2024], highlighting the need for future research on developing more robust and reliable concept-extraction methods in the field of mechanistic interpretability. Our main contributions can be summarized as follows:

• We identify robustness as a critical yet underexplored dimension in evaluating SAEs, expanding the current evaluation agenda by introducing input-level perturbations.
• We propose a comprehensive evaluation framework that defines SAE robustness along semantic goals, activation goals, and perturbation modes, resulting in eight distinct evaluation scenarios.
• We conduct extensive experiments by designing adversarial input-level attacks, showing that SAE interpretations are consistently vulnerable to input perturbations across different datasets and model configurations.

2. Related Work

SAE as an Interpretability Tool. Since the SAE was first proposed by Cunningham et al. [2023] as an effective approach for mechanistic interpretability [Bereska and Gavves, 2024, Sharkey et al., 2025], extensive works have focused on improving its architectural design [Rajamanoharan et al., 2024a, Mudide et al., 2024], activation functions [Gao et al., 2024, Rajamanoharan et al., 2024b, Bussmann et al., 2024], and loss functions [Karvonen et al., 2024b, Marks et al., 2024a]. | https://arxiv.org/abs/2505.16004v1 |
SAEs have been applied to study LLM internal dynamics [Kissane et al., 2024, Ziyin et al., 2024, O'Neill et al., 2024, Balagansky et al., 2024, Lawson et al., 2024], to control model behaviors [Marks et al., 2024b, Chalnev et al., 2024], and to facilitate various downstream applications [Magalhães et al., 2024, Lei et al., 2024].

Evaluation of SAEs. Beyond the reconstruction–sparsity tradeoff [Gao et al., 2024], which has largely | https://arxiv.org/abs/2505.16004v1 |
shaped the design of SAE training objectives, and the alignment of learned latents with human knowledge (i.e., human-understandability) [Cunningham et al., 2023, Paulo et al., 2024], recent works have begun to assess SAE performance from a more interpretation-centric perspective [Makelov et al., 2024, Karvonen et al., 2025, Bhalla et al., 2024]. These efforts include evaluating whether prespecified, meaningful concepts can be captured by individual latents [Gurnee et al., 2023, Chanin et al., 2024] and whether independent semantic features are properly disentangled in the latent space [Huang et al., 2024, Karvonen et al., 2024a]. Our work complements and extends these static evaluations of concept detection and feature disentanglement [Karvonen et al., 2025] by introducing adversarial perturbations at the input level to assess the robustness of SAE-derived interpretations.

Adversarial Attacks and Prompt Optimization. LLMs are known to be vulnerable to adversarial attacks in the input space [Chen et al., 2022, Zou et al., 2023, Kumar et al., 2023, Zeng et al., 2024, Das et al., 2025], where small perturbations to prompts can lead to degraded cognitive performance or harmful model generations. Greedy Coordinate Gradient (GCG) [Zou et al., 2023] is a universal prompt-optimization paradigm that searches for promising tokens to minimize a specified language-model loss. In this work, we generalize GCG to the SAE setting to construct effective adversarial inputs that render SAE interpretations unreliable.

3. Evaluating the Robustness of SAE Interpretations

In this section, we introduce a formal framework for evaluating SAE robustness. We begin by formulating robustness as input-space optimization problems, then present an evaluation framework based on structured adversarial scenarios, and finally propose a generalized input-level attack algorithm for solving the optimization objectives.

3.1 Preliminaries

Sparse autoencoders (SAEs) are linear layers typically trained on the residual stream of LLMs, with distinct weights for each layer. Formally, the target LLM $f_{\text{LLM}}: \mathcal{X} \to \mathcal{H}$ first maps an input sequence $x$ to a hidden state $h$, and then the SAE $f_{\text{SAE}}: \mathcal{H} \to \mathcal{Z}$ projects it to the sparse latent space. The SAE encoding and decoding processes are given by:

$$z = \phi(W_{\text{enc}} h + b_{\text{enc}}) \quad (1)$$
$$\hat{h} = W_{\text{dec}} z + b_{\text{dec}} \quad (2)$$

During encoding, $\phi$ is a sparsity-encouraging activation function; popular choices include ReLU [Cunningham et al., 2023] and TopK [Gao et al., 2024]. During decoding, $\hat{h}$ is reconstructed as a sparse linear combination of interpretable concepts in the dictionary, plus a bias term. | https://arxiv.org/abs/2505.16004v1 |
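A minimal sketch of the encoder-decoder in equations (1)-(2), instantiating φ as a TopK activation; the layer sizes below are illustrative toy values, not the pretrained SAEs evaluated later (which have width 131k):

```python
# Toy TopK sparse autoencoder mirroring Eqs. (1)-(2); sizes are illustrative.
import torch

class TopKSAE(torch.nn.Module):
    def __init__(self, d_model: int, d_sae: int, k: int):
        super().__init__()
        self.enc = torch.nn.Linear(d_model, d_sae)  # W_enc, b_enc
        self.dec = torch.nn.Linear(d_sae, d_model)  # W_dec, b_dec
        self.k = k

    def encode(self, h: torch.Tensor) -> torch.Tensor:
        z_tilde = self.enc(h)                          # raw pre-activations
        top = torch.topk(z_tilde, self.k, dim=-1)      # keep the k largest latents
        z = torch.zeros_like(z_tilde)
        return z.scatter(-1, top.indices, top.values)  # phi = TopK

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.dec(self.encode(h))                # h_hat = W_dec z + b_dec

sae = TopKSAE(d_model=256, d_sae=4096, k=32)
h = torch.randn(2, 256)                                # stand-in residual-stream states
print(sae(h).shape, (sae.encode(h) != 0).sum(-1))      # reconstruction shape, sparsity
```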
3.2 Proposed Theory of SAE Robustness

Independent of LLMs and SAEs, we assume the existence of a 'ground-truth' concept mapping $f_c$ from the input space $\mathcal{X}$ to a semantic concept space $\mathcal{C}$, such that an input sequence corresponds to a ground-truth concept vector capturing the magnitudes of all semantic features. This mapping provides an external reference for interpretability: under this setup, evaluating SAE interpretability can ultimately be considered as assessing the degree of alignment between the learned sparse latent space $\mathcal{Z}$ and $\mathcal{C}$. Ideally, this mapping should be close to a bijection, implying both monosemanticity (i.e., a single latent encodes a single concept) and concept identifiability (i.e., a single concept can be captured by a small number of latents) [Karvonen et al., 2025].

We now consider the conditions under which such alignment is violated. This can be formally expressed in two directions as:

$$\exists\, c_1, c_1' \in \mathcal{C}: \ d_c(c_1, c_1') < \epsilon_c,\ d_z(z_1, z_1') > \delta_z \quad (3)$$
$$\exists\, c_1, c_2 \in \mathcal{C}: \ d_c(c_1, c_2) > \delta_c,\ d_z(z_1, z_2) < \epsilon_z \quad (4)$$

where $z_i = (f_{\text{SAE}} \circ f_{\text{LLM}} \circ f_c^{-1})(c_i)$ for all $c_i \in \mathcal{C}$. The distance metrics $d_c$ and $d_z$, along with the thresholds $\epsilon_c, \delta_c, \epsilon_z, \delta_z$, should be chosen based on the criteria for when two semantic concept vectors or SAE activations are considered highly similar or entirely unrelated. Simply stated, the violations hold when similar semantic concepts correspond to dissimilar SAE concept vectors, or vice versa. However, since directly searching in the hypothetical concept space $\mathcal{C}$ is impractical, existing works [Gurnee et al., 2023, Huang et al., 2024, Marks et al., 2024b, Karvonen et al., 2024a] often simplify this problem setup by focusing on prespecified concepts and then applying perturbations in $\mathcal{Z}$ with predetermined directions and step sizes. In this setup, $d_z$ is typically measured by the overlap between two sets of top-$k$ activated SAE latents, while $d_c$ is quantified by the accuracies of external probes trained to predict the presence of specific concepts. As we cannot directly apply perturbations in $\mathcal{C}$, we instead propose to apply input perturbations in $\mathcal{X}$, by making a Lipschitzness assumption about the 'ground-truth' concept map $f_c$. Specifically, the distance metric $d_x$ is defined as the Levenshtein distance [Levenshtein et al., 1966] between two token sequences (i.e., the minimum number of single-token insertions, deletions, or replacements required to transform $x_i$ into $x_j$), which locally and proportionally reflects the semantic distance $d_c$ in the concept space.

Assumption 1. $f_c$ is bi-Lipschitz, i.e., $L_1 \cdot d_x(x_i, x_j) \leq d_c(f_c(x_i), f_c(x_j)) \leq L_2 \cdot d_x(x_i, x_j)$ for some constants $L_1, L_2 > 0$ and all $x_i, x_j \in \mathcal{X}$.

This assumption is motivated by the observation that small changes in inputs typically induce only slight and gradual shifts in overall semantic meaning, which makes token-level edit distance a practical proxy for semantic variation, enabling small perturbations at the concept level without requiring direct access to the hypothetical concept space $\mathcal{C}$. Compared to prior approaches, input-level perturbations offer three distinct advantages:

• They support more fine-grained control, allowing perturbations in arbitrary directions and with variable step sizes in the input space (as explained in Section 3.3).
• They enable concept-level evaluations without relying on hand-crafted latent directions or predefined concept labels.
• They better reflect realistic threat models, as it is significantly easier for an adversary to manipulate raw inputs than to intervene in latent or activation spaces.

Therefore, the search problem in the concept space can be transformed into an optimization problem in the input space, by directly investigating the mapping $f_{\text{SAE}} \circ f_{\text{LLM}}: \mathcal{X} \to \mathcal{Z}$. We define the extent to which this bijection is preserved under adversarial input-level perturbations as the robustness of the SAE. For any given input $x_1$, this can be quantified by:

$$\max_{x_1'} d_z(z_1, z_1') \quad \text{subject to } d_x(x_1, x_1') \leq \epsilon_x \quad (5)$$
$$\min_{x_2} d_z(z_1, z_2) \quad \text{subject to } d_x(x_1, x_2) \geq \delta_x \quad (6)$$

These two objectives form the foundation of our evaluation framework.

3.3 Proposed Evaluation Framework

Based on the preceding definition of SAE robustness, we propose a structured evaluation framework that further specifies the optimization problem for empirical analysis. The framework consists of three independent binary dimensions: semantic goal, activation goal, and perturbation mode. | https://arxiv.org/abs/2505.16004v1 |
Each of the eight resulting combinations defines a unique adversarial scenario corresponding to a well-defined optimization task in the input space.

3.3.1 Semantic Goal

The semantic goal determines the direction of the perturbation: | https://arxiv.org/abs/2505.16004v1 |
Untargeted. Given $x_1$ and a fixed edit-distance budget $\epsilon_x$ between token sequences, the attack aims to find a perturbed $x_1'$ that maximizes the difference in SAE activations. The perturbation direction is not predefined but is empirically selected to induce the maximal change in the sparse latent space $\mathcal{Z}$. This setting corresponds exactly to objective (5).

Targeted. Given both $x_1$ and an entirely unrelated target $x_2$, our goal becomes searching for an $x_1'$ that remains close to $x_1$ in $\mathcal{X}$ while resembling $x_2$ in $\mathcal{Z}$:

$$\min_{x_1'} d_z(x_1', x_2) \quad \text{subject to } d_x(x_1, x_1') \leq \epsilon_x \quad (7)$$

This is equivalent to objective (6), since $d_x(x_1, x_1') \leq \epsilon_x$ implies $d_x(x_1', x_2) \geq \delta_x$. While it may seem that the pair $(x_1, x_1')$ could also satisfy objective (5) simultaneously, we nevertheless define the untargeted setting separately, as the perturbation here follows a fixed direction toward $x_2$, making it a more constrained scenario. These two settings capture distinct adversarial objectives: untargeted perturbations evaluate the general fragility of the sparse latent space $\mathcal{Z}$, while targeted perturbations test whether SAE activations can be deliberately steered toward misleading interpretations.

3.3.2 Activation Goal

The activation goal defines the distance metric $d_z$ within the SAE latent space. Given two SAE activation vectors $z_i, z_j \in \mathcal{Z}$, the perturbation goal can vary in granularity. In this work, we consider the following two levels (a code sketch of both metrics follows this subsection).

Population level. The goal is to manipulate groups of SAE features simultaneously to control the overall sparse representation vector, so the distance can be defined via the overlap between the two sets of $k$ most-activated SAE latents (we call this metric the neuron overlap ratio throughout this work):

$$d_z(z_i, z_j) = 1 - \frac{|\mathcal{I}_k(z_i) \cap \mathcal{I}_k(z_j)|}{k} \quad (8)$$

where $\mathcal{I}_k(z)$ denotes the set of indices of the $k$ most-activated latents in vector $z$. To align with the notion of activating/deactivating SAE latents, $k$ is always set to the number of non-zero latents activated by the target sequence $x_2$.

Individual level. The perturbation may also be directed toward a specific SAE feature by modifying its rank among all latent dimensions. In the case of activation, the objective is to increase the feature's rank until it has non-zero activation after $\phi$. Conversely, in the case of deactivation, the objective is to reduce the feature's rank such that the latent becomes zero. We thus define $d_z$ in a binary manner:

$$d_z(z_i, z_j) = \mathbb{1}\left[\mathbb{1}_{z_i^{(t)} > 0} \neq \mathbb{1}_{z_j^{(t)} > 0}\right] \quad (9)$$

where $t$ is the index of the target SAE feature to be manipulated. These two levels capture complementary structural vulnerabilities of SAE representations. The population-level objective assesses global interpretation shifts, while the individual-level objective probes the microscopic stability of specific SAE features. | https://arxiv.org/abs/2505.16004v1 |
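A direct sketch of the two distances in equations (8) and (9); the activation vectors below are illustrative stand-ins for real SAE outputs:

```python
# The population-level (Eq. 8) and individual-level (Eq. 9) distances in torch.
import torch

def neuron_overlap_distance(z_i: torch.Tensor, z_j: torch.Tensor, k: int) -> float:
    """Eq. (8): one minus the overlap ratio of the top-k activated latent indices."""
    top_i = set(torch.topk(z_i, k).indices.tolist())
    top_j = set(torch.topk(z_j, k).indices.tolist())
    return 1.0 - len(top_i & top_j) / k

def individual_distance(z_i: torch.Tensor, z_j: torch.Tensor, t: int) -> float:
    """Eq. (9): 1 iff target latent t is active (nonzero) in exactly one vector."""
    return float((z_i[t] > 0) != (z_j[t] > 0))

# Toy sparse activation vectors standing in for SAE outputs.
z1, z2 = torch.relu(torch.randn(1024)), torch.relu(torch.randn(1024))
print(neuron_overlap_distance(z1, z2, k=32), individual_distance(z1, z2, t=7))
```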
3.3.3 Perturbation Mode

Analogous to adversarial attacks on LLMs, the perturbation mode defines how adversarial edits are applied to the input. In this work, we consider two fundamental types.

Suffix mode. Adversarial tokens are appended to the end of the original input. This preserves the original semantic content while allowing the attacker to introduce new features that can dominate the existing ones by appending a limited-length suffix.

Replacement mode. Tokens in the original input are substituted with adversarial ones. Compared to the suffix mode, this approach directly modifies existing semantic features, and we therefore restrict the perturbation to a single token replacement to ensure that the change at the input level remains minimal. The overall robustness under replacement mode can be measured by the average across all token positions.

These two modes represent different types of adversarial control, and both are highly plausible in practical deployment scenarios. By systematically combining semantic goal, activation goal, and perturbation mode, we have defined a total of eight different settings for evaluating SAE robustness.

3.4 Generalized Input-level Attack for SAE

Inspired by Greedy Coordinate Gradient (GCG) [Zou et al., 2023], originally used to elicit harmful LLM outputs, we propose a generalized algorithm to find the best adversarial input-level perturbations as part of our evaluation framework for SAE robustness. To search for promising tokens in the discrete input space $\mathcal{X}$, traditional GCG employs an iterative optimization procedure: at each iteration, it first computes gradients with respect to token embeddings using a designated loss function, typically a language-modeling loss aimed at aligning outputs with expected behavior; it then samples a batch of adversarial prompts based on the gradients, evaluates them under the same loss function, and finally selects the most effective candidate to proceed to the next iteration. A primary challenge in directly applying GCG to our SAE setting lies in the non-differentiability of the distance metrics in $\mathcal{Z}$, as defined by equations (8) and (9). Therefore, we compute the gradients with differentiable loss functions defined over the continuous SAE representation space, while candidate solutions are evaluated using the original non-differentiable distance metrics defined over the sparse latent space. We summarize the loss functions and evaluation metrics used for the different semantic and activation goals in Table 1.

| Semantic Goal | Population Level: GCG Loss | Population Level: Evaluation | Individual Level: GCG Loss | Individual Level: Evaluation |
|---|---|---|---|---|
| Untargeted | $\frac{\tilde{z}_1 \cdot \tilde{z}_1'}{\lVert\tilde{z}_1\rVert\,\lVert\tilde{z}_1'\rVert}$ | $\frac{\lvert\mathcal{I}_k(z_1) \cap \mathcal{I}_k(z_1')\rvert}{k}$ | $\pm \log \frac{\exp(z_1'^{(t)})}{\sum_j \exp(z_1'^{(j)})}$ | $\pm\,\mathrm{rank}(z_1'^{(t)})$ |
| Targeted | $-\frac{\tilde{z}_1' \cdot \tilde{z}_2}{\lVert\tilde{z}_1'\rVert\,\lVert\tilde{z}_2\rVert}$ | $1 - \frac{\lvert\mathcal{I}_k(z_1') \cap \mathcal{I}_k(z_2)\rvert}{k}$ | $\pm \log \frac{\exp(z_1'^{(t)})}{\sum_j \exp(z_1'^{(j)})}$ | $\pm\,\mathrm{rank}(z_1'^{(t)})$ |

Table 1: Customized GCG loss functions and evaluation metrics for different combinations of semantic and activation goals. When evaluating individual SAE features, both activation and deactivation tasks are tested.

Here, $\tilde{z} = W_{\text{enc}} h + b_{\text{enc}}$ denotes the raw activation vector prior to applying the sparsity-inducing activation function. We use cosine similarity and log-likelihood as loss functions in the continuous representation space, while retaining the original distance measures in $\mathcal{Z}$ as criteria for selecting adversarial candidates. The only exception is at the individual level, where we replace the original binary distance metric with the rank of the specified SAE feature. The complete pseudocode for our generalized input-level attack is provided in Appendix A; a simplified sketch of a single iteration follows. | https://arxiv.org/abs/2505.16004v1 |
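The sketch below illustrates one iteration of the adapted attack in the targeted population-level setting, following the two-stage GCG pattern described above (gradients over one-hot token inputs to shortlist substitutions, then candidate evaluation). All components are toy stand-ins: `embed`, `model`, and `sae_enc` replace the real LLM embedding matrix, forward pass, and SAE encoder, and for brevity candidates are re-scored with the differentiable cosine loss rather than the non-differentiable overlap metric of Eq. (8) used in the paper.

```python
# One GCG-style iteration adapted to steer raw SAE activations toward a target.
# All modules and sizes below are illustrative placeholders, not the real attack.
import torch
import torch.nn.functional as F

V, D, W = 500, 64, 256                        # toy vocab, hidden, and SAE widths
embed = torch.nn.Embedding(V, D)              # stand-in LLM embedding matrix
model = torch.nn.Linear(D, D)                 # stand-in LLM forward pass to layer L
sae_enc = torch.nn.Linear(D, W)               # stand-in SAE encoder (pre-activation)

def raw_sae_acts(one_hot):
    """z_tilde = W_enc h + b_enc, from a differentiable one-hot token matrix."""
    h = model(one_hot @ embed.weight).mean(0)  # pooled stand-in hidden state
    return sae_enc(h)

def gcg_step(tokens, target_z, suffix, n_cand=64, topk=16):
    one_hot = F.one_hot(tokens, V).float().requires_grad_(True)
    loss = -F.cosine_similarity(raw_sae_acts(one_hot), target_z, dim=0)
    loss.backward()
    scores = -one_hot.grad[suffix]                 # larger = more promising swap
    cand_tok = scores.topk(topk, dim=-1).indices   # shortlist per suffix position

    best, best_loss = tokens, loss.item()
    for _ in range(n_cand):                        # sample and re-score candidates
        pos = torch.randint(cand_tok.size(0), (1,)).item()
        alt = cand_tok[pos, torch.randint(topk, (1,)).item()]
        cand = tokens.clone()
        cand[suffix][pos] = alt
        with torch.no_grad():
            z = raw_sae_acts(F.one_hot(cand, V).float())
            cand_loss = -F.cosine_similarity(z, target_z, dim=0).item()
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
    return best, best_loss

tokens = torch.randint(V, (20,))              # x1 tokens plus a 5-token suffix
target_z = torch.randn(W)                     # raw activations of the target x2
tokens, loss = gcg_step(tokens, target_z, suffix=slice(15, 20))
```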
4. Experimental Evaluation

In this section, we first describe our experimental setup, including model and dataset choices, followed by results across all eight adversarial scenarios. We then present additional analyses to validate the reliability of our evaluation framework, examine robustness trends across model depths, assess cross-model attack transferability, and conclude with a case study on manipulating highly interpretable SAE latents.

4.1 Experimental Setup

Models  We evaluate SAE robustness on two combinations of open-source LLMs and pretrained SAEs trained on the residual streams of the models. In this section, we present the results for (1) layer 20 of Llama-3-8B with TopK as the SAE activation function [Gao et al., 2024] and (2) layer 30 of Gemma-2-9B with a JumpReLU SAE [Rajamanoharan et al., 2024b]. Both SAEs have a width of 131k.
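For reference, the two sparsity-inducing activation functions $\phi$ can be sketched as follows; these are the standard formulations from the cited papers, not the exact released implementations:

```python
import torch

def topk_activation(z_pre: torch.Tensor, k: int) -> torch.Tensor:
    """TopK SAE: keep the k largest pre-activations per vector, zero the rest."""
    vals, idx = torch.topk(z_pre, k, dim=-1)
    return torch.zeros_like(z_pre).scatter_(-1, idx, vals)

def jumprelu_activation(z_pre: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """JumpReLU SAE: pass a pre-activation only if it exceeds its (learned) threshold."""
    return z_pre * (z_pre > theta)
```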
Our experiments are not conducted on smaller models or SAEs with fewer latents, as our targeted attacks require a relatively large initial distance $d_z(x_1, x_2)$ to meaningfully evaluate their effectiveness. In other words, we evaluate the robustness of an SAE only when it can clearly distinguish semantically unrelated inputs. We focus on mid-to-late layers, as they strike a balance between the low-level surface patterns of early layers and the highly entangled representations of final layers, making them better suited for capturing human-interpretable concepts. Additional results for different model depths are included in Appendix F.

Datasets  The ideal evaluation for targeted tasks should be conducted on two sets of input prompts with minimal semantic overlap. Our first, ChatGPT-generated Art&Science dataset consists of 100 pairs of sentences, each with approximately 20 to 25 tokens, broadly divided into two categories: $x_1$ is related to art and the humanities, and $x_2$ is about science and technology. We adapt another existing dataset, AG News, to sample 300 pairs of news titles with different category labels as an additional evaluation dataset. The mean initial overlap ratios are 31.3% and 33.4% for the two datasets, averaged across the two models.

Evaluation Configurations  Since the residual streams of LLMs encode both semantic features and next-token prediction information, we append a short instruction prompt, "The previous sentence is about", to the original sequence (including the suffix) to better extract the LLM's semantic content from the last hidden state. At the individual level, we focus on 10 individual neurons, selected for an activation/deactivation task based on the semantic goal: for an untargeted task, we select the SAE latents with the lowest/highest activation values for $x_1$; for a targeted task, we select the latents most/least activated by $x_2$ but currently deactivated/activated by $x_1$. The specific choices of hyperparameters are included in Appendix B.
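Under our reading of these selection rules, the choice of target latents can be made concrete as follows (a sketch; `z1` and `z2` are SAE activation vectors for $x_1$ and $x_2$, and the function name is ours):

```python
from typing import Optional
import numpy as np

def select_latents(z1: np.ndarray, z2: Optional[np.ndarray],
                   activate: bool, n: int = 10) -> np.ndarray:
    """Pick n target latents for an individual-level attack.
    Untargeted (z2 is None): lowest-activated latents of x1 for activation,
    highest-activated for deactivation. Targeted: latents most (least)
    activated by x2 that are currently inactive (active) for x1."""
    if z2 is None:                                   # untargeted setting
        order = np.argsort(z1)                       # ascending activation
        return order[:n] if activate else order[::-1][:n]
    mask = (z1 <= 0) if activate else (z1 > 0)       # opposite status for x1
    cand = np.where(mask)[0]
    order = cand[np.argsort(z2[cand])]               # sort candidates by x2 activation
    return order[::-1][:n] if activate else order[:n]
```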
Model        Semantic Goal   Neuron Overlap (%)             Attack Success Rate (%)
                             Suffix        Replacement      Suffix                   Replacement
Llama 3 8B   Untargeted      -89.1 ± 2.3   -84.6 ± 0.9      0.0 ± 0.0, 22.9 ± 1.5    0.0 ± 0.0, 34.3 ± 0.9
Llama 3 8B   Targeted        +74.5 ± 3.5   +29.2 ± 1.9      55.9 ± 2.2, 91.4 ± 0.8   59.2 ± 1.1, 86.4 ± 0.8
Gemma 2 9B   Untargeted      -85.5 ± 1.9   -79.8 ± 1.1      0.0 ± 0.0, 30.5 ± 1.7    0.0 ± 0.0, 32.6 ± 1.2
Gemma 2 9B   Targeted        +68.8 ± 3.8   +31.9 ± 2.2      63.0 ± 1.9, 82.3 ± 1.4   61.3 ± 1.2, 84.8 ± 0.6

Table 2: Attack results for all eight settings for Llama-3-8B and Gemma-2-9B on our generated prompt dataset. For the individual level, we include results for both activation (left) and deactivation (right) tasks. Mean values and standard deviations are computed over three independent runs.

[Figure 1: Examples of GCG and evaluation losses (cosine similarity and fraction of shared SAE latents) across iterations for targeted (left) and untargeted (right) population-level suffix attacks.]

Example: Targeted Population-level Suffix Attack
x1: His research surveys Islamic architecture from Morocco to India, revealing regional adaptations and shared aesthetic principles
x2: Autonomous vehicles rely on sensor fusion, machine learning, and path planning to navigate complex traffic environments
x'1: His research surveys Islamic architecture from Morocco to India, revealing regional adaptations and shared aesthetic principles UAV AMC algorithms

Example: Untargeted Population-level Suffix Attack
x1: Philosophical skepticism challenges empirical certainty, questioning whether knowledge is possible in the face of doubt
x'1: Philosophical skepticism challenges empirical certainty, questioning whether knowledge is possible in the face of doubt setImageBitmap

Figure 2: Instances of adversarial attacks corresponding to the loss curves in Figure 1. Adversarial suffixes are highlighted in red.
4.2 Results Interpretation

We include the complete results for our generated Art&Science dataset in Table 2, and provide the loss curves and adversarial inputs for two representative attack examples in Figures 1 and 2. For population-level tasks, we report the change in overlap ratio (as a percentage) between the two sets of $k$ most activated latents, while for individual-level tasks we report the attack success rate (ASR). The experimental details and results for the AG News dataset are included in Appendix C. In general, our attacks are effective in most cases, and we discuss several important insights below.

Failure Cases  The adversarial attack is ineffective only in the untargeted tasks at the individual level: the SAE latents with the lowest activation values cannot be activated by input perturbations. This is most likely due to dead latents (i.e., latents that cannot be activated under any circumstances), which are prevalent in pretrained SAEs, in line with previous findings by Gao et al. [2024].

Untargeted vs. Targeted  Empirically, we find that untargeted attacks are more successful than targeted ones at the population level, primarily because suppressing dominant features is generally easier than constructing specific new ones. However, untargeted attacks are less effective at the individual level, likely because, in targeted settings, the selected neurons are guaranteed to be manipulatable: their activation status for $x_2$ is opposite to that for $x_1$. In contrast, untargeted settings may include a substantial number of SAE latents that are either dead (i.e., never activate) or almost always active. In other words, our results show that as long as an SAE latent represents a meaningful concept (i.e., it is neither permanently active nor permanently inactive), there is a high chance it can be adversarially manipulated.

Suffix vs. Replacement  At the population level, suffix mode is more effective at controlling the overall SAE activation pattern than replacement mode. This trend is probably due to (1) more recent tokens being likely to have greater impact on both the residual stream and the SAE activations, and (2) appending a short suffix providing more fine-grained and predictable optimization steps in the latent representation space. When perturbing individual SAE features, the effectiveness of the two perturbation modes varies across semantic and activation goals, with no significant implications.

4.3 Additional Analyses

Decoupling SAE and LLM Robustness  A potential critique of our evaluation strategy is that it does not fully disentangle the robustness of the SAE from that of the underlying LLM. To address this, we select the number of manipulatable tokens (e.g., the suffix length) by examining the LLM's generations before and after adversarial attacks. Since our user prompts end with an instruction to summarize the preceding sentence (see Section 4.1), the generated outputs reflect the LLM's self-summarization of the semantic content. We can then leverage an LLM judge (GPT-4.1-mini) to determine whether the semantic content changes notably after the attacks. We evaluate 100 successful attacks for each unique level of input manipulation; when the total number of attacks exceeds 100, we sample randomly while preserving the balance between different settings and token indices (for replacement mode). For each attack, we take the majority vote among 5 generations as the final decision.
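The judging procedure can be summarized as below (a sketch; `judge_same_topic` stands in for a call to GPT-4.1-mini with the few-shot template of Appendix E and is hypothetical):

```python
def semantics_preserved(judge_same_topic, orig_summaries, pert_summaries) -> bool:
    """Majority vote over paired self-summarizations (5 generations each):
    the attack counts as LLM-preserving if most pairs share the main topic."""
    votes = [judge_same_topic(o, p) for o, p in zip(orig_summaries, pert_summaries)]
    return sum(votes) > len(votes) / 2  # True iff a strict majority answered "Yes"
```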
Based on the results for the suffix lengths and single-token replacements used in our experiments, reported in Table 3, we show that our adversarial attacks mostly target the SAEs without notably affecting the base LLM. The prompt template and self-summarization examples are included in Appendix E.

Model        Dataset       Suffix (1 token)   Suffix (3 tokens)   Replacement (1 token)
Llama 3 8B   Art&Science   0.98               0.93                1.0
Llama 3 8B   AG News       0.95               -                   0.96
Gemma 2 9B   Art&Science   1.0                0.94                1.0
Gemma 2 9B   AG News       0.94               -                   0.95

Table 3: Fraction of adversarial attacks that result in no significant differences in the model's self-summarization, using LLM-as-a-judge (GPT-4.1-mini). During generation, the temperature is set to 0.7 and top_p to 0.9.

Robustness Change Across Model Depth  The main results presented in this section focus on a single mid-to-late layer for each model. To assess the generalizability of our findings across model depths, we apply our population-level attacks under suffix mode to additional layers using open-source pretrained SAE weights. Empirically, we find that our attacks remain effective across layers, despite slight decreases in the performance upper bound. The experimental results and interpretations are included in Appendix F.

Transferability of Attacks  In reality, an adversary might want to craft a single perturbed sequence that can be used to attack different models, so it is important to investigate the transferability of our attacks across LLMs. Since SAE latents of different models encode different semantic concepts, we only investigate transferability at the population level. As presented in Appendix G, we observe that the difference between the average neuron overlap changes achieved by the original attacks and by the transferred attacks is at most 25.3% for the targeted settings and 43.8% for the untargeted settings, suggesting that although there is a notable performance gap due to the model transfer, the attacks remain effective.

Deactivating Highly Interpretable SAE Latents from Neuronpedia  In the above experiments, the set of manipulated SAE latents is empirically determined by our datasets (i.e., the most or least activated ones by our inputs). In fact, this evaluation procedure can be reversed: we may instead begin by selecting specific target SAE latents and then assess their robustness by identifying input sequences that strongly activate them. To explore this, we select several SAE latents associated with consistent and human-interpretable semantic concepts from Neuronpedia, and apply the untargeted individual-level attack to top-activating sequences drawn from an external text corpus. Through these experiments, we find that our attack can successfully deactivate these highly meaningful SAE latents, with illustrative examples provided in Appendix H.

5. Discussion and Limitations

The effectiveness of our attacks is fundamentally bounded by compute constraints. All experiments and hyperparameter choices were based on a single 80GB A100 GPU. With access to more GPU memory or more GCG iterations, even stronger attack performance is likely achievable. As discussed in Section 4.1, we target medium-sized LLMs because smaller models, along with their SAEs, lack the capacity to distinguish semantically unrelated inputs. However, this choice results in slower attacks: for instance, a 50-iteration targeted population-level suffix attack on a longer prompt from the Art&Science dataset
takes approximately 10 minutes. In practical applications, users may only need to evaluate the most relevant scenario among our eight settings. Additionally, future work could explore optimizing the attack pipeline to better balance effectiveness and efficiency.

More broadly, we consider our evaluation framework and adversarial attacks a general methodology for assessing concept-extraction tools for LLMs. While we focus on standard SAEs in this work, the same vulnerabilities likely extend to other variants such as transcoders [Dunefsky et al., 2024] and crosscoders [Lindsey et al., 2024], which similarly lack structural constraints or robustness-aware objectives during training. We leave the exploration of such extensions to future work.

6. Conclusion

In this work, we investigate the robustness of SAEs under input-level adversarial perturbations and introduce a comprehensive evaluation framework spanning semantic and activation-level objectives. Our experiments show that SAE interpretations are highly vulnerable to minimal input changes, even when the underlying LLM remains semantically stable, raising concerns about their reliability in realistic settings. To advance trustworthy interpretability, we hope our work motivates the development of more robust tools for understanding LLMs, as stability under real-world conditions is essential for aligning model behavior with human expectations.

References

Nikita Balagansky, Ian Maksimov, and Daniil Gavrilov. Mechanistic permutability: Match features across layers, 2024. URL https://arxiv.org/abs/2410.07656.

Leonard Bereska and Efstratios Gavves. Mechanistic interpretability for AI safety: a review. arXiv preprint arXiv:2404.14082, 2024.

Usha Bhalla, Suraj Srinivas, Asma Ghandeharioun, and Himabindu Lakkaraju. Towards unifying interpretability and control: Evaluation via intervention. arXiv preprint arXiv:2411.04430, 2024.

Sebastian Bordt, Michèle Finck, Eric Raidl, and Ulrike von Luxburg. Post-hoc explanations fail to achieve their purpose in adversarial contexts. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT '22, pages 891-905, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450393522. doi: 10.1145/3531146.3533153. URL https://doi.org/10.1145/3531146.3533153.

Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, et al. Towards monosemanticity: Decomposing language models with dictionary learning. Transformer Circuits Thread, 2, 2023.

Bart Bussmann, Patrick Leask, and Neel Nanda. BatchTopK: A simple improvement for TopK SAEs. In AI Alignment Forum, page 17, 2024.

Sviatoslav Chalnev, Matthew Siu, and Arthur Conmy. Improving steering vectors by targeting sparse autoencoder features. arXiv preprint arXiv:2411.02193, 2024.

David Chanin, James Wilken-Smith, Tomáš Dulka, Hardik Bhatnagar, and Joseph Bloom. A is for absorption: Studying feature splitting and absorption in sparse autoencoders. arXiv preprint arXiv:2409.14507, 2024.

Yangyi Chen, Hongcheng Gao, Ganqu Cui, Fanchao Qi, Longtao Huang, Zhiyuan Liu, and Maosong Sun. Why should adversarial perturbations be imperceptible? Rethink the research paradigm in adversarial NLP. arXiv preprint arXiv:2210.10683, 2022.

Hoagy Cunningham, Aidan Ewart, Logan Riggs, Robert Huben, and Lee Sharkey. Sparse autoencoders find highly interpretable features in language models. arXiv preprint arXiv:2309.08600, 2023.
Badhan Chandra Das, M Hadi Amini, and Yanzhao Wu. Security and privacy challenges of large language models: A survey. ACM Computing Surveys, 57(6):1-39, 2025.

Jacob Dunefsky, Philippe Chlenski, and Neel Nanda. Transcoders find interpretable LLM feature circuits. arXiv preprint arXiv:2406.11944, 2024.

Leo Gao, Tom Dupré la Tour, Henk Tillman, Gabriel Goh, Rajan Troll, Alec Radford, Ilya Sutskever, Jan Leike,
and Jeffrey Wu. Scaling and evaluating sparse autoencoders. arXiv preprint arXiv:2406.04093, 2024.

Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, and Dimitris Bertsimas. Finding neurons in a haystack: Case studies with sparse probing. arXiv preprint arXiv:2305.01610, 2023.

Jing Huang, Zhengxuan Wu, Christopher Potts, Mor Geva, and Atticus Geiger. RAVEL: Evaluating interpretability methods on disentangling language model representations. arXiv preprint arXiv:2402.17700, 2024.

Adam Karvonen, Can Rager, Samuel Marks, and Neel Nanda. Evaluating sparse autoencoders on targeted concept erasure tasks. arXiv preprint arXiv:2411.18895, 2024a.

Adam Karvonen, Benjamin Wright, Can Rager, Rico Angell, Jannik Brinkmann, Logan Smith, Claudio Mayrink Verdun, David Bau, and Samuel Marks. Measuring progress in dictionary learning for language model interpretability with board game models. Advances in Neural Information Processing Systems, 37:83091-83118, 2024b.

Adam Karvonen, Can Rager, Johnny Lin, Curt Tigges, Joseph Bloom, David Chanin, Yeu-Tong Lau, Eoin Farrell, Callum McDougall, Kola Ayonrinde, et al. SAEBench: A comprehensive benchmark for sparse autoencoders in language model interpretability. arXiv preprint arXiv:2503.09532, 2025.

Connor Kissane, Robert Krzyzanowski, Joseph Isaac Bloom, Arthur Conmy, and Neel Nanda. Interpreting attention layer outputs with sparse autoencoders. arXiv preprint arXiv:2406.17759, 2024.

Aounon Kumar, Chirag Agarwal, Suraj Srinivas, Aaron Jiaxun Li, Soheil Feizi, and Himabindu Lakkaraju. Certifying LLM safety against adversarial prompting. arXiv preprint arXiv:2309.02705, 2023.

Tim Lawson, Lucy Farnik, Conor Houghton, and Laurence Aitchison. Residual stream analysis with multi-layer SAEs, 2024. URL https://arxiv.org/abs/2409.04185.

Song Lei, Xiujuan Lei, Ming Chen, and Yi Pan. Drug repositioning based on deep sparse autoencoder and drug-disease similarity. Interdisciplinary Sciences: Computational Life Sciences, 16(1):160-175, 2024.

Vladimir I Levenshtein et al. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet Physics Doklady, volume 10, pages 707-710. Soviet Union, 1966.

Jack Lindsey, Adly Templeton, Jonathan Marcus, Thomas Conerly, Joshua Batson, and Christopher Olah. Sparse crosscoders for cross-layer features and model diffing. Transformer Circuits Thread, 2024.

Jorge Magalhães, Tomás Jorge, Rúben Silva, António Guedes, Diogo Ribeiro, Andreia Meixedo, Araliya Mosleh, Cecília Vale, Pedro Montenegro, and Alexandre Cury. A strategy for out-of-roundness damage wheels identification in railway vehicles based on sparse autoencoders. Railway Engineering Science, 32(4):421-443, 2024.

Aleksandar Makelov, George Lange, and Neel Nanda. Towards principled evaluations of sparse autoencoders for interpretability and control, 2024. URL https://arxiv.org/abs/2405.08366.

Luke Marks, Alasdair Paren, David Krueger, and Fazl Barez. Enhancing neural network interpretability with feature-aligned sparse autoencoders. arXiv preprint arXiv:2411.01220, 2024a.

Samuel Marks, Can Rager, Eric J Michaud, Yonatan Belinkov, David Bau, and Aaron Mueller. Sparse feature circuits: Discovering and editing interpretable causal graphs in language models. arXiv preprint arXiv:2403.19647, 2024b.

Gouki Minegishi, Hiroki Furuta, Yusuke Iwasawa, and Yutaka Matsuo. Rethinking evaluation of sparse autoencoders through the representation of polysemous words. arXiv preprint arXiv:2501.06254, 2025.
Anish Mudide, Joshua Engels, Eric J Michaud, Max Tegmark, and Christian Schroeder de Witt. Efficient dictionary learning with switch sparse autoencoders. arXiv preprint arXiv:2410.08201, 2024.

Charles O'Neill, Christine Ye, Kartheik Iyer, and John F Wu. Disentangling dense embeddings with sparse autoencoders. arXiv preprint arXiv:2408.00657, 2024.

Gonçalo Paulo, Alex Mallen, Caden Juang, and Nora Belrose. Automatically interpreting millions of features in large language models. arXiv preprint arXiv:2410.13928, 2024.

Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Tom Lieberum, Vikrant Varma, János Kramár, Rohin Shah, and
Neel Nanda. Improving dictionary learning with gated sparse autoencoders. arXiv preprint arXiv:2404.16014, 2024a.

Senthooran Rajamanoharan, Tom Lieberum, Nicolas Sonnerat, Arthur Conmy, Vikrant Varma, János Kramár, and Neel Nanda. Jumping ahead: Improving reconstruction fidelity with JumpReLU sparse autoencoders. arXiv preprint arXiv:2407.14435, 2024b.

Lee Sharkey, Bilal Chughtai, Joshua Batson, Jack Lindsey, Jeff Wu, Lucius Bushnaq, Nicholas Goldowsky-Dill, Stefan Heimersheim, Alejandro Ortega, Joseph Bloom, et al. Open problems in mechanistic interpretability. arXiv preprint arXiv:2501.16496, 2025.

Yi Zeng, Hongpeng Lin, Jingwen Zhang, Diyi Yang, Ruoxi Jia, and Weiyan Shi. How Johnny can persuade LLMs to jailbreak them: Rethinking persuasion to challenge AI safety by humanizing LLMs. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14322-14350, 2024.

Liu Ziyin, Isaac Chuang, Tomer Galanti, and Tomaso Poggio. Formation of representations in neural networks, 2024. URL https://arxiv.org/abs/2410.03006.

Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J Zico Kolter, and Matt Fredrikson. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043, 2023.

A. Pseudocode for Adversarial Input-Level Attack

Algorithm 1: Generalized Input-level Attack for SAE

Input: input token sequence $(x_1)_{1:l}$; reference input $x_{ref}$ (either $x_1$ or $x_2$); target LLM with the mapping $f_{LLM}: \mathcal{X} \to \mathcal{H}$; SAE encoding weights $W_{enc}$ and $b_{enc}$; set of modifiable indices $\mathcal{I}$; number of iterations $T$; GCG loss $\mathcal{L}_{gcg}$; evaluation metric $\mathcal{L}_{eval}$; number of promising tokens $m$; batch size $B$.

  Initialize $x_1'$ based on the attack mode:
    $x_1' \gets x_1$ if $\mathcal{I} \subseteq \{1, \ldots, l\}$ (replacement mode);
    $x_1' \gets \mathrm{Concat}((x_1)_{1:l}, \mathrm{LLM}(x_1)_{\mathcal{I}})$ otherwise (suffix mode).
  for $t = 1, \ldots, T$ do
    for each $i \in \mathcal{I}$ do
      $S_i \gets \mathrm{Top}\text{-}m(-\nabla_{(x_1')_i} \mathcal{L}_{gcg}(x_1', x_{ref}))$    ▷ compute top-m promising token substitutions
    end for
    for $b = 1, \ldots, B$ do
      $x_1'^{(b)} \gets x_1'$    ▷ initialize each element within the batch
      $(x_1'^{(b)})_i \gets \mathrm{Uniform}(S_i)$, where $i = \mathrm{Uniform}(\mathcal{I})$    ▷ randomly select the token to be replaced
    end for
    $b^* \gets \arg\min_b \mathcal{L}_{eval}(x_1'^{(b)})$
    if $\mathcal{L}_{eval}(x_1'^{(b^*)}) < \mathcal{L}_{eval}(x_1')$ then $x_1' \gets x_1'^{(b^*)}$    ▷ keep only improving candidates
  end for

Output: optimized input $x_1'$
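In Python, the loop of Algorithm 1 might look as follows. This is a condensed sketch under our reading of the pseudocode; the callables and their signatures are ours, not the paper's released code:

```python
import torch

def input_level_attack(x_adv, x_ref, token_grad, eval_loss, modifiable,
                       T=10, m=300, B=100):
    """x_adv: 1-D LongTensor of token ids (with the suffix already appended, if any).
    token_grad(x, x_ref) -> (seq_len, vocab) gradient of the GCG surrogate loss
    w.r.t. one-hot token indicators; eval_loss(x, x_ref) -> float distance in Z."""
    best = eval_loss(x_adv, x_ref)
    for _ in range(T):
        grad = token_grad(x_adv, x_ref)
        cands = (-grad[modifiable]).topk(m, dim=-1).indices  # top-m tokens per position
        batch = x_adv.repeat(B, 1)
        for b in range(B):
            r = torch.randint(len(modifiable), (1,)).item()  # random editable position
            j = torch.randint(m, (1,)).item()                # random candidate token
            batch[b, modifiable[r]] = cands[r, j]
        scores = torch.tensor([eval_loss(x, x_ref) for x in batch])
        b_star = int(scores.argmin())
        if scores[b_star] < best:                            # keep only improvements
            best, x_adv = float(scores[b_star]), batch[b_star]
    return x_adv
```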
B. Hyperparameters for Different Attacks

Semantic Goal        Population Level           Individual Level
                     Suffix    Replacement      Suffix    Replacement
Targeted      T      50        30               10        10
              m      300       300              300       300
              B      600       200              100       100
Untargeted    T      10        10               10        10
              m      300       300              300       300
              B      200       200              100       100

Table B1: Recommended hyperparameters for different types of attacks when running on an 80GB A100 GPU, including the number of iterations T, the number of promising tokens considered at each token index m, and the batch size B. The suffix length is set to 3 under the Targeted-Population setting for the generated Art&Science dataset, and to 1 in all other cases.

C. Additional Results on AG News Dataset

The complete evaluation results for the AG News dataset are summarized in Table C1. Since news titles are generally shorter than the prompts in our generated Art&Science dataset, we restrict the suffix length to one token in all settings.

Model        Semantic Goal   Neuron Overlap (%)             Attack Success Rate (%)
                             Suffix        Replacement      Suffix                   Replacement
Llama 3 8B   Untargeted      -82.4 ± 1.2   -77.8 ± 0.7      0.0 ± 0.0, 24.0 ± 1.0    0.0 ± 0.0, 27.9 ± 0.3
Llama 3 8B   Targeted        +64.5 ± 1.8   +26.3 ± 0.8      60.4 ± 1.3, 95.4 ± 0.1   57.3 ± 0.7, 88.1 ± 0.2
Gemma 2 9B   Untargeted      -85.5 ± 0.6   -73.6 ± 0.6      0.0 ± 0.0, 28.1 ± 1.4    0.0 ± 0.0, 29.3 ± 0.5
Gemma 2 9B   Targeted        +56.2 ± 2.0   +24.8 ± 1.1      62.7 ± 0.9, 83.8 ± 0.4   64.0 ± 0.5, 92.2 ± 0.1

Table C1: Attack results for all eight settings for Llama-3-8B and Gemma-2-9B on the AG News (300 samples) dataset. Both suffix and replacement modes are restricted to one token. For the individual level, we include results for both activation (left) and deactivation (right) tasks. Mean values and standard deviations are computed over three independent runs.

[Figure D1: Examples of population-level attacks. Adversarial tokens are highlighted; one example from each dataset is shown for every setting, with the neuron overlap before and after each attack.]

D. More Examples of Effective Attacks

We provide more examples of effective attacks under different evaluation scenarios in Figures D1, D2, D3, and D4. Samples from both datasets are included.

E. Prompt Template for LLM Self-Summarization Comparisons

In Figures E1 and E2, we provide the prompt template along with two examples of model-generated self-summarizations that capture the semantic content of the inputs. The few-shot learning prompt template for using GPT-4.1-mini as the LLM judge is included in Figure E3. The motivations are discussed in Section 4.3. To further validate the faithfulness of the LLM judge, we manually label 100 randomly sampled pairs of model generations as ground truth and compare them to the judge's predictions.
The agreement rate reaches 98%, with all disagreements being false negatives.
[Figure D2: Examples of targeted individual-level suffix attacks, including both activation and deactivation tasks, with the target latent's rank before and after each attack.]

[Figure D3: Examples of targeted individual-level replacement attacks, including both activation and deactivation tasks.]

[Figure D4: Examples of untargeted individual-level attacks.
Empirically, only deactivation attacks can be successful, possibly due to dead latents (explained in Section 4.2).]

Example 1: Generated Summarization of Original Input
User: She paints surreal dreamscapes that draw on ancestral folklore, mythological figures, and personal symbolism
from her cultural background. The previous sentence is about
Assistant (Llama 3 8B): the artist, and the following is about the art. The artist is a woman, and the art is a painting. The painting is a dream...

Example 1: Generated Summarization of Perturbed Input
User: She paints surreal dreamscapes that draw on ancestral folklore, mythological figures, and personal symbolism from her cultural background Radiation- LTE regulatory. The previous sentence is about
Assistant (Llama 3 8B): the artist's work, and the following text is about the artist's life. The artist was born in 1989 in the city of Tiju...

Figure E1: Self-summarization Example 1.

Example 2: Generated Summarization of Original Input
User: The commemorative statue honors a once-forgotten civil rights activist, reclaiming public space for historical memory. The previous sentence is about
Assistant (Gemma 2 9B): the statue of the civil rights activist, the statue is located in the city of Montgomery, Alabama, the statue is a tribute to the civil rights activist...

Example 2: Generated Summarization of Perturbed Input
User: The commemorative statue honors a once-forgotten civil rights activist, reclaiming public space for historical memorydifferential geometryDegrees. The previous sentence is about
Assistant (Gemma 2 9B): the differential geometry of curves and surfaces. The differential geometry of curves and surfaces is a branch of mathematics that studies the geometry of curves and surfaces. The...

Figure E2: Self-summarization Example 2.

F. SAE Robustness Across Model Depth

As discussed in Section 4.3, we evaluate our population-level suffix attacks across multiple layers of Llama-3-8B (32 layers) and Gemma-2-9B (42 layers) to assess the generalizability of our findings. Tables F1 and F2 and Figure F1 report the average neuron overlap ratio before and after the attacks on all Art&Science inputs. In Table F2, the Before values are omitted because the initial overlap is always 100% in the untargeted setting.

For targeted suffix attacks, unlike the results in Table 2, which quantify overall effectiveness by relative change as a percentage, we report both Before and After values to avoid misleading interpretations: although the relative increase in overlap tends to grow significantly with layer depth, this is largely due to a lower initial overlap. At the same time, the absolute overlap achieved after the attack also decreases with depth, but remains substantial, reaching approximately 50% in deeper layers. For the untargeted setting, however, the opposite trend is observed: the attacks become more effective as depth increases.

One plausible explanation for these trends is that deeper layers in large language models tend to encode more abstract, task-specific, and distributed representations, making it harder for a short suffix to consistently steer the model toward activating a fixed set of SAE latents. In contrast, earlier and middle layers often retain more localized and compositional features that are easier to manipulate toward a specific goal. Together, these factors contribute to the observed decrease in both initial and post-attack neuron overlap ratios at deeper layers in the targeted task. On the other hand, since minor input perturbations can lead to disproportionate changes in model activations, it becomes easier for untargeted attacks
to disrupt existing semantic features without the need for precise control.

Consequently, although the upper bound of attack effectiveness in the targeted setting slightly decreases with model depth, this trend is likely driven by representational shifts across LLM layers rather than by properties of the SAEs themselves, and we conclude that our adversarial attacks generalize across model depths.

Model                 Layer 10   Layer 20   Layer 30   Layer 40
Llama 3 8B   Before   41.7       32.4       21.4       -
             After    57.9       56.5       51.8       -
Gemma 2 9B   Before   47.2       36.1       30.1       15.7
             After    64.8       57.9       50.8       39.1

Table F1: Neuron overlaps before and after targeted population-level suffix attacks at different LLM layers. While the upper bound of attack performance decreases with model depth, the attacks remain effective.

Model        Layer 10   Layer 20   Layer 30   Layer 40
Llama 3 8B   -81.7      -89.1      -93.2      -
Gemma 2 9B   -78.3      -83.0      -85.5      -93.7

Table F2: Neuron overlap changes after untargeted population-level suffix attacks at different LLM layers. In contrast to the targeted setting, the untargeted attacks become more effective at deeper layers.

G. Evaluating Attack Transferability

As motivated in Section 4.3, we evaluate the transferability of our attacks by directly applying adversarial inputs generated from one model to another. Since the semantic concepts captured by individual SAE latents are not aligned across models, we restrict our analysis to population-level attacks. In Table G1, we report the relative changes in neuron overlap (as percentage values, consistent with Tables 2 and C1), along with the absolute differences compared to the original attacks. The results show that, while there is a noticeable performance drop when transferring attacks across models, the transferred adversarial inputs still induce meaningful changes in activation patterns, indicating that the attacks retain a substantial degree of effectiveness.

Model Transfer              Semantic Goal   Neuron Overlap (%)
                                            Suffix          Replacement
Llama 3 8B → Gemma 2 9B     Untargeted      45.3 (↓43.8)    44.3 (↓40.3)
                            Targeted        49.2 (↓25.3)    15.8 (↓13.4)
Gemma 2 9B → Llama 3 8B     Untargeted      44.1 (↓41.4)    40.9 (↓38.9)
                            Targeted        46.1 (↓22.7)    18.9 (↓13.0)

Table G1: Performance of transferred attacks at the population level. We also report the differences in magnitude of neuron overlap changes between the original and transferred attacks. The Art&Science dataset is used for this experiment.

H. Case Study: Manipulating SAE Latents from Neuronpedia

In Figure H1, we present successful adversarial untargeted individual-level replacement attacks that deactivate two SAE latents associated with human-interpretable concepts. These attacks are conducted on specific layers and pretrained SAEs identified by Neuronpedia, which differ from those used in our main experiments. We use top-activating sentences from an external text corpus and leverage Neuronpedia's token-level activation records to directly extract the corresponding hidden states, eliminating the need for an additional summarization prompt. Empirically, we find that the replacement attack mode is more effective than the suffix mode in this setting. We hypothesize that the high effectiveness of these attacks stems from the dynamic activation behavior of human-interpretable SAE latents, which tend to respond selectively to specific semantic features rather than exhibiting consistently high or low activations. This selectivity makes them more vulnerable
to targeted input manipulations.

Example 1: LLM-as-a-Judge
User: You will be given two (potentially truncated) summarizations of a particular sentence, and your goal is to judge whether the two summarizations are focusing on the same main topic. Please answer Yes or No.
1. The previous sentence is about the importance of oral histories in preserving the lived experiences of marginalized groups across generations. Oral histories offer a unique perspective on the past, providing a window into...
2. The previous sentence is about the importance of oral histories in preserving the lived experiences of marginalized groups across generations. Oral histories offer a unique opportunity to hear the voices of those who have...
User: Yes.
User: You will be given two (potentially truncated) summarizations of a particular sentence, and your goal is to judge whether the two summarizations are focusing on the same main topic. Please answer Yes or No.
1. The previous sentence is about the influence of ancient Greek tragedies on contemporary theater. The author argues that the archetypal themes, dramatic irony, and complex character development of ancient...
2. The previous sentence is about the blockchain, which is a distributed ledger technology that allows for secure, transparent, and tamper-proof transactions. It is a decentralized system that does not...
User: No.
User: You will be given two (potentially truncated) summarizations of a particular sentence, and your goal is to judge whether the two summarizations are focusing on the same main topic. Please answer Yes or No.
1. The previous sentence is about the artist, and the following is about the art. The artist is a woman, and the art is a painting. The painting is a dream...
2. The previous sentence is about the artist's work, and the following text is about the artist's life. The artist was born in 1989 in the city of Tiju...
Assistant (GPT-4.1-mini): Yes.

Example 2: LLM-as-a-Judge
[Same instruction and few-shot examples as in Example 1, followed by:]
User: You will be given two (potentially truncated) summarizations of a particular sentence, and your goal is to judge whether the two summarizations are focusing on the same main topic. Please answer Yes or No.
1. The previous sentence is about the statue of the civil rights activist, the statue is located in the city of Montgomery, Alabama, the statue is a tribute to the civil rights activist...
2. The previous sentence is about the differential geometry of curves and surfaces. The differential geometry of curves and surfaces is a branch of mathematics that studies the geometry of curves and surfaces. The...
Assistant (GPT-4.1-mini): No.

Figure E3: Few-shot learning template for LLM-as-a-judge, corresponding to the previous summarization examples.

[Figure F1: Population-level suffix attack performance across different layer depths for Llama-3-8B and Gemma-2-9B, plotting neuron overlap ratio against layer depth; panel (a) shows the targeted setting and panel (b) the untargeted setting.]

Neuronpedia Example 1 (Layer 29, Neuron #73147; activated by "hurled", "fled", "fell", ...)
Top-1 activated sequence: Washington was stabbed several times but managed to grab a radio which he hurled against a post
Deactivating "radio": Washington intelligence stabbed several times but managed to grab a radio which he hurled against a post
Deactivating "hurled": Washington was stabbed Mesozoic times but managed to grab a radio which he hurled against a post
Deactivating "post": Washington was stabbed several times but managed to grab a radio which he hurled After a post

Neuronpedia Example 2 (Layer 35, Neuron #66255; activated by "was", "has", "is", ...)
Top-1 activated sequence: The effect of lard and sunflower oil making part of a cirrhogenic ration with a high content of fat and deficient protein and choline on the level of total and esterified cholesterol and phospholipids in the blood serum and liver was studied
Deactivating "lipids": The effect of lard and sunflower oil making part of a Swedishrhogenic ration with a high content of fat and deficient protein and choline on the level of total and esterified cholesterol and phospholipids in the blood serum and liver was studied
Deactivating "blood": The effect of lard and sunflower oil making part of a cirrhogenic ration with a high content of fat and deficient protein and cholineElaboración the level of total and esterified cholesterol and phospholipids in the blood serum and liver was studied
Deactivating "was": The effect of lard and sunflower oil making part of a cirrhogenic ration with a high content of fat and deficient protein and choline on the level of total and esterified cholesterol and phospholipids inrawDesc blood serum and liver was studied

Figure H1: Examples of successful adversarial attacks that deactivate two highly interpretable SAE latents in their corresponding top-activating sentences. Tokens highlighted in green indicate the specific LLM hidden states passed to the SAE, which Neuronpedia identifies as positions of high activation.
LAGO: Few-shot Crosslingual Embedding Inversion Attacks via Language Similarity-Aware Graph Optimization

Wenrui Yu¹, Yiyi Chen², Johannes Bjerva², Sokol Kosta¹, Qiongxiu Li¹*
¹Department of Electronic Systems, ²Department of Computer Science
Aalborg University, Copenhagen, Denmark
wenyu@es.aau.dk, {yiyic,jbjerva}@cs.aau.dk, {sok,qili}@es.aau.dk
*Corresponding author.

Abstract

We propose LAGO (Language Similarity-Aware Graph Optimization), a novel approach for few-shot cross-lingual embedding inversion attacks, addressing critical privacy vulnerabilities in multilingual NLP systems. Unlike prior work on embedding inversion attacks that treats languages independently, LAGO explicitly models linguistic relationships through a graph-based constrained distributed optimization framework. By integrating syntactic and lexical similarity as edge constraints, our method enables collaborative parameter learning across related languages. Theoretically, we show this formulation generalizes prior approaches, such as ALGEN, which emerges as a special case when the similarity constraints are relaxed. Our framework uniquely combines Frobenius-norm regularization with linear inequality or total variation constraints, ensuring robust alignment of cross-lingual embedding spaces even with extremely limited data (as few as 10 samples per language). Extensive experiments across multiple languages and embedding models demonstrate that LAGO substantially improves the transferability of attacks, with a 10-20% increase in Rouge-L score over baselines. This work establishes language similarity as a critical factor in inversion attack transferability, urging renewed focus on language-aware privacy-preserving multilingual embeddings.

1 Introduction

Text embeddings, which encode semantic and syntactic information into dense vector representations, serve as the backbone of modern natural language processing (NLP) systems. They also power large language models, whose impact stretches far beyond NLP and is steadily shaping everyday lives and business operations. However, their widespread deployment in cloud-based services introduces significant privacy risks.

[Figure 1: Few-shot Cross-lingual Textual Embedding Inversion Leveraging Language Similarities. Example: an attack model trained on English embeddings is used to attack embeddings in other languages, using language similarities as a prior.]

A particularly concerning threat is the embedding inversion attack (Song and Raghunathan, 2020; Chen et al., 2025b), where an adversary can decode sensitive and private data directly from embedding vectors. The security of a system can be compromised when malicious users abuse the embedding model API, collecting massive datasets to train attack models. Data leakage, whether accidental or deliberate, further exacerbates this vulnerability. As vector databases and generative-AI services proliferate across the globe, the embedding vectors offered as commodities are mostly multilingual. Yet, prior research in this attack space has mostly concentrated on inverting English embeddings (Song and Raghunathan, 2020; Li et al., 2023; Morris et al., 2023; Huang et al., 2024).
While recent efforts (Chen et al., 2024a,b, 2025b) touch upon multilingual and cross-lingual inversion attacks, they lack an explicit modeling of language similarities, resulting in poor generalization across languages. In real-world adversarial scenarios, such as specialized domains or low-resource languages, attackers may only have access to a handful of embedding-text pairs. Although ALGEN (Chen et al., 2025b) partially addresses
the few-shot regime through decoder transfer, it lacks mechanisms to exploit language similarity, which we hypothesize is a key factor in cross-lingual generalization failure. Prior studies have shown that language similarities, simulated from typological features and lexical overlap, correlate with structural variations in inversion outputs (Chen et al., 2024a, 2025a), providing empirical motivation for incorporating such relationships into attack models.

To address this, we propose LAGO (Language Similarity-Aware Graph Optimization) for few-shot cross-lingual embedding inversion. LAGO explicitly models linguistic relationships by constructing a topological graph over languages, where nodes represent languages and edges encode similarity. As illustrated in Fig. 1, this graph is used to guide collaborative optimization of decoder alignment functions across languages, enabling knowledge transfer from typologically related neighbors. We formalize the attack objective as a distributed optimization problem, where each node corresponds to a language and constraints encode similarity-based consistency. We present two algorithmic variants: (1) an inequality-constrained approach based on IEQ-PDMM (Heusdens and Zhang, 2024b), and (2) a total variation regularized formulation (Peng et al., 2021) that softly penalizes parameter drift across similar languages.

Our main contributions include:

• We propose LAGO, the first framework for few-shot cross-lingual embedding inversion that incorporates language similarity as a structural prior in a graph-constrained optimization problem (a toy illustration of the graph construction is sketched after this section).

• We develop two algorithmic variants, one using inequality constraints and one using total variation penalties, that enable collaborative parameter learning across languages. Prior work, including ALGEN, emerges as a special case within our framework (cf. Section 4.3).

• Experiments across multiple embedding models and diverse languages show that language similarity strongly correlates with attack transferability, improving performance by 10-20% over prior methods.

By exposing overlooked vulnerabilities in multilingual embedding systems and demonstrating effective inversion under realistic low-resource conditions, our work underscores the urgent need for stronger privacy protections in cross-lingual NLP deployments. While differential privacy offers some protection against our attack, it also significantly degrades downstream utility (Chen et al., 2025b), highlighting the need for more targeted and efficient defense mechanisms.
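As a toy illustration of the graph construction, consider the sketch below; the thresholding rule and weights here are illustrative assumptions on our part, not LAGO's exact formulation:

```python
import numpy as np

def build_language_graph(sim: np.ndarray, tau: float = 0.5):
    """Turn a symmetric pairwise language-similarity matrix (e.g., derived
    from syntactic features and lexical overlap, values in [0, 1]) into
    weighted edges; only sufficiently similar language pairs are coupled
    during the collaborative optimization."""
    n = sim.shape[0]
    return [(i, j, float(sim[i, j]))
            for i in range(n) for j in range(i + 1, n) if sim[i, j] > tau]
```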
2 Related Work

2.1 Embedding Inversion Attacks

Early work on embedding inversion framed the task as classification over fixed vocabularies. For example, Song and Raghunathan (2020) aim to recover input tokens directly from embeddings, achieving up to 70% reconstruction. Subsequent advances recast the task as generation: Li et al. (2023) introduce a decoder-based approach to produce fluent text, while Morris et al. (2023) further improve accuracy through iterative refinement. Several works have since extended inversion attacks to multilingual scenarios (Chen et al., 2024b,a). Moreover, Huang et al. (2024) train a surrogate model to conduct transfer attacks on victim embeddings under black-box access.

These methods, however, typically rely on massive training samples (8k to 5 million victim embeddings) and are primarily evaluated in monolingual or well-resourced settings. In practice, attackers often face few-shot scenarios, for example reconstructing text in low-resource languages or specialized domains with only a handful of available
samples. ALGEN (Chen et al., 2025b) introduces a linear alignment technique, allowing a decoder trained in one domain to be reused in another. While effective in few-shot transfer, ALGEN does not explicitly model language similarity or structural relationships between languages (cf. Section 3.2). Our framework improves on ALGEN by directly incorporating linguistic knowledge to achieve stronger few-shot cross-lingual inversion.

2.2 Cross-lingual Transferability

Cross-lingual transferability is a central research topic in multilingual NLP. Prior research leverages cross-lingual transferability to improve downstream task performance in target languages, mainly through fine-tuning LLMs on related source languages (Choenni et al., 2023), zero-shot transfer (Adelani et al., 2022; de Vries et al., 2022; Blaschke et al., 2025), or few-shot transfer with pre-trained MLLMs (Lauscher et al., 2020). Language similarity derived from linguistic data, such as typological features (Littell et al., 2017) and lexical databases (Wichmann et al., 2022), has been used extensively to facilitate cross-lingual transfer (Philippy et al., 2023). In this work, we leverage language similarity generated from both syntactic features and lexical overlap to provide alternative perspectives on constructing graphs to assist cross-lingual inversion attacks.

2.3 Distributed Optimization

Distributed optimization decomposes a global objective into smaller local problems that are solved collaboratively across networked nodes. Owing to its scalability and efficiency, it has become a foundational tool in large-scale machine learning and signal processing. Applications span domains such as federated learning (McMahan et al., 2017), sensor networks (Rabbat and Nowak, 2004), and privacy-preserving systems (Li et al., 2020; Yu et al., 2024). Classical distributed optimization algorithms include the Alternating Direction Method of Multipliers (ADMM; Boyd, 2010) and the Primal-Dual Method of Multipliers (PDMM; Zhang and Heusdens, 2017), together with their variants (Wang and Banerjee, 2014; Ouyang et al., 2015; Heusdens and Zhang, 2024a,b). To the best of our knowledge, their application to inversion attacks remains unexplored. In this work, we present, for the first time, a migration of distributed optimization techniques to inversion attacks.

3 Preliminaries

3.1 Embedding Inversion Attack

Let $x \in \mathcal{V}^s$ denote a sequence of text tokens, and let the text encoder $\phi = \mathrm{enc}(\cdot): \mathcal{V}^s \to \mathbb{R}^n$ be an embedding function that maps text $x$ to a fixed-length vector $\phi(x) \in \mathbb{R}^n$, where $s$ is the sequence length and $n$ the embedding dimension. An embedding inversion attack is formally defined as the process of learning an approximate inverse function $g = \mathrm{dec}(\cdot)$ such that $g(\phi(x)) \approx x$.

3.2 ALGEN

ALGEN enables cross-domain and cross-lingual sentence-level inversion through a framework combining embedding alignment and sequence generation. The framework consists of three parts:

1) Training a local attack model $\mathrm{dec}_A(\cdot)$ by fine-tuning a pre-trained decoder to function as an embedding-to-text generator.

2) Embedding Alignment. To bridge the discrepancy between the victim embedding space $e_V \in \mathbb{R}^m$ and the attack embedding space $e_A \in \mathbb{R}^n$, a linear mapping matrix $W \in \mathbb{R}^{m \times n}$ is learned: $\hat{e}_A = e_V W$.
The optimal alignment matrix $W$ is obtained by solving the following least-squares minimization:

$$\min_W \|E_A - E_V W\|_F^2,$$

where $\|\cdot\|_F$ denotes the Frobenius norm, $E_V = [e_V^{1\top}, \cdots, e_V^{b\top}]^\top \in \mathbb{R}^{b \times m}$ is the victim model's embedding matrix, $E_A = [e_A^{1\top}, \cdots, e_A^{b\top}]^\top \in \mathbb{R}^{b \times n}$ is the attacker's embedding matrix, and $b$ is the number of training samples; $m$ and $n$ are the embedding dimensions of the victim and attack models, respectively.
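This alignment step has a standard closed-form least-squares solution; a minimal sketch (variable names are ours):

```python
import numpy as np

def fit_alignment(E_V: np.ndarray, E_A: np.ndarray) -> np.ndarray:
    """Solve min_W ||E_A - E_V W||_F^2 for W in R^{m x n}.
    E_V: (b, m) victim embeddings; E_A: (b, n) attacker embeddings."""
    W, *_ = np.linalg.lstsq(E_V, E_A, rcond=None)
    return W

# Usage: map a new victim embedding into the attacker's space, e_hat_A = e_V @ W.
```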