Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.

Qiangqiang He, Shuwei Qian, Jie Zhang, and Chongjun Wang. 2025. Inference retrieval-augmented multimodal chain-of-thoughts reasoning for language models. In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.

Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2023. Atlas: Few-shot learning with retrieval augmented language models. Journal of Machine Learning Research, 24(251):1-43.

Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2021. What disease does this patient have? A large-scale open domain question answering dataset from medical exams. Applied Sciences, 11(14):6421.

Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W Cohen, and Xinghua Lu. 2019. PubMedQA: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146.

Qiao Jin, Zheng Yuan, Guangzhi Xiong, Qianlan Yu, Huaiyuan Ying, Chuanqi Tan, Mosha Chen, Songfang Huang, Xiaozhong Liu, and Sheng Yu. 2022. Biomedical question answering: A survey of approaches and challenges. ACM Computing Surveys (CSUR), 55(2):1-36.

Yubin Kim, Chanwoo Park, Hyewon Jeong, Yik Siu Chan, Xuhai Xu, Daniel McDuff, Hyeonhoon Lee, Marzyeh Ghassemi, Cynthia Breazeal, and Hae Won Park. 2024a. MDAgents: An adaptive collaboration of LLMs in medical decision making. arXiv preprint arXiv:2404.15155.

Yunsoo Kim, Jinge Wu, Yusuf Abdulle, and Honghan Wu. 2024b. MedExQA: Medical question answering benchmark with multiple explanations. arXiv preprint arXiv:2406.06331.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459-9474.

Jiaxiang Liu, Yuan Wang, Jiawei Du, Joey Tianyi Zhou, and Zuozhu Liu. 2024. MedCoT: Medical chain of thought via hierarchical expert. arXiv preprint arXiv:2412.13736.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. 2024. Self-Refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems, 36.

Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. 2022. MedMCQA: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on Health, Inference, and Learning, pages 248-260. PMLR.

Ankit Pal and Malaikannan Sankarasubbu. 2024. OpenBioLLMs: Advancing open-source large language models for healthcare and life sciences. Hugging Face repository.

Pengcheng Qiu, Chaoyi Wu, Xiaoman Zhang, Weixiong Lin, Haicheng Wang, Ya Zhang, Yanfeng Wang, and Weidi Xie. 2024. Towards building multilingual language model for medicine. Nature Communications, 15(1):8384.

Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. REPLUG: Retrieval-augmented black-box language models. arXiv preprint arXiv:2301.12652.

Wenqi Shi, Ran Xu, Yuchen Zhuang, Yue Yu, Haotian Sun, Hang Wu, Carl Yang, and May D Wang. 2024. MedAdapter: Efficient test-time adaptation of large language models towards medical reasoning. arXiv preprint arXiv:2405.03000.

Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres, Ellery Wulczyn, Mohamed Amin, Le Hou, Kevin Clark, Stephen R Pfohl, Heather Cole-Lewis, et al. 2025. Toward expert-level medical question answering with large language models. Nature Medicine, pages 1-8.

Xiangru Tang, Daniel Shao, Jiwoong Sohn, Jiapeng Chen, Jiayi Zhang, Jinyu Xiang, Fang Wu, Yilun Zhao, Chenglin Wu, Wenqi Shi, et al. 2025. MedAgentsBench: Benchmarking thinking models and agent frameworks for complex medical reasoning. arXiv preprint arXiv:2503.07459.

Xiangru Tang, Anni Zou, Zhuosheng Zhang, Ziming Li, Yilun Zhao, Xingyao Zhang, Arman Cohan, and Mark Gerstein. 2023. MedAgents: Large language models as collaborators for zero-shot medical reasoning. arXiv preprint arXiv:2311.10537.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations.

Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. 2024. MMLU-Pro: A more robust and challenging multi-task language understanding benchmark. arXiv preprint arXiv:2406.01574.

Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. 2023. Unleashing the emergent cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration. arXiv preprint arXiv:2307.05300.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837.

Juncheng Wu, Wenlong Deng, Xingxuan Li, Sheng Liu, Taomian Mi, Yifan Peng, Ziyang Xu, Yi Liu, Hyunjin Cho, Chang-In Choi, et al. 2025. MedReason: Eliciting factual medical reasoning steps in LLMs via knowledge graphs. arXiv preprint arXiv:2504.00993.

Mengdi Xu, Yikang Shen, Shun Zhang, Yuchen Lu, Ding Zhao, Joshua Tenenbaum, and Chuang Gan. 2022. Prompting decision transformer for few-shot policy generalization. In International Conference on Machine Learning, pages 24631-24645. PMLR.

Shaochen Xu, Yifan Zhou, Zhengliang Liu, Zihao Wu, Tianyang Zhong, Huaqin Zhao, Yiwei Li, Hanqi Jiang, Yi Pan, Junhao Chen, et al. 2024. Towards next-generation medical agent: How o1 is reshaping decision-making in medical scenarios. arXiv preprint arXiv:2411.14461.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. 2024a. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.

Zhaorui Yang, Tianyu Pang, Haozhe Feng, Han Wang, Wei Chen, Minfeng Zhu, and Qian Liu. 2024b. Self-distillation bridges distribution gap in language model fine-tuning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, pages 1028-1043.

Jin Ye, Guoan Wang, Yanjun Li, Zhongying Deng, Wei Li, Tianbin Li, Haodong Duan, Ziyan Huang, Yanzhou Su, Benyou Wang, et al. 2024. GMAI-MMBench: A comprehensive multimodal evaluation benchmark towards general medical AI. Advances in Neural Information Processing Systems, 37:94327-94427.

Hongjian Zhou, Fenglin Liu, Boyang Gu, Xinyu Zou, Jinfa Huang, Jinge Wu, Yiru Li, Sam S Chen, Peilin Zhou, Junling Liu, et al. 2023. A survey of large language models in medicine: Progress, application, and challenge. arXiv preprint arXiv:2311.05112.

Yuxin Zuo, Shang Qu, Yifei Li, Zhangren Chen, Xuekai Zhu, Ermo Hua, Kaiyan Zhang, Ning Ding, and Bowen Zhou. 2025. MedXpertQA: Benchmarking expert-level medical reasoning and understanding. arXiv preprint arXiv:2501.18362.

A Ethics Statement

This work relies solely on publicly available medical question answering datasets, including MedQA, PubMedQA, MedMCQA, and others curated in the MEDAGENTSBENCH framework (Tang et al., 2025). These datasets are de-identified and collected from open educational or biomedical sources such as medical board exams and peer-reviewed literature. No private health records or patient-identifiable information were used.

Our proposed framework operates entirely at test time and does not require any model fine-tuning or user data collection. All evaluations are conducted offline on benchmark datasets, and no deployment in real clinical settings has been performed.

While our method is designed to improve the robustness and reliability of medical LLMs, it is not intended for use in high-stakes clinical decision-making without appropriate human oversight. We emphasize that the generated answers should not be interpreted as medical advice. Future work may involve incorporating human-in-the-loop mechanisms and broader impact assessments before real-world deployment.

B Generalist-Specialist System Roles and Prompt Templates

B.1 System Prompt for Specialist and Generalist

To ensure consistency and clarity across different model roles, we define structured system prompts tailored to each role in our multi-agent framework. These prompts specify role-specific reasoning strategies and output formats, enabling the models to adopt appropriate clinical reasoning behaviors under zero-shot test-time conditions.

The system prompt for the specialist categorization agent is presented in Table 5, the diagnostic prompt for the specialist agent is shown in Table 6, and the prompt for the generalist agent is provided in Table 7.

Specialist Categorization — System Prompt
You are a senior medical expert tasked with classifying clinical multiple-choice problems into the most relevant areas of medical science. Your role is strictly to determine and output the classification. Important: Do not provide any explanation, reasoning, or commentary. Only output the final classification, strictly following the format.
Table 5: System prompt for the specialist categorization agent.

Specialist Agent — System Prompt
You are an experienced specialist in {domain}. Your role is to carefully analyze clinical multiple-choice questions from the standpoint of a {domain.lower()} expert. You should reason by focusing on the interpretation of symptoms, underlying pathophysiology, and domain-specific diagnostic principles. First, review the provided reference examples and understand their reasoning patterns.
Then, based on your specialist knowledge, perform structured, step-by-step reasoning
for the new question.
Required output format:
Thought: [your detailed step-by-step reasoning]
Answer: [one of A, B, C, ...]
Table 6: System prompt for the specialist agent.

Generalist Agent — System Prompt
You are a general practitioner trained to manage a wide range of clinical conditions. Your task is to evaluate clinical multiple-choice questions using broad, cross-disciplinary medical knowledge. Focus on extracting key clinical findings, ruling out unlikely diagnoses, and applying general reasoning principles. First, analyze the reference examples to understand their diagnostic thought process. Then, produce a step-by-step analysis for the new question.
Required output format:
Thought: [your detailed step-by-step reasoning]
Answer: [one of A, B, C, ...]
Table 7: System prompt for the generalist agent.

B.2 Prompt Organization and Structure

To ensure faithful and consistent model behavior across different roles and stages of inference, we design modular and task-specific prompt templates. These templates guide the models in both few-shot reasoning and auxiliary classification tasks. Specifically, the specialist classification prompt (Table 8) is used to determine the relevant subfields of medicine needed to solve a given question, serving as a basis for downstream role assignment and retrieval. Meanwhile, the few-shot prompt template (Table 9) provides structured instructions and reference examples to facilitate reasoning transfer for clinical question answering.

Specialist Classifier — Prompt
Task Instructions
• Carefully analyze the following medical question: '''{question}'''
• The corresponding options are: '''{options}'''
• Based on both the question and the options, determine the top 3 most relevant subfields of medicine that are required to solve this question.
• You must only output in the exact format: Medical Field: Field1 | Field2 | Field3
Table 8: Prompt used for classifying medical questions into relevant specialist subfields.

Few-shot Prompt Template
Header: "Your task is to solve the following clinical multiple-choice question."
Question Block: Present the target question text, then list the answer options (A/B/C/...).
Instruction Block:
• The prompt shows N solved reference examples.
• Each example contains:
  - Finding Reasoning Paths: brainstorming approaches
  - Reasoning Process: a structured, step-by-step solution
• Focus only on learning the reasoning patterns.
• For the new question, generate your own reasoning and answer.
• Constraints:
  - Always choose one of the provided options (never "unknown")
  - Follow the exact output format shown below
Output Format Hint:
Thought: [your detailed step-by-step reasoning]
Answer: [one of A, B, C, ...]
Reference Examples: For each retrieved example i, insert:
Reference Example i: <question text> <options> Thought: <reference rationale>
Table 9: Prompt template used for few-shot reasoning with retrieved reference examples.

C The Reasoning Consistency Evaluator Rubric and Prompt

To robustly aggregate multi-agent responses, we introduce a reliability scoring mechanism that evaluates the consistency between an agent's reasoning and its final answer. In scenarios where a question has many answer options (e.g., N), simple majority voting becomes inefficient: achieving a reliable consensus typically requires at least N+1 agreeing agents. To address this, we employ a scoring-based verification strategy: each agent's reasoning is evaluated by a separate verifier agent that assigns a reliability score ranging from 1 to 5. This enables us to treat scores as soft confidence signals and aggregate responses more efficiently, even when only a few answers are available. The resulting per-sample reliability sum lies in the range of 4-20 (with 4 verifiers), providing fine-grained guidance for final answer selection. The full scoring prompt is shown in Table 10.
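Concretely, this aggregation step reduces to score-weighted voting. The sketch below is our own minimal rendering of that idea; the `score_fn` callable stands in for the verifier agent prompted with Table 10 and is a hypothetical placeholder, not part of a released implementation.

```python
from collections import defaultdict

def aggregate_answers(agent_outputs, score_fn):
    """Score-weighted voting over multi-agent answers.

    agent_outputs: list of (thought, answer) pairs from the agents.
    score_fn: hypothetical callable returning a reliability score in
              [1, 5], e.g. a wrapper around the verifier prompt of
              Table 10 (placeholder, not a released API).
    """
    totals = defaultdict(int)
    for thought, answer in agent_outputs:
        # Each answer is weighted by how well its own reasoning
        # supports it, as judged by the verifier agent.
        totals[answer] += score_fn(thought, answer)
    # Select the option with the highest summed reliability.
    return max(totals, key=totals.get)

# Toy example: two agents agree on "A"; one weakly supported "C" dissents.
outputs = [("...", "A"), ("...", "A"), ("...", "C")]
toy_scores = {"A": 5, "C": 2}
print(aggregate_answers(outputs, lambda t, a: toy_scores[a]))  # -> "A"
```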
Reliability Scoring — System Prompt
A clinical AI agent has answered the following multiple-choice question:
Question: "{question}"
Options: "{options_str}"
The agent provided the following reasoning:
Thought: "{thought}"
Final Answer: "{answer}"
Your Role: You are a critical-thinking medical reviewer. Your task is to assign a reliability score from 1 to 5 based on how well the reasoning supports the answer.
Scoring Scale:
• 5 — Reasoning is complete, medically accurate, and fully supports the answer.
• 4 — Mostly correct with minor issues, but the answer is still justified.
• 3 — Reasoning has some issues or omissions, but partially supports the answer.
• 2 — Reasoning is flawed or incomplete; answer is weakly supported.
• 1 — Reasoning is incorrect or misleading; answer is not justified.
Output Format: Score: [1-5]
Table 10: System prompt used by the reasoning consistency evaluator to assign a soft reliability score.

D Reference CoT Dataset Examples

We adopt the MedReason dataset (Wu et al., 2025) as our external reference corpus to support retrieval-augmented reasoning. MedReason comprises 32,682 high-quality question-answer pairs, each accompanied by detailed, clinically grounded chain-of-thought (CoT) explanations. The dataset is constructed through a knowledge graph-guided pipeline that ensures both logical consistency and medical factuality.

Specifically, the authors first collect QA pairs from seven public medical benchmarks, including MedQA, MedMCQA, PubMedQA, MMLU, MedXpert, Huatuo, and HLE. For each QA pair, relevant medical entities are extracted from both the question and the answer using GPT-4o and are then mapped to a structured medical knowledge graph, PrimeKG. Next, the shortest reasoning paths connecting the question and answer entities within the graph are retrieved and pruned using LLM-based selection to retain only clinically relevant paths. These paths serve as scaffolds for guiding step-by-step CoT generation.

To guarantee data quality, each generated reasoning trace is verified by prompting the LLM to reproduce the original answer solely based on the CoT explanation. If the answer cannot be recovered, the CoT is discarded. This quality filtering process reduces 45K generated samples to a final dataset of 32,682 verified CoTs, ensuring that each retained example is both logically sound and clinically accurate.
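To make this answer-recovery filter concrete, here is a minimal sketch under our reading of the description; `recover_answer` is a hypothetical callable wrapping the verification prompt, not the MedReason authors' code.

```python
def filter_cots(samples, recover_answer):
    """Answer-recovery filtering of generated CoTs.

    samples: iterable of dicts with keys "options", "answer" (gold
             label), and "cot" (the generated reasoning trace).
    recover_answer: hypothetical callable that returns the answer an
             LLM selects when shown only the CoT and the options.
    Keeps a sample only if the reasoning alone leads back to the
    original gold answer, mirroring the filtering step above.
    """
    kept = []
    for s in samples:
        if recover_answer(s["cot"], s["options"]) == s["answer"]:
            kept.append(s)
    return kept
```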
The 7,168th indexed sample from MedReason is shown below as an example:

Question: A young male patient presents with complete rectal prolapse. The surgery of choice is?
Answer Choices: A. Abdominal rectopexy B. Delorme's procedure C. Anterior resection D. Goodsall's procedure
Answer: Abdominal rectopexy.
Explanation: Surgery is required, and the operation can be performed via the perineal or the abdominal approach. An abdominal rectopexy has a lower rate of recurrence; because an abdominal procedure risks damage to the pelvic autonomic nerves, resulting in possible sexual dysfunction, a perineal approach is also usually preferred in young men. Ref: Bailey & Love's Short Practice of Surgery, 25th ed., p. 1225.

Reasoning:
Finding reasoning paths: 1. Rectal prolapse (disease) -> Surgical treatment options -> Perineal approach vs. abdominal approach. 2. Abdominal approach -> Abdominal rectopexy -> Risk of recurrence vs. risk of complications. 3. Abdominal rectopexy -> Potential complications -> Pelvic autonomic nerve damage -> Male sexual dysfunction.

Reasoning Process: 1. Understanding Rectal Prolapse and Treatment Options: Rectal prolapse is a condition where the rectum protrudes through the anus. Surgical intervention is often required, and the two main approaches are the perineal and abdominal approaches. 2. Evaluating Surgical Approaches: - Perineal Approach: This is generally less invasive and may be preferred in older or frail patients due to lower surgical risk. However, it is associated with a higher rate of recurrence. - Abdominal Approach: This includes procedures like abdominal rectopexy, which tend to have a lower recurrence rate. However, it is more invasive and carries a risk of complications. 3. Abdominal Rectopexy and Complications: - Recurrence Rate: Abdominal rectopexy is favored for its lower recurrence rate compared to perineal procedures. - Complications: A significant risk associated with the abdominal approach is potential damage to the pelvic autonomic nerves. This can lead to complications such as male sexual dysfunction, which is an important consideration in a young male patient.

Conclusion: Abdominal rectopexy is the preferred surgical option for a young male patient with complete rectal prolapse due to its lower rate of recurrence. However, it is important to consider the risk of pelvic autonomic nerve damage, which could lead to male sexual dysfunction. This risk should be weighed against the benefits of a lower recurrence rate when deciding on the surgical approach.

E Test Dataset Examples

Test Datasets. We conduct experiments on a curated benchmark comprising challenging subsets of nine medical QA datasets, selected from the MEDAGENTSBENCH framework (Tang et al., 2025). This benchmark is specifically designed to evaluate complex medical reasoning and includes questions that require multi-step clinical inference, diagnosis formulation, and treatment planning. The original datasets include: MedQA (Jin et al., 2021), a multilingual dataset with English questions derived from professional medical board exams (e.g., USMLE), focusing on diagnosis and treatment knowledge; each question includes four answer choices, with an average length of 167 tokens. PubMedQA (Jin et al., 2019), constructed from biomedical research articles in PubMed, requires reasoning over scientific evidence to determine yes/no/maybe answers; it contains long passages with an average length of 316 tokens. MedMCQA (Pal et al., 2022), sourced from Indian medical entrance exams (AIIMS and NEET-PG), comprises short multiple-choice questions across 21 medical subjects, averaging 18.7 tokens per question. MedBullets (Chen et al., 2024a), a high-yield medical education platform, provides detailed clinical questions with up to five answer options and longer contexts (avg. 213 tokens). MedExQA (Kim et al., 2024b) covers five medical specialties and emphasizes both correctness and explainability; it includes 935 questions, each with four options. MedXpertQA (Zuo et al., 2025) consists of two subsets, Reasoning and Understanding, specifically annotated to distinguish complex clinical inference tasks from knowledge recall. MMLU (Hendrycks et al., 2020) and MMLU-Pro (Wang et al., 2024) are general-purpose multitask benchmarks that include medical subsets testing both factual recall and abstract reasoning.
To better reflect real-world difficulty, we follow the hard subset construction pipeline proposed by MEDAGENTSBENCH. Questions are selected based on model failure rates (<50% accuracy across a set of strong models), medical topic coverage, and reasoning depth. Specifically, we include 100 hard questions each from MedQA, PubMedQA, MedMCQA, MedExQA, and MMLU-Pro; 100 from each MedXpertQA subset (Reasoning and Understanding); 89 from MedBullets; and 73 from MMLU. This results in a total of 862 expert-verified instances designed to stress-test the reasoning capabilities of large language models. Representative examples from the nine datasets are provided in Table 11.

MedQA. Question: A 39-year-old woman presents with headaches, constipation, increased thirst, and increased urinary frequency. Her blood pressure is 160/100 mm Hg and pulse is 67/min. Blood tests reveal high blood glucose and low potassium level. Abdominal CT scan shows a well-circumscribed homogeneous mass in the left adrenal gland. The hormone excess responsible for her symptoms uses which of the following pathways to exert its action? Options: (A) "Intracellular receptors", (B) "cAMP pathway", (C) "cGMP pathway", (D) "JAK/STAT pathway". Answer: (A)

PubMedQA. Question: Sternal fractures in childhood are rare. The aim of the study was to investigate the accident mechanism, the detection of radiological and sonographical criteria and consideration of associated injuries. In the period from January 2010 to December 2012, all inpatients and outpatients with sternal fractures were recorded according to the documentation. A total of 4 children aged 5-14 years with a sternal fracture were treated in 2 years; 2 children were hospitalized for pain management and 2 remained in outpatient care. Sternal fracture in growing children: A rare and often overlooked fracture? Options: (A) "yes", (B) "no", (C) "maybe". Answer: (C)

MedMCQA. Question: Minimum number of lobes required to form a tooth? Options: (A) "1", (B) "2", (C) "3", (D) "4". Answer: (C)

MedBullets. Question: A 22-year-old woman presents to the emergency department with shortness of breath. She was hiking when she suddenly felt unable to breathe and had to take slow deep breaths to improve her symptoms. The patient is a Swedish foreign exchange student and does not speak any English. Her medical history and current medications are unknown. Her temperature is 99.5°F (37.5°C), blood pressure is 127/68 mmHg, pulse is 120/min, respirations are 22/min, and oxygen saturation is 90% on room air. Physical exam is notable for poor air movement bilaterally and tachycardia. The patient is started on treatment. Which of the following parameters including forced expiratory volume in 1 second (FEV1), forced vital capacity (FVC), and diffusing capacity of carbon monoxide (DLCO) most appropriately describes this patient's underlying pathology? Options: (A) "Decreased airway tone", (B) "Increased FEV1", (C) "Increased FEV1/FVC", (D) "Increased FVC", (E) "Normal DLCO". Answer: (E)

MMLU. Question: How many different types of microorganisms may colonize the mouth? Options: (A) "35", (B) "100", (C) "350", (D) "500". Answer: (C)

MMLU-Pro. Question: How are new polyomaviruses detailed? Options: (A) "Shot gun sequencing", (B) "Cultivation in human neural cells", (C) "Deep pyro sequencing (NGS)", (D) "Monoclonal antibody techniques". Answer: (A)

MedExQA. Question: Which biological tissue has the highest viscosity? Options: (A) "Blood", (B) "Bone", (C) "Soft tissue", (D) "Water". Answer: (B)

MedXpert-R. Question: A 52-year-old paralegal comes to the clinic reporting chronic low back pain that has gradually worsened over two years. She describes an achy pain rated at 7/10 and spends considerable time at her computer desk. She experiences numbness, tingling, and radiating pain down her right leg during prolonged sitting or standing. Her examination reveals: no urinary/bowel incontinence or perineal numbness; right lower extremity strength of 4/5; decreased sensation in the right leg; right patellar reflex grade 1/4; positive straight leg raise test on the right side. Which nerve root levels are most likely involved in this presentation? Options: (A) L1-L2, (B) L5-S1, (C) L2-L3, (D) S2-S3, (E) L4-L5, (F) L4-S1, (G) L3-L4, (H) S1-S2, (I) T12-L1, (J) L5-S2. Answer: (G)

MedXpert-U. Question: Which hypoxic cell radiosensitizer, known for undergoing redox recycling or decomposing into a toxic product, has demonstrated effectiveness in the treatment of head and neck cancer? Options: (A) Doranidazole, (B) Tirapazamine, (C) Camptothecin, (D) Misonidazole, (E) Pimonidazole, (F) Nimorazole, (G) Sanazole, (H) Cetuximab, (I) Etanidazole, (J) Methotrexate. Answer: (F)

Table 11: Dataset examples with corresponding questions and answer options from the nine test datasets.

F Fail Case of an Existing Method

To illustrate the limitations of existing methods, we present a representative failure case from DeepSeek-R1 on a clinically complex MedQA question.

Scenario: A junior orthopaedic surgery resident is completing a carpal tunnel repair with the department chairman as the attending physician. During the procedure, the resident inadvertently cuts a flexor tendon, which is then repaired without complication. The attending advises the resident to exclude this incident from the operative report to avoid patient concern.

Question: What is the correct next action for the resident to take?

Answer Options:
– (A) Disclose the error to the patient and put it in the operative report
– (B) Tell the attending that he cannot fail to disclose this mistake
– (C) Report the physician to the ethics committee
– (D) Refuse to dictate the operative report

Predicted Answer: (A) Disclose the error and document it.

Model's Reasoning: <think> Okay, let's see. The question is about a resident who accidentally cuts a flexor tendon during carpal tunnel repair, and the attending tells them not to report it. The options are A to D, and the generated solutions all say A. First, the key points here are medical ethics, specifically honesty and patient disclosure. The resident made an error, even if it was repaired. The attending is advising to omit this from the operative report and not tell the patient to avoid worry. In medical practice, complications should be documented and disclosed to the patient. The principle of informed consent requires patients to know
about any complications, even if resolved. Failing to document could lead to legal issues and violates professional standards. Option A says to disclose and document, which aligns with ethical guidelines. The other options either don't directly address informing the patient (like B,
InstructPart: Task-Oriented Part Segmentation with Instruction Reasoning

Zifu Wan, Yaqi Xie, Ce Zhang, Zhiqiu Lin, Zihan Wang, Simon Stepputtis, Deva Ramanan, Katia Sycara
Robotics Institute, Carnegie Mellon University
{zifuw, yaqix, cezhang, zhiqiul, zihanwa3, sstepput, deva, sycara}@andrew.cmu.edu
https://zifuwan.github.io/InstructPart/

Abstract

Large multimodal foundation models, particularly in the domains of language and vision, have significantly advanced various tasks, including robotics, autonomous driving, information retrieval, and grounding. However, many of these models perceive objects as indivisible, overlooking the components that constitute them. Understanding these components and their associated affordances provides valuable insights into an object's functionality, which is fundamental for performing a wide range of tasks. In this work, we introduce a novel real-world benchmark, InstructPart, comprising hand-labeled part segmentation annotations and task-oriented instructions to evaluate the performance of current models in understanding and executing part-level tasks within everyday contexts. Through our experiments, we demonstrate that task-oriented part segmentation remains a challenging problem, even for state-of-the-art Vision-Language Models (VLMs). In addition to our benchmark, we introduce a simple baseline that achieves a twofold performance improvement through fine-tuning with our dataset. With our dataset and benchmark, we aim to facilitate research on task-oriented part segmentation and enhance the applicability of VLMs across various domains, including robotics, virtual reality, information retrieval, and other related fields.

1 Introduction

Large Vision-Language Models (LVLMs) (Radford et al., 2021; Alayrac et al., 2022; You et al., 2024) have been extensively utilized across various domains, such as robotics (Driess et al., 2023), autonomous driving (Zhou et al., 2023; Wan et al., 2025), medical imaging (Han et al., 2023), and information retrieval (Liu et al., 2021), owing to their strong language reasoning and perceptual capabilities. In these cases, LVLMs are primarily employed for language grounding, enabling the identification of visual targets within a scene based on associated language descriptions. By leveraging large datasets composed of image-text pairs, LVLMs can map visual content to textual semantic representations (Radford et al., 2021) within joint embedding spaces. However, while this approach yields powerful models with strong text-image alignment, they often focus on understanding entire objects (Liu et al., 2023b; Zou et al., 2023b,a; Xu et al., 2023; Liang et al., 2023; Sun et al., 2023), overlooking the fact that grounding is not solely about classifying whole objects but also about recognizing fine-grained parts. As illustrated in Figure 1, given the task of adding water and a visual observation of a kettle, the system must not only identify the entire kettle but also recognize each part of the target and its corresponding affordances before grounding to task-related regions.

[Figure 1 shows a kettle with the instruction "Add some water." and labeled parts: Lid (can be opened), Handle (can be grasped), Body (can contain liquid), Spout (can pour out liquid).]
Figure 1: The task-oriented part segmentation task: Presented with an image observation (left) and a corresponding task to add some water, the system is required to reason about specific parts to fulfill the task.

To advance task-oriented
part segmentation, we believe that establishing a benchmark is essential for the field. However, most large-scale vision datasets primarily focus on object-level understanding (Liu et al., 2023b; Zou et al., 2023b,a; Xu et al., 2023; Liang et al., 2023; Sun et al., 2023), while existing part-level recognition datasets either cover only a limited range of part categories (Nguyen et al., 2017; Myers et al., 2015; Roy and Todorovic, 2016) or are derived from simulations (Geng et al., 2023; Xiang et al., 2020; Mo et al., 2019). We attribute this primarily to the challenge of annotating part-level labels and task-related descriptions, which is both time-consuming and expensive (Wan et al., 2024).

To address this challenge, we introduce a new real-world dataset, InstructPart, consisting of 2,400 images across 48 object classes and 44 part classes, with hand-labeled segmentation masks, as well as 9,600 hand-labeled task instructions, 2,400 part queries, and 2,400 affordances. Each image is accompanied by human-annotated and GPT-polished instructions for common household tasks and detailed part segmentation masks. As part of our benchmark, we propose two distinct tasks: a) Task Reasoning Part Segmentation (TRPS): identifying a particular part given an instruction to fulfill a task, e.g., "Locate the part meant for pulling to open the microwave"; and b) Oracle Referring Part Segmentation (ORPS): identifying an object part given a part query, e.g., "handle of the microwave".

Thorough evaluations of current vision-language models on the two tasks reveal a significant deficiency in their ability to comprehend natural language and accurately ground it across diverse objects and parts. This finding highlights the need to address a critical shortcoming in vision-language models for fine-grained segmentation. Finally, we explore the training potential of our dataset by proposing a simple yet effective baseline, which leads to a nearly 100% improvement. With our proposed benchmark, we emphasize the importance of advancing vision-language models to excel not only in object-level understanding but also in discerning fine-grained part-level details. By utilizing our dataset, we hope to envision advancements in robotics, particularly for assistive robots, as well as in manipulation tasks, object segmentation, virtual reality, affordance learning, and other related domains.

Our contributions can be summarized as follows:
• To the best of our knowledge, we introduce the first dataset that bridges task-oriented interactions with part segmentation for common household tasks.
• We rigorously evaluate various vision-language models on our dataset, revealing their limitations in fine-grained recognition with language reasoning.
• We fine-tune a simple baseline based on a state-of-the-art model, achieving performance gains of over twofold, highlighting the quality and training potential of our dataset.

Dataset | #Object | #Part | #Affordance | #Action | Instruction
PartImageNet | 11/158 | 13 | N/A | N/A | No
Pascal-Part | 20 | – | N/A | N/A | No
PACO | 75 | – | N/A | N/A | No
UMD | 17 | N/A | 7 | N/A | No
NYUv2-AFF | 40 | N/A | 5 | N/A | No
IIT-AFF | 10 | N/A | 9 | N/A | No
AGD20K* | 50 | N/A | 36 | N/A | No
InstructPart (Ours) | 48 | 44 | 30 | 37 | Yes

Table 1: Comparison of relevant part segmentation datasets. We show the number of object classes (#Object), part classes (#Part), affordances (#Affordance), actions (#Action), and whether instructions are included (Instruction). N/A means there is no such
type of data, while – means the data exists but no relevant information is provided. 11/158 indicates the super-class and sub-class numbers in PartImageNet. * indicates the dataset only contains point annotations instead of accurate masks for target affordances.

2 Related Work

2.1 Part Segmentation

The problem of segmenting an object into a collection of semantic parts is not novel in itself. Prior works mainly utilized fully supervised approaches, which need to be trained on large datasets (Sun et al., 2023), such as PartImageNet (He et al., 2022), Pascal-Part (Chen et al., 2014), ADE20K (Zhou et al., 2019), and PACO (Ramanathan et al., 2023). However, these datasets contain only a limited subset relevant to human-robot interaction (e.g., PartImageNet includes just one related category: bottle), thus restricting their applicability to daily tasks. In robotics, part segmentation is used to understand the components of objects and their associated affordances, which are crucial for manipulation tasks (Gadre et al., 2021; Yi et al., 2018). While many datasets have been created for this domain (Mo et al., 2019; Xiang et al., 2020; Geng et al., 2023), they are all generated from simulators, which introduces potential challenges when generalizing to real-world scenarios. To address this issue, real-world affordance datasets such as UMD-Affordance (Myers et al., 2015), NYUv2-Affordance (Roy and Todorovic, 2016), and IIT-AFF (Nguyen et al., 2017) exist. However, due to the difficulty of collecting large quantities of real-world data, these datasets are limited in the number of affordances they present. On the other hand, AGD20K (Luo et al., 2022) collects egocentric and exocentric images for affordance learning. However, it only provides sparse point annotations, which can be insufficient for accurate task execution, such as manipulation. Similarly, Where2Act (Mo et al., 2021) extracts actionable information from articulated objects with movable parts but is limited to six action types and a single contact point, which may be sub-optimal. Furthermore, the aforementioned datasets only contain simple word phrases outlining the target; however, full language comprehension is crucial in a human-robot interaction task. Understanding language can be ambiguous even for simple objects like a light switch, which can be "turned on", "pressed", or "twisted" depending on the switch's type, and people tend to refer to such objects as parts of larger task descriptions instead of by a single word. Motivated by this, we construct a comprehensive dataset with task descriptions and object-part classes, as shown in Tab. 1.

2.2 Open-Vocabulary Segmentation

Open-vocabulary segmentation aims to perform zero-shot segmentation with the assistance of vision-language foundation models, such as CLIP (Radford et al., 2021). For example, OVSeg (Liang et al., 2023) proposes to crop the region proposals and finetune CLIP using a mask prompt tuning mechanism. SAN (Xu et al., 2023) applies a side adapter network to a frozen CLIP to get the class of masks. Going beyond object-level segmentation, VLPart (Sun et al., 2023) performs open-vocabulary part segmentation by parsing the
novel object into parts using its semantic correspondence with the base object and classifies it with CLIP. Although these open-world recognition methods demonstrate potential in recognizing out-of-distribution classes, they have limited reasoning ability to understand complex instructional sentences, prohibiting their wider usage in daily tasks requiring complex language comprehension.

2.3 Referring Expression Segmentation

Referring expression segmentation aims to generate a segmentation mask from a given language expression (Hu et al., 2016). Popular referring segmentation methods use a visual and a language encoder to extract features from the two modalities respectively, and design attention mechanisms to incorporate the features and assemble classes for region masks (Yang et al., 2022; Liu et al., 2023a; Ouyang et al., 2023; Liu et al., 2023b). Recently, more works have applied pre-trained foundation models, e.g., SAM (Kirillov et al., 2023) and CLIP (Radford et al., 2021), as the encoder and focused on the design of the decoder, such as X-Decoder (Zou et al., 2023a) and SEEM (Zou et al., 2023b). Furthermore, ManipVQA (Huang et al., 2024) applies VLMs with manipulation-centric knowledge to detect tools and affordances. However, the referring expression task only takes short phrases as input and does not consider complex reasoning, for example, when the target name does not appear directly in the expression.

2.4 Reasoning Segmentation

On the other hand, remarkable advances have been made in large language models (LLMs), which can understand complex language inputs and have the potential for more complex referring segmentation. Models such as BLIP-2 (Li et al., 2023), LLaVA-1.5 (Liu et al., 2024a), MiniGPT-4 (Zhu et al., 2024), and Flamingo (Alayrac et al., 2022) have explored the design of multi-modal LLMs for visual understanding and demonstrate their ability through tasks such as image captioning, visual question answering (VQA), etc. To enable the grounding ability of multimodal LLMs, Shikra (Chen et al., 2023b) and MiniGPT-v2 (Chen et al., 2023a) process object coordinates as input and enable localization by returning coordinates. However, these methods cannot produce segmentation masks and can only implicitly generate texts using LLMs rather than using a visual decoder for localization directly, which can be counterintuitive for image segmentation.

Recently, LISA (Lai et al., 2024) integrated a multi-modal LLM (Liu et al., 2024a) with a vision backbone and jointly trained a decoder to produce segmentation masks from language input. Despite using only 239 collected samples, LISA shows significant improvement in the reasoning process. However, its data is limited to entire objects, making it challenging for LISA to perform more fine-grained grounding. Motivated by this limitation, we introduce the InstructPart dataset, which contains instruction-part pairs, high-level affordance, low-level action, and part segmentation masks. With this dataset, we broaden the applicability of VLMs to various domains, such as manipulation, by enhancing their part grounding ability.

Figure 2: Examples from our InstructPart dataset are illustrated as follows: instruction queries are denoted in red text, while object and part names are indicated in blue. Each example includes an observation image (left), with the
corresponding ground truth part segments (right), highlighted with a green mask.

3 The InstructPart Benchmark and Baseline Models

In this section, we describe our InstructPart benchmark in detail and introduce a simple baseline method for our benchmark.

3.1 InstructPart Task Definition

Motivated by scenarios where agents need to localize areas based on task-specific queries, we define two tasks. The first, Task Reasoning Part Segmentation (TRPS), challenges models to combine linguistic reasoning with visual grounding. The second, Oracle Referring Part Segmentation (ORPS), focuses exclusively on evaluating visual grounding using oracle information about the designated object and part.

TRPS. The TRPS task, illustrated in the first row of Fig. 2, is designed to probe the model's reasoning and part grounding abilities. The input is an instruction-image pair, and the goal is to identify the referred part's segmentation mask, shown as green masks in Fig. 2. This task challenges the model to comprehend the instruction, analyze the image, and locate the corresponding part. Formally, the task is defined as: F(I_instruction, I_image) ⇒ M, where F represents the evaluated model, and I_instruction ∈ {I_human, I_GPT} is the instruction input, which can either be annotated by human experts or rewritten by GPT-4.

ORPS. In the ORPS task, shown in the second row of Fig. 2, the model is provided with direct part names to ensure accurate textual input. We formulate the ORPS task in two formats (a query-builder sketch follows this list):
1. Including both the part name and the object name, e.g., "the handle of the faucet": F(P of O, I_image) ⇒ M.
2. Incorporating the affordance, e.g., "the handle of the cup that can be held", which could assist the model in identifying the part: F(P of O that A_a, I_image) ⇒ M, where A_a refers to the affordance. We manually adjust the active and passive voice of the affordance to ensure grammatical precision.
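To make the TRPS input and the two ORPS query formats concrete, the following small sketch assembles the text queries; the function names and exact phrasings are our own illustration, not part of a released API.

```python
def trps_query(instruction: str) -> str:
    # Task Reasoning Part Segmentation: the target part is implicit
    # and must be inferred from the task instruction alone.
    return instruction

def orps_query(part: str, obj: str, affordance: str | None = None) -> str:
    # Oracle Referring Part Segmentation: the part is named explicitly.
    if affordance is None:
        # Format 1: part name + object name.
        return f"the {part} of the {obj}"
    # Format 2: additionally include the affordance phrase.
    return f"the {part} of the {obj} that {affordance}"

print(trps_query("Locate the part meant for pulling to open the microwave."))
print(orps_query("handle", "faucet"))              # "the handle of the faucet"
print(orps_query("handle", "cup", "can be held"))  # "the handle of the cup that can be held"
```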
3.2 InstructPart Dataset

In line with our proposed tasks, we collect data to create the InstructPart dataset. This dataset is designed to evaluate the effectiveness of current models in understanding natural language and their ability to ground to specific parts. It comprises 2,400 images, carefully selected to align with everyday household tasks. Specifically, InstructPart includes 48 object classes, 44 part categories, 30 affordances, and 37 actions. During data selection, a uniform distribution of object classes is ensured to create a balanced dataset. More details are included in Appendix A.

In the first row of Fig. 2, we show annotated examples for the TRPS task. For each image sample, we manually design a task description based on the observed environment and the potential intention of an agent to interact with the object. For each sample, we annotate all the fine-grained segmentation masks relevant to the task description as the ground truth. These masks are human-labeled to ensure accuracy and alignment with human understanding of object parts, maintaining the high quality of our dataset. We deliberately avoid specific part names in the instructions to better adapt to real-world scenarios. For example, commonly used expressions such as "Flush the toilet" or "Turn on the faucet" are preferred over more detailed directives such as "Press the toilet handle" or "Lift the faucet handle". The selection of these task descriptions aims to train models that are better at reasoning about object parts and their affordances, rather than simply identifying the part name that would solve the task. By avoiding part names, our dataset more effectively analyzes the reasoning ability of models, requiring them to infer parts from implicit descriptions. We engaged six human experts to create free-form natural language task instructions, which were then refined using GPT-4 for grammatical precision and sentence diversity. This was followed by thorough human verification to prevent hallucinations or other issues that can arise from using large language models for phrase diversification. For the ORPS task, we use the part name and object name as the language input to evaluate the model's ability to directly ground to the part.

In addition to the instruction-image pairs, we provide the names of objects and parts relevant to the image, such as "seat of the chair", "spout of the kettle", and "handle of the cup". We also include a corresponding affordance and action for each instruction. Specifically, affordances refer to low-level actions performed on a specific part, like "pull", "push", or "twist", while actions refer to the high-level function to be achieved, such as "turn on", "pick up", or "open". Note that the affordance and action can sometimes be identical, e.g., "pour", "cut", etc. In the examples shown in the first row of Fig. 2, the affordances are "support", "pour", and "grip", and the actions are "sit", "pour", and "pick up". This allows us to categorize affordances into two levels, addressing the ambiguity in definitions noted in previous studies (Nguyen et al., 2017; Roy and Todorovic, 2016; Myers et al., 2015). Note that in this work, we use the task descriptions and part names as the text input, while the affordance and action labels are reserved for future research. In summary, the annotation for each sample in InstructPart can be represented as: (I_task, I_image, O, P, M, A_affordance, A_action), where these items refer to the task instruction I_task, image observation I_image, object name O, part name P, segmentation mask M, affordance name A_affordance, and action name A_action. Note that I_task ∈ {I_human, I_GPT}, which means the text instruction is either directly annotated by humans or rewritten by GPT-4. More annotated examples can be found in Appendix B and H.

3.3 Baseline Method

For our InstructPart benchmark, we build a simple yet effective baseline model: the Part Identification and Segmentation Assistant (PISA). PISA originates from LISA (Lai et al., 2024), which demonstrates superior capability in object-level reasoning segmentation. Motivated by (Li et al., 2024a), which shows the effectiveness of DINOv2 (Oquab et al., 2024) in extracting correspondence information among various parts, we improve LISA with a frozen DINOv2 backbone for feature extraction. As suggested by (Li et al., 2024a), we use linear layers to integrate multi-level features from DINOv2, fusing information at various granularities. The fused features are sent to an image decoder derived from SAM (Kirillov et al., 2023), where we apply transpose convolution and up-sampling for decoding in an alternating manner.
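The following is a minimal PyTorch sketch of the multi-level fusion idea described above, assuming one linear projection per selected DINOv2 layer; the layer count and dimensions are illustrative, and this is not the released PISA implementation.

```python
import torch
import torch.nn as nn

class MultiLevelFusion(nn.Module):
    """Fuse token features from several layers of a frozen ViT backbone
    (e.g. DINOv2) with one linear projection per layer, summed into a
    single feature map for a SAM-style mask decoder. Layer count and
    dimensions below are illustrative assumptions."""

    def __init__(self, num_levels=4, in_dim=1024, out_dim=256):
        super().__init__()
        self.projs = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(num_levels)]
        )

    def forward(self, feats):
        # feats: list of (B, N, in_dim) token maps from selected layers.
        fused = sum(proj(f) for proj, f in zip(self.projs, feats))
        return fused  # (B, N, out_dim), passed on to the mask decoder

feats = [torch.randn(2, 256, 1024) for _ in range(4)]
print(MultiLevelFusion()(feats).shape)  # torch.Size([2, 256, 256])
```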
4 Experiments

4.1 Metrics

To evaluate our approach, we use the standard metrics from LISA (Lai et al., 2024), namely gIoU and cIoU. gIoU is the average of all per-image Intersection-over-Union (IoU) scores, while cIoU is defined as the cumulative intersection over the cumulative union. To evaluate the precision of the models, we adopt the Precision@50 (P@50) metric, as in previous referring segmentation works (Liu et al., 2023b; Mao et al., 2016), and develop a Precision@50:95 (P@50:95) metric following COCO (Lin et al., 2014). The P@50 metric considers a mask to be a true positive when the IoU ratio exceeds 0.5, and P@50:95 calculates precision across a range of IoU thresholds from 0.50 to 0.95 with increments of 0.05, then averages across all thresholds. The P@50:95 metric requires a higher minimum IoU for each prediction; hence, it is always lower than the P@50 metric. Of the two metric types, IoU and Precision, the latter only counts results greater than a threshold; it therefore poses more of a challenge to the model and fairly evaluates results with a high recall rate.
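Since gIoU, cIoU, P@50, and P@50:95 are fully specified above, a short NumPy sketch of the evaluation loop follows; the function name and empty-mask handling are our own assumptions.

```python
import numpy as np

def evaluate_masks(preds, gts):
    """Compute gIoU, cIoU, P@50, and P@50:95 for paired boolean masks.

    preds, gts: lists of numpy boolean arrays; preds[i] and gts[i]
    share the same shape. Minimal sketch of the metric definitions.
    """
    ious, inter_total, union_total = [], 0, 0
    for p, g in zip(preds, gts):
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        ious.append(inter / union if union > 0 else 1.0)  # both empty -> perfect
        inter_total += inter
        union_total += union
    ious = np.asarray(ious)
    giou = ious.mean()                    # mean of per-image IoUs
    ciou = inter_total / union_total      # cumulative inter / cumulative union
    p_at_50 = (ious > 0.5).mean()         # true positive when IoU exceeds 0.5
    # P@50:95 averages precision over thresholds 0.50, 0.55, ..., 0.95.
    thresholds = np.arange(0.50, 0.951, 0.05)
    p_50_95 = np.mean([(ious > t).mean() for t in thresholds])
    return giou, ciou, p_at_50, p_50_95
```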
4.2 Evaluated Methods

Here, we introduce the set of baseline models utilized in our experiments. More details about the model settings can be found in Appendix C.

Open-vocabulary Segmentation Models. The open-vocabulary part segmentation model VLPart (Sun et al., 2023) is intuitively suitable for our tasks since plentiful part segments were used for its training. We also choose OVSeg (Liang et al., 2023) and SAN (Xu et al., 2023) to examine how open-vocabulary object segmentation methods perform on our task. We select the best-reported models for the three methods.

Referring Segmentation Models. We conduct experiments with off-the-shelf models including X-Decoder (Zou et al., 2023a), SEEM (Zou et al., 2023b), and TRIS (Liu et al., 2023b). Besides, we also evaluate Grounding-DINO (Liu et al., 2024b), which provides strong open-vocabulary referring detection ability and has been integrated with SAM (Kirillov et al., 2023) as Grounded-SAM. We adopt the best models for these methods.

Reasoning Segmentation Models. For our tasks, LISA (Lai et al., 2024) is a natural choice since it can return masks and has been trained on several part segmentation datasets (He et al., 2022; Chen et al., 2014; Ramanathan et al., 2023). Other multi-modal LLMs, including Shikra (Chen et al., 2023b) and MiniGPT-v2 (Chen et al., 2023a), also have localization ability and have been chosen for our evaluation. Since they can only return bounding box outputs, we use the results as box prompts for SAM (Kirillov et al., 2023) to get a mask output for fair comparison.

Grid-based GPT-4V. The recent release of GPT-4V has demonstrated remarkable advancements in complex visio-linguistic reasoning (Yang et al., 2023b). However, the GPT-4V API cannot return segmentation mask output directly, and our preliminary experiments showed that GPT-4V performs poorly when asked to generate text coordinates. As a result, we first use Grounding-DINO (Liu et al., 2024b) to find the bounding box of the entire object and crop it, then ask GPT-4V to virtually divide the box into 7×7 grids and identify the grids containing the desired parts. Afterward, the coordinates of the grids are used as a prompt for SAM (Kirillov et al., 2023) to obtain the segmentation mask (a sketch of this grid-to-point conversion is given at the end of this subsection).

SoM-based GPT-4V. SoM (Yang et al., 2023a) proposes to label the masks obtained by SAM (Kirillov et al., 2023) with numbers in the center of each object. As it proves that precise referring can boost the performance of GPT-4V, we apply a similar approach to our part segmentation task.

PISA and Fine-tuning. To evaluate our proposed method, we use all training data of LISA (Lai et al., 2024) for pretraining and fine-tune with 1,800 samples of our data. As a comparison, we also fine-tune LISA with the same data. Besides, we also train the models with varying numbers of samples. More results can be found in Appendix D.
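As referenced in the Grid-based GPT-4V paragraph above, the grid-to-point conversion at the core of that pipeline can be sketched as follows; `detect_box`, `ask_gpt4v_for_cells`, and `sam_from_points` in the commented usage are hypothetical stand-ins for Grounding-DINO, the GPT-4V query, and SAM, not real APIs.

```python
def grid_point_prompts(box, cells, rows=7, cols=7):
    """Convert GPT-4V's chosen grid cells into point prompts for SAM.

    box:   (x0, y0, x1, y1) crop of the whole object, e.g. from an
           open-vocabulary detector such as Grounding-DINO.
    cells: iterable of (row, col) indices of the virtual grid cells
           that GPT-4V reports as containing the queried part.
    Returns pixel coordinates of the selected cell centers.
    """
    x0, y0, x1, y1 = box
    cell_w = (x1 - x0) / cols
    cell_h = (y1 - y0) / rows
    return [(x0 + (c + 0.5) * cell_w, y0 + (r + 0.5) * cell_h)
            for r, c in cells]

# Hypothetical end-to-end flow (helper names are placeholders):
#   box   = detect_box(image, object_name)               # Grounding-DINO
#   cells = ask_gpt4v_for_cells(crop(image, box), query) # GPT-4V on the 7x7 grid
#   mask  = sam_from_points(image, grid_point_prompts(box, cells))

print(grid_point_prompts((0, 0, 70, 70), [(0, 3), (1, 3)]))
# -> [(35.0, 5.0), (35.0, 15.0)]
```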
Methods | ORPS: Object-Part | ORPS: Object-Part-Affordance | TRPS: Human-Annotated | TRPS: GPT-4-Rewritten
VLPart (OVS) | 22.06 / 21.78 / 16.02 / 22.50 | 15.32 / 12.78 / 11.83 / 15.33 | 0.39 / 1.16 / 0.00 / 0.00 | 0.76 / 0.84 / 0.20 / 0.50
OVSeg (OVS) | 28.58 / 20.49 / 10.37 / 22.33 | 28.60 / 20.99 / 10.87 / 22.50 | 22.44 / 14.11 / 7.07 / 15.33 | 23.14 / 15.51 / 7.13 / 15.17
SAN (OVS) | 10.51 / 20.24 / 4.72 / 10.17 | 12.11 / 20.37 / 5.48 / 12.00 | 9.08 / 13.56 / 2.62 / 6.67 | 6.96 / 14.69 / 1.90 / 5.17
X-Decoder (OVS) | 18.96 / 15.65 / 8.52 / 14.83 | 18.96 / 15.65 / 8.52 / 14.83 | 17.48 / 13.61 / 7.00 / 13.17 | 17.38 / 12.76 / 6.90 / 13.17
SEEM (RES) | 13.54 / 14.63 / 6.33 / 10.50 | 13.54 / 14.63 / 6.33 / 10.50 | 13.52 / 14.09 / 4.97 / 9.83 | 14.53 / 14.19 / 4.57 / 10.67
TRIS (RES) | 23.02 / 19.90 / 6.97 / 17.50 | 23.11 / 19.65 / 6.98 / 18.50 | 21.97 / 17.83 / 6.68 / 15.00 | 22.66 / 18.52 / 7.03 / 16.83
G-SAM (RES) | 34.33 / 24.83 / 15.03 / 28.83 | 33.63 / 24.79 / 14.42 / 27.83 | 29.95 / 21.45 / 11.98 / 25.17 | 29.57 / 21.88 / 11.60 / 23.00
LISA (RS) | 34.46 / 39.44 / 17.48 / 32.67 | 35.77 / 39.62 / 18.78 / 34.50 | 32.11 / 30.25 / 16.98 / 30.00 | 29.75 / 27.44 / 15.08 / 27.83
Shikra (RS) | 4.50 / 7.20 / 2.67 / 4.17 | 9.36 / 15.37 / 4.92 / 7.83 | 1.70 / 3.48 / 0.83 / 1.50 | 14.65 / 12.95 / 8.40 / 13.33
MiniGPT-v2 (RS) | 35.65 / 36.05 / 18.38 / 33.17 | 34.58 / 35.11 / 18.50 / 34.27 | 26.29 / 19.46 / 13.00 / 24.00 | 29.67 / 21.37 / 15.07 / 24.17
Average | 22.56 / 22.02 / 10.65 / 19.67 | 21.95 / 21.10 / 10.66 / 19.81 | 17.49 / 14.90 / 7.11 / 14.44 | 17.59 / 16.02 / 7.79 / 14.98

Table 2: Results on ORPS (left) and TRPS (right) tasks. We divide the methods into three categories, namely open-vocabulary segmentation (OVS), referring expression segmentation (RES), and reasoning segmentation (RS). Each cell reports gIoU / cIoU / P@50:95 / P@50.

4.3 Quantitative Results of SOTA VLMs

Open-sourced VLMs Results. The left part of Tab. 2 shows the results of our ORPS task, where object and part names are explicitly embedded into a template, mitigating the need for models' reasoning ability. The right part of Tab. 2 shows the results of TRPS, where part names are not present in the instruction, requiring more reasoning ability to understand the implicit meaning. Comparing the left and right parts of Tab. 2, we find that performance on the oracle referring task is generally better than on task reasoning. This demonstrates that current models lack the reasoning ability to infer the correct interactive part from a task-image pair. For the ORPS task, incorporating the affordance in the instruction leads to no apparent increase in average performance. This indicates that most models may not possess the common sense to relate a part to an affordance, suggesting the potential of InstructPart for affordance learning. Besides, for the TRPS task, we find that GPT-4-rewritten instructions lead to overall better performance. This indicates that the precise instruction descriptions generated by GPT-4 align more effectively with the language embedding space of multimodal LLMs, enhancing the reasoning capabilities of vision-language models when handling instructions.

GPT-4V Based Methods Results. Tab. 3 shows the results of the two GPT-4V segmentation methods. We test the two methods on the oracle referring task to explore GPT-4V's localization ability. We select a subset consisting of 226 samples from the dataset according to the original category distribution. Although the results cannot be fairly compared with the other methods in Tab. 2, they still reveal the poor performance of GPT-4V. Two reasons may explain this: 1) While GPT-4V can localize objects (Yang et al., 2023a), we hypothesize that it is not trained directly on fine-grained part data. 2) Labeling numbers in the center of fine-grained parts may lead to overlapping and ambiguity in referring.

Methods | gIoU | cIoU | P@50:95 | P@50
Grid-based GPT-4V | 14.14 | 17.15 | 5.67 | 12.37
SoM-based GPT-4V | 25.41 | 26.82 | 17.90 | 25.81

Table 3: GPT-4V's performance in the object-part oracle referring part segmentation task, as applied to a subset of InstructPart.

4.4 Quantitative Results of Fine-tuning with InstructPart

Tab. 4 shows the results of the TRPS task with human-annotated instructions. The pre-trained PISA outperforms LISA by a large margin, demonstrating its strong reasoning part segmentation ability. After fine-tuning, both LISA and PISA gain great improvement on all metrics, indicating the exceptional quality and training utility of our data.

Methods | gIoU | cIoU | P@50:95 | P@50
LISA-Pretrained | 32.11 | 30.25 | 16.98 | 30.00
PISA-Pretrained | 43.46 | 46.76 | 20.00 | 44.50
LISA-Tuned | 71.26 | 72.14 | 57.73 | 79.33
PISA-Tuned | 76.19 | 78.39 | 62.20 | 87.00

Table 4: Comparison of pre-training and fine-tuning results. We use all datasets that LISA was trained on to get the pre-trained model. Fine-tuned models are trained with 1,800 samples from InstructPart.

4.5 Qualitative Results

Figs. 3, 4, and 5 show the visualization results on the TRPS task. The first column depicts the ground truth labels, and the remaining columns include the results of the off-the-shelf VLMs X-Decoder (Zou et al., 2023a), SEEM (Zou et al., 2023b), TRIS (Liu et al., 2023b), Grounded-SAM (Kirillov et al., 2023; Liu et al., 2024b), MiniGPT-v2 (Chen et al., 2023a), and LISA (Lai et al., 2024). The last two columns show the results of the fine-tuned LISA and PISA models. As shown by the examples, most VLMs tend to either obtain the entire object area or miss the correct
4.5 Qualitative Results

Figs. 3, 4, and 5 show the visualization results on the TRPS task. The first column depicts the ground truth labels, and the remaining columns include the results of off-the-shelf VLMs: X-Decoder (Zou et al., 2023a), SEEM (Zou et al., 2023b), TRIS (Liu et al., 2023b), Grounded-SAM (Kirillov et al., 2023; Liu et al., 2024b), MiniGPT-v2 (Chen et al., 2023a), and LISA (Lai et al., 2024). The last two columns show the results of the fine-tuned LISA and PISA models. As shown by the examples, most VLMs tend to either capture the entire object area or miss the correct regions, demonstrating the challenging tasks provided by InstructPart. In Fig. 3, we present examples where the fine-tuned PISA shows superior visual part segmentation results, demonstrating the effectiveness of our proposed method. Besides, both the pre-trained and fine-tuned LISA models also demonstrate great potential in part grounding. Here, we visualize additional results of the VLMs and fine-tuned models. As shown in Fig. 5, the pre-trained LISA (Lai et al., 2024) can better identify desired parts compared to other VLMs. This demonstrates the evaluation utility of our InstructPart dataset, on which all the advanced VLMs can be evaluated and compared. Furthermore, in Fig. 4, the pre-trained LISA fails to recognize target parts, similar to other VLMs, while both fine-tuned models significantly improve the results. More visualizations are available in Appendix G.

Figure 3: Qualitative comparison of different VLMs and the fine-tuned models (columns: Ground Truth, X-Decoder, SEEM, TRIS, G-SAM, MiniGPT-v2, LISA-Pretrain, LISA-Finetune, PISA-Finetune). In these examples, the pre-trained LISA falls short of recognizing the correct part. After fine-tuning, PISA shows better potential for part understanding than LISA. More results can be found in Figure G13.

Figure 4: Qualitative comparison of different VLMs and the fine-tuned models. In these examples, the pre-trained LISA falls short of recognizing the correct part. After fine-tuning, both LISA and PISA perform well on the part identification. More results can be found in Figure G14.

Figure 5: Qualitative comparison of different VLMs and the fine-tuned models. In these examples, the pre-trained LISA already delivers good identification of the target parts. More results can be found in Figure G15.

5 Discussion

Scale of InstructPart dataset. We consider InstructPart a sufficient task-oriented part segmentation dataset for the following reasons: 1) The size of InstructPart already exceeds that of several recent vision-language evaluation datasets, such as MMStar (Chen et al., 2024) (1,500 samples, NeurIPS'24), VisIT-Bench (Bitton et al., 2024) (592 images, NeurIPS'23), WHOOPS! (Bitton-Guetta et al., 2023) (500 images, ICCV'23), and TIFA160 (Hu et al., 2023) (800 generated images, ICCV'23). We believe that our data are adequate for thorough evaluations of current models. 2) InstructPart addresses a gap in data related to reasoning about robot-object interaction and part segmentation (e.g., PartImageNet includes only one relevant category: bottle). 3) Fine-tuning LISA with a small subset of our dataset (200 samples) can lead to a nearly 100% performance increase (results included in Appendix D), demonstrating the exceptional quality and utility of our dataset.

Novelty of InstructPart. The novelty of InstructPart lies not in our baseline method but in our comprehensive evaluation of SOTA VLMs, revealing their limitations in complex language reasoning and part grounding. We hope that the established benchmark will foster progress in VLM-based part grounding, ultimately enhancing the real-world applicability of VLMs across various scenarios. Our proposed baseline is simple yet demonstrates the superior quality and training potential of our dataset. Additionally, we conduct a case study on real-world grasping data (see Appendix I), showing the potential of InstructPart for broader applications.
Potential Applications. Our dataset contains samples from various scenarios, including kitchens, living rooms, outdoor scenes, etc., and can be used for robot manipulation and visual question answering. Besides, our dataset can provide data for affordance learning and semantic understanding. For benchmarking usage, one can also use the entire 2,400 images to evaluate current advanced VLMs.

6 Conclusion

In this work, we introduce a new benchmark, InstructPart, a novel dataset containing part annotations for common household objects as well as two tasks: task reasoning and oracle referring segmentation. We showed that even the most advanced vision-language models struggle with tasks that link specific affordances to the corresponding parts of an object when given high-level instructions. By fine-tuning a simple baseline with our dataset, we achieve a twofold improvement in part segmentation, showcasing the quality and training utility of our data. Through our work, we highlight a significant gap in foundation models for task-oriented part segmentation and hope that our dataset can pave the way for further research into object-part reasoning.

Limitations. In this work, we propose a baseline method that achieves significant performance improvements. However, we have not fully explored the potential of our dataset, as the affordance labels were not utilized during training. An intriguing direction for future research is to combine affordance learning with language reasoning to further enhance performance.

Acknowledgements

This work has been funded in part by Army Research Laboratory (ARL) awards W911NF-23-2-0007 and W911QX-24-F-0049, DARPA award FA8750-23-2-1015, and ONR award N00014-23-1-2840.

References

Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736.

Yonatan Bitton, Hritik Bansal, Jack Hessel, Rulin Shao, Wanrong Zhu, Anas Awadalla, Josh Gardner, Rohan Taori, and Ludwig Schmidt. 2024. Visit-bench: A dynamic benchmark for evaluating instruction-following vision-and-language models. In Advances in Neural Information Processing Systems, volume 36, pages 26898–26922.

Nitzan Bitton-Guetta, Yonatan Bitton, Jack Hessel, Ludwig Schmidt, Yuval Elovici, Gabriel Stanovsky, and Roy Schwartz. 2023. Breaking common sense: Whoops! a vision-and-language benchmark of synthetic and compositional images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2616–2627.

Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, and Mohamed Elhoseiny. 2023a. Minigpt-v2: large language model as a unified interface for vision-language multi-task learning. arXiv preprint arXiv:2310.09478.

Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. 2023b. Shikra: Unleashing multimodal llm's referential dialogue magic. arXiv preprint arXiv:2306.15195.

Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, and Feng Zhao. 2024. Are we on the right way for evaluating large vision-language models?
In Advances in Neural Information Processing Systems, volume 37, pages 27056–27087.
Xianjie Chen, Roozbeh Mottaghi, Xiaobai Liu, Sanja Fidler, Raquel Urtasun, and Alan Yuille. 2014. Detect what you can: Detecting and representing objects using holistic models and body parts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1971–1978.

Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. 2023. Palm-e: An embodied multimodal language model. In International Conference on Machine Learning, pages 8469–8488. PMLR.

Hao-Shu Fang, Chenxi Wang, Minghao Gou, and Cewu Lu. 2020. Graspnet-1billion: A large-scale benchmark for general object grasping. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11444–11453.

Samir Yitzhak Gadre, Kiana Ehsani, and Shuran Song. 2021. Act the part: Learning interaction strategies for articulated object part discovery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15752–15761.

Haoran Geng, Helin Xu, Chengyang Zhao, Chao Xu, Li Yi, Siyuan Huang, and He Wang. 2023. Gapartnet: Cross-category domain-generalizable object perception and manipulation via generalizable and actionable parts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7081–7091.

Tianyu Han, Lisa C Adams, Sven Nebelung, Jakob Nikolas Kather, Keno K Bressem, and Daniel Truhn. 2023. Multimodal large language models are generalist medical image interpreters. medRxiv.

Ju He, Shuo Yang, Shaokang Yang, Adam Kortylewski, Xiaoding Yuan, Jie-Neng Chen, Shuai Liu, Cheng Yang, Qihang Yu, and Alan Yuille. 2022. Partimagenet: A large, high-quality dataset of parts. In European Conference on Computer Vision, pages 128–145. Springer.

Ronghang Hu, Marcus Rohrbach, and Trevor Darrell. 2016. Segmentation from natural language expressions. In European Conference on Computer Vision, pages 108–124. Springer.

Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, Ranjay Krishna, and Noah A Smith. 2023. Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20406–20417.

Siyuan Huang, Iaroslav Ponomarenko, Zhengkai Jiang, Xiaoqi Li, Xiaobin Hu, Peng Gao, Hongsheng Li, and Hao Dong. 2024. Manipvqa: Injecting robotic affordance and physically grounded information into multi-modal large language models. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 7580–7587. IEEE.

Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. 2023. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015–4026.

Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, and Jiaya Jia. 2024. Lisa: Reasoning segmentation via large language model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9579–9589.

Gen Li, Deqing Sun, Laura Sevilla-Lara, and Varun Jampani. 2024a. One-shot open affordance learning with foundation models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3086–3096.

Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International Conference on Machine Learning, pages 19730–19742. PMLR.
Samuel Li, Sarthak Bhagat, Joseph Campbell, Yaqi Xie, Woojun Kim, Katia P. Sycara, and Simon Stepputtis. 2024b. Shapegrasp: Zero-shot task-oriented grasping with large language models through geometric decomposition. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 10527–10534.

Feng Liang, Bichen Wu, Xiaoliang Dai, Kunpeng Li, Yinan Zhao, Hang Zhang, Peizhao Zhang, Peter Vajda, and Diana Marculescu. 2023. Open-vocabulary semantic segmentation with mask-adapted clip. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7061–7070.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer.

Chang Liu, Henghui Ding, and Xudong Jiang. 2023a. Gres: Generalized referring expression segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23592–23601.

Fang Liu, Yuhao Liu, Yuqiu Kong, Ke Xu, Lihe Zhang, Baocai Yin, Gerhard Hancke, and Rynson Lau. 2023b. Referring image segmentation using text supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 22124–22134.

Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2024a. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296–26306.

Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, et al. 2024b. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. In European Conference on Computer Vision, pages 38–55. Springer.

Zheyuan Liu, Cristian Rodriguez-Opazo, Damien Teney, and Stephen Gould. 2021. Image retrieval on real-life images with pre-trained vision-and-language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2125–2134.

Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, and Dacheng Tao. 2022. Learning affordance grounding from exocentric images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2252–2261.

Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11–20.

Kaichun Mo, Leonidas J Guibas, Mustafa Mukadam, Abhinav Gupta, and Shubham Tulsiani. 2021. Where2act: From pixels to actions for articulated 3d objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6813–6823.

Kaichun Mo, Shilin Zhu, Angel X Chang, Li Yi, Subarna Tripathi, Leonidas J Guibas, and Hao Su. 2019. Partnet: A large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 909–918.

Austin Myers, Ching L Teo, Cornelia Fermüller, and Yiannis Aloimonos. 2015. Affordance detection of tool parts from geometric features. In IEEE International Conference on Robotics and Automation, pages 1374–1381. IEEE.

Anh Nguyen, Dimitrios Kanoulas, Darwin G Caldwell, and Nikos G Tsagarakis. 2017. Object-based affordances detection with convolutional neural networks and dense conditional random fields. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5908–5915. IEEE.
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. 2024. DINOv2: Learning robust visual features without supervision. Transactions on Machine Learning Research.

Shuyi Ouyang, Hongyi Wang, Shiao Xie, Ziwei Niu, Ruofeng Tong, Yen-Wei Chen, and Lanfen Lin. 2023. Slvit: Scale-wise language-guided vision transformer for referring image segmentation. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 1294–1302.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR.

Vignesh Ramanathan, Anmol Kalia, Vladan Petrovic, Yi Wen, Baixue Zheng, Baishan Guo, Rui Wang, Aaron Marquez, Rama Kovvuri, Abhishek Kadian, et al. 2023. Paco: Parts and attributes of common objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7141–7151.

Adam Rashid, Satvik Sharma, Chung Min Kim, Justin Kerr, Lawrence Yunliang Chen, Angjoo Kanazawa, and Ken Goldberg. 2023. Language embedded radiance fields for zero-shot task-oriented grasping. In Conference on Robot Learning, pages 178–200. PMLR.

Anirban Roy and Sinisa Todorovic. 2016. A multi-scale cnn for affordance segmentation in rgb images. In European Conference on Computer Vision, pages 186–201. Springer.

Peize Sun, Shoufa Chen, Chenchen Zhu, Fanyi Xiao, Ping Luo, Saining Xie, and Zhicheng Yan. 2023. Going denser with open-vocabulary part segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15453–15465.

Zifu Wan, Yaqi Xie, Ce Zhang, Zhiqiu Lin, Zihan Wang, Simon Stepputtis, Deva Ramanan, and Katia P Sycara. 2024. Instructpart: Affordance-based part segmentation from language instruction. In AAAI-2024 Workshop on Public Sector LLMs: Algorithmic and Sociotechnical Design.

Zifu Wan, Pingping Zhang, Yuhao Wang, Silong Yong, Simon Stepputtis, Katia Sycara, and Yaqi Xie. 2025. Sigma: Siamese mamba network for multi-modal semantic segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1734–1744. IEEE.

Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, et al. 2023. Visionllm: Large language model is also an open-ended decoder for vision-centric tasks. In Advances in Neural Information Processing Systems, volume 36, pages 61501–61513.

Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, et al. 2020. Sapien: A simulated part-based interactive environment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11097–11107.

Mengde Xu, Zheng Zhang, Fangyun Wei, Han Hu, and Xiang Bai. 2023. Side adapter network for open-vocabulary semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2945–2954.

Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, and Jianfeng Gao. 2023a. Set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v. arXiv preprint arXiv:2310.11441.
Zhao Yang, Jiaqi Wang, Yansong Tang, Kai Chen, Hengshuang Zhao, and Philip HS Torr. 2022. Lavt: Language-aware vision transformer for referring image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18155–18165.

Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. 2023b. The dawn of lmms: Preliminary explorations with gpt-4v(ision). arXiv preprint arXiv:2309.17421, 9.

Li Yi, Haibin Huang, Difan Liu, Evangelos Kalogerakis, Hao Su, and Leonidas Guibas. 2018. Deep part induction from articulated object pairs. ACM Transactions on Graphics, 37(6):1–15.

Haoxuan You, Haotian Zhang, Zhe Gan, Xianzhi Du, Bowen Zhang, Zirui Wang, Liangliang Cao, Shih-Fu Chang, and Yinfei Yang. 2024. Ferret: Refer and ground anything anywhere at any granularity. In International Conference on Learning Representations.

Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. 2019. Semantic understanding of scenes through the ade20k dataset. International Journal of Computer Vision, 127:302–321.

Xingcheng Zhou, Mingyu Liu, Bare Luka Zagar, Ekim Yurtsever, and Alois C. Knoll. 2023. Vision language models in autonomous driving and intelligent transportation systems. arXiv preprint arXiv:2310.14414.

Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2024. MiniGPT-4: Enhancing vision-language understanding with advanced large language models. In International Conference on Learning Representations.

Xueyan Zou, Zi-Yi Dou, Jianwei Yang, Zhe Gan, Linjie Li, Chunyuan Li, Xiyang Dai, Harkirat Behl, Jianfeng Wang, Lu Yuan, et al. 2023a. Generalized decoding for pixel, image, and language. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15116–15127.

Xueyan Zou, Jianwei Yang, Hao Zhang, Feng Li, Linjie Li, Jianfeng Wang, Lijuan Wang, Jianfeng Gao, and Yong Jae Lee. 2023b. Segment everything everywhere all at once. In Advances in Neural Information Processing Systems, volume 36, pages 19769–19782.

Appendix

A Dataset Details

The InstructPart dataset is collected from the Flickr website (https://www.flickr.com/) and AGD20K (Luo et al., 2022), where we selected freely licensed images from both sources. To better understand the categories of our dataset, we follow ADE20K (Zhou et al., 2019) to provide the distribution of objects and parts within InstructPart. As shown in Fig. 6, the dataset comprises 2,400 data items, encompassing 48 object classes and 44 part classes, which together form 98 distinct object-part pair classes. Besides, we also provide word clouds to visualize the object-part classes and affordance-action categories, as depicted in Fig. A7 and Fig. A8, respectively. This diversity in classes indicates our dataset's wide coverage of various daily scenes, offering robust criteria for comprehensively analyzing the proficiency of current models in understanding task instructions and segmenting parts. Furthermore, this suggests that our dataset can be valuable for broad areas, including semantic segmentation, robot manipulation, visual question answering, and more.

B Annotation Example

Fig. B9 presents two examples of annotations from our InstructPart dataset, focusing on the handle of a cup and the lid of a pot, respectively. In each JSON dictionary, the names of the object and its specific part are noted, aligned with a task instruction that pertains to a particular part shown in the image.
Additionally, both a low-level affordance name and a high-level action name are provided in relation to the instruction. Besides, in Fig. B10, we provide more examples that contain occlusions and human interactions to showcase the complexity of our dataset.

C Evaluated Model Details

Open-vocabulary segmentation models. We choose OVSeg (Liang et al., 2023) and SAN (Xu et al., 2023) to assess the performance of open-vocabulary object segmentation methods on our task. We select the best-reported models for the two methods, ovseg_swinbase_vitL14_ft_mpt.pth and san_vit_large_14.pth, respectively.

Referring expression segmentation. We conduct experiments with off-the-shelf models including X-Decoder (Zou et al., 2023a), SEEM (Zou et al., 2023b), and TRIS (Liu et al., 2023b). We adopt xdecoder_focalt_last.pt, seem_focall_v1.pt, and stage2_refcocog_google.pth for the three models, respectively. Besides, we also evaluate Grounding-DINO (Liu et al., 2024b), which has demonstrated strong open-vocabulary referring detection ability and has been integrated with SAM (Kirillov et al., 2023) into a project, Grounded-SAM (https://github.com/IDEA-Research/Grounded-Segment-Anything).

Reasoning segmentation. For our tasks, LISA (Lai et al., 2024) is naturally a good choice since it can return masks and has been trained on several part segmentation datasets. As a result, it is interesting to explore whether it possesses the ability to understand instructions and find part segments. Other multi-modal LLMs, including VisionLLM (Wang et al., 2023) and Shikra (Chen et al., 2023b), also have localization ability. Since they can only return bounding-box outputs, we use the results as box prompts for SAM to get a mask output for fair comparison. However, we cannot test VisionLLM since its code has not been released.

To prompt LISA, we follow its original setting and append "Please output the segmentation mask." to the end of each instruction. Besides, in order to formulate a query for the oracle referring part segmentation task, we embed the object and part name in the format "Where is the <I_text> in the image?", where <I_text> stands for the text input. To prompt Shikra, we integrate our instruction into its original template as follows:

• Instruction referring part segmentation: <I_text>. Can you point out all the related parts in the image <I_image> and provide the coordinates of their locations?

• Oracle referring part segmentation: Can you point out all the <I_text> in the image <I_image> and provide the coordinates of their locations?

We adopt the LISA-7B-v1 (Lai et al., 2024) model that has been fine-tuned on both the training and validation data of LISA's dataset. As for Shikra, we select the frequently updated model, Shikra-7B-delta-v1-0708.
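The query templates above can be assembled with simple string formatting; the following is an illustration only, with function names of our own choosing rather than the authors' released code:

def build_lisa_query(instruction: str) -> str:
    # LISA's original setting: append the mask request to each instruction.
    return f"{instruction} Please output the segmentation mask."

def build_oracle_query(part: str, obj: str) -> str:
    # Oracle referring template with the object and part names embedded.
    return build_lisa_query(f"Where is the {part} of the {obj} in the image?")

print(build_oracle_query("handle", "kettle"))
# -> Where is the handle of the kettle in the image? Please output the segmentation mask.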
Figure 6: Object-part pair distribution. We collect 2,400 data pieces in total, containing 48 object classes and 44 part classes, constituting 98 different object-part pair classes. The x-axis enumerates the object-part pairs (from bed-bed_frame through wooden_chair-seat), and the y-axis shows the frequency of each item. The parts belonging to the same object classes are highlighted with the same color in the bar chart.

Figure A7: InstructPart dataset object and part classes. The left part shows the object class names and the right part shows the part class names.

Figure A8: InstructPart dataset affordance and action categories. The left part shows the affordance names and the right part shows the action names. Specifically, affordances refer to low-level actions performed on a specific part, while actions refer to the high-level function to be achieved.

Figure B9: Annotation example. Each data item is represented by a JSON dictionary, which details the components involved: the object to which the parts belong, the name of each part, a specific instruction related to these parts, a low-level affordance associated with the instruction, and a high-level action performed on the parts. Corresponding parts are highlighted in green in the images on the right.

Figure B10: More complex examples in InstructPart, including occlusions and human-object interactions.

D Effect of Training Samples

To verify the quality and training potential of our dataset, we gradually increase the number of training samples from 200 to 1,800 and observe the performance improvement. Specifically, we start with 200 samples for training, then gradually increase the number of training samples to 600, 1,200, and finally 1,800. Each increment includes all the previously used training samples. As shown in Fig. D11, as the number of training samples increases, the IoU metric gradually rises and exhibits a logarithmic convergence tendency. This indicates that our high-quality data significantly boosts performance, even with just 200 samples. The performance of both models improves substantially from the outset.

Figure D11: Performance improvement with increasing numbers of training samples. We gradually increase the training set to 200, 600, 1,200, and 1,800 samples.

E Does object recognition hinder part segmentation?

To explore whether the bottleneck lies in current VLMs' object recognition ability, we use the object classes as instructions and obtain the results in Tab. E5. Since we do not have object-level labels, we use the recall rate as a reflection of whether the model can find the entire object. From the results in Tab. E5, the precision is much lower than the recall rate, and the recall rate is close to 1 after the third quartile (75th percentile).
This indicates that the predicted masks can generally cover the part labels, so the poor performance on TRPS cannot be attributed to object recognition ability.

Methods   Prec.   Rec.@A   Rec.@25%   Rec.@50%   Rec.@75%
OVSeg     20.93   81.80    85.00      99.26      100.00
SAN       19.46   73.55    56.37      98.49      100.00
G-SAM     25.53   89.83    92.59      96.99      99.37

Table E5: Precision and recall rates on object-level segmentation results. The five metrics refer to precision (Prec.), average recall (Rec.@A), first-quartile recall (Rec.@25%), median recall (Rec.@50%), and third-quartile recall (Rec.@75%), respectively.
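To make the metrics in Tab. E5 concrete, the sketch below (our illustration, not the paper's evaluation script) computes mask-level precision and recall per sample and summarizes recall by its mean and quartiles; reading the quartile columns as quartiles of the per-sample recall distribution is our interpretation of the table header:

import numpy as np

def precision_recall_stats(pred_masks, part_masks):
    """Per-sample precision/recall of predicted object masks against
    part labels, summarized as in Tab. E5."""
    precisions, recalls = [], []
    for pred, part in zip(pred_masks, part_masks):
        inter = np.logical_and(pred, part).sum()
        precisions.append(inter / max(pred.sum(), 1))  # share of prediction that is the part
        recalls.append(inter / max(part.sum(), 1))     # share of the part that is covered
    q25, q50, q75 = np.percentile(recalls, [25, 50, 75])
    return {"Prec.": float(np.mean(precisions)), "Rec.@A": float(np.mean(recalls)),
            "Rec.@25%": q25, "Rec.@50%": q50, "Rec.@75%": q75}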
F GPT-4V Qualitative Results

We show the results of the GPT-4V-based methods, namely SoM-based GPT-4V and Grid-based GPT-4V, in Fig. F12. While the GPT-4V-based methods deliver clear boundaries, they sometimes select the wrong segments from SAM (Kirillov et al., 2023), leading to poor overall performance.

Figure F12: GPT-4V based methods.

Table G6: Index names for the samples in Fig. G13, Fig. G14, and Fig. G15.

Fig. G13: (1) 1009786005_d4a02fd811_o-faucet-handle; (2) 2329134125_8a71be7470_o-kettle-handle; (3) 3088942376_8681bb276f_o-spoon-handle; (4) cup_000294-cup-handle; (5) knife_000911-knife-handle; (6) 410044558_6145ff0aaa_o-pot-handle; (7) laptop_000445-laptop-keyboard; (8) knife_000691-knife-handle.

Fig. G14: (1) 4178009615_ed8921d0d1_k-kettle-spout; (2) cup_000324-cup-handle; (3) bottle_002805-bottle-body; (4) knife_000568-knife-handle; (5) knife_000953-knife-blade; (6) 34465720_f8f20ee31a_c-scissors-handle; (7) 381204305_e5e937fccc_h-pitcher-handle; (8) bench_001273-bench-seat; (9) fork_002954-fork-handle; (10) knife_000154-knife-handle; (11) shovel_1-shovel-blade; (12) suitcase_001098-suitcase-telescopic_handle; (13) wine_glass_001774-wine_glass-stem; (14) dining_4-chair-seat.

Fig. G15: (1) 2491323916_a05ac3648f_o-knife-handle; (2) 4580224808_1194613deb_o-chair-seat; (3) 4471021242_b9d855f193_k-bucket-handle; (4) 8607578325_25221a7726_h-spoon-handle; (5) bench_002898-bench-seat; (6) cup_001798-cup-handle; (7) cup_002055-cup-handle; (8) knife_000530-knife-blade; (9) scissors_001402-scissors-handle; (10) cup_002062-cup-handle; (11) 2939090254_2f01ebed6d_o-computer_mouse-scroll_wheel; (12) 6217625873_411169d784_o-laptop-keyboard; (13) cup_001104-cup-handle; (14) fork_001529-fork-handle.

G More Qualitative Results

In Figures 3-5 of the main paper, we only include six qualitative results due to space limitations. In Fig. G13, we present more examples where the fine-tuned PISA shows superior visual part segmentation results, demonstrating the effectiveness of our proposed method. Besides, both the pre-trained and fine-tuned LISA models also demonstrate great potential in part grounding. Here, we visualize additional results of the VLMs and fine-tuned models. As shown in Fig. G15, the pre-trained LISA (Lai et al., 2024) can better identify desired parts compared to other VLMs. This demonstrates the evaluation utility of our InstructPart dataset, on which all the advanced VLMs can be evaluated and compared. Furthermore, in Fig. G14, the pre-trained LISA fails to recognize target parts, similar to other VLMs, while both fine-tuned models significantly improve the results. In Tab. G6, we provide a list containing the name of each sample we evaluate so that their language input can be easily retrieved from our dataset.

Figure G13: Qualitative comparison of different VLMs and the fine-tuned models. In these examples, the pre-trained LISA falls short of recognizing the correct part. After fine-tuning, PISA shows better potential for part understanding than LISA.
Figure G14: Qualitative comparison of different VLMs and the fine-tuned models. In these examples, the pre-trained LISA falls short of recognizing the correct part. After fine-tuning, both LISA and PISA perform well on the part identification.

Figure G15: Qualitative comparison of different VLMs and the fine-tuned models. In these examples, the pre-trained LISA already delivers good identification of the target parts.
Figure H16: More qualitative examples with corresponding annotations recorded in Table H7.

H More Annotation Samples

In addition to the annotation examples shown in Fig. B9, we include five more annotations for the samples in Fig. H16 in Table H7. The listed annotations correspond to the order of the images.

{
  "image_path": "538210619_c4def94c9b_o.jpg",
  "part_list": [{
    "object": "scissors", "part": "handle",
    "affordance": "hold", "action": "hold",
    "instruction": [
      "If I want to use the scissors, which part in the picture should I put my fingers in?",
      "Describe the part of the scissors in the picture where fingers should be placed.",
      "Where is the handle of the scissors in this image?",
      "Where is the handle of the scissors that can be held in this image?",
      "handle of the scissors",
      "handle of the scissors that can be held"
    ]
  }]
}

{
  "image_path": "knife_002845.jpg",
  "part_list": [{
    "object": "knife", "part": "handle",
    "affordance": "hold", "action": "pick up",
    "instruction": [
      "If I want to pick up the knife, which part in the picture can be used?",
      "Which part of the knife is safe to hold when picking it up?",
      "Where is the handle of the knife in this image?",
      "Where is the handle of the knife that can be held in this image?",
      "handle of the knife",
      "handle of the knife that can be held"
    ]
  }]
}

{
  "image_path": "2329134125_8a71be7470_o.jpg",
  "part_list": [{
    "object": "kettle", "part": "handle",
    "affordance": "hold", "action": "hold",
    "instruction": [
      "Which part in the picture can be utilized to hold the kettle?",
      "In the image, identify the part of the kettle that's meant to be held.",
      "Where is the handle of the kettle in this image?",
      "Where is the handle of the kettle that can be held in this image?",
      "handle of the kettle",
      "handle of the kettle that can be held"
    ]
  }]
}

{
  "image_path": "bottle_002805.jpg",
  "part_list": [{
    "object": "bottle", "part": "body",
    "affordance": "hold", "action": "hold",
    "instruction": [
      "If I want to hold the bottles, which parts in the picture can be utilized?",
      "To hold the bottles, which parts are designed for grip?",
      "Where is the body of the bottle in this image?",
      "Where is the body of the bottle that can be held in this image?",
      "body of the bottle",
      "body of the bottle that can be held"
    ]
  }]
}

{
  "image_path": "knife_000953.jpg",
  "part_list": [{
    "object": "knife", "part": "blade",
    "affordance": "cut", "action": "cut",
    "instruction": [
      "If I want to use the knife to cut the carrots, which part in the picture should be used?",
      "Identify the part of the knife ideal for slicing the carrots.",
      "Where is the blade of the knife in this image?",
      "Where is the blade of the knife that can cut in this image?",
      "blade of the knife",
      "blade of the knife that can cut"
    ]
  }]
}

Table H7: Corresponding annotations for images in Fig. H16.
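Records in this format can be loaded with a few lines of Python; the sketch below is ours (the file name instructpart_annotations.json is hypothetical, and a top-level list of records is assumed from the examples in Table H7):

import json

def load_annotations(path="instructpart_annotations.json"):
    """Yield (image_path, object, part, affordance, action, instruction)
    tuples from InstructPart-style JSON records."""
    with open(path) as f:
        records = json.load(f)  # assumed: a list of dicts like those above
    for rec in records:
        for entry in rec["part_list"]:
            for instr in entry["instruction"]:
                yield (rec["image_path"], entry["object"], entry["part"],
                       entry["affordance"], entry["action"], instr)

for sample in load_annotations():
    print(sample)  # e.g. ('knife_000953.jpg', 'knife', 'blade', 'cut', 'cut', ...)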
I A Case Study on Real-World Grasping Data

Grasping is one vital aspect that our InstructPart benchmark aims to facilitate. Consequently, we evaluate the model trained with our data in a real-world tabletop grasping environment. We use the table setup from ShapeGrasp (Li et al., 2024b), which consists of 38 objects covering 12 general categories and 49 tasks. These categories and tasks are the same as those in LERF-TOGO (Rashid et al., 2023). More details about the dataset are included in the supplementary material. Our trained PISA model is evaluated on the zero-shot task-oriented grasping task, as described in (Li et al., 2024b; Rashid et al., 2023). We compare the successful part selection rate, defining a successful part selection as our output segmentation mask accurately aligning with the target part. As shown in Tab. I8, PISA's zero-shot part identification ability is comparable to that of state-of-the-art (SOTA) methods. Additionally, due to PISA's end-to-end advantage, its execution time significantly outperforms the others.

                     PISA   ShapeGrasp   LERF-TOGO
Part Selection (%)    80        86          82*
Time (s)               2        25         120

Table I8: Comparison of part selection accuracy and execution time. * indicates that LERF-TOGO uses the same categories of objects, but not identical ones.

In Tab. I9, we list all the tasks (Li et al., 2024b) evaluated in our case study in the discussion section. In Fig. I17, we showcase some results of our PISA model predicting in a zero-shot manner. It is evident that PISA, trained with our proposed dataset, demonstrates good generalization ability, successfully segmenting unseen parts like plant stems. It is worth discussing that while the quantitative results shown in the discussion are not superior to ShapeGrasp (Li et al., 2024b) and LERF-TOGO (Rashid et al., 2023), the entire real-world dataset contains only 49 tasks. Although ShapeGrasp achieves 6% higher accuracy than ours, this difference equates to just 3 images. Moreover, our method is significantly faster than the others, and this novel end-to-end prediction approach can be beneficial for real-time robot grasping. Our method can easily be integrated with existing grasping baselines such as GraspNet (Fang et al., 2020). With our dataset, researchers can focus more on applying segmentation methods to grasping, creating a good bridge between 2D perception and 3D grasping.
J Distinctions between InstructPart and LISA

While both works fall under the category of reasoning-based segmentation, the goal, task definition, benchmark scale, and downstream applicability are fundamentally different:

• Benchmarking Goals and Granularity: LISA focuses primarily on object-level scene understanding, where the objective is to semantically interpret an image and segment an object based on abstract instructions (e.g., "segment the food with the most protein" or "segment the food that is not spicy"). In contrast, our work introduces task-oriented part-level segmentation, aiming to understand the affordance and functionality of object components. This finer-grained understanding is essential for practical applications that require actionable perception and reasoning grounded in object structure.

• Benchmark Scale and Usefulness: While LISA introduces an important first step toward reasoning-based segmentation, its benchmark contains 1,218 samples, which may be insufficient for a comprehensive evaluation of vision-language models. In contrast, our InstructPart benchmark includes 2,400 images together with 9,600 diverse task instructions, making it more comprehensive and diverse. This enables a more thorough evaluation and offers greater potential for model training and fine-tuning.

• Novelty and Research Opportunity: We consider the reasoning-based segmentation task proposed by LISA a combination of VQA and semantic segmentation, two tasks that have been well explored. However, task-oriented part understanding remains significantly under-explored, as discussed in Section 2.1 of our paper. Our work goes further by introducing the use of instructions and affordances to refer to different object parts. This creates a more challenging and novel setting, which we believe will encourage research into part-level reasoning and grounding.

Table I9: Complete list of tasks for each scene.

Scene    | Tasks
Kitchen  | 'pick up the grey spoon', 'pick up the teapot', 'scrub the dishes', 'dust the books'
Flowers  | 'give the daisy', 'give the rose'
Mugs     | 'pick up the mug', 'pick up the blue mug', 'pick up the grey mug', 'pick up the white mug', 'pick up the teacup'
Tools    | 'pick up the retractable tape measure', 'pick up the screwdriver', 'cut the wire', 'pick up the soldering iron', 'swing the hammer'
Knives   | 'cut the bread', 'cut the steak', 'cut the box'
Martinis | 'pick up the grey martini glass', 'pick up the green martini glass'
Fragile  | 'hang the camera', 'wear the blue sunglasses', 'wear the black sunglasses', 'pick up the lightbulb'
Cords    | 'pick up the power strip', 'plug in the power strip', 'pick up the usb dongle', 'push in the connector'
Messy    | 'eat the ice cream', 'eat the lollipop', 'eat the red lollipop'
Pasta    | 'pick up the wine bottle', 'uncork the wine', 'pick up the corkscrew', 'pick up the saucepan', 'open the saucepan'
Cleaning | 'pick up the clorox box', 'close the clorox box', 'grab a wet towel', 'pick up the tissue box', 'dispense a tissue'
Bottles  | 'pick up the meyers cleaning spray', 'open the meyers cleaning spray', 'spray the meyers cleaning spray', 'pick up the purple cleaning spray', 'open the purple cleaning spray', 'spray the purple cleaning spray'

K Analysis on Sub-optimal Performance of Existing VLMs on InstructPart

The sub-optimal performance of state-of-the-art VLMs on our benchmark can be attributed to both the lack of task-relevant training data and limitations in current model architectures for part-level understanding and affordance reasoning.

• Training Data Limitations: Most existing VLMs are not trained with supervision at the part level, nor are they exposed to task-oriented instructions that require grounding specific object components. This leads to a gap in their ability to localize and reason about fine-grained object parts based on functional cues, capabilities that our task explicitly targets. We present two findings to support the claim that current VLMs lack suitable training data:
  - In Section 4.5 (Figures 3-5), we show that many VLMs tend to either segment the entire object or miss the correct regions entirely, indicating difficulty in fine-grained localization.
  - As shown in Appendix D, even simple fine-tuning on our dataset leads to a significant performance boost, suggesting that the models possess latent capability but lack the appropriate supervision signal.
• Architectural Limitations: Most VLMs use a CLIP-based image encoder, which is optimized for object-level semantic understanding and lacks explicit mechanisms for part-level grounding or affordance reasoning. To address this, we incorporate a DINOv2 vision encoder in our baseline, which better captures part-level correspondences across diverse objects (e.g., the handle of a knife vs. the handle of scissors). As a result, our baseline outperforms state-of-the-art VLMs on the proposed task.

Figure I17: PISA zero-shot prediction on novel objects (blue sunglasses - earhooks, box cutter - handle, daisy - plant stem, green martini glass - stem, grey mug - handle, grey spoon - handle, ice cream - cone, knife - handle, lollipop - stick, mug - handle, saucepan - handle, teacup - handle). Green masks represent the prediction, and the label below each image highlights the object-part name.

L Justification for Including ORPS

Referring Expression Segmentation (RES) generally aims to generate segmentation masks from natural language expressions, and our ORPS task can indeed be viewed as a specialized form of RES. However, there are several important distinctions:

• Existing RES tasks primarily focus on using expressions to identify entire entities (e.g., "the woman in the red shirt"). In contrast, ORPS focuses on identifying specific object parts, using a consistent and controlled format: "[part name] of [object name]".

• ORPS can be considered the "optimal condition" of TRPS: it strips away complex instruction reasoning and isolates the challenge of part-level visual grounding. This enables us to more precisely identify a model's bottleneck: is it struggling with language reasoning or with part segmentation?

• As shown in Table 2, comparing the performance gap between ORPS and TRPS:
  - Reasoning segmentation (RS) methods show a smaller drop in performance from ORPS to TRPS, indicating stronger generalization to complex instructions.
  - In contrast, open-vocabulary segmentation (OVS) and referring expression segmentation (RES) baselines show a larger drop, highlighting limited ability to handle task-oriented reasoning.

• This analysis demonstrates that ORPS complements TRPS by offering a controlled setting for part-level grounding; jointly, they allow us to better characterize the strengths and limitations of different segmentation approaches, especially when comparing models with and without integrated language reasoning.
Thinking Fast and Right: Balancing Accuracy and Reasoning Length with Adaptive Rewards

Jinyan Su, Claire Cardie
Cornell University
{js3673, ctc9}@cornell.edu

Abstract

Large language models (LLMs) have demonstrated strong abilities in mathematical reasoning, often enhanced through reinforcement learning (RL). However, RL-trained models frequently produce unnecessarily long reasoning traces, even for simple queries, leading to increased inference costs and latency. While recent approaches attempt to control verbosity by adding length penalties to the reward function, these methods rely on fixed penalty terms that are hard to tune and cannot adapt as the model's reasoning capability evolves, limiting their effectiveness. In this work, we propose an adaptive reward-shaping method that enables LLMs to "think fast and right", producing concise outputs without sacrificing correctness. Our method dynamically adjusts the reward trade-off between accuracy and response length based on model performance: when accuracy is high, the length penalty increases to encourage faster length reduction; when accuracy drops, the penalty is relaxed to preserve correctness. This adaptive reward accelerates early-stage length reduction while avoiding over-compression in later stages. Experiments across multiple datasets show that our approach consistently and dramatically reduces reasoning length while largely maintaining accuracy, offering a new direction for cost-efficient adaptive reasoning in large-scale language models. Our code can be found at https://github.com/JinyanSu1/A-DLP.

1 Introduction

Recent advances in large language models (LLMs) have demonstrated impressive reasoning capabilities across domains such as mathematics [4, 12] and programming [5], often enhanced through reinforcement learning (RL) [9, 22]. While the long-form reasoning behavior encouraged by RL has substantially improved model performance across benchmarks, it also incurs significant computational overhead and latency. Moreover, LLMs frequently "overthink" even simple questions [3, 26, 27], producing unnecessarily verbose output with thousands of tokens even for questions such as "what is 2 plus 3?".

To mitigate this, many research efforts have explored ways to encourage more efficient reasoning [27, 29, 6]. A particularly practical approach is to incorporate length penalties directly into the reward function during RL training to penalize excessively long outputs. Since many reasoning-capable LLMs, such as DeepSeek-R1 [9], OpenAI's o1 [22], and QwQ-32B-Preview [28], already include RL in their training pipelines, modifying the reward function becomes a lightweight and low-overhead intervention. It integrates seamlessly into existing workflows without requiring architectural changes or additional training infrastructure, making it more practical and scalable than alternative approaches that operate outside the RL framework.

Several recent works have investigated reward modifications to reduce reasoning length. For example, L1 [1] modifies the reward to penalize responses that exceed a user-provided token budget; O1-Pruner [17] formulates a constrained optimization problem where accuracy is enforced as a hard constraint while minimizing response length; and [35] rewards outputs that are close in length to the shortest correct response.
Despite differences in formulation, these methods all depend on a fixed length-penalty parameter to manage the trade-off between accuracy and generation efficiency. However, it is difficult to set a universally effective value: if the penalty is too large, the model may collapse into overly short outputs that sacrifice accuracy;
if it is too small, the model may retain unnecessarily long outputs or take too long to converge to a reasonable response length.

To address this limitation, we propose a simple and intuitive approach that adaptively adjusts the reward trade-off between response length and accuracy based on model performance. Specifically, during training, we continuously monitor the model's accuracy relative to a reference accuracy threshold and dynamically adjust the length penalty: when accuracy exceeds the threshold, we increase the length penalty to encourage faster reduction in response length; when accuracy falls below the threshold, we relax the penalty to preserve correctness. Crucially, our method allows the penalty to evolve and adapt with the model's capabilities. In the early stages, when the model's outputs are excessively verbose, a stronger penalty should be used to eliminate redundancy and accelerate length compression. As training progresses and the model begins to produce more concise outputs, continuing to apply a strong penalty can lead to overly aggressive attempts to shorten responses as well as degraded performance. By allowing the penalty to decrease when needed, the model can shift its focus toward preserving correctness rather than overly compressing the response.

To summarize, our contributions are:

• We propose an Adaptive Direct Length Penalty (A-DLP) reward that evolves throughout training to reduce response length while maintaining acceptable accuracy. By adaptively shaping the reward signal based on model performance, our method enables fast compression in the early stages and prevents over-compression in later stages.

• Through extensive experiments, we demonstrate the effectiveness and advantages of our approach. Our results consistently show improved generation efficiency without compromising performance, and our analyses offer new insights into how adaptive reward shaping can better balance the trade-off between reasoning length and accuracy.

Our adaptive framework opens a promising direction for future research in building more cost-effective and efficient reasoning language models.

2 Related Work

LLM Reasoning. LLMs have demonstrated impressive performance on a wide range of reasoning tasks [24, 23, 4, 12], particularly when using Chain-of-Thought (CoT) prompting [31]. To further improve reasoning performance, a number of test-time techniques have been proposed that increase computational cost in exchange for better accuracy. For example, self-consistency [30] samples multiple CoT traces and selects the most frequent final answer, while Best-of-N sampling [8] selects the best response from several samples based on a scoring function. Process reward models [16] evaluate CoT progress step by step, while self-correction methods [15] refine initially generated responses through iterative feedback. Although these test-time methods improve performance, modern large-scale reasoning models such as DeepSeek-R1 [9], OpenAI o1 [22], and QwQ-32B-Preview [28] are trained with reinforcement learning (RL), leading to the emergence of structured behaviors such as verification, backtracking, and branching in long CoT traces [32].

Long-to-Short Reasoning. While deliberate reasoning is useful for complex tasks, it often results in inefficient or unnecessarily verbose outputs. This phenomenon, referred to as the "overthinking problem" [3, 26, 34], arises when increased reasoning length fails to yield better answers.
To address this, a growing body of work explores methods for shortening CoT reasoning [27, 29, 6]. For example, C3oT [14] trains a compressor to convert verbose CoT traces into concise ones.
Others focus on prompt-based methods: [21, 10] attempt to explicitly constrain token length through prompt engineering. TokenSkip [33] enables LLMs to selectively skip over less informative tokens. Meanwhile, [20] uses Best-of-N sampling to construct concise data for fine-tuning. For comprehensive overviews, we refer readers to recent surveys on efficient reasoning [27, 29, 6].

Reasoning Length Reduction via RL. Several recent works leverage RL to directly reduce reasoning length by modifying the reward function. [13] uses the correctness of clipped responses as the reward. L1 [1] adds a penalty term to enforce generation length budgets specified in the prompt. O1-Pruner [17] and [2] penalize the reward with normalized length to ensure proportional compression. [35] propose a reward that penalizes deviations from the shortest correct responses among the samples. However, all of these methods rely on a fixed penalty parameter to control the trade-off between accuracy and length, which must be manually tuned and does not adapt to the model's training dynamics.

3 Method

When fine-tuning large language models (LLMs) using reinforcement learning (RL), the objective is to optimize the policy π_θ such that the generated response y ∼ π_θ(· | x) is correct. Let y* denote the reference answer associated with input x. A basic reward function focused on correctness can be written as:

R_correct(x, y) = I{y = y*}    (1)

where I is the indicator function. While this encourages accurate generation, it does not consider the efficiency or length of the response, an important factor in practical deployment.

3.1 Static Direct Length Penalty (S-DLP)

To encourage concise reasoning, it is natural to introduce a penalty on the reward for response length. Let len(y) denote the number of tokens in the generated output. A straightforward formulation, which we refer to as the Static Direct Length Penalty (S-DLP), is defined by the following reward:

R_λ(x, y) = I{y = y*} - λ · len(y)    (2)

where λ ≥ 0 is a hyperparameter that controls the trade-off between correctness and brevity. A large λ encourages shorter responses and accelerates length reduction, but may cause the model to collapse into overly short, incorrect outputs. In contrast, a small λ prioritizes correctness, but may result in slow or ineffective reduction in generation length. Selecting an appropriate value for λ is challenging, as it depends both on the degree of redundancy in the model's current outputs and on the desired trade-off between accuracy and efficiency.

3.2 Adaptive Direct Length Penalty (A-DLP)

To avoid static tuning, we propose an adaptive variant of the above reward, referred to as the Adaptive Direct Length Penalty (A-DLP). In this formulation, the penalty coefficient λ_t is dynamically updated throughout training based on the model's observed performance. In particular, during training, we continuously compare the model's accuracy against a reference accuracy: when the model maintains high correctness, we increase the penalty to encourage faster reduction in generation length; when performance drops, we ease the penalty to prioritize preserving accuracy and avoid overly aggressive compression. Let acc_t be the observed accuracy at training step t, and let acc_ref be the reference accuracy. We update the penalty coefficient as:

λ_{t+1} = max(0, λ_t + η · (acc_t - acc_ref))    (3)

where η > 0 is a learning rate that controls the sensitivity of the penalty to performance changes.
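The update rule is simple to implement; the sketch below is our illustration of the length-penalized reward of Eq. (2) together with the penalty update of Eq. (3) (the time-varying penalty makes the reward become Eq. (4) below), not the released implementation. Default values follow the implementation details reported later in the paper (λ0 = 1e-3, η = 1e-3, acc_ref ≈ 0.62):

class AdaptiveLengthPenalty:
    """Adaptive Direct Length Penalty (A-DLP), sketching Eqs. (2)-(4)."""

    def __init__(self, lam0=1e-3, eta=1e-3, acc_ref=0.62):
        self.lam = lam0          # current penalty coefficient lambda_t
        self.eta = eta           # learning rate for the penalty update
        self.acc_ref = acc_ref   # fixed reference accuracy

    def reward(self, is_correct: bool, num_tokens: int) -> float:
        # R_{lambda_t}(x, y) = I{y = y*} - lambda_t * len(y)
        return float(is_correct) - self.lam * num_tokens

    def update(self, batch_accuracy: float) -> None:
        # lambda_{t+1} = max(0, lambda_t + eta * (acc_t - acc_ref))
        self.lam = max(0.0, self.lam + self.eta * (batch_accuracy - self.acc_ref))

# Per training step: score rollouts with the current penalty, then adapt it.
penalty = AdaptiveLengthPenalty()
rollouts = [(True, 5200), (False, 7800), (True, 4100)]  # (correct?, tokens), toy values
rewards = [penalty.reward(c, n) for c, n in rollouts]
penalty.update(batch_accuracy=sum(c for c, _ in rollouts) / len(rollouts))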
While it is ideal to use a reference accuracy computed from a separate model or evaluation run, doing so would incur additional computational cost, something we aim to avoid. To eliminate this overhead, we instead set

acc_ref = E_{x∼D, y∼π_ref(·|x)}[I{y = y*}],

the expected accuracy over the data distribution under a reference model. This global constant serves as a fixed performance baseline. In practice, it can be estimated by the accuracy of the reference model before any length reduction is applied. This simplification removes the need for additional model inference during training, while still providing a meaningful anchor for adaptive reward adjustment. The resulting adaptive reward function used in A-DLP then becomes:

R_{λ_t}(x, y) = I{y = y*} - λ_t · len(y)    (4)

This formulation allows the reward signal to evolve adaptively throughout training. In the early stages, when the model's responses are highly redundant, reducing generation length has little to no impact on accuracy, and λ_t is large enough to ensure rapid compression of unnecessary tokens. As the model continues to shorten its outputs, further reductions may begin to affect correctness. As a result, λ_t naturally decreases, relaxing the pressure to shorten responses in order to preserve accuracy. Eventually, the generation length stabilizes at a point where further compression would lead to a measurable drop in accuracy, and λ_t converges toward zero. This dynamic adaptation ensures that length reduction proceeds aggressively when safe, and conservatively when needed, without manual tuning.

4 Experimental Setup

Models and Datasets. We train our models on the DeepScaleR-Preview-Dataset [18], a mathematics dataset containing 40K question-answer pairs sourced from AIME, AMC, Omni-Math [7], and STILL [19]. We use AIME2024 as the validation set and evaluate on five datasets: AIME2025, MATH [12], AMC, Olympiad-Bench [11], and Minerva. Our base model is DeepScaleR-1.5B-Preview [18], a reasoning-oriented language model fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B [9]. For both training and evaluation, we restrict the maximum context length to 8192 tokens.

Baselines. We compare our method against the following baselines, selected for their adaptability and availability of released models:

• Base Model: the original DeepScaleR-1.5B-Preview model, without any length penalization.

• L1-Exact [1]: fine-tunes DeepScaleR-1.5B-Preview with RL using a reward function that encourages the generation length to match a user-specified target.

• L1-Max [1]: fine-tunes DeepScaleR-1.5B-Preview using RL with a reward function that penalizes outputs exceeding a specified length budget.

• Static Direct Length Penalty (S-DLP): applies a static penalty on generation length during RL, using Equation 2 as the reward function.

• Adaptive Direct Length Penalty (A-DLP): adaptively updates the length penalty parameter λ_t, and consequently the reward function, based on the model's current performance, following Equations 3 and 4.

Implementation Details. We use the standard RL training pipeline for DeepScaleR-1.5B-Preview, applying Group Relative Policy Optimization (GRPO) [25] for policy updates. All experiments are conducted on two A100 GPUs (80GB each). We use a batch size of 64 and constrain the maximum prompt length to 1024 tokens. The actor learning rate is set to 1×10^-6, with a rollout count of 4.
4 Experimental Setup

Models and Datasets. We train our models on the DeepScaleR-Preview-Dataset [18], a mathematics dataset containing 40K question-answer pairs sourced from AIME, AMC, Omni-Math [7], and STILL [19]. We use AIME2024 as the validation set and evaluate on five datasets: AIME2025, MATH [12], AMC, Olympiad-Bench [11], and Minerva. Our base model is DeepScaleR-1.5B-Preview [18], a reasoning-oriented language model fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B [9]. For both training and evaluation, we restrict the maximum context length to 8192 tokens.

Baselines. We compare our method against the following baselines, selected for their adaptability and availability of released models:

• Base Model: the original DeepScaleR-1.5B-Preview model, without any length penalization.
• L1-Exact [1]: fine-tunes DeepScaleR-1.5B-Preview with RL using a reward function that encourages the generation length to match a user-specified target.
• L1-Max [1]: fine-tunes DeepScaleR-1.5B-Preview using RL with a reward function that penalizes outputs exceeding a specified length budget.
• Static Direct Length Penalty (S-DLP): applies a static penalty on generation length during RL, using Equation 2 as the reward function.
• Adaptive Direct Length Penalty (A-DLP): adaptively updates the length penalty parameter $\lambda_t$, and consequently the reward function, based on the model's current performance, following Equations 3 and 4.

Implementation Details. We use the standard RL training pipeline for DeepScaleR-1.5B-Preview, applying Group Relative Policy Optimization (GRPO) [25] for policy updates. All experiments are conducted on two A100 GPUs (80GB each). We use a batch size of 64 and constrain the maximum prompt length to 1024 tokens. The actor learning rate is set to $1 \times 10^{-6}$, with a rollout count of 4. For the length penalty parameters, we initialize the penalty coefficient to $\lambda_0 = 10^{-3}$ and use a learning rate $\eta = 10^{-3}$ for updating $\lambda_t$. Training proceeds until the generation length stabilizes, and we use the final checkpoint (at step 420) for evaluation. We set the reference accuracy to 0.62, estimated from the training accuracy of the first batch on the base model. We use both accuracy and average token count as our evaluation metrics, computed over 16 sampled completions per question using temperature 0.6 and top-$p$ sampling with $p = 0.95$.
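As a small illustration of this evaluation protocol, here is a hedged sketch of the metric computation; `generate` and `extract_answer` are hypothetical stand-ins for the model's sampling call and the answer parser, neither of which the paper specifies at the code level:

```python
def evaluate(model, problems, n_samples=16):
    """Mean accuracy and mean token count over n sampled completions
    per question (temperature 0.6, top-p 0.95, as described above)."""
    n_correct, n_tokens, n_total = 0, 0, 0
    for question, reference in problems:
        for _ in range(n_samples):
            completion, num_tokens = generate(   # hypothetical sampler
                model, question, temperature=0.6, top_p=0.95)
            n_correct += int(extract_answer(completion) == reference)
            n_tokens += num_tokens
            n_total += 1
    return n_correct / n_total, n_tokens / n_total
```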
5 Results and Analysis

5.1 Main Results

In Figure 1, we compare our A-DLP reward against several baselines. For the S-DLP baseline, generation length decreases monotonically over training steps; to better capture the trade-off between accuracy and token length, we therefore plot the full training trajectory by sampling checkpoints every 20 steps and fitting a curve through these points. Across nearly all datasets, A-DLP consistently lies above and to the left of the S-DLP curve, indicating improved length-efficiency, i.e., higher accuracy with fewer tokens. Meanwhile, compared to the original base model, A-DLP reduces token length by more than 50% while sacrificing less than 0.04 in accuracy. The L1-Max and L1-Exact baselines lie mostly below the S-DLP curve, as their primary objective is to enforce length control around a fixed budget rather than to minimize length for efficiency. Consequently, they tend to prioritize meeting the target length constraint over aggressively reducing generation length when possible.

Figure 1: Performance comparison of A-DLP with baseline methods. For S-DLP, we plot checkpoints sampled every 20 training steps and fit a curve through them. Since both accuracy and generation length change monotonically during training under S-DLP, this trajectory captures the full accuracy–length trade-off. A-DLP consistently achieves better trade-offs, lying above and to the left of the S-DLP curve.

Figure 2 provides a detailed comparison of S-DLP and A-DLP on accuracy and token length throughout training. Initially, both S-DLP and A-DLP exhibit similar rates of length reduction while maintaining accuracy close to that of the base model. As training progresses, both methods experience a slight but acceptable drop in accuracy, accompanied by continued reductions in generation length. However, beyond approximately 100 training steps, the behavior of the two methods diverges. Under S-DLP, further training leads to a sharp decline in both accuracy and token length, indicating model collapse due to excessive penalization. In contrast, A-DLP demonstrates stable convergence: both accuracy and token length gradually level off, even as training continues. Thus, A-DLP demonstrates greater robustness with respect to training termination, as it naturally stabilizes without requiring manual intervention to prevent collapse.

In Figure 3, we present the token length reduction rates for both correct and incorrect responses. Although incorrect responses are initially much longer than correct ones, our algorithm effectively reduces the token lengths of both categories. Notably, the reduction is roughly proportional to their original lengths, such that correct responses remain approximately half the length of incorrect ones both before and after reduction. Furthermore, the reduction rates for both correct and incorrect responses consistently exceed 55% across all datasets, demonstrating the effectiveness of our approach.

5.2 Behavior over Training

Figure 4 illustrates the training dynamics of A-DLP, focusing on four key quantities: the accuracy gap between the current model and the reference threshold, i.e., $\text{acc}_t - \text{acc}_{\text{ref}}$; the length penalty coefficient $\lambda_t$; the validation accuracy on AIME2024; and the average response length (i.e., the number of generated tokens) on the training data. In the early stages of training (before approximately step 50), the model frequently exceeds the reference accuracy, leading to a cumulative increase in the penalty coefficient $\lambda_t$. As training progresses, the model's performance begins to fluctuate around the reference threshold and more often falls below it, causing $\lambda_t$ to gradually decrease. By around step 100, $\lambda_t$ reaches zero. Beyond this point, it remains near zero, exhibiting only minor oscillations due to noise in the gap estimation.

Figure 2: Accuracy and average token length across training steps for A-DLP and S-DLP. The dotted line shows the accuracy and token length of the base model before length reduction. For S-DLP, performance remains stable during the early training phase, but both accuracy and token length drop sharply around step 100, indicating model collapse due to excessive length penalization. In contrast, A-DLP exhibits stable convergence, with both metrics gradually stabilizing—demonstrating its ability to adaptively balance correctness and brevity throughout training.

Figure 3: Token length of correct and incorrect responses before and after applying A-DLP. The reduction rates for both categories consistently exceed 55%.

Correspondingly, the average response length steadily decreases and converges to the range of 1500–2000 tokens—representing more than a 50% reduction from the initial average of 5000 tokens. Meanwhile, the validation accuracy initially drops slightly as the model aggressively shortens its outputs, but stabilizes around 0.3 once training converges. These patterns demonstrate A-DLP's capacity to dynamically modulate its penalty signal in response to performance feedback, ultimately achieving a balance between brevity and correctness.

Figure 4: Training dynamics of A-DLP showing the accuracy gap between the current model and the reference threshold ($\text{acc}_t - \text{acc}_{\text{ref}}$), the length penalty coefficient $\lambda_t$, the validation accuracy on AIME2024, and the average response length (number of tokens) on the training data.

5.3 Parameter Setting

In this section, we investigate the effect of tuning the core parameters—namely the learning rate $\eta$ and the reference accuracy $\text{acc}_{\text{ref}}$—on model performance. These experiments serve to guide practical choices for parameter selection.

Effect of the Learning Rate. Figure 5 illustrates how the penalty coefficient $\lambda_t$ and the response length evolve during training under different learning rates, $\eta \in \{10^{-2}, 10^{-3}, 10^{-4}\}$. When the learning rate is small (e.g., $\eta = 10^{-4}$), we observe a rapid initial reduction in response length due to the large initial penalty $\lambda_0 = 10^{-3}$. At this stage, model accuracy remains close to the reference threshold $\text{acc}_{\text{ref}}$, so $\lambda_t$ does not change significantly and continues to exert strong pressure to shorten outputs. However, as the response length is further reduced, this high penalty becomes detrimental—continued compression begins to harm accuracy. Ideally, $\lambda_t$ should decrease at this point to relax the penalty and preserve correctness. Yet with a small learning rate, the model fails to reduce $\lambda_t$ swiftly enough, and the penalty remains overly aggressive.
This delayed adjustment leads to excessive shortening of outputs, ultimately causing the model to collapse in both accuracy and generation quality.

Conversely, when the learning rate is large (e.g., $\eta = 10^{-2}$), the penalty coefficient $\lambda_t$ becomes highly sensitive to the accuracy gap $\text{acc}_t - \text{acc}_{\text{ref}}$. Since $\text{acc}_t$ is estimated from a small batch (batch size 64) at each step, it is inherently noisy, which amplifies fluctuations in this gap and leads to unstable updates in $\lambda_t$. For instance, starting from $\lambda_0 = 10^{-3}$, an accuracy drop of just 0.1 can reduce $\lambda_t$ to zero in a single step. Consequently, $\lambda_t$ tends to oscillate sharply throughout training rather than evolving smoothly. Although the updates to $\lambda_t$ under a large learning rate are sharp and unstable—frequently spiking and then collapsing back to zero—this volatility is constrained by the non-negativity of $\lambda_t$. In particular, the penalty only activates when $\text{acc}_t > \text{acc}_{\text{ref}}$ and is rapidly diminished as soon as it starts harming accuracy. As a result, $\lambda_t$ often resets to zero or stays there safely, preventing prolonged over-penalization. Despite its instability, setting a relatively large learning rate can still be beneficial for length reduction: the intermittent spikes in $\lambda_t$ exert sufficient pressure to gradually shorten responses over time. At the same time, the rapid decay of $\lambda_t$ when accuracy drops acts as a built-in safeguard, preventing the model from collapsing into overly short, low-quality outputs.

Figure 5: Training dynamics of $\lambda_t$ and response length under different learning rates ($\eta \in \{10^{-2}, 10^{-3}, 10^{-4}\}$). A larger learning rate causes $\lambda_t$ to fluctuate more sharply due to sensitivity to noisy accuracy estimates, resulting in slower length reduction that eventually converges with sufficient training. In contrast, a smaller learning rate leads to smoother updates and faster token reduction in the early training stage, but risks model collapse in later stages, as $\lambda_t$ fails to decrease quickly enough in response to dropping accuracy—causing continued over-penalization and excessive length reduction.
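To make the one-step collapse quoted above concrete, substituting the stated values into Eq. 3 (with $\lambda_0 = 10^{-3}$, $\eta = 10^{-2}$, and a batch accuracy 0.1 below the reference) gives:

$$\lambda_1 = \max\bigl(0,\ 10^{-3} + 10^{-2} \cdot (-0.1)\bigr) = \max\bigl(0,\ 10^{-3} - 10^{-3}\bigr) = 0.$$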
Effect of $\text{acc}_{\text{ref}}$. Figure 6 shows the training dynamics under different reference accuracies $\text{acc}_{\text{ref}}$, selected to be either significantly higher or lower than the actual accuracy of the base model on the training data. Specifically, we use $\text{acc}_{\text{ref}} \in \{0.7, 0.5\}$, which are approximately 0.1 above or below the reference accuracy used in our main experiments, estimated from the first batch of training data in a preliminary run. When the reference accuracy is set too high relative to the true accuracy of the base model (e.g., $\text{acc}_{\text{ref}} = 0.7$), the observed accuracy gap $\text{acc}_t - \text{acc}_{\text{ref}}$ remains consistently negative. As a result, $\lambda_t$ is quickly reduced to zero within the first 10 training steps and stays there throughout. In this setting, although there is some token length reduction in the early steps (while $\lambda_t > 0$), the process plateaus quickly, and the model converges to an insufficiently reduced response length. Conversely, when the reference accuracy is set too low (e.g., $\text{acc}_{\text{ref}} = 0.5$), the initial accuracy gap is consistently positive, causing $\lambda_t$ to increase steadily. This leads to a gradual decline in model accuracy and a corresponding reduction in the gap $\text{acc}_t - \text{acc}_{\text{ref}}$, which approaches zero around step 120, where $\lambda_t$ reaches its peak. Beyond this point, the accuracy gap becomes negative, and $\lambda_t$ begins to decrease. However, similar to the case with a small learning rate, the update to $\lambda_t$ becomes too slow to respond adequately to the accuracy drop. In effect, the learning rate becomes small relative to the magnitude of $\lambda_t$, making it difficult for the penalty to relax quickly enough. As a result, the model continues to over-shorten its outputs, eventually collapsing into overly brief, low-quality responses.

Figure 6: Training dynamics under different reference accuracies ($\text{acc}_{\text{ref}} \in \{0.5, 0.62, 0.7\}$). When the reference accuracy is set too high (e.g., 0.7), $\lambda_t$ quickly drops to zero and remains there, causing only a brief period of length reduction in the early steps and leading to convergence at a relatively high response length. When it is set too low (e.g., 0.5), $\lambda_t$ increases excessively, leading to over-penalization and eventual model collapse due to excessive shortening.

General Guidance on Parameter Setting. From the above experiments, we can draw some general insights for practical parameter selection. When the reference accuracy estimate does not deviate significantly from the base model's true accuracy, the choice of learning rate should be considered in conjunction with the initial penalty strength $\lambda_0$. Specifically, we want the learning rate to be large enough to allow $\lambda_t$ to respond in a timely manner to fluctuations in accuracy, particularly when over-penalization begins to harm performance. A practical rule of thumb emerging from our experiments is as follows: the model should be able to reduce the penalty coefficient $\lambda_t$ to zero within approximately 10 steps when accuracy drops. If the maximum tolerable accuracy deviation is around 0.1, then setting the initial penalty $\lambda_0$ equal to the learning rate $\eta$ provides a reasonable balance (a short derivation appears after Figure 7). As for how large this shared value should be: a larger $\lambda_0$ and $\eta$ pair accelerates length reduction and reduces more aggressively, but at the cost of greater instability in $\lambda_t$ due to noisy accuracy estimates; a smaller pair results in smoother, more stable updates to $\lambda_t$, but leads to slower and more conservative length reduction. Ultimately, the choice reflects a trade-off between reduction speed and training robustness. To illustrate this, we visualize the training dynamics under three representative settings, $(\lambda_0, \eta) \in \{(10^{-2}, 10^{-2}), (10^{-3}, 10^{-3}), (10^{-4}, 10^{-4})\}$, in Figure 7, which align with our practical guidance for parameter selection.

Figure 7: Training dynamics using $\lambda_0 = \eta = 10^{-2}, 10^{-3}, 10^{-4}$, respectively.
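The $\lambda_0 = \eta$ rule of thumb can be read off Eq. 3 under the stated assumptions: if accuracy sits a tolerable deviation $\delta \approx 0.1$ below the reference for $k \approx 10$ consecutive steps, each step shrinks the penalty by $\eta\delta$, so driving it from $\lambda_0$ to zero requires

$$k \cdot \eta \cdot \delta \;\geq\; \lambda_0 \quad\Longleftrightarrow\quad \eta \;\geq\; \frac{\lambda_0}{k\delta} \;=\; \frac{\lambda_0}{10 \times 0.1} \;=\; \lambda_0.$$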
6 Conclusion

In this paper, we presented the Adaptive Direct Length Penalty (A-DLP), a simple yet effective reward-shaping method that dynamically adjusts the trade-off between reasoning accuracy and response length during RL training. Unlike prior approaches that rely on static penalty terms, A-DLP adaptively adjusts the length penalty coefficient based on evolving model performance, enabling aggressive compression when safe and relaxing penalization when length reduction becomes excessive. Through extensive experiments across diverse mathematical benchmarks, we show that A-DLP consistently reduces token length by over 50% while maintaining comparable accuracy. Compared to static baselines such as S-DLP, A-DLP offers better trade-offs and avoids training collapse by stabilizing naturally as performance plateaus. Our method integrates easily into existing RL pipelines, providing a practical tool for reducing inference cost in LLMs without sacrificing quality, and paving the way for building more efficient and cost-effective LLMs.

7 Limitations and Future Work

While our method is designed to be simple, efficient, and broadly applicable, several aspects offer opportunities for future extension. First, our experiments are conducted on a 1.5B-parameter model to ensure reproducibility and manageable compute cost. Although we expect A-DLP to generalize to larger models, especially given its lightweight integration with RL training, evaluating our method on models of different sizes and architectures remains an important next step. Second, we use a fixed learning rate for updating the penalty coefficient $\lambda_t$. While this already yields stable and interpretable behavior, a more adaptive learning rate schedule could further improve the adaptivity of the penalty to dynamic training signals and make our method more robust across diverse settings. Third, A-DLP reduces response lengths in a roughly proportional manner across correct and incorrect responses. While this leads to overall efficiency gains, we plan to improve our method to compress incorrect outputs more aggressively, so as to better preserve the accuracy of correct ones.

References

[1] Pranjal Aggarwal and Sean Welleck. L1: Controlling how long a reasoning model thinks with reinforcement learning. arXiv preprint arXiv:2503.04697, 2025.
[2] Daman Arora and Andrea Zanette. Training language models to reason efficiently. arXiv preprint arXiv:2502.04463, 2025.
[3] Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, et al. Do not think that much for 2+3=? On the overthinking of o1-like LLMs. arXiv preprint arXiv:2412.21187, 2024.
[4] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[5] Codeforces. Competitive programming platform. https://codeforces.com/, 2025.
[6] Sicheng Feng, Gongfan Fang, Xinyin Ma, and Xinchao Wang. Efficient reasoning models: A survey. arXiv preprint arXiv:2504.10903, 2025.
[7] Bofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei Li, Chenghao Ma, Liang Chen, Runxin Xu, et al. Omni-Math: A universal olympiad level mathematic benchmark for large language models. arXiv preprint arXiv:2410.07985, 2024.
[8] Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. In International Conference on Machine Learning, pages 10835–10866. PMLR, 2023.
[9] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[10] Tingxu Han, Zhenting Wang, Chunrong Fang, Shiyu Zhao, Shiqing Ma, and Zhenyu Chen. Token-budget-aware LLM reasoning. arXiv preprint arXiv:2412.18547, 2024.
[11] Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. OlympiadBench: A challenging benchmark for promoting AGI with olympiad-level bilingual multimodal scientific problems. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3828–3850, 2024.
[12] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt.
Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874, 2021.
[13] Bairu Hou, Yang Zhang, Jiabao Ji, Yujian Liu, Kaizhi Qian, Jacob Andreas, and Shiyu Chang. ThinkPrune: Pruning long chain-of-thought of LLMs via reinforcement learning. arXiv preprint arXiv:2504.01296, 2025.
[14] Yu Kang, Xianghui Sun, Liangyu Chen, and Wei Zou. C3oT: Generating shorter chain-of-thought without compromising effectiveness. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 24312–24320, 2025.
[15] Aviral Kumar, Vincent Zhuang, Rishabh Agarwal, Yi Su, John D Co-Reyes, Avi Singh, Kate Baumli, Shariq Iqbal, Colton Bishop, Rebecca Roelofs, et al. Training language models to self-correct via reinforcement learning. arXiv preprint arXiv:2409.12917, 2024.
[16] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2023.
[17] Haotian Luo, Li Shen, Haiying He, Yibo Wang, Shiwei Liu, Wei Li, Naiqiang Tan, Xiaochun Cao, and Dacheng Tao. O1-Pruner: Length-harmonizing fine-tuning for o1-like reasoning pruning. arXiv preprint arXiv:2501.12570, 2025.
[18] Michael Luo, Sijun Tan, Justin Wong, Xiaoxiang Shi, William Y. Tang, Manan Roongta, Colin Cai, Jeffrey Luo, Tianjun Zhang, Li Erran Li, Raluca Ada Popa, and Ion Stoica. DeepScaleR: Surpassing O1-Preview with a 1.5B model by scaling RL. https://pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL, 2025.
[19] Yingqian Min, Zhipeng Chen, Jinhao Jiang, Jie Chen, Jia Deng, Yiwen Hu, Yiru Tang, Jiapeng Wang, Xiaoxue Cheng, Huatong Song, et al. Imitate, explore, and self-improve: A reproduction report on slow-thinking reasoning systems. arXiv preprint arXiv:2412.09413, 2024.
[20] Tergel Munkhbat, Namgyu Ho, Seo Hyun Kim, Yongjin Yang, Yujin Kim, and Se-Young Yun. Self-training elicits concise reasoning in large language models. arXiv preprint arXiv:2502.20122, 2025.
[21] Sania Nayab, Giulio Rossolini, Marco Simoni, Andrea Saracino, Giorgio Buttazzo, Nicolamaria Manes, and Fabrizio Giacomelli. Concise thoughts: Impact of output length on LLM reasoning and cost. arXiv preprint arXiv:2407.19825, 2024.
[22] OpenAI. Learning to reason with LLMs. https://openai.com/index/learning-to-reason-with-llms/, 2024.
[23] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. GPQA: A graduate-level Google-proof Q&A benchmark. In First Conference on Language Modeling, 2024.
[24] Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, et al. Code Llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023.
[25] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.
[26] Jinyan Su, Jennifer Healey, Preslav Nakov, and Claire Cardie. Between underthinking and overthinking: An empirical study of reasoning length and correctness in LLMs. arXiv preprint arXiv:2505.00127, 2025.
[27] Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, Jiayi Yuan, Hongyi Liu, Andrew Wen, Shaochen Zhong, Hanjie Chen, et al. Stop overthinking: A survey on efficient reasoning for large language models. arXiv preprint arXiv:2503.16419, 2025.
[28] Qwen Team. QwQ-32B-Preview. https://qwenlm.github.io/blog/qwq-32b-preview/, 2025.
[29] Rui Wang, Hongru Wang, Boyang Xue, Jianhui Pang, Shudong Liu, Yi Chen, Jiahao Qiu, Derek Fai Wong, Heng Ji, and Kam-Fai Wong. Harnessing the reasoning economy: A survey of efficient reasoning for large language models. arXiv preprint arXiv:2503.24377, 2025.
[30] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
[31] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[32] Han Wu, Yuxuan Yao, Shuqi Liu, Zehua Liu, Xiaojin Fu, Xiongwei Han, Xing Li, Hui-Ling Zhen, Tao Zhong, and Mingxuan Yuan. Unlocking efficient long-to-short LLM reasoning with model merging. arXiv preprint arXiv:2503.20641, 2025.
[33] Heming Xia, Yongqi Li, Chak Tou Leong, Wenjie Wang, and Wenjie Li. TokenSkip: Controllable chain-of-thought compression in LLMs. arXiv preprint arXiv:2502.12067, 2025.
[34] Wenkai Yang, Shuming Ma, Yankai Lin, and Furu Wei. Towards thinking-optimal scaling of test-time compute for LLM reasoning. arXiv preprint arXiv:2502.18080, 2025.
[35] Jingyang Yi and Jiazheng Wang. ShorterBetter: Guiding reasoning models to find optimal inference length for efficient reasoning. arXiv preprint arXiv:2504.21370, 2025.

A Additional Experiments

In Figure 8, we plot A-DLP, L1-Max, and L1-Exact under more configurations and fit a line over these configurations. For both L1-Max and L1-Exact, we use budgets of {512, 1024, 2048, 4096}, with the maximum token length set to twice the budget; for A-DLP, we plot the evaluation results for different checkpoints during training.

Figure 8: Performance comparison of A-DLP with baselines using more configurations. For A-DLP, we plot the evaluation results for different checkpoints during training. For L1-Max and L1-Exact, we plot different target token budgets.

In Figure 9, we compare A-DLP's accuracy and token length trade-off under two parameter settings, $(\lambda_0, \eta) = (10^{-3}, 10^{-2})$ and $(10^{-3}, 10^{-3})$, on the test data over different training steps. Figure 10 compares accuracy and token length separately, with training steps on the x-axis. Figure 11 shows the training behavior of the parameter setting $\lambda_0 = 10^{-3}, \eta = 10^{-2}$.

B Effect of $\lambda_0$

In Figure 15, we show the effect of different initializations $\lambda_0 \in \{10^{-3}, 10^{-4}\}$.

Figure 9: Accuracy and token length trade-off comparison of A-DLP under two parameter settings.
Figure 10: Accuracy and token length comparison of A-DLP under two parameter settings, varying over training steps.
Figure 11: Training behavior using a larger learning rate $\eta = 10^{-2}$.
Figure 12: Training behavior using a smaller learning rate $\eta = 10^{-4}$.
Figure 13: Training behavior using a reference accuracy $\text{acc}_{\text{ref}} = 0.5$, much smaller than the accuracy of the base model.
Figure 14: Training behavior using a reference accuracy $\text{acc}_{\text{ref}} = 0.7$, much larger than the accuracy of the base model.
Figure 15: Training dynamics using different $\lambda_0$ while keeping the learning rate $\eta = 10^{-3}$.
Is It Bad to Work All the Time? Cross-Cultural Evaluation of Social Norm Biases in GPT-4

Zhuozhuo Joy Liu (1,2), Farhan Samir (3), Mehar Bhatia (4,5), Laura K. Nelson (6), Vered Shwartz (1,2)
(1) Department of Computer Science, University of British Columbia; (2) Vector Institute for AI; (3) Department of Linguistics, University of British Columbia; (4) Mila – Quebec AI Institute; (5) School of Computer Science, McGill University; (6) Department of Sociology, University of British Columbia
{zhuozhuo, fsamir, vshwartz}@cs.ubc.ca

Abstract

LLMs have been demonstrated to align with the values of Western or North American cultures. Prior work predominantly showed this effect by leveraging surveys that directly ask – originally people, and now also LLMs – about their values. However, it is hard to believe that LLMs would consistently apply those values in real-world scenarios. To address that, we take a bottom-up approach, asking LLMs to reason about cultural norms in narratives from different cultures. We find that GPT-4 tends to generate norms that, while not necessarily incorrect, are significantly less culture-specific. In addition, while it avoids overtly generating stereotypes, the stereotypical representations of certain cultures are merely hidden rather than suppressed in the model, and such stereotypes can be easily recovered. Addressing these challenges is a crucial step towards developing LLMs that fairly serve their diverse user base.

1 Introduction

LLMs are trained on vast web text. In principle, this data is representative of the diverse population of web users, which should contribute to LLMs serving the diverse population of their users. In practice, the training data predominantly consists of English web text from Western web users (Hershcovich et al., 2022), therefore covering more knowledge about Western cultures. Moreover, learning about the world through the lens of a Western user may entail that knowledge about other cultures is more prone to stereotyping and biases (Said, 1978; Nisbett et al., 2001; Henrich et al., 2010; Bender et al., 2021; Li et al., 2022).

Figure 1: Top-down vs. bottom-up approaches to evaluating cultural alignment of LLMs. The top-down method asks direct survey-style questions about values, while the bottom-up approach asks models to reason about social norms in cultural narratives.

Secondly, while web texts are authored by numerous web users, LLMs are trained on them as a single stream of unattributed text. As a result, they don't represent any specific person but rather an authoritative "voice from nowhere" which is supposedly representative of the diversity of its user population but in practice is more aligned with a "default" user demographic (Cao et al., 2023a; Liu et al., 2023; Arora et al., 2022; Hartmann et al., 2023).

Finally, additional design choices in the development of LLMs, such as curation of (often proprietary) training data and "guardrails" designed to prevent models from generating harmful or stereotypical language, further leak the values and norms of the (typically Western) developers into the models.

Prior work (Cao et al., 2023b; Ramezani and Xu, 2023; Durmus et al., 2024, inter alia) captured LLMs' cultural alignment and biases by leveraging existing surveys such as the Hofstede Culture Survey (HCS; Hofstede, 1984), the World Values Survey (WVS; Haerpfer et al., 2020),
and the Global Attitudes Survey (PEW), finding that LLMs exhibited a strong alignment with North American cultures, and to a lesser extent, with other Western English-speaking countries. However, such surveys that are designed to ask people about their values implicitly assume that people are consistent between their reported values and their real-life behavior. LLMs, on the other hand, don't have a consistent "persona" and are optimized to generate human-like responses.

In this work, rather than asking LLMs questions such as "For an average Chinese, how important is it to do work that is interesting (1-5)?", we embed these cultural aspects into narratives, such as the one presented in Figure 1. Such narratives may capture the more nuanced ways in which cultural conditioning implicitly affects people's everyday decisions and judgments (Selbst et al., 2019). Consequently, evaluating the responses from LLMs to such narratives can help identify inherent biases in LLM-backed decision making.

We adopt a bottom-up experimental design and use existing, human-written narratives from different cultures – specifically, plots from English Wikipedia for movies produced in various countries: China, India, Iran, and the United States. We instruct both annotators from the respective countries as well as GPT-4 to reason about the social norms in the movies, in the form of rules of thumb (RoT; Forbes et al., 2020). The annotators then judge the RoTs for their accuracy, culture-specificity, and stereotypicality.

We find that GPT-4 tends to generate norms that, while not necessarily incorrect, are significantly more generic. While GPT-4-generated norms were considered less stereotypical – likely thanks to its "guardrails" – reversing the question and asking GPT-4 to predict the agreement of people from certain countries with a particular norm resurfaces stereotypes and reveals the superficiality of the guardrails. Our study thus sheds light on the default representational biases of a prominent commercial language model, demonstrating that these models fail to live up to idealized (and probably impossible) egalitarian representations of a global public, and instead recapitulate the usual ever-present East vs. West racial hierarchies (Said, 1978; see https://github.com/liuzz10/movies_culture_nlp).

Content Warning: This work contains examples that potentially implicate stereotypes, associations, and other harms that could be offensive to individuals in certain regions.

2 Background

Liu et al. (2024) define cultures using a taxonomy that includes cultural concepts, knowledge, values, norms and morals, linguistic form, and artifacts. In this paper, we focus on evaluating LLMs' judgments pertaining to social norms (§2.1), how well they align with various cultures, and to what extent these models are reinforcing stereotypes (§2.2).

2.1 Values, Norms, and Morals

With the recent progress in language technologies and their widespread adoption, there is vast interest in equipping these technologies with human-like values and norms. (Liu et al. (2024) define norms and morals as a "set of rules or principles that govern people's behavior and everyday reasoning", distinguishing them from values, which are defined as "beliefs, desirable end states or behaviors ranked by relative importance that can guide evaluations of things"; we largely ignore this distinction in this paper.) Prior efforts in NLP focused on building norm banks for training norm-aware models (Forbes et al., 2020; Ziems et al., 2023), but they predominantly focused on Western norms (Liu et al., 2024).

At the same time, there is growing interest recently in serving users from diverse cultures (Hershcovich et al., 2022).
Various papers showed that LLMs exhibit a strong alignment with the values of North American cultures, and to a lesser extent, with other WEIRD countries, raising concerns about fairness (see, for example, Johnson et al., 2022; Ramezani and Xu, 2023; Havaldar et al., 2023; Arora et al., 2023; Cao et al., 2023b; Santurkar et al., 2023; Tao et al., 2024; Durmus et al., 2024; Wang et al., 2024a; Masoud et al., 2025). Several of these papers experimented with different types of prompts, including mentioning the country name ("cultural prompting") or translating the prompt to the local language. These experiments typically reveal that when prompted in English without mentioning a cultural context, models by default assume a Western or even US culture.

The vast majority of studies in this area leverage existing surveys such as the Hofstede Culture Survey (HCS; Hofstede, 1984), which is centered around power distance, uncertainty avoidance, individualism-collectivism, masculinity-femininity, and short vs. long-term orientation; the World Values Survey (WVS; Haerpfer et al., 2020), which involves questions pertaining to social values, attitudes and stereotypes, well-being, trust, and more; or the Global Attitudes Survey (PEW; https://www.pewresearch.org/), which asks people about their views on current global affairs. These studies present the survey questions to LLMs, directly asking them about their values.

With the caveat of social desirability bias (Grimm, 2010) and other factors which may affect people's responses, we can expect people to be largely consistent between their reported values and their real-life behavior. LLMs, on the other hand, don't have a consistent "persona" and are optimized to generate human-like responses. Thus, rather than asking questions about values and norms directly, Wang et al. (2024b) and Rao et al. (2025) start with prescribed social norms and use LLMs to generate more natural narratives in which these values should be considered. We instead take a bottom-up approach, prompting GPT-4 to reason about social norms in existing narratives from different cultural contexts.

Table 1: Descriptive statistics of token counts and verb counts across movie plots by country.

Token Count
Country       | Min | Max | Mean  | Median
China         | 165 | 296 | 222.2 | 198.5
India         | 172 | 299 | 226.1 | 221.5
Iran          |  56 | 362 | 156.9 | 132.5
United States | 171 | 276 | 217.4 | 224.0

Verb Count
Country       | Min | Max | Mean  | Median
China         |  16 |  40 | 27.80 |  28.5
India         |  18 |  49 | 31.95 |  31.0
Iran          |   5 |  53 | 21.40 |  17.0
United States |  21 |  47 | 30.35 |  30.5

2.2 Stereotypes and Cultural Bias

LLMs learn societal biases from their web-based training data, pertaining to race, gender, religion, profession, and more (Nadeem et al., 2021; Jha et al., 2023). Modern LLMs such as Gemini (Team et al., 2023) and GPT-4 (Achiam et al., 2023) do a better job at avoiding generating harmful or offensive content, thanks
to their instruction tuning and preference tuning steps and other proprietary "guardrails" implemented by their developers. However, these superficial avoidance strategies likely only mask rather than remove the biases in these models. For example, Reuter and Schulze (2023) reveal the superficiality of the "guardrails" by showing that merely including the word "Muslim" in the prompt increased ChatGPT's response refusal rate – likely due to the association of this group with the hate speech it encounters online.

Table 2: Top keywords for each topic extracted using BERTopic from the movie plot dataset, along with our interpretation of the topic theme.

Top Keywords                                | Topic
rescue, captain, aircraft, bomb, ship       | military
film, village, children, doctor, women      | rural life
marriage, father, daughter, wife, Rajesh    | family
love, school, marry, proposes, life         | romance
cow, teacher, village, barn, son            | rural life
friend, crush, high, student, picture       | romance
girlfriend, baby, sister, father, love      | family
government, president, future, mother, son  | politics
police, law, prison, duty, media            | law enforcement
home, husband, family, mother, house        | family

In another line of work, researchers revealed that in some setups, LLMs still express subtle or mild stereotypes towards various population groups, such as describing Arab characters as "poor and struggling" (Naous et al., 2024a) and Black people as "tall and athletic" (Cheng et al., 2023). This is especially concerning given the rise in popularity of using LLMs to generate synthetic users and study participants (Boelaert et al., 2025).

In this work, we contribute to this line of research by showing that when asked to generate cultural norms, GPT-4 avoids generating stereotypes. However, when used to simulate the agreement of people from that country with the stereotype, it predicts they would agree with it.

3 Data

In contrast to prior work that asks models about their values directly in a top-down approach, we take a bottom-up approach, presenting models with narratives from different cultural contexts and prompting them to reason about the social norms that these situations invoke. To that end, we first scraped the plots of movies produced in various countries from English Wikipedia, to serve as culturally-grounded narratives (§3.1). We then prompt annotators from the respective countries (§3.2), as well as GPT-4 (§3.3), to list the social norms invoked in these narratives.

3.1 Culturally-Grounded Narratives

To explore how social norms are reflected through culturally grounded narratives that affect people's everyday decisions and judgments, we focus on movie plots – widely consumed narrative media that often depict rich social behaviors and implicit norms. Such movie plots allow for context-rich interpretation and cultural priming.

Figure 2: Overview of the RoT collection and evaluation process. We first scrape movie plots from English Wikipedia. Each plot is shown to both human annotators and GPT-4. Each human annotator (after the cultural activation task) writes 3–5 RoTs per movie, while GPT-4 is prompted under two settings: default prompting and cultural prompting. This results in a total of 19–25 RoTs per movie across both human and model sources.
Following prior work (Shen et al., 2024a; Huang and Yang, 2023; Qiu et al., 2025), we focus on four geographically diverse countries: United States, China, India, and Iran. China and India were selected due to their large populations and globally recognized cultural distinctiveness; Iran represents a smaller culture group with unique traditions and perspectives; and the US represents a Western country overrepresented in English web text. We scraped the movies from English Wikipedia, retaining 20 movie plots for each country and ensuring a moderate length to facilitate smooth annotation. (We chose English Wikipedia as it offers the most comprehensive and consistent coverage of international films in a single language, facilitating downstream analysis without requiring multilingual NLP tools. We randomly sampled 20 movies from those that fall between the 40th and 60th percentiles in terms of length for each country.) Table 1 summarizes the dataset statistics.

To evaluate the diversity of social and cultural themes in our dataset, we applied topic modeling using BERTopic (Grootendorst, 2022). Table 2 presents representative keywords for each of the 10 topics identified by the model, along with our interpretation of the topic theme. We observe a wide range of themes, including family dynamics, romantic relationships, rural life, political events, and law enforcement. This thematic variety ensures that the cultural norms derived from the plots reflect a rich and heterogeneous set of lived experiences.
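For readers unfamiliar with BERTopic, a minimal sketch of this kind of analysis is shown below. The exact configuration the authors used (e.g., how the model was reduced to 10 topics) is not reported, so the settings here are assumptions, and `load_movie_plots` is a hypothetical loader:

```python
from bertopic import BERTopic

# `plots` is the list of scraped plot strings (20 per country, 80 total).
plots: list[str] = load_movie_plots()  # hypothetical loader

topic_model = BERTopic(nr_topics=10)   # assumed: reduce to the 10 reported topics
topics, _ = topic_model.fit_transform(plots)

# Top keywords per topic, as summarized in Table 2 (-1 is the outlier bucket).
for topic_id in sorted(set(topics)):
    if topic_id != -1:
        top_words = [word for word, _ in topic_model.get_topic(topic_id)[:5]]
        print(topic_id, top_words)
```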
3.2 Human-written RoTs

Social Norms Format. To investigate culturally grounded social norms, we follow Forbes et al. (2020) and describe social norms in the Rules of Thumb (RoTs) format. RoTs are short declarative statements describing appropriate or expected behavior. RoTs typically conform to the form "It is [judgment] [action]", where judgment is an adjective (e.g., "immoral" in Fig. 2) and action is a clause (e.g., "to carry on an affair while being married").

Annotators. Figure 2 illustrates the RoT collection process, which we detail below. We recruited annotators from the respective countries through the CloudConnect platform by CloudResearch (https://www.cloudresearch.com/), as well as through word of mouth. Annotators were compensated $20–25 USD per hour. We collected annotations from 88 annotators across four cultural groups. Among those who reported, annotators ranged in age from 18 to 62 years (M = 34.1, SD = 9.5). The gender distribution was balanced, and the majority held a bachelor's degree. Detailed demographic breakdowns are provided in Appendix A.1.

Cultural Priming. Following Bhatia et al. (2024), to ensure their cultural affinity, we recruited annotators who have lived in the respective country for at least 5 years in the past 15 years. With that said, by design, CloudConnect annotators reside in English-speaking countries, making them bicultural. Thus, to activate the cultural identity associated with the study (other than their current country of residence), we applied cultural priming, a technique widely validated in cultural psychological research (Hong et al., 2000; Oyserman and Lee, 2008; Liu et al., 2015). Specifically, annotators go through a small cultural activation task before their annotation task, in which they are shown five images pertaining to their culture, such as cultural icons, country flags, historical sites, and festivals, and are tasked with answering questions about the images to make sure they perceived and reflected on the priming material (see Appendix A.2).

Table 3: Instructions for the RoT writing task, adapted from Forbes et al. (2020).

<Country>-culture-driven: RoTs should align with established norms and practices in <Country>.
Judgment + Action: Each RoT is a single sentence with a straightforward structure: it is [the judgment] of [an action].
Verb-centric: Anchor each RoT to a specific verb from the story.
Specificity: Avoid overly generic statements.

RoT Writing Task. After completing the cultural activation task, annotators were asked to read a movie plot from their culture and provide 3–5 RoTs that are invoked by the narrative and that they perceive would be accepted within their culture. Each movie was annotated by three annotators. To help them come up with RoTs, we highlighted all the verb phrases in the plot as potential action terms, and prefilled a dropdown box with the 625 judgment adjectives from Forbes et al. (2020). See Table 3 for the annotation instructions and Appendix A.2 for the interface. Overall, we collected 396–441 human-written RoTs per culture.

3.3 GPT-generated RoTs

We use GPT-4o (OpenAI, 2024) with few-shot learning to generate 5 RoTs for a given movie plot. We prompted the model twice for each movie, in the following setups:

Default Prompting: We ask the model to generate RoTs without referencing any cultural background (see Appendix B.1 for the prompt). This setup allows us to learn about the model's "default" cultural values.

Cultural Prompting: Mirroring the human annotation setup, we added "As someone with a <Country> cultural background..." to the default prompt. This framing encourages the model to generate culturally-aligned responses, simulating the perspective of a person from the specified country.

We acknowledge that some content in the movie plot – such as mention of cultural traditions, concepts, or names – may leak information about the culture to the model in the default prompting setup, making this setup less than 100% culture-agnostic. However, this setup provides less direct information about the target culture than the cultural prompting setup.
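To make the two setups concrete, here is a hedged sketch of how the prompt variants could be assembled. The paper's full instruction and few-shot examples live in its Appendix B.1 and are not reproduced here, so the template below is our illustration rather than the authors' verbatim prompt:

```python
def build_rot_prompt(plot: str, country: str | None = None) -> str:
    """Assemble a default or culturally-prompted RoT request."""
    # Hypothetical instruction wording; the actual prompt (with
    # few-shot examples) is given in the paper's Appendix B.1.
    instruction = ("Read the movie plot below and write 5 rules of thumb "
                   "(RoTs) about the social norms it invokes.")
    if country is not None:
        # Cultural prompting: prepend the framing quoted in the paper.
        instruction = (f"As someone with a {country} cultural background... "
                       + instruction)
    return f"{instruction}\n\nPlot: {plot}"

# build_rot_prompt(plot)           -> default, culture-agnostic request
# build_rot_prompt(plot, "India")  -> simulates an Indian perspective
```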
4 Results

We address three core research questions: (1) Which cultures does GPT-4 know about? We compare the accuracy and culture-specificity of GPT-4-generated norms to human-written ones (§4.1). (2) Which cultures is GPT-4 aligned with? We measure which cultures align best with its default, culturally unmarked judgments (§4.2). (3) Does GPT-4 reinforce stereotypes? We measure the stereotypicality of GPT-4-generated norms and judgments (§4.3).

4.1 Which Cultures does GPT-4 Know about?

Human Ratings of RoTs. To assess the correctness and cultural alignment of the GPT-generated RoTs compared to the human-written ones, we recruited different sets of annotators from the target countries in a similar process to the annotation task in §3.2. Annotators similarly went through a cultural priming step before they proceeded to the main task. Each RoT was judged by 5 annotators – in the context of the plot from which it emerged – with respect to the following criteria (see Appendix A.2 for the annotation task):

1. Accuracy: On a Likert scale from 1 to 5, to what extent the RoT accurately represents a social norm.
2. Culture-specificity: On a Likert scale from 1 to 5, to what extent the RoT reflects norms unique to the target culture, vs. a more generic or nearly-universally accepted norm.
3. Stereotypicality: Whether the RoT reinforces stereotypes about the target culture.

We collected annotations from 56 annotators across countries. Among those who reported, annotators ranged in age from 18 to 66 years (M = 33.3, SD = 10.6). The gender distribution was relatively balanced, and the majority held a bachelor's degree. Detailed demographic breakdowns are provided in Appendix A.1.

Figure 3: Average (A) accuracy, (B) culture-specificity, and (C) stereotypicality scores for each country and condition combination (GPT-4, GPT-4+country, Human).

GPT-4-generated RoTs are – by and large – as accurate as human-written RoTs. Figure 3(A) presents the mean accuracy across RoTs for each combination of country and condition (human-written, GPT-4 generated with default prompting, and GPT-4 with cultural prompting). Overall, GPT-4-generated RoTs are rated as fairly accurate (≥3.8) across countries. The accuracy is identical to that of the human-written RoTs for India and Iran, and slightly higher (+0.1) than human-written RoTs for the US, but the difference is not statistically significant (β = −0.31, p = .003). For China, however, a small gap of +0.2 points favoring human-written RoTs was found to be statistically significant (β = 0.25, p = .001; OLS regression: F(11, 1645) = 30.69, p < .001, R² = 0.17). There were no statistically significant differences between the two prompting strategies, indicating no clear advantage from cultural prompting, as was previously shown (Cao et al., 2023a). It's possible that the movie plot already provides implicit cultural cues, making it unnecessary to explicitly include the country name in the prompt.

GPT-4-generated RoTs are less culture-specific and more generic than human-written RoTs. Figure 3(B) shows the average specificity ratings for each combination of country and condition. Overall, RoTs were ranked as moderately culture-specific (1.7–2.8), suggesting a good number of generic or supposedly-universal RoTs across conditions. With that said, across countries, human-written RoTs were rated as more culturally-specific than GPT-4-generated RoTs (β = 0.17, p = .037) – suggesting that GPT-4's accuracy could in part be attributed to its tendency to generate generic norms that people across cultures can agree with. Again, cultural prompting showed no significant advantage over default prompting.

4.2 Which Cultures is GPT-4 Aligned with?

To evaluate which cultural perspective GPT-4 aligns with most closely, we reverse the roles and ask it to rate the accuracy of RoTs as a person from country X. The idea is that if GPT-4 shares implicit assumptions with a given culture, its default ratings should be closely aligned with those generated under that culture's perspective.

Figure 4: Average JSD between GPT-4's default predictions and culture-specific ratings across four countries. The model aligns most closely with United States norms and deviates most from Chinese norms.
We randomly sampled 20 RoTs from each country and prompted GPT-4 to rate the accuracy of each RoT on a Likert scale of 1–5. We prompt the model in different settings. In the default prompting setting, we ask the model to rate the RoT without referencing any cultural background, testing its default, culture-agnostic stance. In the cultural prompting setting, we instruct the model to rate the RoT "As someone with a [country X] cultural background...", where X is each one of the countries in our experiments. For each setting, we estimate the distribution over ratings by prompting the model to generate 30 independent responses per RoT, using a temperature of 0.8 to introduce variability. See Appendix B.2 for the prompts.

Following Durmus et al. (2024), we use Jensen-Shannon Divergence (JSD) to measure the distance between distributions. Specifically, we are interested in the distance between the distributions obtained from the default prompt and the country-specific distributions. We average the JSD values across the 20 RoTs to obtain an overall measure of cultural alignment for each culture.
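As a concrete illustration of this measurement, here is a hedged sketch of the rating-distribution and JSD computation. The sampled ratings themselves come from the GPT-4 API call described above (30 samples at temperature 0.8), which we leave abstract:

```python
from collections import Counter

from scipy.spatial.distance import jensenshannon

def rating_distribution(ratings: list[int]) -> list[float]:
    """Turn 30 sampled Likert ratings (1-5) into a probability vector."""
    counts = Counter(ratings)
    return [counts.get(r, 0) / len(ratings) for r in range(1, 6)]

def cultural_alignment(default_ratings, country_ratings) -> float:
    """Average JSD between default and country-specific rating
    distributions over a set of RoTs (lower = closer alignment)."""
    divergences = []
    for d, c in zip(default_ratings, country_ratings):
        p, q = rating_distribution(d), rating_distribution(c)
        # scipy returns the JS *distance* (sqrt of the divergence);
        # squaring recovers the divergence proper.
        divergences.append(jensenshannon(p, q, base=2) ** 2)
    return sum(divergences) / len(divergences)
```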
GPT-4 is mostly aligned with the US. As shown in Figure 4, the average divergence is lowest for the United States (0.12), indicating that GPT-4's default ratings are most similar to those given when prompted from the US perspective. Divergence increases for Iran (0.20) and India (0.24), and is highest for China (0.33), suggesting that GPT-4's implicit normative stance deviates most from Chinese cultural framing. These results are consistent with prior findings that LLMs tend to default to dominant or Western cultural perspectives (Durmus et al., 2024; Naous et al., 2024b; Saha et al., 2025). The poorer performance on Iranian social norms compared to the US corroborates prior findings (Shen et al., 2024b; Saffari et al., 2025).

4.3 Does GPT-4 Reinforce Stereotypes?

Human-written RoTs are more stereotypical than GPT-4's. Figure 3(C) shows the average stereotypicality ratings from annotators across countries and conditions. While the majority of RoTs were perceived to be non-stereotypical, human-written RoTs were significantly more stereotypical than GPT-4-generated ones across countries (β = 0.11, p < .001). This seemingly surprising finding could be explained by the following. First, one reason that the human-written RoTs are not completely stereotype-avoidant may be that people are less careful to avoid stereotyping their own group, because there is more tolerance for stereotypes coming from in-group members (Bourhis et al., 1977; Thai et al., 2019). Another reason could be that GPT-4 lacks culture-specific knowledge, including knowledge of stereotypes (Zhou et al., 2025), relative to the crowdworkers. Indeed, the Pearson correlation between culture-specificity and stereotypicality across all samples yields a moderate positive correlation (r = 0.462, p < .001). Finally, GPT-4's stereotype avoidance could be attributed to its preference tuning and other safety mechanisms that prevent it from generating harmful, biased, or offensive responses. Indeed, previous work showed that LLMs are less likely to be critical of social groups (Boelaert et al., 2025; Zhang et al., 2025).

GPT-4's biases are hidden, not removed. GPT-4 appears to be stereotype-avoidant on the surface. However, given the technical difficulty of debiasing, there are reasons to believe that the training process that led to stereotype-avoidance is merely hiding the bias rather than removing it – akin to other ML models (Gonen and Goldberg, 2019; Kong et al., 2024). Modern language models are preference-tuned with the aim of avoiding generating text that is harmful towards minority groups (consider, for example, Hartvigsen et al., 2022). However, preference-tuning on text-generation datasets may fail to mitigate other forms of stereotypical judgments that the model can make, through classifications or ordinal predictions (Hofmann et al., 2024). That is, by shifting to a constrained ordinal setting, we hypothesize we can surface stereotypical associations between cultural groups and social norms that are prevalent in pretraining data (Caliskan et al., 2017).

Specifically, to assess whether stereotypical associations can be recovered via a classificatory prompt, we start by identifying RoTs that were deemed as highly stereotypical by the human annotators. Then, we select the top quartile of RoTs with the highest stereotypicality scores within each cultural group. Additionally, we manually include a small number of RoTs whose negation reflects culturally salient stereotypes (e.g., "It is bad to work all of the time", China). This results in a set of 338 highly stereotypical norms (see Appendix C for examples). We follow the same approach as in §4.2 to obtain the distribution of accuracy scores for each country as well as an unmarked (default) country.

Figure 5 shows example RoTs where GPT-4's ratings appear to reflect cultural stereotypes. The first 5 distributions reflect the distribution of responses from GPT-4, and the last one (human annotators) shows the distribution of votes from annotators of the target culture. On the top left, GPT-4 predicted that most Iranians would strongly agree that "It is not encouraged to live alone as a young bachelor", whereas human judgments about the accuracy of this RoT varied considerably. The predictions for China and India are similar, but in contrast, GPT-4 predicted that people from the US would strongly disagree with this RoT. On the top right, GPT-4 predicted that people from India would strongly agree with the RoT "It is commendable to marry within your own religion" – accurately reflecting the human annotators from India, but also perpetuating a norm that they judged as highly stereotypical. As harmless as this norm may seem, LLMs learning from web text might fail to account for changes in cultural norms over time (Zhou et al., 2025).

Figure 5: Comparison of GPT-4 responses across four culturally grounded RoTs: "It is not encouraged to live alone as a young bachelor" (Iran), "It is commendable to marry within your own religion" (India), "It is bad to work all of the time" (China), and "It is unethical to tell a lie to get benefits for yourself" (China). Each subplot shows probability distributions of ratings (1–5) across five prompting conditions. Human annotations were provided by individuals from the cultural group associated with each RoT.
In particular, while interfaith marriage in India is still uncommon, there is some level of acceptance towards it (Verma and Sukhramani, 2018).

A similar phenomenon can be demonstrated with anti-stereotypes. On the bottom left, Chinese annotators' agreement with the norm "It is bad to work all of the time" was moderate. But when asked to rate the RoT as a Chinese person, GPT-4 assigned the highest probability to "disagree", suggesting that it assumes most Chinese people are ok with working all the time. This is in line with the stereotypical view of Chinese culture as endorsing overwork, which is likely prevalent in the model's training data. This perception of Chinese culture fails to account for both individual differences and changes in cultural norms over time, such as the recent push back among the younger population against the "996" work schedule (9 a.m. to 9 p.m., 6 days a week; Zhu and Peng, 2023). As a control, GPT-4 predicts that a "default", culturally-unmarked person would highly agree that it's bad to work all the time, and its predictions for other countries vary but are not as overwhelmingly disagreeing as the Chinese predictions.

A similar behavior is observed for the RoT "It is unethical to tell a lie to get benefits for yourself" (bottom right in Fig. 5); GPT-4's predicted distribution for Chinese raters places the highest likelihood on a neutral rating, suggesting uncertainty about whether such behavior is wrong – reflecting the stereotype that Chinese people are dishonest. This is an oversimplification of Chinese values, which assess the morality of deception in light of its effects and the broader context in which it occurred, in contrast to the Western perception that dishonesty is always bad (Blum, 2007; Kwiatkowska, 2015). Again, as a control, GPT-4's ratings for other countries show stronger disapproval.

5 Conclusion

We show that GPT-4 exhibits default representational biases when reasoning about culturally-grounded social norms. Specifically, its latent cultural representation aligns most closely with the US and least with China, with India and Iran falling in between. Moreover, while the model tends to avoid generating overtly stereotypical language, these stereotypes are still implicitly ingrained in the model and can be resurfaced – due to the lack of real technical solutions.
5 Conclusion

We show that GPT-4 exhibits default representational biases when reasoning about culturally-grounded social norms. Specifically, its latent cultural representation aligns most closely with the US and least with China, with India and Iran falling in between. Moreover, while the model tends to avoid generating overtly stereotypical language, these stereotypes are still implicitly ingrained in the model and can be resurfaced, due to the lack of real technical solutions. Finally, our findings also highlight a key tension in the design of culturally-competent LLMs, which on the one hand need to possess culture-specific knowledge, while on the other hand risk perpetuating stereotypes about the same cultures. Addressing these challenges is crucial given the diverse user base of LLMs and their widespread usage in downstream applications.

Limitations

Scope. Our study uses countries as a proxy for cultures, which is the most common proxy in NLP research despite its limitations (Zhou et al., 2025). Due to the cost of human annotations and API calls, we focused on four geographically- and culturally-diverse countries, and only evaluated GPT-4, which we selected due to its popularity and wide reach. Finally, due to the relatively small number of human annotators from each culture, we did not study individual differences between annotators in this study. Future work would need to cover a wider range of cultures and models to draw a complete picture of LLMs' default cultural representations.

Cultural Grounding. In this paper, we deviated from the common practice of prompting LLMs directly about their values and instead prompted them to reason about social norms in existing narratives. We intentionally looked for human-written (as opposed to LLM-generated) narratives grounded in different cultures. We chose movies because they often reflect cultural norms (Rai et al., 2025). Yet, it is possible that movies exhibit a certain "reporting bias" and depict more unusual events. Furthermore, to factor out the effect of the multilingual capabilities of GPT-4 on our study, we strictly limited the experiments to English text.8 It is possible that a movie plot in English Wikipedia has been written from the perspective of a Western editor (Kumar, 2021). This setup, and the availability of crowdsourcing workers, also required us to employ bicultural annotators – individuals who identified with the target culture but currently live in English-speaking countries – which could have impacted their judgments. We attempted to activate a specific cultural identity through cultural priming techniques. Nevertheless, even with our simplifying assumptions, our study takes a step forward from quantifying LLMs' cultural alignment through surveys with direct questions about values.

8 In preliminary experiments we also tested translating prompts to the local language, which yielded subpar results.

Ethical Considerations

Annotator Selection and Compensation. The study was conducted with the approval of our institute's Behavioral Research Ethics Board, which reviewed the data collection procedures to ensure they posed no risk of harm to human participants. Annotators were compensated fairly according to CloudResearch's compensation guidelines, which exceed local minimum wage standards. All annotation instructions explicitly directed participants to avoid including any personally identifiable information in their responses.

Screening for Harmful Content. Prior to human evaluation, we conducted a thorough review of the movie plots to screen for and remove any harmful or unsafe content. These steps were taken to ensure ethical compliance, participant safety, and data integrity throughout the study.
Using Country as a Cultural Proxy. We also acknowledge that cultural identity does not map neatly onto geographic or national boundaries, and that cultural variation exists at the individual level, shaped by personal history and experience. However, for the purposes of this study, we use country as a proxy for cultural grouping, consistent with prior work.

Inadvertent Stereotypes. We used culturally relevant images to prime annotators before norm generation and collected social norms rooted in specific cultural contexts. While our intention was to support cultural reflection, we acknowledge that both the images and the resulting norms may inadvertently reflect or reinforce cultural stereotypes.

Acknowledgements

This work was funded, in part, by the Vector Institute for AI, Canada CIFAR AI Chairs program, CIFAR AI Catalyst grant, Accelerate Foundation Models Research Program Award from Microsoft, and an NSERC Discovery grant. We thank Samantha Stilwell, Yuwei Yin, EunJeong Hwang, Aditya Chinchure, Jonath Sujan, and Jason Doornenbal for their valuable feedback and support.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Arnav Arora, Lucie-Aimée Kaffee, and Isabelle Augenstein. 2023. Probing pre-trained language models for cross-cultural differences in values. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP), pages 114–130, Dubrovnik, Croatia. Association for Computational Linguistics.

Arnav Arora, Lucie-Aimée Kaffee, and Isabelle Augenstein. 2022. Probing pre-trained language models for cross-cultural differences in values.

Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT), pages 610–623.

Mehar Bhatia, Sahithya Ravi, Aditya Chinchure, EunJeong Hwang, and Vered Shwartz. 2024. From local concepts to universals: Evaluating the multicultural understanding of vision-language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 6763–6782, Miami, Florida, USA. Association for Computational Linguistics.

Susan D Blum. 2007. Lies that bind: Chinese truth, other truths. Rowman & Littlefield Publishers.

Julien Boelaert, Samuel Coavoux, Étienne Ollion, Ivaylo Petev, and Patrick Präg. 2025. Machine bias. How do generative language models answer opinion polls? Sociological Methods & Research.

Richard Y. Bourhis, Nicholas J. Gadfield, Howard Giles, and Henri Tajfel. 1977. Context and ethnic humour in intergroup relations. In Antony J. Chapman and Hugh C. Foot, editors, It's a Funny Thing, Humour, pages 261–265. Pergamon.

Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186.

Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, and Daniel Hershcovich. 2023a. Assessing cross-cultural alignment between ChatGPT and human societies: An empirical study. arXiv preprint arXiv:2303.17466.

Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, and Daniel Hershcovich. 2023b. Assessing cross-cultural alignment between ChatGPT and human societies: An empirical study.
In Proceedings of the First Workshop on Cross-Cultural
Considerations in NLP (C3NLP), pages 53–67, Dubrovnik, Croatia. Association for Computational Linguistics.

Myra Cheng, Esin Durmus, and Dan Jurafsky. 2023. Marked personas: Using natural language prompts to measure stereotypes in language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1504–1532, Toronto, Canada. Association for Computational Linguistics.

Esin Durmus, Karina Nguyen, Thomas Liao, Nicholas Schiefer, Amanda Askell, Anton Bakhtin, Carol Chen, Zac Hatfield-Dodds, Danny Hernandez, Nicholas Joseph, Liane Lovitt, Sam McCandlish, Orowa Sikder, Alex Tamkin, Janel Thamkul, Jared Kaplan, Jack Clark, and Deep Ganguli. 2024. Towards measuring the representation of subjective global opinions in language models. In First Conference on Language Modeling.

Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social chemistry 101: Learning to reason about social and moral norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 653–670, Online. Association for Computational Linguistics.

Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609–614, Minneapolis, Minnesota. Association for Computational Linguistics.

Pamela Grimm. 2010. Social desirability bias. Wiley International Encyclopedia of Marketing.

Maarten Grootendorst. 2022. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv preprint arXiv:2203.05794.

Christian Haerpfer, Ronald Inglehart, Alejandro Moreno, Christian Welzel, Kseniya Kizilova, Jaime Diez-Medrano, Marta Lagos, Pippa Norris, Eduard Ponarin, and Bi Puranen. 2020. World values survey wave 7 (2017-2020) cross-national data-set.

Jochen Hartmann, Jasper Schwenzow, and Maximilian Witte. 2023. The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. arXiv preprint arXiv:2301.01768.

Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. ToxiGen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3309–3326, Dublin, Ireland. Association for Computational Linguistics.

Shreya Havaldar, Bhumika Singhal, Sunny Rai, Langchen Liu, Sharath Chandra Guntuku, and Lyle Ungar. 2023. Multilingual language models are not multicultural: A case study in emotion. In Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis, pages 202–214, Toronto, Canada. Association for Computational Linguistics.

Joseph Henrich, Steven J Heine, and Ara Norenzayan. 2010. The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3):61–83.

Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, Stephanie Brandl, Emanuele Bugliarello, Laura Cabello Piqueras, Ilias Chalkidis, Ruixiang Cui, Constanza Fierro, Katerina Margatina, Phillip Rust, and Anders Søgaard. 2022. Challenges and strategies in cross-cultural NLP.
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6997–7013, Dublin, Ireland. Association for Computational Linguistics.

Valentin Hofmann,
Pratyusha Ria Kalluri, Dan Jurafsky, and Sharese King. 2024. AI generates covertly racist decisions about people based on their dialect. Nature, 633(8028):147–154.

Geert Hofstede. 1984. Culture's consequences: International differences in work-related values, volume 5. Sage.

Ying-Yi Hong, Michael W Morris, Chi-yue Chiu, and Veronica Benet-Martínez. 2000. Multicultural minds: A dynamic constructivist approach to culture and cognition. American Psychologist, 55(7):709–720.

Jing Huang and Diyi Yang. 2023. Culturally aware natural language inference. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7591–7609. Association for Computational Linguistics.

Akshita Jha, Aida Mostafazadeh Davani, Chandan K Reddy, Shachi Dave, Vinodkumar Prabhakaran, and Sunipa Dev. 2023. SeeGULL: A stereotype benchmark with broad geo-cultural coverage leveraging generative models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9851–9870, Toronto, Canada. Association for Computational Linguistics.

Rebecca L Johnson, Giada Pistilli, Natalia Menédez-González, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene, and Donald Jay Bertulfo. 2022. The ghost in the machine has an American accent: Value conflict in GPT-3. arXiv preprint arXiv:2203.07785.

Fanjie Kong, Shuai Yuan, Weituo Hao, and Ricardo Henao. 2024. Mitigating test-time bias for fair image retrieval. Advances in Neural Information Processing Systems, 36.

Sangeet Kumar. 2021. The digital frontier: Infrastructures of control on the global web. Indiana University Press.

Anna Kwiatkowska. 2015. How do others deceive? Cultural aspects of lying and cheating. The small and big deceptions: In psychology and evolutionary sciences perspective, pages 46–72.

Yizhi Li, Ge Zhang, Bohao Yang, Chenghua Lin, Shi Wang, Anton Ragni, and Jie Fu. 2022. HERB: Measuring hierarchical regional bias in pre-trained language models. In Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, pages 334–346. Association for Computational Linguistics.

Chen Cecilia Liu, Iryna Gurevych, and Anna Korhonen. 2024. Culturally aware and adapted NLP: A taxonomy and a survey of the state of the art. arXiv preprint arXiv:2406.03930.

Chen Cecilia Liu, Fajri Koto, Timothy Baldwin, and Iryna Gurevych. 2023. Are multilingual LLMs culturally-diverse reasoners? An investigation into multicultural proverbs and sayings.

Zhuozhuo Liu, Menxue Cheng, Kaiping Peng, and Dan Zhang. 2015. Self-construal priming selectively modulates the scope of visual attention. Frontiers in Psychology, 6:1508.

Reem Masoud, Ziquan Liu, Martin Ferianc, Philip C. Treleaven, and Miguel Rodrigues Rodrigues. 2025. Cultural alignment in large language models: An explanatory analysis based on Hofstede's cultural dimensions. In Proceedings of the 31st International Conference on Computational Linguistics, pages 8474–8503, Abu Dhabi, UAE. Association for Computational Linguistics.

Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371, Online. Association for Computational Linguistics.

Tarek Naous, Michael J Ryan, Alan Ritter, and Wei Xu. 2024a. Having beer after prayer? Measuring cultural bias in large language models. In Proceedings of the 62nd Annual Meeting of the
Association for Computational Linguistics (Volume 1: Long Papers), pages 16366–16393, Bangkok, Thailand. Association for Computational Linguistics.

Tarek Naous, Michael J. Ryan, Alan Ritter, and Wei Xu. 2024b. Having beer after prayer? Measuring cultural bias in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1234–1245, Toronto, Canada. Association for Computational Linguistics.

Richard E Nisbett, Kaiping Peng, Incheol Choi, and Ara Norenzayan. 2001. Culture and systems of thought: Holistic versus analytic cognition. Psychological Review, 108(2):291–310.

OpenAI. 2024. GPT-4o technical report. Accessed: 2025-05-15.

Daphna Oyserman and Spike W. S. Lee. 2008. Does culture influence what and how we think? Effects of priming individualism and collectivism. Psychological Bulletin, 134(2):311–342.

Haoyi Qiu, Alexander Fabbri, Divyansh Agarwal, Kung-Hsiang Huang, Sarah Tan, Nanyun Peng, and Chien-Sheng Wu. 2025. Evaluating cultural and social awareness of LLM web agents. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3978–4005, Albuquerque, New Mexico. Association for Computational Linguistics.

Sunny Rai, Khushang Zaveri, Shreya Havaldar, Soumna Nema, Lyle Ungar, and Sharath Chandra Guntuku. 2025. Social norms in cinema: A cross-cultural analysis of shame, pride and prejudice. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 11396–11415, Albuquerque, New Mexico. Association for Computational Linguistics.

Aida Ramezani and Yang Xu. 2023. Knowledge of cultural moral norms in large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 428–446, Toronto, Canada. Association for Computational Linguistics.

Abhinav Sukumar Rao, Akhila Yerukola, Vishwa Shah, Katharina Reinecke, and Maarten Sap. 2025. NormAd: A framework for measuring the cultural adaptability of large language models. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 2373–2403, Albuquerque, New Mexico. Association for Computational Linguistics.

Max Reuter and William Schulze. 2023. I'm afraid I can't do that: Predicting prompt refusal in black-box generative language models. arXiv preprint arXiv:2306.03423.

Hamidreza Saffari, Mohammadamin Shafiei, Donya Rooein, Francesco Pierri, and Debora Nozza. 2025. Can I introduce my boyfriend to my grandmother? Evaluating large language models capabilities on Iranian social norm classification. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 6060–6074, Albuquerque, New Mexico. Association for Computational Linguistics.

Sougata Saha, Saurabh Kumar Pandey, and Monojit Choudhury. 2025. Meta-cultural competence: Climbing the right hill of cultural awareness. In Proceedings of the 2025 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8025–8042, Albuquerque, New Mexico. Association for Computational Linguistics.

Edward W Said. 1978. Orientalism. Pantheon Books.
Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, and Tatsunori Hashimoto. 2023. Whose opinions do language models reflect? In Proceedings of the 40th International Conference on
Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 29971–30004. PMLR.

Andrew D Selbst, Danah Boyd, Sorelle A Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency, pages 59–68. ACM.

Siqi Shen, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Soujanya Poria, and Rada Mihalcea. 2024a. Understanding the capabilities and limitations of large language models for cultural commonsense. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 5668–5680, Mexico City, Mexico. Association for Computational Linguistics.

Siqi Shen, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Soujanya Poria, and Rada Mihalcea. 2024b. Understanding the capabilities and limitations of large language models for cultural commonsense. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 5668–5680, Mexico City, Mexico. Association for Computational Linguistics.

Yan Tao, Olga Viberg, Ryan S Baker, and René F Kizilcec. 2024. Cultural bias and cultural alignment of large language models. PNAS Nexus, 3(9):pgae346.

Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. 2023. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805.

Michael Thai, Alex M. Borgella, and Melanie S. Sanchez. 2019. It's only funny if we say it: Disparagement humor is better received if it originates from a member of the group being disparaged. Journal of Experimental Social Psychology, 85:103838.

Shweta Verma and Neelam Sukhramani. 2018. Interfaith marriages and negotiated spaces. Society and Culture in South Asia, 4(1):16–43.

Wenxuan Wang, Wenxiang Jiao, Jingyuan Huang, Ruyi Dai, Jen-tse Huang, Zhaopeng Tu, and Michael Lyu. 2024a. Not all countries celebrate Thanksgiving: On the cultural dominance in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6349–6384, Bangkok, Thailand. Association for Computational Linguistics.

Yuhang Wang, Yanxu Zhu, Chao Kong, Shuyu Wei, Xiaoyuan Yi, Xing Xie, and Jitao Sang. 2024b. CDEval: A benchmark for measuring the cultural dimensions of large language models. In Proceedings of the 2nd Workshop on Cross-Cultural Considerations in NLP, pages 1–16, Bangkok, Thailand. Association for Computational Linguistics.

Simone Zhang, Janet Xu, and AJ Alvero. 2025. Generative AI meets open-ended survey responses: Research participant use of AI and homogenization. Sociological Methods & Research, page 00491241251327130.

Naitian Zhou, David Bamman, and Isaac L Bleaman. 2025. Culture is not trivia: Sociocultural theory for cultural NLP. arXiv preprint arXiv:2502.12057.

Ying Zhu and Junqi Peng. 2023. From diaosi to sang to tangping: The Chinese DST youth subculture online. Global Storytelling: Journal of Digital and Moving Images, 3(2):13–38.

Caleb Ziems, Jane Dwivedi-Yu, Yi-Chia Wang, Alon Halevy, and Diyi Yang. 2023. NormBank: A knowledge bank of situational social norms. In Proceedings of the
61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7756–7776, Toronto, Canada. Association for Computational Linguistics.

A Human Annotation

A.1 Annotators Demographics

RoTs Collection Task (§3.2). We collected annotations from 88 annotators across four cultural groups, primarily from the US (n = 44), followed by India (23), Iran (13), and China (10). Table 4 reports the country of residence composition for each culture. Among those who reported, annotators ranged in age from 18 to 62 years (M = 34.1, SD = 9.5), with most between 26 and 50. The gender distribution included 45 women (52.9%), 39 men (45.9%), and 1 non-binary participant (1.2%), and most reported holding a bachelor's (42.4%) or master's degree (24.7%).

Culture   Canada   United Kingdom   United States
China     40.0     10.0             50.0
India     13.0     8.7              78.3
Iran      12.5     0.0              87.5
US        0.0      0.0              100.0

Table 4: Country of residence composition (%) of annotators within each cultural group for the RoT collection task.

RoTs Rating Task (§4). We collected annotations from 56 participants across four cultural groups, with the largest from the US (n = 25), followed by China (14), Iran (10), and India (7). Table 5 reports the country of residence composition for each culture. Participants ranged in age from 18 to 66 years (M = 33.3, SD = 10.6), with the majority between 18 and 50 years old. The sample included 28 women (50.0%), 19 men (33.9%), and 1 non-binary participant (1.8%), with most holding a bachelor's degree (50.0%).

Culture   Canada   United States
China     28.6     71.4
India     16.7     83.3
Iran      0.0      100.0
US        0.0      100.0

Table 5: Country of residence composition (%) of annotators within each cultural group for the RoT evaluation task.

A.2 Annotation Interface

Figures 6, 7, and 8 present the user interfaces for the cultural activation, RoT collection, and RoT evaluation tasks respectively, using Iran as an example culture.

B LLM Prompts

B.1 RoT Collection

Write rules-of-thumb to explain social norms to a computer. Your task is to write social norms Rules-of-Thumb (RoTs) inspired by the provided story. I want RoTs that explain what we consider good and bad behaviour. You can think of the computer like a five-year-old child who doesn't know how the world works. We have to explain the basics to them about what people think is right and wrong in society, and how our actions make other people feel.

Requirements for writing RoTs:
1. Judgment+action: Each RoT is in a single sentence with a straightforward structure: it is [the judgment] of [an action]. For example, It is rude to curse at people.
2. Verb-Centric: Anchor each RoT to a specific verb from the story (tense doesn't matter). For example: "curse; It is rude to curse at people.", "likes; It is good to like your relatives.", "invited; It is devastating to be excluded from a wedding you were invited to."
3. Specificity: Avoid overly generic statements.
4. Quantity: Write 3 to 5 RoTs.

Bad Examples:
1. curse; Cursing at people is rude. (too rigid in format)
2. sleep; Studies have shown... (encyclopedic, not normative)
3. cut; It's ok to cut off contact... (requires original story context)
4. helped; It's kind to help people. (too vague)

Table 6: Prompt for the RoT collection task.

Figure 6: Cultural activation task interface. Annotators are presented with five culturally relevant images (e.g., national flag, historical figures, landmarks, daily life, and festivals) and asked to answer short questions. This task primes participants to reflect on their cultural identity before writing social norms. Shown here is an example used for Iranian participants.

Figure 7: Interface for collecting Rules-of-Thumb (RoTs) from annotators. After completing the cultural activation task, annotators are shown a movie plot from their culture and asked to generate 3–5 RoTs that reflect social norms grounded in their cultural context. Action verbs in the story are automatically highlighted; clicking on a verb opens an input box where annotators can write norm statements anchored to that action. Instructions emphasize cultural alignment, verb specificity, and a clear judgment-action format (e.g., "It is rude to curse at people").

Figure 8: Evaluation interface. Annotators from each cultural background rate each RoT based on three criteria: accuracy, cultural specificity, and stereotypicality. They complete the same cultural priming task prior to rating, ensuring culturally grounded evaluations.

B.2 RoT Evaluation

I will provide a statement (in double quotes). Estimate how people with a X cultural background would rate the extent to which this statement accurately reflects social norms and practices in X. Use a scale from 1 (not accurate) to 5 (accurate). Return only a single number from 1 to 5 and nothing else.

Table 7: Prompt for the RoT evaluation task. Red text was included only in the cultural prompting setup, with X set to one of the countries in our experiments (e.g., "Chinese background", "in China").

C Example RoTs

Table 8 presents example stereotypical RoTs from each culture along with the human ratings for accuracy, culture-specificity, and stereotypicality.
Country   RoT                                                     Accuracy             Specificity          Stereotypicality
China     It is inappropriate to shut down a college.             [1, 2, 5]            [4, 4, 3]            [1, 1, 1]
China     It is honorable to fulfill family obligations.          [5, 5, 5, 5, 3]      [4, 2, 4, 3, 1]      [1, 1, 1, 1, 1]
China     It is important to study hard in school.                [5, 5, 5, 5, 5]      [3, 1, 4, 1, 5]      [1, 1, 1, 1, 1]
China     It is important to study hard.                          [5, 5, 5, 5, 5]      [3, 1, 3, 1, 5]      [1, 1, 1, 1, 1]
China     It is good to be known for hard work.                   [5, 5, 5]            [3, 1, 3]            [1, 1, 1]
India     It is terrible to kill cows for human consumption.      [4, 5, 4, 2]         [5, 5, 5, 3]         [1, 1, 1, 1]
India     It is traditional to get an arranged marriage.          [5, 5, 5, 5, 5]      [4, 2, 2, 3, 5]      [1, 1, 1, 1, 1]
India     It is mandatory to offer guests tea or coffee.          [5, 5, 5, 5, 2]      [2, 2, 3, 1, 5]      [1, 1, 1, 1, 1]
India     It is dutiful to include all of your family members.    [5, 5, 5, 5, 5]      [5, 3, 2, 2, 5]      [1, 1, 1, 1, 0]
India     It is responsible to arrange your sister's marriage.    [5, 4, 5, 5, 5]      [4, 3, 2, 3, 5]      [1, 1, 1, 1, 0]
Iran      It is important for a woman to wear a chador outside.   [5, 3, 4, 5, 1, 5]   [5, 3, 4, 2, 5, 5]   [0, 1, 1, 1, 1, 0]
Iran      It is admirable to go the extra mile even when tired.   [4, 2, 4, 2, 4, 3]   [4, 4, 4, 4, 4, 3]   [0, 0, 1, 1, 1, 1]
Iran      It is rude to marry someone non-Iranian.                [3, 4, 4, 4, 4, 1]   [3, 4, 4, 4, 4, 4]   [1, 0, 1, 0, 1, 1]
Iran      It is immoral to reveal the body in public.             [5, 5, 4, 3, 3, 4]   [5, 3, 4, 3, 3, 5]   [0, 1, 1, 0, 1, 1]
Iran      It is okay to marry your cousin.                        [4, 2, 4, 4, 5, 5]   [4, 2, 4, 3, 5, 5]   [0, 1, 1, 0, 1, 1]

Table 8: Selected stereotypical RoTs per culture, along with individual annotator ratings for accuracy, cultural specificity, and stereotypicality.
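As a rough illustration of the selection step described in the stereotype-probing experiment (the top quartile of stereotypicality scores within each cultural group), the sketch below operates on records shaped like the rows of Table 8. The field names and the `numpy.quantile` cutoff are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

# Each record mirrors a Table 8 row: per-annotator stereotypicality
# votes (0/1) attached to a culture-tagged RoT.
records = [
    {"culture": "China", "rot": "It is important to study hard.",
     "stereotypicality": [1, 1, 1, 1, 1]},
    {"culture": "Iran", "rot": "It is okay to marry your cousin.",
     "stereotypicality": [0, 1, 1, 0, 1, 1]},
    # ... one record per collected RoT
]

def top_quartile_stereotypical(records):
    """Keep, per culture, the RoTs whose mean stereotypicality score
    falls in the top quartile for that cultural group."""
    selected = []
    for culture in {r["culture"] for r in records}:
        group = [r for r in records if r["culture"] == culture]
        scores = [np.mean(r["stereotypicality"]) for r in group]
        cutoff = np.quantile(scores, 0.75)  # 75th-percentile threshold
        selected += [r for r, s in zip(group, scores) if s >= cutoff]
    return selected
```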
PerMedCQA: Benchmarking Large Language Models on Medical Consumer Question Answering in Persian Language

Naghmeh Jamali1, Milad Mohammadi2, Danial Baledi2, Zahra Rezvani3, Hesham Faili2
1School of Computer Science, Institute for Research in Fundamental Sciences, Tehran, Iran.
2School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran.
3Department of Computer Science, Faculty of Mathematical Sciences, Alzahra University, Tehran, Iran.
naghme.jamali.ai@gmail.com, miladmohammadi@ut.ac.ir, baledi.danial@gmail.com, z.rezvani@gmail.com, hfaili@ut.ac.ir

Abstract

Medical consumer question answering (CQA) is crucial for empowering patients by providing personalized and reliable health information. Despite recent advances in large language models (LLMs) for medical QA, consumer-oriented and multilingual resources—particularly in low-resource languages like Persian—remain sparse. To bridge this gap, we present PerMedCQA, the first Persian-language benchmark for evaluating LLMs on real-world, consumer-generated medical questions. Curated from a large medical QA forum, PerMedCQA contains 68,138 question-answer pairs, refined through careful data cleaning from an initial set of 87,780 raw entries. We evaluate several state-of-the-art multilingual and instruction-tuned LLMs, utilizing MedJudge, a novel rubric-based evaluation framework driven by an LLM grader, validated against expert human annotators. Our results highlight key challenges in multilingual medical QA and provide valuable insights for developing more accurate and context-aware medical assistance systems. The data is publicly available at https://huggingface.co/datasets/NaghmehAI/PerMedCQA

1 Introduction

Recent advances in large language models (LLMs) have significantly enhanced the capabilities of Medical Question Answering (MQA) systems, facilitating rapid and reliable access to healthcare information and supporting clinical decision-making (Wang et al., 2024; He et al., 2025; Zheng et al., 2025). These systems have demonstrated impressive performance on standardized, exam-style questions, predominantly within English-language contexts, significantly aiding clinicians and patients alike by providing timely, accurate medical knowledge (Meng et al., 2024; Shi et al., 2024; Tong et al., 2025). However, existing datasets and benchmarks primarily target structured, multiple-choice, or short-form question answering, often failing to capture the complexities and nuanced nature of real-world patient inquiries (Pal et al., 2022a; Manes et al., 2024; Kim et al., 2024). Additionally, the emphasis on high-resource languages, particularly English, leaves substantial gaps in linguistic and cultural diversity, posing significant limitations for truly inclusive and equitable healthcare AI systems (Tian et al., 2019; Daoud et al., 2025; Sviridova et al., 2024).

Addressing these gaps necessitates the development of consumer-oriented medical QA datasets that authentically reflect informal, culturally specific, and real-world patient questions encountered in everyday healthcare interactions (Nguyen et al., 2023; Hosseini et al., 2024). Such resources are particularly scarce for low-resource languages like Persian (Farsi).
To the best of our knowledge, no existing studies or datasets explicitly focus on Persian medical consumer question answering, highlighting a critical barrier to the advancement of equitable and culturally-aligned healthcare AI systems in this language.

To bridge this critical gap, we introduce PerMedCQA, the first large-scale Persian-language benchmark specifically designed for consumer-oriented medical question answering. PerMedCQA comprises 68,138 carefully curated question-answer pairs derived from real-world interactions collected from four prominent Persian online medical forums. These interactions
involve authentic consumer-generated health queries answered by licensed medical professionals, encompassing a wide range of medical specialties and enriched with detailed metadata such as patient demographics, physician specialty, and platform source.

To ensure the dataset's reliability, clinical utility, and compliance with privacy standards, we implemented a rigorous two-stage data cleaning process. Initially, rule-based heuristics were applied to remove invalid entries, extremely short interactions, duplicates, and non-textual content, significantly enhancing the data's structural quality. Subsequently, we employed an LLM to label instances containing personally identifiable information (PII) such as names, phone numbers, addresses, and emails, further ensuring user privacy and ethical compliance. This comprehensive preprocessing pipeline resulted in a high-quality, de-identified dataset suitable for robust medical AI research.

Figure 1: Overview of PerMedCQA. Dataset construction covers finding suitable data sources, data preprocessing, filtering, and PII detection, followed by annotation (ICD-11 and question type); inference leverages LLMs using different techniques (selecting and fine-tuning LLMs, translation, few-shot and role-based prompting, choosing the best non-fine-tuned model); evaluation uses Med-Judge (correctness, coverage, clinical impact), supported by human evaluation. The resulting medical consumer question answering dataset includes 68,138 QA pairs.

To facilitate rigorous benchmarking and fine-grained analyses, we systematically annotated each QA pair along two critical dimensions: (1) disease categorization using the International Classification of Diseases, 11th Revision (ICD-11) (Khoury et al., 2017), comprising 28 distinct medical categories; and (2) classification into one of 25 standardized question types that capture the structural and semantic intent of patient inquiries (Abacha et al., 2019a). This extensive annotation significantly enhances PerMedCQA's utility for comprehensive evaluation and analysis of medical LLM capabilities.

Given the inherent challenges in evaluating open-ended medical QA tasks, where clinically acceptable responses often vary significantly in phrasing, we adopt an innovative evaluation framework utilizing a large language model (Med-Judge) as an automated grader. Validated through expert physician assessments, this rubric-driven system systematically compares model-generated responses against expert-provided answers, enabling nuanced evaluations beyond traditional lexical similarity metrics like BLEU or ROUGE.

We conducted extensive benchmarking experiments across diverse state-of-the-art language models, including both proprietary and prominent open-source variants. Robust baseline performance was established through zero-shot inference, followed by exploration of several inference-time enhancements—such as role-based conditioning and pivot translation—to improve response quality without parameter updates. Additionally, supervised fine-tuning experiments using parameter-efficient methods (LoRA) were performed on selected smaller models to assess the learnability and practical utility of PerMedCQA as a training resource.
Our key contributions can be summarized as follows:

• We introduce PerMedCQA, the first large-scale, real-world Persian medical QA benchmark, meticulously constructed through a rigorous two-step data cleaning process—rule-based filtering and LLM-based PII detection—and annotated comprehensively with ICD-11 categories and standardized question types, substantially addressing the resource gap for low-resource language medical QA while ensuring data quality, privacy, and ethical compliance.

• We present an automated yet clinically informed evaluation framework (Med-Judge), validated by expert physician reviews, providing reliable and nuanced assessments of open-ended medical question answering quality.
• We comprehensively benchmark a variety of proprietary and open-source LLMs, identifying substantial performance variations and demonstrating the effectiveness of prompt-based techniques for enhancing model outputs.

• We evaluate supervised fine-tuning strategies using PerMedCQA, highlighting their potential and limitations for improving smaller-scale model performance and clinical reliability.

An overview of the complete PerMedCQA workflow—from data collection and annotation through to model evaluation—is provided in Figure 1. Through these contributions, PerMedCQA establishes a critical foundation for future research aimed at developing trustworthy, culturally-sensitive, and linguistically inclusive medical AI systems specifically tailored to Persian-speaking populations.

2 Related Work

The rapid development of Large Language Models (LLMs) has significantly advanced the field of Medical Question Answering (Medical QA). While models such as GPT-4 (Nori et al., 2023), PaLM (Chowdhery et al., 2023), Mistral (Chaplot et al., 2023), and LLaMA (Touvron et al., 2023) have achieved impressive results on English-language Medical QA benchmarks, the progress of LLMs in non-English and consumer-focused medical domains remains underexplored. In this section, we review key datasets and modeling approaches that have shaped current research in Medical QA.

2.1 Medical Question Answering Datasets

The progress in Medical QA is closely tied to the availability of high-quality and diverse datasets. To evaluate clinical accuracy and factual consistency, LLMs have been tested on exam-style multiple-choice benchmarks such as MedQA (Jin et al., 2021) and PubMedQA (Jin et al., 2019). These benchmarks feature questions framed in the style of medical licensing exams, focusing on factual recall. However, multiple-choice formats often fail to reflect the complexity and nuance of real-world medical inquiries, as they constrain responses to predefined options and limit models' ability to generate explanatory or contextual answers (Welivita and Pu, 2023). To address these limitations, recent benchmarks such as Medical Long-Form QA (Hosseini et al., 2024) and MedRedQA (Nguyen et al., 2023) prioritize practical utility and open-ended responses, aligning more closely with consumer-oriented healthcare needs.

While most Medical QA datasets are in English, efforts have been made to extend coverage to other languages. For Chinese, ChiMed (Tian et al., 2019), built from large online medical forums, serves as a benchmark for QA in Chinese. MedDialog (Zeng et al., 2020), comprising real-world, open-ended medical dialogues in both Chinese and English, has enabled progress in conversational medical systems via transfer learning. Huatuo-26M (Li et al., 2023a), a large-scale dataset with 26 million QA pairs sourced from Chinese encyclopedias, knowledge bases, and online consultations, has further boosted model training. More recently, MMedC (Qiu et al., 2024) was introduced as a multilingual medical corpus containing approximately 25.5 billion tokens across six languages, aiming to support the development of more capable and generalizable medical LLMs, particularly for low-resource languages. For Arabic, AraMed (Alasmari et al., 2024) provides QA pairs extracted from AlTibbi, an online doctor-patient discussion platform.

Several foundational datasets have been consolidated into comprehensive benchmarks for evaluating LLMs in medical QA (Jin et al., 2019; Ben Abacha et al., 2019; Jin et al., 2021).
MultiMedQA (Singhal et al., 2023b), for instance, aggregates six multiple-choice datasets—MedQA (USMLE), MedMCQA (Pal et al., 2022b), PubMedQA, LiveQA (Abacha et al., 2019b), MedicationQA (Abacha et al., 2019a), and MMLU clinical topics (Hendrycks et al., 2020)—along with the HealthSearchQA (Singhal et al., 2023a) dataset, to support a wide range of evaluation tasks. In the context of Retrieval-Augmented Generation (RAG), MIRAGE (Xiong et al., 2024) offers a standardized benchmark that combines subsets of MedQA, MedMCQA, PubMedQA, MMLU clinical topics, and BioASQ-YN (Tsatsaronis et al., 2015), enabling systematic evaluation of RAG techniques in medical QA.

2.2 Medical Large Language Models (LLMs in healthcare)

Recent years have seen the emergence of several domain-specific LLMs for healthcare, such as Med-Gemini (Saab et al., 2024), Med-PaLM (Singhal et al., 2023b), MedPaLM-2 (Singhal et al., 2025), BioMistral (Labrak et al., 2024a), PMC-LLaMA (Wu et al., 2024) and MMed-Llama 3 (Qiu et al., 2024), each demonstrating superior performance in medical reasoning and generation tasks. These advancements necessitate the development of more robust and nuanced benchmarks to thoroughly assess model capabilities.

The field has also expanded with fine-tuned medical LLMs such as HuatuoGPT (Zhang et al., 2023), BianQue (Chen et al., 2023), ClinicalGPT (Wang et al., 2023), DoctorGLM (Xiong et al., 2023), ChatDoctor (Li et al., 2023b), Baize-healthcare (Xu et al., 2023), Zhongjing (Yang et al., 2024), Clinical Camel (Toma et al., 2023), and Me-LLaMA (Xie et al., 2024), many of which were trained on real-world clinical notes and patient-doctor dialogues. In parallel, multilingual medical models such as CareBot (Zhao et al., 2025), Medical-mT5 (García-Ferrero et al., 2024), ChiMed-GPT (Tian et al., 2024), and BiMediX (Pieri et al., 2024) have extended support for cross-lingual medical applications, increasing the global accessibility of medical AI systems.

Despite these advancements, there is a gap in the literature when it comes to real-world datasets, and most open-ended medical QA datasets remain limited to English, leaving a substantial gap in multilingual evaluation—especially for low-resource languages.

3 PerMedCQA Dataset Construction

To support the development and evaluation of medical consumer question answering systems in Persian, we introduce PerMedCQA — a large-scale dataset of real-world consumer health questions and expert answers, sourced from verified specialists across multiple public Persian forums. This section details the data collection process, preprocessing pipeline, automatic annotation, and benchmark construction.

3.1 Data Sources and Raw Collection

The initial dataset comprises 87,780 question-answer pairs collected from four major Persian-language health Q&A platforms: DrYab, HiSalamat, GetZoop, and Mavara-e-Teb. Each platform hosts a diverse set of verified physicians, collectively covering over 100 medical specialties. All data was collected from publicly accessible pages between November 10, 2022, and April 2, 2024, in compliance with ethical standards for public web data usage.

Each QA instance includes metadata fields such as Title, Category (user-assigned), Physician Specialty, Age, and Sex. All metadata across sources were standardized to a unified format, ensuring consistency; one possible record layout is sketched below.
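To make the unified metadata format concrete, here is a minimal sketch of one possible standardized record layout; the exact field names and types are our assumption, not the released dataset schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QARecord:
    """One standardized PerMedCQA entry after metadata unification."""
    question: str                         # consumer-written health question
    answer: str                           # licensed physician's response
    title: str
    category: str                         # user-assigned forum category
    physician_specialty: str
    source: str                           # DrYab, GetZoop, HiSalamat, or Mavara-e-Teb
    age: Optional[int] = None             # patient demographics, when reported
    sex: Optional[str] = None
    icd11_category: Optional[str] = None  # filled later by GPT-4o-mini tagging
    question_type: Optional[str] = None   # one of the 25 predefined types
```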
The dataset includes various QA interaction structures: (1) single-turn dialogues (DrYab, HiSalamat), (2) multi-turn conversations (GetZoop), and (3) cases with multiple expert responses (Mavara-e-Teb).

3.2 Preprocessing and Cleaning

We employed a two-stage data cleaning pipeline: rule-based preprocessing and large language model (LLM)–based processing (filtering and tagging); a sketch of the combined pipeline is shown at the end of this subsection.

Rule-based Filtering. Briefly, the following heuristics were applied across all data sources: (1) entries without a valid user or assistant message were removed; (2) QA pairs in which either message had fewer than three words, or contained images, videos, or URLs, were discarded; (3) duplicate (user, assistant) pairs across different files were eliminated. These rule-based filters reduced the dataset from 87,780 to 73,416 instances, removing a total of 14,364 entries.

PII Detection. Given the sensitive nature of healthcare data, we further removed QA pairs containing personally identifiable information (PII), such as names, phone numbers, addresses, and emails. GPT-4o-mini was employed to detect any records containing PII with high accuracy. This stage resulted in the removal of 5,278 additional instances, yielding the final PerMedCQA dataset of 68,138 QA pairs. Table 1 shows the distribution of QA pairs by data source.

Data Source     #QA Pairs   Percentage
DrYab           34,427      50.5%
GetZoop         24,006      35.2%
HiSalamat       5,011       7.4%
Mavara-E-Teb    4,692       6.9%

Table 1: Distribution of QA pairs across the data sources.
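Here is the promised sketch of the two-stage pipeline, assuming (user, assistant) message pairs as input. The `detect_pii` wrapper around GPT-4o-mini is an assumed helper, and the media checks are reduced to a URL pattern, since this text-only sketch cannot inspect attachments.

```python
import re

URL_PATTERN = re.compile(r"https?://\S+")

def passes_rule_based_filters(user_msg, assistant_msg, seen_pairs):
    """Stage 1: heuristic filters over one (user, assistant) pair."""
    for msg in (user_msg, assistant_msg):
        if not msg or len(msg.split()) < 3:      # missing or under three words
            return False
        if URL_PATTERN.search(msg):              # embedded links
            return False
    if (user_msg, assistant_msg) in seen_pairs:  # cross-file duplicate
        return False
    seen_pairs.add((user_msg, assistant_msg))
    return True

def clean(pairs, detect_pii):
    """Stage 1 (rules) followed by stage 2 (LLM-based PII removal).

    `detect_pii` is an assumed callable that asks GPT-4o-mini whether a
    record contains names, phone numbers, addresses, or emails.
    """
    seen = set()
    stage1 = [(u, a) for u, a in pairs if passes_rule_based_filters(u, a, seen)]
    return [(u, a) for u, a in stage1 if not detect_pii(u + "\n" + a)]
```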
3.3 Annotation and Benchmark Splitting

ICD-11 Tagging. To facilitate medically meaningful categorization, each QA pair was assigned one of the 28 categories of the International Classification of Diseases, 11th Revision (ICD-11) (Khoury et al., 2017), using GPT-4o-mini; a minimal sketch of this labeling step is given at the end of this section. The prompt for ICD-11 classification and PII tagging is shown in Figure 3 in Appendix A. This process yielded 28 distinct ICD-11 classes. Unlike user-assigned categories, ICD-11 provides a consistent and standardized taxonomy for disease classification. Figure 5 shows the distribution of ICD-11 categories in PerMedCQA.

Figure 2: Gender Distribution in PerMedCQA.

Question Type Tagging. We categorized each QA pair into one of 25 predefined question types (Abacha et al., 2019a), enabling a deeper structural understanding of the dataset. The definition of each category is illustrated in Table 3 along with some examples, and the prompt for question type tagging is shown in Figure 7. Figure 6 shows the distribution of question types in PerMedCQA.

Extra Analysis Based on Gender. Culture shapes the distribution of questions across ICD-11 disease categories (Figure 5), question types (Figure 6), and the gender of questioners. As shown in Figure 2, the gender-based distribution indicates that women, together with users of unknown gender, constitute the majority of users who post questions on Persian forums. The highest numbers of questions fall in the categories of sexual health (9,266 questions), digestive system (6,868 questions), and skin care and disease (5,478 questions), together comprising roughly one-third of the entire dataset. In all three categories, the dominant gender of the questioners is women, and the question type is predominantly "information". This observation highlights the substantial cultural influence on the prominence of these categories. For example, the tendency among women to ask about their sexual health concerns anonymously, seek home remedies for digestive issues, and obtain skincare advice without in-person consultation reflects cultural norms, explaining the higher participation of women in Persian-language medical forums.

Benchmark Split. To support model training and evaluation, PerMedCQA was partitioned into training, evaluation, and test subsets: the train set contains 64,280 instances, the eval set includes 345 instances (15 per ICD-11 category), and the test set comprises 3,513 instances (150 per ICD-11 category). The evaluation and test splits were stratified by ICD-11 category to ensure balanced representation across medical domains.
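Here is the labeling sketch referenced above. The prompt wording only paraphrases the real one (Figure 3 in Appendix A), the category list is truncated, and `chat` is an assumed wrapper around the GPT-4o-mini API.

```python
ICD11_CATEGORIES = [
    "01 Certain infectious or parasitic diseases",
    "06 Mental, behavioural or neurodevelopmental disorders",
    # ... the remaining top-level ICD-11 chapters, 28 in total
]

def tag_icd11(chat, question, answer):
    """Assign one ICD-11 chapter to a QA pair via a constrained LLM call."""
    options = "\n".join(ICD11_CATEGORIES)
    prompt = (
        "Classify the following Persian medical Q&A into exactly one "
        f"ICD-11 category from this list:\n{options}\n\n"
        f"Question: {question}\nAnswer: {answer}\n"
        "Return only the category name."
    )
    return chat(prompt).strip()
```

The same pattern, with a different label inventory, applies to the 25-way question type tagging of Figure 7.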
4 Experiments

This section outlines the experimental setup used to evaluate models on the PerMedCQA dataset. Due to the long-form (LF) nature of the task, we adopt a structured evaluation approach using a large language model as an automatic judge (Zheng et al., 2023), referred to as Med-Judge, to assess model performance on the PerMedCQA dataset. To validate the reliability of this automatic evaluation protocol, we also conducted a human expert evaluation, following (Hosseini et al., 2024). We then benchmarked a range of state-of-the-art language models using zero-shot inference to establish robust baselines. Based on this initial evaluation, the best-performing model in Persian was chosen for further analysis with a variety of advanced inference techniques (e.g., prompt-based strategies). Furthermore, to assess the quality and learnability of the PerMedCQA dataset, supervised fine-tuning (SFT) was applied to a set of language models.

4.1 LLM-based Evaluation with Med-Judge

Traditional evaluation metrics such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005) penalize legitimate paraphrases and thus fail for open-ended medical QA, where multiple correct answers may differ significantly in wording while still conveying clinically equivalent information. To address this limitation, we adopted Med-Judge, an LLM (Gemini-Flash-2.5; Team et al., 2024) prompted to evaluate model answers against expert references using a predefined rubric. Med-Judge is based on the criteria of (Hosseini et al., 2024), comprising Correctness (Correct, Partially correct, Incorrect, Contradictory), Coverage (equal, model_subset, expert_subset, overlap_none), and Critical_Impact (Negligible, Moderate, Significant, Critical); further details are given in A.1. Our full prompt for Med-Judge is shown in Figures 11 and 12 in Appendix A.
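A sketch of the rubric-driven grading loop follows: the three axes and their labels match the criteria above, while the JSON output convention and the `judge` wrapper around Gemini-Flash-2.5 are our assumptions (the full prompt is in Figures 11 and 12).

```python
import json

RUBRIC = {
    "correctness": ["Correct", "Partially correct", "Incorrect", "Contradictory"],
    "coverage": ["equal", "model_subset", "expert_subset", "overlap_none"],
    "critical_impact": ["Negligible", "Moderate", "Significant", "Critical"],
}

def med_judge(judge, question, expert_answer, model_answer):
    """Grade one model answer against the expert reference.

    `judge` is an assumed callable wrapping the Gemini-Flash-2.5 API and
    returning the raw text of the model's reply.
    """
    prompt = (
        "You are a clinical evaluator. Compare the model answer with the "
        "expert answer to the patient question and rate it on three axes.\n"
        f"Question: {question}\nExpert answer: {expert_answer}\n"
        f"Model answer: {model_answer}\n"
        f"Allowed labels per axis: {json.dumps(RUBRIC)}\n"
        "Respond with a JSON object with keys correctness, coverage, and "
        "critical_impact."
    )
    verdict = json.loads(judge(prompt))
    # Sanity-check that the judge stayed within the rubric's label sets.
    assert all(verdict[axis] in labels for axis, labels in RUBRIC.items())
    return verdict
```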
4.2 Baseline Model Benchmarking

To establish robust performance benchmarks on the PerMedCQA dataset, we conducted zero-shot evaluations as an initial assessment across a diverse suite of both proprietary and open-source language models. In total, we tested 16 models spanning a range of scales and training paradigms, primarily multilingual general-purpose models, along with a single biomedical-tuned variant. All evaluations were conducted on the fixed PerMedCQA test set, allowing for consistent comparisons across methods. The full list of evaluated models is summarized in Table 2.

Model Name                                           Affiliation
GPT-4.1, 4.1-mini, 4.1-nano (Achiam et al., 2023)    OpenAI
Claude 3.5 Haiku (Claude3.5)                         Anthropic
Claude 3.7 Sonnet (Claude3.7)                        Anthropic
Mistral-Saba (mistral-saba)                          Mistral
Command A 111B (Cohere et al., 2025)                 Cohere
BioMistral-7B (Labrak et al., 2024b)                 Mistral
DeepSeek-V3-670B (Liu et al., 2024)                  DeepSeek
LLaMA 4 Scout-109B (LLaMA4Scout)                     Meta
LLaMA 3.3 70B                                        Meta
LLaMA 3.1 8B (Grattafiori et al., 2024)              Meta
Gemma3 4B, 12B, 27B (Team, 2025)                     DeepMind
qwen3-14b (Yang et al., 2025)                        Qwen

Table 2: The list of LLMs in our experiments.

All baseline experiments were evaluated under a consistent zero-shot setting using the same prompt. This prompt instructed the model to adopt the persona of a professional medical doctor answering in fluent Persian. The instructions emphasized direct, precise, and actionable guidance, discouraging excessive elaboration or default referrals unless medically necessary. The prompt structure and a representative example are shown in Figure 8 in Appendix A.

4.3 Prompt-based Enhancement Methods

In addition to the baseline zero-shot evaluations, inference-time, non-parametric strategies were explored with the aim of improving model performance without gradient updates or fine-tuning. These techniques were designed to enhance answer quality by modifying the input context or interaction style, while keeping the model parameters fixed. The three main techniques are described in the following paragraphs, and a sketch of the pivot-translation strategy follows after this subsection.

Pivot Translation for LLM Processing (Tanaka et al., 2024). To leverage the strength of LLMs in English, we translated the Persian input question into English, requested an English-language response from the model, and then back-translated the answer into Persian. GPT-4.1 was employed for both directions; the prompt is shown in Figure 10 in Appendix A. This method introduces latency but often enhances fluency, completeness, and structured reasoning, potentially compensating for model weaknesses in low-resource language handling.

Role-based Prompting (Grabb, 2023). Each question was prepended with a system-level role prompt tailored to its ICD-11 category. For example, if the ICD-11 tag was related to mental disorders (tag 6), the model received the instruction: "You are an experienced 'Psychiatrist' providing reliable...". This approach aimed to inject domain-specific priors and guide the model toward more specialized, context-aware answers. The prompt is shown in Figure 9 in Appendix A.

Few-shot Prompting (Maharjan et al., 2024). To test few-shot prompting, each model was shown five randomly selected QA pairs from the PerMedCQA training set before answering the target question. The exemplars were selected without regard to topic similarity to preserve generalizability. This strategy aimed to improve format consistency, response completeness, and adherence to clinical style by offering an implicit demonstration of the expected output structure.

These techniques were applied to the best-performing model from the baseline evaluation, selected based on Med-Judge scores. Their effects are discussed in Section 5.
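The promised pivot-translation sketch chains three model calls; `gpt41` stands in for the GPT-4.1 translation calls and `answer_model` for the medical QA model under test. The translation prompt wording is illustrative (the real one is in Figure 10).

```python
def pivot_translate_answer(gpt41, answer_model, persian_question):
    """Persian -> English -> answer in English -> back to Persian."""
    english_question = gpt41(
        "Translate the following Persian medical question into English, "
        f"preserving all clinical details:\n{persian_question}"
    )
    english_answer = answer_model(english_question)  # model answers in English
    return gpt41(
        "Translate the following English medical answer into fluent Persian "
        f"for a patient:\n{english_answer}"
    )
```

The extra calls add latency and cost, which is the trade-off noted above for stronger English-language reasoning.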
4.4 Supervised Fine-Tuning

To assess the learnability and utility of the PerMedCQA dataset, supervised fine-tuning (SFT) experiments were conducted on three language models: Gemma 4B, LLaMA 3.1 8B, and BioMistral 7B. These models were selected for their accessibility, architectural diversity, and suitability for efficient instruction tuning in resource-constrained environments. Training was conducted on the full PerMedCQA training set, comprising 64,279 QA pairs, for 1 epoch. Despite lacking native support for Persian, BioMistral was included due to its specialization in the biomedical and clinical domains. We employed LoRA-based (Low-Rank Adaptation) parameter-efficient fine-tuning (Hu et al., 2022), using the LLaMA-Factory framework (Zheng et al., 2024). The LoRA configuration included a rank of 8 and α = 16, with a learning rate of 2×10⁻⁵ and a context length of 2048 tokens.

Rather than aiming for state-of-the-art performance, these fine-tuning experiments were primarily designed to evaluate the effectiveness of PerMedCQA as a training resource, and to explore how much medical QA capability could be learned by modestly sized models through supervised instruction tuning.
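The reported configuration maps onto a standard LoRA setup. The paper uses the LLaMA-Factory framework; the Hugging Face PEFT sketch below is only an illustrative equivalent, and the model checkpoint name and batch size are placeholders not reported in the paper.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Hyperparameters reported above: rank 8, alpha 16, lr 2e-5, one epoch,
# 2048-token context. Everything else here is an assumed placeholder.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct"  # one of the three SFT models
)
model = get_peft_model(
    model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
)  # wraps the frozen base weights with trainable low-rank adapters

training_args = TrainingArguments(
    output_dir="permedcqa-sft",
    learning_rate=2e-5,
    num_train_epochs=1,
    per_device_train_batch_size=4,  # assumption; not reported in the paper
)
# Training would pack PerMedCQA QA pairs into 2048-token sequences and run
# a standard supervised fine-tuning loop (e.g., TRL's SFTTrainer).
```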
Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.

Asma Ben Abacha, Chaitanya Shivade, and Dina Demner-Fushman. 2019. Overview of the MEDIQA 2019 shared task on textual inference, question entailment and question answering. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 370–379, Florence, Italy. Association for Computational Linguistics.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. arXiv preprint arXiv:2310.06825.

Yirong Chen, Zhenyu Wang, Xiaofen Xing, Zhipei Xu, Kai Fang, Junhong Wang, Sihang Li, Jieling Wu, Qi Liu, Xiangmin Xu, and 1 others. 2023. BianQue: Balancing the questioning and suggestion ability of health LLMs with multi-turn health conversations polished by ChatGPT. arXiv preprint arXiv:2310.15896.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, and 1 others. 2023. PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–113.

Claude3.5. 2025. https://www.anthropic.com/news/claude-3-family.

Claude3.7. 2025. https://api.semanticscholar.org/CorpusID:276612236.

Team Cohere, Arash Ahmadian, Marwan Ahmed, Jay Alammar, Yazeed Alnumay, Sophia Althammer, Arkady Arkhangorodsky, Viraat Aryabumi, Dennis Aumiller, Raphaël Avalos, and 1 others. 2025. Command A: An enterprise-ready large language model. arXiv preprint arXiv:2504.00698.

Mouath Abu Daoud, Chaimae Abouzahir, Leen Kharouf, Walid Al-Eisawi, Nizar Habash, and Farah E Shamout. 2025. MedArabiQ: Benchmarking large language models on Arabic medical tasks. arXiv preprint arXiv:2505.03427.

DrYab. 2025. https://doctor-yab.ir/faq/.

Iker García-Ferrero, Rodrigo Agerri, Aitziber Atutxa, Elena Cabrio, Iker de la Iglesia, Alberto Lavelli, Bernardo Magnini, Benjamin Molinet, Johana Ramirez-Romero, German Rigau, and 1 others. 2024. Medical mT5: An open-source multilingual text-to-text LLM for the medical domain. In LREC-COLING 2024: Joint International Conference on Computational Linguistics, Language Resources and Evaluation.

Gemini-Flash-2.5. 2025. https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash.

GetZoop. 2025. https://getzoop.com/doctors/online.

Declan Grabb. 2023. The impact of prompt engineering in large language model performance: a psychiatric example. Journal of Medical Artificial Intelligence, 6.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, and 1 others. 2024. The Llama 3 herd of models.
arXiv preprint arXiv:2407.21783.

Kai He, Rui Mao, Qika Lin, Yucheng Ruan, Xiang Lan, Mengling Feng, and Erik Cambria. 2025. A survey of large language models for healthcare: from data, technology,
and applications to accountability and ethics. Information Fusion, page 102963.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.

HiSalamat. 2025. https://www.hisalamat.com.

Pedram Hosseini, Jessica M Sin, Bing Ren, Bryceton G Thomas, Elnaz Nouri, Ali Farahanchi, and Saeed Hassanpour. 2024. A benchmark for long-form medical question answering. arXiv preprint arXiv:2411.09834.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, and 1 others. 2022. LoRA: Low-rank adaptation of large language models. ICLR, 1(2):3.

Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2021. What disease does this patient have? A large-scale open domain question answering dataset from medical exams. Applied Sciences, 11(14):6421.

Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. PubMedQA: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics.

Brigitte Khoury, Cary Kogan, and Sariah Daouk. 2017. International Classification of Diseases 11th Edition (ICD-11), pages 1–6. Springer International Publishing, Cham.

Yunsoo Kim, Jinge Wu, Yusuf Abdulle, and Honghan Wu. 2024. MedExQA: Medical question answering benchmark with multiple explanations. In Proceedings of the 23rd Workshop on Biomedical Natural Language Processing, pages 167–181, Bangkok, Thailand. Association for Computational Linguistics.

Yanis Labrak, Adrien Bazoge, Emmanuel Morin, Pierre-Antoine Gourraud, Mickael Rouvier, and Richard Dufour. 2024a. BioMistral: A collection of open-source pretrained large language models for medical domains. In Findings of the Association for Computational Linguistics: ACL 2024, pages 5848–5864, Bangkok, Thailand. Association for Computational Linguistics.

Yanis Labrak, Adrien Bazoge, Emmanuel Morin, Pierre-Antoine Gourraud, Mickael Rouvier, and Richard Dufour. 2024b. BioMistral: A collection of open-source pretrained large language models for medical domains. arXiv preprint arXiv:2402.10373.

Jianquan Li, Xidong Wang, Xiangbo Wu, Zhiyi Zhang, Xiaolong Xu, Jie Fu, Prayag Tiwari, Xiang Wan, and Benyou Wang. 2023a. Huatuo-26M, a large-scale Chinese medical QA dataset. arXiv preprint arXiv:2305.01526.

Yunxiang Li, Zihan Li, Kai Zhang, Ruilong Dan, Steve Jiang, and You Zhang. 2023b. ChatDoctor: A medical chat model fine-tuned on a large language model Meta-AI (LLaMA) using medical domain knowledge. Cureus, 15(6).

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.

Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, and 1 others. 2024. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437.

LLaMA4Scout. 2025. https://ai.meta.com/blog/llama-4-multimodal-intelligence/.

Jenish Maharjan, Anurag Garikipati, Navan Preet Singh, Leo Cyrus, Mayank Sharma, Madalina Ciobanu, Gina Barnes, Rahul Thapa, Qingqing Mao, and Ritankar Das. 2024.
OpenMedLM: prompt engineering can out-perform fine-tuning in medical question-answering with open-source large language models. Scientific Reports, 14(1):14156.

Itay Manes, Naama
Ronn, David Cohen, Ran Ilan Ber, Zehavi Horowitz-Kugler, and Gabriel Stanovsky. 2024. K-QA: A real-world medical Q&A benchmark. In Proceedings of the 23rd Workshop on Biomedical Natural Language Processing, pages 277–294.

Mavara-e-Teb. 2025. https://mavarateb.com.

Xiangbin Meng, Xiangyu Yan, Kuo Zhang, Da Liu, Xiaojuan Cui, Yaodong Yang, Muhan Zhang, Chunxia Cao, Jingjia Wang, Xuliang Wang, and 1 others. 2024. The application of large language models in medicine: A scoping review. iScience, 27(5).

mistral-saba. 2025. https://mistral.ai/news/mistral-saba.

Vincent Nguyen, Sarvnaz Karimi, Maciej Rybinski, and Zhenchang Xing. 2023. MedRedQA for medical consumer question answering: Dataset, tasks, and neural baselines. In Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 629–648, Nusa Dua, Bali. Association for Computational Linguistics.

Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, and Eric Horvitz. 2023. Capabilities of GPT-4 on medical challenge problems. arXiv preprint arXiv:2303.13375.

Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. 2022a. MedMCQA: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Proceedings of the Conference on Health, Inference, and Learning, volume 174 of Proceedings of Machine Learning Research, pages 248–260. PMLR.

Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. 2022b. MedMCQA: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on Health, Inference, and Learning, pages 248–260. PMLR.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Sara Pieri, Sahal Shaji Mullappilly, Fahad Shahbaz Khan, Rao Muhammad Anwer, Salman Khan, Timothy Baldwin, and Hisham Cholakkal. 2024. BiMediX: Bilingual medical mixture of experts LLM. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 16984–17002, Miami, Florida, USA. Association for Computational Linguistics.

PII. 2025. https://www.dol.gov/general/ppii.

Pengcheng Qiu, Chaoyi Wu, Xiaoman Zhang, Weixiong Lin, Haicheng Wang, Ya Zhang, Yanfeng Wang, and Weidi Xie. 2024. Towards building multilingual language model for medicine. Nature Communications, 15(1):8384.

Khaled Saab, Tao Tu, Wei-Hung Weng, Ryutaro Tanno, David Stutz, Ellery Wulczyn, Fan Zhang, Tim Strother, Chunjong Park, Elahe Vedadi, and 1 others. 2024. Capabilities of Gemini models in medicine. arXiv preprint arXiv:2404.18416.

Xiaoming Shi, Zeming Liu, Li Du, Yuxuan Wang, Hongru Wang, Yuhang Guo, Tong Ruan, Jie Xu, Xiaofan Zhang, and Shaoting Zhang. 2024. Medical dialogue system: A survey of categories, methods, evaluation and challenges. Findings of the Association for Computational Linguistics ACL 2024, pages 2840–2861.

Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, and 1 others. 2023a. Large language models encode clinical knowledge. Nature, 620(7972):172–180.
Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl,
and 1 others. 2023b. Publisher correction: Large language models encode clinical knowledge. Nature, 620(7973):19–19.

Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres, Ellery Wulczyn, Mohamed Amin, Le Hou, Kevin Clark, Stephen R Pfohl, Heather Cole-Lewis, and 1 others. 2025. Toward expert-level medical question answering with large language models. Nature Medicine, pages 1–8.

Ekaterina Sviridova, Anar Yeginbergen, Ainara Estarrona, Elena Cabrio, Serena Villata, and Rodrigo Agerri. 2024. CasiMedicos-Arg: A medical question answering dataset annotated with explanatory argumentative structures. arXiv preprint arXiv:2410.05235.

Yudai Tanaka, Takuto Nakata, Ko Aiga, Takahide Etani, Ryota Muramatsu, Shun Katagiri, Hiroyuki Kawai, Fumiya Higashino, Masahiro Enomoto, Masao Noda, and 1 others. 2024. Performance of generative pre-trained transformer on the national medical licensing examination in Japan. PLOS Digital Health, 3(1):e0000433.

Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, and 1 others. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530.

Gemma Team. 2025. Gemma 3.

Yuanhe Tian, Ruyi Gan, Yan Song, Jiaxing Zhang, and Yongdong Zhang. 2024. ChiMed-GPT: A Chinese medical large language model with full training regime and better alignment to human preferences. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7156–7173, Bangkok, Thailand. Association for Computational Linguistics.

Yuanhe Tian, Weicheng Ma, Fei Xia, and Yan Song. 2019. ChiMed: A Chinese medical corpus for question answering. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 250–260, Florence, Italy. Association for Computational Linguistics.

Augustin Toma, Patrick R Lawler, Jimmy Ba, Rahul G Krishnan, Barry B Rubin, and Bo Wang. 2023. Clinical Camel: An open expert-level medical language model with dialogue-based knowledge encoding. arXiv preprint arXiv:2305.12031.

Ran Tong, Ting Xu, Xinxin Ju, and Lanruo Wang. 2025. Progress in medical AI: Reviewing large language models and multimodal systems for diagnosis. AI Med, 1(1):165–186.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, and 1 others. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, and 1 others. 2015. An overview of the BioASQ large-scale biomedical semantic indexing and question answering competition. BMC Bioinformatics, 16:1–28.

Guangyu Wang, Guoxing Yang, Zongxin Du, Longjun Fan, and Xiaohu Li. 2023. ClinicalGPT: large language models finetuned with diverse medical data and comprehensive evaluation. arXiv preprint arXiv:2306.09968.

Jinqiang Wang, Huansheng Ning, Yi Peng, Qikai Wei, Daniel Tesfai, Wenwei Mao, Tao Zhu, and Runhe Huang. 2024. A survey on large language models from general purpose to medical applications: Datasets, methodologies, and evaluations. arXiv preprint arXiv:2406.10303.

Anuradha Welivita and Pearl Pu. 2023. A survey of consumer health question answering systems. AI Magazine, 44(4):482–507.
Chaoyi Wu, Weixiong Lin, Xiaoman Zhang, Ya Zhang, Weidi Xie, and Yanfeng Wang. 2024. PMC-LLaMA:
toward building open-source language models for medicine. Journal of the American Medical Informatics Association, 31(9):1833–1843.

Qianqian Xie, Qingyu Chen, Aokun Chen, Cheng Peng, Yan Hu, Fongci Lin, Xueqing Peng, Jimin Huang, Jeffrey Zhang, Vipina Keloth, and 1 others. 2024. Me-LLaMA: Foundation large language models for medical applications. Research Square, pages rs–3.

Guangzhi Xiong, Qiao Jin, Zhiyong Lu, and Aidong Zhang. 2024. Benchmarking retrieval-augmented generation for medicine. In Findings of the Association for Computational Linguistics ACL 2024, pages 6233–6251.

Honglin Xiong, Sheng Wang, Yitao Zhu, Zihao Zhao, Yuxiao Liu, Linlin Huang, Qian Wang, and Dinggang Shen. 2023. DoctorGLM: Fine-tuning your Chinese doctor is not a herculean task. arXiv preprint arXiv:2304.01097.

Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley. 2023. Baize: An open-source chat model with parameter-efficient tuning on self-chat data. arXiv preprint arXiv:2304.01196.

An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, and 41 others. 2025. Qwen3 technical report. Preprint, arXiv:2505.09388.

Songhua Yang, Hanjie Zhao, Senbin Zhu, Guangyu Zhou, Hongfei Xu, Yuxiang Jia, and Hongying Zan. 2024. Zhongjing: Enhancing the Chinese medical capabilities of large language models through expert feedback and real-world multi-turn dialogue. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 19368–19376.

Guangtao Zeng, Wenmian Yang, Zeqian Ju, Yue Yang, Sicheng Wang, Ruisi Zhang, Meng Zhou, Jiaqi Zeng, Xiangyu Dong, Ruoyu Zhang, Hongchao Fang, Penghui Zhu, Shu Chen, and Pengtao Xie. 2020. MedDialog: Large-scale medical dialogue datasets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9241–9250, Online. Association for Computational Linguistics.

Hongbo Zhang, Junying Chen, Feng Jiang, Fei Yu, Zhihong Chen, Guiming Chen, Jianquan Li, Xiangbo Wu, Zhang Zhiyi, Qingying Xiao, and 1 others. 2023. HuatuoGPT, towards taming language model to be a doctor. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10859–10885.

Lulu Zhao, Weihao Zeng, Xiaofeng Shi, and Hua Zhou. 2025. CareBot: A pioneering full-process open-source medical language model. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 26039–26047.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Preprint, arXiv:2306.05685.

Yanxin Zheng, Wensheng Gan, Zefeng Chen, Zhenlian Qi, Qian Liang, and Philip S Yu. 2025. Large language models for medicine: a survey. International Journal of Machine Learning and Cybernetics, 16(2):1015–1040.

Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. 2024. LlamaFactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand. Association for Computational Linguistics.

A Appendix

A.1 MedJudge

For every
test instance, the MedJudge LLM receives the user's question, the expert-provided "gold" answer, and the model-generated answer. The LLM is strictly instructed to evaluate based solely on the expert reference, without drawing from external sources or its own medical knowledge. The output is a structured JSON object with the following fields:

• Brief_analysis: A concise comparison highlighting key similarities or differences between expert and model answers.
• Key_missing_facts: Important medical facts present in the expert answer but missing from the model answer (in Persian).
• Key_extra_facts: Additional details present in the model answer but absent from the expert answer (in Persian).
• Correctness: Assessment of factual and clinical consistency: Correct, Partially_correct, Incorrect, or Contradictory.
• Coverage: Degree of factual overlap: Equal, Model_subset, Expert_subset, or Overlap_none.
• Clinical_impact: Estimated clinical significance of any discrepancies: Negligible, Moderate, Significant, or Critical.
• Judge_confidence: The LLM's self-reported confidence (High, Medium, Low).
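One way to make this output contract explicit at parse time is a small Pydantic model; the sketch below mirrors the fields listed above, but the class itself (naming, validation behavior) is an illustrative assumption rather than the pipeline's released code.

```python
import json
from enum import Enum
from pydantic import BaseModel

class Correctness(str, Enum):
    correct = "correct"
    partially_correct = "partially_correct"
    incorrect = "incorrect"
    contradictory = "contradictory"

class Coverage(str, Enum):
    equal = "equal"
    model_subset = "model_subset"
    expert_subset = "expert_subset"
    overlap_none = "overlap_none"

class MedJudgeVerdict(BaseModel):
    brief_analysis: str
    key_missing_facts: list[str]   # Persian keywords found only in the expert answer
    key_extra_facts: list[str]     # Persian keywords found only in the model answer
    correctness: Correctness
    coverage: Coverage
    clinical_impact: str           # negligible | moderate | significant | critical
    judge_confidence: str          # high | medium | low

def parse_verdict(raw_llm_output: str) -> MedJudgeVerdict:
    # Raises a validation error when the judge returns malformed JSON,
    # so bad generations can be retried rather than silently scored.
    return MedJudgeVerdict(**json.loads(raw_llm_output))
```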
A.2 Label Definitions, Scoring, and Reliability

Label Definitions. Each output field is defined as follows:

• Correctness:
1. Correct: Clinically equivalent; no meaningful differences.
2. Partially_correct: Minor deviations, no significant clinical impact.
3. Incorrect: Substantial differences that affect accuracy or completeness.
4. Contradictory: Model advice directly conflicts with the expert reference.

• Coverage:
1. Equal: Both answers contain the same key facts.
2. Model_subset: Model omits critical facts present in the expert answer.
3. Expert_subset: Model introduces relevant facts not found in the expert answer.
4. Overlap_none: No substantial factual overlap.

• Clinical_impact:
1. Negligible: No effect on care or understanding.
2. Moderate: Slight effect on treatment or comprehension.
3. Significant: Likely to affect recommendations or outcomes.
4. Critical: May result in unsafe or harmful guidance.

Reliability. To assess the reliability of MedJudge, we compared its labels against blinded ratings from board-certified physicians on a 100-item subset. Agreement on the primary dimension, correctness, was 75% (collapsed to "acceptable" vs. "problematic"), with quadratic Cohen's κ = 0.42 (95% CI 0.19–0.58) and F1 = 0.82. Ordinal rank correlation was significant (τ = 0.41, p < 0.001), indicating that higher human scores were mirrored by the LLM. For clinical impact, MedJudge achieved 78% accuracy and detected one-third of truly high-impact discrepancies (κ = 0.07; prevalence-adjusted).
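Agreement statistics of this kind can be computed with standard libraries, as in the sketch below; the ordinal encoding of the four correctness labels and the use of Kendall's tau for the rank correlation are our assumptions about the analysis, and the arrays shown are toy data.

```python
from sklearn.metrics import cohen_kappa_score, f1_score
from scipy.stats import kendalltau

# Assumed ordinal encoding: 3=correct, 2=partially_correct, 1=incorrect, 0=contradictory.
human = [3, 2, 3, 0, 1, 3, 2, 2]   # toy physician ratings
judge = [3, 2, 2, 0, 1, 3, 3, 2]   # toy MedJudge ratings

# Quadratic-weighted Cohen's kappa on the full ordinal scale.
kappa = cohen_kappa_score(human, judge, weights="quadratic")

# Binary collapse: "acceptable" (correct / partially_correct) vs. "problematic".
acc_h = [int(y >= 2) for y in human]
acc_j = [int(y >= 2) for y in judge]
f1 = f1_score(acc_h, acc_j)

# Ordinal rank correlation between the two raters.
tau, p_value = kendalltau(human, judge)

print(f"quadratic kappa={kappa:.2f}, F1={f1:.2f}, tau={tau:.2f} (p={p_value:.3g})")
```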
A.3 Prompt Engineering

Prompt engineering is considered crucial for the evaluation of LLMs. In this section, the prompts used at various stages of the work are presented. Figures 3 and 4 illustrate the task instructions provided for ICD-11 classification and PII tagging during the PerMedCQA stage. As shown in Figure 7, question type tagging across 25 categories was performed using a dedicated prompt. In addition, a wide range of prompts was employed in extensive experiments, as illustrated in Figures 8, 9, and 10. Finally, the structured output schema used in the MedJudge evaluation pipeline is shown in Figure 13, enabling reliable parsing and analysis of model responses; the prompt design incorporates a chain-of-thought strategy to guide the LLM through consistent reasoning steps.

Figure 3: Task instructions for ICD-11 classification and PII tagging.

ICD-11 and PII Tagging System Prompt

Task Description
You are a medical expert tasked with classifying and analyzing patient–expert dialogue. Your task has two main parts:

1) ICD-11 Classification
Classify the content based on ICD-11 categories using the catalogue provided in the next page. Return only the integer (1–28) that best corresponds to the core subject matter.

2) PII Detection
Check if any personal information of the patient or expert is exposed in the messages. Personal information includes examples such as real name, address, phone number, email, etc. If such information exists, set "identity" to true; otherwise, set it to false.

Figure 4: Standardized ICD-11 classification codes used for QA annotation.

ICD-11 Category Catalogue (28-class taxonomy)
1. (1A00–1H0Z) Certain infectious or parasitic diseases
2. (2A00–2F9Z) Neoplasms
3. (3A00–3C0Z) Diseases of the blood or blood-forming organs
4. (4A00–4B4Z) Diseases of the immune system
5. (5A00–5D46) Endocrine, nutritional or metabolic diseases
6. (6A00–6E8Z) Mental, behavioural or neurodevelopmental disorders
7. (7A00–7B2Z) Sleep-wake disorders
8. (8A00–8E7Z) Diseases of the nervous system
9. (9A00–9E1Z) Diseases of the visual system
10. (AA00–AC0Z) Diseases of the ear or mastoid process
11. (BA00–BE2Z) Diseases of the circulatory system
12. (CA00–CB7Z) Diseases of the respiratory system
13. (DA00–DE2Z) Diseases of the digestive system
14. (EA00–EM0Z) Diseases of the skin
15. (FA00–FC0Z) Diseases of the musculoskeletal system or connective tissue
16. (GA00–GC8Z) Diseases of the genitourinary system
17. (HA00–HA8Z) Conditions related to sexual health
18. (JA00–JB6Z) Pregnancy, childbirth or the puerperium
19. (KA00–KD5Z) Certain conditions originating in the perinatal period
20. (LA00–LD9Z) Developmental anomalies
21. (MA00–MH2Y) Symptoms, signs or clinical findings, not elsewhere classified
22. (NA00–NF2Z) Injury, poisoning or other consequences of external causes
23. (PA00–PL2Z) External causes of morbidity or mortality
24. (QA00–QF4Z) Factors influencing health status or contact with health services
25. (RA00–RA26) Codes for special purposes
26. (SA00–SJ3Z) Traditional Medicine Conditions – Module I
27. (VA00–VC50) Functioning assessment
28. (XA0060–XY9U) Extension Codes
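A minimal sketch of how this annotation prompt might be applied programmatically follows; the chat helper and the exact JSON keys returned by the annotator ("icd11_category", "identity") are assumptions based on the prompt text above.

```python
import json

ICD11_PII_SYSTEM_PROMPT = """You are a medical expert tasked with classifying and
analyzing patient-expert dialogue. Return a JSON object with two keys:
"icd11_category": the integer (1-28) from the ICD-11 catalogue that best
corresponds to the core subject matter, and "identity": true if any personal
information (real name, address, phone number, email, ...) is exposed."""

def chat(system: str, user: str) -> str:
    """Hypothetical LLM client; replace with the annotator model's API."""
    raise NotImplementedError

def tag_dialogue(question: str, answer: str) -> tuple[int, bool]:
    raw = chat(ICD11_PII_SYSTEM_PROMPT, f"Question: {question}\nAnswer: {answer}")
    parsed = json.loads(raw)
    category = int(parsed["icd11_category"])
    assert 1 <= category <= 28, "annotator must pick one of the 28 classes"
    return category, bool(parsed["identity"])
```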
Table 3: Question Type Categories.

Question Type | Definition | Example
Information | Asks for general identification or classification of a drug. | What type of drug is amphetamine?
Dose | Queries recommended or safe dosage. | What is a daily amount of prednisolone eye drops to take?
Usage | Seeks instructions on how to take/administer a drug. | How to self inject enoxaparin sodium?
Side Effects | Asks about adverse reactions. | Does benazepril aggravate hepatitis?
Indication | Asks why/for which condition the drug is prescribed. | Why is pyridostigmine prescribed?
Interaction | Concerns compatibility with another drug/substance. | Can I drink cataflam when I drink medrol?
Action | Mechanism of action or physiological effect. | How does xarelto affect homeostasis?
Appearance | Asks about physical look (colour, shape, imprint). | What color is 30 mg prednisone?
Usage/Time | Best time of day to take the medicine. | When is the best time to take lotensin?
Stopping/Tapering | How to discontinue or taper. | How to come off citalopram?
Ingredient | Active ingredient(s) contained in a product. | What opioid is in the bupropion patch?
Action/Time | Onset/duration of drug effect. | How soon does losartan affect blood pressure?
Storage and Disposal | Proper storage temperature or disposal method. | In how much temp should BCG vaccine be stored?
Comparison | Compares two therapies/drugs. | Why is losartan prescribed rather than a calcium channel blocker?
Contraindication | Whether the drug is safe given allergy/condition. | If I am allergic to sulfa can I take glipizide?
Overdose | Consequences of taking too much. | What happens if your child ate a Tylenol tablet?
Alternatives | Asks for substitute medications. | What medicine besides statins lowers cholesterol?
Usage/Duration | How long treatment should continue. | How long should I take dutasteride?
Time (Other) | Other time-related effectiveness/protection questions. | How long are you protected after the Hep B vaccine?
Brand Names | Asks for commercial brand names. | What is brand name of acetaminophen?
Combination | How to combine two treatments in one regimen. | How to combine dapagliflozin with metformin?
Pronunciation | How to pronounce a drug name. | How do you pronounce Humira?
Manufacturer | Asks who makes or markets the drug. | Who makes nitrofurantoin?
Availability | Whether the drug is still on the market/shortages. | Has lisinopril been taken off the market?
Long-term-Consequences | Long-term effects of prolonged use. | What are the long-term consequences of using nicotine?

Figure 5: Distribution of ICD-11 Categories in PerMedCQA

Figure 6: Distribution of Question Type in PerMedCQA

Figure 7: Prompt description for the Question Type Tagging Task.

Question Type Tagging System Prompt

Task Description
You are a medical NLP annotator. Label every consumer medical question with:
• question_type_explanation – a concise and brief (1 sentence) justification of why the chosen question_type is appropriate, citing question wording and any domain knowledge you considered.
• question_type – exactly one of the 25 categories listed below in the table (use the label verbatim).

Figure 8: Baseline system prompt used for all models in the default setting.

Baseline System Prompt (Main Prompt)
You are an experienced professional medical doctor providing reliable medical advice in fluent Persian. Users are reaching out through an online medical forum seeking clear, concise, and accurate answers to their health-related questions. Respond directly, briefly, and precisely to their inquiries. Always provide practical and actionable medical guidance. Refer users to consult a doctor only when it is strictly necessary and impossible to safely or accurately address their concerns online. Prioritize addressing their concerns clearly and thoroughly without unnecessary elaboration or hesitation.

Figure 9: Role-based prompting strategy using ICD-11-conditioned specialist titles (e.g., Psychiatrist, Gynecologist).

Role-Based Prompt (ICD-11 Conditioned)
You are an experienced {role} providing reliable medical advice in fluent Persian. Users are reaching out through an online medical forum seeking clear, concise, and accurate answers to their health-related questions. Respond directly, briefly, and precisely to their inquiries. Always provide practical and actionable medical guidance. Refer users to consult a doctor only when it is strictly necessary and impossible to safely or accurately address their concerns online. Prioritize addressing their concerns clearly and thoroughly without unnecessary elaboration or hesitation.

Figure 10: Translation pipeline (separate LLM inference) for leveraging English LLMs with Persian input and output.

Translation-Based Prompting (Three-Step Pipeline)

Step 1: Translate Question (Persian → English)
You are a professional medical translator. Translate the following Persian medical question into clear, neutral English. Only output the translation.
Step 2: Answer in English
You are an experienced professional medical doctor providing reliable advice
in English. Answer directly, briefly and precisely. Provide practical guidance and advise an in-person visit only when absolutely necessary.

Step 3: Translate Answer (English → Persian)
You are a professional medical translator. Translate the following English answer into fluent Persian suitable for the patient. Only output the translation.

MedJudge System Prompt

You are "Med-Judge", an objective medical QA grader responsible for evaluating how closely a Model-generated answer aligns with an Expert-provided answer (the gold standard).

Strict Guidelines for Evaluation:
1. Exclusivity: Base your judgment only on the Expert's provided answer, and never rely on your own medical knowledge or external resources.
2. Step-by-step and Explainable Evaluation:
• Provide a brief and concise comparison analysis (brief_analysis), clearly explaining your evaluation.
• Clearly identify the key medical facts, statements, or recommendations in both Expert and Model answers.
• List explicitly any critical facts appearing only in the Expert's answer (key_missing_facts) or only in the Model's answer (key_extra_facts). These should be keywords in Persian (Farsi).
3. Labeling Correctness: Categorize the answer strictly according to these definitions:
• correct – Answers have essentially identical meaning with no clinically meaningful differences.
• partially_correct – Answers differ slightly without critical differences affecting clinical understanding or advice.
• incorrect – Answers substantially differ, significantly impacting clinical meaning or completeness.
• contradictory – Model explicitly contradicts Expert's answer or provides opposite medical advice.
4. Coverage Analysis: Evaluate factual coverage using:
• equal – Both answers contain exactly the same key facts.
• model_subset – Model answer misses one or more critical facts present in the Expert answer.
• expert_subset – Model answer includes additional key facts not found in the Expert's answer.
• overlap_none – No meaningful shared key facts between answers.
5. Clinical Impact Estimation: Judge how the differences could clinically affect a patient's safety or care using:
• negligible, moderate, significant, critical
6. Confidence Rating: Express your certainty level:
• high, medium, low

Attention:
• Be strict and literal in comparing the two answers.
• Do not assume or infer correctness beyond the Expert answer.
• Be concise and structured. Avoid vague commentary.

Output Format: Return only a valid JSON object matching the schema.

Figure 11: MedJudge system prompt with structured grading rubric for model evaluation.

MedJudge User Prompt Template
Question: {question}
Expert Answer: {expert_answer}
Model Answer: {model_answer}

Figure 12: Template used for submitting evaluation tasks to MedJudge. Placeholders are replaced at runtime.

Figure 13: MedJudge Structured Output Schema

Figure 14: Human Evaluation

Figure 15: Correctness for baselines and advanced prompting

Figure 16: Coverage for baselines and advanced prompting

Figure 17: Critical impact for baselines and advanced prompting
arXiv:2505.18343v1 [cs.CL] 23 May 2025

Model Editing with Graph-Based External Memory

Yash Kumar Atri, University of Virginia, atri@virginia.edu
Ahmed Alaa, UC Berkeley & UCSF, amalaa@berkeley.edu
Tom Hartvigsen, University of Virginia, hartvigsen@virginia.edu

Abstract

Large language models (LLMs) have revolutionized natural language processing, yet their practical utility is often limited by persistent issues of hallucinations and outdated parametric knowledge. Although post-training model editing offers a pathway for dynamic updates, existing methods frequently suffer from overfitting and catastrophic forgetting. To tackle these challenges, we propose a novel framework that leverages hyperbolic geometry and graph neural networks for precise and stable model edits. We introduce HYPE (HYperbolic Parameter Editing), which comprises three key components: (i) Hyperbolic Graph Construction, which uses Poincaré embeddings to represent knowledge triples in hyperbolic space, preserving hierarchical relationships and preventing unintended side effects by ensuring that edits to parent concepts do not inadvertently affect child concepts; (ii) Möbius-Transformed Updates, which apply hyperbolic addition to propagate edits while maintaining structural consistency within the hyperbolic manifold, unlike conventional Euclidean updates that distort relational distances; and (iii) Dual Stabilization, which combines gradient masking and periodic GNN parameter resetting to prevent catastrophic forgetting by focusing updates on critical parameters and preserving long-term knowledge. Experiments on CounterFact, CounterFact+, and MQuAKE with GPT-J and GPT2-XL demonstrate that HYPE significantly enhances edit stability, factual accuracy, and multi-hop reasoning.

1 Introduction

Large language models (LLMs) such as GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2023), and LLaMA (Touvron et al., 2023) have revolutionized natural language processing (NLP), enabling unprecedented capabilities in text generation (Li et al., 2024a), reasoning (Chowdhery et al., 2023), and contextual understanding (Zhao et al., 2024). These models underpin a wide range of applications, from conversational agents (Liu et al., 2024) to knowledge-intensive tasks like question answering (Phukan et al., 2024; Shah et al., 2024; Gao et al., 2024) and summarization (Atri et al., 2023a, 2021, 2023b,c; Dey et al., 2020; Atri et al., 2023d). However, their reliance on static, pre-trained parametric knowledge renders them prone to generating factual inaccuracies (Wang et al., 2024), hallucinations (Ji et al., 2023), and outdated information: critical limitations in real-world deployments where accuracy and timeliness are paramount (Lazaridou et al., 2021; Atri et al., 2025). While fine-tuning on updated data can mitigate these issues, the computational and data demands of retraining billion-parameter models (Kaplan et al., 2020) render this approach impractical for real-time knowledge updates. Instead, post-training model editing has emerged as a promising alternative, enabling targeted modifications to a model's parametric knowledge without full retraining (Mitchell et al., 2022a).

Existing model editing methods, such as ROME (Meng et al., 2022a) and MEMIT (Meng et al., 2022b), rely on Euclidean geometry to modify specific facts. While these methods achieve localized edits, they struggle with hierarchical relationships due to Euclidean space's flat structure.
For example, ROME enforces memorization via equality constraints, while MEMIT uses least-squares optimization, both of which fail to preserve relational depth (Yang et al., 2024). This leads to
geometric mismatch, where edits distort semantically related knowledge (Nickel and Kiela, 2017). Additionally, Euclidean updates in overparameterized models like GPT-2 (Radford et al., 2019a) cause update instability, propagating unintended changes through dense connections (Kaplan et al., 2020). Finally, these methods suffer from contextual fragility, lacking mechanisms to maintain global coherence during edits (Zhong et al., 2023).

Hyperbolic geometry, in contrast, naturally encodes hierarchical relationships through its exponential growth property, making it well-suited for modeling linguistic knowledge (Nickel and Kiela, 2017). Unlike Euclidean spaces, where distances are linear, hyperbolic space expands exponentially, allowing child nodes to be placed far from parent nodes while preserving their relational proximity (Ganea et al., 2018). This property enables precise parameter updates that do not propagate unintended side effects. For instance, modifying a parent concept in hyperbolic space affects only its immediate vicinity, preserving the stability of its hierarchical descendants. Furthermore, hyperbolic operations like Möbius addition (Ungar, 2013) ensure updates remain on the manifold, avoiding the geometric distortion caused by Euclidean vector addition.

Motivated by these insights and the inherent advantages of hyperbolic geometry, we propose HYPE (HYperbolic Parameter Editing), a novel framework that leverages hyperbolic geometry (Ganea et al., 2018) for model editing through three core components. First, Hyperbolic Graph Construction projects a knowledge graph into a learnable curvature space using Poincaré embeddings (Nickel and Kiela, 2017), preserving hierarchical relationships, such as parent-child and whole-part associations, via exponential mapping and hyperbolic distances. Second, Möbius-Transformed Updates (Ungar, 2013) adjust model parameters using hyperbolic addition, a geometric operation that preserves the hierarchical structure of the data. Unlike Euclidean vector addition, which can distort relationships in hierarchical data, Möbius addition ensures that edits remain within the hyperbolic manifold. This prevents unintended side effects by maintaining the relative distances between related concepts. Third, Dual Stabilization combines gradient-based sparsification (Frankle and Carbin, 2019), which masks negligible gradient updates, with periodic resetting of graph network parameters to prevent the overwriting of critical pre-existing knowledge during the integration of new edits. By unifying hyperbolic embeddings, Möbius-transformed updates, and robust stabilization techniques, HYPE achieves precise and resilient model edits.

We summarize our contributions as follows:

1. HYPE is the first approach to integrate hyperbolic geometry with graph-based model editing, leveraging Poincaré embeddings to project knowledge graphs into a learnable curvature space that preserves complex hierarchical relationships during model updates.

2. HYPE introduces a novel editing mechanism that applies Möbius transformations for parameter updates, ensuring that edits remain consistent with the hyperbolic manifold and preserving the inherent hierarchical structure, thereby preventing the distortions common with Euclidean vector addition.
3. We validate HYPE across three widely benchmarked datasets, CounterFact (Meng et al., 2022a), CounterFact+ (Yao et al., 2023b), and MQuAKE (Zhong et al., 2023), and two popular LLMs, GPT-J (Wang and Komatsuzaki, 2021) and GPT2-XL (Radford et al., 2019b), demonstrating superior factual accuracy, parameter efficiency, and edit stability compared to existing model editing methods.

Code available at: https://github.com/yashkumaratri/HYPE

2 Related Work

Post-Training Model Editing: Model editing techniques broadly fall into parameter-modifying and parameter-preserving
approaches. Parameter-modifying methods, such as locate-then-edit strategies (Meng et al., 2022a,b; Hartvigsen et al., 2023; Kolbeinsson et al., 2025) and meta-learning frameworks (Mitchell et al., 2022a; Cao et al., 2021), adjust model parameters via hypernetworks or rank-one weight updates. Zhang et al. (2024) enhances specificity using knowledge graphs, but these methods typically treat parameter changes as isolated scalar operations (Ma et al., 2024; Gu et al., 2024), disregarding structured knowledge representations in LLMs (Cohen et al., 2023). This often leads to unintended side effects (Yao et al., 2023a; Kolbeinsson et al., 2025) and poor support for compositional updates (Hase and Bansal, 2023).

Parameter-preserving approaches avoid direct weight updates by leveraging external memories or prompts (Mitchell et al., 2022b; Huang et al., 2023; Madaan et al., 2022; Zheng et al., 2023), while continual learning methods (Zeng et al., 2019; Farajtabar et al., 2020) use orthogonal projections to mitigate catastrophic forgetting. However, these strategies require architectural modifications or fail to maintain long-term coherence in factual updates.

[Figure 1 schematic omitted; panels: (a) knowledge triples, e.g., ("The Hobbit", "author", "J.R.R. Tolkien"); (b) hyperbolic graph construction on the Poincaré ball; (c) Möbius-transformed rank-1 updates; (d) dual stabilization; (e) transformer layers.]
Figure 1: The illustration delineates our proposed model, HYPE. We begin by constructing a hyperbolic knowledge graph (b) using Poincaré embeddings to encode hierarchical relationships. When an edit is required, we apply Möbius transformations (c) to update the weights while ensuring curvature-aware consistency. To maintain stability, a dual stabilization strategy (d) removes transient or spurious updates. The edited knowledge is then integrated into the model (e), preserving factual accuracy and structural integrity.

Graph-Based Model Editing: Knowledge graphs provide a structured approach to model editing by aligning factual updates with relational dependencies. Cohen et al. (2023) aligns LLM attention with Wikidata triples, while Cao et al. (2021) edits GNNs through subgraph swaps. Zhang et al. (2024) integrates knowledge graphs into transformer layers but relies on Euclidean embeddings, distorting hierarchical relationships. Similarly, Qin et al. (2023) models higher-order dependencies via hypergraphs but lacks geometric constraints to maintain relational consistency.

Hyperbolic Model Editing: Hyperbolic embeddings have proven effective for encoding hierarchical knowledge (Chami et al., 2019), particularly in NLP (Valentino et al., 2024) and knowledge graph applications (Chami et al., 2020). Chen et al. (2022) introduces hyperbolic attention for LLMs requiring full retraining, while Cohen et al. (2023) links LLMs to graphs using Euclidean projections, which fails to capture hyperbolic curvature.

HYPE is the first framework to unify hyperbolic embeddings and Möbius updates for model editing. Unlike Euclidean methods (e.g., ROME (Meng et al., 2022a), MEMIT (Meng et al., 2022b), GLAME (Zhang et al., 2024)), which struggle with hierarchical edits and stability, HYPE leverages hyperbolic geometry to preserve relational consistency.
Compared to parameter-preserving approaches, it avoids architectural overhead
while maintaining edit precision. Experiments show HYPE achieves +9.12 higher Edit Quality Score (EDS) and +3.59 better efficacy than state-of-the-art methods (Table 1), demonstrating the benefits of curvature-aware updates.

3 Methodology

In this section, we detail the methodology underlying HYPE, our proposed framework for hyperbolic model editing.

3.1 Hyperbolic Graph Construction

We first initialize a knowledge graph using Wikidata (Vrandečić and Krötzsch, 2014) triples, which consist of subject-relation-object tuples {(s, r, o), ...}. These triples encode factual knowledge, where s represents an entity (e.g., "Albert Einstein"), o represents another entity or concept associated with s (e.g., "Theory of Relativity"), and r defines the semantic relationship between them (e.g., "contributed to"). To construct meaningful representations, we first obtain Euclidean embeddings using a graph neural network (GNN), capturing relational and structural dependencies. For any input string x (which may represent an entity or a relation), let v_euc(x) denote its Euclidean embedding. We then project these embeddings into hyperbolic space via the exponential map on the Poincaré ball (Nickel and Kiela, 2017):

$$v_{\mathrm{hyp}}(x) = \exp_0^c\big(v_{\mathrm{euc}}(x)\big) = \tanh\!\big(\sqrt{c}\,\lVert v_{\mathrm{euc}}(x)\rVert\big)\,\frac{v_{\mathrm{euc}}(x)}{\sqrt{c}\,\lVert v_{\mathrm{euc}}(x)\rVert} \quad (1)$$

where c is the curvature parameter (set to 1.0 in our experiments). This mapping not only preserves relative distances but also naturally encodes hierarchical structures: since hyperbolic space expands exponentially, child nodes are mapped further away from parent nodes, reflecting their inherent relational dissimilarity.

For relation embeddings, we extract a set of unique relations {r_1, r_2, ..., r_m} from the triples. Each relation r is first embedded in Euclidean space to yield r_euc(r) and then projected to hyperbolic space as:

$$r_{\mathrm{hyp}}(r) = \exp_0^c\big(r_{\mathrm{euc}}(r)\big). \quad (2)$$

These hyperbolic relation embeddings serve as edge features in our graph. We then construct a directed graph G = (V, E), where each unique entity encountered in the Wikidata triples is assigned a node v ∈ V with an associated feature φ(v) defined as:

$$\phi(v) = \exp_0^c\big(v_{\mathrm{euc}}(v)\big). \quad (3)$$

For every triple (s, r, o), an edge (s, o) ∈ E is created. The edge feature corresponding to this edge is given by the hyperbolic relation embedding:

$$e_{s,o} = \exp_0^c\big(r_{\mathrm{euc}}(r)\big), \quad (4)$$

and a relation-to-index mapping is maintained to assign a type ID to each relation. To stabilize subsequent graph neural network (GNN) computations, self-loops are added to each node, and the in-degree of each node is computed to generate a normalization factor:

$$\mathrm{norm}(v) = \deg(v)^{-1}, \quad (5)$$

which is incorporated into the node features. Furthermore, we apply a persistence filter to retain topologically significant embeddings:

$$\mathrm{persistent\_filter}(x) = \sigma(\lVert x\rVert - \tau), \quad (6)$$

where τ is a persistence threshold (learnable parameter) and σ(·) is the sigmoid function. The filter is applied to the initial relation embeddings:

$$a = \mathrm{persistent\_filter}(r_{\mathrm{hyp}}). \quad (7)$$

This ensures that only structurally important relations contribute to the model update process. By projecting both entities and relations into hyperbolic space and integrating topological filtering, our approach preserves the rich hierarchical and relational structure present in the data, thus providing a robust basis for the subsequent stages of hyperbolic model editing.
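The projection in Eq. (1) and the filter in Eq. (6) are straightforward to implement; the PyTorch sketch below is illustrative only (the function names and the numerical epsilon guard are our additions, not the released HYPE code).

```python
import torch

def expmap0(v: torch.Tensor, c: float = 1.0, eps: float = 1e-9) -> torch.Tensor:
    """Exponential map at the origin of the Poincare ball, Eq. (1)."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)  # guard against division by zero
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def persistent_filter(x: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    """Soft topological filter of Eq. (6): sigma(||x|| - tau), with tau learnable."""
    return torch.sigmoid(x.norm(dim=-1) - tau)

# Toy usage: project Euclidean entity/relation embeddings onto the ball.
v_euc = torch.randn(5, 32)                 # e.g., GNN outputs for 5 entities
v_hyp = expmap0(v_euc, c=1.0)              # now strictly inside the unit ball
a = persistent_filter(expmap0(torch.randn(7, 32)), tau=torch.tensor(0.5))
assert (v_hyp.norm(dim=-1) < 1.0).all()
```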
3.2 Möbius-Transformed Weight Update

Traditional weight updates of the form w′ = w + Δ fail to respect the curvature of hyperbolic space. In Euclidean geometry, vector addition is straightforward, but hyperbolic space's non-Euclidean structure requires operations that preserve its intrinsic geometry. To address this, we employ Möbius addition, which ensures updates remain on the hyperbolic manifold while maintaining the hierarchical relationships encoded in the data. Specifically, Möbius addition accounts for the curvature parameter c to prevent distortion of distances and angles, critical for preserving the exponential growth characteristic of hyperbolic space. This operation guarantees that the updated weight vector w_new adheres to the manifold's constraints, enabling stable and context-aware edits. In a space with curvature c, the Möbius addition of w and Δ is defined as

$$w_{\mathrm{new}} = w \oplus_c \Delta = \frac{\big(1 + 2c\langle w, \Delta\rangle + c\lVert\Delta\rVert^2\big)\,w + \big(1 - c\lVert w\rVert^2\big)\,\Delta}{1 + 2c\langle w, \Delta\rangle + c^2\lVert w\rVert^2\lVert\Delta\rVert^2} \quad (8)$$

where ⟨·,·⟩ denotes the inner product and ‖·‖ is the Euclidean norm. This formulation ensures that the updated weight w_new adheres to the hyperbolic geometry.

The update term Δ is computed using the rank-1 algorithm (Meng et al., 2022a). First, we derive two vectors: a left update vector u ∈ R^m and a right update vector v ∈ R^n. Their outer product forms the base update:

$$\Delta_0 = u \otimes v, \quad \text{with } (u \otimes v)_{ij} = u_i v_j. \quad (9)$$

This base update is scaled by a residual factor γ, resulting in

$$\Delta_1 = \gamma \Delta_0. \quad (10)$$

To ensure that only significant gradients contribute to the update, we compute for each output feature the average gradient magnitude:

$$g_i = \frac{1}{n} \sum_{j=1}^{n} \big| \nabla_{w_{ij}} \mathcal{L}_{\mathrm{edit}} \big|,$$

and define a soft gradient mask via the sigmoid function:

$$m_i = \sigma(g_i - \tau_g), \quad \text{with } \sigma(x) = \frac{1}{1 + e^{-x}}, \quad (11)$$

where τ_g is a predefined gradient persistence threshold. By broadcasting the mask m across the corresponding dimensions, the final update matrix is given by

$$\Delta = \Delta_1 \odot m, \quad (12)$$

where ⊙ denotes element-wise multiplication. Substituting Eq. (12) into Eq. (8) yields the final weight update:

$$w_{\mathrm{new}} = w \oplus_c \Delta. \quad (13)$$

This Möbius-transformed update procedure ensures that the modifications to w respect the underlying hyperbolic geometry, thereby preserving the hierarchical structure encoded in the model.
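Eqs. (8)-(13) translate almost line-for-line into PyTorch. The sketch below is schematic: flattening the weight matrix so that Eq. (8)'s inner product applies, and the placeholder values for gamma and tau_g, are our assumptions rather than documented choices.

```python
import torch

def mobius_add(w: torch.Tensor, delta: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """Mobius addition of Eq. (8), treating tensors as flat vectors."""
    wf, df = w.reshape(-1), delta.reshape(-1)
    inner = torch.dot(wf, df)
    w2, d2 = wf.pow(2).sum(), df.pow(2).sum()
    denom = 1 + 2 * c * inner + (c ** 2) * w2 * d2
    num = (1 + 2 * c * inner + c * d2) * wf + (1 - c * w2) * df
    return (num / denom).reshape(w.shape)

def masked_rank1_update(w, u, v, grad, gamma=1.0, tau_g=0.0, c=1.0):
    """Rank-1 edit with soft gradient masking, Eqs. (9)-(13)."""
    delta1 = gamma * torch.outer(u, v)          # Eqs. (9)-(10)
    g = grad.abs().mean(dim=1)                  # per-output-feature gradient magnitude
    m = torch.sigmoid(g - tau_g)                # Eq. (11)
    delta = delta1 * m.unsqueeze(1)             # Eq. (12), mask broadcast over columns
    return mobius_add(w, delta, c)              # Eq. (13)

# Toy usage on a small weight matrix.
w = torch.randn(4, 3) * 0.1
u, v = torch.randn(4), torch.randn(3)
grad = torch.randn(4, 3)                        # stand-in for dL_edit/dw
w_new = masked_rank1_update(w, u, v, grad, gamma=0.5, tau_g=0.1)
```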
3.3 Dual Stabilization Strategy

To prevent catastrophic forgetting and maintain the geometric consistency of the model, we introduce a dual stabilization strategy comprising hyperbolic projection and periodic resetting of the graph neural network (GNN) parameters.

Hyperbolic Projection: After applying the Möbius update, it is crucial to ensure that the updated weights remain within the valid region of the Poincaré ball. We achieve this by projecting the updated weights onto the Poincaré ball D_c^d. The projection operator is defined as

$$\mathrm{proj}(w) = \min\!\left(1, \frac{1/\sqrt{c}}{\lVert w\rVert}\right) w, \quad (14)$$

which guarantees that ‖w‖ ≤ 1/√c. Thus, the stabilized weight is given by

$$w_{\mathrm{final}} = \mathrm{proj}\big(w \oplus_c \Delta\big). \quad (15)$$

Graph Neural Network Reset: The update directions u and v are computed using a GNN operating on the hyperbolic graph constructed from Wikidata triples. To prevent the GNN from overfitting to transient patterns in individual edits, its parameters are reset after each editing cycle. Formally, if θ_gnn^(0) denotes the initial GNN parameters, then after each update step t, we enforce

$$\theta_{\mathrm{gnn}}^{(t+1)} = \theta_{\mathrm{gnn}}^{(0)}. \quad (16)$$

This resetting mechanism prevents the architecture from overfitting to individual edits, ensuring that transient adaptations do not accumulate and disrupt the model's internal coherence.
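Both stabilizers are a few lines each. In the sketch below, mobius_add refers to the helper sketched in Section 3.2, and implementing the reset of Eq. (16) by reloading a saved state dict is an assumed design, not a documented one.

```python
import copy
import torch

def project_to_ball(w: torch.Tensor, c: float = 1.0, eps: float = 1e-5) -> torch.Tensor:
    """Eq. (14): rescale w so that ||w|| <= 1/sqrt(c) (flattened view)."""
    norm = w.reshape(-1).norm()
    max_norm = (1.0 / c ** 0.5) - eps          # small margin keeps w strictly inside
    scale = torch.clamp(max_norm / norm, max=1.0)
    return w * scale

def stabilized_update(w, delta, c=1.0):
    """Eq. (15): Mobius update followed by projection back onto the ball."""
    return project_to_ball(mobius_add(w, delta, c), c)

class ResettableGNN:
    """Eq. (16): restore the initial GNN parameters after every edit cycle."""
    def __init__(self, gnn: torch.nn.Module):
        self.gnn = gnn
        self._theta0 = copy.deepcopy(gnn.state_dict())  # snapshot of theta^(0)

    def reset(self):
        self.gnn.load_state_dict(self._theta0)
```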
4 Datasets, Baselines and Evaluation

We evaluate HYPE on three widely benchmarked datasets: 1) CounterFact (Meng et al., 2022a), which assesses factual accuracy, specificity, and generalization; 2) CounterFact+ (Yao et al., 2023b), which evaluates edit portability across paraphrased queries; and 3) MQuAKE (Zhong et al., 2023), which tests multi-hop reasoning capabilities. We compare HYPE against seven state-of-the-art baselines: (i) Zeroshot (unmodified model), (ii) FT (fine-tuning), (iii) MEND (Mitchell et al., 2022a), (iv) ROME (Meng et al., 2022a), (v) MEMIT (Meng et al., 2022b), (vi) PMET, and (vii) RAE (Shi et al., 2024). Further details on datasets and baselines are provided in Appendix A.

We measure HYPE's performance using the following metrics: 1) Efficacy (Eff): quantifies the accuracy of factual edits by measuring the model's ability to correctly answer questions after updates. 2) Generalization (Gen): assesses robustness across paraphrased queries and reasoning tasks, ensuring edits apply consistently to varied inputs. 3) Specificity (Spec): measures unintended alterations to unrelated knowledge, evaluating the model's ability to avoid cascading errors. 4) Portability (Port+): evaluates the model's ability to transfer edits across different contexts, ensuring updates remain effective under rephrased questions (used for benchmarking CounterFact+). 5) Edit Quality Score (EDS): a composite metric defined as the harmonic mean of efficacy, generalization, and specificity, providing a holistic view of edit performance (see the sketch after this list). 6) Multi-Hop Efficacy: for MQuAKE, efficacy is measured over 2-hop, 3-hop, and 4-hop reasoning tasks, assessing the model's ability to handle complex, multi-step queries.
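Read literally, the EDS definition above corresponds to the following one-liner (a sketch of the stated definition, not the authors' evaluation code):

```python
def edit_quality_score(eff: float, gen: float, spec: float) -> float:
    """Harmonic mean of efficacy, generalization, and specificity (all in %)."""
    return 3.0 / (1.0 / eff + 1.0 / gen + 1.0 / spec)

# Toy usage with illustrative percentages.
print(round(edit_quality_score(99.4, 98.4, 79.5), 2))
```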
Table 1 (GPT-J):
Method | Eff | Gen | Spec | Port+ | EDS | 2-hops | 3-hops | 4-hops | Avg
Zeroshot | 15.47 | 17.43 | 80.96 | 10.27 | 37.95 | 14.67 | 22.19 | 10.43 | 15.76
FT | 79.38 | 60.14 | 31.85 | 13.64 | 57.12 | – | – | – | –
MEND | 44.76 | 44.83 | 52.28 | 12.68 | 47.29 | 13.86 | 11.24 | 9.62 | 11.57
ROME | 55.77 | 52.57 | 50.49 | 28.43 | 52.94 | 32.63 | 29.04 | 17.33 | 26.33
MEMIT | 95.59 | 92.64 | 61.73 | 28.84 | 83.32 | 35.47 | 26.93 | 15.38 | 25.93
PMET | 83.57 | 84.24 | 52.25 | 27.61 | 73.35 | 31.47 | 24.67 | 13.21 | 23.12
RAE | 94.84 | 84.02 | 70.05 | 29.68 | 83.30 | 32.53 | 26.08 | 14.92 | 24.51
HYPE | 99.43 | 98.35 | 79.47 | 29.83 | 92.42 | 46.68 | 39.73 | 23.13 | 36.51
Δ (HYPE vs. best baseline) | ↑3.84 | ↑5.71 | ↑9.42 | ↑0.15 | ↑9.10 | ↑11.21 | ↑10.69 | ↑5.80 | ↑10.18

Table 1 (GPT2-XL):
Method | Eff | Gen | Spec | Port+ | EDS | 2-hops | 3-hops | 4-hops | Avg
Zeroshot | 21.56 | 23.61 | 76.17 | 10.06 | 40.45 | 23.61 | 21.95 | 14.37 | 19.98
FT | 67.05 | 45.34 | 58.68 | 13.83 | 57.02 | – | – | – | –
MEND | 58.65 | 53.26 | 47.73 | 14.08 | 53.21 | 26.49 | 24.71 | 14.93 | 22.04
ROME | 98.63 | 94.73 | 73.53 | 21.29 | 88.96 | 38.52 | 30.28 | 16.74 | 28.51
MEMIT | 92.84 | 78.48 | 76.33 | 18.63 | 82.13 | 34.62 | 26.08 | 15.88 | 25.53
PMET | 91.66 | 77.08 | 75.19 | 17.33 | 81.69 | 32.17 | 23.68 | 13.83 | 23.23
RAE | 89.34 | 76.41 | 63.18 | 23.19 | 76.31 | 30.42 | 25.27 | 14.31 | 23.33
HYPE | 99.57 | 97.18 | 77.86 | 24.53 | 91.54 | 43.62 | 33.83 | 22.79 | 33.41
Δ (HYPE vs. best baseline) | ↑0.94 | ↑2.45 | ↑1.53 | ↑1.34 | ↑2.58 | ↑5.10 | ↑3.55 | ↑6.05 | ↑4.90

Table 1: Comprehensive evaluation of HYPE against model editing baselines on GPT-J and GPT2-XL. We assess performance across three benchmarks: (1) CounterFact, measuring Efficacy (Eff), Generalization (Gen), Specificity (Spec), and Edit Quality Score (EDS); (2) CounterFact+, evaluating Portability (Port+); and (3) MQuAKE, testing multi-hop reasoning across 2-, 3-, and 4-hop tasks. Results indicate that HYPE consistently outperforms all baselines, with particularly strong gains in multi-hop reasoning. Higher values denote better performance.
5 Experimental Results

This section presents the evaluation of HYPE against seven baselines (Zeroshot, FT, MEND (Mitchell et al., 2022a), ROME (Meng et al., 2022a), MEMIT (Meng et al., 2022b), PMET (Li et al., 2024b), and RAE (Shi et al., 2024)) on three benchmark datasets: (1) CounterFact (Meng et al., 2022a), assessed using Efficacy (Eff), Generalization (Gen), Specificity (Spec), and Edit Quality Score (EDS); (2) CounterFact+ (Yao et al., 2023b), evaluated with the Portability Score (Port+); and (3) MQuAKE (Zhong et al., 2023), measured using Multi-Hop Efficacy (2-, 3-, and 4-hop). We test HYPE with two LLMs, GPT-J (Wang and Komatsuzaki, 2021) and GPT2-XL (Radford et al., 2019a). The results (Table 1) demonstrate that HYPE consistently outperforms all baselines across factual accuracy, edit stability, and multi-hop reasoning tasks. We also assess the individual contribution of each proposed module in the ablation study (Table 2 and Section 5.4). Further details on the experimentation setup and evaluation protocols are provided in Appendix B.

5.1 CounterFact and CounterFact+ Results

On the CounterFact dataset, HYPE achieves an efficacy score of 99.43 and a generalization score of 98.35 when evaluated on the GPT-J model. These results represent a +3.84 improvement in efficacy and a +5.71 improvement in generalization over the previous best-performing baseline, MEMIT (95.59 Eff, 92.64 Gen). On specificity, which measures the model's ability to avoid unintended edits, HYPE scores 79.47, a +9.42-point improvement over RAE's 70.05. The Edit Quality Score, a composite metric combining efficacy, generalization, and specificity, reaches 92.42, outperforming MEMIT's 83.32 by +9.10 points. For CounterFact+, which evaluates the portability of edits across different contexts, HYPE attains a score of 29.83, surpassing RAE's 29.68 by +0.15. These results highlight HYPE's ability to maintain factual accuracy and coherence while minimizing unintended side effects.

When evaluated on the GPT2-XL model, HYPE continues to demonstrate superior performance. With an efficacy score of 99.57, a generalization score of 97.18, and a specificity of 77.86, it outperforms the best baselines by +0.94 (efficacy, over ROME), +2.45 (generalization, over ROME), and +1.53 (specificity, over MEMIT). The Edit Quality Score reaches 91.54, a +2.58 improvement over ROME's 88.96. These results indicate that HYPE maintains its effectiveness across different model architectures and scales, as shown in Table 1.

Table 2 (GPT-J):
Method | Eff | Gen | Spec | Port+ | EDS | 2-hops | 3-hops | 4-hops | Avg
HYPE | 99.43 | 98.35 | 79.47 | 29.83 | 92.42 | 46.68 | 39.73 | 23.13 | 36.51
HYPE w/o DualS | 97.45 | 96.12 | 77.34 | 24.98 | 90.30 | 44.32 | 38.21 | 21.45 | 34.66
HYPE w/o HGraph | 96.23 | 95.45 | 76.54 | 24.56 | 89.41 | 43.45 | 37.32 | 20.89 | 33.89
HYPE w/o Mobius updates | 92.89 | 90.45 | 69.56 | 23.08 | 84.30 | 40.12 | 35.12 | 19.23 | 31.49
HYPE w/o HGraph & Mobius | 90.12 | 88.34 | 68.56 | 22.34 | 82.34 | 38.45 | 33.45 | 18.76 | 30.22
HYPE w/o HGraph & Mobius & DualS | 88.45 | 86.34 | 66.54 | 21.34 | 80.44 | 36.45 | 31.45 | 16.78 | 28.23

Table 2 (GPT2-XL):
Method | Eff | Gen | Spec | Port+ | EDS | 2-hops | 3-hops | 4-hops | Avg
HYPE | 99.57 | 97.18 | 77.86 | 24.53 | 91.54 | 43.62 | 33.83 | 22.79 | 33.41
HYPE w/o DualS | 93.45 | 91.34 | 73.45 | 23.45 | 86.08 | 42.34 | 32.34 | 21.34 | 32.01
HYPE w/o HGraph | 92.34 | 90.45 | 72.34 | 23.00 | 85.04 | 41.34 | 31.34 | 20.34 | 31.01
HYPE w/o Mobius updates | 88.45 | 86.34 | 68.54 | 22.34 | 81.11 | 39.45 | 30.45 | 19.45 | 29.78
HYPE w/o HGraph & Mobius | 86.45 | 84.34 | 67.45 | 21.34 | 79.41 | 38.45 | 29.45 | 18.45 | 28.78
HYPE w/o HGraph & Mobius & DualS | 84.45 | 82.34 | 66.45 | 20.34 | 77.75 | 37.45 | 28.45 | 17.45 | 27.78

Table 2: Ablation study on GPT-J and GPT2-XL architectures. The first row of each block shows full HYPE performance; removing components (Dual Stabilization (DualS), Hyperbolic Graph (HGraph), and Mobius updates) progressively degrades performance.
Method                              Model      Eff      Gen      Spec     Port+    EDS      2-hop    3-hop    4-hop    Avg
HYPE                                GPT-J      99.43    98.35    79.47    29.83    92.42    46.68    39.73    23.13    36.51
HYPE w/o DualS                      GPT-J      97.45    96.12    77.34    24.98    90.30    44.32    38.21    21.45    34.66
HYPE w/o HGraph                     GPT-J      96.23    95.45    76.54    24.56    89.41    43.45    37.32    20.89    33.89
HYPE w/o Möbius updates             GPT-J      92.89    90.45    69.56    23.08    84.30    40.12    35.12    19.23    31.49
HYPE w/o HGraph & Möbius            GPT-J      90.12    88.34    68.56    22.34    82.34    38.45    33.45    18.76    30.22
HYPE w/o HGraph & Möbius & DualS    GPT-J      88.45    86.34    66.54    21.34    80.44    36.45    31.45    16.78    28.23
HYPE                                GPT2-XL    99.57    97.18    77.86    24.53    91.54    43.62    33.83    22.79    33.41
HYPE w/o DualS                      GPT2-XL    93.45    91.34    73.45    23.45    86.08    42.34    32.34    21.34    32.01
HYPE w/o HGraph                     GPT2-XL    92.34    90.45    72.34    23.00    85.04    41.34    31.34    20.34    31.01
HYPE w/o Möbius updates             GPT2-XL    88.45    86.34    68.54    22.34    81.11    39.45    30.45    19.45    29.78
HYPE w/o HGraph & Möbius            GPT2-XL    86.45    84.34    67.45    21.34    79.41    38.45    29.45    18.45    28.78
HYPE w/o HGraph & Möbius & DualS    GPT2-XL    84.45    82.34    66.45    20.34    77.75    37.45    28.45    17.45    27.78

Table 2: Ablation study on GPT-J and GPT2-XL architectures. The first row of each block shows full HYPE performance; removing components (Dual Stabilization (DualS), Hyperbolic Graph (HGraph), and Möbius updates) progressively degrades performance.

5.2 MQuAKE Results

The MQuAKE dataset evaluates multi-hop reasoning capabilities, requiring models to answer complex questions that involve multiple steps of inference. HYPE achieves significant improvements across all multi-hop tasks. For 2-hop reasoning, HYPE attains an efficacy score of 46.68 on GPT-J, outperforming MEMIT's 35.47 by +11.21 points. On 3-hop reasoning, HYPE scores 39.73, a +10.69-point improvement over ROME's 29.04. For 4-hop reasoning, HYPE achieves 23.13, surpassing ROME's 17.33 by +5.80 points. The average accuracy across all multi-hop tasks reaches 36.51, a +10.18 improvement over ROME's efficacy of 26.33. These results highlight HYPE's ability to preserve hierarchical relationships and relational consistency during edits, which are critical for multi-hop reasoning.

On the GPT2-XL model, HYPE achieves efficacy scores of 43.62, 33.83, and 22.79 on 2-, 3-, and 4-hop tasks, respectively. These results represent improvements of +5.10, +3.55, and +6.05 over ROME's respective efficacy scores of 38.52, 30.28, and 16.74. The average accuracy of 33.41 represents a +4.90 improvement over ROME's 28.51. These results further demonstrate HYPE's robustness across different model architectures and its ability to maintain coherence in complex reasoning tasks.

5.3 Model-Specific Performance

HYPE demonstrates consistent improvements across both GPT-J and GPT2-XL models. On GPT-J, the Edit Quality Score increases by +9.10 over MEMIT (92.42 vs. 83.32), with efficacy improving by +3.84 and generalization by +5.71 points. On GPT2-XL, the EDS improves by +2.58 over ROME (91.54 vs. 88.96), with efficacy increasing by +0.94 (99.57 vs. 98.63) and generalization by +2.45 (97.18 vs. 94.73). These results indicate that HYPE's improvements are not limited to a specific model architecture but generalize across different scales and configurations. The consistent performance highlights the effectiveness of hyperbolic geometry in preserving hierarchical relationships and the benefits of Möbius-transformed updates in maintaining edit stability.

5.4 Analysis of Ablation Results

The ablation study results in Table 2 demonstrate the critical role of each component in HYPE's architecture. On the GPT-J model, removing Dual Stabilization (HYPE w/o DualS) reduces efficacy by 1.98 points and specificity by 2.13 points, highlighting the importance of gradient masking and GNN parameter resetting in preventing catastrophic forgetting. Further ablating the Hyperbolic Graph (HYPE w/o HGraph) causes a -3.20 drop in efficacy and a -2.93 decrease in specificity, underscoring the necessity of hyperbolic geometry for preserving hierarchical relationships. When both the Hyperbolic Graph and Möbius updates are removed, performance declines sharply, with efficacy falling by -9.31 and specificity by -10.93. The most drastic degradation occurs when all three components are removed (HYPE w/o HGraph & Möbius & DualS), resulting in a -10.98-point drop in efficacy and a -12.93 decrease in specificity. Similar trends are observed on the larger GPT2-XL model, where removing Dual Stabilization reduces efficacy by -6.12 and specificity by -4.41, while ablating the Hyperbolic Graph and Möbius updates causes an -11.12 decline in efficacy and a -9.41 reduction in specificity. The full ablation (HYPE w/o HGraph & Möbius & DualS) leads to a -15.12 decrease in efficacy and a -11.41 drop in specificity. These results confirm that the synergistic combination of the hyperbolic graph, Möbius updates, and dual stabilization is essential for maintaining factual accuracy, edit stability, and multi-hop reasoning performance.

Figure 2: Left: EDS peaks at c = 1.0 due to an optimal balance between expansion and numerical stability. Right: Edit success rate declines for τ > 0.5 due to underfitting.

6 Discussion and Analysis

HYPE demonstrates strong performance on localized edits while maintaining edit stability, even in cases where multi-hop reasoning is challenging. Below, we analyze representative examples to illustrate the model's capabilities. The complete model outputs for the analyzed cases are presented in Appendix Section C.

6.1 Qualitative Analysis of Successful Edits

HYPE excels at localized edits due to its hyperbolic graph construction and dual stabilization mechanisms. For example, in Case ID 983 ("Larry Knechtel"), the model successfully updated the instrument from guitar to violin, achieving 100% correctness in rewrite prompts. Despite low confidence in the new target (target_new = 0.0192 vs. target_true = 6.1653), the model's gradient masking prevented unintended side effects, preserving global coherence. Similarly, in Case ID 729 ("Johann von Rist"), the model correctly changed the occupation from poet to astronomer, achieving 100% correctness in rewrite and paraphrase prompts. These successes highlight the model's ability to maintain edit stability and specificity, key advantages over Euclidean methods like ROME.

6.2 Qualitative Analysis of Challenging Edits

Although HYPE encounters challenges in multi-hop reasoning in some samples, it consistently demonstrates impressive edit stability and specificity. For instance, in Case ID 560 ("British Railways Board"), the model successfully updated the headquarters from London to Prague, achieving 100% correctness in rewrite prompts. While paraphrase and neighborhood edits showed some inconsistency, the dual stabilization module effectively mitigated catastrophic forgetting by preserving critical knowledge. Similarly, in Case ID 264 ("Toyota RAV4"), the model succeeded in 60% of neighborhood edits, illustrating its partial propagation capabilities. These results highlight HYPE's ability to handle complex edits while maintaining stability, a strength notably absent in Euclidean approaches.

6.3 Impact of Hyperbolic Geometry

1) Curvature Sensitivity: The curvature c defines the rate of expansion in hyperbolic space. Experiments with c ∈ {0.5, 1.0, 2.0} (Figure 2, left) showed optimal results at c = 1.0, where the Edit Quality Score (EDS) reached 92.4 points. Higher curvature (c = 2.0) caused gradient instability, reducing EDS to 80.1, while lower curvature (c = 0.5) resulted in insufficient relational separation (EDS = 85.2).

2) Persistence Filtering: This method uses a persistence threshold τ to filter out topologically insignificant updates. The persistence threshold τ in Eq. (6) critically affected edit stability (Figure 2, right). Optimal performance (92.4 EDS) is achieved at τ = 0.5, balancing specificity and generality. Lower τ values (< 0.5) led to overfitting, while higher thresholds caused underfitting by filtering out critical updates, with EDS dropping to 82.4 at τ = 0.8.
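As a rough illustration of the persistence-filtering step discussed above, here is a small PyTorch sketch of the soft gate a = σ(||r_hyp|| − τ) that appears in Algorithm 1 (Appendix C); the tensor shapes and toy data are our assumptions, not the authors' implementation.

    import torch

    def persistence_gate(rel_hyp: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
        """Per-edge gate in [0, 1]: relations with small hyperbolic norm
        (low persistence) are attenuated; larger-norm relations pass through."""
        norms = rel_hyp.norm(dim=-1)           # (num_edges,)
        return torch.sigmoid(norms - tau)      # (num_edges,) gate weights

    edges = torch.randn(128, 64) * 0.3         # toy hyperbolic relation embeddings
    gate = persistence_gate(edges, tau=0.5)
    filtered = edges * gate.unsqueeze(-1)      # down-weight insignificant edges

Raising τ pushes more edges toward a near-zero gate, which matches the underfitting behavior reported for τ > 0.5.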
7 Conclusion

In this paper, we proposed HYPE, a novel post-training model editing framework that leverages hyperbolic geometry and graph neural networks to deliver precise and stable updates to large language models. Our approach overcomes critical issues in existing methods, including overfitting and catastrophic forgetting, by constructing a hyperbolic graph with Poincaré embeddings, employing a Möbius-transformed update strategy, and integrating a dual stabilization mechanism with gradient masking and GNN parameter resetting. Evaluations on CounterFact, CounterFact+, and MQuAKE with GPT-J and GPT2-XL show that HYPE significantly enhances factual accuracy and parameter efficiency while mitigating unintended side effects. Future work will explore domain-specific constraints where hyperbolic editing could enforce critical factual accuracy.

8 Limitations

HYPE faces several limitations. While it improves edit stability and factual accuracy, the computational overhead of hyperbolic operations and persistence filtering is higher than that of Euclidean methods, although this is offset by reduced retraining needs. The current implementation focuses on medium-sized LLMs (e.g., GPT-J, GPT2-XL); scaling to larger models (e.g., GPT-4, LLaMA-3-70b) would require distributed optimization and increased memory resources. The framework relies on structured knowledge graphs (e.g., Wikipedia triples), which may not capture implicit or evolving knowledge, and assumes the availability of clean, curated triples. Additionally, while the framework is effective for hierarchical edits, fine-grained edits to individual tokens or low-level linguistic features may require additional mechanisms beyond hyperbolic graph updates.

9 Ethics Statement

The benchmarked datasets (CounterFact, CounterFact+, and MQuAKE) are available under open-source licenses. The baselines are also covered by open-source licenses. All experiments are performed on one Nvidia A6000 (48GB) GPU.

References

Yash Kumar Atri, Vikram Goyal, and Tanmoy Chakraborty. 2023a. Exploiting representation bias for data distillation in abstractive text summarization. Preprint, arXiv:2312.06022.

Yash Kumar Atri, Vikram Goyal, and Tanmoy Chakraborty. 2023b. Fusing multimodal signals on hyper-complex space for extreme abstractive text summarization (tl;dr) of scientific contents. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '23, page 3724–3736, New York, NY, USA. Association for Computing Machinery.

Yash Kumar Atri, Vikram Goyal, and Tanmoy Chakraborty. 2023c. Multi-document summarization using selective attention span and reinforcement learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31:3457–3467.

Yash Kumar Atri, Arun Iyer, Tanmoy Chakraborty, and Vikram Goyal. 2023d. Promoting topic coherence and inter-document consorts in multi-document summarization via simplicial complex and sheaf graph. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2154–2166, Singapore. Association for Computational Linguistics.

Yash Kumar Atri, Shraman Pramanick, Vikram Goyal, and Tanmoy Chakraborty. 2021. See, hear, read: Leveraging multimodality with guided attention for abstractive text summarization. Knowledge-Based Systems, 227:107152.

Yash Kumar Atri, Thomas H Shin, and Thomas Hartvigsen. 2025. Continually self-improving language models for bariatric surgery question answering. Preprint, arXiv:2505.16102.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.

Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models. In EMNLP (1), pages 6491–6506. Association for Computational Linguistics.

Ines Chami, Adva Wolf, Da-Cheng Juan, Frederic Sala, Sujith Ravi, and Christopher Ré. 2020. Low-dimensional hyperbolic knowledge graph embeddings. Preprint, arXiv:2005.00545.

Ines Chami, Rex Ying, Christopher Ré, and Jure Leskovec. 2019. Hyperbolic graph convolutional neural networks. Preprint, arXiv:1910.12933.

Weize Chen, Xu Han, Yankai Lin, Hexu Zhao, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2022. Fully hyperbolic neural networks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5672–5686, Dublin, Ireland. Association for Computational Linguistics.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sashank Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2023. PaLM: scaling language modeling with pathways. J. Mach. Learn. Res., 24(1).

Roi Cohen, Mor Geva, and Jonathan Berant. 2023. Emergent world representations in llms. ICLR.

Alvin Dey, Tanya Chowdhury, Yash Kumar Atri, and Tanmoy Chakraborty. 2020. Corpora evaluation and system bias detection in multi-document summarization. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2830–2840, Online. Association for Computational Linguistics.

Mehrdad Farajtabar, Navid Azizan, Alex Mott, and Ang Li. 2020. Orthogonal gradient descent for continual learning. In AISTATS, volume 108 of Proceedings of Machine Learning Research, pages 3762–3773. PMLR.

Jonathan Frankle and Michael Carbin. 2019. The lottery ticket hypothesis: Finding sparse, trainable neural networks. Preprint, arXiv:1803.03635.

Octavian-Eugen Ganea, Gary Bécigneul, and Thomas Hofmann. 2018. Hyperbolic neural networks. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, page 5350–5360, Red Hook, NY, USA. Curran Associates Inc.

Yifu Gao, Linbo Qiao, Zhigang Kan, Zhihua Wen, Yongquan He, and Dongsheng Li. 2024. Two-stage generative question answering on temporal knowledge graph using large language models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 6719–6734, Bangkok, Thailand. Association for Computational Linguistics.

Jia-Chen Gu, Hao-Xiang Xu, Jun-Yu Ma, Pan Lu, Zhen-Hua Ling, Kai-Wei Chang, and Nanyun Peng. 2024. Model editing harms general abilities of large language models: Regularization to the rescue. Preprint, arXiv:2401.04700.

Tom Hartvigsen, Swami Sankaranarayanan, Hamid Palangi, Yoon Kim, and Marzyeh Ghassemi. 2023. Aging with GRACE: lifelong model editing with discrete key-value adaptors. In NeurIPS.

Peter Hase and Mohit Bansal. 2023. Compositional edits in language models. EMNLP.

Zeyu Huang, Yikang Shen, Xiaofeng Zhang, Jie Zhou, Wenge Rong, and Zhang Xiong. 2023. Transformer-patcher: One mistake worth one neuron. In ICLR. OpenReview.net.

Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Comput. Surv., 55(12).

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. Preprint, arXiv:2001.08361.

Arinbjörn Kolbeinsson, Kyle O'Brien, Tianjin Huang, Shanghua Gao, Shiwei Liu, Jonathan Richard Schwarz, Anurag Jayant Vaidya, Faisal Mahmood, Marinka Zitnik, Tianlong Chen, and Thomas Hartvigsen. 2025. Composable interventions for language models. In The Thirteenth International Conference on Learning Representations.

Angeliki Lazaridou, Adhiguna Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liška, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d'Autume, Tomas Kocisky, Sebastian Ruder, Dani Yogatama, Kris Cao, Susannah Young, and Phil Blunsom. 2021. Mind the gap: assessing temporal generalization in neural language models. In Proceedings of the 35th International Conference on Neural Information Processing Systems, NIPS '21, Red Hook, NY, USA. Curran Associates Inc.

Taiji Li, Zhi Li, and Yin Zhang. 2024a. Improving faithfulness of large language models in summarization via sliding generation and self-consistency. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 8804–8817, Torino, Italia. ELRA and ICCL.

Xiaopeng Li, Shasha Li, Shezheng Song, Jing Yang, Jun Ma, and Jie Yu. 2024b. PMET: precise model editing in a transformer. In AAAI, pages 18564–18572. AAAI Press.

Na Liu, Liangyu Chen, Xiaoyu Tian, Wei Zou, Kaijiang Chen, and Ming Cui. 2024. From llm to conversational agent: A memory enhanced architecture with fine-tuning of large language models. Preprint, arXiv:2401.02777.

Jun-Yu Ma, Hong Wang, Hao-Xiang Xu, Zhen-Hua Ling, and Jia-Chen Gu. 2024. Perturbation-restrained sequential model editing. CoRR, abs/2405.16821.

Aman Madaan, Niket Tandon, Peter Clark, and Yiming Yang. 2022. Memory-assisted prompt editing to improve GPT-3 after deployment. CoRR, abs/2201.06009.

Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022a. Locating and editing factual associations in GPT. In NeurIPS.

Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. 2022b. Mass editing memory in a transformer. arXiv preprint arXiv:2210.07229.

Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D. Manning. 2022a. Fast model editing at scale. In ICLR. OpenReview.net.

Eric Mitchell, Charles Lin, Antoine Bosselut, Christopher D. Manning, and Chelsea Finn. 2022b. Memory-based model editing at scale. In ICML, volume 162 of Proceedings of Machine Learning Research, pages 15817–15831. PMLR.

Maximillian Nickel and Douwe Kiela. 2017. Poincaré embeddings for learning hierarchical representations. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

Anirudh Phukan, Shwetha Somasundaram, Apoorv Saxena, Koustava Goswami, and Balaji Vasan Srinivasan. 2024. Peering into the mind of language models: An approach for attribution in contextual question answering. In Findings of the Association for Computational Linguistics: ACL 2024, pages 11481–11495, Bangkok, Thailand. Association for Computational Linguistics.

Hongchao Qin, Rong-Hua Li, Ye Yuan, Guoren Wang, and Yongheng Dai. 2023. Explainable hyperlink prediction: A hypergraph edit distance-based approach. In 2023 IEEE 39th International Conference on Data Engineering (ICDE), pages 245–257.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019a. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019b. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.

Mili Shah, Joyce Cahoon, Mirco Milletari, Jing Tian, Fotis Psallidas, Andreas Mueller, and Nick Litombe. 2024. Improving LLM-based KGQA for multi-hop question answering with implicit reasoning in few-shot examples. In Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024), pages 125–135, Bangkok, Thailand. Association for Computational Linguistics.

Yucheng Shi, Qiaoyu Tan, Xuansheng Wu, Shaochen Zhong, Kaixiong Zhou, and Ninghao Liu. 2024. Retrieval-enhanced knowledge editing in language models for multi-hop question answering. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, CIKM '24, page 2056–2066, New York, NY, USA. Association for Computing Machinery.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Abraham A. Ungar. 2013. Möbius transformation and Einstein velocity addition in the hyperbolic geometry of Bolyai and Lobachevsky. Preprint, arXiv:1303.4785.

Marco Valentino, Danilo Carvalho, and Andre Freitas. 2024. Multi-relational hyperbolic word embeddings from natural language definitions. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 23–34, St. Julian's, Malta. Association for Computational Linguistics.

Denny Vrandečić and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Commun. ACM, 57(10):78–85.

Ben Wang and Aran Komatsuzaki. 2021. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax.

Yuxia Wang, Minghan Wang, Muhammad Arslan Manzoor, Fei Liu, Georgi Georgiev, Rocktim Jyoti Das, and Preslav Nakov. 2024. Factuality of large language models: A survey. Preprint, arXiv:2402.02420.

Menglin Yang, Aosong Feng, Bo Xiong, Jihong Liu, Irwin King, and Rex Ying. 2024. Hyperbolic fine-tuning for large language models. Preprint, arXiv:2410.04010.

Yunzhi Yao, Sachin Goyal, and Aditi Raghunathan. 2023a. Editing llms: Pitfalls of unstructured parameter updates. ACL.
Yunzhi Yao, Peng Wang, Bozhong Tian, Siyuan Cheng, Zhoubo Li, Shumin Deng, Huajun Chen, and Ningyu Zhang. 2023b. Editing large language models: Problems, methods, and opportunities. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10222–10240, Singapore. Association for Computational Linguistics.

Guanxiong Zeng, Yang Chen, Bo Cui, and Shan Yu. 2019. Continual learning of context-dependent processing in neural networks. Nat. Mach. Intell., 1(8):364–372.

Mengqi Zhang, Xiaotian Ye, Qiang Liu, Pengjie Ren, Shu Wu, and Zhumin Chen. 2024. Knowledge graph enhanced large language model editing. CoRR, abs/2402.13593.

Zheng Zhao, Emilio Monti, Jens Lehmann, and Haytham Assem. 2024. Enhancing contextual understanding in large language models through contrastive decoding. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 4225–4237, Mexico City, Mexico. Association for Computational Linguistics.

Ce Zheng, Lei Li, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, and Baobao Chang. 2023. Can we edit factual knowledge by in-context learning? CoRR, abs/2305.12740.

Zexuan Zhong, Zhengxuan Wu, Christopher Manning, Christopher Potts, and Danqi Chen. 2023. MQuAKE: Assessing knowledge editing in language models via multi-hop questions. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 15686–15702, Singapore. Association for Computational Linguistics.

Appendix

A Datasets and Baselines

In this appendix, we provide detailed descriptions of the datasets and baselines used in our experiments and the metrics employed to evaluate the performance of our proposed model, HYPE.

A.1 Datasets

We conduct our experiments on three widely used datasets for model editing: CounterFact, CounterFact+, and MQuAKE.

• The CounterFact dataset (Meng et al., 2022a) contains over 3,000 instances designed to evaluate a model's ability to perform accurate, specific, and generalizable factual edits. It assesses efficacy (Eff), generalization (Gen), specificity (Spec), and edit quality score (EDS) by measuring a model's ability to update facts without unintended side effects.

• The CounterFact+ dataset (Yao et al., 2023b) extends CounterFact by evaluating edit portability across paraphrased queries. It contains over 1,000 paraphrased questions and uses the portability (Port+) metric to measure a model's ability to transfer edits across different linguistic formulations.

• The MQuAKE dataset (Zhong et al., 2023) tests multi-hop reasoning with over 3,000 complex, multi-step questions (2-hop, 3-hop, 4-hop). Each instance includes one or more edits and associated multi-hop questions, requiring models to leverage edited knowledge to answer multi-step queries. The dataset uses the multi-hop efficacy metric to evaluate performance, with 2-hop, 3-hop, and 4-hop questions.

A.2 Baselines

We compare HYPE against seven state-of-the-art model editing techniques:

1. Zeroshot: the unmodified model, serving as a baseline for unedited performance.
2. FT (Fine-Tuning): the full model fine-tuned on edited data, updating all parameters.
3. MEND (Mitchell et al., 2022a): updates gradient-based weights to align edits with factual knowledge while preserving other information.
4. ROME (Meng et al., 2022a): modifies specific relation parameters in the model, assuming that knowledge can be localized to a single layer.
5. MEMIT (Meng et al., 2022b): performs memory-injected editing, updating weights across multiple layers to incorporate new knowledge.
6. PMET (Li et al., 2024b): parameter-efficient editing using low-rank weight updates to minimize computational overhead.
7. RAE (Shi et al., 2024): uses a Retrieval-Augmented Generation (RAG) based approach, incorporating external knowledge during inference.

A.3 Evaluation Metrics

We define the following metrics to assess model editing performance:

Efficacy (Eff). Measures the accuracy of factual edits by evaluating whether the model answers updated questions correctly:

$$\mathbb{E}_i\left[\,\mathbb{P}_{f_\theta}\!\left(o^i \mid (s^i, r^i)\right) > \mathbb{P}_{f_\theta}\!\left(o_c^i \mid (s^i, r^i)\right)\right]$$

Generalization (Gen). Assesses how consistently the model applies edits to paraphrased queries:

$$\mathbb{E}_i\left[\,\mathbb{P}_{f_\theta}\!\left(o^i \mid N(s^i, r^i)\right) > \mathbb{P}_{f_\theta}\!\left(o_c^i \mid N(s^i, r^i)\right)\right]$$

Specificity (Spec). Quantifies unintended alterations to unrelated knowledge:

$$\mathbb{E}_i\left[\,\mathbb{P}_{f_\theta}\!\left(o_c^i \mid O(s^i, r^i)\right) > \mathbb{P}_{f_\theta}\!\left(o^i \mid O(s^i, r^i)\right)\right]$$

Edit Quality Score (EDS). A composite measure given by the harmonic mean of Efficacy, Generalization, and Specificity.

Portability (Port+). Evaluates how well edits transfer across rephrased contexts:

$$\mathbb{E}_i\left[\,\mathbb{P}_{f_\theta}\!\left(o^i \mid N(s^i, r^i)\right) > \mathbb{P}_{f_\theta}\!\left(o_c^i \mid N(s^i, r^i)\right)\right]$$

We use a similar Efficacy metric to compute the n-hop Efficacy scores over the MQuAKE dataset.

B Experimental Setup and Evaluation Protocol

B.1 Implementation Details

The experiments are conducted using PyTorch 2.0, DGL 1.1, and HuggingFace Transformers 4.30 on an NVIDIA A6000 GPU with 48GB memory. We evaluate HYPE on three datasets, CounterFact, CounterFact+, and MQuAKE, using two base models: GPT-J and GPT2-XL. The hyperparameters for each configuration were carefully tuned to optimize performance on each dataset.

Parameter                CF              CF+             MQuAKE
Layers                   5               9               5/9
GNN Grad Steps           25–35           35–50           25–50
GNN Loss Layer           27/47           47              27/47
Learning Rate (gnn_lr)   5e-1            5e-1            5e-1
Weight Decay             1e-1/5e-1       5e-1            1e-1/5e-1
Dropout (Attn/Feat)      0.2/0.2–0.4     0.2/0.3–0.4     0.2/0.3–0.4
KL Factor                0.0625–0.075    0.0625–0.075    0.0725–0.075
Early Stopping Loss      3e-2/4e-2       4e-2/5e-3       3e-2/5e-3

Table 3: Hyperparameters for CounterFact (CF), CounterFact+ (CF+), and MQuAKE for GPT-J/GPT2-XL models.

For the hyperbolic settings, we use the Poincaré Ball model from the geoopt library (https://github.com/geoopt/geoopt) with a curvature parameter c = 1.0. Hyperbolic operations (Möbius addition) are applied during the model updates. The hyperbolic space is initialized with a learnable curvature, allowing the model to adapt to the hierarchical structure of the data.

The hyperparameters for each dataset and model configuration are detailed in Table 3. The number of GNN gradient steps and the GNN loss layer are adjusted based on the dataset and model complexity. The learning rate, weight decay, and dropout rates are also tuned to achieve optimal performance. Early stopping is implemented using the ablated loss thresholds to prevent overfitting.

B.2 Evaluation Protocol

For the evaluation benchmarks, we preserve the dataset splits proposed in the original works for CounterFact (Meng et al., 2022a) and MQuAKE (Zhong et al., 2023). For CounterFact, we evaluate our method on the first 7500 records for both the GPT-J and GPT2-XL models. For CounterFact+, we utilize the 1031 samples provided for testing. For MQuAKE, we follow the settings used in (Zhong et al., 2023) and use a subset of 3000 entries. These entries are evenly distributed across 2-hop, 3-hop, and 4-hop questions, with each category comprising 1000 entries. This distribution ensures a balanced evaluation of the model's performance across different levels of reasoning complexity.
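To ground the evaluation protocol, here is a minimal Python sketch of how the Efficacy metric from Appendix A.3 can be computed from per-case records like those shown in Appendix C (the evaluation stores negative log-probabilities, so a lower value for target_new means the edit succeeded). This is our illustration, with field names borrowed from the listings, not the actual evaluation harness.

    def efficacy(records: list[dict]) -> float:
        """records: per-case negative log-probs for the rewrite prompt,
        e.g. {"target_new": 0.0192, "target_true": 6.1653}."""
        wins = sum(r["target_new"] < r["target_true"] for r in records)
        return 100.0 * wins / len(records)

    cases = [{"target_new": 0.0192, "target_true": 6.1653},
             {"target_new": 8.69, "target_true": 7.15}]
    print(efficacy(cases))  # 50.0 on this toy pair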
B.3 Hyperbolic Graph Construction

The foundation of our model lies in constructing a hyperbolic graph that accurately represents hierarchical relationships in the data. We initialize a Poincaré Ball model with a curvature parameter c = 1.0, enabling effective embedding of knowledge triples in hyperbolic space. This curvature allows the model to capture hierarchical structures more effectively than Euclidean spaces, as distances in hyperbolic space grow exponentially, aligning well with tree-like structures in knowledge graphs.

To process the knowledge triples (following (Zhang et al., 2024)), we embed each entity (subject and object) into the hyperbolic space using the exponential map:

$$v_{hyp} = \exp_0^c(v_{eucl}) = \tanh\!\left(\sqrt{c}\,\|v_{eucl}\|\right) \cdot \frac{v_{eucl}}{\sqrt{c}\,\|v_{eucl}\|} \qquad (17)$$

where $v_{eucl}$ is the Euclidean entity vector and $v_{hyp}$ is its hyperbolic counterpart. This transformation preserves hierarchical relationships in the embedding space. Similarly, relation embeddings are projected onto the Poincaré Ball using the same exponential map, serving as edge features to encode directional dependencies.

With node and edge features generated, the graph is constructed by defining nodes for each unique entity and adding edges based on relation types, while incorporating self-loops to enhance message passing. Node features are normalized as:

$$n_i = \frac{1}{\sqrt{|N(i)|}} \sum_{j \in N(i)} h_j \qquad (18)$$

where $n_i$ is the normalized feature for node i, $N(i)$ denotes its neighbors, and $h_j$ is the feature of node j, preventing gradient instability.

To refine the graph, we apply a persistence-based filtering mechanism that prunes weak edges based on geometric significance in hyperbolic space, reducing noise and enhancing structure. Edge features are normalized for consistent scaling, ensuring balance across relation types, and the resulting hyperbolic graph effectively captures hierarchical and semantic structures.
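For concreteness, a NumPy sketch of the exponential map in Eq. (17) follows; this is our rendering of the formula (geoopt's PoincareBall manifold provides an equivalent, batched expmap0), and the toy vector is illustrative.

    import numpy as np

    def expmap0(v_eucl: np.ndarray, c: float = 1.0, eps: float = 1e-9) -> np.ndarray:
        """Map a Euclidean vector onto the Poincare ball of curvature c (Eq. 17)."""
        norm = np.linalg.norm(v_eucl) + eps
        return np.tanh(np.sqrt(c) * norm) * v_eucl / (np.sqrt(c) * norm)

    v = np.array([0.8, -1.2, 0.5])
    v_hyp = expmap0(v, c=1.0)
    assert np.linalg.norm(v_hyp) < 1.0   # result lies inside the unit Poincare ball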
C Sample Model Outputs

We present four model outputs from the CounterFact dataset, generated using HYPE.

Listing 1: Output from model for sample 983

    {
      "case_id": 983,
      "grouped_case_ids": [983],
      "num_edits": 1,
      "requested_rewrite": {
        "prompt": "{} plays the instrument",
        "relation_id": "P1303",
        "target_new": {"str": "violin", "id": "Q8355"},
        "target_true": {"str": "guitar", "id": "Q6607"},
        "subject": "Larry Knechtel"
      },
      "time": 10.52,
      "post": {
        "rewrite_prompts_probs": [{"target_new": 0.0192, "target_true": 6.16}],
        "paraphrase_prompts_probs": [{"target_new": 1.13, "target_true": 3.08},
                                     {"target_new": 4.43, "target_true": 11.02}],
        "neighborhood_prompts_probs": [{"target_new": 8.69, "target_true": 7.15}]
      }
    }

Listing 2: Output from model for sample 729

    {
      "case_id": 729,
      "grouped_case_ids": [729],
      "num_edits": 1,
      "requested_rewrite": {
        "prompt": "{} works as",
        "relation_id": "P106",
        "target_new": {"str": "astronomer", "id": "Q11063"},
        "target_true": {"str": "poet", "id": "Q49757"},
        "subject": "Johann von Rist"
      },
      "time": 10.57,
      "post": {
        "rewrite_prompts_probs": [{"target_new": 0.0136, "target_true": 14.64}],
        "paraphrase_prompts_probs": [{"target_new": 2.08, "target_true": 12.86}],
        "neighborhood_prompts_probs": [{"target_new": 12.24, "target_true": 11.59}]
      }
    }

Listing 3: Output from model for sample 560

    {
      "case_id": 560,
      "grouped_case_ids": [560],
      "num_edits": 1,
      "requested_rewrite": {
        "prompt": "{}'s headquarters are in",
        "relation_id": "P159",
        "target_new": {"str": "Prague", "id": "Q1085"},
        "target_true": {"str": "London", "id": "Q84"},
        "subject": "British Railways Board"
      },
      "time": 44.47,
      "post": {
        "rewrite_prompts_probs": [{"target_new": 8.77e-05, "target_true": 12.56}],
        "paraphrase_prompts_probs": [{"target_new": 0.0009, "target_true": 9.14}],
        "neighborhood_prompts_probs": [{"target_new": 8.06, "target_true": 2.31}]
      }
    }

Listing 4: Output from model for sample 264

    {
      "case_id": 264,
      "grouped_case_ids": [264],
      "num_edits": 1,
      "requested_rewrite": {
        "prompt": "{} is created by",
        "relation_id": "P176",
        "target_new": {"str": "Volvo", "id": "Q215293"},
        "target_true": {"str": "Toyota", "id": "Q53268"},
        "subject": "Toyota RAV4"
      },
      "time": 38.36,
      "post": {
        "rewrite_prompts_probs": [{"target_new": 0.0001, "target_true": 13.30}],
        "paraphrase_prompts_probs": [{"target_new": 0.0002, "target_true": 11.79}],
        "neighborhood_prompts_probs": [{"target_new": 8.07, "target_true": 1.95}]
      }
    }

Algorithm 1: HYPE

    1:  procedure HyperbolicGraph({(s, r, o)})
    2:      Pretrained Euclidean embeddings: v_euc(x) for entities and relations
    3:      for each entity x do
    4:          Compute hyperbolic embedding: v_hyp(x) = exp_0^c(v_euc(x))
    5:      end for
    6:      for each relation r do
    7:          Project to hyperbolic space: r_hyp(r) = exp_0^c(r_euc(r))
    8:      end for
    9:      Construct graph G = (V, E) with edges and relation features
    10:     Add self-loops, compute norm(v) = deg(v)^(-1)
    11:     Apply persistence filter: a = σ(||r_hyp|| − τ)
    12: end procedure
    13: procedure MobiusUpdate(w, L)
    14:     Compute gradient magnitude g and mask m = σ(g − τ_g)
    15:     Compute update vectors u, v and Δ = γ(u ⊗ v) ⊙ m
    16:     Update weights: w_new = w ⊕_c Δ
    17: end procedure
    18: procedure Stabilization(w, θ_gnn)
    19:     Project weights: w_final = proj(w_new)
    20:     Reset GNN parameters: θ_gnn ← θ_gnn^(0)
    21: end procedure
    22: procedure EditModel({(s, r, o)}, w)
    23:     Input: triples {(s, r, o)}, model weights w
    24:     Output: updated weights w_final
    25:     Construct hyperbolic graph (Line 1)
    26:     do
    27:         Compute loss L = ComputeLoss(w, edit target)
    28:         Update weights using the Möbius method (Line 13)
    29:         Stabilize and reset GNN (Line 18)
    30:     while editing criteria not met
    31: end procedure
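To illustrate the MobiusUpdate procedure in Algorithm 1, here is a small NumPy sketch combining Möbius addition w ⊕_c Δ with the gradient-magnitude gate m = σ(g − τ_g). The rank-1 update u ⊗ v, the flattening of weight matrices, and all hyperparameter values are our illustrative assumptions, not the authors' implementation.

    import numpy as np

    def mobius_add(x, y, c=1.0):
        """Mobius addition x (+)_c y on the Poincare ball (flat vectors)."""
        xy, x2, y2 = np.sum(x * y), np.sum(x * x), np.sum(y * y)
        num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
        return num / (1 + 2 * c * xy + c ** 2 * x2 * y2)

    def mobius_update(w, grad, u, v, gamma=0.1, tau_g=0.0, c=1.0):
        """Gated rank-1 edit applied via Mobius addition (cf. lines 13-17)."""
        mask = 1.0 / (1.0 + np.exp(-(np.abs(grad) - tau_g)))  # sigmoid gate per weight
        delta = gamma * np.outer(u, v) * mask                 # masked rank-1 update
        return mobius_add(w.ravel(), delta.ravel(), c).reshape(w.shape)

    w = np.random.randn(4, 3) * 0.05                          # toy weights near the origin
    w_new = mobius_update(w, grad=np.random.randn(4, 3),
                          u=np.random.randn(4) * 0.1, v=np.random.randn(3) * 0.1)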
Task Specific Pruning with LLM-Sieve: How Many Parameters Does Your Task Really Need?

Waleed Reda (Microsoft Research, waleedreda@microsoft.com), Abhinav Jangda (Microsoft Research, ajangda@microsoft.com), Krishna Chintalapudi (Microsoft Research, krchinta@microsoft.com)

Abstract

As Large Language Models (LLMs) are increasingly being adopted for narrow tasks, such as medical question answering or sentiment analysis, and deployed in resource-constrained settings, a key question arises: how many parameters does a task actually need? In this work, we present LLM-Sieve, the first comprehensive framework for task-specific pruning of LLMs that achieves 20–75% parameter reduction with only 1–5% accuracy degradation across diverse domains. Unlike prior methods that apply uniform pruning or rely on low-rank approximations of weight matrices or inputs in isolation, LLM-Sieve (i) learns task-aware joint projections to better approximate output behavior, and (ii) employs a Genetic Algorithm to discover differentiated pruning levels for each matrix. LLM-Sieve is fully compatible with LoRA fine-tuning and quantization, and uniquely demonstrates strong generalization across datasets within the same task domain. Together, these results establish a practical and robust mechanism to generate smaller performant task-specific models.

1 Introduction

Large Language Models (LLMs) are increasingly employed for domain-specific tasks and deployed in environments with limited computational and memory resources, such as personal devices. When an LLM, such as LLaMA 3.1, is used for a narrow task (e.g., medical question answering or sentiment analysis), it is reasonable to hypothesize that only a fraction of its parameters are necessary. This follows from the observation that the distribution of inputs, outputs, and the reasoning complexity required for a specialized task typically constitutes a strict subset of what the full model was trained to handle. Consequently, it should be possible to identify and remove parameters that are redundant or irrelevant for the task, thereby reducing the model's memory footprint and inference latency.

In this work, we investigate the central question: Given a specific downstream task, what fraction of an LLM's parameters can be pruned with minimal or no loss in performance, and how can such pruning be achieved effectively?

While pruning has been extensively studied in neural networks [1], the question of how many parameters are truly needed for a specific task remains open for LLMs. We address this with LLM-Sieve, a task-aware pruning framework that reveals substantial redundancy in large models. We find that 25–75% of parameters can be removed with only a 1–5% drop in task accuracy, depending on the model and task, yielding significant gains in memory and inference efficiency. In contrast, existing methods typically achieve only a 1–5% reduction in parameters under similar constraints. To our knowledge, this work is the first to expose such large redundancies in LLM parameters for narrow tasks, enabling more efficient task specialization.

State-of-the-art pruning methods often rely on low-rank approximations to reduce the dimensionality of matrix multiplications in LLMs, based on the intuition that most relevant information lies in a lower-dimensional subspace [2, 3, 4]. These methods typically project either the weight matrices or the input distributions independently, assuming an implicit alignment between their subspaces, a condition that may not hold in practice. In contrast, LLM-Sieve learns a joint projection that directly approximates the final output, better capturing task-specific structure.

Second, LLM-Sieve introduces differentiated pruning, in which each matrix in the LLM is pruned to a task-dependent degree. Existing approaches typically apply uniform pruning ratios across all layers and matrix types, often leaving pruning granularity as a manual hyperparameter. In contrast, LLM-Sieve employs a Genetic Algorithm (GA) to automatically explore optimal, non-uniform pruning configurations. Our GA search revealed that not all matrices contribute equally to performance: some can be pruned by up to 95% with minimal loss, while others, identified as bottleneck matrices, exhibit significant degradation even with modest pruning (e.g., 5%).

To compensate for pruning-induced degradation, many existing approaches apply Low-Rank Adaptation (LoRA) [4] to recover task performance. However, we observe that state-of-the-art pruning methods often over-rely on LoRA, resulting in dataset-specific tuning that harms generalization across unseen datasets within the same task domain. In contrast, LLM-Sieve pruned models exhibit strong task-domain generalization. We also find that LLM-Sieve and LoRA are in fact complementary: applying LoRA after pruning yields smaller, fine-tuned models with strong performance. Moreover, compression techniques such as quantization can be applied after pruning to further reduce model size. These results suggest a practical deployment pipeline of pruning, fine-tuning, and quantization to obtain compact, efficient, and robust task-specific LLMs.

In summary, our contributions are:

• We propose LLM-Sieve, a novel task-specific pruning technique that removes 25–75% of an LLM's parameters with only a 1–5% accuracy loss, vastly outperforming prior methods and resulting in commensurate gains in memory efficiency and inference latency. To our knowledge, it is the first framework to reveal such large redundancy in LLM parameters and systematically address how much of an LLM is truly needed for narrow tasks.

• LLM-Sieve introduces two key techniques: (i) it learns low-rank projections that accurately capture task-specific low-rank subspaces for each matrix multiplication in the LLM, and (ii) it applies differentiated pruning via a Genetic Algorithm, showing that some matrices can be pruned by up to 95% with minimal accuracy loss, while others, bottleneck matrices, degrade sharply with even minor pruning. We believe identifying such task-critical components may offer insight into how knowledge and reasoning are organized within LLMs.

• We show that LLM-Sieve is compatible with LoRA fine-tuning and quantization, and generalizes well to unseen datasets within the same task domain. In contrast, prior methods often overfit to the calibration set, relying heavily on LoRA and failing to generalize across datasets.

2 Related Work

Recent advancements in compressing large language models (LLMs) have led to various techniques aimed at reducing model size and computational requirements while maintaining performance. These methods can be broadly categorized into either pruning strategies or other downsizing approaches such as quantization and knowledge distillation.

Pruning Techniques. Pruning methods can be classified into unstructured and structured approaches. Unstructured pruning typically removes weights based on magnitude [5, 6], resulting in sparse matrices. While effective at reducing parameter count, such sparsity often yields little practical speedup on GPUs, which are not optimized for irregular memory access patterns [7].
Unstructured pruning techniques like SparseGPT [7] and Wanda [8] often use light-weight tuning to recover performance after aggressive pruning. Structured pruning, on the other hand, removes entire attention heads, neurons, or matrix blocks, yielding dense submatrices that better align with GPU hardware. LLM-Pruner [9], for example, leverages gradient-based saliency to identify and remove less critical coupled structures within the model, such as neuron channels and attention heads. Among structured methods, low-rank approximation [10] offers a particularly efficient path by reducing the dimensions of matrix multiplications. SliceGPT [2] reduces model size by projecting inputs and intermediate activations into lower-dimensional subspaces, effectively pruning entire rows or columns from weight matrices. LASER [3] focuses on approximating weight matrices themselves through selective low-rank decompositions, targeting specific layers within the transformer architecture. LLM-Sieve builds on this structured, low-rank perspective, but departs from prior approaches by learning a joint projection over both inputs and outputs, providing higher fidelity.

Other LLM Downsizing Techniques. Beyond pruning, quantization and knowledge distillation are widely used to reduce LLM size. Quantization [11, 12, 13, 14] lowers precision (e.g., from FP32 to INT8/FP4), reducing memory and compute with minimal accuracy loss [15, 16]. Distillation [17] transfers knowledge from a larger teacher model to a smaller student by aligning outputs. These techniques are orthogonal to pruning and can be used in combination.

3 Transformer Architecture Background

In this section, we provide the necessary background on transformer networks [18] required for the rest of the paper. The transformer architecture (Figure 1) comprises a series of layers, each composed of a multi-head self-attention mechanism followed by a feed-forward network (FFN) block. Between these blocks, normalization layers such as LayerNorm [19] or RMSNorm [20] are applied, often in conjunction with residual connections.

Inference begins with an embedding layer that transforms input token IDs and position IDs into dense vectors. After embedding, the signal matrix X ∈ ℜ^{N×D}, where D denotes the embedding dimension and N the sequence length, is passed through a LayerNorm operation that normalizes each row of X. X is then transformed as it passes through each of the layers. In the attention block at the k-th layer, the signal is projected into key (K^k = X W_K^k), query (Q^k = X W_Q^k), and value (V^k = X W_V^k) matrices using the respective weight matrices W_K^k, W_Q^k, and W_V^k, usually concatenated into a single matrix W_QKV^k for efficient computation. The self-attention mechanism computes the attention scores:

$$\mathrm{Attention}(Q^k, K^k, V^k) = \mathrm{softmax}\!\left(\frac{Q^k (K^k)^\top}{\sqrt{D}}\right) V^k \qquad (1)$$

Multi-head attention extends this by performing these operations in parallel across h heads, concatenating their outputs, and applying a final projection:

$$\mathrm{MultiHead}(Q^k, K^k, V^k) = \mathrm{Concat}(\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_h)\, W_O^k \qquad (2)$$

where W_O^k is called the output projection matrix. The FFN block typically consists of two linear transformations separated by a non-linear activation function, such as ReLU or GeLU. The operation of an FFN block is given by:

$$\mathrm{FFN}(X) = \sigma(X W_1^k + b_1^k)\, W_2^k + b_2^k \qquad (3)$$

where W_1^k and W_2^k are weight matrices, b_1^k and b_2^k are biases, and σ represents the activation function. These blocks are repeated in every layer, and the embedding output has to pass through all the layers. Due to the autoregressive nature of transformers, outputs are fed back, and multiple forward passes are required to produce the response to the prompt.

4 LLM-Sieve

LLM-Sieve aims to reduce the number of parameters in a Large Language Model (LLM) while ensuring that the degradation in end-to-end task performance remains within a user-specified tolerance ϵ. Let L(Θ) denote the original LLM with parameters Θ, which maps an input set X to outputs Y = L(X | Θ). A task-specific evaluation function e(Y) quantifies the quality of the model's outputs; this may be a standard metric such as F1 score or a learned quality score, e.g., from GPT-4o. The objective of LLM-Sieve is to construct a pruned model L̂(Θ̂) such that its performance on a task-specific dataset X_C satisfies e(L̂(X_C | Θ̂)) ≥ a_0, where a_0 is the target performance level, defined in relation to the original model's performance a* = e(L(X_C | Θ)) as (a* − a_0)/a* = ϵ. In other words, LLM-Sieve ensures that the pruned model maintains at least (1 − ϵ) of the original model's task-specific performance.

Figure 1: How each matrix multiplication in an LLM is approximated in LLM-Sieve.
Figure 2: Low-rank approximate multiplications in LLM-Sieve pruning.
Figure 3: Intuition behind pruning in LLM-Sieve.
Figure 4: Calibration step in LLM-Sieve.

4.1 LLM-Sieve Pruning

LLM-Sieve approximates each matrix multiplication in a task-specific lower-dimensional subspace (Figure 1). To preserve the dimensionality required by surrounding non-linear transformations, it applies different approximation strategies to the matrix multiplications that precede and follow non-linear layers.

Approximating Pre-Nonlinear Multiplications (Figure 2). In these multiplications, Y = WX, where W ∈ ℜ^{H×D} is the weight matrix with output dimension H and input dimension D, and Y ∈ ℜ^H serves as input to a non-linear layer; for example, W_QKV in the self-attention layer, followed by Softmax, and W_1 in the FFN, followed by ReLU/GeLU. These multiplications are approximated as

$$\tilde{Y} = (WA)\left(A^\dagger X\right) \qquad (4)$$

where A† ∈ ℜ^{R×D} is the pseudo-inverse of the adaptor matrix A ∈ ℜ^{D×R}, and R < D is the reduced rank of Ŵ = WA ∈ ℜ^{H×R}. During inference, inputs are first transformed into a low-rank space as X̂ = A†X ∈ ℜ^R, and then the outputs are computed as Ỹ = ŴX̂ ∈ ℜ^H.

Approximating Post-Nonlinear Multiplications (Figure 2). In these multiplications, Y = WX with W ∈ ℜ^{D×H}, the input X is the output of a non-linear layer; for example, W_o in self-attention, preceded by Softmax, and W_2 in the FFN, preceded by ReLU/GeLU. These multiplications are approximated as

$$\tilde{Y} = A^\dagger\left((AW)X\right) \qquad (5)$$

where A† is the pseudo-inverse of the adaptor matrix A ∈ ℜ^{R×D}, and R < D is the reduced rank of Ŵ = AW ∈ ℜ^{R×H}. During inference, the outputs are computed in a low-dimensional space as Ŷ = ŴX ∈ ℜ^R and then projected up using Ỹ = A†Ŷ ∈ ℜ^D.
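To make Eqs. (4)-(5) concrete, here is a small NumPy sketch of the pre-nonlinear approximation. In LLM-Sieve the adaptor A is learned (Section 4.1); here it is a random stand-in, and the dimensions are illustrative.

    import numpy as np

    H, D, R = 512, 768, 64            # output dim, input dim, reduced rank (illustrative)
    W = np.random.randn(H, D)         # original pre-nonlinear weight, Y = W X
    A = np.random.randn(D, R)         # adaptor (learned in LLM-Sieve; random here)
    A_pinv = np.linalg.pinv(A)        # (R, D) pseudo-inverse

    W_hat = W @ A                     # (H, R): stored instead of W, per Eq. (4)
    x = np.random.randn(D)
    y_approx = W_hat @ (A_pinv @ x)   # project input down to rank R, then up to H dims

    orig, pruned = H * D, R * (H + D) # parameters after pruning: W_hat plus A
    print(f"pruning factor p = {pruned / orig:.3f}")  # matches p = R(H+D)/(DH)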
Intuition into LLM-Sieve's Low-Rank Projections. In any matrix multiplication Y = WX within an LLM, the inputs X depend on the task-specific data distribution, whereas the weight matrices W are task-agnostic and trained for general-purpose language modeling. Existing pruning methods typically fall into two categories. The first reduces matrix dimensionality using Singular Value Decomposition (SVD), but this approach does not exploit the limited support of the task-specific input distribution. The second performs Principal Component Analysis (PCA) on the inputs X to identify a low-rank projection, assuming that input compression alone suffices. However, as illustrated in Figure 3, the subspaces spanned by W and X are not necessarily aligned, which limits the effectiveness of such decompositions. Rather than projecting weights or inputs independently, LLM-Sieve learns task-specific projections by directly approximating the output Y. Specifically, it seeks projections that minimize the average reconstruction error ‖Y − Ỹ‖.

LLM-Sieve Calibration Step. The first step in LLM-Sieve is to perform inference over all inputs in the task-specific calibration data set X_C, capturing input-output pairs for every matrix multiplication in the LLM (Figure 4). For instance, Llama-3.1-70B has 320 matrix multiplications: each of its 80 layers contains four key multiplications, W_QKV, W_o, W_1, and W_2. Thus, for Llama-3.1-70B, 320 input-output pairs are captured, where each layer (k = 1, ..., 80) contributes ⟨X_C^{k,1}, Y_C^{k,1}⟩, ⟨X_C^{k,2}, Y_C^{k,2}⟩, ⟨X_C^{k,3}, Y_C^{k,3}⟩, and ⟨X_C^{k,4}, Y_C^{k,4}⟩ corresponding to the four multiplications per layer. These measurements serve to find the optimal values of the adaptor matrices A_i^k, as described next.

Finding Adaptors A_i^k. For each matrix multiplication i in layer k, LLM-Sieve learns a task-specific adaptor matrix A_i^k by solving a regression problem that minimizes the reconstruction error between the true outputs Y_C^{k,i} over the calibration data set and their low-rank approximations Ỹ_C^{k,i}. A_i^k is determined by gradient descent to minimize ‖Y_C^{k,i} − Ỹ_C^{k,i}‖.

Pruning Factor. Since H and D are fixed in each multiplication, the choice of R determines the extent of pruning. After pruning, the model stores Ŵ instead of W, along with the adaptor A. Thus, the total number of parameters after pruning is R(H + D), and the pruning factor is given by p = R(H + D)/(DH).

4.2 Differentiated Pruning

LLM-Sieve pruning, as described in Section 4.1, requires specifying a parameter R that defines the intermediate rank used in each adaptor matrix A. In a model with N layers, there are 4N matrix multiplications, corresponding to W_QKV, W_o, W_1, and W_2 in each layer. Crucially, each of these multiplications may vary in its sensitivity to pruning, as different matrices encode differing amounts of task-specific information. To support differentiated pruning, we define the pruning factor vector p = ⟨p_1^1, p_2^1, ..., p_3^N, p_4^N⟩ ∈ ℜ^{4N}, where each p_i^k ∈ [0, 1] specifies the fraction of parameters retained in the i-th matrix of layer k.

Recall that the objective of LLM-Sieve is to minimize the total parameter count while ensuring that the loss in end-to-end task performance does not exceed a tolerance ϵ. Therefore, the goal of differentiated pruning is to determine the optimal pruning factor vector p that satisfies

$$\min_{\mathbf{p}} \|\hat{\Theta}(\mathbf{p})\|_0 \quad \text{subject to} \quad \frac{a^* - e\!\left(\hat{L}(X_C \mid \hat{\Theta}(\mathbf{p}))\right)}{a^*} \leq \epsilon, \qquad (6)$$

where Θ̂(p) denotes the pruned parameter set induced by p, and a* is the baseline performance of the unpruned model. The end-to-end evaluation function e(L̂(X_C | Θ̂(p))) may not be differentiable; for example, it may be defined as an average quality score obtained from a black-box model such as GPT-4o. This non-differentiability precludes using gradient-based optimization for finding p.

4.2.1 Common Approach: Uniform Pruning (UP)

A commonly employed approach in many pruning techniques is to prune each matrix by the same pruning factor p*. This constraint vastly reduces the search space, and practitioners typically determine a suitable value of p* through trial and error.

Uniform Ratio Pruning With Binary Search. To automate the selection of an optimal uniform pruning factor p*, LLM-Sieve employs a binary search strategy. The goal is to identify the smallest value of p* (i.e., the greatest pruning) such that the pruned model still satisfies the end-to-end performance constraint. At each iteration, we maintain an upper bound p_up and a lower bound p_low such that the performance constraint (i.e., degradation < ϵ) is violated at p_low but satisfied at p_up. We evaluate model performance on the calibration dataset using the midpoint pruning ratio p_mid = (p_up + p_low)/2. If the pruned model with ratio p_mid meets the accuracy constraint, we update the upper bound: p_up ← p_mid. Otherwise, we update the lower bound: p_low ← p_mid. This process repeats until convergence to the optimal pruning factor p*.
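A schematic Python version of this binary search follows; meets_constraint stands in for the end-to-end evaluation (e.g., GPT-4o scoring on the calibration set) and is an assumed callback, not part of the paper.

    def search_uniform_factor(meets_constraint, p_low=0.0, p_up=1.0, tol=0.01):
        """Binary search for the smallest uniform pruning factor p* such that
        the pruned model still meets the accuracy-degradation budget.
        meets_constraint(p) -> True if degradation < eps at retained fraction p."""
        while p_up - p_low > tol:
            p_mid = (p_low + p_up) / 2
            if meets_constraint(p_mid):
                p_up = p_mid       # constraint met: try pruning harder (smaller p)
            else:
                p_low = p_mid      # constraint violated: retain more parameters
        return p_up

    # Toy usage: if the constraint holds for any p >= 0.4,
    # search_uniform_factor(lambda p: p >= 0.4) converges to ~0.4.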
4.2.2 Differentiated Pruning With Genetic Algorithm

Unlike uniform pruning, which optimizes a single global pruning factor, differentiated pruning requires navigating a high-dimensional, non-convex, and non-differentiable search space, making gradient-based or exhaustive search methods impractical; consequently, we use a Genetic Algorithm (GA) for efficient exploration. A Genetic Algorithm is an evolutionary optimization technique inspired by natural selection that is often employed to navigate combinatorial search spaces efficiently. A population of chromosomes, each encoding a particular point in the search space, is maintained. The population evolves over successive generations through biologically inspired operations: crossover (which combines segments of two parent chromosomes) and mutation (which introduces small random changes). After each generation, only the fittest individuals, those with pruning configurations yielding the highest task-specific performance, are retained, ensuring gradual convergence toward an optimal or near-optimal solution.

Chromosome Encoding in LLM-Sieve. In our GA, each chromosome is simply the pruning vector. In each generation, a population of M chromosomes P = [p_1, p_2, ..., p_M] is maintained (we chose M = 100). At the beginning, the initial population of chromosomes is created by picking pruning factors randomly from a predefined subset of values in (0, 1], namely {1, 0.9, 0.75, 0.6, 0.5, 0.35, 0.25, 0.2, 0.1, 0.05} in our implementation. Ten of the chromosomes are also initialized with all pruning factors set to the same value.

Crossover Operation. In each crossover operation, two chromosomes are picked with probability proportional to their fitness values. A random split point is selected, and the chromosomes are severed at this position. The offspring chromosomes are then formed by swapping and recombining the left and right segments. We set the crossover probability to 0.5 in our implementation.

Mutation Operation. The mutation operator randomly perturbs the chromosome by increasing or decreasing pruning factors by one step within the predefined subset of values at random locations. The mutation probability is kept at 0.2% in our implementation.

End-to-End Fitness Function. The fitness function is central to guiding the GA's search. Since LLM-Sieve aims to maximize pruning while maintaining accuracy above a target threshold a_0, the fitness function is designed to reward compression when performance exceeds a_0 and penalize it exponentially when it falls below:

$$F(r, a) = c(r) \times \left(1 + e^{50(a - a_0)}\right) \qquad (7)$$

Here, c(r) denotes the ratio of parameters in the pruned model to that of the original, and a is the measured end-to-end task accuracy. As described in our evaluation setup, we use GPT-4o-as-judge to score performance on our task dataset. The Genetic Algorithm terminates when no significant improvement (defined as less than a 5% change) is observed over 10 consecutive generations.
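The following toy sketch mirrors the GA loop described above (a population of pruning vectors, fitness-proportional selection, single-point crossover, and one-step mutation). The population size, probabilities, generation count, and the fitness callback are simplified assumptions rather than the authors' exact implementation.

    import random

    STEPS = [1, 0.9, 0.75, 0.6, 0.5, 0.35, 0.25, 0.2, 0.1, 0.05]

    def crossover(pa, pb):
        cut = random.randrange(1, len(pa))           # random split point
        return pa[:cut] + pb[cut:]

    def mutate(p, prob=0.002):
        q = list(p)
        for i in range(len(q)):
            if random.random() < prob:               # nudge one step up or down
                j = STEPS.index(q[i]) + random.choice([-1, 1])
                q[i] = STEPS[max(0, min(len(STEPS) - 1, j))]
        return q

    def evolve(fitness, n_mats, pop_size=100, generations=50):
        """fitness(p) must return a positive score for a pruning vector p."""
        pop = [[random.choice(STEPS) for _ in range(n_mats)] for _ in range(pop_size)]
        for _ in range(generations):
            scores = [fitness(p) for p in pop]
            parents = random.choices(pop, weights=scores, k=pop_size)  # fitness-proportional
            pop = [mutate(crossover(parents[i], parents[-1 - i])) for i in range(pop_size)]
        return max(pop, key=fitness)

    # Toy usage (favors heavier pruning): evolve(lambda p: 1.0 / (sum(p) + 1e-6), n_mats=8)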
Embarrassingly Parallel Implementation. To improve the efficiency of running the GA, we precompute pruned matrices for