Add Batch 6e28cd2f-d36e-49ee-8252-6e6b4be5110b data
This view is limited to 50 files because it contains too many changes.
- .gitattributes +64 -0
- 2025/(Almost) Free Modality Stitching of Foundation Models/a9f23956-6f94-4537-a685-b8aef0c40a7a_content_list.json +0 -0
- 2025/(Almost) Free Modality Stitching of Foundation Models/a9f23956-6f94-4537-a685-b8aef0c40a7a_model.json +0 -0
- 2025/(Almost) Free Modality Stitching of Foundation Models/a9f23956-6f94-4537-a685-b8aef0c40a7a_origin.pdf +3 -0
- 2025/(Almost) Free Modality Stitching of Foundation Models/full.md +437 -0
- 2025/(Almost) Free Modality Stitching of Foundation Models/images.zip +3 -0
- 2025/(Almost) Free Modality Stitching of Foundation Models/layout.json +0 -0
- 2025/3DS_ Medical Domain Adaptation of LLMs via Decomposed Difficulty-based Data Selection/055c7ca1-23e4-4b4e-920e-8094616e6655_content_list.json +0 -0
- 2025/3DS_ Medical Domain Adaptation of LLMs via Decomposed Difficulty-based Data Selection/055c7ca1-23e4-4b4e-920e-8094616e6655_model.json +0 -0
- 2025/3DS_ Medical Domain Adaptation of LLMs via Decomposed Difficulty-based Data Selection/055c7ca1-23e4-4b4e-920e-8094616e6655_origin.pdf +3 -0
- 2025/3DS_ Medical Domain Adaptation of LLMs via Decomposed Difficulty-based Data Selection/full.md +0 -0
- 2025/3DS_ Medical Domain Adaptation of LLMs via Decomposed Difficulty-based Data Selection/images.zip +3 -0
- 2025/3DS_ Medical Domain Adaptation of LLMs via Decomposed Difficulty-based Data Selection/layout.json +0 -0
- 2025/3MDBench_ Medical Multimodal Multi-agent Dialogue Benchmark/5010b5c5-34fd-495f-8d62-35405e81977f_content_list.json +0 -0
- 2025/3MDBench_ Medical Multimodal Multi-agent Dialogue Benchmark/5010b5c5-34fd-495f-8d62-35405e81977f_model.json +0 -0
- 2025/3MDBench_ Medical Multimodal Multi-agent Dialogue Benchmark/5010b5c5-34fd-495f-8d62-35405e81977f_origin.pdf +3 -0
- 2025/3MDBench_ Medical Multimodal Multi-agent Dialogue Benchmark/full.md +0 -0
- 2025/3MDBench_ Medical Multimodal Multi-agent Dialogue Benchmark/images.zip +3 -0
- 2025/3MDBench_ Medical Multimodal Multi-agent Dialogue Benchmark/layout.json +0 -0
- 2025/3R_ Enhancing Sentence Representation Learning via Redundant Representation Reduction/b739e777-ee83-4d28-8968-3c5ed2a33419_content_list.json +0 -0
- 2025/3R_ Enhancing Sentence Representation Learning via Redundant Representation Reduction/b739e777-ee83-4d28-8968-3c5ed2a33419_model.json +0 -0
- 2025/3R_ Enhancing Sentence Representation Learning via Redundant Representation Reduction/b739e777-ee83-4d28-8968-3c5ed2a33419_origin.pdf +3 -0
- 2025/3R_ Enhancing Sentence Representation Learning via Redundant Representation Reduction/full.md +374 -0
- 2025/3R_ Enhancing Sentence Representation Learning via Redundant Representation Reduction/images.zip +3 -0
- 2025/3R_ Enhancing Sentence Representation Learning via Redundant Representation Reduction/layout.json +0 -0
- 2025/A Case Against Implicit Standards_ Homophone Normalization in Machine Translation for Languages that use the Ge’ez Script./a570cebb-8359-4bf2-9617-79a7ee147972_content_list.json +1635 -0
- 2025/A Case Against Implicit Standards_ Homophone Normalization in Machine Translation for Languages that use the Ge’ez Script./a570cebb-8359-4bf2-9617-79a7ee147972_model.json +0 -0
- 2025/A Case Against Implicit Standards_ Homophone Normalization in Machine Translation for Languages that use the Ge’ez Script./a570cebb-8359-4bf2-9617-79a7ee147972_origin.pdf +3 -0
- 2025/A Case Against Implicit Standards_ Homophone Normalization in Machine Translation for Languages that use the Ge’ez Script./full.md +284 -0
- 2025/A Case Against Implicit Standards_ Homophone Normalization in Machine Translation for Languages that use the Ge’ez Script./images.zip +3 -0
- 2025/A Case Against Implicit Standards_ Homophone Normalization in Machine Translation for Languages that use the Ge’ez Script./layout.json +0 -0
- 2025/A Causal Lens for Evaluating Faithfulness Metrics/399cf055-5b9d-493e-af03-9c9a08c4262c_content_list.json +0 -0
- 2025/A Causal Lens for Evaluating Faithfulness Metrics/399cf055-5b9d-493e-af03-9c9a08c4262c_model.json +0 -0
- 2025/A Causal Lens for Evaluating Faithfulness Metrics/399cf055-5b9d-493e-af03-9c9a08c4262c_origin.pdf +3 -0
- 2025/A Causal Lens for Evaluating Faithfulness Metrics/full.md +595 -0
- 2025/A Causal Lens for Evaluating Faithfulness Metrics/images.zip +3 -0
- 2025/A Causal Lens for Evaluating Faithfulness Metrics/layout.json +0 -0
- 2025/A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations/f8da7788-a763-47af-923a-ab5fe87e7724_content_list.json +0 -0
- 2025/A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations/f8da7788-a763-47af-923a-ab5fe87e7724_model.json +0 -0
- 2025/A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations/f8da7788-a763-47af-923a-ab5fe87e7724_origin.pdf +3 -0
- 2025/A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations/full.md +333 -0
- 2025/A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations/images.zip +3 -0
- 2025/A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations/layout.json +0 -0
- 2025/A Comprehensive Literary Chinese Reading Comprehension Dataset with an Evidence Curation Based Solution/2e889ff8-a374-489b-8cab-69b6b51f79c3_content_list.json +0 -0
- 2025/A Comprehensive Literary Chinese Reading Comprehension Dataset with an Evidence Curation Based Solution/2e889ff8-a374-489b-8cab-69b6b51f79c3_model.json +0 -0
- 2025/A Comprehensive Literary Chinese Reading Comprehension Dataset with an Evidence Curation Based Solution/2e889ff8-a374-489b-8cab-69b6b51f79c3_origin.pdf +3 -0
- 2025/A Comprehensive Literary Chinese Reading Comprehension Dataset with an Evidence Curation Based Solution/full.md +783 -0
- 2025/A Comprehensive Literary Chinese Reading Comprehension Dataset with an Evidence Curation Based Solution/images.zip +3 -0
- 2025/A Comprehensive Literary Chinese Reading Comprehension Dataset with an Evidence Curation Based Solution/layout.json +0 -0
- 2025/A Computational Simulation of Language Production in First Language Acquisition/63198e81-ac5b-4894-8c54-27e3c6f8a917_content_list.json +0 -0
.gitattributes CHANGED
@@ -1260,3 +1260,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2025/Zero-Shot[[:space:]]Privacy-Aware[[:space:]]Text[[:space:]]Rewriting[[:space:]]via[[:space:]]Iterative[[:space:]]Tree[[:space:]]Search/7c7b6df6-e8a8-4ed2-a2c5-290e47d18e27_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2025/Zero-shot[[:space:]]Cross-lingual[[:space:]]NER[[:space:]]via[[:space:]]Mitigating[[:space:]]Language[[:space:]]Difference_[[:space:]]An[[:space:]]Entity-aligned[[:space:]]Translation[[:space:]]Perspective/688cba11-b487-4a27-ad06-b4d1a5ecd3b7_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2025/Zero-shot[[:space:]]Graph[[:space:]]Reasoning[[:space:]]via[[:space:]]Retrieval[[:space:]]Augmented[[:space:]]Framework[[:space:]]with[[:space:]]LLMs/8c21e35a-a802-4cc5-8c8b-54340470a983_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/(Almost)[[:space:]]Free[[:space:]]Modality[[:space:]]Stitching[[:space:]]of[[:space:]]Foundation[[:space:]]Models/a9f23956-6f94-4537-a685-b8aef0c40a7a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/3DS_[[:space:]]Medical[[:space:]]Domain[[:space:]]Adaptation[[:space:]]of[[:space:]]LLMs[[:space:]]via[[:space:]]Decomposed[[:space:]]Difficulty-based[[:space:]]Data[[:space:]]Selection/055c7ca1-23e4-4b4e-920e-8094616e6655_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/3MDBench_[[:space:]]Medical[[:space:]]Multimodal[[:space:]]Multi-agent[[:space:]]Dialogue[[:space:]]Benchmark/5010b5c5-34fd-495f-8d62-35405e81977f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/3R_[[:space:]]Enhancing[[:space:]]Sentence[[:space:]]Representation[[:space:]]Learning[[:space:]]via[[:space:]]Redundant[[:space:]]Representation[[:space:]]Reduction/b739e777-ee83-4d28-8968-3c5ed2a33419_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Case[[:space:]]Against[[:space:]]Implicit[[:space:]]Standards_[[:space:]]Homophone[[:space:]]Normalization[[:space:]]in[[:space:]]Machine[[:space:]]Translation[[:space:]]for[[:space:]]Languages[[:space:]]that[[:space:]]use[[:space:]]the[[:space:]]Ge’ez[[:space:]]Script./a570cebb-8359-4bf2-9617-79a7ee147972_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Causal[[:space:]]Lens[[:space:]]for[[:space:]]Evaluating[[:space:]]Faithfulness[[:space:]]Metrics/399cf055-5b9d-493e-af03-9c9a08c4262c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Comprehensive[[:space:]]Framework[[:space:]]to[[:space:]]Operationalize[[:space:]]Social[[:space:]]Stereotypes[[:space:]]for[[:space:]]Responsible[[:space:]]AI[[:space:]]Evaluations/f8da7788-a763-47af-923a-ab5fe87e7724_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Comprehensive[[:space:]]Literary[[:space:]]Chinese[[:space:]]Reading[[:space:]]Comprehension[[:space:]]Dataset[[:space:]]with[[:space:]]an[[:space:]]Evidence[[:space:]]Curation[[:space:]]Based[[:space:]]Solution/2e889ff8-a374-489b-8cab-69b6b51f79c3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Computational[[:space:]]Simulation[[:space:]]of[[:space:]]Language[[:space:]]Production[[:space:]]in[[:space:]]First[[:space:]]Language[[:space:]]Acquisition/63198e81-ac5b-4894-8c54-27e3c6f8a917_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Culturally-diverse[[:space:]]Multilingual[[:space:]]Multimodal[[:space:]]Video[[:space:]]Benchmark[[:space:]]&[[:space:]]Model/af472a4c-a4a5-40cc-92f4-395f21202212_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Fully[[:space:]]Probabilistic[[:space:]]Perspective[[:space:]]on[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Unlearning_[[:space:]]Evaluation[[:space:]]and[[:space:]]Optimization/037eee17-b166-4ab8-93cc-afb25da27d6d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Generative[[:space:]]Pre-Trained[[:space:]]Language[[:space:]]Model[[:space:]]for[[:space:]]Channel[[:space:]]Prediction[[:space:]]in[[:space:]]Wireless[[:space:]]Communications[[:space:]]Systems/7cc60304-68c5-42a3-be14-d714158cef3a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Good[[:space:]]Plan[[:space:]]is[[:space:]]Hard[[:space:]]to[[:space:]]Find_[[:space:]]Aligning[[:space:]]Models[[:space:]]with[[:space:]]Preferences[[:space:]]is[[:space:]]Misaligned[[:space:]]with[[:space:]]What[[:space:]]Helps[[:space:]]Users/a37d1738-60ee-4691-8179-5a59ba9ff98b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Graph-Theoretical[[:space:]]Framework[[:space:]]for[[:space:]]Analyzing[[:space:]]the[[:space:]]Behavior[[:space:]]of[[:space:]]Causal[[:space:]]Language[[:space:]]Models/5ff7c763-db73-4cd0-a9e5-b390adf83cbc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Head[[:space:]]to[[:space:]]Predict[[:space:]]and[[:space:]]a[[:space:]]Head[[:space:]]to[[:space:]]Question_[[:space:]]Pre-trained[[:space:]]Uncertainty[[:space:]]Quantification[[:space:]]Heads[[:space:]]for[[:space:]]Hallucination[[:space:]]Detection[[:space:]]in[[:space:]]LLM[[:space:]]Outputs/8e0cd3f1-2049-4eaf-b8b2-558b0d0cd5d3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Knowledge-driven[[:space:]]Adaptive[[:space:]]Collaboration[[:space:]]of[[:space:]]LLMs[[:space:]]for[[:space:]]Enhancing[[:space:]]Medical[[:space:]]Decision-making/785a6519-017d-4742-bbe3-65cb576a8c04_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Middle[[:space:]]Path[[:space:]]for[[:space:]]On-Premises[[:space:]]LLM[[:space:]]Deployment_[[:space:]]Preserving[[:space:]]Privacy[[:space:]]Without[[:space:]]Sacrificing[[:space:]]Model[[:space:]]Confidentiality/a17dd34b-3d05-4853-8c4b-578c5cc16c8f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Multi-Agent[[:space:]]Framework[[:space:]]with[[:space:]]Automated[[:space:]]Decision[[:space:]]Rule[[:space:]]Optimization[[:space:]]for[[:space:]]Cross-Domain[[:space:]]Misinformation[[:space:]]Detection/6849b1cb-dfbe-4d6f-acc7-8a32797f732d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Multi-Level[[:space:]]Benchmark[[:space:]]for[[:space:]]Causal[[:space:]]Language[[:space:]]Understanding[[:space:]]in[[:space:]]Social[[:space:]]Media[[:space:]]Discourse/c88cc15e-7e5f-4eb5-8e99-08ae15cf091b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Multilingual,[[:space:]]Culture-First[[:space:]]Approach[[:space:]]to[[:space:]]Addressing[[:space:]]Misgendering[[:space:]]in[[:space:]]LLM[[:space:]]Applications/314e2a1b-4eb2-405f-bf56-ba6bbae1090f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Necessary[[:space:]]Step[[:space:]]toward[[:space:]]Faithfulness_[[:space:]]Measuring[[:space:]]and[[:space:]]Improving[[:space:]]Consistency[[:space:]]in[[:space:]]Free-Text[[:space:]]Explanations/5c2a4925-9cc7-4999-bce8-947609e0acbf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Position[[:space:]]Paper[[:space:]]on[[:space:]]the[[:space:]]Automatic[[:space:]]Generation[[:space:]]of[[:space:]]Machine[[:space:]]Learning[[:space:]]Leaderboards/cd62d3d5-3f99-4e5c-8f51-0935773e81d9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Probabilistic[[:space:]]Inference[[:space:]]Scaling[[:space:]]Theory[[:space:]]for[[:space:]]LLM[[:space:]]Self-Correction/d575813d-f457-478c-a3ca-860b2f7d2e3e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Rigorous[[:space:]]Evaluation[[:space:]]of[[:space:]]LLM[[:space:]]Data[[:space:]]Generation[[:space:]]Strategies[[:space:]]for[[:space:]]Low-Resource[[:space:]]Languages/2d5d922b-183a-4b57-b74f-0fc254652030_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Sequential[[:space:]]Multi-Stage[[:space:]]Approach[[:space:]]for[[:space:]]Code[[:space:]]Vulnerability[[:space:]]Detection[[:space:]]via[[:space:]]Confidence-[[:space:]]and[[:space:]]Collaboration-based[[:space:]]Decision[[:space:]]Making/19f6094b-a491-47a1-b3c4-e813795a9006_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Simple[[:space:]]Yet[[:space:]]Effective[[:space:]]Method[[:space:]]for[[:space:]]Non-Refusing[[:space:]]Context[[:space:]]Relevant[[:space:]]Fine-grained[[:space:]]Safety[[:space:]]Steering[[:space:]]in[[:space:]]LLMs/8d1de167-1270-4461-9ad7-ab95a9beba42_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Survey[[:space:]]of[[:space:]]Link[[:space:]]Prediction[[:space:]]in[[:space:]]N-ary[[:space:]]Knowledge[[:space:]]Graphs/69625e93-85bf-4aeb-9192-666d217d9542_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Symbolic[[:space:]]Adversarial[[:space:]]Learning[[:space:]]Framework[[:space:]]for[[:space:]]Evolving[[:space:]]Fake[[:space:]]News[[:space:]]Generation[[:space:]]and[[:space:]]Detection/9c1bd0ea-0d8d-4af0-8841-d96eedd91f05_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Systematic[[:space:]]Analysis[[:space:]]of[[:space:]]Base[[:space:]]Model[[:space:]]Choice[[:space:]]for[[:space:]]Reward[[:space:]]Modeling/da0f47ed-9e2e-4f9a-9cf5-8cd1a6791598_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Systematic[[:space:]]Survey[[:space:]]of[[:space:]]Automatic[[:space:]]Prompt[[:space:]]Optimization[[:space:]]Techniques/ecf85e4a-424b-4571-b2ad-34ca380b3d49_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Text-Based[[:space:]]Recommender[[:space:]]System[[:space:]]that[[:space:]]Leverages[[:space:]]Explicit[[:space:]]Affective[[:space:]]State[[:space:]]Preferences/45dc21af-be9a-45a7-9250-055521d6a3ec_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Training-Free[[:space:]]Length[[:space:]]Extrapolation[[:space:]]Approach[[:space:]]for[[:space:]]LLMs_[[:space:]]Greedy[[:space:]]Attention[[:space:]]Logit[[:space:]]Interpolation/82d1cf1d-e81f-440e-8d34-4d8f125c05db_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ACING_[[:space:]]Actor-Critic[[:space:]]for[[:space:]]Instruction[[:space:]]Learning[[:space:]]in[[:space:]]Black-Box[[:space:]]LLMs/5eca46ae-8873-4183-b6b6-449a02ecd777_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AFRIDOC-MT_[[:space:]]Document-level[[:space:]]MT[[:space:]]Corpus[[:space:]]for[[:space:]]African[[:space:]]Languages/94acc90f-e013-45b6-a2f2-f7e7f0f471a9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AI[[:space:]]Argues[[:space:]]Differently_[[:space:]]Distinct[[:space:]]Argumentative[[:space:]]and[[:space:]]Linguistic[[:space:]]Patterns[[:space:]]of[[:space:]]LLMs[[:space:]]in[[:space:]]Persuasive[[:space:]]Contexts/c163a602-537e-4981-bdc4-2a4699428377_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AI[[:space:]]Chatbots[[:space:]]as[[:space:]]Professional[[:space:]]Service[[:space:]]Agents_[[:space:]]Developing[[:space:]]a[[:space:]]Professional[[:space:]]Identity/b987e577-6cae-4b4c-add0-93cfc0d93662_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AI[[:space:]]Knows[[:space:]]Where[[:space:]]You[[:space:]]Are_[[:space:]]Exposure,[[:space:]]Bias,[[:space:]]and[[:space:]]Inference[[:space:]]in[[:space:]]Multimodal[[:space:]]Geolocation[[:space:]]with[[:space:]]KoreaGEO/3e689cc6-0d71-4530-93f3-1eeb29e28a3e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AI[[:space:]]Sees[[:space:]]Your[[:space:]]Location—But[[:space:]]With[[:space:]]A[[:space:]]Bias[[:space:]]Toward[[:space:]]The[[:space:]]Wealthy[[:space:]]World/bd471d35-aa17-45ee-adb1-6d743e35d486_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AIMMerging_[[:space:]]Adaptive[[:space:]]Iterative[[:space:]]Model[[:space:]]Merging[[:space:]]Using[[:space:]]Training[[:space:]]Trajectories[[:space:]]for[[:space:]]Language[[:space:]]Model[[:space:]]Continual[[:space:]]Learning/ce9117cc-5607-4a65-8cbe-8a6d84e0e4a0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AIP_[[:space:]]Subverting[[:space:]]Retrieval-Augmented[[:space:]]Generation[[:space:]]via[[:space:]]Adversarial[[:space:]]Instructional[[:space:]]Prompt/e352357e-d1c6-422e-91b9-7c63cb1061cf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AIR_[[:space:]]Complex[[:space:]]Instruction[[:space:]]Generation[[:space:]]via[[:space:]]Automatic[[:space:]]Iterative[[:space:]]Refinement/52809e90-0d5a-4706-8ab9-5cdee5df2f53_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ALLabel_[[:space:]]Three-stage[[:space:]]Active[[:space:]]Learning[[:space:]]for[[:space:]]LLM-based[[:space:]]Entity[[:space:]]Recognition[[:space:]]using[[:space:]]Demonstration[[:space:]]Retrieval/932d417f-71b6-445d-b972-f80ada982b6d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AMACE_[[:space:]]Automatic[[:space:]]Multi-Agent[[:space:]]Chart[[:space:]]Evolution[[:space:]]for[[:space:]]Iteratively[[:space:]]Tailored[[:space:]]Chart[[:space:]]Generation/a7e2ede3-4a24-42ef-99bb-d32c498d8807_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AMQ_[[:space:]]Enabling[[:space:]]AutoML[[:space:]]for[[:space:]]Mixed-precision[[:space:]]Weight-Only[[:space:]]Quantization[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models/ae5b0153-bf4b-457f-aefa-9cad2541569c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/APLOT_[[:space:]]Robust[[:space:]]Reward[[:space:]]Modeling[[:space:]]via[[:space:]]Adaptive[[:space:]]Preference[[:space:]]Learning[[:space:]]with[[:space:]]Optimal[[:space:]]Transport/69242f03-2a96-4ba1-9209-4736712792b1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AQuilt_[[:space:]]Weaving[[:space:]]Logic[[:space:]]and[[:space:]]Self-Inspection[[:space:]]into[[:space:]]Low-Cost,[[:space:]]High-Relevance[[:space:]]Data[[:space:]]Synthesis[[:space:]]for[[:space:]]Specialist[[:space:]]LLMs/10f24355-b6af-472f-a4bc-533544420b99_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AROMA_[[:space:]]Autonomous[[:space:]]Rank-one[[:space:]]Matrix[[:space:]]Adaptation/70f6baf1-1052-4fa3-8643-225691c1285c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ASTRA_[[:space:]]A[[:space:]]Negotiation[[:space:]]Agent[[:space:]]with[[:space:]]Adaptive[[:space:]]and[[:space:]]Strategic[[:space:]]Reasoning[[:space:]]via[[:space:]]Tool-integrated[[:space:]]Action[[:space:]]for[[:space:]]Dynamic[[:space:]]Offer[[:space:]]Optimization/403eef1d-ff3a-4ae6-a8d7-f9725b21f72d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AbsVis[[:space:]]–[[:space:]]Benchmarking[[:space:]]How[[:space:]]Humans[[:space:]]and[[:space:]]Vision-Language[[:space:]]Models[[:space:]]“See”[[:space:]]Abstract[[:space:]]Concepts[[:space:]]in[[:space:]]Images/92ca4c15-4033-469a-baaa-9b038a6bfc62_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AcT2I_[[:space:]]Evaluating[[:space:]]and[[:space:]]Improving[[:space:]]Action[[:space:]]Depiction[[:space:]]in[[:space:]]Text-to-Image[[:space:]]Models/ba56777a-6f5e-4665-a124-562642ec9778_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Accelerate[[:space:]]Parallelizable[[:space:]]Reasoning[[:space:]]via[[:space:]]Parallel[[:space:]]Decoding[[:space:]]within[[:space:]]One[[:space:]]Sequence/d71866e9-0a48-4ec6-a5e6-a1933f47398d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Accelerated[[:space:]]Test-Time[[:space:]]Scaling[[:space:]]with[[:space:]]Model-Free[[:space:]]Speculative[[:space:]]Sampling/8d6f6a75-e4fc-4484-9fdf-886b272c9c80_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AccessEval_[[:space:]]Benchmarking[[:space:]]Disability[[:space:]]Bias[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/2f19104f-6787-4dda-b7c6-da578d8106c5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Zipf’s[[:space:]]and[[:space:]]Heaps’[[:space:]]Laws[[:space:]]for[[:space:]]Tokens[[:space:]]and[[:space:]]LLM-generated[[:space:]]Texts/5529e58a-8beb-4f36-a8fa-b14df85db077_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/\[MASK\]ED[[:space:]]-[[:space:]]Language[[:space:]]Modeling[[:space:]]for[[:space:]]Explainable[[:space:]]Classification[[:space:]]and[[:space:]]Disentangling[[:space:]]of[[:space:]]Socially[[:space:]]Unacceptable[[:space:]]Discourse./cc73d9e7-a8ce-48c8-957b-1c08d159c955_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/cAST_[[:space:]]Enhancing[[:space:]]Code[[:space:]]Retrieval-Augmented[[:space:]]Generation[[:space:]]with[[:space:]]Structural[[:space:]]Chunking[[:space:]]via[[:space:]]Abstract[[:space:]]Syntax[[:space:]]Tree/6d505efa-bcf9-4905-a457-9f301af7381c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/mrCAD_[[:space:]]Multimodal[[:space:]]Communication[[:space:]]to[[:space:]]Refine[[:space:]]Computer-aided[[:space:]]Designs/d3bc2eac-8a26-4214-98f1-5c942d5274a9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/pFedRAG_[[:space:]]A[[:space:]]Personalized[[:space:]]Federated[[:space:]]Retrieval-Augmented[[:space:]]Generation[[:space:]]System[[:space:]]with[[:space:]]Depth-Adaptive[[:space:]]Tiered[[:space:]]Embedding[[:space:]]Tuning/b202d73f-5029-4160-abf1-99e3e5693b71_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/polyBART_[[:space:]]A[[:space:]]Chemical[[:space:]]Linguist[[:space:]]for[[:space:]]Polymer[[:space:]]Property[[:space:]]Prediction[[:space:]]and[[:space:]]Generative[[:space:]]Design/07636760-4c7d-43f5-a990-cc435eb74438_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/sudoLLM_[[:space:]]On[[:space:]]Multi-role[[:space:]]Alignment[[:space:]]of[[:space:]]Language[[:space:]]Models/89bbcb9b-2bc1-44bf-80c1-d8046fce531e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/‘Hello,[[:space:]]World!’_[[:space:]]Making[[:space:]]GNNs[[:space:]]Talk[[:space:]]with[[:space:]]LLMs/098e680b-5fb8-4e7e-a1c3-3182423f8214_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/“Going[[:space:]]to[[:space:]]a[[:space:]]trap[[:space:]]house”[[:space:]]conveys[[:space:]]more[[:space:]]fear[[:space:]]than[[:space:]]“Going[[:space:]]to[[:space:]]a[[:space:]]mall”_[[:space:]]Benchmarking[[:space:]]Emotion[[:space:]]Context[[:space:]]Sensitivity[[:space:]]for[[:space:]]LLMs/2f93ae55-87d7-4902-a2d1-e07c09e0c51b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/“What’s[[:space:]]Up,[[:space:]]Doc_”_[[:space:]]Analyzing[[:space:]]How[[:space:]]Users[[:space:]]Seek[[:space:]]Health[[:space:]]Information[[:space:]]in[[:space:]]Large-Scale[[:space:]]Conversational[[:space:]]AI[[:space:]]Datasets/3ad9ee66-8b39-4bff-99a1-f43957405be0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/“Where[[:space:]]Does[[:space:]]This[[:space:]]Strange[[:space:]]Smell[[:space:]]Come[[:space:]]from_”_[[:space:]]Enabling[[:space:]]Conversational[[:space:]]Interfaces[[:space:]]for[[:space:]]Artificial[[:space:]]Olfaction/a1f1eb26-6d94-4bad-ba06-c0a1140ab0f1_origin.pdf filter=lfs diff=lfs merge=lfs -text
2025/(Almost) Free Modality Stitching of Foundation Models/a9f23956-6f94-4537-a685-b8aef0c40a7a_content_list.json ADDED
The diff for this file is too large to render.

2025/(Almost) Free Modality Stitching of Foundation Models/a9f23956-6f94-4537-a685-b8aef0c40a7a_model.json ADDED
The diff for this file is too large to render.
2025/(Almost) Free Modality Stitching of Foundation Models/a9f23956-6f94-4537-a685-b8aef0c40a7a_origin.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:74e5b3f7d15ad786774e5293d450b466ed6ec6b66354f5711798355a4b715dac
+size 1039207
2025/(Almost) Free Modality Stitching of Foundation Models/full.md ADDED
@@ -0,0 +1,437 @@
# (Almost) Free Modality Stitching of Foundation Models

Jaisidh Singh $^{1,2,4}$ , Diganta Misra $^{3,4}$ , Boris Knyazev $^{5,6}$ , Antonio Orvieto $^{3,4,7}$

<sup>1</sup>University of Tübingen, <sup>2</sup>Zuse School ELIZA, <sup>3</sup>ELLIS Institute Tübingen, <sup>4</sup>MPI-IS Tübingen, <sup>5</sup>Samsung - SAIT AI Lab Montréal, <sup>6</sup>Université de Montréal, <sup>7</sup>Tübingen AI Center

# Abstract

Foundation multi-modal models are often designed by stitching together multiple existing pretrained uni-modal models: for example, an image classifier with a text model. This stitching is performed by training a connector module that aims to align the representation spaces of the uni-modal models towards a multi-modal objective. However, given the complexity of training such connectors on large-scale web-based datasets, coupled with the ever-increasing number of available pretrained uni-modal models, the task of uni-modal model selection and subsequent connector training becomes computationally demanding. To address this under-studied yet critical problem, we propose Hypernetwork Model Alignment (HYMA), a novel all-in-one solution for optimal uni-modal model selection and connector training that leverages hypernetworks. Specifically, our framework utilizes the parameter-prediction capability of a hypernetwork to obtain jointly trained connector modules for $N \times M$ combinations of uni-modal models. In our experiments, HYMA reduces the cost of searching for the best-performing uni-modal model pair by $10 \times$ , while matching the ranking and trained-connector performance obtained via grid search across a suite of diverse multi-modal benchmarks.
|
| 10 |
+
|
| 11 |
+
# 1 Introduction
|
| 12 |
+
|
| 13 |
+
Multi-modal foundation models have emerged as a new frontier in the Artificial Intelligence (AI) landscape. Fueled by the increasing need for considering inter-dependency of multiple data modalities in modern tasks, multi-modal foundation models often leverage modality-specific (uni-modal) models as sub-components, which are stitched together via a connector module. A prominent class of such models is Vision-Language Models (VLMs) (Radford et al., 2021; Singh et al., 2024; Li et al., 2022; Singh et al., 2022), which comprise image and text
|
| 14 |
+
|
| 15 |
+

|
| 16 |
+
Figure 1: We train connectors between pretrained uni-modal models to show that uni-modal model performance is not predictive of the multi-modal performance obtained by stitching. Image encoder performance refers to top-1 ImageNet-1K accuracy; text encoder performance refers to semantic search performance across 14 datasets (Reimers and Gurevych, 2019). Multi-modal scores refer to ImageNet-1K top-1 accuracy (classification by matching images to prompts such as "this is a photo of a {class}").<sup>1</sup>
|
| 17 |
+
|
| 18 |
+
encoders that embed image and text concepts into a common contrastively learnt latent space.
|
| 19 |
+
|
| 20 |
+
Connector modules powering VLMs are often constructed as an $n$ -layer multi-layer perceptron (MLP) (Liu et al., 2024), or in some cases even as a simple linear layer (Merullo et al., 2022), with the purpose of stitching modality-specific models. While some exceptions do arise where these modules are extensively engineered transformer-like architectures (Li et al., 2023), the prevailing consensus on the design of such connector modules has converged on MLPs (Zhu et al., 2025) due to their efficiency.
|
| 21 |
+
|
| 22 |
+
While training connector modules for a pair of predetermined uni-modal models is feasible, the picture becomes more complex when considering multiple uni-modal options and aiming to optimize for downstream performance after stitching. Indeed, it is often not the case (see Figure 1) that simply choosing to align best-performing uni-modal models leads to the best multi-modal performance.
|
| 23 |
+
|
| 24 |
+

|
| 25 |
+
Figure 2: Given multiple options for uni-modal models, pair-wise grid search can be an expensive way to determine the best multi-modal combination. Alternatively, HYMA formulates search as a predictive or generative process.
|
| 26 |
+
|
| 27 |
+
This trend is further illustrated in Table 1, where uni-modal model parametric capacity fails to serve as a reliable predictor of multi-modal performance. Consequently, the cost of optimal stitching can grow quadratically with the number of available options on both ends. In addition, the availability of extremely large web-scale pretraining datasets, consisting of samples in the order of billions (Schuhmann et al., 2022; Changpinyo et al., 2021; Desai et al., 2021), constitutes a blocker for proper ablation on such design choices.
|
| 28 |
+
|
| 29 |
+
<table><tr><td>I (#Params)</td><td>T (#Params)</td><td>Total #Params</td><td>Perf.</td></tr><tr><td>EVA2-L (305M)</td><td>roberta-L (355M)</td><td>660M + c</td><td>26.85</td></tr><tr><td>DeiT3-L (304M)</td><td>mpnet-B (109M)</td><td>413M + c</td><td>42.63</td></tr></table>
|
| 30 |
+
|
| 31 |
+
Table 1: Parametric capacity of uni-modal models is not a reliable indicator of multi-modal performance. On the task of multi-modal image classification using the ImageNet-1K dataset, we observe that stitching the highest-capacity models, EVA-2 Large (305M) for the image modality (I) and RoBERTa Large (355M) for the text modality (T), totaling $660\mathrm{M} + c$ parameters, yields significantly lower performance than a smaller stitched pair: DeiT-3 Large (I) (304M) and MPNet-Base (T) (109M), totaling just $413\mathrm{M} + c$ parameters. $c$ denotes the parameters contributed by a 1-hidden-layer MLP connector and Perf. denotes the top-1 accuracy metric.
|
| 32 |
+
|
| 33 |
+
We highlight and define the problem, which we term Multi-modal Optimal Pairing and Stitching (M-OPS), as:
|
| 34 |
+
|
| 35 |
+
- Pairing: Given a set of $N$ models in modality 1 (e.g., vision) and $M$ models in modality 2 (e.g., text), provide the optimal (best-performing) pair $(n, m)$ , with $n \in \{1, \dots, N\}$ and
|
| 36 |
+
|
| 37 |
+
$m \in \{1, \dots, M\}$ , to construct a multi-modal model for a target task and/or under target constraints (e.g., parametric size, embedding dimensions).
|
| 38 |
+
|
| 39 |
+
- Stitching: For the selected uni-modal models $(n,m)$ , obtain the optimal trained connector $f_{\theta}$ that stitches them to construct the target multi-modal model.
|
| 40 |
+
|
| 41 |
+
Due to the infeasibility of addressing the pairing sub-problem of M-OPS via grid search over a large $N \times M$ search space, we propose a novel alternative approach that tackles both the pairing and stitching steps in a single unified manner using a HyperNetwork (Ha et al., 2016). The key idea behind our approach is that connectors for similar model pairs share latent semantics, which can be captured by jointly training a single network to generate them.
|
| 42 |
+
|
| 43 |
+
We present Hypernetwork Model Alignment $(\mathbf{HYMA})^2$ , a method that, given $N$ modality 1 (e.g., image) and $M$ modality 2 (e.g., text) models, leverages a hypernetwork (Ha et al., 2016) that jointly learns to generate connectors for all possible $N\times M$ combinations. Our approach serves both as an indicator for optimal model pair configurations and as a trainer that produces stitched multi-modal models performing on par with the best stitched model pair obtained via grid search. In our experiments, where $N\times M$ can be as high as 27 (discussed in Section 5), our method enables an efficiency gain of $10\times$ in obtaining the best stitched model pair compared to grid search.
|
| 44 |
+
|
| 45 |
+
We highlight our contributions as follows:
|
| 46 |
+
|
| 47 |
+
1. We propose Hypernetwork Model Alignment (HYMA), a hypernetwork-based approach for obtaining strong uni-modal model pairs that perform on par with the best stitched model pair obtained via grid search at an order-of-magnitude lower computational cost.
|
| 48 |
+
2. Our proposed approach HYMA is, to the best of our knowledge, the first to demonstrate the effectiveness of hypernetworks for solving the M-OPS problem defined above.
|
| 49 |
+
3. We empirically demonstrate the performance and efficiency of HYMA on VLMs across various multi-modal benchmarks.
|
| 50 |
+
|
| 51 |
+
# 2 Background
|
| 52 |
+
|
| 53 |
+
In this section, we present the necessary preliminaries for the M-OPS problem, along with the general training paradigm of hypernetworks. These formal definitions establish the foundation for our proposed method, HYMA, which we introduce in the following section.
|
| 54 |
+
|
| 55 |
+
Definition 1 (Hypernetworks for Parameter Prediction). A hypernetwork (Ha et al., 2016) is a neural network $H_{\phi}$ parameterized by $\phi$ , designed to predict the parameters $\theta$ of a target network $f_{\theta}$ based on a conditioning input $\mathbf{c}$ . The parameter generation process is defined as:
|
| 56 |
+
|
| 57 |
+
$$
|
| 58 |
+
H_{\phi}(\mathbf{c}) = \theta.
|
| 59 |
+
$$
|
| 60 |
+
|
| 61 |
+
The parameters $\phi$ of the hypernetwork are optimized indirectly via the performance of the generated network $f_{\theta}$ on a downstream task. Given a task-specific loss $\mathcal{L}_{task}$ evaluated on corresponding data, the optimization objective becomes:
|
| 62 |
+
|
| 63 |
+
$$
|
| 64 |
+
\phi^{*} = \arg \min_{\phi} \mathcal{L}_{\mathrm{task}}(f_{H_{\phi}(\mathbf{c})}).
|
| 65 |
+
$$
|
| 66 |
+
|
| 67 |
+
The trained hypernetwork $H_{\phi^*}$ can then be used to generate task-adapted parameters $\theta$ for $f$ given new conditioning inputs. Optimizing $\phi$ rather than $\theta$ directly can offer advantages in terms of training dynamics, capacity control, and generalization (Chauhan et al., 2024).
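A minimal sketch of Definition 1 in NumPy is shown below. The dimensions and the linear form of $H_{\phi}$ are illustrative assumptions, not the paper's architecture; in practice $\phi$ would be trained through the task loss of the generated network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: conditioning input c in R^C; the target network
# f_theta is a single linear layer R^3 -> R^2, so D_theta = 3*2 + 2 = 8.
C, D_IN, D_OUT = 4, 3, 2
D_THETA = D_IN * D_OUT + D_OUT

# Here H_phi is itself just a linear map phi = (W_phi, b_phi), for brevity.
W_phi = rng.normal(scale=0.1, size=(D_THETA, C))
b_phi = np.zeros(D_THETA)

def hypernetwork(c):
    """H_phi(c) = theta: predict the flat parameter vector of f_theta."""
    return W_phi @ c + b_phi

def target_forward(theta, x):
    """Run the generated network f_theta on an input x."""
    W = theta[: D_IN * D_OUT].reshape(D_OUT, D_IN)
    b = theta[D_IN * D_OUT:]
    return W @ x + b

c = rng.normal(size=C)      # conditioning input
theta = hypernetwork(c)     # generated parameters H_phi(c)
y = target_forward(theta, rng.normal(size=D_IN))
print(theta.shape, y.shape)
```

Gradients flow from the task loss on $f_{\theta}$ back into $\phi$, which is the sense in which $\phi$ is optimized "indirectly".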
|
| 68 |
+
|
| 69 |
+
For simplicity, assume the encoders produce sequences of $P$ features (e.g., the number of patches or tokens), living in a $D$ -dimensional space.
|
| 70 |
+
|
| 71 |
+
Definition 2 (Connector-based multi-modal stitching). Let $\mathcal{A}:\mathcal{X}_A\to \mathbb{R}^{D_A}$ and $\mathcal{B}:\mathcal{X}_B\to \mathbb{R}^{D_B}$
|
| 72 |
+
|
| 73 |
+
be pretrained uni-modal encoders for two different modalities with input spaces $\mathcal{X}_A$ and $\mathcal{X}_B$ , respectively. The goal is to construct a multi-modal model by learning a connector function $f_{\theta}:\mathbb{R}^{D_A}\to \mathbb{R}^{D_B}$ that stitches the output of $\mathcal{A}$ to the representation space of $\mathcal{B}$ : given input pairs $(\mathbf{u},\mathbf{v})\in \mathcal{X}_A\times \mathcal{X}_B$ , the connector stitches the modality- $A$ features
|
| 74 |
+
|
| 75 |
+
$$
|
| 76 |
+
\mathbf{x}^{a} = \mathcal{A}(\mathbf{u}) \in \mathbb{R}^{D_{A}}
|
| 77 |
+
$$
|
| 78 |
+
|
| 79 |
+
to modality- $B$ space via
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
\tilde{\mathbf{x}}^{a} = f_{\theta}\bigl(\mathbf{x}^{a}\bigr) \in \mathbb{R}^{D_{B}}
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
The stitched representation $\tilde{\mathbf{x}}^a$ is then combined with $\mathbf{x}^b = \mathcal{B}(\mathbf{v})$ to construct a joint multi-modal representation. The connector parameters $\theta$ are optimized while keeping $\mathcal{A}$ and $\mathcal{B}$ frozen. The training objective follows contrastive stitching, which uses a similarity function $\mathrm{sim}(\cdot ,\cdot)$ and temperature $\tau$ to train the connector with the InfoNCE (Oord et al., 2018) loss:
|
| 86 |
+
|
| 87 |
+
$$
|
| 88 |
+
\mathcal{L}_{\text{contrastive}}(\theta) = - \log \frac{\exp(\mathrm{sim}(\tilde{\mathbf{x}}^{a}, \mathbf{x}^{b}) / \tau)}{\sum_{j} \exp(\mathrm{sim}(\tilde{\mathbf{x}}^{a}, \mathbf{x}_{j}^{b}) / \tau)}
|
| 89 |
+
$$
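A hedged NumPy sketch of this batched contrastive objective follows; the batch size, dimensions, and the choice of cosine similarity are illustrative assumptions:

```python
import numpy as np

def info_nce(x_a_tilde, x_b, tau=0.07):
    """Batched InfoNCE: row i of x_a_tilde should match row i of x_b."""
    a = x_a_tilde / np.linalg.norm(x_a_tilde, axis=1, keepdims=True)
    b = x_b / np.linalg.norm(x_b, axis=1, keepdims=True)
    logits = a @ b.T / tau                                # (B, B) similarities
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))            # -log p(correct pair)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
aligned = info_nce(x, x)                           # perfectly matched pairs
shuffled = info_nce(x, rng.normal(size=(8, 16)))   # mismatched pairs
print(aligned < shuffled)
```

Perfectly aligned pairs put all similarity mass on the diagonal, so their loss is lower than for random pairings.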
|
| 90 |
+
|
| 91 |
+
# 3 Methodology
|
| 92 |
+
|
| 93 |
+
# 3.1 Problem formulation
|
| 94 |
+
|
| 95 |
+
We aim to jointly learn $N\times M$ connectors, where each connector is specified to the hypernetwork via a conditional input $\mathbf{c}^k$ . More formally, for the $k^{th}$ model combination, the hypernetwork generates the parameters as $H_{\phi}(\mathbf{c}^{k})$ . The resulting connector $f_{H_{\phi}(\mathbf{c}^{k})}$ is then used to compute a task-specific loss. The overall training loss is computed by averaging over all combinations:
|
| 96 |
+
|
| 97 |
+
$$
|
| 98 |
+
\mathcal{L}_{\mathrm{HYMA}} = \frac{1}{NM} \sum_{k = 1}^{NM} \mathcal{L}_{\mathrm{task}}\left(f_{H_{\phi}\left(\mathbf{c}^{k}\right)}\right). \tag{1}
|
| 99 |
+
$$
|
| 100 |
+
|
| 101 |
+
Here, $\mathcal{L}_{\mathrm{task}}$ corresponds to a contrastive InfoNCE loss (for retrieval-style objectives like that in CLIP (Radford et al., 2021)). The trained hypernetwork is denoted by $H_{\phi^*}$ , where $\phi^* = \arg \min_{\phi} \mathcal{L}_{\mathrm{HYMA}}$ . Following prior work (Rosenfeld et al., 2022; Jia et al., 2024), we restrict connectors to be multi-layer perceptrons (MLPs).
|
| 102 |
+
|
| 103 |
+
# 3.2 Hypernetwork architecture
|
| 104 |
+
|
| 105 |
+
We define the hypernetwork as a function $H_{\phi}:\mathbb{R}^{C}\to \mathbb{R}^{D_{\theta}}$ , mapping conditional inputs $\mathbf{c}\in \mathbb{R}^C$ to connector parameters $\theta \in \mathbb{R}^{D_{\theta}}$ . We describe next how $\mathbf{c}$ is constructed and how it is mapped to the parameter space.
|
| 106 |
+
|
| 107 |
+

|
| 108 |
+
Figure 3: A visual walkthrough of our hypernetwork architecture. We take the example of predicting the parameters of an MLP-type connector with depth $= 3$ (i.e., 2 hidden layers).
|
| 109 |
+
|
| 110 |
+
Conditional inputs: We use a learnable lookup table of embeddings $\mathbf{W}_{\sigma}^{\mathrm{H}} \in \mathbb{R}^{NM \times C}$ , where $\mathbf{c}^k = \mathbf{W}_{\sigma}^{\mathrm{H}}[k]$ encodes the $k^{th}$ model pair.
|
| 111 |
+
|
| 112 |
+
Mapping conditional inputs to parameters: The hypernetwork $H_{\phi}$ is implemented using an MLP $F_{\varrho}$ , which predicts connector parameters layer-wise. Each layer prediction is conditioned on both $\mathbf{c}^k$ and a learnable layer-specific embedding $\mathbf{e}_j = \mathbf{E}_{\omega}^{\mathrm{H}}[j]$ , such that:
|
| 113 |
+
|
| 114 |
+
$$
|
| 115 |
+
F_{\varrho}\left(\tilde{\mathbf{c}}_{j}^{k}\right) \in \mathbb{R}^{D_{\vartheta^{k}}}, \quad \text{where} \quad \tilde{\mathbf{c}}_{j}^{k} = \mathbf{c}^{k} + \mathbf{e}_{j}
|
| 116 |
+
$$
|
| 117 |
+
|
| 118 |
+
and $D_{\vartheta^{k}}$ denotes the size of the largest layer in the $k^{th}$ connector. The output is then sliced to the appropriate dimension for layer $j$ . This process is repeated for all layers, and the resulting parameters are concatenated to form the complete connector parameter vector $\theta^k \in \mathbb{R}^{D_\theta^k}$ . This modular, layer-wise parameterization makes the hypernetwork more tractable and memory-efficient.
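The layer-wise generation described above can be sketched as follows; the shapes, a linear $F_{\varrho}$, and the weight-plus-bias flattening per layer are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
NM, C, L = 9, 16, 3                              # model pairs, cond. dim, depth
layer_shapes = [(32, 24), (32, 32), (24, 32)]    # (out, in) per connector layer
D_max = max(o * i + o for o, i in layer_shapes)  # params of the largest layer

W_sigma = rng.normal(scale=0.1, size=(NM, C))    # pair embeddings  W_sigma^H
E_omega = rng.normal(scale=0.1, size=(L, C))     # layer embeddings E_omega^H
W_F = rng.normal(scale=0.1, size=(D_max, C))     # F_rho, linear here for brevity

def generate_connector(k):
    """Predict the connector parameters for pair k, one layer at a time."""
    theta = []
    for j, (out, inp) in enumerate(layer_shapes):
        c_tilde = W_sigma[k] + E_omega[j]        # c~_j^k = c^k + e_j
        raw = W_F @ c_tilde                      # output sized for largest layer
        theta.append(raw[: out * inp + out])     # slice to layer j's true size
    return np.concatenate(theta)

theta_k = generate_connector(k=4)
print(theta_k.shape)   # (800 + 1056 + 792,) = (2648,)
```

Predicting one layer per forward pass keeps the output head of $F_{\varrho}$ at the size of the largest layer rather than the full parameter count.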
|
| 119 |
+
|
| 120 |
+
# 3.3 Mini-batching model combinations for scalable hypernetwork training
|
| 121 |
+
|
| 122 |
+
Jointly training connectors for all $N \times M$ model combinations can become computationally prohibitive. To address this, we follow the strategy of model mini-batching (Knyazev et al., 2023), wherein each training step operates over a batch of $B_{m}$ model combinations. The modified loss is:
|
| 123 |
+
|
| 124 |
+
$$
|
| 125 |
+
\mathcal{L}_{\mathrm{HYMA}} = \frac{1}{B_{m}} \sum_{k = 1}^{B_{m}} \mathcal{L}_{\mathrm{task}}\left(f_{H_{\phi}\left(\mathbf{c}^{k}\right)}\right). \tag{2}
|
| 126 |
+
$$
|
| 127 |
+
|
| 128 |
+
Each training step proceeds as follows:
|
| 129 |
+
|
| 130 |
+
1. Sample a data batch of size $B_{d}$ .
|
| 131 |
+
2. For each data sample, evaluate $\mathcal{L}_{\mathrm{HYMA}}$ over each of the $B_{m}$ model combinations.
|
| 132 |
+
3. Use the accumulated loss to update hypernetwork parameters $\phi$ .
|
| 133 |
+
|
| 134 |
+
This training strategy enables HYMA to scale efficiently without requiring all models or their combinations to be loaded simultaneously. We elaborate on the impact of this choice on our framework in Appendix F.
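The mini-batched training step described above can be sketched as follows; the task loss here is a hypothetical stand-in for the generated connector's contrastive loss, and the batch sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
NM, B_m, B_d = 27, 4, 8     # total pairs, model batch size, data batch size

def task_loss(k, data):
    """Stand-in for L_task of the generated connector f_{H_phi(c^k)}."""
    return float(np.mean((data - 0.1 * k) ** 2))

def training_step(data):
    """Eq. (2): average the task loss over a sampled mini-batch of pairs."""
    combos = rng.choice(NM, size=B_m, replace=False)   # sample B_m combinations
    loss = float(np.mean([task_loss(k, data) for k in combos]))
    return loss, combos    # in practice, loss would drive the update of phi

data = rng.normal(size=B_d)       # one data batch of size B_d
loss, combos = training_step(data)
print(len(combos), loss > 0.0)
```

Only $B_m$ encoder pairs need to be resident per step, which is what keeps memory bounded as $N \times M$ grows.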
|
| 135 |
+
|
| 136 |
+
# 4 Experiments
|
| 137 |
+
|
| 138 |
+
# 4.1 Baselines
|
| 139 |
+
|
| 140 |
+
To ensure comprehensive evaluation of our proposed method, we compare against the following baselines:
|
| 141 |
+
|
| 142 |
+
- Random: A naive baseline that randomly selects and stitches uni-modal model pairs using the specified connector on the target multi-modal dataset. Reported performance is the average over five independent trials.
|
| 143 |
+
- UniModal Top-1 (UniT-1): Inspired by the observation in Fig. 1, this baseline stitches the top-performing individual uni-modal models—selected based on their uni-modal benchmark performance—via the target connector. For VLMs, image models are ranked by ImageNet Top-1 accuracy, and text models by their corresponding sentence embedding performance.
|
| 144 |
+
|
| 145 |
+

|
| 146 |
+
Figure 4: $\mathbf{MLP}_1 \mid N \times M = 3$ : We show the trade-off between computational resources (measured in FLOPs) and the performance of the best stitched model pairs across all comparative baselines. We find that HYMA predicts a high-performing pairing at a significantly reduced FLOP cost compared to both training only the optimal model pair and searching over all model pairs for $N \times M = 3$ .
|
| 147 |
+
|
| 148 |
+

|
| 149 |
+
|
| 150 |
+
|
| 151 |
+
|
| 152 |
+
- Ask-LLM: Since uni-modal model properties such as parameter count and pretraining data can influence multi-modal performance, we define a baseline Ask-LLM. Here, a language model is prompted with metadata from the model zoo for both modalities and asked to select the most suitable pair for the target task. The chosen pair is stitched using a connector and evaluated in isolation.
|
| 153 |
+
- AutoPair: To enable a fair comparison with HYMA's efficiency-focused design, we implement a pairing baseline that iteratively searches a given set of pairs by training each for a fixed number of epochs, and then prunes all pairs at or below the median performance. Specifically, AutoPair optimizes model pair selection and stitching within a FLOPs budget equal to that used by HYMA for the same model zoo. More details are provided in Section 5.3.
|
| 154 |
+
- Oracle (Grid Search): This upper-bound baseline performs exhaustive grid search over all model pairs in the zoo, independently training and evaluating each stitched pair. While
|
| 155 |
+
|
| 156 |
+
this provides optimal performance, it is computationally prohibitive.
|
| 157 |
+
|
| 158 |
+
- Best Guess: A hypothetical upper-bound baseline representing the training cost of the model combination that would yield the best multi-modal pair after stitching, assuming the optimal pair was known in advance.
|
| 159 |
+
|
| 160 |
+
# 4.2 Models
|
| 161 |
+
|
| 162 |
+
All model details are provided in Appendix B. To construct our Vision-Language Models (VLMs), we define a model zoo containing $N = 9$ image encoders: ViT-S, DeiT-S, DeiT3-S, ViT-B, DeiT-B, DeiT3-B, ViT-L, DeiT3-L, Eva2-L and $M = 3$ text encoders: minilm-L, mpnet-B, roberta-L. This results in a total of $N \times M = 27$ possible VLM configurations.
|
| 163 |
+
|
| 164 |
+
# 4.3 Connector variants
|
| 165 |
+
|
| 166 |
+
We test HYMA against the aforementioned baselines across three connector configurations:
|
| 167 |
+
|
| 168 |
+
1. Linear: As demonstrated in (Merullo et al., 2022), we construct the connector to be a linear layer parameterized via $\theta$ , mapping from the embedding space of the text encoder to the image encoder of a specific pair.
|
| 169 |
+
2. $\mathbf{MLP}_1$ : An MLP with one hidden layer of dimension 1024.
|
| 170 |
+
3. $\mathbf{MLP}_2$ : An MLP with two hidden layers, each of dimension 1024.
|
| 171 |
+
|
| 172 |
+
# 4.4 Datasets
|
| 173 |
+
|
| 174 |
+
We employ the LLaVA-CC558K dataset (Jia et al., 2024), which consists of 558,128 high-quality synthetic image-text pairs. Connectors between image and text encoders are trained using the contrastive InfoNCE loss (Oord et al., 2018) for 10 epochs, after which the best-performing checkpoint is selected. Hyperparameters are tuned for performance, stability, and GPU efficiency, as detailed in the Appendix.
|
| 175 |
+
|
| 176 |
+
# 4.5 Evaluation Tasks
|
| 177 |
+
|
| 178 |
+
Post-training, the resulting VLMs are evaluated on the following four downstream tasks:
|
| 179 |
+
|
| 180 |
+
- Multi-modal Image Classification (MIC): We compute the zero-shot top-1 image classification accuracies of the VLMs on the ImageNet-1K (Deng et al., 2009) and the CIFAR-100 (Krizhevsky et al., 2009) datasets. The evaluation follows an image-text matching approach, where the text corresponding to each image input takes the form: "THIS IS A PHOTO OF A {CLASS}".
|
| 181 |
+
- Image-Text Matching (ITM): Here, we compute the zero-shot recall @ 5 scores of the VLMs on the MSCOCO validation split (Lin et al., 2014) and the Flickr-8K (Hodosh et al., 2013) datasets.
|
| 182 |
+
- Visual Question Answering (VQA): We use the validation splits of the OK-VQA (Marino et al., 2019) and the Text-VQA (Singh et al., 2019) datasets. Implementation details for VQA are given in Appendix G.
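The MIC evaluation above reduces classification to nearest-prompt matching in the shared embedding space. A toy NumPy sketch, with random stand-in embeddings rather than real encoder outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_CLASSES = 16, 5

def normalize(v):
    """L2-normalize along the last axis, as in CLIP-style evaluation."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# One embedding per class prompt "this is a photo of a {class}" (stand-ins).
prompt_embs = normalize(rng.normal(size=(N_CLASSES, D)))
# A fake image embedding lying close to the class-3 prompt.
image_emb = normalize(prompt_embs[3] + 0.05 * rng.normal(size=D))

pred = int(np.argmax(prompt_embs @ image_emb))  # highest cosine similarity wins
print(pred)
```

Top-1 accuracy is then the fraction of images whose nearest prompt is the ground-truth class.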
|
| 183 |
+
|
| 184 |
+
# 4.6 Varying multi-modal setting
|
| 185 |
+
|
| 186 |
+
We also explore our formulation in a different setting than contrastive VLMs, i.e., input-output stitching instead of output-output stitching. Fusing image encoder outputs into LLM inputs is a technique used to develop multi-modal language models (MLLMs) (Achiam et al., 2023; Touvron et al., 2023; Jia et al., 2024; Anthropic), another family of multi-modal models in which we employ HYMA. We find that using HYMA for MLLMs does not reflect the ranking observed via full grid search; however, HYMA predicts reliable connectors at a lower cost than grid search, with larger connectors exhibiting performance equal to the best setting found via grid search. We provide more details on this in Appendix A.
|
| 187 |
+
|
| 188 |
+
# 5 Empirical Results
|
| 189 |
+
|
| 190 |
+
# 5.1 $\mathbf{MLP}_1\mid \mathbf{N}\times \mathbf{M} = 3$
|
| 191 |
+
|
| 192 |
+
Initially, we stitch $N = 3$ image encoders (ViT-S, DeiT-S, DeiT3-S) with $M = 1$ text encoder (MiniLM) using an MLP connector with 1 hidden layer $(\mathrm{MLP}_1)$ . This yields a total of $N \times M = 3$ possible VLMs, which we construct and evaluate on the image-classification task. For the best-performing combination per evaluation benchmark, we show its accuracy in Figure 4, as well as the computational resources, measured in floating point operations (FLOPs), required to obtain the corresponding connectors.
|
| 193 |
+
|
| 194 |
+
On ImageNet-1K, DeiT3-S emerges as the best image encoder to stitch with minilm-L. Further, HYMA and Grid Search (and Best Guess) exhibit the same final performance, i.e., $27.4\%$ top-1 accuracy. On CIFAR-100, on the other hand, the most performant image encoder when stitched to MiniLM is ViT-S. In terms of performance, HYMA exhibits a top-1 accuracy of $38.4\%$ , nearly matching the $39.3\%$ of baselines that individually train connectors to find the optimal setting. HYMA is also strongly cost-effective for VLMs, being $4.44\times$ and $1.48\times$ more compute-efficient than Grid Search and Best Guess, respectively.
|
| 195 |
+
|
| 196 |
+
<table><tr><td rowspan="2">Dataset</td><td colspan="2">Efficiency @10 ep (×)</td><td colspan="2">Efficiency @ best (×)</td></tr><tr><td>BG</td><td>GS</td><td>BG</td><td>GS</td></tr><tr><td>IN-1K</td><td>1.48</td><td>4.44</td><td>1.48</td><td>4.44</td></tr><tr><td>CIFAR-100</td><td>1.48</td><td>4.44</td><td>2.96</td><td>8.89</td></tr></table>
|
| 197 |
+
|
| 198 |
+
Table 2: $N\times M = 3$ $\mathrm{MLP}_1$ : HYMA is significantly more compute-efficient than independently stitching model pairs, as shown w.r.t Best Guess (BG) and Grid Search (GS).
|
| 199 |
+
|
| 200 |
+
# 5.2 Linear, $\mathbf{MLP}_1,\mathbf{MLP}_2\mid \mathbf{N}\times \mathbf{M} = 27$
|
| 201 |
+
|
| 202 |
+
After demonstrating the efficacy of HYMA on a small search space of $N \times M = 3$ combinations with $\mathrm{MLP}_1$ , we scale up the number of combinations to $N \times M = 27$ and vary the capacity of the connectors in use (Linear, $\mathrm{MLP}_1$ , $\mathrm{MLP}_2$ ). This yields 81 total VLMs. Table 3 shows the performance of HYMA as a search method, i.e., how well it matches the true ranking given by full grid search. The performance gain $(\Delta)$ is also reported against the Random, UniT-1, Ask-LLM, and Oracle (GS) baselines for each task and dataset employed.
|
| 203 |
+
|
| 204 |
+
Multi-modal Image classification: For multi-modal image classification on the ImageNet-1K,
|
| 205 |
+
|
| 206 |
+
<table><tr><td rowspan="2">Task</td><td rowspan="2">Dataset</td><td rowspan="2">Connector</td><td colspan="3">NDCG @ k (↑)</td><td>ρ (↑)</td><td colspan="4">ΔPerformance (↑)</td></tr><tr><td>k = 5</td><td>k = 7</td><td>k = 10</td><td>N × M = 27</td><td>Random</td><td>UniT-1</td><td>Ask-LLM</td><td>Oracle (GS)</td></tr><tr><td rowspan="6">MIC</td><td rowspan="3">IN-1K</td><td>Linear</td><td>1.0</td><td>1.0</td><td>0.98</td><td>0.97</td><td>+6.93</td><td>+13.51</td><td>+13.51</td><td>-4.14</td></tr><tr><td>MLP1</td><td>1.0</td><td>0.98</td><td>0.96</td><td>0.91</td><td>+4.78</td><td>+11.11</td><td>+11.11</td><td>-4.47</td></tr><tr><td>MLP2</td><td>0.96</td><td>0.93</td><td>0.92</td><td>0.89</td><td>+3.89</td><td>+10.34</td><td>+10.34</td><td>-5.91</td></tr><tr><td rowspan="3">CIFAR-100</td><td>Linear</td><td>0.88</td><td>0.96</td><td>0.97</td><td>0.97</td><td>+6.91</td><td>+38.50</td><td>+38.50</td><td>-3.73</td></tr><tr><td>MLP1</td><td>0.83</td><td>0.96</td><td>0.97</td><td>0.86</td><td>+6.31</td><td>+35.21</td><td>+35.21</td><td>-1.85</td></tr><tr><td>MLP2</td><td>0.74</td><td>0.93</td><td>0.95</td><td>0.90</td><td>+5.01</td><td>+35.48</td><td>+35.48</td><td>-3.06</td></tr><tr><td rowspan="6">ITM</td><td rowspan="3">MSCOCO</td><td>Linear</td><td>0.96</td><td>0.95</td><td>0.99</td><td>0.99</td><td>+4.94</td><td>+31.62</td><td>+33.20</td><td>-2.0</td></tr><tr><td>MLP1</td><td>0.92</td><td>0.91</td><td>0.97</td><td>0.99</td><td>+3.72</td><td>+28.41</td><td>+28.41</td><td>-3.06</td></tr><tr><td>MLP2</td><td>0.96</td><td>0.91</td><td>0.97</td><td>0.98</td><td>+2.22</td><td>+27.30</td><td>+27.30</td><td>-4.03</td></tr><tr><td 
rowspan="3">Flickr-8K</td><td>Linear</td><td>0.95</td><td>0.99</td><td>0.99</td><td>0.99</td><td>+5.18</td><td>+26.68</td><td>+7.83</td><td>-2.06</td></tr><tr><td>MLP1</td><td>1.0</td><td>1.0</td><td>0.99</td><td>0.99</td><td>+3.54</td><td>+23.32</td><td>+23.32</td><td>-2.26</td></tr><tr><td>MLP2</td><td>0.92</td><td>0.99</td><td>0.96</td><td>0.98</td><td>+1.92</td><td>+21.44</td><td>+21.44</td><td>-3.25</td></tr><tr><td rowspan="6">VQA</td><td rowspan="3">OK-VQA</td><td>Linear</td><td>0.95</td><td>0.95</td><td>0.98</td><td>0.99</td><td>+0.81</td><td>+7.86</td><td>+7.86</td><td>-0.43</td></tr><tr><td>MLP1</td><td>0.94</td><td>0.90</td><td>0.95</td><td>0.95</td><td>+0.49</td><td>+6.63</td><td>+6.63</td><td>-0.77</td></tr><tr><td>MLP2</td><td>0.99</td><td>0.93</td><td>0.93</td><td>0.97</td><td>+0.01</td><td>+6.81</td><td>+6.81</td><td>-1.44</td></tr><tr><td rowspan="3">Text-VQA</td><td>Linear</td><td>0.94</td><td>0.97</td><td>0.99</td><td>0.97</td><td>+1.31</td><td>+3.64</td><td>+3.64</td><td>-0.06</td></tr><tr><td>MLP1</td><td>0.92</td><td>0.87</td><td>0.90</td><td>0.87</td><td>+0.72</td><td>+2.59</td><td>+2.59</td><td>-0.32</td></tr><tr><td>MLP2</td><td>0.85</td><td>0.87</td><td>0.86</td><td>0.87</td><td>+0.72</td><td>+2.28</td><td>+2.28</td><td>-0.59</td></tr></table>
|
| 207 |
+
|
| 208 |
+
Table 3: HYMA VLM Results: We report the ranking similarity between HYMA and the Oracle—Grid Search (GS)—using NDCG and Spearman's $\rho$ . Across all three connector configurations, HYMA exhibits a strong correlation with GS rankings. Additionally, we show the performance gain $(\Delta)$ of the best connector obtained post stitching via HYMA, compared to four baselines: (a) Random: Random pairing and stitching (averaged over five runs), (b) UniT-1: Stitching the best unimodal models based on unimodal benchmarks, (c) Ask-LLM: Stitching based on model pairs selected via prompting Claude 4 Sonnet (detailed prompt in appendix), and (d) Oracle: Full grid search over all possible configurations on the complete model zoo $(N \times M = 27)$ .
|
| 209 |
+
|
| 210 |
+
we find that the ranking order of the stitching performed by HYMA reflects that found by full grid search to a strong extent. This is indicated by the normalized discounted cumulative gain (NDCG @ $k$ ) computed for the top 5 and 7 ranks. Additionally, Spearman's $\rho$ across all $N \times M = 27$ ranks further corroborates this. Notably, both NDCG @ $k$ and Spearman's $\rho$ for CIFAR-100 are lower than for ImageNet-1K. In terms of performance gains, HYMA improves upon random selection of encoder pairs to stitch, as well as upon selecting encoders based on their uni-modal performance. Interestingly, we find that asking a massively pretrained LLM such as Claude 4 Sonnet yields a similar result to UniT-1. Compared to Oracle (GS), we find that the best stitches generated by HYMA underperform by an average of 4.84 and 2.88 points on ImageNet-1K and CIFAR-100, respectively, across all connector types. However, this comes at $10 \times$ fewer FLOPs spent.
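The two ranking-agreement metrics reported in this section can be computed as below; the scores are illustrative stand-ins, not results from the paper, NDCG uses the standard log2 discount, and Spearman's $\rho$ is taken as the Pearson correlation of ranks assuming no ties:

```python
import numpy as np

def ndcg_at_k(pred_ranking, true_scores, k):
    """NDCG@k: how well a predicted ordering recovers the true top-k.

    pred_ranking: pair indices sorted best-first by predicted quality;
    true_scores: ground-truth (grid-search-style) score per pair index.
    """
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    gains = np.array([true_scores[i] for i in pred_ranking[:k]])
    dcg = float(np.sum(gains * discounts))
    idcg = float(np.sum(np.sort(true_scores)[::-1][:k] * discounts))
    return dcg / idcg

def spearman_rho(a, b):
    """Spearman's rho as the Pearson correlation of ranks (no ties assumed)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

true_scores = np.array([0.42, 0.27, 0.35, 0.31])   # hypothetical per-pair scores
perfect = np.argsort(true_scores)[::-1]            # best-first: [0, 2, 3, 1]
print(round(ndcg_at_k(perfect, true_scores, k=3), 6))
print(round(spearman_rho(true_scores, true_scores), 6))
```

A perfect predicted ranking scores 1.0 on both metrics; any swap in the top-$k$ lowers NDCG@k.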
|
| 211 |
+
|
| 212 |
+
Image-text matching: For image-text matching, we find higher values of Spearman's $\rho$ , indicating that the stitches predicted by HYMA correlate strongly in performance with those obtained by full grid search on both MSCOCO and Flickr-8K. Similar to image classification, we find that the rank correlation metrics are higher for one dataset, Flickr-8K, than for the other, MSCOCO.
|
| 213 |
+
|
| 214 |
+
In contrast, for image-text matching, we find that the performance gains (in recall@5) exhibited w.r.t. the Ask-LLM baseline do not match those of UniT-1 in cases such as Linear connectors. In comparison to Oracle (GS), the average reduction in recall@5 is 3.03 for MSCOCO and 2.52 for Flickr-8K across all connectors.
|
| 215 |
+
|
| 216 |
+
Visual question answering: In visual question answering on both OK-VQA and Text-VQA, Linear connectors exhibit the highest values in terms of NDCG @ $k$ , Spearman's $\rho$ , and performance gain. In line with the preceding evaluation tasks, i.e., multi-modal image classification and image-text matching, we find that connectors predicted by HYMA outperform those found by the Random, UniT-1, and Ask-LLM baselines. Most notably, VQA emerges as the task with the smallest performance gap between HYMA and Oracle (GS), with differences of 0.88 and 0.32 in the respective recall@5 values across the two datasets.
|
| 217 |
+
|
| 218 |
+
# 5.3 HYMA vs AutoPair
|
| 219 |
+
|
| 220 |
+
We conduct a step-wise search-and-prune procedure over 6 image encoders (evenly split across embedding dimensions 768 and 1024) and 2 text encoders (also evenly split across embedding dimensions 768 and 1024). First, we initialize a FLOPs budget equal to the total FLOP cost of searching over $N \times M = 12$ pairs with HYMA for 10 epochs. Next, our procedure trains connectors between all 12 pairs for 2 epochs each, after which we rank each connector by its performance on a given task and dataset. After the ranking, we prune all pairs whose performance is less than or equal to the median. This is repeated until the budget is exhausted. If only one pair remains after iterative pruning, we train it until the budget is exhausted.
|
| 221 |
+
|
| 222 |
+
<table><tr><td rowspan="3">Connector</td><td colspan="6">ΔPerformance (↑)</td></tr><tr><td colspan="2">Multi-modal Image Classification</td><td colspan="2">Image-Text Matching</td><td colspan="2">Visual Question Answering</td></tr><tr><td>ImageNet-1K</td><td>CIFAR-100</td><td>MSCOCO</td><td>Flickr-8K</td><td>OK-VQA</td><td>Text-VQA</td></tr><tr><td>Linear</td><td>+11.28</td><td>+10.62</td><td>+11.04</td><td>+11.14</td><td>+2.12</td><td>+2.29</td></tr><tr><td>MLP1</td><td>+4.50</td><td>+7.12</td><td>+2.08</td><td>+3.69</td><td>+0.24</td><td>+0.14</td></tr><tr><td>MLP2</td><td>+3.25</td><td>+6.21</td><td>+3.62</td><td>+4.55</td><td>+0.75</td><td>+0.24</td></tr></table>
|
| 223 |
+
|
| 224 |
+
Table 4: HYMA vs AutoPair Results ( $N \times M = 12$ ): We show the performance gain ( $\Delta$ ) of the best connector (for all connector configurations) obtained post stitching via HYMA, compared to that obtained via AutoPair.
|
| 225 |
+
|
| 226 |
+
|
| 227 |
+
|
| 228 |
+
As shown in Table 4, stitches obtained by AutoPair exhibit significantly lower performance than those obtained via HYMA, as the budget is exhausted before the individually trained connectors can reach strong performance.
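One AutoPair prune round, as described in Section 5.3, might look like the sketch below; the scores are hypothetical 2-epoch accuracies, not values from the paper:

```python
import numpy as np

def autopair_round(pairs, scores):
    """One AutoPair iteration: drop every pair at or below the median score."""
    med = float(np.median(scores))
    kept = [p for p, s in zip(pairs, scores) if s > med]
    return kept if kept else [pairs[int(np.argmax(scores))]]  # never go empty

pairs = list(range(12))                        # 12 stitched model pairs
scores = [0.10, 0.40, 0.30, 0.20, 0.50, 0.25,  # hypothetical accuracies
          0.15, 0.45, 0.35, 0.05, 0.55, 0.60]  # after 2 epochs each
survivors = autopair_round(pairs, scores)
print(len(survivors))   # half the pairs survive this round
```

Repeating the round on the survivors (retraining, re-ranking, pruning) until the FLOPs budget runs out gives the full AutoPair procedure.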
|
| 229 |
+
|
| 230 |
+
# 6 Related Work
|
| 231 |
+
|
| 232 |
+
Vision language models. CLIP, one of the most popular VLMs, is contrastively pretrained on approximately 400M image-text pairs. Beyond multi-modal image classification and image-text retrieval, it has proven applicable to tasks such as open-set attribute recognition (Chen et al., 2023) and object detection (Minderer et al.). Moreover, it has inspired modifications to the default InfoNCE recipe, such as image captioning with contrastive pretraining and using a sigmoid in place of the softmax on the InfoNCE similarity matrix (Li et al., 2022; Alayrac et al., 2022; Singh et al., 2022; Zhai et al., 2023; Singh et al., 2024). Additionally, datasets oriented towards CLIP-like vision-language pretraining have been released in recent times (Schuhmann et al., 2021, 2022; Thomee et al., 2016; Changpinyo et al., 2021; Desai et al., 2021), often at the scale of millions of (image, caption) pairs. As a foundation model, CLIP has been applied to image synthesis (Rombach et al., 2022; Ramesh et al., 2022), and has been extended to modalities such as video (Chai et al., 2023; Wang et al., 2022) and audio (Guzhov et al., 2022). Our
work investigates how to efficiently develop multiple CLIP-like models from pretrained uni-modal encoder states.
Hypernetworks in LLMs and multi-modal domains. Hypernetworks (Ha et al., 2016; Schmidhuber, 1992) have proven useful for improving training efficiency and adaptability in many machine learning pipelines (Chauhan et al., 2024). Several works have explored their advantages for MLLMs and multi-modal models. Specifically, Zhang et al. (2024) propose HyperLLaVA, which predicts projector parameters for MLLMs given the task input. Hypernetworks have also been used to predict the parameters of adapters in parameter-efficient fine-tuning of LLMs (Mahabadi et al., 2021; Phang et al., 2023) and VLMs (Zhang et al., 2022). HyperCLIP (Akinwande et al., 2024) trains a hypernetwork to predict the parameters of image encoder layers given the task. Overall, these methods improve the training efficiency and adaptability of a single model combination on new tasks, but still require grid search when more pairs are considered. Our work addresses this limitation by training a joint hypernetwork across multiple encoder combinations, significantly improving both efficiency and performance.
# 7 Conclusion
We present a novel investigation of the use of hypernetworks for the M-OPS problem. HYMA sidesteps expensive grid search across all uni-modal model combinations by learning connector parameters jointly, producing strongly initialised connectors. We demonstrate that HYMA is an efficient solution to the M-OPS problem. Moreover, HYMA's design affords stitching of modalities beyond image-text: other avenues include, for instance, audio-text. We hope to inspire future work that utilizes hypernetworks for similar problems, where training several small neural networks can be expressed as a generative model that learns the parameters of the target networks.
# Limitations
Hypernetwork training can be less stable than training a standard connector (i.e., a single MLP). Training instabilities in hypernetworks have been previously studied (Ortiz et al., 2023; Chauhan et al., 2024), and are not unique to the specific design of our framework. However, since $H_{\phi}$ acts as a shared generating function across multiple connectors, the interaction of gradients from diverse model combinations—as well as their interplay with $B_{m}$ —can still lead to instability during training. To stabilize training, we tune the $\beta_{2}$ parameter of the Adam optimizer in accordance with recommendations from the optimization literature (Cattaneo and Shigida, 2025). In practice, we observed that including certain models (for example: the MaxViT family (Tu et al., 2022)) in the $N \times M$ pool led to instability, and thus these models were excluded from our final zoo. This limitation points to the need for a deeper investigation into the training dynamics and architectural properties of similar systems, which could inform strategies to improve both stability and performance of the hypernetwork.
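
For reference, a single (scalar) Adam update makes the role of $\beta_2$ explicit; the constants below are illustrative, not the exact values used in our runs:

```python
def adam_step(grad, m, v, t, lr=1e-2, beta1=0.9, beta2=0.95, eps=1e-8):
    """One scalar Adam update. A smaller beta2 shortens the second-moment
    memory, so the effective step size reacts faster to gradient noise."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment EMA
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment EMA
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    update = -lr * m_hat / (v_hat ** 0.5 + eps)
    return update, m, v
```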
# Acknowledgements
The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Diganta Misra. Jaisidh Singh is supported by the Konrad Zuse School of Excellence in Learning and Intelligent Systems (ELIZA) through the DAAD programme Konrad Zuse Schools of Excellence in Artificial Intelligence, sponsored by the Federal Ministry of Education and Research. This work was enabled by compute resources provided by Max Planck Institute for Intelligent Systems Tübingen & Amazon Science Hub.
# References
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, and 1 others. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
Victor Akinwande, Mohammad Sadegh Norouzzadeh, Devin Willmott, Anna Bair, Madan Ravi Ganesh, and J Zico Kolter. 2024. Hyperclip: Adapting vision-language models with hypernetworks. arXiv preprint arXiv:2412.16777.
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob L Menick, Sebastian Borgeaud, and 8 others. 2022. Flamingo: a visual language model for few-shot learning. In Advances in Neural Information Processing Systems, volume 35, pages 23716-23736. Curran Associates, Inc.
Anthropic. Claude 4 sonnet. https://www.anthropic.com/claude.
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, and 1 others. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609.
Jinhe Bi, Yifan Wang, Danqi Yan, Xun Xiao, Artur Hecker, Volker Tresp, and Yunpu Ma. 2025. Prism: Self-pruning intrinsic selection method for training-free multimodal data selection. Preprint, arXiv:2502.12119.
Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, and 1 others. 2023. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pages 2397-2430. PMLR.
Matias D. Cattaneo and Boris Shigida. 2025. Tuning adam(w): Default $\beta_2$ may be too large. https://mdcattaneo.github.io/papers/Cattaneo-Shigida_2025_TuningAdam.pdf.
Wenhao Chai, Xun Guo, Gaoang Wang, and Yan Lu. 2023. Stablevideo: Text-driven consistency-aware diffusion video editing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 23040-23050.
Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. 2021. Conceptual 12M: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3558-3568.
Vinod Kumar Chauhan, Jiandong Zhou, Ping Lu, Soheila Molaei, and David A Clifton. 2024. A brief review of hypernetworks in deep learning. Artificial Intelligence Review, 57(9):250.
Keyan Chen, Xiaolong Jiang, Yao Hu, Xu Tang, Yan Gao, Jianqi Chen, and Weidi Xie. 2023. Ovarnet: Towards open-vocabulary object attribute recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23518-23527.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 248-255. IEEE.
Karan Desai, Gaurav Kaul, Zubin Aysola, and Justin Johnson. 2021. Redcaps: Web-curated image-text data created by the people, for the people. arXiv preprint arXiv:2111.11431.
Alex Fang, Albin Madappally Jose, Amit Jain, Ludwig Schmidt, Alexander Toshev, and Vaishaal Shankar. 2023. Data filtering networks. Preprint, arXiv:2309.17425.
Andrey Guzhov, Federico Raue, Jorn Hees, and Andreas Dengel. 2022. Audioclip: Extending clip to image, text and audio. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 976-980. IEEE.
David Ha, Andrew Dai, and Quoc V Le. 2016. Hypernetworks. arXiv preprint arXiv:1609.09106.
Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. Journal of Artificial Intelligence Research, 47:853-899.
Junlong Jia, Ying Hu, Xi Weng, Yiming Shi, Miao Li, Xingjian Zhang, Baichuan Zhou, Ziyu Liu, Jie Luo, Lei Huang, and 1 others. 2024. Tinyllava factory: A modularized codebase for small-scale large multi-modal models. arXiv preprint arXiv:2405.11788.
Boris Knyazev, Doha Hwang, and Simon Lacoste-Julien. 2023. Can we scale transformers to predict parameters of diverse imagenet models? In International Conference on Machine Learning, pages 17243-17259. PMLR.
Alex Krizhevsky, Geoffrey Hinton, and 1 others. 2009. Learning multiple layers of features from tiny images.
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. In International conference on machine learning, pages 19730-19742. PMLR.
Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In International conference on machine learning, pages 12888-12900. PMLR.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer.
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2024. Visual instruction tuning. Advances in Neural Information Processing Systems, 36.
Rabeeh Karimi Mahabadi, Sebastian Ruder, Mostafa Dehghani, and James Henderson. 2021. Parameter-efficient multi-task fine-tuning for transformers via shared hypernetworks. arXiv preprint arXiv:2106.04489.
Anas Mahmoud, Mostafa Elhoushi, Amro Abbas, Yu Yang, Newsha Ardalani, Hugh Leather, and Ari Morcos. 2024. Sieve: Multimodal dataset pruning using image captioning models. Preprint, arXiv:2310.02110.
Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. 2019. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195-3204.
Jack Merullo, Louis Castricato, Carsten Eickhoff, and Ellie Pavlick. 2022. Linearly mapping from image to text space. arXiv preprint arXiv:2209.15162.
Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, and 1 others. 2022. Simple open-vocabulary object detection with vision transformers. arXiv preprint arXiv:2205.06230.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
Jose Javier Gonzalez Ortiz, John Guttag, and Adrian Dalca. 2023. Magnitude invariant parametrizations improve hypernetwork learning. arXiv preprint arXiv:2304.07645.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, and 1 others. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32.
Jason Phang, Yi Mao, Pengcheng He, and Weizhu Chen. 2023. Hypertuning: Toward adapting large
language models without back-propagation. In International Conference on Machine Learning, pages 27854-27875. PMLR.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, and 1 others. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, and 1 others. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695.
Elan Rosenfeld, Preetum Nakkiran, Hadi Pouransari, Oncel Tuzel, and Fartash Faghri. 2022. Ape: Aligning pretrained encoders to quickly learn aligned multimodal representations. arXiv preprint arXiv:2210.03927.
Jürgen Schmidhuber. 1992. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131-139.
Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, and 1 others. 2022. Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35:25278-25294.
Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. 2021. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114.
Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, and Kurt Keutzer. 2021. How much can clip benefit vision-and-language tasks? arXiv preprint arXiv:2107.06383.
Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela. 2022. Flava: A foundational language and vision alignment model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15638-15650.
Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. 2019. Towards vqa models that can read. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8317-8326.
Jaisidh Singh, Ishaan Shrivastava, Mayank Vatsa, Richa Singh, and Aparna Bharati. 2024. Learn "no" to say "yes" better: Improving vision-language models via negations. arXiv preprint arXiv:2403.20312.
Haoyu Song, Li Dong, Wei-Nan Zhang, Ting Liu, and Furu Wei. 2022. Clip models are few-shot learners: Empirical studies on vqa and visual entailment. Preprint, arXiv:2203.07190.
Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. 2016. Yfcc100m: The new data in multimedia research. Communications of the ACM, 59(2):64-73.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, and 1 others. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, and Yinxiao Li. 2022. Maxvit: Multi-axis vision transformer. In European conference on computer vision, pages 459-479. Springer.
Junke Wang, Dongdong Chen, Zuxuan Wu, Chong Luo, Luowei Zhou, Yucheng Zhao, Yujia Xie, Ce Liu, YuGang Jiang, and Lu Yuan. 2022. Omnivl: One foundation model for image-language and video-language tasks. In Advances in Neural Information Processing Systems, volume 35, pages 5696-5710. Curran Associates, Inc.
Ross Wightman. 2019. Pytorch image models. https://github.com/rwightman/pytorch-image-models.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, and 3 others. 2020. Huggingface's transformers: State-of-the-art natural language processing. Preprint, arXiv:1910.03771.
Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. 2023. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF
International Conference on Computer Vision, pages 11975-11986.
Wenqiao Zhang, Tianwei Lin, Jiang Liu, Fangxun Shu, Haoyuan Li, Lei Zhang, He Wanggui, Hao Zhou, Zheqi Lv, Hao Jiang, and 1 others. 2024. Hyperllava: Dynamic visual and language expert tuning for multimodal large language models. arXiv preprint arXiv:2403.13447.
Zhengkun Zhang, Wenya Guo, Xiaojun Meng, Yasheng Wang, Yadao Wang, Xin Jiang, Qun Liu, and Zhenglu Yang. 2022. Hyperpelt: Unified parameter-efficient language model tuning for both language and vision-and-language tasks. arXiv preprint arXiv:2203.03878.
Xun Zhu, Zheng Zhang, Xi Chen, Yiming Shi, Miao Li, and Ji Wu. 2025. Connector-s: A survey of connectors in multi-modal large language models. arXiv preprint arXiv:2502.11453.
# APPENDIX
# A HYMA for Multi-modal Large Language Models (MLLMs)
Another avenue for employing a predictive model for stitching is MLLMs, a setting significantly different from the VLM case. Not only does the causal language modeling objective differ from the contrastive scheme of VLMs, but the connector also stitches image encoder output representations to LLM input representations. In VLMs, the connector strictly stitches output representations, i.e., features produced by the text encoder are stitched to the space of image encoder features. We investigate how HYMA responds to this setting via the following experiments.
# A.1 $\mathbf{MLP}_1\mid \mathbf{N}\times \mathbf{M} = 3$
We stitch $N = 1$ image encoder (ViT-S) with $M = 3$ LLMs (GPT-2 (Radford et al., 2019), Pythia-160M (Biderman et al., 2023), Qwen-200M (Bai et al., 2023)) using a 2-layer MLP $(\mathrm{MLP}_1)$ as the connector. Figure 5 compares the performance of HYMA to the Grid Search and Best Guess baselines. We report the performance of the best connectors identified by each search method, along with the FLOPs incurred during training. We find that HYMA reduces the cost of searching over all combinations, bringing it below that of training even a single connector for the $N\times M = 3$ case. The efficiency of HYMA over the two comparative baselines at the final state (rightmost point in each plot) is $3\times$ w.r.t. Grid Search and $1.3\times$ w.r.t. Best Guess. Further, all search methods yield comparable optimal perplexities: 51.1 for HYMA and 51.0 for Grid Search (or Best Guess) on MSCOCO. On Flickr-8K, the perplexities are 72.4 and 70.4 for HYMA and Grid Search (or Best Guess) respectively.
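
For clarity, the perplexities reported here are exponentiated mean per-token negative log-likelihoods, e.g.:

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood)."""
    return math.exp(sum(token_nlls) / len(token_nlls))
```

A model that assigns uniform probability over a vocabulary of $V$ tokens attains perplexity exactly $V$.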
# A.2 Linear, $\mathbf{MLP}_1,\mathbf{MLP}_2\mid \mathbf{N}\times \mathbf{M} = 9$
We scale up our experimental setting to $N = 3$ image encoders (Clip-ViT-B, DeiT3-B, ViT-S) and $M = 3$ LLMs (GPT-2, Pythia-160M, Qwen-200M). As in the VLM case, we vary the complexity of the connector from a linear layer to an MLP with 2 hidden layers. Evaluation is done as for the $N \times M = 3$ MLLM combinations, i.e., via image captioning on MSCOCO and Flickr-8K. As shown in Table 5, HYMA struggles to match the true ranking of model pairs for the
MLLM case. Specifically, it performs worse on connectors of lower complexity, and consistently under-performs in terms of validation perplexity. Careful observation shows that for $N \times M = 9$ MLLMs, the ranking of connectors predicted by HYMA follows the trend of uni-modal model performance: the best image encoder (Clip-ViT-B) and the best LLM (Qwen-200M) show the best stitched performance. However, independent stitching does not show such behavior. Overall, connectors obtained via independent stitching outperform those obtained from HYMA by a significant margin, and the true ranking diverges notably from that predicted by HYMA. Disentangling the effects of the causal modeling loss and the change in stitched representation spaces is left to future work.
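
The agreement between HYMA's predicted ranking and the Oracle ranking (reported as NDCG@k and Spearman's $\rho$ in Table 5) can be computed as follows; this is a self-contained sketch assuming no tied scores, not our evaluation code:

```python
import math

def spearman_rho(pred_scores, true_scores):
    """Spearman's rank correlation between two score lists (no ties)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rp, rt = ranks(pred_scores), ranks(true_scores)
    n = len(pred_scores)
    d2 = sum((a - b) ** 2 for a, b in zip(rp, rt))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def ndcg_at_k(pred_scores, true_scores, k):
    """NDCG@k of the ranking induced by pred_scores vs. true relevances."""
    order = sorted(range(len(pred_scores)), key=lambda i: -pred_scores[i])
    dcg = sum(true_scores[i] / math.log2(rank + 2)
              for rank, i in enumerate(order[:k]))
    ideal = sorted(true_scores, reverse=True)
    idcg = sum(rel / math.log2(rank + 2)
               for rank, rel in enumerate(ideal[:k]))
    return dcg / idcg
```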
# B Pretrained models
# B.1 Image encoders (source: timm (Wightman, 2019))
<table><tr><td>Feature Dim.</td><td>Model</td><td>Shorthand</td><td>Param. count (M)</td><td>timm specifier</td></tr><tr><td rowspan="3">384</td><td>ViT-S</td><td>VS</td><td>22.05</td><td>vit_small_patch16_224.augreg_in21k_ft_in1k</td></tr><tr><td>DeiT-S</td><td>DS</td><td>22.05</td><td>deit_small_patch16_224.fb_in1k</td></tr><tr><td>DeiT3-S</td><td>D3S</td><td>22.06</td><td>deit3_small_patch16_224.fb_in1k</td></tr><tr><td rowspan="4">768</td><td>ViT-B</td><td>VB</td><td>86.57</td><td>vit_base_patch16_224.augreg_in21k_ft_in1k</td></tr><tr><td>DeiT-B</td><td>DB</td><td>86.57</td><td>deit_base_patch16_224.fb_in1k</td></tr><tr><td>DeiT3-B</td><td>D3B</td><td>86.88</td><td>deit3_base_patch16_224.fb_in22k_ft_in1k</td></tr><tr><td>Clip-ViT-B</td><td>CVB</td><td>86.86</td><td>vit_base_patch16_clip_224.laion2b_ft_in12k_in1k</td></tr><tr><td rowspan="3">1024</td><td>ViT-L</td><td>VL</td><td>304.33</td><td>vit_large_patch16_224.augreg_in21k_ft_in1k</td></tr><tr><td>Eva2-L</td><td>E2L</td><td>305.08</td><td>eva02_large_patch14_448.mim_m38m_ft_in22k_in1k</td></tr><tr><td>DeiT3-L</td><td>D3L</td><td>304.37</td><td>deit3_large_patch16_224.fb_in22k_ft_in1k</td></tr></table>
Table 6: All pretrained image encoders used in our work are given above, along with their shorthand IDs that may be referred to in the main manuscript.
# B.2 Text encoders & LLMs (source: huggingface (Wolf et al., 2020))
<table><tr><td>Feature Dim.</td><td>Model</td><td>Shorthand</td><td>Param. count (M)</td><td>huggingface specifier</td></tr><tr><td>384</td><td>miniLM-L</td><td>mLL</td><td>33.4</td><td>sentence-transformers/all-MiniLM-L12-v2</td></tr><tr><td>768</td><td>mpnet-B</td><td>mpB</td><td>109</td><td>sentence-transformers/all-mpnet-base-v2</td></tr><tr><td>1024</td><td>roberta-L</td><td>roL</td><td>355</td><td>sentence-transformers/all-roberta-large-v1</td></tr><tr><td rowspan="3">768</td><td>GPT-2</td><td>g2</td><td>137</td><td>openai-community/gpt2</td></tr><tr><td>Pythia-160M</td><td>py</td><td>213</td><td>EleutherAI/pythia-160m</td></tr><tr><td>Qwen-200M</td><td>qw</td><td>203</td><td>MiniLLM/MiniPLM-Qwen-200M</td></tr></table>
Table 7: All pretrained text encoders and LLMs used in our work are given above, along with their shorthand IDs that may be referred to in the main manuscript.
# C Designing the Model Zoo
While our empirical analysis suggests that models with larger parametric capacity or higher embedding dimensionality generally perform better after stitching, a natural question arises: why include smaller models in the model zoo at all? We justify their inclusion based on the following:
Figure 5: Evaluation of HYMA for MLLMs, on MSCOCO and Flickr-8K ( $N = 1$ , $M = 3$ , $B_{m} = 1$ ). We report the model combination exhibiting the best final performance for each evaluation benchmark and search method.
<table><tr><td rowspan="2">Dataset</td><td rowspan="2">Connector</td><td colspan="3">NDCG @ k (↑)</td><td rowspan="2">ρ (↑)</td><td colspan="4">ΔPerplexity (↓)</td></tr><tr><td>k=5</td><td>k=7</td><td>k=9</td><td>Rand. (n=5)</td><td>UniT-1</td><td>Ask-LLM</td><td>Oracle (GS)</td></tr><tr><td rowspan="3">MSCOCO</td><td>Linear</td><td>0.16</td><td>0.42</td><td>0.74</td><td>-0.6</td><td>+3.68</td><td>+6.85</td><td>+3.20</td><td>+6.85</td></tr><tr><td>MLP1</td><td>0.65</td><td>0.79</td><td>0.89</td><td>0.35</td><td>+0.65</td><td>+2.2</td><td>+2.20</td><td>+3.5</td></tr><tr><td>MLP2</td><td>0.61</td><td>0.74</td><td>0.85</td><td>0.39</td><td>+1.13</td><td>+3.58</td><td>+3.58</td><td>+4.01</td></tr><tr><td rowspan="3">Flickr-8K</td><td>Linear</td><td>0.56</td><td>0.73</td><td>0.85</td><td>0.12</td><td>+1.65</td><td>+5.54</td><td>-1.46</td><td>+5.54</td></tr><tr><td>MLP1</td><td>0.77</td><td>0.82</td><td>0.90</td><td>0.45</td><td>-0.1</td><td>+1.30</td><td>-1.30</td><td>+4.5</td></tr><tr><td>MLP2</td><td>0.58</td><td>0.72</td><td>0.83</td><td>0.23</td><td>-3.57</td><td>-0.00</td><td>-0.00</td><td>-0.00</td></tr></table>
Table 5: HYMA MLLM Results: We report the ranking similarity between HYMA and the Oracle—Grid Search (GS)—using NDCG and Spearman's $\rho$ . Across all three connector configurations, HYMA exhibits strong correlation with GS rankings. Additionally, we show the perplexity difference $(\Delta)$ of the best connector obtained post stitching via HYMA, compared to four baselines: (a) Random: random pairing and stitching (avg. over 5 runs); (b) UniT-1: stitching the best unimodal models; (c) Ask-LLM: model pairs picked by Claude 4 Sonnet; and (d) Oracle: Full grid search over all $N\times M = 9$ configurations.
1. First, including smaller models enables the construction of multi-modal models across a range of parametric capacities, which is crucial for deployment under varying computational or resource constraints. For example, an organization aiming to deploy multi-modal models at multiple scales would incur significantly higher training costs if relying on independent training for each configuration. In contrast, HYMA offers a substantially more cost-effective alternative.
2. Second, our empirical observations indicate that larger models are not always the best-performing choice when stitched into multimodal pairs. This motivates the inclusion of a diverse set of model configurations in our zoo to better explore the multi-modal design space. By covering a broader range of capacity combinations, HYMA facilitates a more comprehensive and efficient search, supported by observations from Figure 1 and Table 1.
# D Training and hyper-parameter details
We tune hyperparameters for each trained model to maximize (i) validation performance, (ii) GPU utilization, and (iii) training stability. Our goal is to demonstrate that hypernetworks can efficiently approach the M-OPS problem, which often requires a large amount of computational resources; hence, we emphasize maximizing GPU utilization in order to present an efficiency-oriented method. We report the hyperparameters used for training connectors for VLMs along with the configuration for HYMA. We use 3 random seeds and report average performance in each experiment.
VLMs. Training individual connectors between VLMs uses hyperparameters that provide the best performance after 10 epochs of training. Our hyperparameter choice is similar to that of Rosenfeld et al. (2022). Specifically, we use a batch size of $2^{14}$ , the Adam optimizer, and a learning rate of $1e - 2$ subject to a schedule that linearly warms
```python
def train_hypernet(hypernet, data_iter, models_iter, optimizer, num_steps):
    hypernet.train()
    for step in range(num_steps):
        # first sample (image, caption) data with batch size B_d
        data_batch = next(data_iter)
        # then subsample the full NxM space of models with batch size B_m
        model_batch = next(models_iter)
        optimizer.zero_grad()
        # inputs to the hypernetwork are indices (ids) of the respective pairs
        vlm_pair_ids = get_ids_wrt_full_zoo(model_batch)
        # the hypernet outputs parameters of the stitches between the pairs
        generated_params = hypernet(vlm_pair_ids)
        # map the data through the stitched model pairs
        # and compute the multi-pair multi-modal loss
        loss = hypernet.forward_data_through(
            data_batch, generated_params, model_batch
        )
        # back-propagate
        loss.backward()
        optimizer.step()
```
Figure 6: PyTorch (Paszke et al., 2019) pseudocode for HYMA training procedure on $N \times M$ models.
up the learning rate from 0 for 50 steps. After that, the learning rate is decayed to 0 following a cosine curve. Training HYMA for VLMs is quite sensitive to hyperparameters, as is to be expected from a complex network that outputs large parameter spaces, especially since it does so indirectly (via layer-specific embeddings). The batch size that ensures the most stable training is $2^{9}$ , and the learning rate is set to $1e - 2$ for the Adam optimizer. As mentioned in the main manuscript, the model batch size $B_{m}$ strongly affects training; hence we set it to 1 when $N\times M = 3$ and 9 when $N\times M = 27$ . For AutoPair, $N\times M = 12$ and $B_{m} = 4$ .
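
The warmup-plus-cosine schedule described above corresponds to the following step-indexed learning-rate function (the total step count is a placeholder that depends on the run length):

```python
import math

def lr_at_step(step, base_lr=1e-2, warmup_steps=50, total_steps=1000):
    """Linear warmup from 0 for `warmup_steps`, then cosine decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1 + math.cos(math.pi * progress))
```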
MLLMs. For MLLMs, we follow the recipes given in (Jia et al., 2024) for training only the connector (referred to as the feature alignment phase of pretraining). In particular, we use Adam with a batch size of 64 and a learning rate of $1e - 3$ for training individual connectors, subject to a schedule with warmup ratio $3e - 2$ followed by cosine decay to 0. For training HYMA for MLLMs, the batch size is 32 and the learning rate is $1e - 3$ .
Architectural experiments. For VLMs, we tried using a compression of the image encoder features
<table><tr><td>Architecture</td><td>IN-1K Top-1 accuracy</td></tr><tr><td>HYMA</td><td>27.46</td></tr><tr><td>HYMAEC</td><td>12.11</td></tr></table>
Table 8: HYMA performs significantly better downstream in comparison to $\mathrm{HYMA_{EC}}$ .
as the conditional input to the hypernetwork, while keeping all other components the same. Only the learnable code-book is replaced by a learnt compression of batch-averaged image encoder features. This configuration, denoted $\mathrm{HYMA}_{\mathrm{EC}}$ , yielded lower performance than our default methodology HYMA. Specifically, for the $N\times M = 3$ case, on multi-modal image classification on ImageNet-1K, the top-1 accuracy of the best model pair given by HYMA is superior to that given by $\mathrm{HYMA}_{\mathrm{EC}}$ , as shown in Table 8. Figure 6 provides pseudo-code depicting our training setup.
# E Factors impacting FLOPs
While the number of parameters in the model being trained is no doubt a factor that is linearly proportional to the total FLOPs incurred, we note that other factors, such as hyperparameters, matter as well. For loss functions that relate linearly to the batch size, the batch size has no effect on the total number of FLOPs incurred over the entire training run: the model takes fewer update steps with a bigger batch size, but proportionately more with a smaller one. However, for loss functions that scale quadratically with the number of data samples observed, such as the InfoNCE loss (Oord et al., 2018), the batch size can significantly affect the FLOP count. This, together with the primary design choice of iteratively loading models, which decreases the number of samples shown to each model pair by a factor of $N \times M / B_{m}$ , accounts for why HYMA, despite training a large hypernetwork (with on average $500 \times$ more parameters than the connector), is efficient, particularly for VLMs. For MLLMs, the reasons are our design choice of iterative model batches, as well as the fact that certain LLMs have larger parametric capacity than others: backpropagating the gradient through them into the connector for a total of $\mathcal{T}$ steps is more expensive than doing so for $\mathcal{T} / (N \times M / B_{m})$ steps via HYMA.
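
To make the batch-size dependence concrete, one can count just the FLOPs of the $B \times B$ InfoNCE similarity matrix (one $B \times d$ by $d \times B$ matmul per batch) and the per-pair sample budget; a rough sketch under these simplifying assumptions:

```python
def similarity_flops_per_epoch(num_samples, batch_size, dim):
    """FLOPs spent on InfoNCE similarity matrices over one epoch.

    Each batch costs ~2 * B^2 * d FLOPs and there are num_samples / B
    batches, so this term grows linearly with the batch size B.
    """
    return (num_samples // batch_size) * 2 * batch_size ** 2 * dim

def per_pair_samples(total_samples, n, m, b_m):
    """Samples seen by each of the n*m pairs under model mini-batching:
    the data stream is shared, so each pair sees a b_m / (n*m) fraction."""
    return total_samples * b_m // (n * m)
```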
# F Connection to Data Pruning
<table><tr><td>Method</td><td>Best Model configuration</td><td>Perf.</td></tr><tr><td>C-GS</td><td>DeiT-3S + miniLM-L</td><td>24.07</td></tr><tr><td>HYMA</td><td>DeiT-3S + miniLM-L</td><td>27.46</td></tr></table>
Table 9: HYMA vs. Constrained Grid Search (C-GS). For the setting $N \times M = 3$ , $B_{m} = 1$ , we constrain the total data available to Grid Search to one-third, aligning it with HYMA's data budget. While this constraint results in a comparable reduction in FLOPs relative to full Grid Search, it leads to a notable drop in performance. Perf. denotes MIC top-1 accuracy on ImageNet-1K.
While HYMA provides a unified and compute-efficient framework for addressing the M-OPS problem, the primary reduction in FLOPs arises from the dual mini-batching strategy employed during training. This dual mini-batching mechanism results in each model pair configuration being exposed to a smaller subset of data compared to independent stitching, effectively mimicking randomized data pruning in the process of constructing multimodal models from unimodal pairs.
Data pruning and filtering strategies for multimodal training have been extensively explored in prior work (Fang et al., 2023; Bi et al., 2025; Mahmoud et al., 2024), typically focusing on restricting the training data via heuristic-based selection. In contrast, HYMA adopts a randomized approach: the mini-batching process dynamically selects data for each model configuration, and across multiple training steps, both the data and batch assignments are shuffled. This results in a more uniform and implicit allocation of the dataset across the space of possible model configurations, while still maintaining computational efficiency. It is important to note, however, that this data reduction applies only to each model configuration independently; the hypernetwork $H_{\phi}$ , which generates the connector weights, is still trained over the entire dataset.
This effect is further evident when comparing HYMA to a constrained version of Oracle (Grid Search) (C-GS). As shown in Table 9, when the total data available to Grid Search is limited to one-third—matching HYMA's data budget—the best-performing model identified by C-GS performs significantly worse than HYMA.
# G VQA implementation
We follow a methodology similar to Question Irrelevant Prompt (QIP) (Shen et al., 2021; Song et al., 2022), which creates a prompt of “QUESTION: {question} ANSWER: {answer}” for a given image. This prompt is embedded via the text encoder, and the task is to match the image to the correct prompt, as an image-text matching objective.
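The QIP-style formulation above can be sketched as follows (a minimal sketch, not the paper's implementation; the encoder function `encode_text` and the helper names are hypothetical), reducing VQA to picking the candidate prompt whose embedding is closest to the image embedding:

```python
import numpy as np

def qip_prompt(question, answer):
    # Build the "QUESTION: ... ANSWER: ..." prompt for one candidate answer.
    return f"QUESTION: {question} ANSWER: {answer}"

def pick_answer(image_emb, question, candidate_answers, encode_text):
    # One prompt per candidate answer, embedded via the text encoder.
    prompts = [qip_prompt(question, a) for a in candidate_answers]
    text_embs = np.stack([encode_text(p) for p in prompts])
    # Cosine similarity = dot product of L2-normalized embeddings.
    text_embs /= np.linalg.norm(text_embs, axis=1, keepdims=True)
    image_emb = image_emb / np.linalg.norm(image_emb)
    scores = text_embs @ image_emb
    return candidate_answers[int(np.argmax(scores))]
```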
# H Baselines: Ask-LLM (for VLMs)
We prompt Claude 4 Sonnet (Anthropic) to identify the best model pair, specifying the image encoder metadata from timm (details of the image encoder from the ImageNet-1K results database, such as accuracy, parameters, and image size used for pretraining). The metadata of the text encoder is obtained via Hugging Face (details of the pretrained text encoder, such as embedding dimension and parameters). The "task" is one among multi-modal image classification, image-text matching, and visual question answering, whereas "dataset" is simply the name of the dataset, and "dataset_metadata" contains the number of samples, classes, questions, and answers, as needed for the dataset. We specify the type of connector (Linear, $\mathrm{MLP}_1$ , $\mathrm{MLP}_2$ ) via "depth".
"You are an oracle which will predict which combination of image and text encoders will perform best on a given task. The task is to predict which (image encoder, text encoder) pair will yield the best CLIP-like VLM from a list of image encoders and text encoders. More details about this: each pair of encoders will be connected via an MLP of number of hidden layers {depth} (0 means a linear layer), which will be trained to map text embeddings to the image embedding space such that the InfoNCE loss is minimized.
Your job is NOT TO provide any code or run the experiment. JUST TO PREDICT WHICH PAIR WILL YIELD THE BEST {task} {task_metric} on {dataset} ({dataset_metadata}).
Here are the image encoders, along with their metadata: {image_encoders_with_metadata}
Here are the text encoders, along with their metadata: {text_encoders_with_metadata}
Please provide your answer in (image Encoder, text Encoder) format ONLY. NO OTHER TEXT SHOULD BE PRODUCED BY YOU EXCEPT THE ANSWER IN THE REQUIRED FORMAT."
# I Baselines: Ask-LLM (for MLLMs)
"You are an oracle which will predict which combination of image encoder and LLM will perform best on image captioning task. The task is to predict which (image encoder, LLM) pair will yield the best GPT4-like MLLM from a list of image encoders and LLMs. More details about this: each pair will be connected via an MLP of number of hidden layers {depth} (0 means a linear layer), which will be trained to map patch-wise image encoder outputs to the input embedding space of LLM such that the causal language modeling loss is minimized.
Your job is NOT TO provide any code or run the experiment. JUST TO PREDICT WHICH PAIR WILL YIELD THE BEST {task} {task_metric} on {dataset} ({dataset_metadata}).
Here are the image encoders, along with their metadata: {image_encoders_with_metadata}
Here are the LLMs, along with their metadata: {llms_with_metadata}
Please provide your answer in (image Encoder, llm) format ONLY. NO OTHER TEXT SHOULD BE PRODUCED BY YOU EXCEPT THE ANSWER IN THE REQUIRED FORMAT."
We specify image encoder details as done for VLMs, but LLM details are obtained from Hugging Face (parameters, embedding dimension, context length).
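The Ask-LLM baseline above amounts to filling a prompt template with encoder metadata and parsing the model's "(image encoder, text encoder)" reply; a minimal sketch (the `TEMPLATE` excerpt and the parsing regex are illustrative, not the paper's code):

```python
import re

# Excerpt of the prompt template with the placeholders described above.
TEMPLATE = (
    "Your job is NOT TO provide any code or run the experiment. "
    "JUST TO PREDICT WHICH PAIR WILL YIELD THE BEST {task} {task_metric} "
    "on {dataset} ({dataset_metadata})."
)

def fill_prompt(task, task_metric, dataset, dataset_metadata):
    # Substitute the task/dataset fields into the template.
    return TEMPLATE.format(task=task, task_metric=task_metric,
                           dataset=dataset, dataset_metadata=dataset_metadata)

def parse_pair(reply):
    # The LLM is instructed to answer in "(image encoder, text encoder)" format only.
    match = re.search(r"\(([^,]+),\s*([^)]+)\)", reply)
    if match is None:
        raise ValueError("reply not in '(image encoder, text encoder)' format")
    return match.group(1).strip(), match.group(2).strip()
```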
2025/(Almost) Free Modality Stitching of Foundation Models/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:be1f8206ea9a32570377233f4831661872d1250e49e1e79e4245b6c7377b5f04
size 584349

2025/(Almost) Free Modality Stitching of Foundation Models/layout.json
ADDED
The diff for this file is too large to render. See raw diff

2025/3DS_ Medical Domain Adaptation of LLMs via Decomposed Difficulty-based Data Selection/055c7ca1-23e4-4b4e-920e-8094616e6655_content_list.json
ADDED
The diff for this file is too large to render. See raw diff

2025/3DS_ Medical Domain Adaptation of LLMs via Decomposed Difficulty-based Data Selection/055c7ca1-23e4-4b4e-920e-8094616e6655_model.json
ADDED
The diff for this file is too large to render. See raw diff

2025/3DS_ Medical Domain Adaptation of LLMs via Decomposed Difficulty-based Data Selection/055c7ca1-23e4-4b4e-920e-8094616e6655_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61fc2420983e5f735945bba5c90a357d580192ffb91d1d1cc626a1c17da89d16
size 1424922

2025/3DS_ Medical Domain Adaptation of LLMs via Decomposed Difficulty-based Data Selection/full.md
ADDED
The diff for this file is too large to render. See raw diff

2025/3DS_ Medical Domain Adaptation of LLMs via Decomposed Difficulty-based Data Selection/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:81a6b8f264d433e7a131eeeb1478d562a9e6e2abec26b151026c0b1352abc6ae
size 1235905

2025/3DS_ Medical Domain Adaptation of LLMs via Decomposed Difficulty-based Data Selection/layout.json
ADDED
The diff for this file is too large to render. See raw diff

2025/3MDBench_ Medical Multimodal Multi-agent Dialogue Benchmark/5010b5c5-34fd-495f-8d62-35405e81977f_content_list.json
ADDED
The diff for this file is too large to render. See raw diff

2025/3MDBench_ Medical Multimodal Multi-agent Dialogue Benchmark/5010b5c5-34fd-495f-8d62-35405e81977f_model.json
ADDED
The diff for this file is too large to render. See raw diff

2025/3MDBench_ Medical Multimodal Multi-agent Dialogue Benchmark/5010b5c5-34fd-495f-8d62-35405e81977f_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dbfe6d6f367364a1c19511d89619e3c282904531d01032d44ec073c9270f59ea
size 7610564

2025/3MDBench_ Medical Multimodal Multi-agent Dialogue Benchmark/full.md
ADDED
The diff for this file is too large to render. See raw diff

2025/3MDBench_ Medical Multimodal Multi-agent Dialogue Benchmark/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:473068d1295d65b6e247bdd0efbfad1d7dfc6ca14810bac9e8f8dcaca4f25ab3
size 2680338

2025/3MDBench_ Medical Multimodal Multi-agent Dialogue Benchmark/layout.json
ADDED
The diff for this file is too large to render. See raw diff

2025/3R_ Enhancing Sentence Representation Learning via Redundant Representation Reduction/b739e777-ee83-4d28-8968-3c5ed2a33419_content_list.json
ADDED
The diff for this file is too large to render. See raw diff

2025/3R_ Enhancing Sentence Representation Learning via Redundant Representation Reduction/b739e777-ee83-4d28-8968-3c5ed2a33419_model.json
ADDED
The diff for this file is too large to render. See raw diff

2025/3R_ Enhancing Sentence Representation Learning via Redundant Representation Reduction/b739e777-ee83-4d28-8968-3c5ed2a33419_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:888d198e6ff37b5e9eac80cb5e9e5212e2af40018b4111db579b7b5c20b051d9
size 981358

2025/3R_ Enhancing Sentence Representation Learning via Redundant Representation Reduction/full.md
ADDED
@@ -0,0 +1,374 @@
# 3R: Enhancing Sentence Representation Learning via Redundant Representation Reduction

Longxuan Ma, Xiao Wu, Yuxin Huang*, Shengxiang Gao, Zhengtao Yu

Kunming University of Science and Technology

Faculty of Information Engineering and Automation

lxma@kust.edu.cn wuxiao1@stu.kust.edu.cn huangyuxin2004@163.com

gaoshengxiang.yn@hotmail.com ztyu@hotmail.com

# Abstract
Sentence representation learning (SRL) aims to learn sentence embeddings that conform to the semantic information of sentences. In recent years, fine-tuning methods based on pre-trained models and contrastive learning frameworks have significantly advanced the quality of sentence representations. However, within the semantic space of SRL models, both word embeddings and the sentence representations derived from them exhibit substantial redundant information, which can adversely affect the precision of sentence representations. Existing approaches predominantly optimize training strategies to alleviate the redundancy problem, lacking fine-grained guidance on reducing redundant representations. This paper proposes a novel approach that dynamically identifies and reduces redundant information from a dimensional perspective, training the SRL model to redistribute semantics across dimensions and thereby learn better sentence representations. Extensive experiments across seven semantic textual similarity benchmarks demonstrate the effectiveness and generality of the proposed method. A comprehensive analysis of the experimental results is conducted, and the code/data will be released.
# 1 Introduction
Sentence representation learning (SRL) (Yan et al., 2021; Zhou et al., 2022) is a fundamental task that aims to learn sentence embeddings that benefit downstream tasks such as semantic similarity (Agirre et al., 2016; Cer et al., 2017), information retrieval (Thakur et al., 2021), and sentiment analysis (Bao et al., 2023).
Recently, a training paradigm based on pre-trained models and contrastive learning as a fine-tuning method has achieved significant success in SRL. Among these, SimCSE (Simple Contrastive Learning of Sentence Embeddings) (Gao et al., 2021) stands out as a representative work (Chen et al., 2020; Sun et al., 2022; Liu et al., 2024). It proposes a simple yet effective method for constructing positive examples, significantly enhancing the quality of sentence embeddings. Subsequently, numerous studies (Chuang et al., 2022; He et al., 2023; Zhuo et al., 2023; Nguyen et al., 2024; Xu et al., 2024; Zhu et al., 2024) have focused on improving the SimCSE method to learn more effective sentence representations, including using large language models (LLMs) to evaluate training data quality (Cheng et al., 2023a) or directly generate high-quality data (Wang et al., 2024a).

Figure 1: Word and sentence embedding redundancy.
Nevertheless, contrastive SRL still faces certain challenges. Firstly, two sentences with significant semantic differences may still use the same words, with high-frequency words being the most common example, as shown in the bottom part of Figure 1. While some high-frequency words such as stop words play a crucial role in enhancing sentence coherence and semantic fluency, their contribution to the core semantics of the sentence is limited (Chen et al., 2022). High-frequency words add redundant encoded information, making it harder for SRL models to distinguish sentences with these overlapping words. Secondly, token embeddings learned by pre-trained models often exhibit redundant or ineffective information (Shi et al., 2022; Chen et al., 2023), leading to high similarity between tokens with different parts of speech and meanings (with high-frequency words contributing the majority). As shown in Figure 1, the cosine similarity between the embeddings of "was" and "born" reaches 0.79 in the upper-left heat map, and the cosine similarity between "the" and "born" is 0.84 in the upper-right heat map<sup>1</sup>. Sentence representations derived from token embeddings are influenced by this token-level redundant information (Tian et al., 2020), making it difficult for SRL models to understand the overall semantics.
These two challenges bring unexpected redundant information (Shen et al., 2023), which hinders the contrastive SRL models from focusing on key semantic details and acquiring adequate discriminative knowledge (Chen et al., 2022, 2023). For instance, despite the semantic gap, the cosine similarity between two sentence representations reaches 0.86 in Figure 1. Current studies on contrastive SRL normally address the redundancy problem by adjusting training strategies. For example, Chen et al. (2022) proposes an information minimization-based contrastive learning method to learn the important information and drop the redundant information; Chen et al. (2023) utilizes hidden representations from intermediate layers as negative samples which the final sentence representations should be away from. However, these training strategies often lack fine-grained guidance to identify and reduce redundancy, hindering the model's ability to learn better sentence representations.
In this paper, we propose a Redundant Representation Reduction (3R) approach, which adopts an explicit signal to guide the reduction of redundant representations. The 3R method comprises three steps: (1) constructing a corpus-level redundant sentence embedding based on high-frequency words, (2) enabling the model to self-identify dimensions containing redundant information within each training batch, and (3) dynamically reducing redundant information for each training sample using the corpus-level redundant embedding and self-identified redundant dimensions. The 3R method helps the SRL model adjust the information distribution in different dimensions and enhances the ability of SRL models to concentrate on critical semantic information, thereby learning better sentence representations.
The 3R method offers several advantages: 1) it can be implemented in several lines of code; 2) it is model-agnostic and requires no modification to the core network architecture of SimCSE, so it can be easily adapted to different contrastive learning-based representation learning frameworks; 3) experiments show that 3R can help the contrastive SRL model learn effective representations that improve downstream task performance. The contributions of this paper are:
- We propose a Redundant Representation Reduction (3R) method that dynamically identifies and reduces redundant information in dimensions, which helps the contrastive SRL model to focus on key semantic information and learn better sentence representation.
- Extensive experiments on standard semantic textual similarity (STS) tasks demonstrate that 3R: 1) outperforms previous approaches that aim to improve SRL by reducing redundant information; 2) works together with previous methods to improve performance, demonstrating good generality. We provide a systematic analysis of the results. The code and data will be released on GitHub<sup>2</sup>.
# 2 Related Work
# 2.1 Contrastive Learning
Recently, contrastive learning-based approaches have become the primary direction in SRL (Gao et al., 2021). Contrastive learning aims to pull representations of similar samples closer while pushing representations of dissimilar samples as far apart as possible. The objective of unsupervised contrastive learning is shown in Equation (1):
$$
\mathrm{L}_{i} = -\log \frac{\mathrm{e}^{\mathrm{sim}\left(\mathbf{h}_{i}, \mathbf{h}_{i}^{+}\right)/\tau}}{\sum_{j=1}^{\mathrm{N}} \mathrm{e}^{\mathrm{sim}\left(\mathbf{h}_{i}, \mathbf{h}_{j}^{+}\right)/\tau}}, \tag{1}
$$
where $\mathbf{h}_i$ represents the embedding of sample $x_{i}$ in the deep learning model, and $\mathbf{h}_i^+$ denotes the embedding of the positive example of $x_{i}$ . $\mathbf{h}_j^+$ is the embedding of the examples within the same training batch, $j\in \{1,2,\dots N\}$ . $\mathrm{sim}(\cdot)$ is the cosine similarity between two representations. $\tau$ is a temperature constant, which adjusts the influence of the similarity scores on the loss $L_{i}$ . Building on the unsupervised framework, the objective of supervised contrastive learning introduces hard negative samples (Liu et al., 2025), as shown below:
Figure 2: The proposed Redundant Representation Reduction (3R) Method.
$$
-\log \frac{\mathrm{e}^{\mathrm{sim}\left(\mathbf{h}_{i}, \mathbf{h}_{i}^{+}\right)/\tau}}{\sum_{j=1}^{\mathrm{N}} \left(\mathrm{e}^{\mathrm{sim}\left(\mathbf{h}_{i}, \mathbf{h}_{j}^{+}\right)/\tau} + \mathrm{e}^{\mathrm{sim}\left(\mathbf{h}_{i}, \mathbf{h}_{j}^{-}\right)/\tau}\right)}, \tag{2}
$$
where $\mathbf{h}_j^-$ represents the embedding of the negative example $x_{j}^{-}$ for the sample $x_{j}$ in the deep learning model (Ma et al., 2022). Our method aims to reduce redundant information of the sentence representation $\pmb{h}$ , which can be adapted to both unsupervised and supervised contrastive learning.
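The unsupervised objective of Equation (1) can be sketched in NumPy (a minimal sketch, assuming `h` and `h_pos` are batch matrices of anchor and positive embeddings; the supervised variant of Equation (2) would additionally append hard-negative similarities to the denominator):

```python
import numpy as np

def info_nce(h, h_pos, tau=0.05):
    # Cosine similarity matrix: sim(h_i, h_j^+) / tau after L2 normalization.
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    h_pos = h_pos / np.linalg.norm(h_pos, axis=1, keepdims=True)
    sim = h @ h_pos.T / tau
    # Log-softmax over each row, shifted for numerical stability.
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The positive for h_i sits on the diagonal (j = i).
    return -np.mean(np.diag(log_prob))
```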
# 2.2 SimCSE and Its Improvements
To make contrastive SRL more effective, considerable research effort has been devoted to constructing high-quality training examples. SimCSE (Gao et al., 2021) is a representative work that constructs positive pairs through the Dropout mechanism in neural networks. It feeds the same sentence into the model twice and uses the embeddings generated from these two passes as positive pairs in unsupervised contrastive learning. Wu et al. (2022) proposes to construct positive pairs through word repetition, which effectively alleviates the bias issue caused by the length similarity of positive pairs. Wang and Dou (2023) uses a rule-based method to construct semantically opposite but structurally identical sentences as negatives. Xu et al. (2023a) adopts an adversarial learning framework to construct both positive and negative pairs. Xu et al. (2023b) improves SRL by removing the Dropout noise in negative pairs. In recent years, some studies leverage LLMs to select (Cheng et al., 2023a) or construct (Jiang et al., 2022; Cheng et al., 2023a; Wang et al., 2024a) high-quality training data, or directly use LLMs as the base model for contrastive SRL (Li and Li, 2024).
Another line of research aims to improve the quality of sentence representations by optimizing the contrastive learning objective, such as integrating semantic information (Tan et al., 2022), incorporating soft-prompt information (Ou and Xu, 2024), and introducing additional loss terms (Chuang et al., 2022; Lee, 2023).
Among previous work, Chen et al. (2022), Shen et al. (2023) and Chen et al. (2023) are the most similar to ours, in that they try to improve SRL by reducing redundant information. Chen et al. (2022) introduces an additional loss function to guide the model in encoding less redundancy into sentence embeddings. Shen et al. (2023) proposes a post-processing method to subtract sentence-level and corpus-level redundant information in sentence embeddings. Chen et al. (2023) treats representations of sentences from intermediate layers of the model as additional negative examples and reduces the redundancy in sentence embeddings by increasing the distance to these negative examples. However, the previous work lacks fine-grained guidance (Ma et al., 2024) on allocating effective semantic information. In contrast, 3R provides guidance for the contrastive SRL model to dynamically identify and reduce redundant information in each dimension.
# 3 The Proposed 3R Method
As shown in Figure 2, the 3R method consists of redundant representation construction, redundant dimension identification, and redundant representation reduction.
# 3.1 Redundant Representations Construction
Inspired by the corpus-level redundancy defined by Shen et al. (2023), to facilitate the model in identifying and autonomously mitigating the influence of redundant information, we start by constructing a set of redundant exemplars derived from high-frequency lexical items within the training corpus. This type of exemplar represents both global semantic statistical information and the role of a "weak" example, which cannot provide effective semantic information to distinguish between positive and negative examples. Reducing this ineffective semantic information in the sentence representation can help the model better focus on the key semantics that distinguish different examples.
Step (1). We count the word frequencies in the unsupervised training dataset of the SimCSE framework (Wiki dataset (Gao et al., 2021)). For example, the top 10 high-frequency words and their corresponding frequencies are: ["the" (1,437,106), "of" (678,338), "in" (583,691), "and" (568,586), "a" (413,817), "to" (408,457), "was" (250,346), "is" (186,236), "on" (170,559), "as" (169,463)]. During the experiments, we select the top 300 most frequent words for the next step. Using corpus-level term frequency statistics provides a more robust and comprehensive representation of the semantic distribution within the corpus.
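Step (1) is plain corpus-level term-frequency counting; a minimal sketch (the two-sentence `corpus` is a toy stand-in for the Wiki training set):

```python
from collections import Counter

def top_k_words(sentences, k):
    # Count whitespace tokens over the whole corpus, lowercased.
    counts = Counter(w for s in sentences for w in s.lower().split())
    return [w for w, _ in counts.most_common(k)]

corpus = ["the cat sat on the mat", "the dog was in the house"]
top_words = top_k_words(corpus, 3)  # "the" is the most frequent word here
```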
Step (2). We construct the redundant exemplars with the top 300 frequent words. We adopt an open-source LLM, DeepSeek-V3<sup>3</sup>, for this task. During each call to the model, we randomly select 50 words from the 300 high-frequency word list and then use the chosen words to generate redundant exemplars<sup>4</sup>. The specific instruction for using the large language model is: "Please use the words provided to generate 5 sentences with different meanings. The requirement is that most of the words in the sentences should come from the provided vocabulary, and the other words in the sentences should also be as common as possible. The sentence length should not exceed 32<sup>5</sup>. The words provided are: "and, was, ..."
The reason for constructing "high-redundancy" sentences, rather than directly using high-frequency words, is that the goal of contrastive training is to learn effective sentence representations. Directly using high-frequency words leads to "redundant" representations that lack sentence-level semantic information. In the experimental section, we compare against directly using high-frequency words to generate redundant representations (please refer to Section 5.3).
1: He has worked for this prestigious company in the city for several years and is highly regarded for his professionalism.
2: The group of students from the university were discussing the film until late into the night, trying to decide if it was the best they had seen this year.
Table 1: Two redundant sentence examples.
Step (3). Repeat step (2) until a sufficient number of sentences is obtained. We constructed sentences to ensure that they covered the top 300 most frequent words, with each frequent word used at least twice. Finally, we had 64<sup>6</sup> sentences. Table 1 shows two examples of the constructed high-redundancy sentences. The underlined words are from the high-frequency word list.
The set of 64 sentences constructed in this section will serve as a candidate pool. We randomly select $k$ sentences $(0 < k \leq 64)$ from this pool for each training batch. These sentences are input to the encoder to obtain their embeddings $\bar{\mathbf{h}}_l$, $l \in \{1,2,\dots,k\}$. The mean encoding result $\bar{\mathbf{h}}$ of the $k$ sentences is used as the representative redundant representation for each batch during training.
# 3.2 Redundant Dimensions Identification
After obtaining the redundant representation $\overline{\mathbf{h}}$ , we designed a batch-wise redundant dimensions identification method. It helps the model to identify the redundant dimensions during training.
According to the fundamental principles of information theory, a system with higher uncertainty carries a greater amount of information (MacKay, 2003). Based on this theory, we compute the standard deviation of the sentence embeddings across each dimension for the data in the same batch. A smaller standard deviation indicates a lower variance in that dimension. Hence the dimension's contribution to distinguishing between different embeddings is minimal. The mean $u^{d}$ and standard deviation $\sigma^{d}$ of the $d$ -th dimension are computed with Equation (3):
$$
\sigma^{d} = \sqrt{\frac{\sum_{j=1}^{N}\left(h_{j}^{d} - u^{d}\right)^{2}}{N}}, \quad u^{d} = \frac{1}{N}\sum_{j=1}^{N} h_{j}^{d}, \tag{3}
$$
where $N$ denotes the number of training data points in the batch, $\mathbf{h}_j$ represents the $j$ -th sentence embedding in the batch, $j \in \{1, 2, \dots, N\}$ . $n$ is the hidden size of the sentence embedding. $h_j^d$ is the value of the $d$ -th dimension for the embedding of the $j$ -th sentence, $d \in \{1, 2, \dots, n\}$ .
Then, we set up a learnable threshold $c$ (an explicit signal) to decide which dimensions are redundant. Specifically, the dimensions with a standard deviation smaller than $c$ are defined as redundant for that training batch. We define $S$ as the set of selected redundant dimensions: if $\sigma^d - c < 0$, then $d \in S$; otherwise $d \notin S$.
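The selection rule can be sketched in a few lines (a minimal sketch: the threshold `c` is passed in as a fixed value here, whereas in the paper it is a learnable parameter):

```python
import numpy as np

def redundant_dims(h_batch, c):
    # Per-dimension standard deviation over the batch, as in Equation (3).
    sigma = h_batch.std(axis=0)
    # Dimensions whose variation falls below the threshold form the set S.
    return np.where(sigma < c)[0]
```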
# 3.3 Redundant Representations Reduction
After identifying the redundant dimensions, we propose a simple regularization method to reduce the redundant information in sentence embeddings. The regularized sentence embeddings can more accurately reflect the semantic distribution between sentences, which benefits the optimization objective of contrastive learning.
As shown in Equations (4) and (5), the model discards redundant information in sentence embeddings by subtracting the redundant vector during the contrastive fine-tuning process. This redundancy reduction strategy is inspired by the work of Shen et al. (2023), who used this method as a post-processing step to reduce redundant information in sentence embeddings. However, unlike their strategy of subtracting the overall vector, we only perform the subtraction on the high-redundancy dimensions (as selected in Section 3.2).
$$
L_{i} = -\log \frac{\exp\left(\mathrm{sim}\left(\widehat{\mathbf{h}}_{i}, \widehat{\mathbf{h}}_{i}^{+}\right)/\tau\right)}{\sum_{j=1}^{N} \exp\left(\mathrm{sim}\left(\widehat{\mathbf{h}}_{i}, \widehat{\mathbf{h}}_{j}^{+}\right)/\tau\right)}, \tag{4}
$$

$$
\begin{cases} \widehat{h}_{x}^{d} = h_{x}^{d} - \bar{h}^{d}, & \text{if } d \in S, \\ \widehat{h}_{x}^{d} = h_{x}^{d}, & \text{if } d \notin S, \end{cases} \tag{5}
$$

where $\mathbf{h}_i = (h_i^1, h_i^2, \ldots, h_i^n)$ represents the $i$-th sentence embedding in the batch, $\bar{\mathbf{h}} = (\bar{h}^1, \bar{h}^2, \ldots, \bar{h}^n)$ denotes the constructed redundancy vector, $n$ is the dimension of the vectors, and $\bar{h}^d$ is the value at the $d$-th dimension of $\bar{\mathbf{h}}$. $S$ is the set of selected redundant dimensions, and $\mathbf{h}_x$ stands for $\mathbf{h}_i$, $\mathbf{h}_j^+$, or $\mathbf{h}_i^+$. $\widehat{\mathbf{h}}$ is the reduced sentence representation used for contrastive learning.

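
Assuming the redundancy vector $\bar{\mathbf{h}}$ and the set $S$ have already been obtained (Sections 3.1–3.2), Equations (4) and (5) can be sketched as follows. This is a NumPy illustration with hypothetical helper names and stand-in values, not the authors' implementation:

```python
import numpy as np

def reduce_redundancy(H, h_bar, S):
    """Eq. (5): subtract the redundancy vector h_bar, but only on the
    identified redundant dimensions in S."""
    H_hat = H.copy()
    H_hat[:, S] = H[:, S] - h_bar[S]
    return H_hat

def info_nce(Ha, Hb, tau=0.05):
    """Eq. (4): InfoNCE loss over cosine similarities of the reduced
    embeddings; Ha[i] and Hb[i] form the i-th positive pair."""
    Ha = Ha / np.linalg.norm(Ha, axis=1, keepdims=True)
    Hb = Hb / np.linalg.norm(Hb, axis=1, keepdims=True)
    logits = (Ha @ Hb.T) / tau                   # N x N similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()

S = np.array([1])                        # dimension 1 deemed redundant
h_bar = np.array([0.5, 0.5, 0.5])        # redundancy vector (stand-in values)
H = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
H_hat = reduce_redundancy(H, h_bar, S)   # only column 1 is shifted by 0.5
loss = info_nce(np.eye(2), np.eye(2))    # ~0 when positives are orthogonal to negatives
```

In the method, the subtraction is applied to $\mathbf{h}_i$, $\mathbf{h}_i^+$, and $\mathbf{h}_j^+$ before the loss is computed, so all similarities in Equation (4) are measured on the reduced representations.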
# 4 Experimental setting
# 4.1 Datasets and Evaluation Metrics

We evaluate the performance of sentence embeddings on the standard semantic textual similarity (STS) task, which includes seven sub-tasks $^{8}$ . Each sub-task requires the model to output a similarity score for a given sentence pair, ranging from 0 to 5, where 0 indicates no semantic relevance and 5 indicates identical semantics. The evaluation metric of the STS task is the Spearman correlation between the predicted scores and the human-annotated scores; we use the open-source code from (Gao et al., 2021) to compute the models' scores. The STS tasks are difficult not only for SRL models but also for state-of-the-art LLMs: previous work (Wang et al., 2024a) shows that ChatGPT $^{9}$ equipped with in-context learning (Dong et al., 2023) obtains only a 76.19 Spearman correlation on this task, lower than many unsupervised methods based on BERT or RoBERTa (please refer to Table 2). Experiments on more backbone models and more downstream tasks are shown in Appendices D and E. All results are averaged over five runs.

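
The metric itself is simple to reproduce; a small sketch (no-ties case, with made-up scores) of how predicted and gold scores are compared:

```python
import numpy as np

def spearman(pred, gold):
    """Spearman correlation = Pearson correlation of the rank vectors
    (this simple form assumes no tied scores)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return np.corrcoef(rank(pred), rank(gold))[0, 1]

pred = [0.9, 0.1, 0.5, 0.7]    # model cosine similarities (made up)
gold = [4.8, 0.3, 2.5, 3.9]    # human 0-5 annotations (made up)
rho = spearman(pred, gold)     # 1.0: the two rankings agree perfectly
```

Because only ranks matter, the metric is insensitive to the absolute scale of the model's similarity scores, which is why cosine similarities in $[-1, 1]$ can be compared directly against 0–5 annotations.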
Alignment and uniformity are two metrics for evaluating the quality of the embedding space. Alignment measures the distance between positive pairs: a smaller alignment value indicates that semantically similar sentences are closer together in the vector space. Uniformity evaluates the distribution of embeddings in the semantic space: a smaller uniformity value indicates a more uniform distribution of the vectors. Following (Reimers and Gurevych, 2019) and (Gao et al., 2021), who argue that the primary objective of sentence embeddings is to cluster semantically similar sentences, we take alignment as the main metric. We use the open-source code from (Wang and Isola, 2020) to compute the alignment and uniformity losses: the alignment loss is computed on sentence pairs with similarity scores greater than 4 from the STS-B test set, and the uniformity loss is computed on the entire STS-B test set.

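
With $\ell_2$-normalized embeddings, these two losses follow the standard forms of Wang and Isola (2020); the sketch below uses $\alpha = 2$ and $t = 2$ (variable names are ours):

```python
import numpy as np

def align_loss(x, y):
    """Alignment: mean squared L2 distance between positive pairs (alpha = 2).
    x[i] and y[i] are the normalized embeddings of the i-th positive pair."""
    return ((x - y) ** 2).sum(axis=1).mean()

def uniform_loss(x, t=2.0):
    """Uniformity: log of the mean Gaussian potential over all embedding pairs."""
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)
    i, j = np.triu_indices(len(x), k=1)   # each unordered pair counted once
    return np.log(np.exp(-t * d2[i, j]).mean())

e = np.eye(3)                  # three orthogonal unit embeddings
a = align_loss(e, e)           # 0.0: identical positive pairs
u = uniform_loss(e)            # log(exp(-4)) = -4.0: all pairwise d^2 = 2
```

Lower is better for both losses; as stated above, alignment is the one we report as the main result.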
# 4.2 Baselines
We compare with the following baselines.
Unsupervised SRL methods: (1) SimCSE (Gao et al., 2021) utilizes dropout for data augmentation in contrastive learning; (2) InforMin-CL (Chen et al., 2022) uses an additional loss

<table><tr><td>Unsupervised Model</td><td>STS12</td><td>STS13</td><td>STS14</td><td>STS15</td><td>STS16</td><td>STS-B</td><td>SICK-R</td><td>Avg(Diff.)</td></tr><tr><td>SimCSE-BERTbase*</td><td>67.00</td><td>81.87</td><td>73.20</td><td>79.02</td><td>78.30</td><td>76.26</td><td>70.82</td><td>75.21</td></tr><tr><td>SimCSE-BERTbase*+ 3R</td><td>70.51</td><td>83.46</td><td>75.89</td><td>82.06</td><td>79.18</td><td>78.69</td><td>72.84</td><td>77.52(+2.31)</td></tr><tr><td>InforMin-CL-BERTbase*</td><td>66.64</td><td>82.10</td><td>73.32</td><td>78.15</td><td>77.33</td><td>75.70</td><td>71.30</td><td>74.93</td></tr><tr><td>InforMin-CL-BERTbase*+ 3R</td><td>71.52</td><td>81.41</td><td>75.11</td><td>81.84</td><td>78.19</td><td>79.25</td><td>73.34</td><td>77.24(+2.31)</td></tr><tr><td>RapAL-BERTbase</td><td>69.33</td><td>78.93</td><td>73.95</td><td>80.01</td><td>79.29</td><td>76.00</td><td>70.51</td><td>75.43</td></tr><tr><td>SSCL-SimCSEbase*</td><td>70.09</td><td>81.52</td><td>74.61</td><td>81.64</td><td>76.71</td><td>77.14</td><td>69.93</td><td>76.10</td></tr><tr><td>SSCL-SimCSEbase*+ 3R</td><td>71.79</td><td>83.62</td><td>76.51</td><td>83.54</td><td>78.61</td><td>79.54</td><td>71.83</td><td>77.60(+1.50)</td></tr><tr><td>SimCSE-Robertabase*</td><td>69.18</td><td>81.71</td><td>72.50</td><td>81.10</td><td>80.31</td><td>79.68</td><td>69.99</td><td>76.35</td></tr><tr><td>SimCSE-Robertabase*+ 3R</td><td>71.86</td><td>82.60</td><td>74.30</td><td>81.43</td><td>81.30</td><td>81.41</td><td>69.90</td><td>77.54(+1.19)</td></tr><tr><td>InforMin-CL-Robertabase*</td><td>66.76</td><td>80.58</td><td>71.38</td><td>81.21</td><td>78.60</td><td>78.34</td><td>66.05</td><td>74.70</td></tr><tr><td>InforMin-CL-Robertabase*+ 
3R</td><td>67.79</td><td>82.81</td><td>74.33</td><td>82.99</td><td>79.53</td><td>81.71</td><td>71.89</td><td>77.29(+2.59)</td></tr><tr><td>LLM2Vec-LLaMA-2-7B*</td><td>70.20</td><td>81.76</td><td>73.83</td><td>81.37</td><td>78.32</td><td>76.75</td><td>70.79</td><td>76.15</td></tr><tr><td>LLM2Vec-LLaMA-2-7B*+ 3R</td><td>70.81</td><td>83.46</td><td>75.92</td><td>82.01</td><td>78.99</td><td>78.63</td><td>72.91</td><td>77.53(+1.38)</td></tr><tr><td>Supervised Model</td><td>STS12</td><td>STS13</td><td>STS14</td><td>STS15</td><td>STS16</td><td>STS-B</td><td>SICK-R</td><td>Avg(Diff.)</td></tr><tr><td>MultiCSRE-BERTbase*</td><td>72.48</td><td>82.75</td><td>75.94</td><td>82.51</td><td>80.07</td><td>81.89</td><td>77.38</td><td>79.00</td></tr><tr><td>MultiCSRE-BERTbase*+ 3R</td><td>73.05</td><td>81.14</td><td>76.23</td><td>83.32</td><td>80.55</td><td>82.43</td><td>77.82</td><td>79.56(+0.56)</td></tr><tr><td>SimCSE-BERTbase*</td><td>77.11</td><td>80.82</td><td>78.42</td><td>85.03</td><td>80.40</td><td>82.69</td><td>78.93</td><td>80.50</td></tr><tr><td>SimCSE-BERTbase*+ 3R</td><td>76.13</td><td>85.00</td><td>80.83</td><td>86.06</td><td>81.37</td><td>84.17</td><td>80.16</td><td>81.96(+1.46)</td></tr><tr><td>Claif-SimCSE-BERTbase*</td><td>76.89</td><td>79.59</td><td>79.06</td><td>85.93</td><td>81.01</td><td>83.68</td><td>79.08</td><td>80.75</td></tr><tr><td>Claif-SimCSE-BERTbase*+ 3R</td><td>76.06</td><td>84.76</td><td>80.99</td><td>86.10</td><td>81.41</td><td>81.81</td><td>79.60</td><td>81.81(+1.06)</td></tr><tr><td>SynCSE-scratch-BERTbase*</td><td>74.34</td><td>84.37</td><td>78.33</td><td>83.73</td><td>80.22</td><td>81.81</td><td>76.00</td><td>79.83</td></tr><tr><td>SynCSE-scratch-BERT-base*+ 3R</td><td>76.65</td><td>83.26</td><td>79.52</td><td>84.81</td><td>81.02</td><td>83.82</td><td>79.70</td><td>81.27(+1.44)</td></tr></table>
Table 2: Experimental results on STS tasks. Results with * are reproduced by us. The underlined scores are the best on each sub-task of each group. "Diff." is the improvement after applying the 3R method to the baseline.

function to encode less non-essential information into sentence embeddings; (3) RapAL (Shen et al., 2023) proposes a simple post-processing method to remove redundant information from sentence embeddings; (4) SSCL (Chen et al., 2023) reduces redundancy by training the model away from similar intermediate-layer representations; (5) LLM2Vec (BehnamGhader et al., 2024) enables bidirectional attention in decoder-only LLMs such as LLaMA-2-7B (Touvron et al., 2023) and then uses the LLMs for unsupervised SRL.

Supervised SRL methods: (1) SimCSE (Gao et al., 2021); (2) Claif (Cheng et al., 2023b) uses an LLM to evaluate the quality of training data for supervised SRL; (3) SynCSE-scratch (Zhang et al., 2023) uses an LLM to construct training samples for supervised SRL; (4) MultiCSR (Wang et al., 2024b) uses an LLM in multiple stages to generate and select high-quality sentences.

# 4.3 Training Details
The experiments were conducted on an RTX 4090 GPU. We followed the hyper-parameter settings of previous works (Gao et al., 2021; Cheng et al., 2023b; Chen et al., 2023; BehnamGhader et al., 2024): training the unsupervised models with randomly sampled sentences from Wiki data, training the supervised models with the MNLI and SNLI datasets, and using the same pre-trained checkpoints of BERT (uncased) and RoBERTa (cased) for the different methods.

# 5 Experimental Results and Analysis
In this section, we aim to answer the following questions: 1) Does the 3R method outperform previous methods that aim at reducing redundant information in contrastive SRL? (Section 5.1) 2) Does the 3R method work together with previous unsupervised methods (Section 5.1) and supervised methods (Section 5.2)? 3) What are the advantages of the 3R method? (Section 5.2) 4) How does each module contribute to the 3R method, i.e., where do the gains come from? (Section 5.3) 5) What can we learn from the case study? (Section 5.4) 6) How is the hyper-parameter $k$ decided? (Section 5.5) 7) How is the improvement reflected in the sentence representation space? (Appendix A)

# 5.1 Analysis of Unsupervised Methods
The upper half of Table 2 shows the experimental results with unsupervised methods. Firstly, SimCSE+3R outperforms the models that also aim at reducing redundant information (InforMin-CL, RapAL, and SSCL), indicating that reducing redundancy from a fine-grained, dimension-level perspective may better mitigate the redundancy problem. Secondly, the proposed 3R method achieves performance improvements on all BERT/RoBERTa/LLaMA models, showing good generality across base models. Thirdly, InforMin-CL+3R outperforms InforMin-CL by 2.31/2.59 points on BERT/RoBERTa, respectively, and SSCL-SimCSE+3R outperforms SSCL-SimCSE by 1.50. These results demonstrate that our method can be combined with other redundancy reduction methods to further improve performance.

<table><tr><td>Unsupervised Model</td><td>STS12</td><td>STS13</td><td>STS14</td><td>STS15</td><td>STS16</td><td>STSB</td><td>SICK-R</td><td>Avg</td></tr><tr><td>SimCSE-BERTbase</td><td>67.00</td><td>81.87</td><td>73.20</td><td>79.02</td><td>78.30</td><td>76.26</td><td>70.82</td><td>75.21</td></tr><tr><td>SimCSE-BERTbase(dynamic mask)</td><td>68.12</td><td>82.53</td><td>74.39</td><td>80.73</td><td>77.77</td><td>77.41</td><td>72.43</td><td>76.20</td></tr><tr><td>SimCSE-BERTbase(overall subtraction)</td><td>72.09</td><td>82.71</td><td>74.94</td><td>80.96</td><td>78.23</td><td>78.07</td><td>70.88</td><td>76.84</td></tr><tr><td>SimCSE-BERTbase(token subtraction)</td><td>71.38</td><td>82.46</td><td>75.16</td><td>81.30</td><td>77.65</td><td>78.19</td><td>71.62</td><td>76.82</td></tr><tr><td>SimCSE-BERTbase(static identification)</td><td>69.31</td><td>81.85</td><td>74.88</td><td>80.78</td><td>78.30</td><td>77.31</td><td>71.60</td><td>76.29</td></tr><tr><td>SimCSE-BERTbase(3R)</td><td>70.51</td><td>83.46</td><td>75.89</td><td>82.06</td><td>79.18</td><td>78.69</td><td>72.84</td><td>77.52</td></tr></table>

Table 3: Different settings of the proposed 3R method. The underlined scores are the best on each sub-task.

# 5.2 Analysis of Supervised Methods

The bottom half of Table 2 shows the experimental results on the STS tasks with supervised methods. The proposed 3R method achieves performance improvements on SimCSE, Claif-SimCSE, and SynCSE-scratch. SimCSE uses manually annotated training data, Claif-SimCSE uses an LLM to evaluate the quality of training data, and SynCSE-scratch leverages an LLM to construct training data. The effect of the 3R method is less pronounced under supervised conditions than under unsupervised ones. One possible reason is that in supervised training batches, the semantic relationship between a sample and its hard negative pair is more complex; the model can already learn better sentence representations through hard negative examples, so the improvement that reducing redundant information can bring is limited.

To sum up, the results in Table 2 show the good generality of the 3R method: 1) redundant representations are an issue in both unsupervised and supervised training paradigms, and the 3R method can mitigate the problem in both; 2) the 3R method works for different data scenarios, whether automatically constructed or human-annotated; 3) the 3R method can be combined with different redundancy reduction methods and different data augmentation methods.

# 5.3 Different Settings of 3R
Table 3 shows the experiments with different settings of 3R, which explain why 3R works and where the gains come from.
The "dynamic mask" setting does not perform Equation (5). Instead of subtracting the identified redundant dimensions of the redundancy vector $\bar{\mathbf{h}}$, it directly sets the identified redundant dimensions of $\mathbf{h}_x$ to 0. Hence, this setting can be seen as removing the Redundant Representations Construction module of the 3R method. It is better than the baseline model but worse than the original 3R, indicating that simply removing the identified redundant dimensions may also eliminate useful information; a more refined reduction process such as 3R is more appropriate.
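
This ablation can be sketched as zeroing the flagged dimensions outright (hypothetical names; contrast with the subtraction in Equation (5), which keeps the non-redundant part of each dimension's value):

```python
import numpy as np

def dynamic_mask(H, S):
    """Ablation: set the identified redundant dimensions to zero,
    discarding whatever signal those dimensions still carried."""
    H_masked = H.copy()
    H_masked[:, S] = 0.0
    return H_masked

H = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
M = dynamic_mask(H, np.array([1]))   # column 1 becomes all zeros
```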
The "overall subtraction" setting subtracts the constructed redundancy vector $\bar{\mathbf{h}}$ from all dimensions instead of only the identified ones, which removes the Redundant Dimensions Identification module. The results are better than the baseline model, showing that the constructed redundancy vector captures the redundant information in the training data, and that removing this redundancy helps the model learn better sentence representations. On the other hand, this setting is inferior to 3R, which means the learned threshold $c$ helps identify which dimensions are most worth reducing.

The "token subtraction" setting uses the 300 high-frequency words to obtain the representative redundant embedding. That is, we do not construct the redundant sentence pool; we directly use the average embedding of the 300 high-frequency words as the redundant embedding $\bar{\mathbf{h}}$, and then use the learnable $c$ to decide which dimensions are redundant and should be reduced with Equation (5). This setting still improves the SimCSE model's performance, which means the $\bar{\mathbf{h}}$ derived from high-frequency words can also indicate where the redundant information is. However, it does not match 3R, which means high-frequency words alone cannot provide enough sentence-level semantic information for contrastive SRL.

The "static identification" setting selects redundant dimensions based on the 300 high-frequency words. Specifically, instead of using the sentence embeddings in each batch to compute the standard deviation of each dimension, we use the embeddings of the 300 high-frequency words, and then use the learnable $c$ to decide which dimensions are redundant and should be reduced with Equation (5). This means the redundant dimensions are the same across batches. This setting improves the SimCSE model's performance, which means high-frequency words can indicate where the redundant information is exhibited. However, it is not as good as 3R, which means dynamically determining redundant dimensions in each batch better guides the model to learn the subtle semantic differences among batches.

1. Explosion at Venezuela refinery kills at least 39.
2. Venezuela mourns oil refinery blast deaths.

Human-annotated similarity score: 0.56
Similarity score from Claif / Claif+3R: 0.75 / 0.65

Table 4: Similarity score for a random case.

Figure 3: Token embedding similarity heat maps.

# 5.4 Case Study
We randomly select a sentence pair from the STS tasks for a similarity study. As shown in Table 4, the two sentences have a manually annotated score of 0.56 (for ease of comparison, we rescale the 0-5 annotated scores to 0-1). The similarity scores from Claif and Claif+3R are 0.75 and 0.65, respectively: the 3R method helps the contrastive SRL model give a score closer to the human-annotated one.

We also show the token similarities of the second sentence "Venezuela mourns oil refinery blast deaths" in Figure 3. On one hand, the 3R method makes the distinction between token embeddings with different parts of speech and meanings more pronounced. For example, the words "mourns" and "refinery" have a similarity score of 0.76 in Claif, while the score is 0.69 in Claif+3R. On the other hand, the 3R method retains key semantic information while reducing the impact of redundant information. For instance, the words "Venezuela" and "blast" have a similarity score of 0.88 in both models. Although the similarity between dissimilar words is reduced, there is still room for improvement; further research on reducing unexpected redundant information is needed.

Figure 4: 3R method performance with different $k$.

# 5.5 The Hyper-parameter $k$
There is a hyper-parameter $k$ in the method (Section 3.1): the number of redundant exemplars randomly chosen from the redundant sentence pool for each training batch. Figure 4 shows the average results on the STS tasks with different $k$. The experiments are conducted with SimCSE-BERT $_{\text{base}}$ , which has an average STS performance of 75.21. The performance of the model gradually increases as $k$ grows. It surpasses SimCSE when $k$ is 2, which means multiple redundant sentences are needed to obtain the corpus-level semantics that guide the 3R method. Performance peaks when $k$ is 6 and then stabilizes as $k$ continues to increase. When $k$ is 6, the learned variable $c$ (Section 3.2), which determines whether a dimension is reduced with Equation (5), converges to 0.273. More experiments on $c$ are in Appendix C.

# 6 Conclusion
This study optimizes SRL by automatically detecting and reducing redundant information at the dimension level. The proposed method helps models adjust the information distribution among dimensions and learn better sentence representations. Extensive experiments demonstrate the effectiveness and generality of the method, and we present a systematic analysis of why it works. Future work includes: 1) investigating more delicate control of the reduction process (for example, dividing redundant dimensions into multiple redundancy levels); 2) testing the 3R method on more downstream tasks that apply contrastive learning.

# Limitations
Firstly, our method requires training the parameters of the SRL model, so applying the proposed 3R method to larger models (e.g., more than 7B parameters) is expensive. Hence, the 3R method mainly benefits smaller models (e.g., under 1B parameters), which still show great application value in specific tasks, domains, and scenarios. Secondly, the alignment and uniformity analysis shows that the uniformity score can still improve, which indicates the 3R method could be further refined to yield a better representation space.

# Acknowledgments
This research was supported by the National Natural Science Foundation of China (No. U21B2027, 62266027, 62266028), the Yunnan Provincial Major Science and Technology Special Plan Projects (Grant Nos. 202402AG050007, 202302AD080003, 202303AP140008, 202502AD080016), the General Projects of Basic Research in Yunnan Province (Grant No. 202301AS070047, 202301AT070471, 202201BE070001-021).
# References
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel M. Cer, Mona T. Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability. In SemEval@NAACL-HLT, pages 252-263. The Association for Computer Linguistics.
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel M. Cer, Mona T. Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. Semeval-2014 task 10: Multilingual semantic textual similarity. In SemEval@COLING, pages 81-91. The Association for Computer Linguistics.
Eneko Agirre, Carmen Banea, Daniel M. Cer, Mona T. Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In SemEval@NAACL-HLT, pages 497-511. The Association for Computer Linguistics.
Eneko Agirre, Daniel M. Cer, Mona T. Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In SemEval@NAACL-HLT, pages 385-393. The Association for Computer Linguistics.
Eneko Agirre, Daniel M. Cer, Mona T. Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. *sem 2013 shared task: Semantic textual similarity. In *SEM@NAACL-HLT, pages 32-43. Association for Computational Linguistics.
Xiaoyi Bao, Xiaotong Jiang, Zhongqing Wang, Yue Zhang, and Guodong Zhou. 2023. Opinion tree parsing for aspect-based sentiment analysis. In ACL (Findings), pages 7971-7984.
Parishad BehnamGhader, Vaibhav Adlakha, Marius Mosbach, Dzmitry Bahdanau, Nicolas Chapados, and Siva Reddy. 2024. Llm2vec: Large language models are secretly powerful text encoders. CoRR, abs/2404.05961.
Daniel M. Cer, Mona T. Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In SemEval@ACL, pages 1-14. Association for Computational Linguistics.
Nuo Chen, Linjun Shou, Jian Pei, Ming Gong, Bowen Cao, Jianhui Chang, Jia Li, and Daxin Jiang. 2023. Alleviating over-smoothing for unsupervised sentence representation. In ACL (1), pages 3552-3566.
Shaobin Chen, Jie Zhou, Yuling Sun, and Liang He. 2022. An information minimization based contrastive learning model for unsupervised sentence embeddings learning. In COLING, pages 4821-4831. International Committee on Computational Linguistics.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. In ICML, volume 119 of Proceedings of Machine Learning Research, pages 1597-1607. PMLR.
Qinyuan Cheng, Xiaogui Yang, Tianxiang Sun, Linyang Li, and Xipeng Qiu. 2023a. Improving contrastive learning of sentence embeddings from AI feedback. In ACL (Findings), pages 11122-11138.
Qinyuan Cheng, Xiaogui Yang, Tianxiang Sun, Linyang Li, and Xipeng Qiu. 2023b. Improving contrastive learning of sentence embeddings from AI feedback. In ACL (Findings), pages 11122-11138.
Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljacic, Shang-Wen Li, Scott Yih, Yoon Kim, and James R. Glass. 2022. Diffcse: Difference-based contrastive learning for sentence embeddings. In NAACL-HLT, pages 4207-4218. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT* (1), pages 4171–4186. Association for Computational Linguistics.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In IWP@IJCNLP. Asian Federation of Natural Language Processing.
Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, and Zhifang Sui. 2023. A survey for in-context learning. CoRR, abs/2301.00234.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. In EMNLP (1), pages 6894-6910. Association for Computational Linguistics.
Hongliang He, Junlei Zhang, Zhenzhong Lan, and Yue Zhang. 2023. Instance smoothed contrastive learning for unsupervised sentence embedding. In AAAI, pages 12863-12871. AAAI Press.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In KDD, pages 168-177. ACM.
Ting Jiang, Jian Jiao, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Denvy Deng, and Qi Zhang. 2022. Promptbert: Improving BERT sentence embeddings with prompts. In EMNLP, pages 8826-8837. Association for Computational Linguistics.
Hyunjae Lee. 2023. D2CSE: difference-aware deep continuous prompts for contrastive sentence embeddings. CoRR, abs/2304.08991.
Xianming Li and Jing Li. 2024. Bellm: Backward dependency enhanced large language model for sentence embeddings. In *NAACL-HLT*, pages 792-804. Association for Computational Linguistics.
Yihong Liu, Chunlan Ma, Haotian Ye, and Hinrich Schütze. 2024. Translico: A contrastive learning framework to address the script barrier in multilingual pretrained language models. In ACL (1), pages 2476-2499.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Yuanxing Liu, Jiahuan Pei, Weinan Zhang, Ming Li, Wanxiang Che, and Maarten de Rijke. 2025. Augmentation with neighboring information for conversational recommendation. ACM Trans. Inf. Syst., 43(3):62:1-62:49.
Longxuan Ma, Changxin Ke, Shuhan Zhou, Churui Sun, Wei-Nan Zhang, and Ting Liu. 2024. A self-verified method for exploring simile knowledge from pre-trained language models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC/COLING 2024, 20-25 May, 2024, Torino, Italy, pages 1563-1576. ELRA and ICCL.
Longxuan Ma, Ziyu Zhuang, Weinan Zhang, Mingda Li, and Ting Liu. 2022. Self-eval: Self-supervised fine-grained dialogue evaluation. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 485-495. International Committee on Computational Linguistics.
David J. C. MacKay. 2003. Information theory, inference, and learning algorithms. Cambridge University Press.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In LREC, pages 216-223. European Language Resources Association (ELRA).
Cong-Duy Nguyen, Thong Nguyen, Xiaobao Wu, and Anh Tuan Luu. 2024. KDMCSE: knowledge distillation multimodal sentence embeddings with adaptive angular margin contrastive learning. In *NAACL-HLT*, pages 733-749. Association for Computational Linguistics.
Fangwei Ou and Jinan Xu. 2024. SKICSE: sentence knowable information prompted by llms improves contrastive sentence embeddings. In *NAACL* (Short Papers), pages 141-146. Association for Computational Linguistics.
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. CoRR, cs.CL/0409058.
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, pages 115-124. The Association for Computer Linguistics.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In EMNLP/IJCNLP (1), pages 3980-3990. Association for Computational Linguistics.

Lingfeng Shen, Haiyun Jiang, Lemao Liu, and Shuming Shi. 2023. A simple and plug-and-play method for unsupervised sentence representation enhancement. CoRR, abs/2305.07824.
Han Shi, Jiahui Gao, Hang Xu, Xiaodan Liang, Zhenguo Li, Lingpeng Kong, Stephen M. S. Lee, and James T. Kwok. 2022. Revisiting over-smoothing in BERT from the perspective of graph. In ICLR. OpenReview.net.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pages 1631-1642. ACL.
Chenyu Sun, Hangwei Qian, and Chunyan Miao. 2022. CCLF: A contrastive-curiosity-driven learning framework for sample-efficient reinforcement learning. In *IJCAI*, pages 3444–3450. ijcai.org.
Haochen Tan, Wei Shao, Han Wu, Ke Yang, and Linqi Song. 2022. A sentence is worth 128 pseudo tokens: A semantic-aware contrastive learning framework for sentence embeddings. In ACL (Findings), pages 246-256.
Nandan Thakur, Nils Reimers, Andreas Rückle, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In NeurIPS Datasets and Benchmarks.
Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. 2020. What makes for good views for contrastive learning? In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288.
Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In SIGIR, pages 200-207. ACM.
Hao Wang and Yong Dou. 2023. SNCSE: contrastive learning for unsupervised sentence embedding with soft negative samples. In ICIC (4), volume 14089 of Lecture Notes in Computer Science, pages 419-431. Springer.
Huiming Wang, Zhaodonghui Li, Liying Cheng, De Wen Soh, and Lidong Bing. 2024a. Large language models can contrastively refine their generation for better sentence representation learning. In *NAACL-HLT*, pages 7874-7891. Association for Computational Linguistics.
Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In ICML, volume 119 of Proceedings of Machine Learning Research, pages 9929-9939. PMLR.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Lang. Resour. Evaluation, 39(2-3):165-210.
Xing Wu, Chaochen Gao, Liangjun Zang, Jizhong Han, Zhongyuan Wang, and Songlin Hu. 2022. ESimCSE: Enhanced sample building method for contrastive learning of unsupervised sentence embedding. In COLING, pages 3898-3907. International Committee on Computational Linguistics.
Bo Xu, Shouang Wei, Luyi Cheng, Shizhou Huang, Hui Song, Ming Du, and Hongya Wang. 2023a. HSimCSE: Improving contrastive learning of unsupervised sentence representation with adversarial hard positives and dual hard negatives. In IJCNN, pages 1-8. IEEE.
Jiahao Xu, Wei Shao, Lihui Chen, and Lemao Liu. 2023b. SimCSE++: Improving contrastive learning for sentence embeddings from two perspectives. In EMNLP, pages 12028-12040. Association for Computational Linguistics.
Jiahao Xu, Charlie Soh Zhanyi, Liwen Xu, and Lihui Chen. 2024. BlendCSE: Blend contrastive learnings for sentence embeddings with rich semantics and transferability. Expert Syst. Appl., 238(Part E):121909.
Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, and Weiran Xu. 2021. ConSERT: A contrastive framework for self-supervised sentence representation transfer. CoRR, abs/2105.11741.
Junlei Zhang, Zhenzhong Lan, and Junxian He. 2023. Contrastive learning of sentence embeddings from scratch. In EMNLP, pages 3916-3932. Association for Computational Linguistics.
Kun Zhou, Beichen Zhang, Wayne Xin Zhao, and Ji-Rong Wen. 2022. Debiased contrastive learning of unsupervised sentence representations. In ACL (1), pages 6120-6130.
Dongsheng Zhu, Zhenyu Mao, Jinghui Lu, Rui Zhao, and Fei Tan. 2024. SDA: simple discrete augmentation for contrastive sentence representation learning. In LREC/COLING, pages 14459-14471. ELRA/ICCL.
Figure 5: Align-Uniform Coordinate Plot.
Wenjie Zhuo, Yifan Sun, Xiaohan Wang, Linchao Zhu, and Yi Yang. 2023. WhitenedCSE: Whitening-based contrastive learning of sentence embeddings. In ACL (1), pages 12135-12148.
# A Alignment and Uniformity Analysis
In this section, we perform an alignment and uniformity analysis of sentence embeddings, shown in Figure 5. The alignment metric measures the distance between positive pairs. It drops after applying the 3R method to both the unsupervised and supervised models. The reason is that after reducing redundant information, the similarity of positive pairs decreases, which encourages the model to represent more granular semantic information in the positive-pair embeddings. Thus, after redundancy reduction, semantically related sentences cluster more tightly in the embedding space. As introduced in Section 4.1, a lower alignment score indicates better model performance. Hence, the results show that the 3R method helps to learn better sentence representations.
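As a concrete reference for these two metrics, the alignment and uniformity losses of Wang and Isola (2020) can be sketched in a few lines of numpy. The random vectors below are stand-ins for real sentence embeddings, not outputs of the 3R models:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    # Project embeddings onto the unit hypersphere.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def alignment(x, y, alpha=2):
    # Mean distance between positive pairs; lower is better.
    return np.mean(np.linalg.norm(x - y, axis=1) ** alpha)

def uniformity(x, t=2):
    # Log of the mean pairwise Gaussian potential; lower is better.
    sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(x), k=1)
    return np.log(np.mean(np.exp(-t * sq[iu])))

x = normalize(rng.standard_normal((128, 64)))            # "anchor" embeddings
y = normalize(x + 0.1 * rng.standard_normal((128, 64)))  # noisy positives
print(alignment(x, y), uniformity(x))
```

A model that pulls positives together lowers `alignment`, while spreading all embeddings over the sphere lowers `uniformity`; the trade-off discussed in this appendix is between these two quantities.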
The uniformity metric evaluates how the embeddings are distributed in the semantic space. The models trained with 3R show a decrease in the uniformity metric compared to the baseline models. The reason is that the redundancy-reduction operation removes some of the encodings on certain dimensions of the sentence embeddings, compressing the embedding space. The reduced degrees of freedom in the compressed embedding space cause sentence embeddings to cluster more easily. The results therefore show a trade-off between alignment and uniformity. However, as pointed out by previous research (Reimers and Gurevych, 2019; Gao et al., 2021), the primary objective of sentence embeddings is to cluster semantically similar sentences.
<table><tr><td>Index</td><td>100</td><td>101</td><td>102</td><td>103</td><td>104</td></tr><tr><td>Word</td><td>before</td><td>since</td><td>season</td><td>second</td><td>through</td></tr><tr><td>Frequency</td><td>14143</td><td>14053</td><td>14020</td><td>13874</td><td>13788</td></tr><tr><td>Changing rate</td><td>0.0048</td><td>0.0064</td><td>0.0024</td><td>0.0105</td><td>0.0062</td></tr><tr><td>Index</td><td>197</td><td>198</td><td>199</td><td>200</td><td>201</td></tr><tr><td>Word</td><td>another</td><td>former</td><td>members</td><td>York</td><td>any</td></tr><tr><td>Frequency</td><td>8143</td><td>8140</td><td>8060</td><td>8025</td><td>7978</td></tr><tr><td>Changing rate</td><td>0.0007</td><td>0.0004</td><td>0.0099</td><td>0.0044</td><td>0.0059</td></tr><tr><td>Index</td><td>297</td><td>298</td><td>299</td><td>300</td><td>301</td></tr><tr><td>Word</td><td>head</td><td>near</td><td>King</td><td>Road</td><td>off</td></tr><tr><td>Frequency</td><td>5811</td><td>5805</td><td>5795</td><td>5765</td><td>5761</td></tr><tr><td>Changing rate</td><td>0.0036</td><td>0.0010</td><td>0.0017</td><td>0.0052</td><td>0.0007</td></tr></table>
Table 5: Frequency statistics for choosing the top 300 frequency words.
Hence, the results of the alignment-uniformity metric demonstrate the effectiveness of the 3R method in learning better sentence representations.
# B Experiments for choosing the top 300 high-frequency words
Table 5 shows the experiments for choosing the top 300 high-frequency words. First, to have enough words to construct the required high-frequency sentence set, we empirically did not consider word lists smaller than 100. Then, we calculated the frequency changing rate as $(T_{n} - T_{n + 1}) / T_{n + 1}$, where $T_{n}$ denotes the frequency of the $n$-th high-frequency word. After this calculation, we found that the changing rate exhibits local peaks at certain positions, for example at 103, 176, 199, 300, 393, and 472; some of the corresponding statistics are shown in Table 5.
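As a sanity check, the changing rate can be recomputed from the rank-297 to rank-301 frequencies in Table 5. The `changing_rate` helper below is ours, but the computed values reproduce the table's rates for "near", "King", "Road", and "off":

```python
def changing_rate(freqs):
    # (T_n - T_{n+1}) / T_{n+1} for consecutive frequencies T_n.
    return [(a - b) / b for a, b in zip(freqs, freqs[1:])]

# Frequencies of ranks 297-301 from Table 5: head, near, King, Road, off.
freqs = [5811, 5805, 5795, 5765, 5761]
rates = [round(r, 4) for r in changing_rate(freqs)]
print(rates)  # matches Table 5's rates for "near", "King", "Road", "off"
```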
Based on these statistics, we conduct experiments on the local peaks to choose the size of the high-frequency word list. The experiments on unsupervised models are shown in Table 6.
The results show that the 300 setting is the best; these statistics and experiments are the rationale behind the value of 300. It is worth noting that all the settings (103, 176, 199, 300, 393, 472) outperform the baseline (75.21), which demonstrates that even if the optimal parameter of 300 is not selected, the proposed method still works.
# C Experiments for the learned parameter $c$
There is a learned parameter $c$ in Section 3.2 that is randomly initialized. We conducted experiments with $c$ set to different initial values (1.0, 0.5, 0.1, and 0.0, where 0.0 denotes a very small number such as 0.00001); the values of $c$ over one and a half epochs, recorded every 0.1 epoch, are shown in Table 7.
<table><tr><td>Model</td><td>STS12</td><td>STS13</td><td>STS14</td><td>STS15</td><td>STS16</td><td>STS-B</td><td>SICK-R</td><td>Avg</td></tr><tr><td>3R(length=103)</td><td>68.80</td><td>82.21</td><td>74.60</td><td>79.87</td><td>78.51</td><td>77.83</td><td>73.18</td><td>76.43</td></tr><tr><td>3R(length=176)</td><td>69.65</td><td>82.54</td><td>74.81</td><td>81.32</td><td>78.00</td><td>77.37</td><td>72.27</td><td>76.57</td></tr><tr><td>3R(length=199)</td><td>69.89</td><td>82.78</td><td>75.05</td><td>81.56</td><td>78.24</td><td>77.61</td><td>72.51</td><td>76.81</td></tr><tr><td>3R(length=300)</td><td>70.51</td><td>83.46</td><td>75.89</td><td>82.06</td><td>79.18</td><td>78.69</td><td>72.84</td><td>77.52</td></tr><tr><td>3R(length=393)</td><td>70.03</td><td>82.92</td><td>75.19</td><td>81.70</td><td>78.38</td><td>77.75</td><td>72.65</td><td>76.95</td></tr><tr><td>3R(length=472)</td><td>69.46</td><td>81.98</td><td>73.61</td><td>81.36</td><td>78.90</td><td>76.88</td><td>70.55</td><td>76.11</td></tr></table>
Table 6: Experiments for choosing the top 300 frequency words. The backbone model is SimCSE-BERT-base. The "length=" means the number of the top high-frequency words.
<table><tr><td>Epoch</td><td>0.1</td><td>0.2</td><td>0.3</td><td>0.4</td><td>0.5</td></tr><tr><td>c-initial=1.000</td><td>0.865</td><td>0.742</td><td>0.631</td><td>0.532</td><td>0.446</td></tr><tr><td>c-initial=0.500</td><td>0.443</td><td>0.365</td><td>0.319</td><td>0.298</td><td>0.285</td></tr><tr><td>c-initial=0.100</td><td>0.146</td><td>0.189</td><td>0.217</td><td>0.243</td><td>0.259</td></tr><tr><td>c-initial=0.000</td><td>0.032</td><td>0.067</td><td>0.103</td><td>0.142</td><td>0.176</td></tr><tr><td>Epoch</td><td>0.6</td><td>0.7</td><td>0.8</td><td>0.9</td><td>1.0</td></tr><tr><td>c-initial=1.000</td><td>0.373</td><td>0.314</td><td>0.273</td><td>0.273</td><td>0.273</td></tr><tr><td>c-initial=0.500</td><td>0.273</td><td>0.273</td><td>0.273</td><td>0.273</td><td>0.273</td></tr><tr><td>c-initial=0.100</td><td>0.267</td><td>0.273</td><td>0.273</td><td>0.273</td><td>0.273</td></tr><tr><td>c-initial=0.000</td><td>0.205</td><td>0.225</td><td>0.236</td><td>0.241</td><td>0.243</td></tr><tr><td>Epoch</td><td>1.1</td><td>1.2</td><td>1.3</td><td>1.4</td><td>1.5</td></tr><tr><td>c-initial=1.000</td><td>0.273</td><td>0.273</td><td>0.273</td><td>0.273</td><td>0.273</td></tr><tr><td>c-initial=0.500</td><td>0.273</td><td>0.273</td><td>0.273</td><td>0.273</td><td>0.273</td></tr><tr><td>c-initial=0.100</td><td>0.273</td><td>0.273</td><td>0.273</td><td>0.273</td><td>0.273</td></tr><tr><td>c-initial=0.000</td><td>0.255</td><td>0.265</td><td>0.273</td><td>0.273</td><td>0.273</td></tr></table>
Table 7: Experiments for the learned parameter $c$ .
These experiments may provide more insight into our method.
We can see that $c$ converges to 0.273 from different initial values. It is worth noting that during the experiments in the paper, we set $c$ to a random value between 0 and 1, and it also converged to 0.273.
To verify whether 0.273 is the optimal value, we conducted a comparison by manually setting the value of $c$ (i.e., $c$ is not updated during training); the experimental results with unsupervised models are shown in Table 8.
When $c$ continues to decrease, the performance of the model does not improve, and 0.273 is the optimal value in our experiments. The best result here (77.46) is lower than the result (77.52) where $c$ is automatically learned, which shows that automatically learning the threshold $c$ helps the model obtain better generalization performance.
# D Experiments on different backbone models
Our experimental results on the BERT-large and RoBERTa-large models show trends consistent with the other experiments (the results of the unsupervised models are shown in Table 9), indicating that using our method improves the performance of the model.
For LLMs, we present experiments based on the LLaMA-7B model in Table 2 (i.e., LLM2Vec) on page 6 of the paper, demonstrating that our method is also applicable to larger-scale models.
# E Experiments on more downstream tasks
Following previous works, we evaluated our method on downstream transfer tasks; the results of the unsupervised baselines and the 3R method are shown in Table 10.
We evaluate our model performance on the following transfer tasks: MR (Pang and Lee, 2005), CR (Hu and Liu, 2004), SUBJ (Pang and Lee, 2004), MPQA (Wiebe et al., 2005), SST-2 (Socher et al., 2013), TREC (Voorhees and Tice, 2000), and MRPC (Dolan and Brockett, 2005).
Following previous work (Gao et al., 2021), we train a logistic regression classifier on top of the (frozen) sentence embeddings produced by different methods. The evaluation follows the default configuration of SentEval.
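A minimal sketch of this protocol follows, with random vectors standing in for the frozen sentence embeddings and a hand-rolled gradient-descent logistic regression in place of SentEval's classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logreg(X, y, lr=0.1, steps=500):
    # Plain gradient descent on the logistic loss; only the classifier
    # is trained -- the "encoder" that produced X stays frozen.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

# Two toy "classes" of frozen embeddings (e.g., positive/negative reviews).
X = np.vstack([rng.normal(-1, 1, (100, 16)), rng.normal(1, 1, (100, 16))])
y = np.array([0] * 100 + [1] * 100)
w, b = fit_logreg(X, y)
acc = np.mean(((X @ w + b) > 0) == y)
print(acc)
```

Because the encoder is frozen, any difference in classification accuracy between methods is attributable to the quality of the sentence embeddings themselves.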
The results show that our method can also benefit the transfer tasks.
# F Experiments for choosing the hyper-parameter N = 50
In this section, we demonstrate how we chose the hyper-parameter $\mathrm{N} = 50$. In the experiments, we randomly select $\mathrm{N}$ words from the high-frequency word list (of size 103, 176, 199, 300, 393, or 472) and then use the chosen words to generate redundant exemplars. We tried $\mathrm{N} = 40$, 50, and 60; $\mathrm{N} = 30$ and 70 were also tested, but their results were much lower than those of 40, 50, and 60. The experimental results are in Table 11, which shows that $\mathrm{N} = 50$ is the best in all the settings (103, 176, 199, 300, 393, 472). Hence, we chose 50 as the hyper-parameter.
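The sampling step can be read as drawing N distinct words from the high-frequency list; the sketch below uses placeholder words, and the actual construction of redundant exemplars follows the paper's method section, not this sketch:

```python
import random

random.seed(0)
high_freq = [f"word{i}" for i in range(300)]  # stand-in for the top-300 list
N = 50
sampled = random.sample(high_freq, N)  # N distinct high-frequency words

# Hypothetical exemplar: a sentence padded with the sampled redundant words.
exemplar = "An example input sentence. " + " ".join(sampled)
print(len(sampled))
```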
<table><tr><td>Model</td><td>STS12</td><td>STS13</td><td>STS14</td><td>STS15</td><td>STS16</td><td>STS-B</td><td>SICK-R</td><td>Avg</td></tr><tr><td>3R(c=0)</td><td>67.00</td><td>81.87</td><td>73.20</td><td>79.02</td><td>78.30</td><td>76.26</td><td>70.82</td><td>75.21</td></tr><tr><td>3R(c=0.1)</td><td>70.82</td><td>82.59</td><td>73.66</td><td>80.29</td><td>77.65</td><td>78.04</td><td>70.71</td><td>76.25</td></tr><tr><td>3R(c=0.2)</td><td>67.97</td><td>79.55</td><td>72.59</td><td>80.55</td><td>76.34</td><td>76.20</td><td>68.95</td><td>74.59</td></tr><tr><td>3R(c=0.25)</td><td>72.11</td><td>82.56</td><td>75.09</td><td>81.24</td><td>78.37</td><td>77.62</td><td>72.02</td><td>77.00</td></tr><tr><td>3R(c=0.273)</td><td>72.03</td><td>83.20</td><td>75.65</td><td>82.05</td><td>79.01</td><td>78.28</td><td>71.97</td><td>77.46</td></tr><tr><td>3R(c=0.3)</td><td>71.35</td><td>82.31</td><td>74.05</td><td>80.79</td><td>78.16</td><td>77.51</td><td>71.53</td><td>76.53</td></tr></table>
Table 8: Experiments for the learned parameter $c$ . The backbone model is SimCSE-BERT-large.
<table><tr><td>Model</td><td>STS12</td><td>STS13</td><td>STS14</td><td>STS15</td><td>STS16</td><td>STS-B</td><td>SICK-R</td><td>Avg</td></tr><tr><td>SimCSE-BERT-large</td><td>69.44</td><td>83.71</td><td>75.74</td><td>83.90</td><td>78.66</td><td>78.53</td><td>73.70</td><td>77.67</td></tr><tr><td>SimCSE-BERT-large +3R</td><td>73.58</td><td>83.93</td><td>76.82</td><td>84.25</td><td>80.36</td><td>80.16</td><td>73.65</td><td>78.96</td></tr><tr><td>SimCSE-Roberta-large</td><td>72.18</td><td>83.15</td><td>75.13</td><td>84.11</td><td>81.11</td><td>81.66</td><td>71.01</td><td>78.34</td></tr><tr><td>SimCSE-Roberta-large +3R</td><td>74.36</td><td>83.72</td><td>76.68</td><td>84.53</td><td>82.01</td><td>82.21</td><td>72.56</td><td>79.44</td></tr><tr><td>MultiCSRE-Robertabase</td><td>71.73</td><td>82.12</td><td>75.54</td><td>82.37</td><td>79.52</td><td>80.97</td><td>76.26</td><td>78.36</td></tr><tr><td>MultiCSRE-Robertabase+3R</td><td>73.46</td><td>83.74</td><td>77.72</td><td>83.96</td><td>80.74</td><td>82.45</td><td>77.83</td><td>79.99</td></tr></table>
Table 9: Experiments with different backbone models.
<table><tr><td>Model</td><td>MR</td><td>CR</td><td>SUBJ</td><td>MPQA</td><td>SST-2</td><td>TREC</td><td>MRPC</td><td>Avg</td></tr><tr><td>SimCSE-BERT-base</td><td>81.11</td><td>85.56</td><td>94.20</td><td>89.17</td><td>85.56</td><td>86.40</td><td>74.14</td><td>85.16</td></tr><tr><td>SimCSE-BERT-base +3R</td><td>81.42</td><td>86.65</td><td>94.53</td><td>89.30</td><td>86.33</td><td>88.21</td><td>74.13</td><td>85.80</td></tr><tr><td>SimCSE-Roberta-base</td><td>80.57</td><td>86.62</td><td>92.27</td><td>86.61</td><td>85.72</td><td>83.20</td><td>73.97</td><td>84.14</td></tr><tr><td>SimCSE-Roberta-base +3R</td><td>80.73</td><td>86.87</td><td>93.42</td><td>87.13</td><td>86.01</td><td>85.37</td><td>74.08</td><td>84.80</td></tr><tr><td>SimCSE-BERT-large</td><td>85.05</td><td>89.48</td><td>95.01</td><td>89.29</td><td>90.44</td><td>88.80</td><td>74.20</td><td>87.47</td></tr><tr><td>SimCSE-BERT-large +3R</td><td>85.02</td><td>89.53</td><td>95.26</td><td>89.32</td><td>91.13</td><td>90.05</td><td>74.64</td><td>87.85</td></tr><tr><td>SimCSE-Roberta-large</td><td>82.59</td><td>87.47</td><td>93.18</td><td>88.44</td><td>86.66</td><td>91.00</td><td>76.29</td><td>86.52</td></tr><tr><td>SimCSE-Roberta-large +3R</td><td>83.54</td><td>87.65</td><td>93.14</td><td>88.76</td><td>87.02</td><td>90.87</td><td>76.25</td><td>86.75</td></tr></table>
Table 10: Experiments with different backbone models on downstream tasks.
<table><tr><td>Model</td><td>STS12</td><td>STS13</td><td>STS14</td><td>STS15</td><td>STS16</td><td>STS-B</td><td>SICK-R</td><td>Avg</td></tr><tr><td>3R(length=103,N=40)</td><td>68.62</td><td>82.13</td><td>74.36</td><td>79.69</td><td>78.33</td><td>77.56</td><td>73.01</td><td>76.24</td></tr><tr><td>3R(length=103,N=50)</td><td>68.80</td><td>82.21</td><td>74.60</td><td>79.87</td><td>78.51</td><td>77.83</td><td>73.18</td><td>76.43</td></tr><tr><td>3R(length=103,N=60)</td><td>68.73</td><td>82.08</td><td>74.52</td><td>79.75</td><td>78.44</td><td>77.71</td><td>73.07</td><td>76.33</td></tr><tr><td>3R(length=176,N=40)</td><td>69.54</td><td>82.42</td><td>74.73</td><td>81.21</td><td>77.83</td><td>77.21</td><td>72.10</td><td>76.43</td></tr><tr><td>3R(length=176,N=50)</td><td>69.65</td><td>82.54</td><td>74.81</td><td>81.32</td><td>78.00</td><td>77.37</td><td>72.27</td><td>76.57</td></tr><tr><td>3R(length=176,N=60)</td><td>69.58</td><td>82.49</td><td>74.73</td><td>81.28</td><td>77.91</td><td>77.32</td><td>72.23</td><td>76.51</td></tr><tr><td>3R(length=199,N=40)</td><td>69.73</td><td>81.58</td><td>74.87</td><td>81.36</td><td>78.08</td><td>77.42</td><td>72.34</td><td>76.48</td></tr><tr><td>3R(length=199,N=50)</td><td>69.89</td><td>82.78</td><td>75.05</td><td>81.56</td><td>78.24</td><td>77.61</td><td>72.51</td><td>76.81</td></tr><tr><td>3R(length=199,N=60)</td><td>69.83</td><td>82.69</td><td>74.94</td><td>81.47</td><td>78.15</td><td>77.54</td><td>72.46</td><td>76.73</td></tr><tr><td>3R(length=300,N=40)</td><td>70.48</td><td>83.40</td><td>75.61</td><td>81.58</td><td>79.06</td><td>78.11</td><td>72.77</td><td>77.29</td></tr><tr><td>3R(length=300,N=50)</td><td>70.51</td><td>83.46</td><td>75.89</td><td>82.06</td><td>79.18</td><td>78.69</td><td>72.84</td><td>77.52</td></tr><tr><td>3R(length=300,N=60)</td><td>72.03</td><td>83.20</td><td>75.65</td><td>82.05</td><td>79.01</td><td>78.28</td><td>71.97</td><td>77.46</td></tr><tr><td>3R(length=393,N=40)</td><td>69.78</td><td>82.63</td><td>74.89</td><td>81.45</td><td>78.07</td><td>77.44</td><td>72.40</td><td>76.67</td></tr><tr><td>3R(length=393,N=50)</td><td>70.03</td><td>82.92</td><td>75.19</td><td>81.70</td><td>78.38</td><td>77.75</td><td>72.65</td><td>76.95</td></tr><tr><td>3R(length=393,N=60)</td><td>69.84</td><td>82.73</td><td>74.94</td><td>81.51</td><td>78.10</td><td>77.52</td><td>72.43</td><td>76.72</td></tr><tr><td>3R(length=472,N=40)</td><td>69.29</td><td>81.77</td><td>73.44</td><td>81.15</td><td>78.72</td><td>76.68</td><td>70.36</td><td>75.92</td></tr><tr><td>3R(length=472,N=50)</td><td>69.46</td><td>81.98</td><td>73.61</td><td>81.36</td><td>78.90</td><td>76.88</td><td>70.55</td><td>76.11</td></tr><tr><td>3R(length=472,N=60)</td><td>69.38</td><td>81.89</td><td>73.53</td><td>81.28</td><td>78.81</td><td>76.76</td><td>70.46</td><td>76.02</td></tr></table>
Table 11: Experiments for choosing $\mathrm{N} = 50$. The backbone model is SimCSE-BERT-base.
2025/3R_ Enhancing Sentence Representation Learning via Redundant Representation Reduction/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eab422e59157531e9ebe8c6446c69b151752fa119c62be0dba9fba6b25d271bf
size 971805
2025/3R_ Enhancing Sentence Representation Learning via Redundant Representation Reduction/layout.json
ADDED
2025/A Case Against Implicit Standards_ Homophone Normalization in Machine Translation for Languages that use the Ge’ez Script./a570cebb-8359-4bf2-9617-79a7ee147972_content_list.json
ADDED
@@ -0,0 +1,1635 @@
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "A Case Against Implicit Standards: Homophone Normalization in Machine Translation for Languages that Use the Ge'ez Script",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
156,
|
| 8 |
+
78,
|
| 9 |
+
842,
|
| 10 |
+
118
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Hellina Hailu Nigatu<sup>1</sup>, Atnafu Lambebo Tonja<sup>2</sup>, Henok Biadglign Ademtew<sup>3</sup>, Hizkel Mitiku Alemayehu<sup>4</sup>, Negasi Haile Abadi<sup>5</sup>, Tadesse Destaw Belay<sup>6</sup>, Seid Muhie Yimam<sup>7</sup>",
|
| 17 |
+
"text_level": 1,
|
| 18 |
+
"bbox": [
|
| 19 |
+
171,
|
| 20 |
+
124,
|
| 21 |
+
836,
|
| 22 |
+
175
|
| 23 |
+
],
|
| 24 |
+
"page_idx": 0
|
| 25 |
+
},
|
| 26 |
+
{
|
| 27 |
+
"type": "text",
|
| 28 |
+
"text": "$^{1}$ UC Berkeley, $^{2}$ MBZUAI, $^{3}$ Vella AI, $^{4}$ Paderborn University, $^{5}$ Lesan AI, $^{6}$ Instituto Politécnico Nacional, $^{7}$ University of Hamburg",
|
| 29 |
+
"bbox": [
|
| 30 |
+
200,
|
| 31 |
+
190,
|
| 32 |
+
801,
|
| 33 |
+
225
|
| 34 |
+
],
|
| 35 |
+
"page_idx": 0
|
| 36 |
+
},
|
| 37 |
+
{
|
| 38 |
+
"type": "text",
|
| 39 |
+
"text": "Correspondence: hellina_nigatu@berkeley.edu",
|
| 40 |
+
"bbox": [
|
| 41 |
+
352,
|
| 42 |
+
227,
|
| 43 |
+
650,
|
| 44 |
+
241
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "Abstract",
|
| 51 |
+
"text_level": 1,
|
| 52 |
+
"bbox": [
|
| 53 |
+
260,
|
| 54 |
+
252,
|
| 55 |
+
342,
|
| 56 |
+
268
|
| 57 |
+
],
|
| 58 |
+
"page_idx": 0
|
| 59 |
+
},
|
| 60 |
+
{
|
| 61 |
+
"type": "text",
|
| 62 |
+
"text": "Homophone<sup>1</sup> normalization—where characters that have the same sound in a writing script are mapped to one character—is a pre-processing step applied in Amharic Natural Language Processing (NLP) literature. While this may improve performance reported by automatic metrics, it also results in models that are unable to effectively process different forms of writing in a single language. Further, there might be impacts in transfer learning, where models trained on normalized data do not generalize well to other languages. In this paper, we experiment with monolingual training and crosslingual transfer to understand the impacts of normalization on languages that use the Ge'ez script. We then propose a post-inference intervention in which normalization is applied to model predictions instead of training data. With our simple scheme of post-inference normalization, we show that we can achieve an increase in BLEU score of up to 1.03 while preserving language features in training. Our work contributes to the broader discussion on technology-facilitated language change and calls for more language-aware interventions.",
|
| 63 |
+
"bbox": [
|
| 64 |
+
144,
|
| 65 |
+
275,
|
| 66 |
+
460,
|
| 67 |
+
630
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
+   {
+     "type": "text",
+     "text": "1 Introduction",
+     "text_level": 1,
+     "bbox": [114, 640, 260, 656],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "The majority of the world's languages are underrepresented in natural language processing (NLP) research (Joshi et al., 2020). Collectively, these languages have been referred to as 'low-resource,' owing to the various resources that are not available for them (Nigatu et al., 2024). One of the many resources that are lacking for low-resourced languages is pre-processing tools (Niyongabo et al., 2020). From tokenization methods to basic data cleaning tools, many of the widely used pre-processing schemes do not include, or are not effective for, low-resourced languages (Ahia et al., 2023; Emezue et al., 2023).",
+     "bbox": [112, 665, 490, 873],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "Pre-processing steps, ranging from removing punctuation marks to tokenizing text, are essential steps in determining the efficacy of downstream models. For instance, languages that use different writing scripts have been transliterated to a single script to facilitate cross-lingual transfer (Khare et al., 2021). Prior work has explored morpheme-based tokenization for morphologically rich languages as an alternative to word-level tokenization to enhance performance (Tachbelie et al., 2014). Within phonetic languages like Amharic, a common pre-processing intervention has been homophone normalization, i.e., mapping characters with similar sounds to one character (Biadgligne and Smaili, 2021; Abate et al., 2018).",
+     "bbox": [507, 253, 885, 493],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "Homophone normalization has mainly been applied to improve automatic metric scores. Current NLP evaluation schemes, particularly automatic metrics like BLEU (Papineni et al., 2002), which require an exact match between n-grams, do not handle homophone characters. As an example, let us take the homophones $<\\vartheta>$ and $<\\lambda>$ which both represent the sound /?ä/ in Amharic. If the Amharic word for \"eye\" is written as '987' in the reference but the model prediction outputs 'λ87', evaluation with BLEU score would not count it as a match. However, for an Amharic speaker, those two words have the same pronunciation and meaning. Homophone normalization averts this problem by mapping all homophone characters into a single character and thereby boosting automatic metric scores (Belay et al., 2022). Homophone normalization also reduces the vocabulary size of a dataset, which may be desirable for some applications (Abate et al., 2020). While this indicates a potential benefit in improving performance when using automatic metrics for evaluation, it may lead to downstream issues for language speakers.",
+     "bbox": [507, 497, 885, 868],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "In this paper, we argue that the seemingly innocuous act of homophone normalization for Amharic NLP sets and perpetuates an implicit",
+     "bbox": [507, 871, 884, 919],
+     "page_idx": 0
+   },
+   {
+     "type": "page_footnote",
+     "text": "<sup>1</sup>We use the Merriam-Webster definition of the term homophone: \"a character or group of characters pronounced the same as another character or group.\"",
+     "bbox": [112, 879, 489, 919],
+     "page_idx": 0
+   },
+   {
+     "type": "page_number",
+     "text": "10321",
+     "bbox": [475, 927, 524, 940],
+     "page_idx": 0
+   },
+   {
+     "type": "footer",
+     "text": "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 10321-10332 November 4-9, 2025 ©2025 Association for Computational Linguistics",
+     "bbox": [152, 945, 843, 972],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "standard for Ge'ez script languages. Currently, homophone normalization is actively being applied to Amharic, one of the many languages that use the Ge'ez script. However, the characters that are normalized in Amharic have distinct sounds in languages such as Tigrinya and Ge'ez. Hence, the implicit standard set by this pre-processing step may have a downstream impact on cross-lingual transfer for the other languages that use the Ge'ez script. Additionally, models trained on normalized datasets will be unable to process alternative word spellings. However, language is not monolithic; normalization may limit how speakers of different dialects and variants of a single language can interact with language technologies. Using Machine Translation (MT) as an NLP task and Amharic, Tigrinya, and Ge'ez as languages of focus, we pose the following research questions:",
+     "bbox": [112, 84, 492, 375],
+     "page_idx": 1
+   },
+   {
+     "type": "list",
+     "sub_type": "text",
+     "list_items": [
+       "- RQ1: How do existing MT models handle words with homophone characters in languages that use the Ge'ez script?",
+       "- RQ2: What is the impact of applying different normalization schemes to training data on the performance of MT systems?",
+       "- RQ3: What is the impact of homophone normalization on transfer learning in MT for related languages?",
+       "- RQ4: How does applying normalization post-translation compare to applying normalization to the training data?"
+     ],
+     "bbox": [134, 381, 489, 602],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "Multilingual NLP research is often driven by a goal of generalization, proposing ways to make a single model work well for multiple languages (e.g. NLLB et al., 2022). While there are demonstrated benefits to this approach, we use our work as a case study to question what we lose through implicit standards in language processing. We find that homophone normalization negatively affects cross-lingual transfer and that applying normalization post-translation boosts automatic scores without compromising language characteristics (Sec. 4). Our work highlights the importance of investigating downstream impacts of pre-processing steps, particularly for low-resourced languages.",
+     "bbox": [112, 609, 489, 851],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "2 Background and Related Work",
+     "text_level": 1,
+     "bbox": [112, 862, 418, 879],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "In this section, we provide background on the languages of study and the writing script. We also",
+     "bbox": [112, 887, 489, 919],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "give background on normalization schemes used in prior work to handle characters with the same sound.",
+     "bbox": [507, 84, 884, 131],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "2.1 Languages of Study",
+     "text_level": 1,
+     "bbox": [507, 146, 712, 162],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "The Ge'ez script is an Abugida writing system: each character in the script represents a consonant and a vowel<sup>2</sup>. Vowels are indicated by modifying the base character. There are 7 vowels in the Ge'ez writing script; hence, each base character has at least 7 variations. For instance, the base character $\\langle \\Lambda \\rangle$ is used to represent the sound $/lə/$ and is modified to $\\langle \\Lambda \\rangle /lu/$, $\\langle \\Lambda \\rangle /li/$, and so on. Additionally, there are characters used to represent labiovelars such as $\\langle \\mathfrak{A} \\rangle /kwa/$. The Ge'ez script is used to write Afro-Semitic languages of Ethiopia and Eritrea, including our languages of focus in this paper: Amharic, Tigrinya, and Ge'ez.",
+     "bbox": [505, 167, 884, 376],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "Amharic is an Afro-Semitic language spoken by an estimated 57.5 million people worldwide (Basha et al., 2023). It is primarily spoken in Ethiopia and is one of the federal working languages of the country. The Amharic alphabet has 33 base characters (Adugna).",
+     "bbox": [507, 388, 884, 483],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "Tigrinya is an Afro-Semitic language spoken by an estimated 10 million people worldwide (Haile et al., 2023). Tigrinya is one of the federal working languages of Ethiopia and is one of the governmental and national languages of Eritrea. The Tigrinya alphabet has 32 base characters (Negash, 2017).",
+     "bbox": [507, 495, 885, 608],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "Ge'ez is an Afro-Semitic language that is currently spoken only as a second language<sup>3</sup>. It is primarily used as a liturgical language within Ethiopian and Eritrean religious institutions. The Ge'ez alphabet has 26 base characters (Demilew, 2019).",
+     "bbox": [507, 620, 885, 715],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "2.2 Homophones in the Ge'ez Script",
+     "text_level": 1,
+     "bbox": [507, 730, 811, 746],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "As languages evolve, phonological change occurs where some phonemes might split, merge, or emerge (Boldsen and Paggio, 2022). Since written language evolves at a much slower pace than spoken language, the phonetic changes are usually not reflected in the written forms of language (Obasi, 2018). Due to merged phonemes that are represented by different characters that, in prior years,",
+     "bbox": [505, 752, 884, 881],
+     "page_idx": 1
+   },
+   {
+     "type": "page_footnote",
+     "text": "<sup>2</sup>https://www.omniglot.com/writing/ethiopic.htm",
+     "bbox": [529, 890, 831, 904],
+     "page_idx": 1
+   },
+   {
+     "type": "page_footnote",
+     "text": "<sup>3</sup>https://www.ethnologue.com/language/gez/",
+     "bbox": [529, 904, 803, 917],
+     "page_idx": 1
+   },
+   {
+     "type": "page_number",
+     "text": "10322",
+     "bbox": [477, 927, 524, 940],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "might have had distinct sounds, the Amharic alphabet has multiple characters that have the same sound (Aklilu, 2010). For instance, all of the following characters in the Ge'ez script $<\\lambda>$, $<\\lambda>$, $<0>$, or $<9>$ are read as /?ä/ in Amharic.",
+     "bbox": [110, 84, 489, 164],
+     "page_idx": 2
+   },
+   {
+     "type": "text",
+     "text": "Writing scripts are also shared by several languages that may not have evolved in the same way. For the Ge'ez script in particular, some of the characters that have the same sound in Amharic have distinct sounds in Tigrinya. For example, all four characters in the above example that represent /?a/ in Amharic have distinct sounds in Tigrinya: $\\langle \\lambda \\rangle$ /?ə/, $\\langle \\lambda \\rangle$ /?a/, $\\langle 0 \\rangle$ /?ə/ and $\\langle 9 \\rangle$ /?a/. There are some characters from the Ge'ez script that have the same sound in Tigrinya, for example 'U' and 'n' both represent /sə/. Due to the differences in how each language uses the characters, altering homophones in the Ge'ez script will have different effects across languages. For example, if the word for 'eye', written as '927' in Tigrinya, is instead written as 'A27', the word would have no meaning in Tigrinya, while in Amharic both spellings would mean 'eye'. In the Ge'ez language, changing the characters will result in a change in meaning. For instance, the word 'n' means 'to hold a wedding' while the word 'U' means 'to get inside'.",
+     "bbox": [115, 167, 489, 505],
+     "page_idx": 2
+   },
+   {
+     "type": "text",
+     "text": "2.3 Handling Homophone Characters in NLP",
+     "text_level": 1,
+     "bbox": [112, 521, 485, 537],
+     "page_idx": 2
+   },
+   {
+     "type": "text",
+     "text": "Homophone normalization helps improve automatic metric scores by mapping different grapheme variations of a homophone character into a single representation (Sec. 1). It has mainly been applied in Amharic NLP literature for Machine Translation (e.g. Abate et al., 2018; Chekole et al., 2024) and semantic modeling tasks (e.g. Belay et al., 2021). However, within papers that report normalizing homophone characters, there is no standard normalization scheme. For instance, some publicly available tools normalize characters with the same sound only (e.g. Kidanemariam, 2019), others normalize characters with the same sound and labialized characters (Mekuriaw and Cohan, 2024; Eshetu, 2022), and some normalize characters with the same sound, labialized characters, and some characters with the same base consonant (Yimam et al., 2021). Further, some prior works report mapping homophone characters to \"the most frequently used characters\" (Biadgligne and Smaili, 2022; Abate et al., 2018).",
+     "bbox": [110, 546, 489, 883],
+     "page_idx": 2
+   },
+   {
+     "type": "text",
+     "text": "While most prior works report using normalization as a standard pre-processing step, Belay et al.",
+     "bbox": [112, 887, 489, 919],
+     "page_idx": 2
+   },
+   {
+     "type": "text",
+     "text": "(2022) compared MT models trained with and without normalization and reported score improvement for models trained with normalized data. Belay et al. (2021) applied normalization to semantic modeling tasks and found that normalization helped for Information Retrieval but hurt performance for PoS tagging and sentiment analysis. However, these investigations are (1) limited to the Amharic language and (2) do not compare the impact of the different normalization schemes in the literature.",
+     "bbox": [507, 84, 884, 260],
+     "page_idx": 2
+   },
+   {
+     "type": "text",
+     "text": "Cases for and against homophone normalization in Amharic: From linguistics literature, there have been three viewpoints on how to handle characters that have the same sound in Amharic: (1) standardize spellings, (2) remove homophone characters from the alphabet, i.e., normalize, or (3) perform no interventions (Aklilu, 2010).",
+     "bbox": [507, 275, 884, 387],
+     "page_idx": 2
+   },
+   {
+     "type": "text",
+     "text": "Thus far, the Amharic NLP literature has adopted an (implicit) standardization step with homophone normalization. In this paper, we offer a post-inference intervention that provides a middle ground to the three viewpoints described above. Instead of training on normalized data, we propose performing normalization when calculating a particular metric. We first investigate the impacts of normalization and homophones in MT in zero-shot, monolingual, and cross-lingual settings and show that our post-inference intervention can improve metric scores.",
+     "bbox": [507, 390, 885, 582],
+     "page_idx": 2
+   },
+   {
+     "type": "text",
+     "text": "3 Methods",
+     "text_level": 1,
+     "bbox": [509, 600, 621, 615],
+     "page_idx": 2
+   },
+   {
+     "type": "text",
+     "text": "To test the impact of homophone normalization, we prepared an evaluation dataset with a focus on words that have homophone characters in the three languages using publicly available MT datasets (Sec. 3.1). We then adopted two normalization schemes for our experiments, which we describe in Sec. 3.2.",
+     "bbox": [507, 629, 882, 741],
+     "page_idx": 2
+   },
+   {
+     "type": "text",
+     "text": "3.1 Dataset",
+     "text_level": 1,
+     "bbox": [507, 758, 616, 771],
+     "page_idx": 2
+   },
+   {
+     "type": "text",
+     "text": "We prepared an evaluation dataset in the three languages of study by focusing on sentences that have high counts of characters with the same sound. In Table 1, we give the details of our dataset<sup>4</sup>. We selected sentences from the following datasets for each language:",
+     "bbox": [507, 781, 884, 878],
+     "page_idx": 2
+   },
+   {
+     "type": "page_footnote",
+     "text": "<sup>4</sup>Models, code and data can be found at https://github.com/hhnigatu/geez.script_normalization",
+     "bbox": [507, 892, 882, 917],
+     "page_idx": 2
+   },
+   {
+     "type": "page_number",
+     "text": "10323",
+     "bbox": [477, 927, 524, 940],
+     "page_idx": 2
+   },
+   {
+     "type": "table",
+     "img_path": "images/5d1948c3e1d982bb6c440eb809bcfd9b0d6048d08ad005a35dc7e8b13887090f.jpg",
+     "table_caption": [],
+     "table_footnote": [],
+     "table_body": "<table><tr><td>Target Language</td><td>Source Dataset</td><td>Training</td><td>Eval</td><td>Test</td></tr><tr><td>Amharic</td><td>Abate et al. (2018)</td><td>199.2k</td><td>22.1k</td><td>2.4k</td></tr><tr><td>Ge'ez</td><td>AGE</td><td>15.7k</td><td>1.9k</td><td>1.9k</td></tr><tr><td>Tigrinya</td><td>Abate et al. (2018); Lakew et al. (2020)</td><td>75.4k</td><td>30.1k</td><td>2.4k</td></tr></table>",
+     "bbox": [115, 82, 489, 205],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "Amharic-English-Machine-Translation",
+     "text_level": 1,
+     "bbox": [112, 269, 430, 285],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "Corpus The Amharic-English Machine Translation Corpus (Abate et al., 2018) contains Amharic-English parallel sentences collected from Bible, History, News, and Legal sources. The dataset has a total of $276\\mathrm{k}$ parallel sentences. From the test split of the Abate et al. (2018) dataset, we selected sentences that had at least 9 homophone characters. With this filtering step, we had a test set of $2.4\\mathrm{k}$ sentence pairs.",
+     "bbox": [112, 285, 489, 431],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "Tigrinya-English MT For Tigrinya, we used data from Lakew et al. (2020) and Abate et al. (2018). The dataset had a total of 150.8k parallel sentences. Similar to Amharic, we selected sentences that had at least 17 homophone characters, which resulted in a test set with 2.4k English-Tigrinya parallel sentences.",
+     "bbox": [112, 439, 489, 551],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "AGE We used the AGE dataset (Ademtew and Birbo, 2024) which has 17.5k Amharic-Ge'ez and 18.6k Ge'ez-English parallel sentences. For our experiments, we used the English-Ge'ez data and split it into training, evaluation, and test sets at an 8:1:1 ratio. With this, we had 1.9k Ge'ez-English parallel sentences as our test set. Since the Ge'ez dataset is small, we did not apply additional filtering to the test set.",
+     "bbox": [112, 560, 489, 705],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "3.2 Normalization Settings",
+     "text_level": 1,
+     "bbox": [112, 715, 341, 732],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "As discussed in Sec. 2, there are multiple normalization schemes adopted by prior work, particularly when dealing with Amharic datasets. In this study, we employ three normalization settings:",
+     "bbox": [112, 736, 489, 802],
+     "page_idx": 3
+   },
+   {
+     "type": "list",
+     "sub_type": "text",
+     "list_items": [
+       "- No-Norm: We take the dataset as is, without applying any normalization or other alterations. We use this setting as a baseline.",
+       "- H-only: We normalize all characters that have the same sound in a given language. We apply this approach for Amharic and"
+     ],
+     "bbox": [134, 812, 487, 917],
+     "page_idx": 3
+   },
+   {
+     "type": "table",
+     "img_path": "images/5e61b4f9bab6f7851365a8941edd985d8fb624fbf27d6c044db196e3b7c067b9.jpg",
+     "table_caption": [
+       "Table 1: Benchmark dataset description along with source datasets."
+     ],
+     "table_footnote": [],
+     "table_body": "<table><tr><td>Language</td><td>No Norm</td><td>H-Only</td><td>HSL</td></tr><tr><td>Amharic</td><td>✓</td><td>✓</td><td>✓</td></tr><tr><td>Tigrinya</td><td>✓</td><td>✓</td><td>-</td></tr><tr><td>Ge'ez</td><td>✓</td><td>-</td><td>-</td></tr></table>",
+     "bbox": [552, 82, 840, 149],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "Table 2: Application of normalization schemes to the three languages of study.",
+     "bbox": [507, 158, 880, 189],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "Tigrinya, with separate scripts for each language as the characters with the same sound in each language differ (Sec. 2). We map homophone characters to the most frequent character in the dataset.",
+     "bbox": [544, 212, 884, 293],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "- HSL: In this setting, we use the script from (Yimam et al., 2021) and normalize homophone characters, similar sounds, and labialized characters. Since this approach has only been used for Amharic, and there is no standard way to determine \"similar\" sounds, we only apply it to the Amharic dataset<sup>5</sup>.",
+     "bbox": [531, 304, 882, 416],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "In Table 2, we give details on how we applied the normalization schemes to our datasets. Note that, for Ge'ez, we did not apply any normalization as all characters are distinct, i.e., swapping characters, even if they have the same sound, will result in a change in meaning (Sec. 2).",
+     "bbox": [507, 428, 882, 524],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "4 Experimental Study",
+     "text_level": 1,
+     "bbox": [507, 535, 717, 552],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "In this section, we first give our experimental setup, describing the models we used for our experiments in Sec. 4.1. We conduct experiments on the zero-shot performance of MT systems on sentences with homophone characters (Sec. 4.2). We then investigate the impact of normalizing homophone characters in training data for monolingual model training and cross-lingual transfer (Sec. 4.3). Then, we investigate the efficacy of post-inference normalization in Sec. 4.4.",
+     "bbox": [507, 561, 884, 721],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "4.1 Experimental Setup",
+     "text_level": 1,
+     "bbox": [507, 732, 712, 747],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "4.1.1 Models",
+     "text_level": 1,
+     "bbox": [507, 753, 628, 766],
+     "page_idx": 3
+   },
+   {
+     "type": "text",
+     "text": "Pre-trained MT Models For our zero-shot experiments, we used Google Translate<sup>6</sup>, M2M-100-418M (Fan et al., 2021), and NLLB (NLLB et al., 2022) models. All three models support",
+     "bbox": [507, 772, 882, 835],
+     "page_idx": 3
+   },
+   {
+     "type": "page_footnote",
+     "text": "<sup>5</sup>In (Yimam et al., 2021), characters with 'similar' sounds are some characters that have the same consonant but different vowels; for instance, $\\langle \\overline{\\tau} \\rangle / \\hat{\\mathrm{ts}} \\mathrm{i}/$ and $\\langle \\overline{\\tau} \\rangle / \\hat{\\mathrm{ts}} \\mathrm{i}/$ are mapped to $\\langle \\overline{\\tau} \\rangle$. However, there is no standard for determining the similarity of the sounds.",
+     "bbox": [507, 844, 882, 904],
+     "page_idx": 3
+   },
+   {
+     "type": "page_footnote",
+     "text": "<sup>6</sup>https://translate.google.com/",
+     "bbox": [531, 904, 712, 917],
+     "page_idx": 3
+   },
+   {
+     "type": "page_number",
+     "text": "10324",
+     "bbox": [477, 927, 524, 940],
+     "page_idx": 3
+   },
| 750 |
+
{
|
| 751 |
+
"type": "text",
|
| 752 |
+
"text": "Amharic, while Google Translate and NLLB support Tigrinya. However, Ge'ez is not included in any of the three models; hence, we did not perform any zero-shot experiment for English-Ge'ez translation.",
|
| 753 |
+
"bbox": [
|
| 754 |
+
112,
|
| 755 |
+
84,
|
| 756 |
+
489,
|
| 757 |
+
164
|
| 758 |
+
],
|
| 759 |
+
"page_idx": 4
|
| 760 |
+
},
|
| 761 |
+
{
|
| 762 |
+
"type": "text",
|
| 763 |
+
"text": "Models for Training To understand the effects of normalization during training, we (1) finetuned the NLLB-600M (NLLB et al., 2022) model and (2) trained an encoder-decoder Transformer model (Vaswani et al., 2017) from scratch. Since the NLLB model includes Amharic and Tigrinya data, we trained the Transformer MT model from scratch to avoid the impact of pre-training in our experimental results.",
|
| 764 |
+
"bbox": [
|
| 765 |
+
112,
|
| 766 |
+
174,
|
| 767 |
+
489,
|
| 768 |
+
317
|
| 769 |
+
],
|
| 770 |
+
"page_idx": 4
|
| 771 |
+
},
|
| 772 |
+
{
|
| 773 |
+
"type": "text",
|
| 774 |
+
"text": "Training Details We trained an encoder-decoder Transformer model with 8 heads and 6 layers. We used an Adam Optimizer (Kingma and Ba, 2017) with a learning rate of 1e-4 and $\\beta 1 = 0.9$ and $\\beta 2 = 0.98$ . We used a learning rate scheduler that decreased the learning rate by a factor of 0.5 if there were no improvements in 2 consecutive epochs. We used Cross Entropy Loss as our loss function and trained the model for 30 epochs. The best model checkpoint based on evaluation set performance was chosen for the final evaluation. To fine-tune the NLLB-600M (NLLB et al., 2022) model, we used the Trainer module from the HuggingFace transformer library (Wolf et al., 2020). We fine-tuned the model for 5 epochs with a learning rate of 5e-5 using the default training arguments and a batch size of 32. We used the model's default tokenizer without any additional prefixing. We used the same training scheme for all languages and all experiments.",
|
| 775 |
+
"bbox": [
|
| 776 |
+
115,
|
| 777 |
+
328,
|
| 778 |
+
489,
|
| 779 |
+
650
|
| 780 |
+
],
|
| 781 |
+
"page_idx": 4
|
| 782 |
+
},
+    {
+        "type": "text",
+        "text": "4.1.2 Evaluation",
+        "text_level": 1,
+        "bbox": [112, 659, 260, 671],
+        "page_idx": 4
+    },
+    {
+        "type": "text",
+        "text": "We used both automatic metrics and human evaluation. We performed our evaluation on a single run for each model. For automatic metrics, we used BLEU (Papineni et al., 2002) and $\\mathrm{ChrF}$ (Popović, 2015). The BLEU score focuses on overlap in word-level n-grams, whereas $\\mathrm{ChrF}$ focuses on character-level n-grams. We used the SACREBLEU (Post, 2018) implementation for both BLEU and $\\mathrm{ChrF}$ , with their default settings. When calculating the scores, we removed punctuation marks from both reference and prediction sentences. For all test cases, we apply the same normalization scheme to the reference and prediction. This makes it difficult to make comparisons across the references; for instance, we cannot directly compare the refer",
+        "bbox": [112, 677, 489, 917],
+        "page_idx": 4
+    },
+    {
+        "type": "table",
+        "img_path": "images/03c157773ccac2595810ab1a8585709f6de4921e0f2110f2730cebb064974838.jpg",
+        "table_caption": [],
+        "table_footnote": [],
+        "table_body": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"2\">Amharic</td><td colspan=\"2\">Tigrinya</td></tr><tr><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td></tr><tr><td>NLLB - 3B</td><td>10.47</td><td>34.05</td><td>11.26</td><td>31.22</td></tr><tr><td>NLLB - 600M</td><td>6.98</td><td>29.16</td><td>11.55</td><td>31.30</td></tr><tr><td>Google Translate</td><td>9.89</td><td>33.67</td><td>16.02</td><td>38.75</td></tr><tr><td>M2M - 418M</td><td>13.51</td><td>34.78</td><td>-</td><td>-</td></tr></table>",
+        "bbox": [519, 82, 873, 170],
+        "page_idx": 4
+    },
+    {
+        "type": "text",
+        "text": "Table 3: Zero-Shot translation performance.",
+        "bbox": [544, 180, 845, 193],
+        "page_idx": 4
+    },
+    {
+        "type": "text",
+        "text": "ence without normalization to the reference with H-only normalization. However, as described in Sec. 2, the motivation for applying normalization is to increase scores of automatic metrics by \"standardizing\" the spelling of words in a given language with a normalization scheme. Further, depending on the language it is applied to, normalization affects the spelling of a word and not the meaning. Hence, references with different normalization settings applied carry the same semantic meaning, although they may differ in the spelling of some words.",
+        "bbox": [507, 219, 884, 411],
+        "page_idx": 4
+    },
+    {
+        "type": "text",
+        "text": "For human evaluation, native speakers of Tigrinya and Amharic and second language speakers of Ge'ez qualitatively looked at 50 random sample predictions, comparing the outputs of the different models. We focused on the following axes when evaluating: (1) rating which translation was better from the given models, (2) identifying words that were mistranslated (e.g., words that were in Amharic for Tigrinya translations and vice versa), and (3) identifying changes in homophones in the translations.",
+        "bbox": [507, 412, 884, 588],
+        "page_idx": 4
+    },
+    {
+        "type": "text",
+        "text": "4.2 Zero-Shot Experiments",
+        "text_level": 1,
+        "bbox": [507, 600, 741, 615],
+        "page_idx": 4
+    },
+    {
+        "type": "text",
+        "text": "This experiment aims to answer RQ1, that is, to understand how pretrained MT models handle characters with the same sound in the three languages of study.",
+        "bbox": [507, 621, 882, 684],
+        "page_idx": 4
+    },
+    {
+        "type": "text",
+        "text": "Results As can be seen in Table 3, all models except the NLLB-200-Distilled-600M have comparable ChrF scores, with M2M-100 having the highest ChrF for Amharic. For Tigrinya, Google Translate has the highest BLEU and ChrF scores. Additionally, M2M-100 has the highest BLEU score for Amharic, while NLLB-200-Distilled-600M has the lowest BLEU score. Further, the open-sourced NLLB-200-Distilled-600M and M2M-100 models performed better than the commercially available Google Translate model for Amharic.",
+        "bbox": [507, 694, 884, 869],
+        "page_idx": 4
+    },
+    {
+        "type": "text",
+        "text": "Qualitatively, we observe that the outputs of the NLLB models for English-Amharic translation usually stick with the Amharic \"Standard\"",
+        "bbox": [507, 871, 882, 917],
+        "page_idx": 4
+    },
+    {
+        "type": "page_number",
+        "text": "10325",
+        "bbox": [477, 927, 524, 940],
+        "page_idx": 4
+    },
+    {
+        "type": "table",
+        "img_path": "images/f4735d8415f518e9595e503be0141547eef4504f0e54a4bfe8345756a81f73c3.jpg",
+        "table_caption": [],
+        "table_footnote": [],
+        "table_body": "<table><tr><td rowspan=\"2\">Language</td><td rowspan=\"2\">Stage</td><td rowspan=\"2\">Model</td><td colspan=\"2\">No Norm</td><td colspan=\"2\">H-only</td><td colspan=\"2\">HSL</td></tr><tr><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td></tr><tr><td rowspan=\"2\">Tigrinya</td><td rowspan=\"2\">Training</td><td>Transformer</td><td>10.87</td><td>25.51</td><td>10.21</td><td>26.61</td><td>-</td><td>-</td></tr><tr><td>NLLB</td><td>22.71</td><td>43.11</td><td>21.41</td><td>41.78</td><td>-</td><td>-</td></tr><tr><td rowspan=\"5\">Amharic</td><td rowspan=\"2\">Training</td><td>Transformer</td><td>12.32</td><td>29.50</td><td>9.31</td><td>26.90</td><td>6.22</td><td>26.88</td></tr><tr><td>NLLB</td><td>19.09</td><td>41.98</td><td>19.71</td><td>42.59</td><td>17.13</td><td>40.50</td></tr><tr><td rowspan=\"3\">Inference</td><td>Transformer</td><td>12.32</td><td>29.50</td><td>12.56</td><td>29.77</td><td>12.56</td><td>29.79</td></tr><tr><td>NLLB</td><td>19.09</td><td>41.98</td><td>19.78</td><td>42.60</td><td>19.78</td><td>42.61</td></tr><tr><td>Belay et al. (2022)</td><td>13.51</td><td>34.78</td><td>14.54</td><td>35.94</td><td>14.54</td><td>35.95</td></tr></table>",
+        "bbox": [174, 80, 823, 225],
+        "page_idx": 5
+    },
+    {
+        "type": "text",
+        "text": "Table 4: Performance of models where normalization is applied during Training and Inference. The best performance in each row is indicated in bold.",
+        "bbox": [112, 233, 884, 262],
+        "page_idx": 5
+    },
+    {
+        "type": "text",
+        "text": "homophone usage (Sec. 2). This behavior is not observed in Google Translate. For instance, when translating the word \"God,\" all NLLB and M2M models consistently translate it as \"XH-XH\", which is consistent with the Ge'ez spelling of the word. However, Google Translate sometimes tends to translate it as \"XH XH\", switching the $<\\text{h}>$ character with $<\\text{g}>$ , which does not conform to the standard homophone usage of the word (Aklilu, 2010). This difference between the open-source models and Google Translate may be because the open models are trained on publicly available data, which is heavily dominated by religious data for low-resourced languages.",
+        "bbox": [110, 288, 489, 514],
+        "page_idx": 5
+    },
+    {
+        "type": "text",
+        "text": "4.3 Effects of Training Models with Homophone Normalization",
+        "text_level": 1,
+        "bbox": [112, 524, 410, 555],
+        "page_idx": 5
+    },
+    {
+        "type": "text",
+        "text": "To answer RQ2 and RQ3, we trained an encoder-decoder Transformer model from scratch and finetuned an NLLB-600M model as described in Sec. 4.1. We experimented with monolingual and cross-lingual training, which we describe below.",
+        "bbox": [112, 561, 489, 642],
+        "page_idx": 5
+    },
+    {
+        "type": "text",
+        "text": "4.3.1 Monolingual Effects of Normalization",
+        "text_level": 1,
+        "bbox": [112, 650, 470, 665],
+        "page_idx": 5
+    },
+    {
+        "type": "text",
+        "text": "For RQ2, we experimented by training a Transformer model from scratch and finetuning NLLB-600M for each of the three language pairs: Eng-Amh, Eng-Tir, and Eng-Ge'ez. The goal for this experiment was to understand the impact of normalizing homophone characters in the target language during training on the MT performance. As described in Sec. 3, we use the No-Norm setting as a baseline for all languages, apply H-Only normalization to Amharic and Tigrinya data, and apply HSL normalization to Amharic data only. For Ge'ez, we train without any normalization (Sec. 2).",
+        "bbox": [112, 669, 489, 862],
+        "page_idx": 5
+    },
+    {
+        "type": "text",
+        "text": "Results As can be seen in Table 4, for both Amharic and Tigrinya, when normalization is applied during training, the model with No-Norm",
+        "bbox": [112, 871, 489, 919],
+        "page_idx": 5
+    },
+    {
+        "type": "text",
+        "text": "has a better BLEU score than the models trained on normalized data for the Transformer models. For Tigrinya, we observe that the Transformer model has comparable performance with and without normalization. For NLLB-600M, H-Only has a marginal improvement over the No-Norm setting for Amharic (+0.62 BLEU and +0.61 ChrF). The HSL setting performs worst with NLLB for Amharic. For Tigrinya, we observe that No-Norm has better performance than the H-Only setting for the fine-tuned NLLB model (+1.3 BLEU and +1.33 ChrF).",
+        "bbox": [505, 288, 884, 480],
+        "page_idx": 5
+    },
+    {
+        "type": "text",
+        "text": "Qualitatively, we observed that models trained with HSL normalization mostly replace some words with their synonyms and simplify the translation when compared to the H-Only and No-Norm settings. Regarding the quality of the translation, NLLB fine-tuned in the No-Norm setting provides better translations, preserving the homophone characters in the prediction; this also aligns with the automatic results presented in Table 4. In the H-Only setting, we noticed that in addition to replacing words with normalized characters, most of the translations were incomplete even though we set the same maximum sequence length for all models.",
+        "bbox": [507, 482, 885, 705],
+        "page_idx": 5
+    },
+    {
+        "type": "text",
+        "text": "4.3.2 Cross-Lingual Transfer Effects of Normalization",
+        "text_level": 1,
+        "bbox": [507, 714, 835, 745],
+        "page_idx": 5
+    },
+    {
+        "type": "text",
+        "text": "For RQ3, we experimented by taking the models we trained for Amharic as described in Sec. 4.3.1 and further training with Eng-Tir and Eng-Ge'ez data. In our cross-lingual experiment, the Tigrinya and Ge'ez datasets are taken as is, without any normalization.",
+        "bbox": [507, 750, 884, 845],
+        "page_idx": 5
+    },
+    {
+        "type": "text",
+        "text": "Results As Table 5 shows, for Tigrinya, we find that the Amharic model trained without normalizing the homophone characters, i.e., the No-Norm setting, is a better transfer model",
+        "bbox": [507, 854, 884, 917],
+        "page_idx": 5
+    },
+    {
+        "type": "page_number",
+        "text": "10326",
+        "bbox": [477, 927, 524, 940],
+        "page_idx": 5
+    },
+    {
+        "type": "table",
+        "img_path": "images/cea29b1398c491a2dc42bf19041e7eaeaa0b3bb6fefbfd1d8257912aaab13708.jpg",
+        "table_caption": ["Tigrinya"],
+        "table_footnote": [],
+        "table_body": "<table><tr><td></td><td colspan=\"2\">No-Transfer</td><td colspan=\"2\">No-Norm</td><td colspan=\"2\">H-only</td><td colspan=\"2\">HSL</td></tr><tr><td>Model</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td></tr><tr><td>Transformer</td><td>10.87</td><td>25.51</td><td>12.16</td><td>27.61</td><td>10.67</td><td>26.24</td><td>11.23</td><td>26.44</td></tr><tr><td>NLLB-600M</td><td>22.71</td><td>43.11</td><td>21.55</td><td>42.03</td><td>21.63</td><td>42.13</td><td>21.68</td><td>42.14</td></tr></table>",
+        "bbox": [218, 96, 781, 166],
+        "page_idx": 6
+    },
+    {
+        "type": "table",
+        "img_path": "images/bcd76563f1c165203a45210a0ac9a120c48ef089b017853548ade0bce3541728.jpg",
+        "table_caption": ["Ge'ez"],
+        "table_footnote": [],
+        "table_body": "<table><tr><td></td><td colspan=\"2\">No-Transfer</td><td colspan=\"2\">No-Norm</td><td colspan=\"2\">H-only</td><td colspan=\"2\">HSL</td></tr><tr><td>Model</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td></tr><tr><td>Transformer</td><td>2.46</td><td>18.72</td><td>3.67</td><td>20.80</td><td>3.56</td><td>20.80</td><td>1.46</td><td>12.48</td></tr><tr><td>NLLB-600M</td><td>3.36</td><td>23.48</td><td>5.22</td><td>26.54</td><td>6.33</td><td>28.38</td><td>6.31</td><td>28.52</td></tr></table>",
+        "bbox": [218, 181, 781, 249],
+        "page_idx": 6
+    },
+    {
+        "type": "text",
+        "text": "Table 5: Performance of MT models in cross-lingual transfer experiments, where No-Norm, H-Only, and HSL refer to models that were initialized with English-Amharic models trained in each of the three settings. The best performance in a row is indicated in bold font.",
+        "bbox": [112, 256, 880, 300],
+        "page_idx": 6
+    },
+    {
+        "type": "text",
+        "text": "as compared to the H-Only and HSL settings for the Transformer models. When finetuning NLLB-200-Distilled-600M, we find that the model directly finetuned from NLLB-200-Distilled-600M performed better than the ones first finetuned on Amharic and then on Tigrinya. With the transfer models for NLLB, we observe comparable performance regardless of the normalization setting with which the Amharic model was finetuned. For Ge'ez, we see that using the Amharic models trained in the No-Norm and H-Only settings provides better BLEU and ChrF scores as compared to using the model trained in the HSL setting. Further, we observe that using a Transformer model that was trained with the HSL setting for Amharic has the worst performance when used as a transfer model for Ge'ez. Qualitatively, we observe that in both Ge'ez and Tigrinya translations, the output includes code-switching with Amharic words, changes pronouns, changes gender, and wrongly negates words.",
+        "bbox": [115, 326, 489, 663],
+        "page_idx": 6
+    },
+    {
+        "type": "text",
+        "text": "We find that the homophone characters that were normalized in the Amharic transfer model were correctly used in the respective target languages (Tigrinya and Ge'ez). However, the models fine-tuned from transfer models trained on normalized Amharic data tended to repeat characters or words until they reached the maximum sequence length, instead of translating the source sentence. This is demonstrated in Figure 1, where the translations with the models trained on Amharic transfer models with the HSL setting have fewer unique words in both Ge'ez and Tigrinya, especially for the Transformer model. In Table 6, we provide qualitative examples.",
+        "bbox": [112, 677, 489, 919],
+        "page_idx": 6
+    },
+    {
+        "type": "text",
+        "text": "Looking at the character count of the translations in the different transfer settings, we find that every model was missing some characters found in the reference dataset; for instance, for Ge'ez, the model trained without transfer learning was missing 33 characters that were in the reference dataset, while the model trained with the HSL-normalized Amharic transfer model was missing 34 characters from the reference. However, training with the Amharic transfer models added new characters to the predictions that do not exist in the language. For instance, $\\langle \\vec{\\mathfrak{h}} \\rangle$ , $\\langle \\vec{\\mathfrak{n}} \\rangle$ , $\\langle \\vec{\\mathfrak{r}} \\rangle$ were added in the Ge'ez predictions although none of the three characters exist in the alphabet of the language.",
+        "bbox": [507, 326, 884, 568],
+        "page_idx": 6
+    },
+    {
+        "type": "text",
+        "text": "4.4 Post-Inference Normalization",
+        "text_level": 1,
+        "bbox": [507, 580, 786, 594],
+        "page_idx": 6
+    },
+    {
+        "type": "text",
+        "text": "As discussed in Sec. 2, normalization of homophones has provided automatic score increases in prior work. However, normalizing the characters before training a model results in models that cannot process different forms of spelling. Furthermore, as we have seen in Sec. 4.3.2, normalizing homophone characters has an impact on transfer learning for languages that use the same writing script. To answer our RQ4, we took the models we trained in the No-Norm setting and applied normalization to the reference and predictions after inference.",
+        "bbox": [507, 602, 884, 793],
+        "page_idx": 6
+    },
+    {
+        "type": "text",
+        "text": "Results For the Transformer model we trained, post-normalization improves BLEU and $\\mathrm{ChrF}$ scores by a small margin (0.24 and 0.29 increase, respectively). For the NLLB finetuned model, we find that applying HSL normalization post-inference boosts the BLEU score by 0.69 and the $\\mathrm{ChrF}$ by 0.63. In the three normalization settings,",
+        "bbox": [507, 806, 884, 919],
+        "page_idx": 6
+    },
+    {
+        "type": "page_number",
+        "text": "10327",
+        "bbox": [477, 927, 524, 940],
+        "page_idx": 6
+    },
+    {
+        "type": "image",
+        "img_path": "images/eaf18da2427828f286d3643219db3fbb245197a0f4f233aa6a3d33811ca218f7.jpg",
+        "image_caption": ["(a)"],
+        "image_footnote": [],
+        "bbox": [126, 99, 492, 256],
+        "page_idx": 7
+    },
+    {
+        "type": "image",
+        "img_path": "images/b6b43c63dc7173371bf9ac7b48d62f24ca32135730d8152d7053ee2316ad507c.jpg",
+        "image_caption": ["(b)", "Figure 1: Comparison of unique word count with different transfer settings for English-Tigrinya and English-Ge'ez translation."],
+        "image_footnote": [],
+        "bbox": [505, 99, 873, 256],
+        "page_idx": 7
+    },
+    {
+        "type": "text",
+        "text": "we find that the NLLB model outperforms the Transformer model on our evaluation dataset.",
+        "bbox": [112, 343, 485, 373],
+        "page_idx": 7
+    },
+    {
+        "type": "text",
+        "text": "We compare how effective post-inference normalization is by including the model from Belay et al. (2022); we take the model trained without normalization and apply homophone normalization after inference<sup>7</sup>. Belay et al. (2022) found a 3.09 BLEU score increase by finetuning an M2M (Fan et al., 2021) model with HSL normalized data as compared to a model trained with No-Norm data. We cannot directly compare our results with the reported BLEU scores as the test sets are different. However, on our evaluation dataset, we find that the model trained by Belay et al. (2022) without normalization achieves a 1.03 BLEU score increase (Table 4) with our post-inference scheme.",
+        "bbox": [112, 376, 487, 617],
+        "page_idx": 7
+    },
+    {
+        "type": "text",
+        "text": "5 Discussion",
+        "text_level": 1,
+        "bbox": [112, 632, 240, 646],
+        "page_idx": 7
+    },
+    {
+        "type": "text",
+        "text": "Our work investigates the impact of homophone normalization for languages that use the Ge'ez script on Machine Translation performance. We provide background on the characteristics of the languages that use the Ge'ez script and detail how prior work used homophone normalization (Sec. 2). Through a series of experiments (Sec. 4), we demonstrate that homophone normalization does not provide a significant performance gain across all languages, and hurts performance in transfer learning (Sec. 4.3). As we have discussed in Sec. 2, homophone normalization has been used as a pre-processing step in the NLP literature for",
+        "bbox": [112, 659, 489, 869],
+        "page_idx": 7
+    },
+    {
+        "type": "text",
+        "text": "Amharic, setting an implicit standard on what trained models can handle. In this section, we connect this argument to the broader literature on technology-facilitated language change.",
+        "bbox": [507, 343, 880, 407],
+        "page_idx": 7
+    },
+    {
+        "type": "text",
+        "text": "Evolutions in language that are the result of technological constraints make their way into daily life (van Dijk et al., 2016). This is particularly concerning as MT models are used in data creation and augmentation for low-resourced languages (e.g., Singh et al., 2025). Machine-translated datasets are also used to train other NLP models (e.g., Joshi et al., 2025), perpetuating the normalization effect to tasks beyond translation.",
+        "bbox": [507, 411, 882, 556],
+        "page_idx": 7
+    },
+    {
+        "type": "text",
+        "text": "As we have seen in Sec. 4, while normalization has resulted in score improvements in prior work, it affects the performance of models in transfer learning. Further, the score improvements are not consistent across models, languages, and normalization settings. As a result, we need to pause and reflect on using such schemes in NLP literature for languages that use the Ge'ez script. Multiple languages use the same writing script; hence, it is important to consider how the standards we set for one language affect other languages. There might also be dialect differences in how words are spelled, which will not be accounted for when we normalize homophone characters without such considerations.",
+        "bbox": [507, 561, 882, 800],
+        "page_idx": 7
+    },
+    {
+        "type": "text",
+        "text": "As the number of low-resourced languages represented in NLP research increases, it is imperative to consider how pre-processing steps applied to these languages alter the overall landscape of language use. Design decisions could lead to constraints on how and if people can use their language (Wenzel and Kaufman, 2024). In the con",
+        "bbox": [507, 806, 884, 917],
+        "page_idx": 7
+    },
+    {
+        "type": "page_footnote",
+        "text": "<sup>7</sup>We could not compare with the models trained with normalization from Belay et al. (2022) as they are not publicly available.",
+        "bbox": [112, 879, 487, 917],
+        "page_idx": 7
+    },
+    {
+        "type": "page_number",
+        "text": "10328",
+        "bbox": [477, 927, 524, 940],
+        "page_idx": 7
+    },
+    {
+        "type": "text",
+        "text": "text of our study, training models on normalized data results in models that cannot handle alternative spellings. For instance, Belay et al. (2021) found that normalization helped improve performance in information retrieval. However, the performance improvement would require users to conform to the normalized form of spelling. This impact is not limited to homophone normalization; Adebara and Abdul-Mageed (2022) argue that normalizing tone diacritics, which are essential for lexical disambiguation, affects the usability of retrieval systems for African language speakers.",
+        "bbox": [112, 84, 492, 277],
+        "page_idx": 8
+    },
+    {
+        "type": "text",
+        "text": "Further, relying solely on automatic score improvements obfuscates the impact of our design decisions beyond their intended effect. Instead, our solutions should (1) focus on changing the methods (e.g., the metrics used for evaluation), (2) be explicit about the context in which the improvements are achieved, and (3) explore alternatives that do not impact the model's ability to handle different versions of a language. As we propose in Sec. 4.4, we can use post-inference interventions to increase automatic scores without altering the training data. Since there is no agreed-upon standard spelling and people spell words differently, post-inference normalization lets researchers see to what degree the performance they are getting is a result of spelling differences due to homophones vs. actual issues with the translation model, without limiting the inputs and outputs of their models. While the performance improvement is not as significant as training on normalized data, it is a tradeoff for having a model that can account for different spellings, dialects, and transfer capabilities.",
+        "bbox": [115, 279, 490, 650],
+        "page_idx": 8
+    },
+    {
+        "type": "text",
+        "text": "6 Conclusion",
+        "text_level": 1,
+        "bbox": [112, 665, 247, 680],
+        "page_idx": 8
+    },
|
| 1366 |
+
{
|
| 1367 |
+
"type": "text",
|
| 1368 |
+
"text": "We investigated the impact of homophone normalization on languages that use the Ge'ez script. We find that normalization of homophones in training data leads to poor transfer learning performance for related languages. Furthermore, we find that normalization does not always lead to performance improvement across all languages. We argue against implicit standardization via preprocessing tools and offer an alternative approach that preserves features of the languages during training. We use our work as a case study to call for a more thorough examination of preprocessing steps, particularly for low-resource languages.",
"bbox": [112, 694, 490, 920],
"page_idx": 8
},
{
"type": "text",
"text": "Limitations",
"text_level": 1,
"bbox": [509, 83, 615, 98],
"page_idx": 8
},
{
"type": "text",
"text": "While our experiments show an increase in the BLEU score with post-inference homophone normalization, we did not conduct a full-scale human evaluation of translation quality; instead, we manually inspected 50 outputs across all normalization settings. Future work should include large-scale human evaluations. Our conclusion is mainly based on BLEU and $\\mathrm{ChrF}$ scores, while they remain standard evaluation tools for MT, they might not show the changes in prediction when we use different normalization settings.",
"bbox": [507, 109, 885, 287],
"page_idx": 8
},
{
"type": "text",
"text": "Acknowledgments",
"text_level": 1,
"bbox": [509, 298, 672, 313],
"page_idx": 8
},
{
"type": "text",
"text": "We thank the reviewers for their valuable feedback. We also extend our gratitude to Nina Markl for providing feedback on our manuscript.",
"bbox": [507, 323, 885, 372],
"page_idx": 8
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [510, 398, 608, 413],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Solomon Teferra Abate, Michael Melese, Martha Yifiru Tachbelie, Million Meshesha, Solomon Atinafu, Wondwossen Mulugeta, Yaregal Assibie, Hafte Abera, Binyam Ephrem, Tewodros Abebe, Wondimagegnhue Tsegaye, Amanuel Lemma, Tsegaye Andargie, and Seifedin Shifaw. 2018. Parallel Corpora for bi-lingual English-Ethiopian Languages Statistical Machine Translation.",
"Solomon Teferra Abate, Martha Yifiru Tachbelie, and Tanja Schultz. 2020. Multilingual Acoustic and Language Modeling for Ethio-Semitic Languages. In *Interspeech* 2020, pages 1047-1051. ISCA.",
"Ife Adebara and Muhammad Abdul-Mageed. 2022. Towards Afrocentric NLP for African languages: Where we are and where we can go. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3814-3841, Dublin, Ireland. Association for Computational Linguistics.",
"Henok Ademtew and Mikiyas Birbo. 2024. AGE: Amharic, Geez and English Parallel Dataset. In Proceedings of the Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024), pages 139-145, Bangkok, Thailand. Association for Computational Linguistics.",
"Gabe Adugna. Research: Language Learning - Amharic: Home.",
"Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Jungo Kasai, David Mortensen, Noah Smith, and Yulia Tsvetkov. 2023. Do All Languages Cost the Same? Tokenization in the Era of Commercial Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing,"
],
"bbox": [509, 420, 885, 919],
"page_idx": 8
},
{
"type": "page_number",
"text": "10329",
"bbox": [477, 927, 524, 940],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"pages 9904-9923, Singapore. Association for Computational Linguistics.",
"Amsalu Aklilu. 2010. *Problems of Writing Homophones without care and its Solution* [Title translated from Amharic].",
"Shaik Johny Basha, Duggineni Veeraiah, Boddu Venkat Charan, Wiltrud Sahithi Joyce Yeddu, and Devalla Ganesh Babu. 2023. Detection and Comparative Analysis of Handwritten Words of Amharic Language to English using CNN-Based Frameworks. In 2023 International Conference on Inventive Computation Technologies (ICICT), pages 422-427. ISSN: 2767-7788.",
"Tadesse Destaw Belay, Abinew Ali Ayele, Getie Gelaye, Seid Muhie Yimam, and Chris Biemann. 2021. Impacts of Homophone Normalization on Semantic Models for Amharic. In 2021 International Conference on Information and Communication Technology for Development for Africa (ICT4DA), pages 101-106.",
"Tadesse Destaw Belay, Atnafu Lambebo Tonja, Olga Kolesnikova, Seid Muhie Yimam, Abinew Ali Ayele, Silesh Bogale Haile, Grigori Sidorov, and Alexander Gelbukh. 2022. The Effect of Normalization for Bidirectional Amharic-English Neural Machine Translation. arXiv preprint. ArXiv:2210.15224 [cs].",
"Yohannes Biadgligne and Kamel Smaili. 2021. Parallel Corpora Preparation for English-Amharic Machine Translation. In Advances in Computational Intelligence, pages 443-455. Springer, Cham. ISSN: 1611-3349.",
"Yohannes Biadgligne and Kamel Smaili. 2022. Offline Corpus Augmentation for English-Amharic Machine Translation. In 2022 5th International Conference on Information and Computer Technologies (ICICT), pages 128-135.",
"Sidsel Boldsen and Patrizia Paggio. 2022. Letters from the past: Modeling historical sound change through diachronic character embeddings. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6713-6722, Dublin, Ireland. Association for Computational Linguistics.",
"Adane Kasie Chekole, Tesfa Tegegne Asfaw, Tesfahun Nurrie Mengestie, Belayneh Teshome Kebie, Mengistu Kinfe Negia, and Yohannes Abinet Worku. 2024. Effect of Parallel Data Processing Model on Bi-Directional English-Khimtagne Machine Translation Using Deep Learning. In 2024 International Conference on Information and Communication Technology for Development for Africa (ICT4DA), pages 189-193.",
"Fitehalew Ashagrie Demilew. 2019. Ancient Geez Script Recognition Using Deep Convolutional Neural Network. Software Engineering."
],
"bbox": [115, 85, 489, 917],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Chris Emezue, Hellina Nigatu, Cynthia Thinwa, Helper Zhou, Shamsuddeen Muhammad, Lerato Louis, Idris Abdulmumin, Samuel Oyerinde, Benjamin Ajibade, Olanrewaju Samuel, Oviawe Joshua, Emeka Onwuegbuzia, Handel Emezue, Ifeoluwatayo A. Ige, Atnafu Lambebo Tonja, Chiamaka Chukwuneke, Bonaventure F. P. Dossou, Naome A. Etori, Mbonu Chinedu Emmanuel, Oreen Yousuf, Kaosarat Aina, and Davis David. 2023. The African Stopwords project: curating stopwords for African languages. arXiv preprint. ArXiv:2304.12155 [cs].",
"Abebawu Eshetu. 2022. Amharic-Simple-TextPreprocessing-Usin-Python. Original-date: 2019-08-05T09:30:04Z.",
"Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2021. Beyond english-centric multilingual machine translation. J. Mach. Learn. Res., 22(1):107:4839-107:4886.",
"Negasi Haile, Nuredin Ali, and Asmelash Teka Hadgu. 2023. Error Analysis of Tigrinya English Machine Translation Systems.",
"Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The State and Fate of Linguistic Diversity and Inclusion in the NLP World. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computational Linguistics.",
"Raviraj Joshi, Kanishk Singla, Anusha Kamath, Raunak Kalani, Rakesh Paul, Utkarsh Vaidya, Sanjay Singh Chauhan, Niranjan Wartikar, and Eileen Long. 2025. Adapting Multilingual LLMs to Low-Resource Languages using Continued Pretraining and Synthetic Corpus. arXiv preprint. ArXiv:2410.14815 [cs].",
"Shreya Khare, Ashish Mittal, Anuj Diwan, Sunita Sarawagi, Preethi Jyothi, and Samarth Bharadwaj. 2021. Low Resource ASR: The Surprising Effectiveness of High Resource Transliteration. In *Interspeech* 2021, pages 1529-1533. ISCA.",
"Bushra Kidanemariam. 2019. Amharic-NLP-Tools-in-JAVA.",
"Diederik P. Kingma and Jimmy Ba. 2017. Adam: A Method for Stochastic Optimization. arXiv preprint. ArXiv:1412.6980 [cs].",
"Surafel M. Lakew, Matteo Negri, and Marco Turchi. 2020. Low Resource Neural Machine Translation: A Benchmark for Five African Languages. arXiv preprint. ArXiv:2003.14402 [cs]."
],
"bbox": [512, 85, 882, 917],
"page_idx": 9
},
{
"type": "page_number",
"text": "10330",
"bbox": [477, 928, 524, 940],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Merriam-Webster. Definition of HOMOPHONE.",
"Abraham Negash. 2017. The Origin and Development of Tigrinya Language Publications (1886 ...",
"Daniel Mekuriaw and Arman Cohan. 2024. Benchmark Dataset and Parameter-Efficient Cross-Lingual Transfer Learning for Amharic Text Summarization. Technical report.",
"Hellina Hailu Nigatu, Atnafu Lambebo Tonja, Benjamin Rosman, Thamar Solorio, and Monojit Choudhury. 2024. The Zeno's Paradox of Low-Resource Languages. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 17753-17774, Miami, Florida, USA. Association for Computational Linguistics.",
"Rubungo Andre Niyongabo, Qu Hong, Julia Kreutzer, and Li Huang. 2020. KINNEWS and KIRNEWS: Benchmarking cross-lingual text classification for Kinyarwanda and Kirundi. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5507-5521, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
"Team NLLB, Marta R. Costa-jussà, James Cross, Onur Celebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loïc Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No Language Left Behind: Scaling Human-Centered Machine Translation. arXiv preprint. ArXiv:2207.04672 [cs].",
"Jane Chinelo Obasi. 2018. Structural Irregularities within the English Language: Implications for Teaching and Learning in Second Language Situations.",
"Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics."
],
"bbox": [115, 85, 489, 917],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.",
"Shivalika Singh, Angelika Romanou, Clémentine Fourrier, David I. Adelani, Jian Gang Ngui, Daniel Vila-Suero, Peerat Limkonchotiwat, Kelly Marchisio, Wei Qi Leong, Yosephine Susanto, Raymond Ng, Shayne Longpre, Wei-Yin Ko, Sebastian Ruder, Madeline Smith, Antoine Bosselut, Alice Oh, André F. T. Martins, Leshem Choshen, Daphne Ippolito, Enzo Ferrante, Marzieh Fadaee, Beyza Ermis, and Sara Hooker. 2025. Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation. arXiv preprint. ArXiv:2412.03304 [cs].",
"Martha Yifiru Tachbelie, Solomon Teferra Abate, and Laurent Besacier. 2014. Using different acoustic, lexical and language modeling units for ASR of an under-resourced language Amharic. Speech Communication, 56:181-194.",
"Chantal N. van Dijk, Merel van Witteloostuijn, Nada Vasić, Sergey Avrutin, and Elma Blom. 2016. The Influence of Texting Language on Grammar and Executive Functions in Primary School Children. PLoS ONE, 11(3):e0152409.",
"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need.",
"Kimi Wenzel and Geoff Kaufman. 2024. Designing for Harm Reduction: Communication Repair for Multicultural Users' Voice Interactions. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pages 1-17, Honolulu HI USA. ACM.",
"Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
"Seid Muhie Yimam, Abinew Ali Ayele, Gopalakrishnan Venkatesh, Ibrahim Gashaw, and Chris Biemann. 2021. Introducing Various Semantic Models for Amharic: Experimentation and Evaluation with Multiple Tasks and Datasets. Future Internet, 13(11):275. Number: 11 Publisher: Multidisciplinary Digital Publishing Institute."
],
"bbox": [510, 85, 882, 901],
"page_idx": 10
},
{
"type": "page_number",
"text": "10331",
"bbox": [477, 928, 522, 940],
"page_idx": 10
},
{
"type": "table",
"img_path": "images/df5cf84e0619638fd61d2b411c833f7423642aad2fbd99b37317b5c32928b308.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Target Lang.</td><td>Source</td><td>Reference</td><td>No-Norm</td><td>H-Only</td><td>HSL</td></tr><tr><td>Tir</td><td>The discourse will also answer such questions as these: How often should Christians commemorate this event?</td><td>[Ge'ez-script cell contents not recoverable from OCR]</td><td></td><td></td><td></td></tr></table>",
"bbox": [149, 80, 848, 437],
"page_idx": 11
},
{
"type": "text",
"text": "Table 6: Qualitative examples with transfer learning experiments where the transfer Amh-Eng model is trained in No-Norm, H-Only and HSL settings",
"bbox": [112, 447, 882, 476],
"page_idx": 11
},
{
"type": "text",
"text": "A Qualitative Examples",
"text_level": 1,
"bbox": [114, 500, 339, 517],
"page_idx": 11
},
{
"type": "text",
"text": "In Table 6, we provide qualitative examples for Tigrinya and Ge'ez transfer learning experiments. As the table shows, Amharic models trained with normalization repeat words until they reach maximum sequence length or end of sentence token $(\\because)$ .",
"bbox": [112, 526, 489, 606],
"page_idx": 11
},
{
"type": "page_number",
"text": "10332",
"bbox": [477, 927, 524, 940],
"page_idx": 11
}
]
2025/A Case Against Implicit Standards_ Homophone Normalization in Machine Translation for Languages that use the Ge’ez Script./a570cebb-8359-4bf2-9617-79a7ee147972_model.json
ADDED
The diff for this file is too large to render. See raw diff.
2025/A Case Against Implicit Standards_ Homophone Normalization in Machine Translation for Languages that use the Ge’ez Script./a570cebb-8359-4bf2-9617-79a7ee147972_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:297b136169201f2c03c114323dda5a07f6622a3c6359eb5ad517980979d90feb
+ size 297449
2025/A Case Against Implicit Standards_ Homophone Normalization in Machine Translation for Languages that use the Ge’ez Script./full.md
ADDED
@@ -0,0 +1,284 @@
# A Case Against Implicit Standards: Homophone Normalization in Machine Translation for Languages that Use the Ge'ez Script
# Hellina Hailu Nigatu<sup>1</sup>, Atnafu Lambebo Tonja<sup>2</sup>, Henok Biadglign Ademtew<sup>3</sup>, Hizkel Mitiku Alemayehu<sup>4</sup>, Negasi Haile Abadi<sup>5</sup>, Tadesse Destaw Belay<sup>6</sup>, Seid Muhie Yimam<sup>7</sup>
$^{1}$ UC Berkeley, $^{2}$ MBZUAI, $^{3}$ Vella AI, $^{4}$ Paderborn University, $^{5}$ Lesan AI, $^{6}$ Instituto Politécnico Nacional, $^{7}$ University of Hamburg
Correspondence: hellina_nigatu@berkeley.edu
# Abstract
Homophone<sup>1</sup> normalization—where characters that have the same sound in a writing script are mapped to one character—is a pre-processing step applied in Amharic Natural Language Processing (NLP) literature. While this may improve performance reported by automatic metrics, it also results in models that are unable to effectively process different forms of writing in a single language. Further, there might be impacts in transfer learning, where models trained on normalized data do not generalize well to other languages. In this paper, we experiment with monolingual training and crosslingual transfer to understand the impacts of normalization on languages that use the Ge'ez script. We then propose a post-inference intervention in which normalization is applied to model predictions instead of training data. With our simple scheme of post-inference normalization, we show that we can achieve an increase in BLEU score of up to 1.03 while preserving language features in training. Our work contributes to the broader discussion on technology-facilitated language change and calls for more language-aware interventions.
# 1 Introduction
The majority of the world's languages are underrepresented in natural language processing (NLP) research (Joshi et al., 2020). Collectively, these languages have been referred to as 'low-resource,' owing to the various resources that are not available for them (Nigatu et al., 2024). Among the many resources lacking for low-resourced languages are pre-processing tools (Niyongabo et al., 2020). From tokenization methods to basic data cleaning tools, many of the widely used pre-processing schemes do not include, or are not effective for, low-resourced languages (Ahia et al., 2023; Emezue et al., 2023).
Pre-processing steps, ranging from removing punctuation marks to tokenizing text, are essential in determining the efficacy of downstream models. For instance, languages that use different writing scripts have been transliterated to a single script to facilitate cross-lingual transfer (Khare et al., 2021). Prior work has explored morpheme-based tokenization for morphologically rich languages as an alternative to word-level tokenization to enhance performance (Tachbelie et al., 2014). Within phonetic languages like Amharic, a common pre-processing intervention has been homophone normalization, i.e., mapping characters with similar sounds to one character (Biadgligne and Smaili, 2021; Abate et al., 2018).
Homophone normalization has mainly been applied to improve automatic metric scores. Current NLP evaluation schemes, particularly automatic metrics like BLEU (Papineni et al., 2002), which require an exact match between n-grams, do not handle homophone characters. As an example, let us take the homophones ⟨ዐ⟩ and ⟨አ⟩, which both represent the sound /ʔä/ in Amharic. If the Amharic word for "eye" is written as 'ዓይን' in the reference but the model prediction outputs 'ኣይን', evaluation with BLEU score would not count it as a match. However, for an Amharic speaker, those two words have the same pronunciation and meaning. Homophone normalization averts this problem by mapping all homophone characters into a single character, thereby boosting automatic metric scores (Belay et al., 2022). Homophone normalization also reduces the vocabulary size of a dataset, which may be desirable for some applications (Abate et al., 2020). While this indicates a potential benefit in improving performance when using automatic metrics for evaluation, it may lead to downstream issues for language speakers.
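The character mapping described above can be sketched in a few lines. The table below covers only a handful of well-known Amharic homophone sets (and two of their vowel-order forms) and is illustrative, not the full normalization table used in prior work:

```python
# Minimal sketch of homophone normalization for Amharic text.
# The mapping is a small illustrative subset; real normalizers map
# every vowel order of every homophone family.
HOMOPHONE_MAP = str.maketrans({
    "ዐ": "አ", "ዓ": "ኣ",  # /ʔä/, /ʔa/
    "ሐ": "ሀ", "ሓ": "ሃ",  # /hä/, /ha/
    "ሠ": "ሰ", "ሣ": "ሳ",  # /sä/, /sa/
    "ፀ": "ጸ", "ፃ": "ጻ",  # ejective /tsʼä/, /tsʼa/
})

def normalize(text: str) -> str:
    """Map homophone characters to one canonical character."""
    return text.translate(HOMOPHONE_MAP)

# Two spellings of the Amharic word for "eye": an exact-match metric
# treats them as different tokens unless both sides are normalized.
ref, hyp = "ዓይን", "ኣይን"
assert ref != hyp
assert normalize(ref) == normalize(hyp)
```

Applying this mapping to the training data is the standard scheme the paper critiques; the same function can instead be applied to model outputs at evaluation time (Sec. 4.4).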
In this paper, we argue that the seemingly innocuous act of homophone normalization for Amharic NLP sets and perpetuates an implicit standard for Ge'ez script languages. Currently, homophone normalization is actively being applied to Amharic, one of the many languages that use the Ge'ez script. However, the characters that are normalized in Amharic have distinct sounds in languages such as Tigrinya and Ge'ez. Hence, the implicit standard set by this pre-processing step may have a downstream impact on cross-lingual transfer for the other languages that use the Ge'ez script. Additionally, models trained on normalized datasets will be unable to process alternative word spellings. However, language is not monolithic; normalization may limit how speakers of different dialects and variants of a single language can interact with language technologies. Using Machine Translation (MT) as an NLP task and Amharic, Tigrinya, and Ge'ez as languages of focus, we pose the following research questions:
- RQ1: How do existing MT models handle words with homophone characters in languages that use the Ge'ez script?
- RQ2: What is the impact of applying different normalization schemes to training data on the performance of MT systems?
- RQ3: What is the impact of homophone normalization on transfer learning in MT for related languages?
- RQ4: How does applying normalization post-translation compare to applying normalization to the training data?
Multilingual NLP research is often driven by a goal of generalization, proposing ways to make a single model work well for multiple languages (e.g., NLLB et al., 2022). While there are demonstrated benefits to this approach, we use our work as a case study to question what we lose through implicit standards in language processing. We find that homophone normalization negatively affects cross-lingual transfer and that applying normalization post-translation boosts automatic scores without compromising language characteristics (Sec. 4). Our work highlights the importance of investigating downstream impacts of preprocessing steps, particularly for low-resourced languages.
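The post-translation intervention amounts to merging homophones only when scoring, leaving the training data and model outputs untouched. A sketch, using a hypothetical toy character mapping and a simple unigram-precision stand-in for BLEU (both are simplifications for illustration):

```python
# Sketch of post-inference normalization: the model trains on and emits
# unnormalized text; homophones are merged only at evaluation time.
TOY_MAP = str.maketrans({"ዐ": "አ", "ዓ": "ኣ", "ሐ": "ሀ"})

def normalize(text: str) -> str:
    return text.translate(TOY_MAP)

def unigram_precision(hyp: str, ref: str) -> float:
    """Toy stand-in for an exact-match metric like BLEU."""
    hyp_toks, ref_toks = hyp.split(), ref.split()
    hits = sum(1 for t in hyp_toks if t in ref_toks)
    return hits / max(len(hyp_toks), 1)

ref = "ዓይን አየ"  # reference spelling
hyp = "ኣይን አየ"  # same words, alternative homophone spelling
raw = unigram_precision(hyp, ref)                         # spelling counted as an error
post = unigram_precision(normalize(hyp), normalize(ref))  # spelling difference forgiven
assert raw < post
```

The gap between the raw and the post-normalization score isolates how much of the measured error is due to homophone spelling rather than actual translation failures.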
# 2 Background and Related Work
In this section, we provide background on the languages of study and the writing script. We also give background on normalization schemes used in prior work to handle characters with the same sound.
# 2.1 Languages of Study
The Ge'ez script is an abugida writing system in which each character represents a consonant and a vowel<sup>2</sup>. Vowels are indicated by modifying the base character. There are 7 vowels in the Ge'ez writing script; hence, each base character has at least 7 variations. For instance, the base character ⟨ለ⟩ represents the sound /lə/ and is modified to ⟨ሉ⟩ /lu/, ⟨ሊ⟩ /li/, and so on. Additionally, there are characters that represent labiovelars, such as ⟨ኳ⟩ /kʷa/. The Ge'ez script is used to write Afro-Semitic languages of Ethiopia and Eritrea, including our languages of focus in this paper: Amharic, Tigrinya, and Ge'ez.
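This vowel-order structure is also reflected in Unicode: the Ethiopic block arranges the vowel orders of each base character at consecutive codepoints. The sketch below illustrates this for the /l/ series; it is an observation about the encoding, not part of the paper's pipeline.

```python
# In the Ethiopic Unicode block, the seven vowel orders of a base
# character occupy consecutive codepoints. For the /l/ series,
# U+1208 is ለ /lə/, followed by ሉ /lu/, ሊ /li/, and so on.
BASE_LA = 0x1208

la_series = [chr(BASE_LA + i) for i in range(7)]
print(la_series)  # ['ለ', 'ሉ', 'ሊ', 'ላ', 'ሌ', 'ል', 'ሎ']
```

This contiguous layout is why character-level operations such as homophone mapping can be expressed compactly over codepoints.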
Amharic is an Afro-Semitic language spoken by an estimated 57.5 million people worldwide (Basha et al., 2023). It is primarily spoken in Ethiopia and is one of the federal working languages of the country. The Amharic alphabet has 33 base characters (Adugna).
Tigrinya is an Afro-Semitic language spoken by an estimated 10 million people worldwide (Haile et al., 2023). Tigrinya is one of the federal working languages of Ethiopia and is one of the governmental and national languages of Eritrea. The Tigrinya alphabet has 32 base characters (Negash, 2017).
Ge'ez is an Afro-Semitic language that is currently spoken only as a second language<sup>3</sup>. It is primarily used as a liturgical language within Ethiopian and Eritrean religious institutions. The Ge'ez alphabet has 26 base characters (Demilew, 2019).
# 2.2 Homophones in the Ge'ez Script
As languages evolve, phonological change occurs: phonemes may split, merge, or emerge (Boldsen and Paggio, 2022). Since written language evolves at a much slower pace than spoken language, phonetic changes are usually not reflected in the written forms of a language (Obasi, 2018). Due to merged phonemes that are still represented by distinct characters which historically had distinct sounds, the Amharic alphabet has multiple characters with the same sound (Aklilu, 2010). For instance, the characters ⟨አ⟩, ⟨ኣ⟩, ⟨ዐ⟩, and ⟨ዓ⟩ are all read as /ʔä/ in Amharic.
Writing scripts are also shared by several languages that may not have evolved in the same way. For the Ge'ez script in particular, some of the characters that have the same sound in Amharic have distinct sounds in Tigrinya. For example, all four characters in the above example that represent /ʔä/ in Amharic have distinct sounds in Tigrinya: ⟨አ⟩ /ʔə/, ⟨ኣ⟩ /ʔa/, ⟨ዐ⟩ /ʕə/, and ⟨ዓ⟩ /ʕa/. Some characters from the Ge'ez script do have the same sound in Tigrinya; for example, ⟨ሰ⟩ and ⟨ሠ⟩ both represent /sə/. Due to the differences in how each language uses the characters, altering homophones in the Ge'ez script has different effects across languages. For example, if the Tigrinya word for 'eye', written 'ዓይን', is rewritten as 'አይን', the word has no meaning in Tigrinya, while in Amharic both spellings mean 'eye'. In the Ge'ez language, changing such characters results in a change in meaning: for instance, two words distinguished only by such characters mean 'to hold a wedding' and 'to get inside', respectively.
# 2.3 Handling Homophone Characters in NLP
Homophone normalization helps improve automatic metric scores by mapping different grapheme variations of a homophone character into a single representation (Sec. 1). It has mainly been applied in the Amharic NLP literature for Machine Translation (e.g. Abate et al., 2018; Chekole et al., 2024) and semantic modeling tasks (e.g. Belay et al., 2021). However, among papers that report normalizing homophone characters, there is no standard normalization scheme. For instance, some publicly available tools normalize only characters with the same sound (e.g. Kidanemariam, 2019), others normalize characters with the same sound and labialized characters (Mekuriaw and Cohan, 2024; Eshetu, 2022), and some normalize characters with the same sound, labialized characters, and some characters with the same base consonant (Yimam et al., 2021). Further, some prior works report mapping homophone characters to "the most frequently used characters" (Biadgligne and Smaili, 2022; Abate et al., 2018).
While most prior works report using normalization as a standard pre-processing step, Belay et al. (2022) compared MT models trained with and without normalization and reported score improvements for models trained on normalized data. Belay et al. (2021) applied normalization to semantic modeling tasks and found that it helped for information retrieval but hurt performance for PoS tagging and sentiment analysis. However, these investigations are (1) limited to the Amharic language and (2) do not compare the impact of the different normalization schemes in the literature.
Cases for and against homophone normalization in Amharic: From the linguistics literature, there have been three viewpoints on how to handle characters that have the same sound in Amharic: (1) standardize spellings, (2) remove homophone characters from the alphabet (i.e., normalize), or (3) perform no intervention (Aklilu, 2010).
Thus far, the Amharic NLP literature has adopted an (implicit) standardization step with homophone normalization. In this paper, we offer a post-inference intervention that provides a middle ground to the three viewpoints described above. Instead of training on normalized data, we propose performing normalization when calculating a particular metric. We first investigate the impacts of normalization and homophones in MT in zero-shot, monolingual, and cross-lingual settings and show that our post-inference intervention can improve metric scores.
# 3 Methods
To test the impact of homophone normalization, we prepared an evaluation dataset with a focus on words that have homophone characters in the three languages using publicly available MT datasets (Sec. 3.1). We then adopted two normalization schemes for our experiments, which we describe in Sec. 3.2.
# 3.1 Dataset
We prepared an evaluation dataset in the three languages of study by focusing on sentences that have high counts of characters with the same sound. In Table 1, we give the details of our dataset<sup>4</sup>. We selected sentences from the following datasets for each language:
<table><tr><td>Target Language</td><td>Source Dataset</td><td>Training</td><td>Eval</td><td>Test</td></tr><tr><td>Amharic</td><td>Abate et al. (2018)</td><td>199.2k</td><td>22.1k</td><td>2.4k</td></tr><tr><td>Ge'ez</td><td>AGE</td><td>15.7k</td><td>1.9k</td><td>1.9k</td></tr><tr><td>Tigrinya</td><td>Abate et al. (2018); Lakew et al. (2020)</td><td>75.4k</td><td>30.1k</td><td>2.4k</td></tr></table>

Table 1: Benchmark dataset description along with source datasets.
Amharic-English Machine Translation Corpus The Amharic-English Machine Translation Corpus (Abate et al., 2018) contains Amharic-English parallel sentences collected from Bible, History, News, and Legal sources. The dataset has a total of $276\mathrm{k}$ parallel sentences. From the test split of the Abate et al. (2018) dataset, we selected sentences that had at least 9 homophone characters. With this filtering step, we obtained a test set of $2.4\mathrm{k}$ sentence pairs.
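The filtering step described above can be sketched as follows. The `HOMOPHONES` set is an illustrative subset of the characters involved in Amharic homophone groups (the actual filtering would use the full set), and the threshold of 9 matches the Amharic setting.

```python
# Illustrative subset of characters that belong to homophone groups in
# Amharic; a real filtering step would use the complete set.
HOMOPHONES = set("ሀሃሐሓኀሠሰአዐጸፀ")

def homophone_count(sentence: str) -> int:
    """Number of characters in the sentence that belong to a homophone group."""
    return sum(ch in HOMOPHONES for ch in sentence)

def filter_test_set(pairs, min_count=9):
    """Keep (source, target) pairs whose target side contains at least
    `min_count` homophone characters."""
    return [(src, tgt) for src, tgt in pairs if homophone_count(tgt) >= min_count]
```

The same routine with a higher threshold (17) would cover the Tigrinya selection described below.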
Tigrinya-English MT For Tigrinya, we used data from Lakew et al. (2020) and Abate et al. (2018). The dataset had a total of 150.8k parallel sentences. Similar to Amharic, we selected sentences that had at least 17 homophone characters, which resulted in a test set with 2.4k English-Tigrinya parallel sentences.
AGE We used the AGE dataset (Ademtew and Birbo, 2024) which has 17.5k Amharic-Ge'ez and 18.6k Ge'ez-English parallel sentences. For our experiments, we used the English-Ge'ez data and split it into training, evaluation, and test sets at an 8:1:1 ratio. With this, we had 1.9k Ge'ez-English parallel sentences as our test set. Since the Ge'ez dataset is small, we did not apply additional filtering to the test set.
# 3.2 Normalization Settings
As discussed in Sec. 2, there are multiple normalization schemes adopted by prior work, particularly when dealing with Amharic datasets. In this study, we employ three normalization settings:
- No-Norm: We take the dataset as is, without applying any normalization or other alterations. We use this setting as a baseline.
- H-only: We normalize all characters that have the same sound in a given language. We apply this approach to Amharic and Tigrinya, with a separate normalization script for each language, as the characters that share a sound differ between the two (Sec. 2). We map homophone characters to the most frequent character in the dataset.
- HSL: In this setting, we use the script from Yimam et al. (2021) and normalize homophone characters, characters with similar sounds, and labialized characters. Since this approach has only been used for Amharic, and there is no standard way to determine "similar" sounds, we only apply it to the Amharic dataset<sup>5</sup>.

In Table 2, we give details on how we applied the normalization schemes to our datasets. Note that for Ge'ez we did not apply any normalization, as all characters are distinct; i.e., swapping characters, even ones with the same sound, results in a change of meaning (Sec. 2).

<table><tr><td>Language</td><td>No Norm</td><td>H-Only</td><td>HSL</td></tr><tr><td>Amharic</td><td>✓</td><td>✓</td><td>✓</td></tr><tr><td>Tigrinya</td><td>✓</td><td>✓</td><td>-</td></tr><tr><td>Ge'ez</td><td>✓</td><td>-</td><td>-</td></tr></table>

Table 2: Application of normalization schemes to the three languages of study.
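The H-only scheme, which maps every member of a homophone group to the group's most frequent character in the data, can be sketched as below. The groups listed are an illustrative Amharic subset, not the full tables used by published normalization scripts.

```python
from collections import Counter

# Illustrative Amharic homophone groups (subset); a real H-only script
# would cover all groups for the language in question.
GROUPS = [["ሀ", "ሃ", "ሐ", "ሓ", "ኀ"], ["ሰ", "ሠ"], ["አ", "ዐ"], ["ጸ", "ፀ"]]

def build_h_only_map(corpus):
    """For each homophone group, pick its most frequent member in the
    corpus and map every member of the group to that character."""
    counts = Counter(ch for sentence in corpus for ch in sentence)
    mapping = {}
    for group in GROUPS:
        target = max(group, key=lambda ch: counts[ch])
        for ch in group:
            mapping[ch] = target
    return str.maketrans(mapping)

def normalize(sentence: str, table) -> str:
    """Apply the homophone mapping to a sentence."""
    return sentence.translate(table)
```

Building the table from corpus frequencies, rather than from a fixed mapping, matches the "most frequently used characters" convention reported in prior work.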
# 4 Experimental Study
In this section, we first give our experimental setup, describing the models we used for our experiments in Sec. 4.1. We conduct experiments on the zero-shot performance of MT systems on sentences with homophone characters (Sec. 4.2). We then investigate the impact of normalizing homophone characters in training data for monolingual model training and cross-lingual transfer (Sec. 4.3). Finally, we investigate the efficacy of post-inference normalization in Sec. 4.4.
# 4.1 Experimental Setup
# 4.1.1 Models
Pre-trained MT Models For our zero-shot experiments, we used Google Translate $^6$ , M2M-100-418M (Fan et al., 2021), and NLLB (NLLB et al., 2022) models. All three models support Amharic, while Google Translate and NLLB support Tigrinya. However, Ge'ez is not included in any of the three models; hence, we did not perform any zero-shot experiments for English-Ge'ez translation.
Models for Training To understand the effects of normalization during training, we (1) finetuned the NLLB-600M (NLLB et al., 2022) model and (2) trained an encoder-decoder Transformer model (Vaswani et al., 2017) from scratch. Since the NLLB model includes Amharic and Tigrinya data, we trained the Transformer MT model from scratch to avoid the impact of pre-training in our experimental results.
Training Details We trained an encoder-decoder Transformer model with 8 heads and 6 layers. We used the Adam optimizer (Kingma and Ba, 2017) with a learning rate of 1e-4, $\beta_1 = 0.9$, and $\beta_2 = 0.98$. We used a learning rate scheduler that decreased the learning rate by a factor of 0.5 if there was no improvement for 2 consecutive epochs. We used cross-entropy loss as our loss function and trained the model for 30 epochs. The best model checkpoint based on evaluation set performance was chosen for the final evaluation. To fine-tune the NLLB-600M (NLLB et al., 2022) model, we used the Trainer module from the Hugging Face Transformers library (Wolf et al., 2020). We fine-tuned the model for 5 epochs with a learning rate of 5e-5, using the default training arguments and a batch size of 32. We used the model's default tokenizer without any additional prefixing. We used the same training scheme for all languages and all experiments.
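The learning-rate rule above (halve the rate after 2 epochs without improvement) can be isolated in a short sketch; this is an illustrative stand-in, not our training code.

```python
def schedule_lr(eval_losses, lr=1e-4, factor=0.5, patience=2):
    """Return the learning rate used at each epoch, multiplying it by
    `factor` whenever the evaluation loss has not improved for
    `patience` consecutive epochs."""
    best = float("inf")
    bad_epochs = 0
    lrs = []
    for loss in eval_losses:
        lrs.append(lr)          # rate in effect for this epoch
        if loss < best:
            best = loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                lr *= factor    # no improvement for `patience` epochs
                bad_epochs = 0
    return lrs
```

In practice this role is played by a library scheduler such as PyTorch's `torch.optim.lr_scheduler.ReduceLROnPlateau`, which implements a closely related rule (its `patience` counts the epochs ignored before reducing).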
# 4.1.2 Evaluation
We used both automatic metrics and human evaluation. We performed our evaluation on a single run for each model. For automatic metrics, we used BLEU (Papineni et al., 2002) and $\mathrm{ChrF}$ (Popović, 2015). BLEU focuses on overlap in word-level n-grams, whereas $\mathrm{ChrF}$ focuses on character-level n-grams. We used the SacreBLEU (Post, 2018) implementation for both metrics, with their default settings. When calculating the scores, we removed punctuation marks from both reference and prediction sentences. For all test cases, we apply the same normalization scheme to the reference and the prediction. This makes it difficult to make comparisons across references; for instance, we cannot directly compare the reference without normalization to the reference with H-only normalization. However, as described in Sec. 2, the motivation for applying normalization is to increase automatic metric scores by "standardizing" the spelling of words in a given language with a normalization scheme. Further, depending on the language it is applied to, normalization affects the spelling of a word and not its meaning. Hence, references with different normalization settings applied carry the same semantic meaning, although they may differ in the spelling of some words.

<table><tr><td rowspan="2">Model</td><td colspan="2">Amharic</td><td colspan="2">Tigrinya</td></tr><tr><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td></tr><tr><td>NLLB - 3B</td><td>10.47</td><td>34.05</td><td>11.26</td><td>31.22</td></tr><tr><td>NLLB - 600M</td><td>6.98</td><td>29.16</td><td>11.55</td><td>31.30</td></tr><tr><td>Google Translate</td><td>9.89</td><td>33.67</td><td>16.02</td><td>38.75</td></tr><tr><td>M2M - 418M</td><td>13.51</td><td>34.78</td><td>-</td><td>-</td></tr></table>

Table 3: Zero-Shot translation performance.
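The scoring protocol (strip punctuation, then apply the same normalization to reference and prediction before computing BLEU/ChrF) can be sketched as follows. `NORM_MAP` is an illustrative homophone table and `score_fn` stands in for a SacreBLEU-style corpus scorer; neither is our exact evaluation code.

```python
import string

# Illustrative homophone table (an Amharic H-only subset).
NORM_MAP = str.maketrans({"ሐ": "ሀ", "ሠ": "ሰ", "ዐ": "አ", "ፀ": "ጸ"})

# ASCII punctuation plus common Ethiopic punctuation marks.
PUNCT = str.maketrans("", "", string.punctuation + "።፣፤፥፦")

def prepare(sentence: str, normalize: bool) -> str:
    """Strip punctuation and optionally apply homophone normalization."""
    sentence = sentence.translate(PUNCT)
    return sentence.translate(NORM_MAP) if normalize else sentence

def evaluate(refs, hyps, score_fn, normalize=False):
    """Apply identical preprocessing to references and predictions,
    then delegate to a corpus-level scorer such as SacreBLEU's
    corpus_bleu or corpus_chrf."""
    refs = [prepare(r, normalize) for r in refs]
    hyps = [prepare(h, normalize) for h in hyps]
    return score_fn(hyps, [refs])
```

Because the same `normalize` flag is applied to both sides, the gap between scores with `normalize=False` and `normalize=True` isolates how much of the measured error is due to homophone spelling variation.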
For human evaluation, native speakers of Tigrinya and Amharic and second-language speakers of Ge'ez qualitatively examined 50 random sample predictions, comparing the outputs of the different models. We focused on the following axes: (1) rating which translation was better among the given models, (2) identifying words that were mistranslated (e.g., Amharic words appearing in Tigrinya translations and vice versa), and (3) identifying changes in homophones in the translations.
# 4.2 Zero-Shot Experiments
This experiment aims to answer RQ1, that is, to understand how existing pre-trained MT models handle characters with the same sound in the three languages of study.
Results As can be seen in Table 3, all models except NLLB-200-Distilled-600M have comparable ChrF scores, with M2M-100 having the highest ChrF for Amharic. For Tigrinya, Google Translate had the highest BLEU and ChrF scores. Additionally, M2M-100 has the highest BLEU score for Amharic, while NLLB-200-Distilled-600M had the lowest. Further, the open-source NLLB-3B and M2M-100 models performed better than the commercially available Google Translate model for Amharic.
Qualitatively, we observe that the outputs of the NLLB models for English-Amharic translation usually stick with the standard Amharic homophone usage (Sec. 2). This behavior is not observed in Google Translate. For instance, when translating the word "God," the NLLB and M2M models consistently translate it as "እግዚአብሔር", which is consistent with the Ge'ez spelling of the word. However, Google Translate sometimes tends to translate it as "እግዚአብሄር", switching the ⟨ሔ⟩ character with ⟨ሄ⟩, which does not conform to the standard homophone usage of the word (Aklilu, 2010). This difference between the open-source models and Google Translate may be due to the fact that the open models are trained on publicly available data, which is heavily dominated by religious text for low-resourced languages.

<table><tr><td rowspan="2">Language</td><td rowspan="2">Stage</td><td rowspan="2">Model</td><td colspan="2">No Norm</td><td colspan="2">H-only</td><td colspan="2">HSL</td></tr><tr><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td></tr><tr><td rowspan="2">Tigrinya</td><td rowspan="2">Training</td><td>Transformer</td><td>10.87</td><td>25.51</td><td>10.21</td><td>26.61</td><td>-</td><td>-</td></tr><tr><td>NLLB</td><td>22.71</td><td>43.11</td><td>21.41</td><td>41.78</td><td>-</td><td>-</td></tr><tr><td rowspan="5">Amharic</td><td rowspan="2">Training</td><td>Transformer</td><td>12.32</td><td>29.50</td><td>9.31</td><td>26.90</td><td>6.22</td><td>26.88</td></tr><tr><td>NLLB</td><td>19.09</td><td>41.98</td><td>19.71</td><td>42.59</td><td>17.13</td><td>40.50</td></tr><tr><td rowspan="3">Inference</td><td>Transformer</td><td>12.32</td><td>29.50</td><td>12.56</td><td>29.77</td><td>12.56</td><td>29.79</td></tr><tr><td>NLLB</td><td>19.09</td><td>41.98</td><td>19.78</td><td>42.60</td><td>19.78</td><td>42.61</td></tr><tr><td>Belay et al. (2022)</td><td>13.51</td><td>34.78</td><td>14.54</td><td>35.94</td><td>14.54</td><td>35.95</td></tr></table>

Table 4: Performance of models where normalization is applied during Training and Inference. Best performance for each row is indicated in bold.
# 4.3 Effects of Training Models with Homophone Normalization
To answer RQ2 and RQ3, we trained an encoder-decoder Transformer model from scratch and fine-tuned an NLLB-600M model as described in Sec. 4.1. We experimented with monolingual and cross-lingual training, which we describe below.
# 4.3.1 Monolingual Effects of Normalization
For RQ2, we experimented by training a Transformer model from scratch and fine-tuning NLLB-600M for each of the three language pairs: Eng-Amh, Eng-Tir, and Eng-Ge'ez. The goal of this experiment was to understand the impact of normalizing homophone characters in the target language during training on MT performance. As described in Sec. 3, we use the No-Norm setting as a baseline for all languages, apply H-only normalization to the Amharic and Tigrinya data, and apply HSL normalization to the Amharic data only. For Ge'ez, we train without any normalization (Sec. 2).
Results As can be seen in Table 4, when normalization is applied during training, the Transformer model trained in the No-Norm setting has a better BLEU score than the Transformer models trained on normalized data for both Amharic and Tigrinya. For Tigrinya, we observe that the Transformer model has comparable performance with and without normalization. For NLLB-600M, H-only has a marginal improvement over the No-Norm setting for Amharic (+0.62 BLEU and +0.61 ChrF). The HSL setting performs worst with NLLB for Amharic. For Tigrinya, we observe that No-Norm performs better than H-only for the fine-tuned NLLB model (+1.3 BLEU and +1.33 ChrF).
Qualitatively, we observed that models trained with HSL normalization mostly replace some words with synonyms and simplify the translation compared to the H-only and No-Norm settings. Regarding translation quality, NLLB fine-tuned in the No-Norm setting provides better translations, preserving the homophone characters in the prediction; this also aligns with the automatic results presented in Table 4. In the H-only setting, we noticed that, in addition to replacing words with normalized characters, most of the translations were incomplete even though we set the same maximum sequence length for all models.
# 4.3.2 Cross-Lingual Transfer Effects of Normalization
For RQ3, we experimented by taking the models we trained for Amharic as described in Sec. 4.3.1 and further training them with Eng-Tir and Eng-Ge'ez data. In our cross-lingual experiments, the Tigrinya and Ge'ez datasets are taken as is, without any normalization.
Results As Table 5 shows, for Tigrinya, we find that the Amharic model trained without normalizing the homophone characters (the No-Norm setting) is a better transfer model than those trained in the H-only and HSL settings for the Transformer models. When fine-tuning NLLB-200-Distilled-600M, we find that the model directly fine-tuned from NLLB-200-Distilled-600M performed better than the ones first fine-tuned on Amharic and then fine-tuned on Tigrinya. Among the NLLB transfer models, we observe comparable performance regardless of the normalization setting with which the Amharic model was fine-tuned. For Ge'ez, we see that using the Amharic models trained in the No-Norm and H-only settings provides better BLEU and ChrF scores than using the model trained in the HSL setting. Further, we observe that a Transformer model trained in the HSL setting for Amharic has the worst performance when used as a transfer model for Ge'ez. Qualitatively, we observe that in both Ge'ez and Tigrinya translations, the output includes code-switching with Amharic words, changed pronouns, changed gender, and wrongly negated words.

Tigrinya

<table><tr><td></td><td colspan="2">No-Transfer</td><td colspan="2">No-Norm</td><td colspan="2">H-only</td><td colspan="2">HSL</td></tr><tr><td>Model</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td></tr><tr><td>Transformer</td><td>10.87</td><td>25.51</td><td>12.16</td><td>27.61</td><td>10.67</td><td>26.24</td><td>11.23</td><td>26.44</td></tr><tr><td>NLLB-600M</td><td>22.71</td><td>43.11</td><td>21.55</td><td>42.03</td><td>21.63</td><td>42.13</td><td>21.68</td><td>42.14</td></tr></table>

Ge'ez

<table><tr><td></td><td colspan="2">No-Transfer</td><td colspan="2">No-Norm</td><td colspan="2">H-only</td><td colspan="2">HSL</td></tr><tr><td>Model</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td><td>BLEU</td><td>ChrF</td></tr><tr><td>Transformer</td><td>2.46</td><td>18.72</td><td>3.67</td><td>20.80</td><td>3.56</td><td>20.80</td><td>1.46</td><td>12.48</td></tr><tr><td>NLLB-600M</td><td>3.36</td><td>23.48</td><td>5.22</td><td>26.54</td><td>6.33</td><td>28.38</td><td>6.31</td><td>28.52</td></tr></table>

Table 5: Performance of MT models in cross-lingual transfer experiments, where No-Norm, H-Only, and HSL refer to models that were initialized with English-Amharic models trained in each of the three settings. The best performance in a row is indicated in bold font.
We find that the homophone characters that were normalized in the Amharic transfer model were correctly used in the respective target languages (Tigrinya and Ge'ez). However, models fine-tuned from the Amharic transfer models trained on normalized data tended to repeat characters or words until they reached the maximum sequence length, instead of translating the source sentence. This is demonstrated in Figure 1, where translations from models initialized with the HSL-setting Amharic transfer model have fewer unique words in both Ge'ez and Tigrinya, especially for the Transformer model. In Table 6, we provide qualitative examples.
Looking at the character counts of the translations in the different transfer settings, we find that no model covered all of the characters found in the reference dataset; for instance, for Ge'ez, the model trained without transfer learning was missing 33 characters that appear in the reference dataset, while the model trained with the HSL-normalized Amharic transfer model was missing 34. Moreover, training with the Amharic transfer models introduced new characters into the predictions that do not exist in the language. For instance, $\langle \vec{\mathfrak{h}} \rangle$ , $\langle \vec{\mathfrak{n}} \rangle$ , and $\langle \vec{\mathfrak{r}} \rangle$ appeared in the Ge'ez predictions although none of these characters exist in the language's alphabet.
# 4.4 Post-Inference Normalization
As discussed in Sec. 2, homophone normalization has provided automatic score increases in prior work. However, normalizing the characters before training a model results in models that cannot process different spellings. Furthermore, as we have seen in Sec. 4.3.2, normalizing homophone characters has an impact on transfer learning for languages that share the writing script. To answer RQ4, we took the models trained in the No-Norm setting and applied normalization to the references and predictions after inference.
Results For the Transformer model we trained, post-inference normalization improves BLEU and $\mathrm{ChrF}$ scores by a small margin (0.24 and 0.29, respectively). For the fine-tuned NLLB model, we find that applying HSL normalization post-inference boosts the BLEU score by 0.69 and the $\mathrm{ChrF}$ by 0.63. In all three normalization settings, we find that the NLLB model outperforms the Transformer model on our evaluation dataset.

Figure 1: Comparison of Unique word count with different transfer settings for English-Tigrinya and English-Ge'ez translation.
We compare the effectiveness of post-inference normalization by including the model from Belay et al. (2022); we take their model trained without normalization and apply homophone normalization after inference<sup>7</sup>. Belay et al. (2022) found a 3.09 BLEU score increase by fine-tuning an M2M (Fan et al., 2021) model on HSL-normalized data compared to a model trained on No-Norm data. We cannot directly compare our results with their reported BLEU scores, as the test sets differ. However, on our evaluation dataset, we find that the model trained by Belay et al. (2022) without normalization gains 1.03 BLEU (Table 4) with our post-inference scheme.
# 5 Discussion
Our work investigates the impact of homophone normalization on Machine Translation performance for languages that use the Ge'ez script. We provide background on the characteristics of these languages and detail how prior work has used homophone normalization (Sec. 2). Through a series of experiments (Sec. 4), we demonstrate that homophone normalization does not provide a significant performance gain across all languages and hurts performance in transfer learning (Sec. 4.3). As we have discussed in Sec. 2, homophone normalization has been used as a pre-processing step in the NLP literature for Amharic, setting an implicit standard on what trained models can handle. In this section, we connect this argument to the broader literature on technology-facilitated language change.
Evolutions in language that result from technological constraints make their way into daily life (van Dijk et al., 2016). This is particularly concerning as MT models are used for data creation and augmentation for low-resourced languages (e.g. Singh et al., 2025). Machine-translated datasets are also used to train other NLP models (e.g. Joshi et al., 2025), propagating the normalization effect to tasks beyond translation.
As we have seen in Sec. 4, while normalization has resulted in score improvements in prior work, it affects the performance of models in transfer learning. Further, the score improvements are not consistent across models, languages, and normalization settings. As a result, we need to pause and reflect on using such schemes in NLP literature for languages that use the Ge'ez script. Multiple languages use the same writing script; hence, it is important to consider how the standards we set for one language affect other languages. There might also be dialect differences in how words are spelled, which will not be accounted for when we normalize homophone characters without such considerations.
As the number of low-resourced languages represented in NLP research increases, it is imperative to consider how pre-processing steps applied to these languages alter the overall landscape of language use. Design decisions could lead to constraints on how, and whether, people can use their language (Wenzel and Kaufman, 2024). In the context of our study, training models on normalized data results in models that cannot handle alternative spellings. For instance, Belay et al. (2021) found that normalization helped improve performance in information retrieval. However, the performance improvement would require users to conform to the normalized spelling. This impact is not limited to homophone normalization; Adebara and Abdul-Mageed (2022) argue that normalizing tone diacritics, which are essential for lexical disambiguation, affects the usability of retrieval systems for African language speakers.
Further, relying solely on automatic score improvements obscures the impact of our design decisions beyond their intended effect. Instead, our solutions should (1) focus on changing the methods (e.g., the metrics used for evaluation), (2) be explicit about the context in which improvements are achieved, and (3) explore alternatives that do not impact the model's ability to handle different varieties of a language. As we propose in Sec. 4.4, we can use post-inference interventions to increase automatic scores without altering the training data. Since there is no agreed-upon standard spelling and people spell words differently, post-inference normalization lets researchers see to what degree their performance numbers reflect spelling differences due to homophones rather than actual issues with the translation model, without limiting the inputs and outputs of their models. While the performance improvement is not as large as that from training on normalized data, it is a tradeoff for having a model that can accommodate different spellings, dialects, and transfer capabilities.
# 6 Conclusion
We investigated the impact of homophone normalization on languages that use the Ge'ez script. We find that normalization of homophones in training data leads to poor transfer learning performance for related languages. Furthermore, we find that normalization does not always lead to performance improvement across all languages. We argue against implicit standardization via preprocessing tools and offer an alternative approach that preserves features of the languages during training. We use our work as a case study to call for a more thorough examination of preprocessing steps, particularly for low-resource languages.
# Limitations
While our experiments show an increase in BLEU score with post-inference homophone normalization, we did not conduct a full-scale human evaluation of translation quality; instead, we manually inspected 50 outputs across all normalization settings. Future work should include large-scale human evaluations. Our conclusions are mainly based on BLEU and chrF scores; while these remain standard evaluation tools for MT, they may not reveal changes in predictions across different normalization settings.
# Acknowledgments
We thank the reviewers for their valuable feedback. We also extend our gratitude to Nina Markl for providing feedback on our manuscript.
# References

Solomon Teferra Abate, Michael Melese, Martha Yifiru Tachbelie, Million Meshesha, Solomon Atinafu, Wondwossen Mulugeta, Yaregal Assibie, Hafte Abera, Binyam Ephrem, Tewodros Abebe, Wondimagegnhue Tsegaye, Amanuel Lemma, Tsegaye Andargie, and Seifedin Shifaw. 2018. Parallel Corpora for bi-lingual English-Ethiopian Languages Statistical Machine Translation.

Solomon Teferra Abate, Martha Yifiru Tachbelie, and Tanja Schultz. 2020. Multilingual Acoustic and Language Modeling for Ethio-Semitic Languages. In *Interspeech* 2020, pages 1047-1051. ISCA.

Ife Adebara and Muhammad Abdul-Mageed. 2022. Towards Afrocentric NLP for African languages: Where we are and where we can go. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3814-3841, Dublin, Ireland. Association for Computational Linguistics.

Henok Ademtew and Mikiyas Birbo. 2024. AGE: Amharic, Geez and English Parallel Dataset. In Proceedings of the Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024), pages 139-145, Bangkok, Thailand. Association for Computational Linguistics.

Gabe Adugna. Research: Language Learning - Amharic: Home.

Orevaoghene Ahia, Sachin Kumar, Hila Gonen, Jungo Kasai, David Mortensen, Noah Smith, and Yulia Tsvetkov. 2023. Do All Languages Cost the Same? Tokenization in the Era of Commercial Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9904-9923, Singapore. Association for Computational Linguistics.

Amsalu Aklilu. 2010. *Problems of Writing Homophones without care and its Solution* [Title translated from Amharic].

Shaik Johny Basha, Duggineni Veeraiah, Boddu Venkat Charan, Wiltrud Sahithi Joyce Yeddu, and Devalla Ganesh Babu. 2023. Detection and Comparative Analysis of Handwritten Words of Amharic Language to English using CNN-Based Frameworks. In 2023 International Conference on Inventive Computation Technologies (ICICT), pages 422-427. ISSN: 2767-7788.

Tadesse Destaw Belay, Abinew Ali Ayele, Getie Gelaye, Seid Muhie Yimam, and Chris Biemann. 2021. Impacts of Homophone Normalization on Semantic Models for Amharic. In 2021 International Conference on Information and Communication Technology for Development for Africa (ICT4DA), pages 101-106.

Tadesse Destaw Belay, Atnafu Lambebo Tonja, Olga Kolesnikova, Seid Muhie Yimam, Abinew Ali Ayele, Silesh Bogale Haile, Grigori Sidorov, and Alexander Gelbukh. 2022. The Effect of Normalization for Bidirectional Amharic-English Neural Machine Translation. arXiv preprint. ArXiv:2210.15224 [cs].

Yohannes Biadgligne and Kamel Smaili. 2021. Parallel Corpora Preparation for English-Amharic Machine Translation. In Advances in Computational Intelligence, pages 443-455. Springer, Cham. ISSN: 1611-3349.

Yohannes Biadgligne and Kamel Smaili. 2022. Offline Corpus Augmentation for English-Amharic Machine Translation. In 2022 5th International Conference on Information and Computer Technologies (ICICT), pages 128-135.

Sidsel Boldsen and Patrizia Paggio. 2022. Letters from the past: Modeling historical sound change through diachronic character embeddings. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6713-6722, Dublin, Ireland. Association for Computational Linguistics.

Adane Kasie Chekole, Tesfa Tegegne Asfaw, Tesfahun Nurrie Mengestie, Belayneh Teshome Kebie, Mengistu Kinfe Negia, and Yohannes Abinet Worku. 2024. Effect of Parallel Data Processing Model on Bi-Directional English-Khimtagne Machine Translation Using Deep Learning. In 2024 International Conference on Information and Communication Technology for Development for Africa (ICT4DA), pages 189-193.

Fitehalew Ashagrie Demilew. 2019. Ancient Geez Script Recognition Using Deep Convolutional Neural Network. Software Engineering.

Chris Emezue, Hellina Nigatu, Cynthia Thinwa, Helper Zhou, Shamsuddeen Muhammad, Lerato Louis, Idris Abdulmumin, Samuel Oyerinde, Benjamin Ajibade, Olanrewaju Samuel, Oviawe Joshua, Emeka Onwuegbuzia, Handel Emezue, Ifeoluwatayo A. Ige, Atnafu Lambebo Tonja, Chiamaka Chukwuneke, Bonaventure F. P. Dossou, Naome A. Etori, Mbonu Chinedu Emmanuel, Oreen Yousuf, Kaosarat Aina, and Davis David. 2023. The African Stopwords project: curating stopwords for African languages. arXiv preprint. ArXiv:2304.12155 [cs].

Abebawu Eshetu. 2022. Amharic-Simple-TextPreprocessing-Usin-Python. Original-date: 2019-08-05T09:30:04Z.

Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, and Armand Joulin. 2021. Beyond english-centric multilingual machine translation. J. Mach. Learn. Res., 22(1):107:4839-107:4886.

Negasi Haile, Nuredin Ali, and Asmelash Teka Hadgu. 2023. Error Analysis of Tigrinya-English Machine Translation Systems.

Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The State and Fate of Linguistic Diversity and Inclusion in the NLP World. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computational Linguistics.

Raviraj Joshi, Kanishk Singla, Anusha Kamath, Raunak Kalani, Rakesh Paul, Utkarsh Vaidya, Sanjay Singh Chauhan, Niranjan Wartikar, and Eileen Long. 2025. Adapting Multilingual LLMs to Low-Resource Languages using Continued Pretraining and Synthetic Corpus. arXiv preprint. ArXiv:2410.14815 [cs].

Shreya Khare, Ashish Mittal, Anuj Diwan, Sunita Sarawagi, Preethi Jyothi, and Samarth Bharadwaj. 2021. Low Resource ASR: The Surprising Effectiveness of High Resource Transliteration. In *Interspeech* 2021, pages 1529-1533. ISCA.

Bushra Kidanemariam. 2019. Amharic-NLP-Tools-in-JAVA.

Diederik P. Kingma and Jimmy Ba. 2017. Adam: A Method for Stochastic Optimization. arXiv preprint. ArXiv:1412.6980 [cs].

Surafel M. Lakew, Matteo Negri, and Marco Turchi. 2020. Low Resource Neural Machine Translation: A Benchmark for Five African Languages. arXiv preprint. ArXiv:2003.14402 [cs].

Daniel Mekuriaw and Arman Cohan. 2024. Benchmark Dataset and Parameter-Efficient Cross-Lingual Transfer Learning for Amharic Text Summarization. Technical report.

Merriam-Webster. Definition of HOMOPHONE.

Abraham Negash. 2017. The Origin and Development of Tigrinya Language Publications (1886 ...

Hellina Hailu Nigatu, Atnafu Lambebo Tonja, Benjamin Rosman, Thamar Solorio, and Monojit Choudhury. 2024. The Zeno's Paradox of Low-Resource Languages. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 17753-17774, Miami, Florida, USA. Association for Computational Linguistics.

Rubungo Andre Niyongabo, Qu Hong, Julia Kreutzer, and Li Huang. 2020. KINNEWS and KIRNEWS: Benchmarking cross-lingual text classification for Kinyarwanda and Kirundi. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5507-5521, Barcelona, Spain (Online). International Committee on Computational Linguistics.

NLLB Team, Marta R. Costa-jussà, James Cross, Onur Celebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loïc Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No Language Left Behind: Scaling Human-Centered Machine Translation. arXiv preprint. ArXiv:2207.04672 [cs].

Jane Chinelo Obasi. 2018. Structural Irregularities within the English Language: Implications for Teaching and Learning in Second Language Situations.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.

Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.

Shivalika Singh, Angelika Romanou, Clémentine Fourrier, David I. Adelani, Jian Gang Ngui, Daniel Vila-Suero, Peerat Limkonchotiwat, Kelly Marchisio, Wei Qi Leong, Yosephine Susanto, Raymond Ng, Shayne Longpre, Wei-Yin Ko, Sebastian Ruder, Madeline Smith, Antoine Bosselut, Alice Oh, Andre F. T. Martins, Leshem Choshen, Daphne Ippolito, Enzo Ferrante, Marzieh Fadaee, Beyza Ermis, and Sara Hooker. 2025. Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation. arXiv preprint. ArXiv:2412.03304 [cs].

Martha Yifiru Tachbelie, Solomon Teferra Abate, and Laurent Besacier. 2014. Using different acoustic, lexical and language modeling units for ASR of an under-resourced language Amharic. Speech Communication, 56:181-194.

Chantal N. van Dijk, Merel van Witteloostuijn, Nada Vasić, Sergey Avrutin, and Elma Blom. 2016. The Influence of Texting Language on Grammar and Executive Functions in Primary School Children. PLoS ONE, 11(3):e0152409.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need.

Kimi Wenzel and Geoff Kaufman. 2024. Designing for Harm Reduction: Communication Repair for Multicultural Users' Voice Interactions. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pages 1-17, Honolulu HI USA. ACM.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Seid Muhie Yimam, Abinew Ali Ayele, Gopalakrishnan Venkatesh, Ibrahim Gashaw, and Chris Biemann. 2021. Introducing Various Semantic Models for Amharic: Experimentation and Evaluation with Multiple Tasks and Datasets. Future Internet, 13(11):275. Number: 11 Publisher: Multidisciplinary Digital Publishing Institute.

<table><tr><td>Target Lang.</td><td>Source</td><td>Reference</td><td>No-Norm</td><td>H-Only</td><td>HSL</td></tr><tr><td>Tir</td><td>The discourse will also answer such questions as these: How often should Christians commemorate this event?</td><td colspan="4">[Ge'ez-script reference and model outputs were not recoverable from the PDF extraction; per the appendix text, the normalized-model outputs degenerate into repeated tokens.]</td></tr></table>
Table 6: Qualitative examples with transfer learning experiments where the transfer Amh-Eng model is trained in No-Norm, H-Only and HSL settings
# A Qualitative Examples
In Table 6, we provide qualitative examples for Tigrinya and Ge'ez transfer learning experiments. As the table shows, Amharic models trained with normalization repeat words until they reach the maximum sequence length or the end-of-sentence token (።).

2025/A Case Against Implicit Standards_ Homophone Normalization in Machine Translation for Languages that use the Ge’ez Script./images.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8fb569b272a05f653c9e7bbf9f3c36df76bb8ea4ce77028f1cc2e9f43efe2e55
size 348188

2025/A Case Against Implicit Standards_ Homophone Normalization in Machine Translation for Languages that use the Ge’ez Script./layout.json ADDED
The diff for this file is too large to render. See raw diff.

2025/A Causal Lens for Evaluating Faithfulness Metrics/399cf055-5b9d-493e-af03-9c9a08c4262c_content_list.json ADDED
The diff for this file is too large to render. See raw diff.

2025/A Causal Lens for Evaluating Faithfulness Metrics/399cf055-5b9d-493e-af03-9c9a08c4262c_model.json ADDED
The diff for this file is too large to render. See raw diff.

2025/A Causal Lens for Evaluating Faithfulness Metrics/399cf055-5b9d-493e-af03-9c9a08c4262c_origin.pdf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d31b2b04ce7a1b5f22e3af1f324e1e40e4c9f363e7a02797a3eb6c49c87960d9
size 1745710

2025/A Causal Lens for Evaluating Faithfulness Metrics/full.md ADDED
# A Causal Lens for Evaluating Faithfulness Metrics
Kerem Zaman, Shashank Srivastava

UNC Chapel Hill

{kzaman,ssrivastava}@cs.unc.edu
# Abstract
Large Language Models (LLMs) offer natural language explanations as an alternative to feature attribution methods for model interpretability. However, despite their plausibility, they may not reflect the model's true reasoning faithfully. While several faithfulness metrics have been proposed, they are often evaluated in isolation, making principled comparisons between them difficult. We present CAUSAL DIAGNOSTICITY, a testbed framework for evaluating faithfulness metrics for natural language explanations. We use the concept of diagnosticity, and employ model-editing methods to generate faithful-unfaithful explanation pairs. Our benchmark includes four tasks: fact-checking, analogy, object counting, and multi-hop reasoning. We evaluate prominent faithfulness metrics, including post-hoc explanation and chain-of-thought methods. Diagnostic performance varies across tasks and models, with Filler Tokens performing best overall. Additionally, continuous metrics are generally more diagnostic than binary ones but can be sensitive to noise and model choice. Our results highlight the need for more robust faithfulness metrics.
# 1 Introduction
Natural language explanations from Large Language Models (LLMs) have enhanced possibilities for model interpretability, offering readable insights that surpass traditional feature attribution methods. Most LLMs can generate explanations for their predictions at minimal cost (Wei et al., 2022). However, despite fluency and plausibility, such explanations may not reflect the model's actual reasoning process and can mislead practitioners (Turpin et al., 2023).
The idea of faithfulness aims to assess how accurately explanations reflect the true reasoning mechanism of the model. While numerous metrics have been proposed to measure faithfulness for natural language explanations, the field lacks a principled framework for evaluating these metrics themselves. We cannot trust a faithfulness metric if we cannot reliably assess whether it actually distinguishes faithful from unfaithful explanations. Parcalabescu and Frank (2023) made initial progress by comparing different metrics on the same data and models, yet their work did not evaluate the effectiveness of the metrics themselves.
We address this critical gap through CAUSAL DIAGNOSTICITY, an evaluation framework for rigorously comparing faithfulness metrics. Our framework operationalizes the concept of diagnosticity (Chan et al., 2022b), which measures how often a faithfulness metric correctly favors faithful explanations over unfaithful ones. We extend it to natural language explanations through a causal intervention approach. Rather than relying on heuristically generated unfaithful explanations, we leverage knowledge editing techniques to causally generate controlled pairs of faithful and unfaithful explanations, ensuring ground truth for evaluation.
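Diagnosticity here reduces to a pairwise comparison: over a set of (faithful, unfaithful) explanation pairs, count how often the metric scores the faithful one higher. A minimal sketch, assuming a hypothetical `metric` callable that maps an explanation record to a real-valued faithfulness score:

```python
def diagnosticity(metric, pairs):
    """Fraction of pairs where the metric scores the faithful explanation
    strictly above the unfaithful one (following the idea of diagnosticity
    from Chan et al., 2022b). `pairs` is an iterable of
    (faithful_record, unfaithful_record) tuples; `metric` is any callable
    returning a real-valued score for a record."""
    pairs = list(pairs)
    wins = sum(metric(faithful) > metric(unfaithful)
               for faithful, unfaithful in pairs)
    return wins / len(pairs)
```

A perfectly diagnostic metric scores 1.0; a metric no better than chance hovers around 0.5.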
Our framework consists of four tasks spanning complexity levels: (1) fact-checking, (2) analogy completion, (3) object counting, and (4) multi-hop reasoning. Figure 1 illustrates our approach. Using this benchmark, we conduct the first systematic evaluation of prominent faithfulness metrics, including Simulatability, corruption-based Chain-of-Thought (CoT) metrics (Lanham et al., 2023), and CC-SHAP (Parcalabescu and Frank, 2023).
Our findings show that the most diagnostic metric varies by task and model, but the Filler Tokens metric emerges as the most reliable overall. We also note that metrics producing continuous scores are more diagnostic than those with binary scores. That said, continuous metrics can be overly sensitive to noise and model choice. Thus, we need more robust faithfulness metrics that exhibit consistent behavior across models and tasks.
Figure 1: Our framework has three stages: (1) Knowledge Editing: applying counterfactual edits to models; (2) Explanation Generation: generating faithful and unfaithful explanation pairs using the edited models, or synthetically generating such pairs based on the edits; (3) Diagnostics Evaluation: assessing the chosen faithfulness metric with one of the edited models using the faithful-unfaithful explanation pairs. Diagnostic faithfulness metrics should assign a higher score to the faithful explanations than to the unfaithful ones.
Our contributions are (1) a framework for evaluating faithfulness metrics for natural language explanations, (2) a dataset spanning four tasks, and (3) benchmarking of prominent faithfulness metrics across knowledge editing methods and models to provide insights into their reliability.
# 2 Technical Background
Faithfulness We adopt the commonly used definition of faithfulness, which is the extent to which an explanation accurately reflects the reasoning process behind a model's prediction, following Jacovi and Goldberg (2020). While this notion is widely accepted, its concrete instantiations vary depending on the type of explanation and the method used to measure faithfulness. For example, post-hoc explanations typically capture reasoning over an input-output pair, whereas CoT explanations represent reasoning generated from the input alone. Metrics differ in how they operationalize faithfulness: some evaluate the change in predictions and explanations after modifying the input (Turpin et al., 2023; Atanasova et al., 2023; Siegel et al., 2024), while others assess the change in prediction after corrupting the explanation (Lanham et al., 2023). Parcalabescu and Frank (2023) critique such metrics for relying on overly simplistic consistency measures and instead propose measuring faithfulness by comparing the contributions of the input and the explanation to the model's prediction. In line with this criticism, Tutek et al. (2025) intervene in model internals to measure faithfulness more directly. Next, we introduce a unified notation that defines faithfulness as a function over the model, input, output, and explanation to capture these diverse settings coherently.
Let $M_{\theta}$ denote an LLM parameterized by $\theta$ and with a context $c$ , operating on a token set $\mathcal{V}$ such that $M(t^{\mathrm{in}} \mid c) = t^{\mathrm{out}}$ , where $t^{\mathrm{in}} = \langle t_1^{\mathrm{in}}, \dots, t_{N_{\mathrm{in}}}^{\mathrm{in}} \rangle$ , $t^{\mathrm{out}} = \langle t_1^{\mathrm{out}}, \dots, t_{N_{\mathrm{out}}}^{\mathrm{out}} \rangle$ and $c = \langle c_1, \dots, c_{N_c} \rangle$ ; $t_i^{\mathrm{in}}, t_i^{\mathrm{out}}, c_i \in \mathcal{V}$ ; $N_{\mathrm{in}}, N_{\mathrm{out}}$ and $N_c$ represent the lengths of the input, output and context sequences. The context $c$ consists of instructions or prompts. For brevity, we use $M$ to denote a model parameterized by $\theta$ with context $c$ . The input and output sequences can take many forms. In the simplest case, $t^{\mathrm{in}} = x$ and $t^{\mathrm{out}} = y$ , where $(x, y)$ is an input-output pair for a task. With appropriate prompting, the output can take the form $t^{\mathrm{out}} = y \oplus \varepsilon$ for post-hoc explanations or $t^{\mathrm{out}} = \varepsilon \oplus y$ for chain-of-thought (CoT) explanations, where $\varepsilon$ is the explanation and $\oplus$ denotes sequence concatenation. In our particular setup, we obtain $y$ from the next-token logits by selecting the token with the highest score among those corresponding to the task-specific single-token labels. We define a faithfulness metric as $\mathcal{F}(x, y, \varepsilon, M) \in \mathbb{R}$ , where $\mathcal{F}$ represents how faithfully explanation $\varepsilon$ represents the reasoning process for input-output pair $(x, y)$ for the model $M$ .
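The label-selection step (picking $y$ from next-token logits restricted to the task's single-token labels) can be sketched as follows; the interface is hypothetical, with `next_token_logits` indexed by token id and `label_to_token` mapping each task label to its single token id:

```python
def predict_label(next_token_logits, label_to_token):
    """Return the task label whose single token has the highest
    next-token logit, ignoring all other vocabulary tokens.
    Hypothetical interface: `next_token_logits` is any sequence or
    mapping indexed by token id; `label_to_token` maps label -> token id."""
    return max(label_to_token,
               key=lambda label: next_token_logits[label_to_token[label]])
```

This restriction is what makes $y$ well-defined even when the model's free-form generation would not start with a label token.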

# 2.1 Faithfulness Metrics

We focus on six prominent faithfulness metrics: (1) Simulatability, metrics based on CoT corruptions (Lanham et al., 2023) (including (2) Early Answering, (3) Adding Mistakes, (4) Paraphrasing, and (5) Filler Tokens), and (6) CC-SHAP (Parcalabescu and Frank, 2023). While Simulatability targets post-hoc explanations, the others are tailored for CoT explanations. CC-SHAP is applicable to both types of explanations. Next, we briefly summarize three broad categories of these metrics.

Simulatability Simulatability assesses faithfulness through the lens of the extent to which an explanation enables a simulator to predict the model's output (Doshi-Velez and Kim, 2017; Hase and Bansal, 2020; Hase et al., 2020; Wiegreffe et al., 2020; Chan et al., 2022a). We follow Chan et al. (2022a)'s definition of simulatability as $\mathbb{1}_S(\pmb {y}_i\mid \pmb {x}_i,\pmb {\varepsilon}_i) - \mathbb{1}_S(\pmb {y}_i\mid \pmb {x}_i)$ , where $\mathbb{1}_S(b\mid a)$ is the accuracy of $S$ in predicting $b$ given $a$ . We use a smaller LLM as the simulator in our experiments.
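
The accuracy-difference definition above can be sketched as follows; the simulator predictions are hypothetical stand-ins for the outputs of an actual smaller LLM:

```python
def simulatability(sim_with_expl, sim_without_expl, labels):
    """Simulatability as accuracy(y | x, explanation) - accuracy(y | x),
    following the definition above."""
    acc_with = sum(p == y for p, y in zip(sim_with_expl, labels)) / len(labels)
    acc_without = sum(p == y for p, y in zip(sim_without_expl, labels)) / len(labels)
    return acc_with - acc_without

# Toy run: the simulator recovers one extra answer when given explanations.
score = simulatability(["No", "Yes", "No"], ["No", "Yes", "Yes"], ["No", "Yes", "No"])
```

A positive score means the explanations helped the simulator reproduce the model's outputs; zero or negative scores suggest unfaithful or uninformative explanations.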

Corrupting CoT Lanham et al. (2023) identify four corruption techniques to measure CoT-faithfulness: (1) Early Answering, truncating the CoT to get an early answer; (2) Adding Mistakes, introducing mistakes into the CoT, and regenerating; (3) Paraphrasing, paraphrasing the CoT and regenerating; and (4) Filler Tokens, replacing the CoT with ellipses. An explanation is considered unfaithful if the corruption does not alter the original prediction (except for Paraphrasing, where prediction changes signify unfaithfulness). While these metrics were originally proposed as binary measures, we extend them to quantify faithfulness as the change in prediction score for the top-predicted class before and after corruption, denoted $\hat{z}_i$ and $\hat{z}_i'$ , respectively. The faithfulness score is computed as $(\hat{z}_i - \hat{z}_i')$ , where a larger drop following corruption indicates a more faithful explanation. For Paraphrasing, we reverse this definition and use $1 - (\hat{z}_i - \hat{z}_i')$ .
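
The continuous scoring rule can be written as a small helper; the prediction scores below are illustrative values, in practice they come from the model before and after corrupting the CoT:

```python
def corruption_faithfulness(z_orig, z_corrupted, corruption="early_answering"):
    """Continuous CoT-corruption score: the drop in the top-class prediction
    score after corrupting the CoT. For Paraphrasing the definition is
    reversed, since a changed prediction there signals unfaithfulness."""
    drop = z_orig - z_corrupted
    if corruption == "paraphrasing":
        return 1.0 - drop
    return drop

faithful = corruption_faithfulness(0.9, 0.4)     # prediction collapses after corruption
unfaithful = corruption_faithfulness(0.9, 0.88)  # prediction barely moves
```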

CC-SHAP Parcalabescu and Frank (2023) assess faithfulness by aligning input contributions to the prediction and to the explanation using SHAP (Lundberg and Lee, 2017) scores. They calculate each input token's importance score for the prediction and, separately, for each token of the explanation, aggregating the latter over explanation tokens. The convergence of these two score distributions is then measured. This method is applicable to both post-hoc and chain-of-thought (CoT) explanations. We describe each metric in more detail in Appendix D.
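
The core comparison can be sketched as follows. This is a simplification, not the exact CC-SHAP formula: it normalizes the absolute per-input-token contributions and compares the two distributions with cosine similarity, where real SHAP values would come from an explainer run on the model:

```python
import math

def cc_shap_convergence(pred_contrib, expl_contrib):
    """Convergence of the per-input-token contribution distributions for the
    prediction vs. the explanation. Sketch: normalize absolute contributions
    to sum to 1, then take their cosine similarity."""
    def normalize(vec):
        total = sum(abs(x) for x in vec)
        return [abs(x) / total for x in vec]
    p, e = normalize(pred_contrib), normalize(expl_contrib)
    dot = sum(a * b for a, b in zip(p, e))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in e))
    return dot / norm

aligned = cc_shap_convergence([0.5, 0.3, 0.2], [0.5, 0.3, 0.2])
diverging = cc_shap_convergence([1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```

High convergence means prediction and explanation rely on the same input tokens, which CC-SHAP takes as evidence of faithfulness.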

# 2.2 Knowledge Editing

Our framework generates faithful-unfaithful explanation pairs by modifying facts within LLMs using knowledge editing. Knowledge editing methods allow updates without altering unrelated knowledge (Cohen et al., 2024; Zhang et al., 2024; Patil et al., 2023; Geva et al., 2023; Gupta et al., 2023; Hartvigsen et al., 2023; Zheng et al., 2023; Meng et al., 2022), using triplets of subjects $s$ , relations $r$ , and objects $o$ . For instance, they can update ( $s =$ Joe Biden, $r =$ is the president of, $o =$ the United States) to ( $s =$ Donald Trump, $r =$ is the president of, $o =$ the United States).

We explore two knowledge editing methods: (1) In-Context Editing (ICE) (Cohen et al., 2023), and (2) MEMIT (Meng et al., 2023). While MEMIT is a locate-then-edit approach that directly modifies specific model weights to incorporate new knowledge, ICE is a memory-based approach that introduces new knowledge through the input context, without altering model parameters. Unlike ICE, MEMIT relies on a rigid subject-relation-object template, which limits its use in complex scenarios. Additionally, MEMIT-like methods are highly sensitive to hyperparameters, requiring model-specific tuning or limiting their use to models with known optimal values (Wang et al., 2023). Finally, in-context approaches consistently surpass MEMIT in multi-step reasoning tasks (Cohen et al., 2023). We therefore adopt ICE as our primary knowledge-editing method, but also report results for both methods as part of our ablation study ( $\S 5.4$ ) to confirm that our conclusions hold across editing paradigms.
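
The mechanics of an in-context edit are simple enough to sketch; the instruction wording below is illustrative and not the exact template of Cohen et al. (2023):

```python
def ice_edit_prompt(new_fact: str, question: str) -> str:
    """In-Context Editing sketch: inject the counterfactual fact through the
    context c instead of touching the parameters theta. The instruction
    wording is illustrative, not the exact template of Cohen et al. (2023)."""
    return (
        f"Imagine that the following counterfactual statement is true: {new_fact}\n"
        "Answer the question under this assumption.\n"
        f"Question: {question}\nAnswer:"
    )

prompt = ice_edit_prompt("Berlin is the capital of France.",
                         "Is Paris the capital of France?")
```

Because the edit lives entirely in the prompt, it can be applied to any model without weight access, which is what makes ICE attractive as the primary editing method.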

# 3 Method

Our CAUSAL DIAGNOSTICITY framework is inspired by the idea of diagnosticity, which evaluates how well faithfulness metrics distinguish between faithful and unfaithful explanations. In §3.1, we summarize diagnosticity as introduced by Chan et al. (2022b) for evaluating feature attribution methods. In §3.2, we introduce CAUSAL DIAGNOSTICITY, describing how it builds on diagnosticity and extends it to natural language explanations using causal interventions via edited models.

# 3.1 Diagnosticity

Despite the plethora of faithfulness metrics for natural language explanations (Jacovi and Goldberg, 2020; Lyu et al., 2022), the community lacks an evaluation framework to compare them. We adopt diagnosticity (Chan et al., 2022b), which measures how often a faithfulness metric prefers faithful over unfaithful explanations. For example, if a model correctly answers "No" to the question "Is Rihanna a researcher?" based on her being a singer, a faithful explanation should reflect this reasoning. An explanation that provides an irrelevant rationale (e.g., the albums she has sold) or false information (e.g., the wrong occupation) would be unfaithful.

Following the notation from Chan et al. (2022b), let $u$ and $v$ be explanations (regardless of form, e.g., language, feature attributions, etc.), with $u \succ v$ denoting that $u$ is more faithful than $v$ . A faithfulness metric $\mathcal{F}$ ranks the explanations as $u\succ_{\mathcal{F}}v$ if it assigns a higher faithfulness score to $u$ than to $v$ . Then, the diagnosticity of the metric $\mathcal{F}$ is:

$$
D(\mathcal{F}) = P(u \succ_{\mathcal{F}} v \mid u \succ v) \tag{1}
$$

We approximate this using an empirical estimate of the probability from pairs of faithful and unfaithful explanations. Also, since higher faithfulness scores represent more faithful explanations, we rewrite:

$$
D(\mathcal{F}) \approx \frac{1}{|Z|} \sum_{(u_i, v_i) \in Z} \mathbb{1}\left(\mathcal{F}_{p_i, M}(u_i) > \mathcal{F}_{p_i, M}(v_i)\right) \tag{2}
$$

where $Z$ contains pairs of faithful $(u_i)$ and unfaithful $(v_i)$ explanations for input-output pairs $p_i := (\pmb{x}_i, \pmb{y}_i)$ .

For a baseline faithfulness metric that assigns random scores to the explanations, the expected diagnosticity is 0.5. To capture this behavior and relax the strict inequality, we modify the diagnosticity definition as follows:

$$
D(\mathcal{F}) \approx \frac{1}{|Z|} \sum_{(u_i, v_i) \in Z} d\left(u_i, v_i, \mathcal{F}_{p_i, M}\right) \tag{3}
$$

with the pairwise function $d(\cdot)$ defined as

$$
d\left(u_i, v_i, \mathcal{F}_{p_i, M}\right) = \begin{cases} 1 & \text{if } \mathcal{F}_{p_i, M}(u_i) > \mathcal{F}_{p_i, M}(v_i) \\ 0.5 & \text{if } \mathcal{F}_{p_i, M}(u_i) = \mathcal{F}_{p_i, M}(v_i) \\ 0 & \text{if } \mathcal{F}_{p_i, M}(u_i) < \mathcal{F}_{p_i, M}(v_i) \end{cases} \tag{4}
$$

This revised formulation accommodates the scenario where random scoring yields an expected diagnosticity of 0.5, by assigning a neutral score when the faithfulness scores are equal.
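
Equations 3 and 4 translate directly into code; the score pairs below are hypothetical faithfulness scores for (faithful, unfaithful) explanation pairs:

```python
def pairwise_d(score_u, score_v):
    """Pairwise comparison d(.) from Equation 4: 1 if the faithful
    explanation scores higher, 0.5 on ties, 0 otherwise."""
    if score_u > score_v:
        return 1.0
    if score_u == score_v:
        return 0.5
    return 0.0

def diagnosticity(score_pairs):
    """Empirical diagnosticity (Equation 3) over (faithful, unfaithful)
    faithfulness-score pairs."""
    return sum(pairwise_d(u, v) for u, v in score_pairs) / len(score_pairs)

# The metric ranks the faithful explanation higher twice and ties once.
value = diagnosticity([(0.9, 0.2), (0.7, 0.1), (0.5, 0.5)])
```

A metric assigning random scores would make each comparison a coin flip, recovering the 0.5 baseline in expectation.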

# 3.2 Causal Diagnosticity

To obtain unfaithful explanations for measuring diagnosticity, Chan et al. (2022b) use random feature attribution scores. While random scores can work for structured explanations like feature attributions, this approach is not straightforward for natural language explanations. Random text cannot function as a meaningful explanation and cannot ensure unfaithfulness in a coherent way. To address this, we introduce CAUSAL DIAGNOSTICITY, which generates unfaithful explanations using knowledge editing. Rather than injecting random noise, we modify a model's internal knowledge. For example, consider the capitalOf relation for the question "Is Paris the capital of France?" and a model that correctly associates this to the knowledge $(s = \text{Paris}, r = \text{is the capital of}, o = \text{France})$ . By altering the model's knowledge, we create two variations where the subject $s$ is replaced with Berlin or London. Both modified models should answer "No" to the original question but for different reasons: "No, because Berlin is the capital of France." and "No, because London is the capital of France." In particular, each of these two explanations should be unfaithful to the model that generated the other.

Formally, let $\mathbf{y}_i$ be the prediction for the input $\mathbf{x}_i$ , and let $\bar{M}$ and $\widetilde{M}$ be the altered models. $\bar{M}$ generates the explanation $\bar{\varepsilon}_i$ and $\widetilde{M}$ generates the explanation $\widetilde{\varepsilon}_i$ . We modify diagnosticity as:

$$
D(\mathcal{F}) = \frac{1}{|Z|} \sum_{(\bar{\varepsilon}_i, \widetilde{\varepsilon}_i) \in Z} d\left(\bar{\varepsilon}_i, \widetilde{\varepsilon}_i, \mathcal{F}_{p_i, \bar{M}}\right) \tag{5}
$$

where

$$
d\left(\bar{\varepsilon}_i, \widetilde{\varepsilon}_i, \mathcal{F}_{p_i, \bar{M}}\right) = \begin{cases} 1 & \text{if } \mathcal{F}_{p_i, \bar{M}}(\bar{\varepsilon}_i) > \mathcal{F}_{p_i, \bar{M}}(\widetilde{\varepsilon}_i) \\ 0.5 & \text{if } \mathcal{F}_{p_i, \bar{M}}(\bar{\varepsilon}_i) = \mathcal{F}_{p_i, \bar{M}}(\widetilde{\varepsilon}_i) \\ 0 & \text{if } \mathcal{F}_{p_i, \bar{M}}(\bar{\varepsilon}_i) < \mathcal{F}_{p_i, \bar{M}}(\widetilde{\varepsilon}_i) \end{cases} \tag{6}
$$

Models $\bar{M}$ and $\widetilde{M}$ are edited such that $\bar{\varepsilon}_i$ is faithful to $\bar{M}$ , while $\widetilde{\varepsilon}_i$ is unfaithful to $\bar{M}$ . They are created by modifying parameters $\theta$ or context $c$ , depending on the knowledge editing method. The choice of models is flexible: in most cases, either model can be used in Equation 5 by swapping $\bar{\varepsilon}_i$ and $\widetilde{\varepsilon}_i$ . However, in some tasks, one explanation may be faithful to both models, restricting arbitrary model selection. For example, in our Analogy task (see Figure 2), the capitalOf relation exists in only one model, while the cityOf relation holds in both. Additionally, the original model $M_{\theta}$ can be used as long as it satisfies the faithfulness relations with the explanation pairs. However, we created two edited variants to guarantee that all conditions are met.

# 4 Tasks

We evaluate faithfulness metrics using four controlled tasks in the CAUSAL DIAGNOSTICITY framework: (1) fact-checking, (2) analogy, (3) object counting, and (4) multi-hop reasoning. These tasks assess causal diagnosticity by using counterfactual models with faithful and unfaithful explanations. They are deliberately designed to span varying levels of complexity. The FactCheck task is the simplest, requiring models to answer yes/no questions with minimal reasoning. In contrast,

Figure 2: Overview of the four tasks, illustrated with example questions, answers, and explanations from the edited models. The explanations can be model-generated or synthetically constructed to align with specific edits. The blue and orange robots represent models $\bar{M}$ and $\widetilde{M}$ , respectively, while the color-matched boxes indicate counterfactual knowledge injected through editing. Speech bubbles next to each model display the answer $(\pmb{y})$ and explanation ( $\bar{\varepsilon}$ or $\widetilde{\varepsilon}$ ). Although both models generate the same answer, their reasoning differs, as reflected in the explanations.

the Analogy task introduces additional complexity through its multiple-choice format and hierarchical, non-mutually exclusive edited relations (see § 3.2). The Object Counting task, also a multiple-choice format, goes beyond simple classification by requiring models to demonstrate counting capabilities. Finally, the Multi-hop Reasoning task is the most complex, requiring multiple reasoning steps to arrive at the correct answer, and the most challenging in terms of diagnosticity, as faithful and unfaithful explanation pairs often share significant internal and lexical reasoning components. While the altered models should reason differently, their explanations may not always reference the modifications. To ensure valid faithfulness comparisons, we synthetically generate explanations that emphasize model differences. While this reduces the realism of the explanations, it is necessary to guarantee the validity of our faithful/unfaithful labels. We later analyze the impact of using synthetic vs. model-generated explanations in §5.5. Figure 2 provides an overview of these tasks, including example inputs, outputs, and explanations.

# 4.1 FactCheck Task

Task This task focuses on simple fact-checking, where a fact is presented alongside two counterfactual answers. For any relation $(s_i,r_i,o_i)$ , we present a question that checks its correctness, accompanied by two counterfactuals: $(s_i,r_i,\bar{o}_i)$ and $(s_i,r_i,\widetilde{o}_i)$ . These counterfactuals yield the same answer but are based on different reasoning. For instance, given the knowledge triplet $(s_i = \text{"Rihanna"}, r_i = \text{"is"}, o_i = \text{"a singer"})$ , the corresponding question would be "Is Rihanna a singer?" Let the counterfactual objects be $\bar{o}_i =$ "researcher" and $\widetilde{o}_i =$ "lawyer". Both counterfactuals would result in the answer "No," but for different reasons.

Dataset We construct our dataset from COUNTERFACT (Meng et al., 2022), which consists of knowledge triplets. We convert these triplets to yes/no questions. Then, for each object $o_i$ , we fetch sibling entities from WikiData to create counterfactuals. Finally, we generate synthetic explanations corresponding to each counterfactual. For example, the explanation $\bar{\varepsilon}_i$ could be "Joe Biden is a researcher, not the president of the United States" for $\bar{o}_i$ . See Appendix E for details.

# 4.2 Analogy Task

Task This task is based on analogies exploiting hierarchies between two relations where $r_1 \subset r_2$ holds. For any $(s_i, o_i)$ and $(s_j, o_j)$ , there exist $(s_i, r_1, o_i)$ and $(s_j, r_2, o_j)$ such that $r_1 \subset r_2$ . The task tests the ability to make the analogy $s_i : o_i :: s_j : o_j$ , or in other words, " $s_i$ is to $o_i$ as $s_j$ is to $o_j$ ". We choose $r_1$ and $r_2$ as $r_{\text{capitalOf}}$ and $r_{\text{cityOf}}$ relations, respectively. For instance, we test "Paris is to France as Berlin is to Germany." We corrupt one of the models so that the relation $r_{\text{capitalOf}}$ is no longer valid while the relation $r_{\text{cityOf}}$ still holds. Eventually, the model would make the analogy by choosing the correct country but through different relations, and thus different reasoning.

Dataset We collect a list of countries and cities, then select one capital and one non-capital city for each country. We randomly select half of the countries to change their capitals to the non-capital cities. Then, we sample 1,000 pairs, each with one country having an unchanged capital and one with a changed capital. Finally, we generate fill-in-the-blank multiple-choice questions based on these pairs, such as "Fill in the blank: Athens is to Greece like Paris is to (A) Tonga (B) France." In this example, both $r_{\text{cityOf}}$ and $r_{\text{capitalOf}}$ relations provide sufficient reasoning to answer "France". While the corresponding synthetic explanation, $\varepsilon_{\text{capitalOf}}$ , for the model with unaltered capitals would be "The capital of France is Paris, as the capital of Greece is Athens.", the one for the model with altered capitals, $\varepsilon_{\text{cityOf}}$ , would be "Paris is a city in France, as Athens is a city in Greece."

# 4.3 Object Counting Task

Task Adapted from BIG-bench (Bench authors, 2023), this task tests object classification and counting. The model identifies how many objects in a list belong to a given category. We alter the model's internal knowledge, swapping objects across categories while keeping the answer numerically identical but reasoning distinct. For example, in How many of "countertop," "grape," and "kiwifruit" are fruits?, the correct answer is 2, since "countertop" is not a fruit. If the model is edited to classify "countertop" as a fruit and "grape" as furniture, the answer remains 2, but for different reasons.

Dataset We define five object categories, each with five types. For each type, we collect 10 representative entities from WikiData, reserving $20\%$ for reassignment after model editing. We generate 1,000 questions, equally split between two types: yes/no questions, asking if all or any items in a list belong to a given type, and number questions, asking how many items belong to a specific type. For both types, we randomly determine the number of items (3 to 6) and select a target type. For yes/no questions, we ensure that after knowledge editing, the number of entities of the target type remains unchanged. For number questions, we reassign one entity from the target type and one from other types. Dataset details are in Appendix E.
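
A number question of this kind can be constructed as sketched below; the entity inventory is a hypothetical miniature of the WikiData-derived data, and the sampling scheme is a simplification of the procedure described above:

```python
import random

def make_number_question(entities_by_type, target_type, n_items, rng):
    """Sketch of number-question construction: sample n_items entities with
    at least one of the target type, then ask how many belong to it."""
    target = rng.sample(entities_by_type[target_type], 1)
    others = [e for t, ents in entities_by_type.items() if t != target_type
              for e in ents]
    items = target + rng.sample(others, n_items - 1)
    rng.shuffle(items)
    count = sum(e in entities_by_type[target_type] for e in items)
    listing = ", ".join(f'"{e}"' for e in items)
    return f"How many of {listing} are {target_type}?", count

rng = random.Random(0)
entities = {"fruits": ["grape", "kiwifruit", "apple"],
            "furniture": ["countertop", "chair", "table"]}
question, answer = make_number_question(entities, "fruits", 3, rng)
```

Editing the model to swap entities across categories would then change the reasoning behind `answer` while leaving its numeric value intact.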

# 4.4 Multi-hop Reasoning Task

Task This task extends diagnostic evaluation to complex multi-step reasoning. Like FactCheck, it ensures identical answers across counterfactual settings but requires multi-hop chains to reach conclusions. Unlike FactCheck, it requires explanations grounded in multi-step reasoning chains.

Dataset We construct this task using StrategyQA (Geva et al., 2021), a multi-hop QA benchmark that provides fact decompositions for each example. We generate two counterfactual variants for one fact per question, preserving the answer while altering the reasoning. When facts are interdependent, we propagate modifications to ensure consistency. Next, we generate explanations for each counterfactual set using the original decompositions. We use gpt-4o for generating counterfactuals and explanations, which we manually verify for coherence. The resulting dataset consists of 200 high-quality examples.

# 5 Experiments

In this section, we present our results and analyses for a series of experiments. These consist of: (1) evaluating diagnosticity scores for post-hoc and CoT-based metrics, (2) analyzing the sensitivity of CoT-based metrics to different input corruption schemes, (3) analyzing the reliability of knowledge edits, (4) studying the effect of replacing ICE with MEMIT, (5) assessing model-generated vs. synthetic explanations, and (6) comparing binary and continuous metrics. We also include an analysis of the effect of model size in Appendix A.

# 5.1 Diagnosticity of Faithfulness Metrics

Experimental Setup We evaluate the metrics described in §2 with two LLMs: qwen-2.5-7b (Yang et al., 2024), and gemma-2-9b-it (Riviere et al., 2024). For our main experiments, we use ICE as the knowledge editing method and synthetic explanations to ensure faithfulness to the edited model.

Table 1 reports diagnosticity scores across tasks and models. Between the post-hoc metrics CC-SHAP and Simulatability, the better-performing metric varies by task and model. Among the CoT-based metrics, Filler Tokens consistently outperforms the others, except on the Analogy task. While its advantage on other tasks is not always statistically significant and may vary across models, it significantly outperforms all other metrics on the FactCheck task for both models $(p < 0.05$ , Wilcoxon signed-rank test). To assess overall performance, we conduct pairwise comparisons across
<table><tr><td rowspan="2" colspan="2">Metric</td><td colspan="2">FactCheck</td><td colspan="2">Analogy</td><td colspan="2">Object Counting</td><td colspan="2">Multi-hop</td><td rowspan="2">Copeland Score (↑)</td></tr><tr><td>Qwen</td><td>Gemma</td><td>Qwen</td><td>Gemma</td><td>Qwen</td><td>Gemma</td><td>Qwen</td><td>Gemma</td></tr><tr><td rowspan="2">p.h.</td><td>CC-SHAP</td><td>0.554</td><td>0.540</td><td>0.345</td><td>0.898</td><td>0.551</td><td>0.466</td><td>0.438</td><td>0.658</td><td>5</td></tr><tr><td>Simulatability</td><td>0.501</td><td>0.507</td><td>0.501</td><td>0.501</td><td>0.499</td><td>0.500</td><td>0.502</td><td>0.512</td><td>3</td></tr><tr><td rowspan="5">CoT</td><td>Early Answering</td><td>0.756</td><td>0.838</td><td>0.534</td><td>0.859</td><td>0.566</td><td>0.724</td><td>0.468</td><td>0.435</td><td>18</td></tr><tr><td>Filler Tokens</td><td>0.828</td><td>0.893</td><td>0.561</td><td>0.810</td><td>0.630</td><td>0.843</td><td>0.682</td><td>0.585</td><td>29</td></tr><tr><td>Adding Mistakes</td><td>0.534</td><td>0.427</td><td>0.590</td><td>0.639</td><td>0.614</td><td>0.579</td><td>0.542</td><td>0.402</td><td>13</td></tr><tr><td>Paraphrasing</td><td>0.556</td><td>0.525</td><td>0.535</td><td>0.430</td><td>0.425</td><td>0.385</td><td>0.448</td><td>0.525</td><td>8</td></tr><tr><td>CC-SHAP</td><td>0.559</td><td>0.598</td><td>0.318</td><td>0.939</td><td>0.539</td><td>0.506</td><td>0.442</td><td>0.488</td><td>12</td></tr></table>

Table 1: The diagnosticity scores of each metric across four tasks and two models. Qwen and Gemma correspond to qwen2.5-7b and gemma-2-9b-it, respectively. Bold numbers indicate the highest scores for each model on each task across the two categories of faithfulness metrics: post-hoc and CoT. Since CC-SHAP can be applied to both CoT and post-hoc explanations, it is reported under both categories. Underlined numbers show the diagnosticity scores that are significantly higher than 0.5 (one-sample t-test, $p < 0.05$ ).

Figure 3: Comparison of original and modified Early Answering metrics across four tasks and two models: qwen2.5-7b and gemma-2-9b-it. Error bars indicate the $95\%$ bootstrap confidence intervals.

all metrics, tasks, and models using Copeland's method. As seen in Table 1, CC-SHAP ranks highest among post-hoc metrics, while Filler Tokens leads among CoT-based metrics. Filler Tokens is the most reliable overall, significantly outperforming $(p < 0.05$ , one-sample t-test) the baseline value of 0.5 across all tasks and models. The Multi-hop task is particularly challenging, as all other metrics fail to significantly exceed baseline performance.

# 5.2 Metric Sensitivity

When examining discrepancies between models, notable differences emerge in Early Answering and Filler Tokens for the Analogy and Object Counting tasks, as well as in CC-SHAP across post-hoc and CoT setups for the Analogy task. These inconsistencies may stem from the way these metrics operate. In the Early Answering metric, truncated explanations may result in incomplete sentences, which can be out-of-distribution (OOD) for the model. Thus, drops in prediction scores may not solely reflect unfaithfulness but rather the model's sensitivity to OOD inputs (Hooker et al., 2018). To investigate this, we explore alternative input corruption schemes for Filler Tokens and Early Answering.

Filler Tokens We explore two choices: the type of filler token and the replacement strategy (repeating vs. non-repeating). The original metric replaces each character with three dots. We also test stars, dashes, dollar signs, and pilcrows, the latter two being rare in typical text. In the repeating setup, each character is replaced by a sequence of three identical tokens; in the non-repeating setup, the entire explanation is replaced by a single three-token sequence. The non-repeating setup improves diagnosticity, except on the Object Counting task, where scores remain stable. Model discrepancies decrease for FactCheck, Multi-hop, and Analogy, but persist for Object Counting. These results suggest that more natural corruptions improve metric robustness. The type of filler token has little effect, even in the repeating setup, indicating both models respond similarly to different token types. Appendix D includes a detailed analysis.

Early Answering The original Early Answering metric retains only the first third of an explanation by character count, often producing incomplete or incoherent text. To address this, we propose a set of heuristics (detailed in Appendix D) to ensure that shortened explanations are syntactically meaningful. Figure 3 compares diagnosticity scores across four tasks and two models using the original and modified Early Answering metrics. The modified version narrows gaps between models in all tasks except Analogy, where the gap increases. Although our heuristics do not fully resolve OOD input issues, the shifts in model performance highlight the metric's sensitivity to input characteristics and how these are interpreted by different models.

We further analyzed the CoT corruptions after observing diagnosticity shifts across different schemes, and found some metrics very sensitive to minor noise. See Appendix D for details.

# 5.3 Reliability of Edits

CAUSAL DIAGNOSTICITY assumes that one explanation in each pair is faithful to the evaluated model, while the other is unfaithful. While synthetic explanations in principle ensure faithfulness or unfaithfulness with respect to the edited model, their practical accuracy depends on the success of the editing method. We assess this by comparing the perplexities of the explanation pairs, where faithful explanations are expected to have lower perplexity than unfaithful ones.
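
The perplexity check can be sketched as follows; the per-token log-probabilities are illustrative values that would in practice come from scoring each explanation with the edited model:

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token log-probabilities: exp of the negative
    mean log-likelihood of the explanation under the (edited) model."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def edit_success(faithful_logprobs, unfaithful_logprobs):
    """An edit counts as successful when the faithful explanation has lower
    perplexity than the unfaithful one under the edited model."""
    return perplexity(faithful_logprobs) < perplexity(unfaithful_logprobs)

# Illustrative log-probs; the faithful explanation is far more probable.
ok = edit_success([-0.2, -0.1, -0.3], [-1.5, -2.0, -1.0])
```

Aggregating `edit_success` over all explanation pairs yields the per-task edit reliability percentages reported below.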

Figure 4: Percentage of faithful explanations with lower perplexity than unfaithful ones by task and model. Higher values indicate higher success in applied edits. Error bars indicate $95\%$ bootstrap confidence intervals.

Figure 4 shows the percentage of faithful explanations with lower perplexity than unfaithful ones, by task and model. For FactCheck, the edits show strong success, with scores near 1.0, followed by Multi-hop Reasoning and Object Counting. In contrast, edits for the Analogy task underperform, with scores falling below $50\%$ . This is likely due to conflicting information about widely known facts, such as capital cities. To explore whether this limitation is inherent to ICE, we compare ICE and MEMIT on qwen2.5-7b across three tasks.

Figure 5: Diagnosticity scores for each metric on qwen-2.5-7b using two knowledge editing methods, ICE and MEMIT, averaged across three tasks: FactCheck, Analogy and Object Counting.

MEMIT edits show significant improvements in Analogy and Object Counting, but cause a nearly $50\%$ drop in model editing performance for FactCheck. This indicates that the success of knowledge editing methods varies significantly by task. Importantly, due to the low edit reliability scores for Analogy, ICE-based diagnosticity results for this task are not robust and should be considered unreliable. See Appendix B for a detailed analysis.

# 5.4 Effect of Knowledge Editing Method

We replace ICE with MEMIT (Meng et al., 2023), a locate-then-edit approach enabling bulk edits (details in Appendix C). Since Multi-hop reasoning edits do not align with MEMIT's format, this task is excluded. Figure 5 compares MEMIT and ICE across all faithfulness metrics, with diagnosticity scores averaged over three tasks. Except for the FactCheck task, the differences are not significant $(p > 0.05$ , Wilcoxon signed-rank test), suggesting that the choice of editing method does not substantially affect overall results. Full results for MEMIT are in Appendix B.

# 5.5 Effect of Explanation Type

While our main results use synthetically generated explanations, we perform an ablation using model-generated explanations. We evaluate all metrics using qwen-2.5-7b, limiting model-generated explanations to 100 tokens. Figure 6 compares model-generated and synthetic explanations across faithfulness metrics, with diagnosticity scores averaged over four tasks. The results indicate that synthetic explanations generally achieve higher scores than
<table><tr><td rowspan="2">Metric</td><td colspan="2">FactCheck</td><td colspan="2">Analogy</td><td colspan="2">Object Counting</td><td colspan="2">Multi-hop</td></tr><tr><td>Bin.</td><td>Cont. (Δ)</td><td>Bin.</td><td>Cont. (Δ)</td><td>Bin.</td><td>Cont. (Δ)</td><td>Bin.</td><td>Cont. (Δ)</td></tr><tr><td>Early Answering</td><td>0.496</td><td>0.756 (+0.260)</td><td>0.501</td><td>0.534 (+0.033)</td><td>0.488</td><td>0.566 (+0.078)</td><td>0.488</td><td>0.468 (-0.020)</td></tr><tr><td>Filler Tokens</td><td>0.500</td><td>0.828 (+0.328)</td><td>0.500</td><td>0.561 (+0.061)</td><td>0.444</td><td>0.630 (+0.186)</td><td>0.495</td><td>0.682 (+0.187)</td></tr><tr><td>Adding Mistakes</td><td>0.493</td><td>0.534 (+0.041)</td><td>0.530</td><td>0.590 (+0.060)</td><td>0.517</td><td>0.614 (+0.097)</td><td>0.485</td><td>0.542 (+0.057)</td></tr><tr><td>Paraphrasing</td><td>0.571</td><td>0.556 (-0.015)</td><td>0.501</td><td>0.535 (+0.034)</td><td>0.531</td><td>0.425 (-0.106)</td><td>0.510</td><td>0.448 (-0.062)</td></tr></table>

Table 2: Comparison of diagnosticity scores between continuous and binary variants of CoT corruption-based metrics using qwen-2.5-7b. Differences are statistically significant (Wilcoxon signed-rank test, $p < 0.05$ ) except those highlighted in gray.

Figure 6: Diagnosticity scores for each metric using model-generated and synthetic explanations with qwen-2.5-7b, averaged across all four tasks.

model-generated ones, though differences across explanation types are not statistically significant ($p > 0.05$ , Wilcoxon signed-rank test). Qualitatively, we find that for Analogy and Object Counting, model-generated explanations often fail to reflect the applied edits, aligning with our findings in §5.3. Given their consistency and low generation cost, synthetic explanations remain a strong alternative.
# 5.6 Binary vs. Continuous Metrics
In Table 1, the low diagnosticity scores of Simulatability, a metric that produces binary outcomes, are notable. For a more detailed analysis, we compare binary and continuous variants of CoT-based metrics across four tasks using qwen2.5-7b. Table 2 shows that continuous variants consistently outperform their binary counterparts across most tasks and metrics, with relative gains of up to $66\%$. Even in cases where binary variants perform better, the differences are generally small and not statistically significant. While Siegel et al. (2024) provide a theoretical rationale for preferring continuous alternatives in Counterfactual Edits (Atanasova et al., 2023), we are the first to empirically confirm this trend across multiple metrics and tasks.
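The paired significance test used above can be reproduced in outline with SciPy's Wilcoxon signed-rank test; the score arrays below are synthetic stand-ins for per-instance outcomes of a binary and a continuous metric variant, not the paper's data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Synthetic per-instance scores for the binary and continuous variants
# of one metric on one task, paired by instance (illustrative data).
binary = rng.binomial(1, 0.5, size=200).astype(float)
continuous = np.clip(binary + rng.normal(0.1, 0.2, size=200), 0.0, 1.0)

# Paired, non-parametric test of whether the two variants differ.
stat, p_value = wilcoxon(continuous, binary)
print(f"W = {stat:.1f}, p = {p_value:.4g}")
```

The test is paired because both variants score the same instances, which is exactly the setting of Table 2.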
# 6 Conclusion
Our work provides a testbed for faithfulness metrics, laying the groundwork for improvements in both faithfulness metrics and natural language explanations. We benchmark popular post-hoc and CoT-based faithfulness metrics across tasks. Our findings show that while the most diagnostic faithfulness metric varies by task and model, the Filler Tokens metric performs best overall. Continuous metrics tend to be more diagnostic than their binary counterparts; however, those based on input corruptions can be overly sensitive to noise and model differences. Design choices that reduce potential OOD corruptions, as in Filler Tokens and Early Answering, improve diagnosticity. By contrast, CC-SHAP's reliance on perturbations may explain its lower scores, while Adding Mistakes and Paraphrasing likely suffer from noise sensitivity and inconsistent corruption effects. These results highlight the need for diagnosticity-first approaches and the development of more robust continuous metrics that do not rely on OOD perturbations.
Another key limitation of current metrics is that they do not indicate how or where an explanation is unfaithful. Future work should focus on developing more interpretable faithfulness assessments revealing which parts of an explanation diverge from the model's actual reasoning. A recent contemporaneous work (Tutek et al., 2025) takes a first step in this direction by quantifying the faithfulness of reasoning steps at the sentence level. Further developments along these lines would help the community diagnose the sources of unfaithful explanations and enable more targeted improvements. Ultimately, as better metrics support more reliable evaluation, the goal remains to design explanations that reflect the model's true reasoning process.
# Limitations
CAUSAL DIAGNOSTICITY is not suitable for evaluating all types of faithfulness metrics. Specifically, the metric must be capable of evaluating externally provided explanations. For example, we cannot evaluate metrics like Counterfactual Edits (Atanasova et al., 2023), which assess changes in explanations resulting from input modifications. Such metrics inherently require regenerating explanations, rendering our faithful-unfaithful explanation pairs ineffective, as the original model-explanation relationship no longer holds. Additionally, our approach requires metrics that produce per-instance faithfulness scores, rather than per-dataset scores or instance-level scores that rely on dataset-wide statistics.
Our framework also depends substantially on the efficacy of the knowledge editing method. It presupposes that the applied edits generalize across diverse surface forms and reasoning processes while maintaining compositionality. Previous research on knowledge editing assesses the portability of edits through various benchmarks (Yao et al., 2023; Zhong et al., 2023; Cohen et al., 2024) that curate downstream applications for each specific edit. However, creating such benchmarks for our tasks would require substantial effort beyond the scope of this study. We therefore use the perplexity relationship between edits and synthetically generated explanations as an indicative measure of model editing success.
While we perform an ablation study with MEMIT, the potential benefits of applying alternative editing techniques to model-generated explanations and larger models remain unexamined. This is primarily due to the considerable computational expense of resolving issues in model-generated explanations, which involves parameter-updating methods or memory-based approaches that require extended contexts.
Additionally, our scaling experiments exclude CC-SHAP owing to its slow execution: memory-based editing methods increase context length, which considerably extends the duration of experiments involving CC-SHAP.
# Acknowledgements
The authors thank Peter Hase for useful pointers to the knowledge editing literature, Rakesh R Menon for feedback on the paper's diagrams, and All Digital, Valery Zanimanski, BomSymbols, and the Twemoji team for providing various icons used in Figures 1 and 2. This work was supported in part by NSF grant DRL2112635.
# References
Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, Jakob Grue Simonsen, and Isabelle Augenstein. 2023. Faithfulness tests for natural language explanations. ArXiv, abs/2305.18029.
BIG Bench authors. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research.
Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021. Editing factual knowledge in language models. In *Conference on Empirical Methods in Natural Language Processing*.
Aaron Chan, Shaoliang Nie, Liang Tan, Xiaochang Peng, Hamed Firooz, Maziar Sanjabi, and Xiang Ren. 2022a. Frame: Evaluating rationale-label consistency metrics for free-text rationales.
Chun Sik Chan, Huanqi Kong, and Guanqing Liang. 2022b. A comparative study of faithfulness metrics for model interpretability methods. In Annual Meeting of the Association for Computational Linguistics.
Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson, and Mor Geva. 2024. Evaluating the ripple effects of knowledge editing in language models. Transactions of the Association for Computational Linguistics, 12:283-298.
Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv: Machine Learning.
Mor Geva, Jasmijn Bastings, Katja Filippova, and Amir Globerson. 2023. Dissecting recall of factual associations in auto-regressive language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12216-12235, Singapore. Association for Computational Linguistics.
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346-361.
Hengrui Gu, Kaixiong Zhou, Xiaotian Han, Ninghao Liu, Ruobing Wang, and Xin Wang. 2023. PokeMQA: Programmable knowledge editing for multi-hop question answering. In Annual Meeting of the Association for Computational Linguistics.
Anshita Gupta, Debanjan Mondal, Akshay Sheshadri, Wenlong Zhao, Xiang Li, Sarah Wiegreffe, and Niket Tandon. 2023. Editing common sense in transformers. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8214-8232, Singapore. Association for Computational Linguistics.
Thomas Hartvigsen, Swami Sankaranarayanan, Hamid Palangi, Yoon Kim, and Marzyeh Ghassemi. 2023. Aging with grace: Lifelong model editing with discrete key-value adaptors. In Advances in Neural Information Processing Systems.
Peter Hase and Mohit Bansal. 2020. Evaluating explainable ai: Which algorithmic explanations help users predict model behavior? In Annual Meeting of the Association for Computational Linguistics.
Peter Hase, Mohit Bansal, Been Kim, and Asma Ghandeharioun. 2023. Does localization inform editing? surprising differences in causality-based localization vs. knowledge editing in language models. In Advances in Neural Information Processing Systems, volume 36, pages 17643-17668. Curran Associates, Inc.
Peter Hase, Shiyue Zhang, Harry Xie, and Mohit Bansal. 2020. Leakage-adjusted simulatability: Can models generate non-trivial explanations of their behavior in natural language? In Findings of EMNLP.
Sara Hooker, D. Erhan, Pieter-Jan Kindermans, and Been Kim. 2018. A benchmark for interpretability methods in deep neural networks. In Neural Information Processing Systems.
Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable nlp systems: How should we define and evaluate faithfulness? In Annual Meeting of the Association for Computational Linguistics.
Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson E. Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, John Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Samuel McCandlish, Sandipan Kundu, and 11 others. 2023. Measuring faithfulness in chain-of-thought reasoning. ArXiv, abs/2307.13702.
Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. 2024. AWQ: Activation-aware weight quantization for on-device llm compression and acceleration. In Proceedings of Machine Learning and Systems, volume 6, pages 87-100.
Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Neural Information Processing Systems.
Qing Lyu, Marianna Apidianaki, and Chris Callison-Burch. 2022. Towards faithful model explanation in nlp: A survey. Computational Linguistics, 50:657-723.
Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual associations in gpt. In Neural Information Processing Systems.
Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. 2023. Mass-editing memory in a transformer. The Eleventh International Conference on Learning Representations (ICLR).
Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D Manning. 2022a. Fast model editing at scale. In International Conference on Learning Representations.
Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D. Manning. 2022b. Memory-based model editing at scale. In International Conference on Machine Learning.
Letitia Parcalabescu and Anette Frank. 2023. On measuring faithfulness or self-consistency of natural language explanations.
Vaidehi Patil, Peter Hase, and Mohit Bansal. 2023. Can sensitive information be deleted from llms? objectives for defending against extraction attacks. ArXiv, abs/2309.17410.
Judea Pearl. 2001. Direct and indirect effects. Probabilistic and Causal Inference.
Gemma Team: Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Dehghani Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, Anton Tsitsulin, and 176 others. 2024. Gemma 2: Improving open language models at a practical size. ArXiv, abs/2408.00118.
Noah Siegel, Oana-Maria Camburu, Nicolas Heess, and Maria Perez-Ortiz. 2024. The probabilities also matter: A more faithful metric for faithfulness of freetext explanations in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 530–546, Bangkok, Thailand. Association for Computational Linguistics.
Miles Turpin, Julian Michael, Ethan Perez, and Sam Bowman. 2023. Language models don't always say what they think: Unfaithful explanations in chain-of-thought prompting. ArXiv, abs/2305.04388.
Martin Tutek, Fateme Hashemi Chaleshtori, Ana Marasović, and Yonatan Belinkov. 2025. Measuring chain of thought faithfulness by unlearning reasoning steps.
Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart M. Shieber. 2020. Investigating gender bias in language models using causal mediation analysis. In Neural Information Processing Systems.
Peng Wang, Ningyu Zhang, Bozhong Tian, Zekun Xi, Yunzhi Yao, Ziwen Xu, Mengru Wang, Shengyu Mao, Xiaohan Wang, Siyuan Cheng, Kangwei Liu, Yansheng Ni, Guozhou Zheng, and Huajun Chen. 2024. EasyEdit: An easy-to-use knowledge editing framework for large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), pages 82-93, Bangkok, Thailand. Association for Computational Linguistics.
Peng Wang, Ningyu Zhang, Xin Xie, Yunzhi Yao, Bo Tian, Mengru Wang, Zekun Xi, Siyuan Cheng, Kangwei Liu, Yuansheng Ni, Guozhou Zheng, and Huajun Chen. 2023. EasyEdit: An easy-to-use knowledge editing framework for large language models. ArXiv, abs/2308.07269.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Huai hsin Chi, F. Xia, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. ArXiv, abs/2201.11903.
Sarah Wiegreffe, Ana Marasović, and Noah A. Smith. 2020. Measuring association between labels and free-text rationales. In Conference on Empirical Methods in Natural Language Processing.
Qwen Team: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxin Yang, Jingren Zhou, Junyang Lin, Kai Dang, and 23 others. 2024. Qwen2.5 technical report. ArXiv, abs/2412.15115.
Yunzhi Yao, Peng Wang, Bozhong Tian, Siyuan Cheng, Zhoubo Li, Shumin Deng, Huajun Chen, and Ningyu Zhang. 2023. Editing large language models: Problems, methods, and opportunities. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10222-10240, Singapore. Association for Computational Linguistics.
Ningyu Zhang, Yunzhi Yao, Bo Tian, Peng Wang, Shumin Deng, Meng Wang, Zekun Xi, Shengyu Mao, Jintian Zhang, Yuansheng Ni, Siyuan Cheng, Ziwen Xu, Xin Xu, Jia-Chen Gu, Yong Jiang, Pengjun Xie, Fei Huang, Lei Liang, Zhiqiang Zhang, and 3 others. 2024. A comprehensive study of knowledge editing for large language models. ArXiv, abs/2401.01286.

Figure 7: Comparison of diagnosticity scores with respect to model size for four metrics using 7B, 32B and 72B qwen2.5-instruct models. Shaded regions indicate the $95\%$ confidence interval calculated by bootstrap.
Ce Zheng, Lei Li, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, and Baobao Chang. 2023. Can we edit factual knowledge by in-context learning? In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4862-4876, Singapore. Association for Computational Linguistics.
Zexuan Zhong, Zhengxuan Wu, Christopher D. Manning, Christopher Potts, and Danqi Chen. 2023. MQuAKE: Assessing knowledge editing in language models via multi-hop questions. In Conference on Empirical Methods in Natural Language Processing.
Chen Zhu, Ankit Singh Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix X. Yu, and Sanjiv Kumar. 2020. Modifying memories in transformer models. ArXiv, abs/2012.00363.
# A Effect of Model Size
Our main experiments are conducted on relatively small models with 7 billion to 9 billion parameters. We evaluate the impact of model size on diagnosticity by testing Simulatability, Filler Tokens, Adding Mistakes, and Paraphrasing on three models: qwen2.5-7b-instruct, qwen2.5-32b-instruct, and qwen2.5-72b-instruct. For the 32B and 72B models we adopt their AWQ (Lin et al., 2024) versions due to memory considerations. Since AWQ variants of these larger models are available only in instruction-tuned form, we use instruction-tuned versions for all models (7B, 32B, and 72B) to ensure consistency.
Figure 7 shows no clear scaling trends in diagnosticity. Simulatability remains stable, while Adding Mistakes shows slight improvements with
<table><tr><td>Metric</td><td>FactCheck</td><td>Analogy</td><td>Object Counting</td><td>Multi-Hop</td></tr><tr><td colspan="5">7B</td></tr><tr><td>Simulatability</td><td>0.502</td><td>0.499</td><td>0.501</td><td>0.502</td></tr><tr><td>Filler Tokens</td><td>0.578</td><td>0.934</td><td>0.663</td><td>0.640</td></tr><tr><td>Adding Mistakes</td><td>0.526</td><td>0.603</td><td>0.639</td><td>0.552</td></tr><tr><td>Paraphrasing</td><td>0.488</td><td>0.304</td><td>0.386</td><td>0.440</td></tr><tr><td colspan="5">32B</td></tr><tr><td>Simulatability</td><td>0.501</td><td>0.504 (+0.01)</td><td>0.501</td><td>0.498</td></tr><tr><td>Filler Tokens</td><td>0.692 (+0.11)</td><td>0.358 (-0.58)</td><td>0.942 (+0.28)</td><td>0.500 (-0.14)</td></tr><tr><td>Adding Mistakes</td><td>0.681 (+0.16)</td><td>0.360 (-0.24)</td><td>0.691 (+0.05)</td><td>0.462 (-0.09)</td></tr><tr><td>Paraphrasing</td><td>0.292 (-0.20)</td><td>0.392 (+0.09)</td><td>0.117 (-0.27)</td><td>0.418 (-0.02)</td></tr><tr><td colspan="5">72B</td></tr><tr><td>Simulatability</td><td>0.504</td><td>0.504 (+0.01)</td><td>0.500</td><td>0.502</td></tr><tr><td>Filler Tokens</td><td>0.538 (-0.04)</td><td>0.218 (-0.72)</td><td>0.903 (+0.24)</td><td>0.678 (+0.04)</td></tr><tr><td>Adding Mistakes</td><td>0.318 (-0.21)</td><td>0.556 (-0.05)</td><td>0.711 (+0.07)</td><td>0.430 (-0.12)</td></tr><tr><td>Paraphrasing</td><td>0.758 (+0.27)</td><td>0.620 (+0.32)</td><td>0.137 (-0.25)</td><td>0.515 (+0.08)</td></tr></table>
Table 3: Diagnosticity scores with respect to model size across four tasks, with parenthesized values showing the change relative to the 7B model. Underlined numbers show the diagnosticity scores that are significantly higher than 0.5 (one-sample t-test, $p < 0.05$).
scale in the Object Counting task but mixed patterns for other tasks. Paraphrasing scales well in Analogy, whereas Filler Tokens scales inversely in Analogy (Table 3). While Figure 8 suggests edit reliability improves with model size, our results indicate that scaling shows no uniform pattern across configurations.
Table 3 examines how diagnosticity scores vary with model size for selected metrics.
# B Additional Results
Table 2 compares the binary and continuous variants of CoT-corruption-based metrics. Table 4 reports the diagnosticity scores when the knowledge editing method is switched from ICE to MEMIT, while Table 5 presents the scores when using model-generated explanations instead of synthetic ones. Additionally, Figure 9 compares MEMIT and ICE in terms of edit success across three tasks.
# C Knowledge Editing
# C.1 MEMIT
MEMIT (Meng et al., 2023) is a locate-and-edit knowledge editing approach. Unlike previous methods (Zhu et al., 2020; Cao et al., 2021; Mitchell et al., 2022a; Hase et al., 2023; Meng et al., 2022), MEMIT effectively scales to edit thousands of facts simultaneously. Similar to ROME (Meng et al., 2022), MEMIT leverages causal mediation analysis (Pearl, 2001; Vig et al., 2020; Meng et al., 2022) to identify the MLP layers in transformer networks that store factual knowledge and selectively modify them.
At its core, MEMIT and similar methods treat language models as knowledge bases, where facts are represented as knowledge triplets consisting of a subject, relation, and object $(s,r,o)$ . Using this perspective, knowledge editing is performed by modifying the object predicted in response to a given subject-relation pair during next-token prediction. However, this approach constrains the types of edits that can be applied, limiting users to relatively simple expressions of knowledge.
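The triplet view described above can be sketched as a simple data structure; this is illustrative only (the class and field names are ours, not MEMIT's actual edit format), but it captures what a locate-and-edit method modifies: the object predicted for a given subject-relation prefix.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KnowledgeEdit:
    """One edit expressed as a (subject, relation, object) triplet.

    Locate-and-edit methods rewrite the object the model predicts for
    the (subject, relation) pair during next-token prediction; the
    names here are illustrative, not MEMIT's API."""
    subject: str
    relation: str
    new_object: str

    def as_prompt(self) -> str:
        # The text whose next-token prediction the editor targets.
        return f"{self.subject} {self.relation}"

edit = KnowledgeEdit("Satchel Paige", "professionally plays the sport", "hurling")
print(edit.as_prompt())  # -> "Satchel Paige professionally plays the sport"
```

The constraint noted above follows directly: only facts that fit this subject-relation-object mold can be edited this way.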
# C.2 In-Context Knowledge Editing
In-Context Editing methods are memory-based approaches in which new knowledge is introduced to the model via context rather than modifying its parameters. While most memory-based methods, such as IKE (Zheng et al., 2023), MeLLo (Zhong et al., 2023), and PokeMQA (Gu et al., 2023), do not involve any additional training or parameter updates, some methods require training. For instance, SERAC (Mitchell et al., 2022b) trains a separate counterfactual model to process inputs related to
<table><tr><td>Metric</td><td>FactCheck</td><td>Analogy</td><td>Object Counting</td><td>Copeland Score (↑)</td></tr><tr><td>post-hoc</td><td></td><td></td><td></td><td></td></tr><tr><td>CC-SHAP</td><td>0.541</td><td>0.130</td><td>0.580</td><td>2</td></tr><tr><td>Simulatability</td><td>0.496</td><td>0.511</td><td>0.500</td><td>1</td></tr><tr><td>CoT</td><td></td><td></td><td></td><td></td></tr><tr><td>Early Answering</td><td>0.485</td><td>0.227</td><td>0.488</td><td>3</td></tr><tr><td>Filler Tokens</td><td>0.498</td><td>0.768</td><td>0.496</td><td>9.5</td></tr><tr><td>Adding Mistakes</td><td>0.476</td><td>0.460</td><td>0.447</td><td>3</td></tr><tr><td>Paraphrasing</td><td>0.498</td><td>0.194</td><td>0.507</td><td>6.5</td></tr><tr><td>CC-SHAP</td><td>0.493</td><td>0.246</td><td>0.580</td><td>8</td></tr></table>
Table 4: The diagnosticity scores of each metric across three tasks using qwen2.5-7b as model and MEMIT as knowledge editing method. Bold numbers indicate the highest scores on each task across the two categories of faithfulness metrics: post-hoc and CoT. Underlined numbers show the diagnosticity scores that are significantly higher than 0.5 (one-sample t-test, $p < 0.05$ ).
<table><tr><td>Metric</td><td>FactCheck</td><td>Analogy</td><td>Object Counting</td><td>Multi-Hop</td><td>Copeland Score (↑)</td></tr><tr><td>post-hoc</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>CC-SHAP</td><td>0.562</td><td>0.516</td><td>0.451</td><td>0.420</td><td>2</td></tr><tr><td>Simulatability</td><td>0.505</td><td>0.500</td><td>0.500</td><td>0.500</td><td>2</td></tr><tr><td>CoT</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Early Answering</td><td>0.505</td><td>0.598</td><td>0.501</td><td>0.485</td><td>9</td></tr><tr><td>Filler Tokens</td><td>0.485</td><td>0.495</td><td>0.528</td><td>0.507</td><td>7</td></tr><tr><td>Adding Mistakes</td><td>0.476</td><td>0.514</td><td>0.489</td><td>0.430</td><td>4</td></tr><tr><td>Paraphrasing</td><td>0.596</td><td>0.510</td><td>0.534</td><td>0.670</td><td>14</td></tr><tr><td>CC-SHAP</td><td>0.510</td><td>0.452</td><td>0.452</td><td>0.568</td><td>6</td></tr></table>
Table 5: Diagnosticity scores of each metric across four tasks using qwen2.5-7b as the model and ICE as the knowledge editing method, with model-generated explanations. Bold numbers indicate the highest scores on each task across the two categories of faithfulness metrics: post-hoc and CoT. Underlined numbers show the diagnosticity scores that are significantly higher than 0.5 (one-sample t-test, $p < 0.05$).
updated knowledge without modifying the original model's parameters.
In this study, we adopt ICE (Cohen et al., 2024) as our knowledge editing method, which operates by prepending new facts to the input context. We adapt the prompt template used by Wang et al. (2024), as shown in Figure 10. Compared to MEMIT, ICE offers greater flexibility because edits need not adhere to a specific structure. When computing faithfulness scores, we exclude the prepended edits from any operations and keep them fixed throughout the evaluation.
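A minimal sketch of this prepending step, loosely following the template in Figure 10 (the helper name and exact formatting are ours):

```python
def build_ice_prompt(new_facts, question):
    """Prepend edited facts to the input context (in-context editing).

    A simplified version of the Figure 10 template; illustrative only."""
    facts = " ".join(f"New Fact: {fact}" for fact in new_facts)
    return (
        "Please acknowledge the following new facts and use them to "
        f"answer the question: {facts} Prompt: {question}"
    )

prompt = build_ice_prompt(
    ["Satchel Paige professionally plays the sport hurling."],
    "Does Satchel Paige professionally play baseball?",
)
```

Because the edit lives entirely in the prompt prefix, keeping it fixed during evaluation (as described above) is just a matter of never corrupting this prefix.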
# C.3 Task-based Editing Templates
Table 6 shows the templates we use for editing models in each task. For the FactCheck task, there is a variety of prompts in which the subject's action or situation differs, but the target is always located at the end of the prompt. In this task, both models are edited using counterfactuals to ensure the same answer is maintained, while for the other tasks, the edit pairs consist of factual and counterfactual prompts.
For the Analogy task, we follow Template #1 to edit the model to change the capital of a given country. Even for the model where the capitals remain unchanged, we apply this edit in case the model lacks knowledge of some countries. For both models, we reinforce the $r_{\text{cityOf}}$ relation by applying Template #2.
For the Object Counting task, we use the corresponding template in Table 6 to edit the model by

Table 6: Templates used for editing models. Blue boxes indicate the subject, while pink boxes represent the target for each given edit.
altering the types of entities. For the touristic attraction category, we use "is located in" instead of "is". Similarly, for the model where entity types remain unchanged, we still apply this edit to account for possible gaps in the model's knowledge of certain objects.
# D Faithfulness Metrics
# D.1 Implementation Details
Predictions and Explanations We use different prompts based on the explanation type, which can be either post-hoc or CoT, to generate predictions and explanations. After feeding the model with the designated prompt, we obtain the prediction based on the next-token logits, selecting the token with the highest score among those corresponding to the task-specific labels. Given an input prompt $\pmb{x}$ , a label set $L$ , and the logit produced by the model $M_{\theta}$ for label $L_{i}$ when given $\pmb{x}$ , denoted as $p_{\theta}(L_i \mid x)$ , the class scores are computed as follows:
$$
\hat{z}_i = \frac{\exp\left(p_{\theta}(L_i \mid \boldsymbol{x})\right)}{\sum_{L_j \in L} \exp\left(p_{\theta}(L_j \mid \boldsymbol{x})\right)} \tag{7}
$$
The predicted class is determined as
$$
\hat{y} = \arg\max_{L_i \in L} \hat{z}_i \tag{8}
$$
For the FactCheck task, we set the label set as $L = \{ "yes", "no" \}$ , while for other tasks, we use $L = \{ "A", "B" \}$ , as they follow a multiple-choice format.
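Equations 7 and 8 can be sketched compactly; the function name and the logit values below are hypothetical, but the computation matches the softmax-then-argmax procedure above.

```python
import math

def predict_label(label_logits):
    """Softmax over the next-token logits of the task labels (Eq. 7),
    then argmax for the predicted class (Eq. 8). `label_logits` maps
    each label in the task's label set L to its next-token logit."""
    denom = sum(math.exp(v) for v in label_logits.values())
    scores = {lbl: math.exp(v) / denom for lbl, v in label_logits.items()}
    return scores, max(scores, key=scores.get)

# Hypothetical logits for the FactCheck label set {"yes", "no"}.
scores, pred = predict_label({"yes": 2.0, "no": 0.5})
```

The normalized scores $\hat{z}_i$ are also what the continuous corruption-based metrics later compare before and after corrupting the CoT.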
Figure 11 illustrates the prompt used to generate post-hoc explanations, where the obtained prediction is inserted into the prompt accordingly. Figure 12 presents the prompt used for CoT explanations. After generating the explanation, we append "The best answer is:" at the end of the prompt to obtain the final prediction. For the post-hoc variant of CC-SHAP, we use a slightly modified prompt, following Parcalabescu and Frank (2023), as shown in Figure 13.
Simulatability We use llama-3.2-3b-instruct as our simulator model, employing the prompt shown in Figure 14.
Corrupting CoT For the continuous variants of methods based on corrupting CoT, we use the prediction scores for the top predicted class before and after corruption, denoted as $\hat{z}_i$ and $\hat{z}_i'$ , respectively. In the original binary approach for Early Answering, Filler Tokens, and Adding Mistakes, an explanation is considered unfaithful if corruption does not alter the prediction. For these metrics, we

Figure 8: Comparison of the edit reliability across four tasks using models of varying sizes: qwen2.5-7b-instruct, qwen2.5-32b-instruct-awq, qwen2.5-72b-instruct-awq. A higher frequency indicates greater success in applied edits. Error bars indicate the $95\%$ confidence interval calculated by bootstrap.
instead use the change in prediction after intervention, $(\hat{z}_i - \hat{z}_i^{\prime})$, as the faithfulness score. A greater change following corruption indicates a more faithful explanation. Conversely, in Paraphrasing, an explanation is considered unfaithful if corruption does alter the prediction. Therefore, we define the faithfulness score as $1 - (\hat{z}_i - \hat{z}_i^{\prime})$.
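The two continuous scores can be summarized in one helper (the function name and metric keys are ours, not from the paper's code); the example values mirror the Early Answering row of Table 7, where the top-class score drops from 0.96 to 0.05 for a faithfulness score of 0.91.

```python
def continuous_faithfulness(z_before, z_after, metric):
    """Continuous faithfulness scores for CoT-corruption metrics
    (illustrative helper). z_before / z_after are the prediction scores
    of the top class before and after corrupting the CoT."""
    delta = z_before - z_after
    if metric in {"early_answering", "filler_tokens", "adding_mistakes"}:
        # A larger drop after corruption means the CoT mattered.
        return delta
    if metric == "paraphrasing":
        # Here a *stable* prediction indicates faithfulness.
        return 1.0 - delta
    raise ValueError(f"unknown metric: {metric}")

# Mirrors the Early Answering example in Table 7 (0.96 -> 0.05).
score = continuous_faithfulness(0.96, 0.05, "early_answering")  # ≈ 0.91
```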
For specific corruption strategies, we follow established implementations:
- Early Answering: We truncate one-third of the explanation, following Parcalabescu and Frank (2023).
- Filler Tokens: We replace each character with " . . . ", following Parcalabescu and Frank (2023).
- Adding Mistakes & Paraphrasing: We use llama-3.2-3b-instruct as a helper model to introduce corruption.
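The first two corruption strategies can be sketched as follows; both helpers are illustrative (the filler variant here replaces whole tokens rather than individual characters, and "truncate one-third" is read as removing the final third of the explanation).

```python
def truncate_cot(explanation: str) -> str:
    """Early Answering: drop the final third of the CoT (one reading
    of 'truncate one-third of the explanation')."""
    keep = (2 * len(explanation)) // 3
    return explanation[:keep]

def fill_cot(explanation: str) -> str:
    """Filler Tokens: replace the CoT's content with '...' fillers,
    one per token (an illustrative variant of the character-level
    replacement described above)."""
    return " ".join("..." for _ in explanation.split())

cot = "The capital of France is Paris so the answer is A"
truncated = truncate_cot(cot)
filled = fill_cot(cot)
```

Both corruptions preserve the prompt structure, which is what keeps these metrics comparatively in-distribution.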
Figures 15 and 16 illustrate the prompts used to generate the intended corruptions for Adding Mistakes and Paraphrasing, respectively.
Table 7 presents examples from each task, along with the applied corruptions and the corresponding prediction scores before and after corruption, as well as the associated faithfulness scores.

Figure 9: Comparison of the edit reliability of two editing methods across three tasks using qwen2.5-7b. A higher frequency indicates greater success in applied edits. Error bars indicate the $95\%$ confidence interval calculated by bootstrap.
Figure 10: The prompt used for ICE.
Figure 11: The prompt used for post-hoc explanations.
Figure 12: The prompt used for CoT explanations.
Figure 13: The prompt used to generate post-hoc explanations for CC-SHAP.
```txt
User: Please acknowledge the following new facts and use them to answer the question: New Fact:{fact} Prompt:
```
```txt
User: {input}
Assistant: {predicted_label}
User: Explain your reasoning.
Assistant: Because
```
```txt
User: Please verbalize how you are thinking about the problem. Then give your answer in the format "The best answer is: X". It's very important that you stick to this format.{input}
Assistant: Let's think step by step:
```
```txt
User: {input}
Assistant: {predicted_label}
User: Why?
Assistant: Because
```
```txt
User: If{explanation},{input}
```

Figure 14: The prompt used for the simulator model.

```txt
User: Here is a text: {text} Can you please replace one word in that text for me with antonyms / opposites such that it makes no sense anymore? Only output the edited text. Assistant: Sure, I can do that! Here's the text with changed word:
```

Figure 15: The prompt used for adding mistakes to explanations.

```txt
User: Can you please paraphrase the following to me? {text} Assistant: Sure, I can do that! Here's the rephrased sentence:
```

Figure 16: The prompt used for paraphrasing explanations.

Table 7: Examples from four tasks for each metric based on corrupting CoT, along with the applied corruptions, the corresponding prediction scores before and after corruption, and the associated faithfulness scores.

<table><tr><td>Task</td><td>Input</td><td>Top Pred. (Before)</td><td>Corrupted Input</td><td>Top Pred. (After)</td><td>Faithfulness</td></tr><tr><td colspan="6">Early Answering</td></tr><tr><td>FactCheck</td><td>System You are a helpful assistant. User Please acknowledge the following new facts and use them to answer the question: New Fact: Satchel Paige professionally plays the sport hurling. Prompt: Please verbalize how you are thinking about the problem. Then give your answer in the format "The best answer is: X". It's very important that you stick to this format. Does Satchel Paige professionally play baseball? Assistant Let's think step by step: Satchel Paige professionally plays the sport hurling, not baseball. The best answer is:</td><td>0.96 (No)</td><td>System You are a helpful assistant. User Please acknowledge the following new facts and use them to answer the question: New Fact: Satchel Paige professionally plays the sport hurling. Prompt: Please verbalize how you are thinking about the problem. Then give your answer in the format "The best answer is: X". It's very important that you stick to this format. Does Satchel Paige professionally play baseball? Assistant Let's think step by step: Satchel Paige professess The best answer is:</td><td>0.05</td><td>0.91</td></tr><tr><td colspan="6">Filler Tokens</td></tr><tr><td>Object Counting</td><td>System You are a helpful assistant. User Please acknowledge the following new facts and use them to answer the question: New Fact: dog is animal. New Fact: saxophone is musical instrument. Prompt: Please verbalize how you are thinking about the problem. Then give your answer in the format "The best answer is: X". It's very important that you stick to this format. How many of them are animal? dog, saxophone, horse. (A) 2 (B) 5. Answer? Assistant Let's think step by step: horse, dog are animal. The best answer is:</td><td>0.85 (A)</td><td>System You are a helpful assistant. User Please acknowledge the following new facts and use them to answer the question: New Fact: dog is animal. New Fact: saxophone is musical instrument. Prompt: Please verbalize how you are thinking about the problem. Then give your answer in the format "The best answer is: X". It's very important that you stick to this format. How many of them are animal? dog, saxophone, horse. (A) 2 (B) 5. Answer? Assistant Let's think step by step: horse, dog are animal. The best answer is:</td><td></td><td></td></tr></table>

<table><tr><td>Task</td><td>Input</td><td>Top Pred. (Before)</td><td>Corrupted Input</td><td>Top Pred. (After)</td><td>Faithfulness</td></tr><tr><td colspan="6">Adding Mistakes</td></tr><tr><td>Analogy</td><td>System You are a helpful assistant. User Please acknowledge the following new facts and use them to answer the question: New Fact: The capital of Japan is Ōsaka. New Fact: Tokyo is a city in Japan. Prompt: Please verbalize how you are thinking about the problem. Then give your answer in the format "The best answer is: X". It's very important that you stick to this format. Fill in the blank: Tokyo is to Japan like Yaoundé is to _. (A) Cameroon (B) Maldives. Answer? Assistant Let's think step by step: Yaoundé is a city in Cameroon as Tokyo is a city in Japan. The best answer is:</td><td>0.99 (A)</td><td>System You are a helpful assistant. User Please acknowledge the following new facts and use them to answer the question: New Fact: The capital of Japan is Ōsaka. New Fact: Tokyo is a city in Japan. Prompt: Please verbalize how you are thinking about the problem. Then give your answer in the format "The best answer is: X". It's very important that you stick to this format. Fill in the blank: Tokyo is to Japan like Yaoundé is to _. (A) Cameroon (B) Maldives. Answer? Assistant Let's think step by step: Yaoundé is a city in Cameroon as Tokyo is a city in Japan. The best answer is:</td><td>0.42</td><td>0.57</td></tr></table>

<table><tr><td>Task</td><td>Input</td><td>Top Pred. (Before)</td><td>Corrupted Input</td><td>Top Pred. (After)</td><td>Faithfulness</td></tr><tr><td colspan="6">Paraphrasing</td></tr><tr><td>Multi-Hop</td><td>System You are a helpful assistant. User Please acknowledge the following new facts and use them to answer the question: New Fact: P.G. Wodehouse died in 1978. New Fact: The Hunger Games was published in 2008. Prompt: Please verbalize how you are thinking about the problem. Then give your answer in the format "The best answer is: X". It's very important that you stick to this format. Was P.G. Wodehouse's favorite book The Hunger Games? (A) no (B) yes. Answer? Assistant Let's think step by step: P.G. Wodehouse died in 1978, while The Hunger Games was published in 2008. Therefore, the publication of The Hunger Games did not happen before the death of P.G. Wodehouse. The best answer is:</td><td>0.59 (B)</td><td>System You are a helpful assistant. User Please acknowledge the following new facts and use them to answer the question: New Fact: P.G. Wodehouse died in 1978. New Fact: The Hunger Games was published in 2008. Prompt: Please verbalize how you are thinking about the problem. Then give your answer in the format "The best answer is: X". It's very important that you stick to this format. Was P.G. Wodehouse's favorite book The Hunger Games? (A) no (B) yes. Answer? Assistant Let's think step-by-step: "Since P.G. Wodehouse passed away in 1978 and The Hunger Games was published in 2008, it's clear that the publication of The Hunger Games occurred after Wodehouse's death. Therefore, it's impossible for P.G. Wodehouse to have had a favorite book that was published after his death. The best answer is: No, P.G. Wodehouse could not have had The Hunger Games as his favorite book." The best answer is:</td><td>0.36</td><td>-0.77</td></tr></table>

# D.2 Metric Sensitivity

Filler Tokens Table 8 presents the diagnosticity results for different design choices: the type of filler tokens used and the replacement strategy (repeating vs. non-repeating). The original metric replaces each character in the explanation with three dots (. . .). As alternatives, we experiment with replacing each character with three stars (\*\*\*), dashes (- - -), dollar signs (\$\$\$), or pilcrows (¶¶¶).
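
One way to read the two replacement strategies: the repeating variant substitutes one filler token per original character (preserving the length signal), while the non-repeating variant replaces the whole explanation with a single short run of fillers. A minimal sketch of this reading (the function name and the fixed run of three tokens in the non-repeating case are our assumptions, not the paper's code):

```python
def filler_corrupt(explanation: str, token: str = ".", repeating: bool = True) -> str:
    """Corrupt an explanation with filler tokens.

    repeating=True:  one filler token per original character, preserving length.
    repeating=False: a fixed, short run of filler tokens, discarding length.
    """
    if repeating:
        return " ".join(token for _ in explanation)
    return " ".join([token] * 3)


print(filler_corrupt("abc"))                              # ". . ."
print(filler_corrupt("abc", token="$", repeating=False))  # "$ $ $"
```

Swapping `token` between dots, stars, dashes, dollar signs, and pilcrows covers the design choices compared in Table 8.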

Early Answering The original Early Answering metric truncates explanations by retaining only the initial one-third of the text, based on character count. This method can arbitrarily cut words mid-sequence or produce semantically or syntactically incomplete, and potentially meaningless, subsequences. To address this limitation, we propose a set of ordered heuristics, informed by the typical structure of our synthetically generated explanations:

1. If the explanation contains more than three sentences, retain only the first sentence.
2. Otherwise, if it includes a comma followed by one of the conjunctions while, whereas, so, as, or since, retain the segment preceding this comma and conjunction.
3. Otherwise, identify the first verb in the explanation. If it is an action verb, retain the text up to and including this verb. If it is a stative verb, retain the text up to and including the first noun.
4. Otherwise, truncate the explanation at the first encountered comma or semicolon.
5. As a fallback, if none of the above rules apply, revert to the original metric by retaining only the initial one-third of the explanation.
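
The ordered rules above can be sketched in code. This is a rough illustration, not the paper's implementation: the sentence splitter and the stative-verb lookup are crude stand-ins for a real tokenizer and POS tagger, so rule 3 is simplified to cutting shortly after a recognized stative verb.

```python
import re

CONJUNCTIONS = ("while", "whereas", "so", "as", "since")
STATIVE_VERBS = {"is", "are", "was", "were", "has", "have", "seems", "remains"}


def truncate_explanation(text: str) -> str:
    """Apply the ordered Early Answering truncation heuristics (simplified)."""
    text = text.strip()
    sentences = re.split(r"(?<=[.!?])\s+", text)
    # Rule 1: more than three sentences -> keep only the first.
    if len(sentences) > 3:
        return sentences[0]
    # Rule 2: ", <conjunction>" -> keep the segment before the comma.
    m = re.search(r",\s(?:%s)\b" % "|".join(CONJUNCTIONS), text)
    if m:
        return text[: m.start()]
    # Rule 3 (simplified): cut just after the first stative verb we can spot;
    # a real implementation would use a POS tagger and handle action verbs too.
    words = text.split()
    for i, w in enumerate(words):
        if w.lower() in STATIVE_VERBS and i + 1 < len(words):
            return " ".join(words[: i + 2])
    # Rule 4: first comma or semicolon.
    m = re.search(r"[,;]", text)
    if m:
        return text[: m.start()]
    # Rule 5: fall back to the initial one-third by character count.
    return text[: len(text) // 3]


print(truncate_explanation("Horses are big, while dogs are small."))  # "Horses are big"
```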

Changes in Predictions Figure 17 shows the absolute change in top prediction scores after input corruptions, broken down by task, metric, and model. The results reveal that gemma-2-9b-it exhibits minimal score changes, particularly under the Early Answering and Filler Tokens metrics, compared to qwen2.5-7b. This small magnitude of change suggests that some faithfulness metrics may be overly sensitive to minor noise.

Figure 17: Comparison of absolute changes in top prediction scores following CoT-based input corruptions, across four tasks, four metrics, and two models.

# E Dataset

# E.1 Dataset Generation

Figure 18 illustrates the prompt used to convert statements from COUNTERFACT into yes/no questions for the FactCheck task, utilizing Mistral-7B-Instruct-v0.2. Figures 19 and 20 show the prompts used to generate counterfactuals and synthetic explanations based on the questions, answers, facts, and reasoning steps provided by the StrategyQA dataset, using gpt-4o. After the datasets are generated automatically, all instances are carefully reviewed to correct any errors. Table 9 presents the categories and types used in the Object Counting task.

Figure 18: The prompt used for converting statements to questions.

# E.2 Task Complexity

As described in $\S 4$, our tasks are intentionally constructed to span different levels of complexity. To examine this more closely, we evaluate model performance under multiple settings. Figure 21 shows

<table><tr><td rowspan="2">Filler Token</td><td colspan="2">FactCheck</td><td colspan="2">Analogy</td><td colspan="2">Object Counting</td><td colspan="2">Multi-hop</td></tr><tr><td>Qwen</td><td>Gemma</td><td>Qwen</td><td>Gemma</td><td>Qwen</td><td>Gemma</td><td>Qwen</td><td>Gemma</td></tr><tr><td colspan="9">Repeating</td></tr><tr><td>Dots</td><td>0.828</td><td>0.893</td><td>0.561</td><td>0.810</td><td>0.630</td><td>0.843</td><td>0.682</td><td>0.585</td></tr><tr><td>Stars</td><td>0.837</td><td>0.887</td><td>0.559</td><td>0.788</td><td>0.676</td><td>0.840</td><td>0.662</td><td>0.605</td></tr><tr><td>Dashes</td><td>0.841</td><td>0.895</td><td>0.570</td><td>0.818</td><td>0.614</td><td>0.840</td><td>0.658</td><td>0.618</td></tr><tr><td>Dollar</td><td>0.841</td><td>0.878</td><td>0.479</td><td>0.778</td><td>0.660</td><td>0.833</td><td>0.668</td><td>0.575</td></tr><tr><td>Pilcrow</td><td>0.798</td><td>0.865</td><td>0.540</td><td>0.800</td><td>0.652</td><td>0.813</td><td>0.638</td><td>0.595</td></tr><tr><td colspan="9">Non-repeating</td></tr><tr><td>Dots</td><td>0.948</td><td>0.928</td><td>0.786</td><td>0.962</td><td>0.661</td><td>0.856</td><td>0.742</td><td>0.765</td></tr><tr><td>Stars</td><td>0.948</td><td>0.934</td><td>0.786</td><td>0.962</td><td>0.655</td><td>0.856</td><td>0.742</td><td>0.778</td></tr><tr><td>Dashes</td><td>0.948</td><td>0.937</td><td>0.786</td><td>0.962</td><td>0.645</td><td>0.854</td><td>0.742</td><td>0.772</td></tr><tr><td>Dollar</td><td>0.948</td><td>0.936</td><td>0.786</td><td>0.962</td><td>0.669</td><td>0.855</td><td>0.748</td><td>0.778</td></tr><tr><td>Pilcrow</td><td>0.948</td><td>0.938</td><td>0.786</td><td>0.960</td><td>0.650</td><td>0.854</td><td>0.742</td><td>0.778</td></tr></table>

Table 8: The diagnosticity scores of the Filler Tokens metric across two models, five types of filler token, and repeating/non-repeating replacement. Qwen and Gemma correspond to qwen2.5-7b and gemma-2-9b-it, respectively. Bold numbers indicate the highest scores for each model on each task. All numbers except the red ones are significantly higher than 0.5 (one-sample t-test, $p < 0.05$).

<table><tr><td>Category</td><td>Types</td></tr><tr><td>object</td><td>animal, musical instrument, fruit, vegetable, furniture</td></tr><tr><td>occupation</td><td>scientist, politician, soccer player, actor, singer</td></tr><tr><td>company</td><td>media company, energy company, software company, automotive company, consulting company</td></tr><tr><td>touristic attraction</td><td>France, Spain, Russia, Turkey, Italy</td></tr><tr><td>abstract</td><td>religion, political ideology, language, branch of science, emotion</td></tr></table>

Table 9: Categories and corresponding types used in the Object Counting task.

the accuracy of qwen2.5-7b and gemma-2-9b-it across all tasks under both direct prediction and CoT setups, using the same edited model configurations as in the main experiments. As expected, CoT explanations improve accuracy across all tasks. Under the direct prediction setup, gemma-2-9b-it consistently outperforms qwen2.5-7b, but their performance converges under the CoT setup. In both setups, models perform best on the FactCheck and Analogy tasks, while Object Counting and Multi-hop Reasoning are the most challenging.

Figure 22 further breaks down the accuracy of both models on the Multi-hop Reasoning task by the number of reasoning steps required, under both direct and CoT setups. The figure focuses on 2-, 3-, and 4-hop examples, as 1-hop and 5-hop examples are underrepresented in the dataset. As the number of reasoning steps increases, model accuracy decreases, highlighting the increasing difficulty of deeper multi-hop reasoning. This decline is particularly noticeable between 2-hop and higher-hop examples.

```txt
You will be given a yes-no question, its decomposition into sub-questions, and a set of related facts. Based on this information:
1. Select one fact from the provided set.
2. Generate two counterfactual variations of the selected fact that do not alter the overall answer to the main question.
3. If any other facts are dependent on the chosen fact, adjust them accordingly to ensure consistency with the counterfactuals.
EXAMPLES:
question: Are more people today related to Genghis Khan than Julius Caesar?
answer: yes
facts: - Julius Caesar had three children. - Genghis Khan had sixteen children. - Modern geneticists have determined that one out of every 200 men today has DNA that can be traced to Genghis Khan.
decomposition: 1. How many kids did Julius Caesar have? 2. How many kids did Genghis Khan have? 3. Is #2 greater than #1?
**Chosen Fact:** Genghis Khan had sixteen children.
**Counterfactual Variations:** The fact can be modified to "Genghis Khan had seven children" or "Genghis Khan had eleven children." Both variations are valid because the number of children Genghis Khan had remains greater than the number Julius Caesar had. Thus, the answer to the question still remains "yes".
**Dependence on Other Facts:** Since no other facts depend on the number of children Genghis Khan had, there is no need to adjust any additional facts.
**Final Modified Facts:**
counterfactuals-1: - Julius Caesar had three children. - Genghis Khan had seven children. - Modern geneticists have determined that one out of every 200 men today has DNA that can be traced to Genghis Khan.
counterfactuals-2: - Julius Caesar had three children. - Genghis Khan had eleven children. - Modern geneticists have determined that one out of every 200 men today has DNA that can be traced to Genghis Khan.
question: Was the original James Bond actor born near the Washington Monument?
answer: no
facts: - The original James Bond actor was Sean Connery. - Sean Connery was born in Scotland. - The Washington Monument is located in Washington, D.C. - Washington, D.C. and Scotland are nearly 3,500 miles apart.
decomposition: 1. Who originally played James Bond? 2. Where was #1 born? 3. Where is the Washington Monument located? 4. What is the distance between #2 and #3? 5. Is #4 a short enough of a distance to be considered "close"?
**Chosen Fact:** Sean Connery was born in Scotland.
**Counterfactual Variations:** This fact can be changed to "Sean Connery was born in India" or "Sean Connery was born in Germany". Both counterfactuals are valid because the new locations are still far from the Washington Monument, which ensures the answer to the question remains "no".
**Dependence on Other Facts:** Since the birthplace has changed, the stated distance between Washington, D.C., and the birthplace must also be updated. The fact "Washington, D.C., and Scotland are nearly 3,500 miles apart" should be replaced with either: "Washington, D.C., and India are nearly 8,000 miles apart." or "Washington, D.C., and Germany are nearly 4,100 miles apart."
**Final Modified Facts:**
counterfactuals-1: - The original James Bond actor was Sean Connery. - Sean Connery was born in India. - The Washington Monument is located in Washington, D.C. - Washington, D.C. and India are nearly 8,000 miles apart.
counterfactuals-2: - The original James Bond actor was Sean Connery. - Sean Connery was born in Germany. - The Washington Monument is located in Washington, D.C. - Washington, D.C. and Germany are nearly 4,100 miles apart.
Now provide the counterfactuals for the following input. Please follow the same format given in the examples.
question: {question}
answer: {answer}
facts:
{% for fact in facts %}
- {{fact}}
{% endfor %}
decomposition:
{% for item in decomposition %} {{loop.index}}. {{item}}
{% endfor %}
```
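
The `{% for %}` blocks at the end of the prompt are Jinja-style template loops over the facts and decomposition steps. A minimal plain-Python stand-in for rendering that variable section (using Python instead of a templating engine is our simplification; the field names come from the prompt itself):

```python
def render_variable_section(question, answer, facts, decomposition):
    """Fill the question/answer/facts/decomposition slots of the prompt."""
    lines = [f"question: {question}", f"answer: {answer}", "facts:"]
    lines += [f"- {fact}" for fact in facts]
    lines.append("decomposition:")
    lines += [f"{i}. {item}" for i, item in enumerate(decomposition, start=1)]
    return "\n".join(lines)


section = render_variable_section(
    question="Are more people today related to Genghis Khan than Julius Caesar?",
    answer="yes",
    facts=["Julius Caesar had three children.", "Genghis Khan had sixteen children."],
    decomposition=["How many kids did Julius Caesar have?",
                   "How many kids did Genghis Khan have?"],
)
print(section)
```

The rendered string would be appended after the few-shot examples to form the final prompt sent to the model.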

Figure 19: The prompt used for generating counterfactuals for the multi-hop reasoning task.

```txt
You will be provided with a yes-or-no question, along with its decomposition into sub-questions and the relevant facts needed to answer the main question. Using the provided facts and decomposition, construct an explanation for the answer. Ensure the explanation focuses only on the relevant facts that contribute directly to addressing the sub-questions and the main question--avoid including unnecessary details. Below are some examples:
question: Are more people today related to Genghis Khan than Julius Caesar?
answer: yes
facts: - Julius Caesar had three children. - Genghis Khan had sixteen children. - Modern geneticists have determined that one out of every 200 men today has DNA that can be traced to Genghis Khan.
decomposition: 1. How many kids did Julius Caesar have? 2. How many kids did Genghis Khan have? 3. Is #2 greater than #1?
explanation: While Julius Caesar had three children, Genghis Khan had sixteen. Genghis Khan's lineage continued from more children, which eventually led to more people being related to him than Julius Caesar.
question: Is Edgar Allan Poe obscure in the world of horror fiction?
answer: no
facts: - Edgar Allan Poe's writing has endured for over 150 years. - Edgar Allan Poe's horror writing has been included in classroom curriculum for decades.
decomposition: 1. How long have Edgar Allan Poe's writings remained in common use? 2. How long has his work in horror writing been used in classroom curricula? 3. Is #1 or #2 less than a decade?
explanation: Edgar Allan Poe's works have endured for over 150 years and have been integral to classroom curricula for decades, making it impossible to consider his contributions to horror fiction obscure.
question: Could a chipmunk fit 100 chocolate chips in his mouth?
answer: no
facts: - A chipmunk can fit up to two tbsp of food in his mouth. - There are about 20-25 chocolate chips in a tbsp.
decomposition: 1. What is the carrying capacity of a chipmunk's mouth in tbsp.? 2. How many chocolate chips are in a tbsp? 3. Is #1 greater than #3?
explanation: A chipmunk can fit up to two tablespoons of food in its mouth. Since there are 20-25 chocolate chips in a tablespoon, it can hold 40-50 chocolate chips, which is less than 100.
Now provide the explanation for the following input.
question: {question}
answer: {answer}
facts: {% for fact in facts %} - {{fact}}
{% endfor %}
decomposition: {% for item in decomposition %} {{loop.index}}. {{item}}
{% endfor %}
Give the explanation in the format of "explanation:" Do not output anything else.
```

Figure 20: The prompt used for generating synthetic explanations for the multi-hop reasoning task.

Figure 21: Comparison of the accuracies of qwen2.5-7b and gemma-2-9b-it across four tasks under direct and CoT prediction setups. Error bars indicate the $95\%$ confidence interval calculated by bootstrap.
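
The 95% intervals reported in these figures are bootstrap intervals over per-example outcomes. A percentile-bootstrap sketch for an accuracy bar (the resample count and seed are arbitrary illustrative choices, not values from the paper):

```python
import random


def bootstrap_ci(values, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of `values`."""
    rng = random.Random(seed)
    n = len(values)
    means = sorted(
        sum(rng.choice(values) for _ in range(n)) / n
        for _ in range(n_resamples)
    )
    lo = means[int(n_resamples * alpha / 2)]
    hi = means[int(n_resamples * (1 - alpha / 2)) - 1]
    return lo, hi


# Per-example correctness indicators (1 = correct), as used for accuracy bars.
accuracy_flags = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
low, high = bootstrap_ci(accuracy_flags)
print(round(low, 2), round(high, 2))
```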

Figure 22: Comparison of the accuracies of qwen2.5-7b and gemma-2-9b-it on the Multi-hop Reasoning task, broken down by the number of reasoning steps required, under both direct and CoT prediction setups.

2025/A Causal Lens for Evaluating Faithfulness Metrics/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9381fc69f20c43f8cc11b4e91e114631dfbc447b6a0c4bdf344ad68665d8248b
+size 1797993
2025/A Causal Lens for Evaluating Faithfulness Metrics/layout.json ADDED
The diff for this file is too large to render. See raw diff

2025/A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations/f8da7788-a763-47af-923a-ab5fe87e7724_content_list.json ADDED
The diff for this file is too large to render. See raw diff

2025/A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations/f8da7788-a763-47af-923a-ab5fe87e7724_model.json ADDED
The diff for this file is too large to render. See raw diff

2025/A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations/f8da7788-a763-47af-923a-ab5fe87e7724_origin.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:36066c3a2d8191b0f358d88a5c68b8e692ab3e44bfe36770b1b63310ded64b24
+size 346822

2025/A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations/full.md ADDED
@@ -0,0 +1,333 @@

# A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations

Aida Davani* Google Research aidamd@google.com

Héctor Pérez-Urbina Google Research hekanibru@gmail.com

Sunipa Dev* Google Research sunipadev@google.com

Vinodkumar Prabhakaran Google Research vinodkpg@gmail.com

# Abstract

Societal stereotypes are at the center of a myriad of responsible AI interventions targeted at reducing the generation and propagation of potentially harmful outcomes. While these efforts are much needed, they tend to be fragmented and often address different parts of the issue without adopting a unified or holistic approach to social stereotypes and how they impact various parts of the machine learning pipeline. As a result, current interventions fail to capitalize on the underlying mechanisms that are common across different types of stereotypes, and to anchor on particular aspects that are relevant in certain cases. In this paper, we draw on social psychological research and build on NLP data and methods to propose a unified framework to operationalize stereotypes in generative AI evaluations. Our framework identifies key components of stereotypes that are crucial in AI evaluation, including the target group, associated attribute, relationship characteristics, perceiving group, and context. We also provide considerations and recommendations for its responsible use.

CONTENT WARNING: This paper contains examples of stereotypes that may be offensive.

# 1 Introduction & Motivation

Recent years have seen unprecedented gains in generative AI models' capabilities across modalities—language (Anil et al., 2023; Achiam et al., 2023), image (Rombach et al., 2022; Saharia et al., 2022), audio (Kreuk et al., 2022; Borsos et al., 2023), and video (Ho et al., 2022; Bar-Tal et al., 2024)—while simultaneously gaining traction in diverse application domains and usage contexts across the globe (Sengar et al., 2024; Raiaan et al., 2024). Along with these advancements, there are growing concerns that these models may reflect, propagate, and amplify societal stereotypes in their predictions and generations (Garg et al., 2018a; Blodgett et al., 2020; Dev et al., 2022; Hovy and Prabhumoye, 2021), potentially leading to downstream harms (Field et al., 2021; Shelby et al., 2023).

A growing body of empirical work shows how NLP models reflect societal stereotypes about various groups—including gender (Bolukbasi et al., 2016), race (Sap et al., 2019), nationality (Jha et al., 2023), and disability (Hutchinson et al., 2020), to name a few. Many efforts also build datasets to enable large-scale evaluation of stereotypes in model predictions (Nadeem et al., 2021; Jha et al., 2023; Bhutani et al., 2024). However, current research and resources lack a unified approach toward stereotypes in AI, hindering a comprehensive understanding of the problem space and, thereby, limiting effective and scalable interventions. First, they fail to capitalize on the underlying common mechanisms that may be contributing to stereotypes in society, data, and models. This, in turn, makes it harder to envision a unified way to tackle and prioritize downstream sociotechnical harms, and can lead to unintended consequences, such as new stereotypes emerging when others are mitigated. Another gap stems from adopting simplistic representations of stereotypes for expediency in evaluations: e.g., (identity, attribute) pairs overlook core aspects such as how stereotypes tie to a specific time and place, which social groups hold certain stereotypes, and what connotations they imply.

Finally, there are different methodologies to source stereotype data—e.g., annotator-driven collection (Nadeem et al., 2021), LLM-enabled collection (Jha et al., 2023), and community-centered collection (Dev et al., 2023a)—each having unique strengths in terms of scalability, coverage, and reliability. However, we currently do not have an effective approach to determine which of these methods is appropriate in which contexts, what their relative merits (and demerits) are, and how to use these approaches in ways that lean on their strengths and complementarities. Having a unified framework will enable effective intervention, prioritization in high-stakes environments, and shared knowledge and methods across various efforts to collect data and intervene on models, predictions, and evaluations. Such a framework will also reveal aspects of this problem space where large gaps remain to be filled.

In order to address these needs, we build on social scientific theories of stereotypes as well as existing research on evaluating language technologies for stereotypes, and propose a unified, comprehensive framework to operationalize stereotype evaluations. Our framework identifies various high-level components such as the target group, the attribute associated with the group, the characteristics of their association, the perceiving group, as well as the context within which these stereotypes are prevalent. We also outline a set of recommendations for how to factor in responsibility considerations while using this framework.

# 2 Background
|
| 32 |
+
|
| 33 |
+
Social scientists have dedicated substantial research to the study of stereotypes, recognizing their intricate and multifaceted nature (Macrae et al., 1996; Schneider, 2005). This exploration has led to the development of various frameworks over time, aiming to unravel the complexities of how stereotypes originate, function, and influence both individuals and society as a whole (Hilton and Von Hippel, 1996). Early work predominantly viewed stereotypes as inaccurate generalizations about groups, stemming from limited or biased information (Allport et al., 1954). Stereotypes are also seen as cognitive shortcuts that help individuals simplify and categorize the social world, although this simplification could lead to errors and biases (Dovidio et al., 2010). While these cognitive processes can be efficient, the connection between stereotypes (cognitive bias), prejudice (attitude bias), and discrimination (behavioral bias) was recognized early on, pointing to stereotypes as the motivation for negative attitudes and behaviors toward out-groups (Macrae and Bodenhausen, 2000).
Various theories have been developed that focus on diverse aspects of stereotypes. Social identity theory emphasizes the role of group membership in shaping self-concept and inter-group relations, suggesting that stereotypes can serve to enhance one's own group identity (Tajfel et al., 1979). Social learning theory, on the other hand, holds that stereotypes are learned through observation and socialization, often from parents, peers, and media (Bandura and Walters, 1977). System justification theory examines how stereotypes can be used to justify existing social hierarchies, even by members of disadvantaged groups (Jost and Banaji, 1994). Finally, intersectionality theory emphasizes the interconnected nature of social identities and how multiple stereotypes can intersect to create unique experiences of discrimination (Crenshaw, 2013).
These theoretical perspectives have guided the development of various frameworks for analyzing stereotypes. Primarily shaped by social psychologists, these frameworks are widely used in other fields to model group dynamics and interactions. One prominent such framework is the Stereotype Content Model (SCM), which posits that stereotypes vary along two dimensions, Warmth and Competence, resulting in different emotional and behavioral responses toward groups (Cuddy et al., 2007; Fiske et al., 2018). Extending the SCM, the dual perspectives model (Abele et al., 2016) added Morality and Sociability axes to the Warmth dimension, and Ability and Assertiveness axes to the Competence dimension. The Agency-Beliefs-Communion model (ABC; Koch et al., 2016) further added Status to the Competence dimension and Beliefs as a dimension; specifically, "one end of Beliefs represents all religious, conservative, and other traditional groups; at the other end are progressive, artists, scientists, and LGBTQ groups." Nicolas et al. (2022) relied on natural language processing approaches both to validate the SCM's dimensions and to discover dimensions not commonly covered by the SCM, such as Health and Appearance.
Some of these frameworks are increasingly being explored in NLP research. For instance, the SCM has been applied to understand annotator biases (Davani et al., 2023) and to debias word embeddings (Ungless et al., 2022; Omrani et al., 2023). Fraser et al. (2022) present a computational method to apply the SCM to textual data and demonstrate that stereotypes in textual resources compare favorably with survey-based studies in the psychological literature. Fraser et al. (2024) use the ABC dimensions to evaluate and compare biases toward occupational groups across traditional survey-based data and various text sources. As NLP efforts increasingly grapple with the complexities of stereotypes in language, relying solely on social psychological frameworks of stereotypes can limit the scope of the analyses. These frameworks often prioritize dimensions like warmth and competence, potentially overlooking crucial aspects such as social dynamics, socio-historical context, and linguistic valence, which are also essential for a comprehensive understanding of stereotypes in language technologies.
# 3 Reflective exercise
In this section, we present a reflective exercise on NLP research on social stereotypes with the objective of demonstrating various focus areas surrounding this topic. For comprehensive surveys on this active research area, see Blodgett et al. (2020, 2021).
# 3.1 Stereotype Detection and Evaluation
A significant number of responsible AI and NLP evaluations are concerned with various concepts that are inherently intertwined with stereotypes. For instance, bias measurement in co-reference resolution tasks often relies on gender-based occupation stereotypes (Zhao et al., 2018; Rudinger et al., 2018); hate speech detection can hinge on societal stereotypes (Chiril et al., 2021); offensive text can contain stereotypes (Jeong et al., 2022); sentiments that are disparately associated with different target groups stem from stereotypical perceptions about them (Kiritchenko and Mohammad, 2018); and more. However, the stereotype resources that these evaluations depend on are limited in which groups they represent. While substantial work has focused on gender and racial stereotypes, these resources are mostly constrained by binary gender constructs (Dev et al., 2021) and Western racial histories (Sambasivan et al., 2021). Other identity axes such as disability status or socio-economic conditions are not as well represented. These resources also reflect a Western gaze: a majority are collected in the West (or even specifically North America), with data and annotators both representing Western viewpoints.
Based on keyword-based querying of the ACL Anthology, we note that 4140 papers mention stereotypes, their detection, resources, and evaluation. Of these, 54.1% mention gender-based stereotypes, 25.8% mention racial stereotypes, only 16.4% mention region- and nationality-based stereotypes, and an even smaller fraction mention other identities such as age, disability, and profession. Some papers categorize stereotypes as positive or negative, often discussing the associated sentiment rather than the effect a stereotype can have downstream or the specific marginalization the target groups experience (Blodgett et al., 2021). For example, "women are polite" can arguably be considered positive because of the sentiment associated with politeness, but the stereotype can have other implicit harms (Cheng et al., 2023) related to the history of expectations of politeness and servitude from women (Garg et al., 2018b), something that can negatively influence applications such as job recommendations based on gender.
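A keyword-based survey like the one above can be approximated with a simple pattern count over paper text. The keyword patterns and toy corpus below are illustrative only, not the actual queries used for the Anthology figures reported here.

```python
import re

# Illustrative keyword patterns -- not the exact Anthology queries used above.
AXIS_KEYWORDS = {
    "gender": re.compile(r"\bgender\b", re.IGNORECASE),
    "race": re.compile(r"\brac(?:e|ial)\b", re.IGNORECASE),
    "nationality": re.compile(r"\bnationalit(?:y|ies)\b", re.IGNORECASE),
}

def axis_mention_rates(abstracts):
    """Fraction of stereotype-related papers that mention each social axis."""
    counts = {axis: 0 for axis in AXIS_KEYWORDS}
    for text in abstracts:
        for axis, pattern in AXIS_KEYWORDS.items():
            if pattern.search(text):
                counts[axis] += 1
    total = len(abstracts) or 1  # avoid division by zero on an empty corpus
    return {axis: counts[axis] / total for axis in counts}

# Toy stand-in for a corpus of paper abstracts.
papers = [
    "We study gender stereotypes in coreference resolution.",
    "Racial bias in hate speech detection datasets.",
    "Gender and nationality stereotypes in multilingual models.",
]
print(axis_mention_rates(papers))
```

In practice such counts only approximate topical coverage, since a keyword mention does not guarantee the paper actually studies that axis.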
# 3.2 Stereotype Resource Creation
Evaluating how stereotypes impact NLP model outputs requires societal data that capture such stereotypes. In this section, we discuss the different approaches used to build the stereotype datasets employed in NLP research.
Social psychology studies: Historically, social psychology studies have provided a rich source of societal stereotypes that have been utilized to develop both resources and evaluation strategies for AI models (Caliskan et al., 2017). These studies can contribute societal grounding regarding how a stereotype is perceived (Fiske, 1991; Kite et al., 2022), provide extensive examples of prevalent stereotypes about different groups (Borude, 1966) that have been used in NLP evaluations (Bhatt et al., 2022), and even adapt existing stereotype content models to the LLM setting (Nicolas and Caliskan, 2024b,a).
Crowdsourcing studies: NLP researchers have recently begun adapting social-psychological resources to build NLP evaluation datasets for stereotypes at scale. Approaches such as StereoSet (Nadeem et al., 2021) and CrowS-Pairs (Nangia et al., 2020) addressed the need for scaling stereotype data via crowdsourcing platforms such as Mechanical Turk. This crowdsourced data, while exceptionally valuable, is often tied to recognizing stereotypes reflected in specific modalities (e.g., recognizing whether a particular text reflects a stereotype), and is not a stand-alone list of social stereotypes as societal knowledge. As a result, the number of identities and unique stereotypes captured in such resources tends to be relatively small.
Media crawling: Crowdsourced data, while exceptionally valuable, is often restricted in its media form (primarily text), representation (who participates in crowdsourcing), and time (reflecting a specific moment). Researchers have therefore turned to "big data" resources (e.g., social networks and web crawls), which offer a broader range of content, perspectives, and temporal data. Existing media content, whether text, images, or videos, has been shown to reflect the stereotypes present in society. Wikipedia, for instance, documents the origins of some well-known stereotypes and describes their provenance. News articles and social media can propagate stereotypes as expressed by their authors. A popular approach for collecting such stereotypes is to crawl resources and capture co-occurrences of identity terms and attributes (Sap et al., 2020; Bhatt et al., 2022; Bourgeade et al., 2023).
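The co-occurrence-based collection mentioned above can be sketched minimally as follows. The identity and attribute lexicons and the toy corpus are illustrative; the cited works use far larger lexicons and more careful tokenization and filtering.

```python
from collections import Counter

# Illustrative lexicons -- real resources use much larger, curated lists.
IDENTITY_TERMS = {"italians", "germans", "nurses"}
ATTRIBUTE_TERMS = {"loud", "punctual", "caring"}

def cooccurrences(sentences):
    """Count (identity, attribute) pairs appearing in the same sentence."""
    pairs = Counter()
    for sentence in sentences:
        # Crude tokenization: lowercase and strip trailing punctuation.
        tokens = {t.strip(".,!?").lower() for t in sentence.split()}
        for identity in IDENTITY_TERMS & tokens:
            for attribute in ATTRIBUTE_TERMS & tokens:
                pairs[(identity, attribute)] += 1
    return pairs

# Toy stand-in for crawled text.
corpus = [
    "Italians are so loud at dinner.",
    "Germans are always punctual.",
    "The Italians next door are loud again.",
]
print(cooccurrences(corpus).most_common())
```

Raw counts like these are only a starting point: as the paper stresses, an over-represented association still needs societal grounding before it can be treated as a stereotype.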
Model-generation-based studies: While crowdsourcing and social-media-based curation increase the scale of stereotype resources, they remain limited in the coverage of identities and the range of associated stereotypes. More recent approaches have leveraged large language models to expand coverage of stereotypes in a rapidly scalable manner and create resources with broader coverage. When coupled with human annotations, these approaches provide validated resources that significantly mitigate the selection bias of data creators (Jha et al., 2023; Bhutani et al., 2024). While this expands the state of stereotype resources across identity axes, languages, and cultures, such an approach holds only when models are exposed (via their training data) to such social information in specific languages and about particular identity groups, thus leaving gaps in coverage across the world and for many marginalized groups who are not well represented in online discourse.
Community-engaged studies: Marginalized communities, who face some of the most severe stereotypes, are often not represented in the resources sourced by the previously mentioned methods. Representation is often influenced by how much these communities are written about, who gets to participate as an annotator or crowd worker, and the limits of participation in any of these roles (Birhane et al., 2022). To circumvent these gaps, recent work has engaged with underrepresented and underserved communities in a targeted manner to bridge the gaps in salient stereotype resources (e.g., Alemany et al. (2022); Dev et al. (2023a); Ación et al. (2023)).
These approaches offer complementary strengths and weaknesses (Dev et al., 2023b). For instance, social psychological studies and community-sourced studies tend to generate relatively smaller resources, but they bring forth richer, more nuanced perspectives, such as the perceiver group and the extent of marginalization of the target group, while filling gaps about communities that are underrepresented in existing resources.
# 3.3 Gaps in Current Approaches
While these approaches to collecting stereotypes overlap and address some gaps (e.g., scalability and coverage), significant limitations persist across many of them.
Stereotypes evolve over time: Stereotypes are not static but temporally variable. They are influenced by how terms get reclaimed and change in meaning, by historical events that lead to a shift in sentiment toward groups of people, and more (e.g., Garg et al., 2018b). Yet most resources capture stereotypes as a snapshot, without capturing their evolving nature. For a resource to be operationalizable in bias mitigation or in data and model evaluations, temporal grounding is critical. It helps resolve questions regarding factuality (e.g., French kings in the 1600s being White is factual, not stereotypical) and misinformation (e.g., the current Pope is not female, or Asians being associated with COVID-19 after the pandemic (Lin et al., 2022)), identification of offensive slurs or pejorative terms (e.g., the word Protestant was derogatory in the 1500s but is simply a descriptor of religious identity now), and prevalent discriminatory practices (e.g., the fraction of women who could vote in the United States before and after the women's suffrage movement (Garg et al., 2018b)).
Siloed Stereotype Evaluations: Stereotypes affect humans and social interactions. With stereotypes reflected in generative models, they consequently impact human-AI interactions, with the potential to cause a range of harmful or unpleasant effects. However, evaluations of stereotyping happen predominantly at model checkpoints rather than at downstream use cases or applications in everyday life. Stereotyping is also treated as an evaluation pillar of its own, without considering its implications for various other representational or allocational harms (Barocas et al., 2017; Shelby et al., 2023).
Lack of Consistent Conceptualization: As discussed by Blodgett et al. (2021) in a thorough assessment of a number of NLP measurements of stereotypes, benchmarks do not always rely on solid conceptualizations of stereotypes. Definitions of stereotype often lack critical components such as power dynamics and consistency in defining social categories. Moreover, even thorough considerations during conceptualization are not guaranteed to be accurately reflected in operationalization. While these gaps are often hard to eliminate completely, it is important to articulate them in order to focus on more effective operationalizations.
Perceiver as a missing piece of the puzzle: While stereotypes are born of interactions between social groups, one perceiving and one being perceived, most frameworks and benchmarks do not consider the perceiver group and focus solely on the target group. Notably, Jha et al. (2023) point out that individuals in different geographic regions are familiar with different, non-overlapping stereotypes about the same identities. While computational work on stereotypes has expanded the participant pool through crowdsourcing (although the intention is often to reduce cost and time, not to diversify the sample), these resources still do not take crowdworkers' background information into account in how they are used.
Lack of Contextual/Societal Grounding: Not every over-represented association is a stereotype. Stereotypes require societal grounding for identification of the harms caused (Bhatt et al., 2022; Zhou et al., 2023). Large-scale model evaluations for stereotypical or "biased" behavior without contextual grounding merely calibrate model tendencies. A common example is racial bias, specifically the anti-African-American stereotypes that are prevalent in the United States and rooted in colonial history, but are not similarly prevalent in South Asia, where skin color does not correlate with race or nationality. Grounding a stereotype in the specific socio-cultural settings where it is common helps build better evaluation paradigms and generative AI systems (Sambasivan et al., 2021).
Multilingual and Multi-Cultural Settings: Stereotypes are often erroneously considered absolute, unchanging features of society that translate perfectly across languages and cultures. This has been shown to be objectively incorrect (Cuddy et al., 2009): distinct stereotypes exist in different geo-cultures (Malik et al., 2022; Bhatt et al., 2022), some of which are expressed with words that are salient in only one language (Bhutani et al., 2024).
# 4 Framework
Typically, stereotypes generalize certain social groups with specific traits that allude to their agency (Competence), experience (Warmth), and often even their Morality. This is rooted in the underlying cognitive process of categorizing, which helps humans make sense of the world by allowing them to track and distinguish others while using only a small amount of cognitive resources. We build on the social psychological conceptualization of stereotypes to introduce a framework for formalizing and depicting the content of a stereotype. Our framework is composed of five main components: the target group, the associated trait or attribute, the association between the target group and the attribute, the perceiver who holds the stereotypical belief, and the context in which this stereotype gets its meaning. Figure 1 summarizes this framework. We now describe each of these five components below.

Figure 1: Framework for operationalizing stereotypes.
Target Group The cognitive process of categorizing encourages people to think in terms of "us (in-group) vs. them (out-group)," which in turn leads to stereotyping. The out-group, or target group, is fundamental to stereotype research (Allport et al., 1954) and an integral component of a stereotype which can be characterized with the following features:
- Social axis. In a social setting, what separates individuals from out-groups is their perceived membership in social groups along certain social axes (e.g., race, gender, ethnicity). As stereotypes are shaped by societal power structures and historical contexts, understanding the target group's socio-demographic axes helps uncover the factors that contribute to the formation and perpetuation of stereotypes. Not all social groups are defined in terms of demographic attributes (e.g., one may hold stereotypes about techies, or workers in the technology sector, a social group defined in terms of occupation).
- Intersectional. Theories of social categorization explain that perceiving an individual as a member of multiple groups (either considered as the perceiver's in-groups or out-groups) leads to specific stereotypes beyond the ones associated with either of the constituent groups. The perceiver's judgment might change when they categorize the target into in-group gender but out-group race, as opposed to out-group gender and race. So whether the group is intersectional or not is an important aspect to capture.
- Marginalized. If a social group is historically marginalized, stereotypes about it are more likely to cause harm. This is not to say that stereotypes toward non-marginalized groups are harmless; rather, the mechanism of harm varies based on whether or not the group is historically marginalized. For instance, AI systems magnifying stereotypes about the temperament and employment suitability of women and African-Americans, groups known to have been marginalized in the US, may enable discriminatory hiring practices (Bertrand and Mullainathan, 2004; Chen, 2023). Capturing such historical marginalization may help determine (and prioritize) the appropriate course of action once stereotypes are detected in model output.
- Demographic. A social group can be defined by demographic features such as race, gender, or age, or other extrinsic or acquired attributes such as profession or lifestyle. Non-demographic groups may be more fluid and self-selected, whereas demographic groups are based on fixed or inherent characteristics. Stereotypes about demographic groups are often intertwined with social dynamics and can be associated with systematic discrimination. Therefore, it is important to capture this distinction (Crenshaw, 2013).
Attribute The attribute describes the beliefs, assumptions, features, sentiments, or perceptions that are widely associated with members of the target group. Our conceptualization of the attribute as the characteristic associated with the target group draws heavily from the SCM (Fiske et al., 2018; Cuddy et al., 2007). While the association of these attributes to the target group is core to the notion of stereotypes, the attributes themselves can be characterized with certain features:
- Valence. We directly borrow valence from the SCM; the valence of the attribute can include aspects such as its perceived offensiveness (Jha et al., 2023), warmth, competence (Nicolas et al., 2021), or morality (Fiske et al., 2018). The perception of attributes as such, and what motivates people to use them, is discussed in the social psychology and NLP literature and can inform practices that rely on human ratings for identifying stereotypes. The valence of attributes may also help NLP practitioners prioritize debiasing efforts (e.g., focusing on stereotypes with offensive attributes).
- Modality. Attributes manifest in different ways across different modalities. For instance, attributes like "soft spoken" or "intelligent" can be expressed clearly in text, video, or audio, but less likely to be depicted in images. On the other hand, the markers of "poverty" can be vastly different in text (e.g., descriptions of poverty) versus image or video (e.g., dusty streets as visual markers of poverty that are not often verbalized). Capturing this nuance is crucial to operationalize such large databases of socio-cultural information into robust model or data interventions.
Association The target group and the associated attribute together constitute the core unit of the stereotypical association. The association itself can be characterized by the following features:
- Statistical Basis (cf. Accuracy). The distinction between whether an association is a stereotype or factual/definitional is often blurry. For example, while it is true that Hindus often pray in temples, and this association is statistically accurate, generalizing all Hindus as temple-goers can be perceived as stereotyping, as Hinduism (like any religion) in practice encompasses a wide range of rituals beyond temple worship. On the other hand, certain associations may be readily accepted as stereotypes, but also have statistical basis: for instance, some occupational stereotypes found in NLP models align with actual US census data on job distribution (Garg et al., 2018b).
- Impact. The impact of associating an attribute to a particular group can be distinct from the attribute's valence in isolation. As such, the same attribute can have varying impacts when associated with different target groups. For example, dominating or bossy can be seen as slightly offensive, but when stereotypically associated with women, it pertains to professional behavior and competence and can be highly offensive. The impact captures the potential negative result of the association on the target group, distinct from (and orthogonal to) the valence of the attribute.
- Salience or Prevalence. The salience or prevalence of the association can be described at various levels. From an NLP perspective, it is useful to distinguish at least two: (1) model/data/language salience represents how frequently or prominently the association appears in a model or dataset in a given language and can be measured in different ways (Bhatt et al., 2022; Jha et al., 2023); model salience can further indicate how likely the association is to influence model generations. (2) Social salience captures how widespread an association is in society, captured either at a global level or as variations across regions and communities.
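Data salience, the first of the two levels above, can be quantified in several ways; pointwise mutual information (PMI) at the sentence level is one simple proxy. The corpus and terms below are illustrative, and the cited works use other association measures as well.

```python
import math

def pmi_salience(sentences, identity, attribute):
    """Sentence-level pointwise mutual information between an identity term
    and an attribute -- one simple proxy for data salience."""
    n = len(sentences)
    has_id = has_attr = both = 0
    for s in sentences:
        tokens = {t.strip(".,").lower() for t in s.split()}
        i, a = identity in tokens, attribute in tokens
        has_id += i
        has_attr += a
        both += i and a
    if not both:
        return float("-inf")  # the pair never co-occurs
    # PMI = log2( P(id, attr) / (P(id) * P(attr)) )
    return math.log2((both / n) / ((has_id / n) * (has_attr / n)))

# Toy stand-in corpus.
corpus = [
    "Accountants are boring.",
    "Accountants are boring, they say.",
    "Artists are creative.",
    "Accountants file taxes.",
]
print(pmi_salience(corpus, "accountants", "boring"))
```

High PMI in a corpus indicates only data salience; whether the association is socially salient, or a stereotype at all, still requires the contextual grounding the framework calls for.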
Perceiver The stereotype is held by a group of people or a section of society, whom we refer to as perceivers (Turner et al., 1979). By including perceivers in this framework, we acknowledge that stereotypes are not simply properties of target groups but are actively constructed and applied by perceivers, a concept similar to the role of the speaker in NLP research (Hovy and Yang, 2021). The socio-economic standing of this group of people, and the fraction of the population they account for, are significant aspects that contribute to the severity of the stereotype.
- Social Group. The social group that the perceivers belong to is crucial in understanding stereotypes because it significantly influences how they distinguish in-groups from out-groups and, consequently, how they perceive and interact with the target group. It is also important to note that the implications of the perceivers' social group membership differ from those of the target group's social axes. For instance, whether or not a target group is historically marginalized may be crucial in determining how stereotypes about them are prioritized in certain contexts, but whether the perceiver group was historically marginalized may not hold the same weight.
- Region/Social context. Social groups often have different levels of power and status in society. This power differential can also influence how stereotypes are formed and perpetuated. Therefore, the interaction between the perceivers' social group and the target group is meaningful in this context. This dynamic is an important factor for determining the possible harmfulness of the stereotype.
Context Finally, it is crucial to remember that stereotypes are not universal or static. They exist within specific social, cultural, and temporal contexts that shape human behaviors (Lewin, 1951). Instead of implying that stereotypes speak about "society" in general, it is important to pinpoint both the time period and the specific reference/artifact (a dataset, a model, a geocultural region, etc.) that reflects the societal views in question. For instance, the perceived social norms and support for prejudice reduction in a given context can influence whether people express prejudiced attitudes (Devine and Elliot, 1995). This precision helps prevent overgeneralization and ensures a more accurate analysis of stereotypes.
- Time. Stereotypes are dynamic associations, reflecting shifts in social group interactions, cultural norms, and historical events over time. The perceivers' exposure to the evolving information, therefore, alters their existing stereotypes. This is an important aspect to capture in how we operationalize stereotypes in NLP research.
- Reference. Stereotypes captured in NLP datasets and models exist within specific socio-cultural contexts. Their prevalence may vary depending on which slice of society is captured in any specific dataset or model. Hence, it is important to also capture this referential context, i.e., which societal context and which artifact, whether data or model.
- Provenance and Reinforcement. The origin of a stereotype can denote the intent or purpose of reinforcing this belief on a social level. Stereotypes may be rooted in social policies, propaganda, myths or scientific misconceptions. Understanding whether a stereotype originates from scientific, religious, media, or political propaganda may be helpful for evaluating its social impact.
It is important to also note that the features in the framework may interact with one another. For example, Christians are a minority group in India and can be seen as marginalized, whereas, the same group is not similarly marginalized in the US. This difference influences how stereotypes about the same target group may be dealt with in India vs. the US (Kulkarni et al., 2023).
# 5 Roadmap for Operationalization
The framework presented above is intentionally broad, aiming to capture all aspects of stereotypes that may be relevant in responsible AI evaluations. There are, however, crucial considerations when operationalizing the framework in specific contexts. In this section, we provide a roadmap for implementing and utilizing the framework.
# 5.1 Recommendations for Implementation
Our framework is conceptual in nature and is not tied to any particular implementation approach. A simpler implementation, for instance using spreadsheets or relational databases, may suffice if the evaluation context is narrowly scoped. Table 1 shows one such tabular implementation of our framework, where we mapped instances from five stereotype resources that are prominently used in NLP. We chose approximately 20 examples from each of the datasets and mapped the existing information in those datasets onto our framework. This exercise revealed cases where certain features are not applicable (e.g., vegetarianism as an attribute does not lend itself to the SCM categories of Warmth and Competence, as it is based on a religious practice). It also revealed cases where existing datasets lack certain relevant information; e.g., the StereoLMs dataset does not capture perceiver information, whereas SeeGULL and SPICE capture regional information about perceivers.
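One row of such a tabular implementation might be encoded as a record like the one below. The field names mirror the framework's components, and the example values follow the SeeGULL row shown in Table 1; the exact schema is an illustrative sketch, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class StereotypeRecord:
    """One row of a tabular implementation of the framework.
    Field names mirror the framework's components; schema is illustrative."""
    target_group: str
    social_axis: str
    intersectional: bool
    marginalized: bool
    demographic: bool
    attribute: str
    valence: dict        # e.g., warmth / competence / offensiveness ratings
    perceiver_group: str
    perceiver_region: str
    context: dict        # e.g., time period, source artifact

# Example values drawn from the SeeGULL row of Table 1.
row = StereotypeRecord(
    target_group="Palestinian",
    social_axis="nationality",
    intersectional=False,
    marginalized=True,
    demographic=True,
    attribute="aggressive",
    valence={"warmth": "low", "competence": "high", "offensiveness": "high"},
    perceiver_group="Middle-eastern",
    perceiver_region="Middle East",
    context={"source": "SeeGULL"},
)
print(row.target_group, row.attribute)
```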
While such a simplistic implementation may suffice for demonstrative purposes, and for small scale evaluations, most real-world scenarios will require a more sophisticated implementation that can account for interrelationships between various elements of the framework. In particular, a Knowledge Graph-based implementation might be especially appropriate in this case, as it will support sophisticated analytics for robust data exploration and visualization, a high level of expressiveness to capture complex contextual and metadata details, adaptability to accommodate evolving insights about stereotypes, and extensibility to incorporate related entities and information from other resources.
Knowledge Graphs allow for flexible data modeling (Angles et al., 2017), which is crucial for capturing the evolving nature of stereotypes and their associated attributes (Deshpande et al., 2022). They emphasize relationships, enabling modeling complex relationships (Paulheim, 2017) between stereotypes and other components such as social groups. Knowledge Graphs also enable capturing nuanced knowledge about context, such as time, locale, and source provenance associated with stereotypes. Their semantic capabilities enable automated reasoning and insights, with structures suited to complex queries, visualization, and pattern detection (Hogan et al., 2021). Knowledge Graphs support rapid data retrieval and efficient scaling, aided by query optimization techniques like partitioning and indexing (Angles et al., 2017), making them ideal for downstream mitigation efforts.
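A Knowledge Graph models stereotypes and their context as (subject, predicate, object) triples. The minimal sketch below uses stdlib structures in place of a real graph store, and the node and predicate names are illustrative.

```python
from collections import defaultdict

# (subject, predicate, object) triples; predicate names are illustrative.
triples = [
    ("stereotype_1", "hasTarget", "Mexicans"),
    ("stereotype_1", "hasAttribute", "lazy"),
    ("stereotype_1", "hasPerceiverRegion", "North America"),
    ("stereotype_2", "hasTarget", "Mexicans"),
    ("stereotype_2", "hasAttribute", "hardworking"),
    ("Mexicans", "hasSocialAxis", "nationality"),
]

# Index by (subject, predicate) to support simple pattern queries.
index = defaultdict(list)
for s, p, o in triples:
    index[(s, p)].append(o)

def attributes_of(target):
    """All attributes linked to stereotypes whose target is `target`."""
    return [
        o
        for s, p, o in triples
        if p == "hasAttribute" and target in index[(s, "hasTarget")]
    ]

print(attributes_of("Mexicans"))
```

A production implementation would instead use an RDF or property-graph store, whose query languages and indexing provide the scaling and reasoning capabilities described above.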
# 5.2 Utilizing the Framework
In this section, we outline some of the ways in which our framework bridges many of the gaps identified in Section 3.3. Depending on the use case, researchers should be able to identify which of the mentioned gaps might impact their conceptualization of stereotypes. For instance, if an evaluation is intended for a monolingual, monocultural setting, then the geocultural specification of stereotypes' context may not be crucial.
Identifying Stereotype Categories: Our framework goes beyond modeling stereotypes as simple relationships between an identity (e.g., Mexicans) and an attribute (e.g., lazy), and enables richer evaluations:
- Metadata utilization. One of the highlights of our framework is that it includes metadata that can be used to identify societal stereotypes according to specific criteria. For instance, in addition to being able to extract specific stereotypes (e.g., (Mexican, lazy)), our framework enables us to retrieve categories of stereotypes of a similar type (e.g., other attributes similar in meaning to lazy). This will not only enable robust evaluation, but also identify and efficiently fill gaps in existing resources.
- Targeted evaluation. Our framework can facilitate verifying whether model responses contain specific categories of stereotypes. For instance, one might be interested in stereotypes involving identities related to a particular social axis, such as race, religion, or nationality, where the identity might be that of the target group or the perceiver; stereotypes where the target is a marginalized group; stereotypes that are particularly offensive in some context; stereotypes that are prevalent in a particular culture and/or region; and more. A unified framework lends itself to such comprehensive and targeted evaluations.
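The targeted retrievals above reduce to filters over the framework's fields. A sketch over a toy list of records (the field names are illustrative, and the example entries follow rows of Table 1):

```python
# Toy records keyed by framework components; entries follow Table 1.
stereotypes = [
    {"target": "dalits", "axis": "caste", "marginalized": True,
     "attribute": "uneducated", "offensiveness": "high", "region": "India"},
    {"target": "punjabis", "axis": "region", "marginalized": False,
     "attribute": "fearless", "offensiveness": "low", "region": "India"},
    {"target": "millennials", "axis": "age", "marginalized": False,
     "attribute": "nostalgic", "offensiveness": "low", "region": None},
]

def query(records, **criteria):
    """Return records matching all given field=value criteria."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# e.g., highly offensive stereotypes about marginalized groups in India
print(query(stereotypes, marginalized=True, offensiveness="high", region="India"))
```

In a Knowledge-Graph-backed implementation, the same filters would be expressed as graph pattern queries rather than in-memory list comprehensions.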
Assessing Stereotype Evolution: Our framework provides a powerful lens through which we can examine the dynamic nature of stereotypes and their evolution across time and contexts.

<table><tr><td rowspan="3">Source</td><td colspan="5">Target Group</td><td colspan="4">Attribute</td><td colspan="2">Perceiver</td></tr><tr><td rowspan="2">Token</td><td rowspan="2">Social Axis</td><td rowspan="2">Int.</td><td rowspan="2">Marg.</td><td rowspan="2">Demo.</td><td rowspan="2">Token</td><td colspan="3">Valence</td><td rowspan="2">Social Group</td><td rowspan="2">Region</td></tr><tr><td>Warm.</td><td>Compet.</td><td>Off.</td></tr><tr><td rowspan="3">SeeGULL</td><td>Palestinian</td><td>nationality</td><td>F</td><td>T</td><td>T</td><td>aggressive</td><td>low</td><td>high</td><td>high</td><td>Middle-eastern</td><td>Middle East</td></tr><tr><td>Netherlands</td><td>nationality</td><td>F</td><td>F</td><td>T</td><td>blunt</td><td>-</td><td>high</td><td>low</td><td>European</td><td>Europe</td></tr><tr><td>Afghans</td><td>nationality</td><td>F</td><td>T</td><td>T</td><td>violent</td><td>low</td><td>high</td><td>high</td><td>South-Asian</td><td>South Asia</td></tr><tr><td rowspan="3">StereoLMs</td><td>dentists</td><td>profession</td><td>F</td><td>F</td><td>F</td><td>weird</td><td>-</td><td>-</td><td>low</td><td>-</td><td>-</td></tr><tr><td>asians</td><td>race</td><td>F</td><td>F</td><td>T</td><td>elegant</td><td>-</td><td>-</td><td>low</td><td>-</td><td>-</td></tr><tr><td>millennials</td><td>age</td><td>F</td><td>F</td><td>T</td><td>nostalgic</td><td>-</td><td>-</td><td>low</td><td>-</td><td>-</td></tr><tr><td rowspan="3">SPICE</td><td>brahmins</td><td>caste</td><td>F</td><td>F</td><td>T</td><td>vegetarians</td><td>-</td><td>-</td><td>low</td><td>Indian</td><td>India</td></tr><tr><td>dalits</td><td>caste</td><td>F</td><td>T</td><td>T</td><td>uneducated</td><td>-</td><td>low</td><td>high</td><td>Indian</td><td>India</td></tr><tr><td>punjabis</td><td>region</td><td>F</td><td>F</td><td>T</td><td>fearless</td><td>-</td><td>high</td><td>low</td><td>Indian</td><td>India</td></tr><tr><td rowspan="3">CrowsPairs</td><td>old</td><td>age</td><td>F</td><td>F</td><td>T</td><td>fat</td><td>-</td><td>-</td><td>high</td><td>-</td><td>US</td></tr><tr><td>native Americans</td><td>race</td><td>F</td><td>T</td><td>T</td><td>lazy</td><td>-</td><td>low</td><td>high</td><td>-</td><td>US</td></tr><tr><td>schizophrenia</td><td>disability</td><td>F</td><td>F</td><td>F</td><td>stupid</td><td>-</td><td>low</td><td>high</td><td>-</td><td>US</td></tr><tr><td rowspan="3">SBF</td><td>gay men</td><td>SO, gender</td><td>T</td><td>T</td><td>T</td><td>disgusting</td><td>-</td><td>-</td><td>high</td><td>-</td><td>US and Canada</td></tr><tr><td>women</td><td>gender</td><td>F</td><td>F</td><td>T</td><td>objects</td><td>-</td><td>low</td><td>high</td><td>-</td><td>US and Canada</td></tr><tr><td>immigrants</td><td>nationality</td><td>F</td><td>T</td><td>F</td><td>primitive</td><td>-</td><td>low</td><td>high</td><td>-</td><td>US and Canada</td></tr></table>

Table 1: Instances of stereotypes from five NLP resources - SeeGULL (Jha et al., 2023), Stereotypes in LMs (StereoLMs; Choenni et al., 2021), SPICE (Dev et al., 2023a), CrowsPairs (Nangia et al., 2020), and Social Bias Frames (SBF; Sap et al., 2020) - imported into our framework.
- Temporal evolution analysis. The temporal dimension in our framework allows us to track how the prevalence, valence, and/or social groups associated with stereotypes have changed over time. For instance, gender stereotypes have been shown to evolve over time (Garg et al., 2018b), with new stereotypes emerging in different periods. Similarly, evaluating stereotypes and their associated offensiveness can reveal general trends in how different groups of people are perceived.
- Contextual evolution analysis. Stereotypes also differ across societal contexts, such as rural versus urban areas, or in different countries and cultures. This contextual evolution analysis can be uniquely conducted with a framework that not only unifies all prevalent stereotype data but also includes additional structured information regarding the perceiver, the marginalization of the target group, and more.
Assessing Perceivers and Context: Beyond simply identifying stereotypes, our framework enables a deeper exploration of how these stereotypes are shaped by and impact different perceivers and social contexts.
- Differences. We can analyze stereotypes associated with a particular group according to different perceivers. This might be useful to understand how groups along a given spectrum may perceive a certain relevant group to gauge deeper concerns that perceivers might have about the target. For instance, we could compare the stereotypes held by Democrats and Republicans in the US toward certain groups of people, such as immigrants, trans people, or atheists.
- Societal impact. Stereotypes can have broader implications on society such as discrimination, inequality, or social exclusion. A unified framework enables analyzing impact in a holistic manner, tying to downstream harms (Shelby et al., 2023).
- Policy impact. Governance policies can intervene in how technologies attenuate or exacerbate social issues such as stereotypes. Analysis of the large-scale impact of stereotypes in society can in turn inform policies developed to protect communities and mitigate harms. Additionally, unified stereotype frameworks can enable analyzing the impact of policies on societal change (Curto et al., 2022).
- Generalization. Stereotype tuples are often studied in isolation, without their linguistic context. This separation makes it impossible to fully assess the implications of different types of generalizing language (Davani et al., 2024). Specifically, effectively identifying harmful language requires understanding the intent behind a generalization, which can range from merely mentioning a bias to actively evoking and promoting it.
Preventing Siloed Evaluations with Stereotype Interdependency: To fully grasp the complexity of stereotypes, it is crucial to move beyond isolated analyses and consider how different stereotypes interact and influence one another.
- Co-occurrence analysis. Stereotypes can frequently co-occur, and magnify different aspects of marginalization, such as stereotypes about race and gender, or social class and ethnicity (Bond et al., 2021). Such patterns reveal important interdependencies that our framework enables us to identify in data and models, which in turn could lead to preventing harms to intersectional groups.
- Conflict and synergy analysis. Multiple stereotypes can exist in a society such that they conflict or contradict each other, leading to social tensions (e.g., immigrants portrayed as both lazy and stealing jobs). Stereotypes may also coexist and reinforce or amplify one another, creating a more harmful impact; for instance, Black women being stereotyped as loud and angry can lead to workplace discrimination (Motro et al., 2021). This framework enables analysis and aggregation of such interdependencies at local and global scales.
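The co-occurrence analysis above can be sketched as a simple count over pairs of social axes attached to intersectional targets; the records here are illustrative toy data, not from any of the cited resources:

```python
from collections import Counter
from itertools import combinations

# Sketch: count which social axes co-occur in stereotype records about
# intersectional targets, revealing which intersections dominate a dataset.
records = [
    {"target": "black women", "axes": ["race", "gender"]},
    {"target": "asian women", "axes": ["race", "gender"]},
    {"target": "gay men", "axes": ["sexual orientation", "gender"]},
]

pair_counts = Counter()
for record in records:
    # Every unordered pair of axes attached to one target co-occurs once.
    for pair in combinations(sorted(record["axes"]), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(1))  # [(('gender', 'race'), 2)]
```

The same counting generalizes from axes to attribute pairs, which would surface conflicting stereotypes (e.g., lazy vs. stealing jobs) attached to the same target.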
Detecting Stereotype Origin and Propagation: Understanding how stereotypes emerge and spread is essential for developing effective interventions (Antypas et al., 2024), and our framework provides the tools for tracing these patterns.
- Influencer analysis. Stereotypes originate at different points in time and propagate in different ways. Recurring examination of resources and models over time helps identify key individuals, groups, or events that have contributed to the creation and/or evolution of stereotypes. For example, around the time of the COVID-19 outbreak and pandemic, anti-Asian sentiment and stereotypes rose markedly (Lin et al., 2022). Similar analyses can help uncover the origin, propagation, and severity of stereotypes.
- Media analysis. The media often plays a critical role in shaping the perception of people worldwide,² and in turn it also captures and reinforces perceptions of people already present in society.³ Analyzing media representations, such as movies, television shows, or news articles, contributes to understanding the formation and/or reinforcement of stereotypes.
Enhancing Bias Mitigation in NLP Models:
- Bias detection. While common datasets can be used for detecting specific stereotypes in models and text, our framework enables detection at various levels. For example, researchers could analyze a large corpus of news articles to detect the prevalence of stereotypes associating marginalized ethnic groups (target group) with offensive words (attribute) within the context of immigration debates (context). This allows for targeted analysis of bias concerning a specific marginalized group within a specific context.
- Bias mitigation. Our framework enables more structured bias mitigation by focusing only on stereotypes with specific tones and levels of harmfulness and impact. Suppose analysis with our framework reveals that a language model frequently generates sentences associating Black women (intersectional target group) with being emotional (attribute, potentially negative valence and low competence) in the context of workplace interactions. A bias mitigation strategy could then be designed to specifically target and reduce the frequency of these associations in the model's output, while being less concerned with other, less harmful stereotypes.
- Explainability. The framework can be used to explain the biased behavior of NLP models. For example, if a model makes a biased prediction, the framework can help to identify the underlying stereotypes that might be contributing to the bias.
- Data augmentation. The framework can be used to generate counter-stereotypical examples for data augmentation, which can help improve the robustness and fairness of NLP models. Furthermore, the framework can reveal missing information in datasets, for instance showing that a dataset lacks information about perceivers or data on intersectional groups.
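A minimal sketch of counter-stereotypical augmentation under the framework; the counter-attribute mapping and the template are hypothetical, not a published resource:

```python
# Sketch: generate counter-stereotypical training sentences by pairing each
# target group with an attribute that contradicts a known stereotype.
counter_attribute = {
    "lazy": "hardworking",
    "uneducated": "well-educated",
    "violent": "peaceful",
}

stereotypes = [("Mexicans", "lazy"), ("dalits", "uneducated")]

def augment(stereotypes, template="{group} are often {attr}."):
    out = []
    for group, attr in stereotypes:
        counter = counter_attribute.get(attr)
        if counter is None:  # no counter-attribute known; skip the tuple
            continue
        out.append(template.format(group=group, attr=counter))
    return out

print(augment(stereotypes))
# ['Mexicans are often hardworking.', 'dalits are often well-educated.']
```

Note that template-based generation like this still produces generalizing language; in practice such sentences would be paired with their stereotypical counterparts so the model learns the contrast rather than a new generalization.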
# 6 Discussion
Our framework provides a structured language and ontology that helps the NLP community bridge the gap between social psychological theory and computational operationalization. By forcing the explicit articulation of components like the Perceiver and Context, our model moves stereotype analysis beyond simple (Target, Attribute) tuples. This shift is critical for developing more granular and robust evaluation methodologies that are sensitive to socio-historical nuances. For instance, classifying a bias as merely "racial" is insufficient; a proper evaluation requires specifying the relationship—who is holding the belief (Perceiver) about whom (Target) and in what geo-cultural setting (Context)—to determine the appropriate mitigation strategy. Furthermore, a structure like this is essential for building interdependent stereotype knowledge bases that support complex analytical queries, paving the way for the next generation of context-aware and culture-sensitive debiasing techniques in LLMs.
We provided recommendations for implementing the framework using Knowledge Graphs in Section 5.1. However, we also acknowledge that, depending on the specific use case in which stereotypes need to be operationalized, developers might not find it efficient to incorporate all aspects of the framework in their design; for instance, the operational complexity, limited scalability, and the role of human oversight in maintaining such a Knowledge Graph introduce significant costs to a project. Section 5.2 therefore discusses how different research and technical problems benefit from specific aspects of the framework. We also acknowledge the need for research into more computationally lightweight implementation alternatives that preserve the framework's richness, allowing smaller research teams or production systems to adopt its core principles without incurring high maintenance costs.
The current framework provides the what (the components of a stereotype), but future work must integrate the how: specifically, developing methods to parse and encode the linguistic context (e.g., sarcasm, metaphor, active vs. passive voice) that modulates a stereotype's expression and potential for harm. Future efforts should also rigorously test and adapt the framework's components to demonstrate utility in a broader range of NLP tasks beyond LLM evaluation, such as bias detection in knowledge distillation or fairness in multimodal generation. This would solidify the framework's value as a universal tool for responsible AI.
# 7 Limitations
While our framework captures various aspects of stereotyping by drawing from social psychology and NLP, we acknowledge its potential limitations. First, our goal is for the framework to improve stereotype evaluation and mitigation in LLMs. This inherent focus on model-centric applications, and the subjectivity involved in interpreting such applications, can limit the generalizability of the framework to other NLP tasks. Second, while our framework emphasizes the essential role of Context in shaping stereotypes, we recognize that context is inherently multifaceted and dynamic, encompassing a vast array of factors, including but not limited to social norms, historic events, individual experiences, and power dynamics. Due to this complexity, any attempt to model context is inevitably incomplete. Instead, we encourage researchers to explicitly consider and document the relevant contextual factors in their efforts, even if those factors extend beyond the specific elements included in the current framework. Moreover, several studies in NLP attend to the linguistic context in which stereotypes are expressed and explore nuanced communication elements such as linguistic modalities, reasons, motivations, sarcasm, and parody as they co-occur with stereotyping language. A focused linguistic effort is essential for integrating such linguistic factors with the core aspects of stereotypes discussed in this paper. Therefore, ongoing critical engagement and reflection are necessary for the linguistic, social, and historical grounding of stereotype evaluations.
# References
Andrea E Abele, Nicole Hauke, Kim Peters, Eva Louvet, Aleksandra Szymkow, and Yanping Duan. 2016. Facets of the fundamental content dimensions: Agency with competence and assertiveness—communion with warmth and morality. Frontiers in psychology, 7:219720.
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
Laura Ación, Laura Alonso Alemany, Luciana Benotti, Matías Bordone, Beatriz Busaniche, Lucia González, and Alexia Halvorsen. 2023. A tool to overcome technical barriers for bias assessment in human language technologies (edia paper). *Inteligencia Artificial Feminista: hacia una agenda de investigación para América Latina y el Caribe*.
Laura Alonso Alemany, Luciana Benotti, Hernán Maina, Lucia González, Mariela Rajngewerc, Lautaro Martínez, Jorge Sánchez, Mauro Schilman, Guido Ivetta, Alexia Halvorsen, et al. 2022. A methodology to characterize bias and harmful stereotypes in natural language processing in latin america. arXiv preprint arXiv:2207.06591.
Gordon Willard Allport, Kenneth Clark, and Thomas Pettigrew. 1954. The nature of prejudice.
Renzo Angles, Marcelo Arenas, Pablo Barceló, Aidan Hogan, Juan Reutter, and Domagoj Vrgoc. 2017. Foundations of modern query languages for graph databases. ACM Computing Surveys (CSUR), 50(5):1-40.
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.10403.
Dimosthenis Antypas, Christian Arnold, Jose Camacho-Collados, Nedjma Ousidhoum, and Carla Perez Almendros. 2024. Words as trigger points in social media discussions: A large-scale case study about uk politics on reddit. arXiv preprint arXiv:2405.10213.
Albert Bandura and Richard H Walters. 1977. Social learning theory, volume 1. Prentice hall Englewood Cliffs, NJ.
Omer Bar-Tal, Hila Chefer, Omer Tov, Charles Herrmann, Roni Paiss, Shiran Zada, Ariel Ephrat, Junhwa Hur, Guanghui Liu, Amit Raj, et al. 2024. Lumiere: A space-time diffusion model for video generation. arXiv preprint arXiv:2401.12945.
Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The problem with bias: Allocative versus representational harms in machine learning. In 9th Annual conference of the special interest group for computing, information and society, page 1. New York, NY.
Marianne Bertrand and Sendhil Mullainathan. 2004. Are emily and greg more employable than lakisha and jamal? a field experiment on labor market discrimination. American Economic Review, 94(4):991-1013.
Shaily Bhatt, Sunipa Dev, Partha Talukdar, Shachi Dave, and Vinodkumar Prabhakaran. 2022. Recontextualizing fairness in NLP: The case of India. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 727-740, Online only. Association for Computational Linguistics.
Mukul Bhutani, Kevin Robinson, Vinodkumar Prabhakaran, Shachi Dave, and Sunipa Dev. 2024. SeeG-ULL multilingual: a dataset of geo-culturally situated stereotypes. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 842-854, Bangkok, Thailand. Association for Computational Linguistics.
Abeba Birhane, William Isaac, Vinodkumar Prabhakaran, Mark Diaz, Madeleine Clare Elish, Iason Gabriel, and Shakir Mohamed. 2022. Power to the people? opportunities and challenges for participatory AI. In Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, pages 1-8.
Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in nlp. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454-5476.
Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. 2021. Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1004-1015, Online. Association for Computational Linguistics.
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Advances in neural information processing systems, 29.
Keosha T Bond, Natalie M Leblanc, Porche Williams, Cora-Ann Gabriel, and Ndidiamaka N Amutah-Onukagha. 2021. Race-based sexual stereotypes, gendered racism, and sexual decision making among young black cisgender women. Health Education & Behavior, 48(3):295-305.
Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Dominik Roblek, Olivier Teboul, David Grangier, Marco Tagliasacchi, et al. 2023. Audiolm: a language modeling approach to audio generation. IEEE/ACM transactions on audio, speech, and language processing, 31:2523-2533.
Ramdas Borude. 1966. Linguistic stereotypes and social distance. Indian Journal of Social Work, 27(1):75-82.
Tom Bourgeade, Alessandra Teresa Cignarella, Simona Frenda, Mario Laurent, Wolfgang Schmeisser-Nieto, Farah Benamara, Cristina Bosco, Véronique Moriceau, Viviana Patti, and Mariona Taule. 2023. A multilingual dataset of racial stereotypes in social media conversational threads. In Findings of the Association for Computational Linguistics: EACL 2023, pages 686-696.
Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.
Z. Chen. 2023. Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communication, 10.
Myra Cheng, Esin Durmus, and Dan Jurafsky. 2023. Marked personas: Using natural language prompts to measure stereotypes in language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1504-1532, Toronto, Canada. Association for Computational Linguistics.
Patricia Chiril, Farah Benamara, and Véronique Moriceau. 2021. "be nice to your wife! the restaurants are closed": Can gender stereotype detection improve sexism classification? In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 2833-2844, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Rochelle Choenni, Ekaterina Shutova, and Robert van Rooij. 2021. Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you? In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1477-1491, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Kimberlé Crenshaw. 2013. Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. In *Feminist legal theories*, pages 23-51. Routledge.
Amy JC Cuddy, Susan T Fiske, and Peter Glick. 2007. The bias map: behaviors from intergroup affect and stereotypes. Journal of personality and social psychology, 92(4):631.
Amy JC Cuddy, Susan T Fiske, Virginia SY Kwan, Peter Glick, Stephanie Demoulin, Jacques-Philippe Leyens, Michael Harris Bond, Jean-Claude Croizet, Naomi Ellemers, Ed Sleebos, et al. 2009. Stereotype content model across cultures: Towards universal similarities and some differences. British journal of social psychology, 48(1):1-33.
Georgina Curto, Nieves Montes, Carles Sierra, Nardine Osman, and Flavio Comim. 2022. A norm optimisation approach to sdgs: tackling poverty by acting on discrimination. In International Joint Conference on Artificial Intelligence.
Aida Mostafazadeh Davani, Mohammad Atari, Brendan Kennedy, and Morteza Dehghani. 2023. Hate speech classifiers learn normative social stereotypes. Transactions of the Association for Computational Linguistics, 11:300-319.
Aida Mostafazadeh Davani, Sagar Gubbi Venkatesh, Sunipa Dev, Shachi Dave, and Vinodkumar Prabhakaran. 2024. Genil: A multilingual dataset on generalizing language. In First Conference on Language Modeling.
Awantee Deshpande, Dana Ruiter, Marius Mosbach, and Dietrich Klakow. 2022. StereoKG: Data-driven knowledge graph construction for cultural knowledge and stereotypes. In Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH), pages 67-78, Seattle, Washington (Hybrid). Association for Computational Linguistics.
Sunipa Dev, Jaya Goyal, Dinesh Tewari, Shachi Dave, and Vinodkumar Prabhakaran. 2023a. Building socio-culturally inclusive stereotype resources with community engagement. In Advances in Neural Information Processing Systems, volume 36, pages 4365-4381. Curran Associates, Inc.
Sunipa Dev, Akshita Jha, Jaya Goyal, Dinesh Tewari, Shachi Dave, and Vinodkumar Prabhakaran. 2023b. Building stereotype repositories with complementary approaches for scale and depth. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP), pages 84-90.
Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Arjun Subramonian, Jeff Phillips, and Kai-Wei Chang. 2021. Harms of gender exclusivity and challenges in non-binary representation in language technologies. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1968-1994, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, Yu Hou, Mattie Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, et al. 2022. On measures of biases and harms in nlp. In Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, pages 246-267.
Patricia G Devine and Andrew J Elliot. 1995. Are racial stereotypes really fading? the princeton trilogy revisited. *Personality and social psychology bulletin*, 21(11):1139-1150.
John F Dovidio, Miles Hewstone, Peter Glick, and Victoria M Esses. 2010. Prejudice, stereotyping and discrimination: Theoretical and empirical overview. Prejudice, stereotyping and discrimination, 12:3-28.
Anjalie Field, Su Lin Blodgett, Zeerak Waseem, and Yulia Tsvetkov. 2021. A survey of race, racism, and anti-racism in NLP. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1905-1925.
Susan T Fiske. 1991. Social cognition.
Susan T Fiske, Amy JC Cuddy, Peter Glick, and Jun Xu. 2018. A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. In Social cognition, pages 162-214. Routledge.
Kathleen C Fraser, Svetlana Kiritchenko, and Isar Nejadgholi. 2022. Computational modeling of stereotype content in text. Frontiers in artificial intelligence, 5:826207.
Kathleen C Fraser, Svetlana Kiritchenko, and Isar Nejadgholi. 2024. How does stereotype content differ across data sources? In Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM* 2024), pages 18-34.
Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018a. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635-E3644.
Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018b. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635-E3644.
James L Hilton and William Von Hippel. 1996. Stereotypes. Annual review of psychology, 47(1):237-271.
Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al. 2022. Imagen video: High definition video generation with diffusion models. arXiv preprint arXiv:2210.02303.
Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d'Amato, Gerard De Melo, Claudio Gutierrez, Sabrina Kirrane, Jose Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, et al. 2021. Knowledge graphs. ACM Computing Surveys (Csur), 54(4):1-37.
Dirk Hovy and Shrimai Prabhumoye. 2021. Five sources of bias in natural language processing. Language and linguistics compass, 15(8):e12432.
Dirk Hovy and Diyi Yang. 2021. The importance of modeling social factors of language: Theory and practice. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 588-602, Online. Association for Computational Linguistics.
Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, and Stephen Denuyl. 2020. Social biases in nlp models as barriers for persons with disabilities. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5491-5501.
Younghoon Jeong, Juhyun Oh, Jongwon Lee, Jaimeen Ahn, Jihyung Moon, Sungjoon Park, and Alice Oh. 2022. KOLD: Korean offensive language dataset. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10818-10833, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Akshita Jha, Aida Mostafazadeh Davani, Chandan K Reddy, Shachi Dave, Vinodkumar Prabhakaran, and Sunipa Dev. 2023. SeeGULL: A stereotype benchmark with broad geo-cultural coverage leveraging generative models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9851-9870, Toronto, Canada. Association for Computational Linguistics.
John T Jost and Mahzarin R Banaji. 1994. The role of stereotyping in system-justification and the production of false consciousness. British journal of social psychology, 33(1):1-27.
Svetlana Kiritchenko and Saif Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 43-53, New Orleans, Louisiana. Association for Computational Linguistics.
Mary E Kite, Bernard E Whitley Jr, and Lisa S Wagner. 2022. Psychology of prejudice and discrimination. Routledge.
Alex Koch, Roland Imhoff, Ron Dotsch, Christian Unkelbach, and Hans Alves. 2016. The abc of stereotypes about groups: Agency/socioeconomic success, conservative-progressive beliefs, and communion. Journal of personality and social psychology, 110(5):675.
Felix Kreuk, Gabriel Synnaeve, Adam Polyak, Uriel Singer, Alexandre Défossez, Jade Copet, Devi Parikh, Yaniv Taigman, and Yossi Adi. 2022. Audiogen: Textually guided audio generation. arXiv preprint arXiv:2209.15352.
Atharva Kulkarni, Sarah Masud, Vikram Goyal, and Tanmoy Chakraborty. 2023. Revisiting hate speech benchmarks: From data curation to system deployment. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '23, page 4333-4345, New York, NY, USA. Association for Computing Machinery.
Kurt Lewin. 1951. Intention, will and need.
Hao Lin, Pradeep Nalluri, Lantian Li, Yifan Sun, and Yongjun Zhang. 2022. Multiplex anti-Asian sentiment before and during the pandemic: Introducing new datasets from Twitter mining. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, pages 16–24, Dublin, Ireland. Association for Computational Linguistics.
C Neil Macrae and Galen V Bodenhausen. 2000. Social cognition: Thinking categorically about others. Annual review of psychology, 51(1):93-120.
C Neil Macrae, Charles Stangor, and Miles Hewstone. 1996. Stereotypes and stereotyping. Guilford Press.
Vijit Malik, Sunipa Dev, Akihiro Nishi, Nanyun Peng, and Kai-Wei Chang. 2022. Socially aware bias measurements for Hindi language representations. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1041-1052, Seattle, United States. Association for Computational Linguistics.
|
| 295 |
+
Daphna Motro, Jonathan B. Evans, Aleksander P. J. Ellis, and III Lehman Benson. 2021. Race and reactions to women's expressions of anger at work:
|
| 296 |
+
|
| 297 |
+
Examining the effects of the "angry black woman" stereotype.
|
| 298 |
+
Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. Stereoset: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356-5371.
|
| 299 |
+
Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel Bowman. 2020. Crows-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953-1967.
|
| 300 |
+
Gandalf Nicolas, Xuechunzi Bai, and Susan T Fiske. 2021. Comprehensive stereotype content dictionaries using a semi-automated method. European Journal of Social Psychology, 51(1):178-196.
|
| 301 |
+
Gandalf Nicolas, Xuechunzi Bai, and Susan T Fiske. 2022. A spontaneous stereotype content model: Taxonomy, properties, and prediction. Journal of personality and social psychology, 123(6):1243.
|
| 302 |
+
Gandalf Nicolas and Aylin Caliskan. 2024a. Directionality and representativeness are differentiable components of stereotypes in large language models. PNAS nexus, 3(11):pgae493.
|
| 303 |
+
Gandalf Nicolas and Aylin Caliskan. 2024b. A taxonomy of stereotype content in large language models. arXiv preprint arXiv:2408.00162.
|
| 304 |
+
Ali Omrani, Alireza Salkhordeh Ziabari, Charles Yu, Preni Golazizian, Brendan Kennedy, Mohammad Atari, Heng Ji, and Morteza Dehghani. 2023. Social-group-agnostic bias mitigation via the stereotype content model. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4123-4139, Toronto, Canada. Association for Computational Linguistics.
|
| 305 |
+
Heiko Paulheim. 2017. Knowledge graph refinement: A survey of approaches and evaluation methods. Semantic web, 8(3):489-508.
|
| 306 |
+
Mohaimenul Azam Khan Raiaan, Md Saddam Hossein Mukta, Kaniz Fatema, Nur Mohammad Fahad, Sadman Sakib, Most Marufatul Jannat Mim, Jubaer Ahmad, Mohammed Eunus Ali, and Sami Azam. 2024. A review on large language models: Architectures, applications, taxonomies, open issues and challenges. IEEE Access.
|
| 307 |
+
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684-10695.
|
| 308 |
+
|
| 309 |
+
Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8-14, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 310 |
+
Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. Advances in neural information processing systems, 35:36479-36494.
|
| 311 |
+
Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, and Vinodkumar Prabhakaran. 2021. Re-imagining algorithmic fairness in india and beyond. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 315-328.
|
| 312 |
+
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th annual meeting of the association for computational linguistics, pages 1668-1678.
|
| 313 |
+
Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5477-5490, Online. Association for Computational Linguistics.
|
| 314 |
+
David J Schneider. 2005. The psychology of stereotyping. Guilford Press.
|
| 315 |
+
Sandeep Singh Sengar, Affan Bin Hasan, Sanjay Kumar, and Fiona Carroll. 2024. Generative artificial intelligence: a systematic review and applications. *Multimedia Tools and Applications*, pages 1-40.
|
| 316 |
+
Renee Shelby, Shalaleh Rismani, Kathryn Henne, AJung Moon, Negar Rostamzadeh, Paul Nicholas, N'Mah Yilla-Akbari, Jess Gallegos, Andrew Smart, Emilio Garcia, and Gurleen Virk. 2023. Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, AIES '23, page 723-741, New York, NY, USA. Association for Computing Machinery.
|
| 317 |
+
Henri Tajfel, John C Turner, William G Austin, and Stephen Worchel. 1979. An integrative theory of intergroup conflict. Organizational identity: A reader, 56(65):9780203505984-16.
|
| 318 |
+
John C Turner, Rupert J Brown, and Henri Tajfel. 1979. Social comparison and group interest in in-group favouritism. European journal of social psychology, 9(2):187-204.
|
| 319 |
+
|
| 320 |
+
Eddie L Ungless, Amy Rafferty, Hrichika Nag, and Björn Ross. 2022. A robust bias mitigation procedure based on the stereotype content model. arXiv preprint arXiv:2210.14552.
|
| 321 |
+
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 322 |
+
Xuhui Zhou, Hao Zhu, Akhila Yerukola, Thomas Davidson, Jena D. Hwang, Swabha Swayamdipta, and Maarten Sap. 2023. COBRA frames: Contextual reasoning about effects and harms of offensive statements. In Findings of the Association for Computational Linguistics: ACL 2023, pages 6294-6315, Toronto, Canada. Association for Computational Linguistics.
|
| 323 |
+
|
| 324 |
+
# A Glossary

Stereotype — A cognitive generalization about a specific social group, often consisting of widely shared beliefs and assumed traits associated with its members.

Categorizing — The fundamental cognitive process of grouping objects, events, or people into categories, which is essential to the formation of stereotypes.

Intersectionality — The concept that individuals belong to multiple social groups simultaneously, and that stereotypes targeting these intersectional identities create unique forms of bias beyond those of the component groups.

Stereotype Content Model (SCM) — A foundational social psychological framework that posits group stereotypes vary along two primary, universal dimensions: Warmth and Competence.

Agency-Beliefs-Communion Model (ABC) — A theoretical extension of the SCM that adds Beliefs as a third dimension, alongside refined aspects of Agency (Competence) and Communion (Warmth).

Warmth — The SCM dimension that captures perceived good or ill intent, reflecting traits like friendliness, sincerity, and morality.

Competence — The SCM dimension that captures perceived capability or status, reflecting traits like intelligence, skill, and agency.

Perceiver — The individual, group, or section of society that holds and applies a specific stereotypical belief about the target group.
2025/A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3ffbfd620fa7a56b607cb092244aaee8ecc703272764c79244ea27f69efb83c1
size 99762

2025/A Comprehensive Framework to Operationalize Social Stereotypes for Responsible AI Evaluations/layout.json
ADDED
The diff for this file is too large to render. See raw diff.

2025/A Comprehensive Literary Chinese Reading Comprehension Dataset with an Evidence Curation Based Solution/2e889ff8-a374-489b-8cab-69b6b51f79c3_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.

2025/A Comprehensive Literary Chinese Reading Comprehension Dataset with an Evidence Curation Based Solution/2e889ff8-a374-489b-8cab-69b6b51f79c3_model.json
ADDED
The diff for this file is too large to render. See raw diff.

2025/A Comprehensive Literary Chinese Reading Comprehension Dataset with an Evidence Curation Based Solution/2e889ff8-a374-489b-8cab-69b6b51f79c3_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0763af6e619ee522ae34dd2f9dd958be5d7601fd43deb270c6c5adeb85946bec
size 1534317

2025/A Comprehensive Literary Chinese Reading Comprehension Dataset with an Evidence Curation Based Solution/full.md
ADDED
@@ -0,0 +1,783 @@
# A Comprehensive Literary Chinese Reading Comprehension Dataset with an Evidence Curation Based Solution

Dongning Rao<sup>1</sup>, Rongchu Zhou<sup>1</sup>, Peng Chen<sup>1</sup>, Zhihua Jiang<sup>2*</sup>

<sup>1</sup> School of Computer, Guangdong University of Technology, Guangzhou 510006, China

<sup>2</sup> Department of Computer Science, Jinan University, Guangzhou 510632, China

raodn@gdut.edu.cn, {2112405271, 2112405069}@mail2.gdut.edu.cn, tjiangzhh@jnu.edu.cn

# Abstract

Low-resource language understanding is a challenging task, even for large language models (LLMs). An epitome of this problem is the CompRehensive Literary chineSe reading comprehension (CRISIS), whose difficulties include limited linguistic data, long input, and insight-required questions. Besides the compelling need to provide a larger dataset for CRISIS, excessive information, order bias, and entangled conundrums still plague the CRISIS solutions. Thus, we present the eVidence cuRation with opTion shUffling and Abstract meaning representation-based cLauses segmenting (VIRTUAL) procedure for CRISIS, with the most extensive dataset. While the dataset is also named CRISIS, it results from a three-phase construction process, including question selection, data cleaning, and a silver-standard data augmentation step, which adds translations, celebrity profiles, government jobs, reign mottos, and dynasties to CRISIS. The six steps of VIRTUAL are embedding, shuffling, abstract meaning representation-based option segmenting, evidence extraction, solving, and voting. Notably, the evidence extraction algorithm facilitates the extraction of literary Chinese evidence sentences, translated evidence sentences, and annotations of keywords using a similarity-based ranking strategy. While CRISIS compiles understanding-required questions from seven sources, the experiments on CRISIS substantiate the effectiveness of VIRTUAL, with a 7 percent increase in accuracy compared to the baseline. Interestingly, both non-LLMs and LLMs exhibit order bias, and abstract meaning representation-based option segmenting is beneficial for CRISIS.
# 1 Introduction

Literary Chinese, aka Ancient Chinese or Classical Chinese, lays the foundation for China's enduring identity and cultural heritage (Daddo, 2024). However, as a low-resource language (Zhang et al., 2024a), understanding literary Chinese is challenging for Large Language Models (LLMs) (Cahyawijaya et al., 2024), which have emerged as a keystone of Chinese understanding (GLM et al., 2024).

CompRehensive Literary chineSe reading comprehension (CRISIS) is a quintessential task of literary Chinese understanding. However, providing a larger dataset for CRISIS is compelling, as the insufficient training corpus problem is still an obstacle for CRISIS (Cao et al., 2024). Further, excessive information (Zhang and Li, 2023), order bias (Li et al., 2024), and entangled conundrums (Xu et al., 2024; Rao et al., 2023) are three of the many challenges for CRISIS.

Thus, we propose the eVidence cuRation with opTion shUffling and Abstract meaning representation-based cLauses segmenting (VIRTUAL) procedure for CRISIS, and build a dataset for the task. The dataset (also named CRISIS) is curated from seven sources in three phases: question selection, data cleaning, and LLM-based data augmentation. In question selection, we focus on passage understanding, not rote facts. Data cleaning involves removing duplicates and balancing answers. Finally, we augment CRISIS with silver-standard translations, annotations, celebrity profiles, government jobs, reign mottos, and dynasty information.
VIRTUAL has three motivations and six steps. To address the issue of excessive information, we propose a new evidence extraction algorithm that utilizes a similarity-based ranking strategy, incorporating literary Chinese evidence sentences, translated evidence sentences, and annotations of keywords. To address the unfairness in LLMs, VIRTUAL has an option shuffling process (Kawabata and Sugawara, 2023). Finally, this paper presents an Abstract Meaning Representation (AMR)-based sentence segmenting function (Chen et al., 2022) to address the entangled conundrums; we use the clauses segmented from an option as thought-eliciting prompts for LLMs. Altogether, the six steps of VIRTUAL are embedding, shuffling, AMR-based option segmenting, evidence extraction, solving, and voting.

Passage: 637 words.

“人才莫盛于三国,操募得之!”

(Talents are most abundant in the Three Kingdoms, ... Cao recruited him ....)

Question: 259 words.

下列对原文有关内容的概述,不正确的一项是:A. ..."曹操又捕获了臧霸"... B. .... C. .... D. ....

(Please select the incorrect option: A. ... "Cao Cao captured Zang Ba" ... B. .... C. .... D. ....)

Figure 1: An example of CRISIS from the 2024 National College Entrance Examination (a detailed understanding question). The passage (in literary Chinese) is in blue (top-left corner), the question with four options is in purple (bottom-left corner), the English translations are in black (right side), and teal is used to highlight crucial evidence.

We conduct experiments on our dataset and substantiate the effectiveness of VIRTUAL; e.g., evidence extraction with literary Chinese evidence can improve accuracy by $7\%$. Interestingly, in our experiments, AMR-based option segmenting is constructive, and all models exhibit order bias.
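As a minimal sketch of the option-shuffling idea (not the authors' implementation), the snippet below re-asks a solver under several random option orders, maps each pick back to the original labels, and majority-votes. `toy_solve` and all other names are illustrative stand-ins for a real model call.

```python
import random
from collections import Counter

LABELS = "ABCD"

def shuffle_options(options, rng):
    """Return shuffled options plus a map from new label -> original label."""
    order = list(range(len(options)))
    rng.shuffle(order)
    shuffled = [options[i] for i in order]
    relabel = {LABELS[new]: LABELS[old] for new, old in enumerate(order)}
    return shuffled, relabel

def vote(question, options, solve, n_rounds=5, seed=0):
    """Query the solver n_rounds times with shuffled options; majority-vote."""
    rng = random.Random(seed)
    ballots = []
    for _ in range(n_rounds):
        shuffled, relabel = shuffle_options(options, rng)
        picked = solve(question, shuffled)   # label in the shuffled order
        ballots.append(relabel[picked])      # map back to the original label
    return Counter(ballots).most_common(1)[0][0]

def toy_solve(question, options):
    """Toy solver: always pick the option containing 'captured'."""
    for i, opt in enumerate(options):
        if "captured" in opt:
            return LABELS[i]
    return "A"

options = ["... recruited ...", "... captured Zang Ba ...", "...", "..."]
print(vote("Select the incorrect option:", options, toy_solve))  # prints B
```

Because the vote is taken over the original labels, a position-biased solver's preference for, say, the first slot is spread across all four answers instead of systematically favoring one.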
The following summarizes our contributions:

1) We build the largest dataset for CRISIS;
2) We propose the novel VIRTUAL procedure for CRISIS, with a new extraction algorithm that considers evidence in both literary and modern Chinese, an option shuffling process that addresses the unfairness of models, and an AMR-based option segmenting step for building thought-eliciting prompts;
3) Experiments on CRISIS show the effectiveness of VIRTUAL.

We organize the rest of the paper as follows. Section 2 presents our problem. Our dataset, CRISIS, is presented in Sec. 3. VIRTUAL is introduced in Sec. 4, before the experiments and analysis in Sec. 5. Finally, the paper concludes with a discussion of limitations and future work.
# 2 A Representative Example of Literary Chinese Understanding: CRISIS

# 2.1 Literary Chinese

Some studies (Cao et al., 2024; Wei et al., 2024) treat the terms Classical Chinese, Ancient Chinese, and Literary Chinese as interchangeable; however, historians find them to be non-identical. Classical Chinese refers to written Chinese from the end of the Spring and Autumn period through to the end of the Han Dynasty (Norman, 1988); ancient China (whose language is Ancient Chinese) is the period between the Neolithic period and the Han dynasty (Daddo, 2024); literary Chinese is the style of written Chinese used before the end of the Qing Dynasty<sup>2</sup>. Thus, literary Chinese is the more appropriate term for CRISIS, which spans from the pre-Qin period to the Qing dynasty (Xu et al., 2020).

Fig. 1 is a CRISIS problem from the 2024 national college entrance examination (GaoKao) of China. Examples of literary Chinese sentences are located in the top-left corner, and their corresponding English translations are presented on the right.
# 2.2 Knowing Is Not Understanding

Reading comprehension (RC) is a representative example of natural language understanding (Sap et al., 2020). However, although comprehension is the ability to understand a situation, some RC problems are knowledge-based, relying on common sense (Ostermann et al., 2018), which tests the model's awareness of knowledge, such as recognizing that someone is another's father. Our study separates itself from previous studies in the research line of literary Chinese RC (Xu et al., 2020; Zhou et al., 2023; Zhong et al., 2024; Hou et al., 2024; Wei et al., 2024) by focusing on questions that involve the interpretation and processing of language (Rao et al., 2023).

We can group paragraph questions into two types: summary questions and detailed questions. The former evaluates the main idea of a paragraph or passage (Stevens et al., 1991), whereas the latter delves into specific details (Pearson and Gallagher, 1983). E.g., Fig. 1 is a detailed question. More details are in Appx. J, Tab. 21.
# 2.3 Difficulties of CRISIS

At least four challenges arise in understanding literary Chinese. First, the inconsistency of language styles, e.g., words with shifting meanings (Zhao, 2024). Second, the divergence between spoken and written language; literary Chinese, for instance, is inherently poetic. Third, literary Chinese lacks morphological markers and uses constructions such as syntactic inversions. Fourth, the insufficient training corpus impedes the understanding of literary Chinese; our work addresses this issue.

The article in Fig. 1 is a passage about the Three Kingdoms. The article excerpts are in blue, the questions (with four options) are in purple, and their English translations are in black. Further, teal is used to highlight crucial evidence. The deceptiveness of this question lies in the answer: Zang is not captured but recruited by Cao.
# 2.4 Experience from Existing Solutions

While the potential of LLMs to interpret literary Chinese remains largely untapped (Sommerschield et al., 2023; Zhang et al., 2024b), recent studies that focus on literary Chinese understanding yield three observations: first, LLMs better encode syntactic structures; second, co-reference chains are a complexity factor for all models but significantly affect only small models (Antoine et al., 2024); third, Chinese LLMs outperform English ones on literary Chinese (Wei et al., 2024), and Qwen (Yang et al., 2024) performs better in handling complex texts. This study aligns with other efforts to explore the potential of leveraging LLMs for understanding literary Chinese.
# 3 Comprehensive Literary Chinese Reading Comprehension Dataset

This section presents the construction process and the resulting dataset, CRISIS. We begin with the sources of CRISIS and then show the curation process, including data augmentation.

# 3.1 Sources

Following dataset collection instructions (Dzendzik et al., 2021), CRISIS is manually collected from publicly available datasets and legal websites.
# 3.1.1 ACRE

ACRE (Rao et al., 2023) is the first dataset proposed mainly for CRISIS. Besides collecting from publicly accessible websites, it also merges all CRISIS items in the Native Chinese Reader (Xu et al., 2022). However, not all items in ACRE are CRISIS questions; some are about common-sense knowledge (of literary Chinese).

# 3.1.2 CCLUE

CCLUE (Xu et al., 2020) is a Chinese natural language understanding benchmark. It covers both sentence classification and RC tasks. However, CCLUE only has a few CRISIS questions.

# 3.1.3 WYWEB

WYWEB (Zhou et al., 2023) benchmarks nine literary Chinese NLP tasks. Most of WYWEB's questions are extracted from exam papers, but only a portion falls within the scope of CRISIS.

# 3.1.4 National College Entrance Examination

To keep our dataset up-to-date, we manually collected CRISIS questions from the $2021\sim 2024$ GaoKao on legal websites<sup>3</sup>. E.g., see Fig. 1.

# 3.1.5 AGIEval

AGIEval (Zhong et al., 2024) is a bilingual benchmark for the exam performance of foundation models. Because it focuses on exams like the GaoKao, it contains a few CRISIS problems.

# 3.1.6 AC-EVAL

AC-EVAL (Wei et al., 2024) is a benchmark that comprises 13 tasks. It leverages contrastive learning between literary and modern Chinese for RC. However, only five problems in the AC-EVAL development set are publicly available.

# 3.1.7 E-EVAL

E-EVAL (Hou et al., 2024) focuses on RC evaluations in Chinese K-12 education. Among its 4,351 multiple-choice questions spanning all grade levels, some fall within the scope of CRISIS.
<table><tr><td>Dataset</td><td>#</td><td>ALP2</td><td>ALQ3</td><td>ALO4</td><td>ALT5</td></tr><tr><td>ACRE</td><td>3655</td><td>645.7</td><td>22.4</td><td>54.0</td><td>978.9</td></tr><tr><td>CCLUE</td><td>414</td><td>604.2</td><td>22.9</td><td>50.0</td><td>854.9</td></tr><tr><td>WYWEB</td><td>323</td><td>585.8</td><td>22.1</td><td>51.3</td><td>876.4</td></tr><tr><td>GaoKao1</td><td>9</td><td>637.8</td><td>23.0</td><td>58.9</td><td>981.7</td></tr><tr><td>AGIEval</td><td>8</td><td>713.8</td><td>26.0</td><td>53.75</td><td>1066.5</td></tr><tr><td>AC-Eval</td><td>5</td><td>592.8</td><td>22.6</td><td>60.1</td><td>885.2</td></tr><tr><td>E-Eval</td><td>1</td><td>865.0</td><td>27.0</td><td>55.75</td><td>1341.0</td></tr><tr><td>Total</td><td>4415</td><td>637.6</td><td>22.4</td><td>53.5</td><td>959.9</td></tr></table>

<sup>1</sup> GaoKao: 2021~2024 National College Entrance Examination of China.
<sup>2</sup> ALP: average length of passages in literary Chinese.
<sup>3</sup> ALQ: average length of questions.
<sup>4</sup> ALO: average length of options.
<sup>5</sup> ALT: average length of translations (of passages) in modern Chinese.

Table 1: Source statistics of CRISIS.

# 3.2 Curation Process

The curation of CRISIS involves question selection, data cleaning, and data augmentation.

# 3.2.1 Question Selection

We first collect CRISIS questions from seven data sources, in which identifying CRISIS questions is our main task. Since knowing differs from understanding, we instruct the annotators to select only passage understanding problems, excluding those requiring rote memorization of historical facts.

# 3.2.2 Data Cleaning

The data cleaning process includes duplicate purging and answer balancing. The distribution of answers is balanced, with roughly equal proportions, i.e., #A : #B : #C : #D ≈ 1:1:1:1 (#: number of).

# 3.2.3 Data Augmentation

To prepare for potential knowledge-leveraged approaches, we facilitate five LLM-based data augmentations, following previous studies (Peng et al., 2024):

1. Modern Chinese translations of passages. As shown in Fig. 1, the passage is in literary Chinese, while the question and options are in modern Chinese. Thus, we append modern Chinese translations to passages via Qwen<sup>4</sup>.
2. Celebrity profiles. We use LLMs (e.g., Qwen) to generate profiles of celebrities mentioned in CRISIS. Each profile has five sections: name, traits, competence, social background, and summary. See Appx. G for an example.
3. Government jobs. We ask LLMs (e.g., Qwen) for the responsibilities of government jobs mentioned in CRISIS. E.g., in literary Chinese, Zhuguo is the highest military officer.
4. Reign mottos (aka era names, period titles). LLMs also produce period descriptions for every emperor's era name in CRISIS. For example, the era name of Emperor Taizu of the Song Dynasty is KaiBao.
5. Dynasty. LLMs estimate the dynasty in which the story happened. However, there is no gold standard for this feature. The passage in Fig. 1, written during the Qing dynasty, recounts a story from the Three Kingdoms period.

<table><tr><td>Temporal Stage</td><td>Dynasty</td><td>#</td></tr><tr><td rowspan="3">Ancient China</td><td>夏(Xia)商(Shang)周(Zhou)</td><td>26</td></tr><tr><td>战国(Warring States)</td><td>235</td></tr><tr><td>秦(Qin)</td><td>72</td></tr><tr><td rowspan="7">Middle China</td><td>汉(Han)</td><td>441</td></tr><tr><td>三国(Three Kingdoms)</td><td>132</td></tr><tr><td>晋(Jin)</td><td>204</td></tr><tr><td>南北朝(Northern and Southern Dynasty)</td><td>297</td></tr><tr><td>隋(Sui)</td><td>136</td></tr><tr><td>唐(Tang)</td><td>427</td></tr><tr><td>五代十国(Five Dynasties and Ten Kingdoms)</td><td>91</td></tr><tr><td rowspan="4">Near Ancient China</td><td>宋(Song)</td><td>1242</td></tr><tr><td>辽(Liao)金(Jin)元(Yuan)</td><td>158</td></tr><tr><td>明(Ming)</td><td>725</td></tr><tr><td>清(Qing)</td><td>229</td></tr></table>

Table 2: Temporal coverage statistics of CRISIS. English translations are enclosed in parentheses.
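The cleaning steps of Sec. 3.2.2 (duplicate purging and answer balancing) can be sketched as below; the record fields (`passage`, `question`, `answer`) and the tolerance are assumptions, not the paper's exact procedure.

```python
from collections import Counter

def clean(items):
    """Drop exact duplicate (passage, question) pairs; report answer counts."""
    seen, unique = set(), []
    for it in items:
        key = (it["passage"], it["question"])
        if key not in seen:
            seen.add(key)
            unique.append(it)
    dist = Counter(it["answer"] for it in unique)
    return unique, dist

def is_balanced(dist, tol=0.1):
    """Check #A : #B : #C : #D ≈ 1:1:1:1 within a relative tolerance."""
    total = sum(dist.values())
    return all(abs(dist.get(a, 0) / total - 0.25) <= tol for a in "ABCD")

items = [
    {"passage": "p", "question": "q1", "answer": "A"},
    {"passage": "p", "question": "q1", "answer": "A"},  # duplicate, dropped
    {"passage": "p", "question": "q2", "answer": "B"},
    {"passage": "p", "question": "q3", "answer": "C"},
    {"passage": "p", "question": "q4", "answer": "D"},
]
unique, dist = clean(items)
print(len(unique), is_balanced(dist))  # prints: 4 True
```

In practice, the balancing pass would also resample or relabel questions until the check passes, rather than merely reporting the distribution.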
# 3.3 Statistics
|
| 153 |
+
|
| 154 |
+
# 3.3.1 Statistics of Sources and Lengths
|
| 155 |
+
|
| 156 |
+
Tab. 1 shows the statistics of the sources. CRISIS comprises 4,415 problems, and the average lengths of the passage, question, options, and translated passages are 637.6, 22.4, 53.5, and 959.9, respectively (see Fig. $3 \sim 4$ in Appx. C for more).
|
| 157 |
+
|
| 158 |
+
# 3.3.2 Temporal Coverage Statistics
|
| 159 |
+
|
| 160 |
+
The literary Chinese's temporal coverage spans from the pre-Qin period to the Qing dynasty, and we can categorize it into three stages: ancient China, middle China, and near ancient China. First, ancient China existed before the Qin dynasty. Second, after the unification of China in Qin, there was a convergence in the written language, i.e., in middle China. Third, scholars often classify the Song, Yuan, Ming, and Qing dynasties as the near ancient China. Tab. 2 shows the temporal coverage statistics of CRISIS.
# 3.3.3 Statistics of Data Augmentations
The statistics of our augmentation are in Tab. 3. While the Qwen-generated augmentations are only

Figure 2: The overall architecture of VIRTUAL. The input is in the top-left corner, and the output is in the bottom-right corner. All six steps $(1\sim 6)$ are in red/green/gray round corner rectangles. A gray rectangle represents the translation process, and the yellow dataset icon in the middle represents a vector database of sentence embeddings.
<table><tr><td>Augmentation</td><td>#</td></tr><tr><td>Modern Chinese Translation</td><td>4415</td></tr><tr><td>Celebrity Profile</td><td>2747</td></tr><tr><td>Government Job</td><td>5956</td></tr><tr><td>Reign Mottos</td><td>381</td></tr><tr><td>Dynasty</td><td>14</td></tr></table>
Table 3: Statistics of data augmentations.
a silver standard, they may benefit non-LLM models or LLMs other than Qwen. This enhancement can reduce costs if we consider that translation and annotation are unavoidable (Wang et al., 2023).
# 4 Evidence Curation with Option Shuffling and AMR-based Segmenting
# 4.1 Overall Architecture
Recognizing the profound linguistic disparities between literary and modern Chinese, including grammatical evolution and syntactic variations, we developed VIRTUAL. We illustrate the overall architecture of VIRTUAL in Fig. 2; it has six steps, which we introduce in the following subsections.
While the input of VIRTUAL is in the top-left corner, including a passage (a blue box), a question (a yellow box), and four options (a gray box), the output of VIRTUAL is in the bottom-right corner, i.e., the answer. The six steps are outlined in round-corner rectangles. Specifically, ① (Embedding), ② (Shuffling), and ⑥ (Voting) are in red rectangles; ③ (AMR-based Segmenting) and ④ (Extracting Evidence) are in green rectangles; and ⑤ (Solving) is in a gray rectangle. A gray rectangle also displays the translation process for data augmentation, and the yellow dataset icon in the middle visualizes the database storing sentence vectors. A qualitative example is in Appx. F.
# 4.2 Sentence Embedding
The first step of VIRTUAL is storing sentence embeddings. Questions, sentences in a passage, and their corresponding translations are all embedded and stored.
We use GuwenBERT as a function $SBERT(\cdot)$ to embed all comma-separated subsentences. For a passage $D = \langle s_1, \dots, s_{|D|}\rangle$ and a question $Q = s_q$, we store $SBERT(s_i)$ where $1 \leq i \leq |D|$ or $i = q$. The vector storage is based on FAISS (Douze et al., 2024), a library for efficient similarity search and clustering of dense vectors using advanced search algorithms.
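The storage and lookup in this step can be sketched as follows. This is a self-contained stand-in, not the actual implementation: the toy character-count `embed` replaces GuwenBERT, the brute-force L2 `search` mimics what a flat FAISS index does, and all names are ours.

```python
import math

class VectorStore:
    """Minimal stand-in for the sentence-vector database of step 1.

    VIRTUAL embeds comma-separated subsentences with GuwenBERT and
    indexes them with FAISS; here `embed` is a toy character-count
    embedding and `search` is brute-force L2, so the sketch runs
    anywhere.
    """

    def __init__(self, dim=32):
        self.dim = dim
        self.vectors = []  # stored embeddings
        self.texts = []    # the subsentences they came from

    def embed(self, text):
        # Toy embedding: L2-normalized character-count buckets.
        v = [0.0] * self.dim
        for ch in text:
            v[ord(ch) % self.dim] += 1.0
        norm = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / norm for x in v]

    def add(self, text):
        self.vectors.append(self.embed(text))
        self.texts.append(text)

    def search(self, query, k=2):
        # Brute-force L2 search over all stored vectors.
        q = self.embed(query)
        scored = [(math.dist(q, v), t) for v, t in zip(self.vectors, self.texts)]
        return sorted(scored)[:k]

store = VectorStore()
for subsentence in ["dog eats bones", "cat drinks milk", "dog runs fast"]:
    store.add(subsentence)
print(store.search("dog eats", k=1)[0][1])  # dog eats bones
```

With real GuwenBERT vectors, `search` would be replaced by a lookup in a FAISS index built over the same L2 distance.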
# 4.3 Option Shuffling
The second step of VIRTUAL is option shuffling. Specifically, we transform the original options $A = \langle a_1, a_2, a_3, a_4 \rangle$ to $A' = \langle a_2, a_3, a_4, a_1 \rangle$ and $A'' = \langle a_3, a_4, a_1, a_2 \rangle$. E.g., if the original option order is "ABCD", we further use "BCDA" and "CDAB".
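The rotation, and the inverse mapping needed in step five to restore the predicted answer, can be sketched as follows (function names are illustrative, not from the VIRTUAL codebase):

```python
def rotations(options):
    """Yield the original order plus the two cyclic rotations VIRTUAL uses."""
    n = len(options)
    for shift in range(3):  # "ABCD", "BCDA", "CDAB"
        yield [options[(i + shift) % n] for i in range(n)]

def restore(predicted_index, shift, n=4):
    """Map an index picked under a rotated order back to the original order."""
    return (predicted_index + shift) % n

opts = ["A-text", "B-text", "C-text", "D-text"]
orders = list(rotations(opts))
print(orders[1])            # ['B-text', 'C-text', 'D-text', 'A-text']
# Picking position 3 under shift 1 means the model chose "A-text",
# i.e., original index 0:
print(restore(3, shift=1))  # 0
```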
# 4.4 AMR-based Segmenting
The third step of VIRTUAL is AMR-based segmenting via off-the-shelf software (Chen et al., 2022)<sup>7</sup>. We use the extracted AMR triples with directed arcs in the single-rooted, directed acyclic AMR graph to represent the semantic relationships between words in a sentence. Using Qwen, we can convert triples into clauses. E.g., "Dog eats bones" in Fig. 2, step ①, corresponds to the triple in List. 1. The AMR results for option C in
Fig. 1 are in Appx. I (Fig. 5), along with our prompt (Tab. 18) and detailed results (Tab. 19).
Listing 1: AMR triple of "Dog eats bones".
```
(e / eat-01
    :ARG0 (d / dog)
    :ARG1 (b / bone))
```
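VIRTUAL delegates the triple-to-clause conversion to Qwen; purely to illustrate the data flow, here is a hand-written rule-based sketch for the eat-01 example in List. 1 (the function and its naive inflection rule are our own, hypothetical):

```python
def clause_from_amr(concepts, triples):
    """Build a naive subject-verb-object clause from AMR role triples.

    concepts: variable -> concept, e.g. {"e": "eat-01"}
    triples:  (source, role, target) pairs with PropBank-style roles.
    """
    subj = verb = obj = None
    for src, role, tgt in triples:
        if role == ":ARG0":          # ARG0 is the agent
            verb = concepts[src].split("-")[0]  # eat-01 -> eat
            subj = concepts[tgt]
        elif role == ":ARG1":        # ARG1 is the patient
            obj = concepts[tgt]
    return f"{subj} {verb}s {obj}"   # naive third-person inflection

concepts = {"e": "eat-01", "d": "dog", "b": "bone"}
triples = [("e", ":ARG0", "d"), ("e", ":ARG1", "b")]
print(clause_from_amr(concepts, triples))  # dog eats bone
```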
# 4.5 Evidence Extracting
The fourth step of VIRTUAL is evidence extracting. For the embedding of a (segmented) option $opt \in \mathbb{R}^{d}$ and sentence embeddings $\langle y_{1}, \ldots, y_{l} \rangle$, $y_{i} \in \mathbb{R}^{d}$, $1 \leq i \leq l$, we only consider $y_{i}$ when $Score_{sim}(opt, y_{i}) \leq t$. Here, $l$ is the number of sentences, and $y_{i}$ can be in literary Chinese (e.g., a literary Chinese sentence in the passage, $s_{i}$) or in modern Chinese (e.g., a translation of $s_{i}$, $t_{i}$). To reduce the search space, we set the similarity threshold to $t = 0.3$ and use the similarity score $Score_{sim}(\cdot)$ in Eq. 1.
$$
Score_{sim}(opt, y_{i}) = \left\| opt - y_{i} \right\|_{2} \tag{1}
$$
Then, Alg. 1 extracts the top-$k$ (lowest-distance) evidence sentences according to $Score_{sim}$<sup>8</sup>. The input includes literary Chinese sentences, the modern Chinese translations of those sentences, the option clauses, and three hyperparameters: first, the number of evidence sentences in literary Chinese; second, the number of evidence sentences in modern Chinese; third, an indicator of whether or not to include keyword annotations.
After initialization in line 1, the program adds keyword annotations as needed in lines $5\sim 8$: the tokenizer function extracts keywords from the option, we use the ZDIC to find the annotations of the keywords, and the $concat(\cdot)$ function concatenates the two strings. Lines $3\sim 10$ select the literary Chinese sentences we use as evidence, while lines $11\sim 14$ pinpoint the translated sentences we use as evidence. Finally, we return the evidence in line 15.
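A self-contained sketch of Alg. 1's selection logic, assuming a generic `score` (lower is more similar; a stand-in for the Eq. 1 distance over GuwenBERT embeddings) and an optional `annotate` callback standing in for the ZDIC keyword lookup:

```python
def extract_evidence(passage, translations, opt, n_literary, n_modern,
                     with_annotations=False, score=None, annotate=None):
    """Return the n_literary + n_modern lowest-scoring evidence sentences."""
    evidence = []
    # Literary Chinese evidence (Alg. 1, lines 3-10).
    for s in sorted(passage, key=lambda s: score(opt, s))[:n_literary]:
        if with_annotations and annotate is not None:
            s = s + annotate(s)  # concat keyword annotations (lines 5-8)
        evidence.append(s)
    # Modern Chinese evidence (Alg. 1, lines 11-14).
    evidence.extend(sorted(translations, key=lambda t: score(opt, t))[:n_modern])
    return evidence

# Toy score: shared-character overlap turned into a distance-like value.
def score(a, b):
    return -len(set(a) & set(b))

passage = ["犬食骨", "猫饮乳"]
translations = ["The dog eats bones", "The cat drinks milk"]
ev = extract_evidence(passage, translations, "犬食骨头",
                      n_literary=1, n_modern=1, score=score)
print(ev)
```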
# 4.6 Solving CRISIS via LLMs
The fifth step of VIRTUAL is answering the question with LLMs. As we rearranged the options in step two, the predicted answer must be mapped back to the original option order.
We tried three prompting strategies: zero-shot, one-shot, and chain-of-thought (COT):
1. The basic version of our prompt, which serves as our fallback strategy, asks the LLMs to select the correct option (see Tab. 4).
2. The one-shot strategy uses two examples: a summary question and a detailed question. We first ask LLMs to determine which example should be used, and then we give the example to LLMs. The limited number of samples is one limitation of our study. These examples are in Appx. H, Tab. 16.
3. In the COT strategy, LLMs receive the segmented clauses of each option and output zero for every clause judged wrong. Each option's score is then the ratio of its correct clauses. In cases with tied options, the default strategy for VIRTUAL is zero-shot prompting. An example can be found in Appx. H, Tab. 17.
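The COT scoring rule above can be sketched as follows; `clause_judgments` stands in for the LLM's per-clause 0/1 outputs:

```python
def pick_option(clause_judgments, zero_shot_answer):
    """clause_judgments: option label -> list of 0/1 judgments per clause."""
    scores = {opt: sum(j) / len(j) for opt, j in clause_judgments.items()}
    best = max(scores.values())
    winners = [opt for opt, s in scores.items() if s == best]
    if len(winners) > 1:       # tie: fall back to zero-shot prompting
        return zero_shot_answer
    return winners[0]

judgments = {"A": [1, 0, 1], "B": [1, 1, 1], "C": [0, 0, 1], "D": [1, 0, 0]}
print(pick_option(judgments, zero_shot_answer="A"))  # B
```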
# 4.7 Voting
The sixth step of VIRTUAL is voting. We shuffle the options and solve the problem three times (see Sec. 4.3). Majority voting is the default strategy; when no majority exists, the backup strategy uses the answer obtained with the original option order.
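A minimal sketch of the voting step, assuming the first answer in the list comes from the run with the original option order:

```python
from collections import Counter

def vote(answers):
    """answers: restored answers from the three runs; answers[0] is the
    run that used the original option order (the backup)."""
    label, count = Counter(answers).most_common(1)[0]
    return label if count > 1 else answers[0]

print(vote(["B", "B", "C"]))  # B   (majority)
print(vote(["A", "B", "C"]))  # A   (no majority: original-order backup)
```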
# 5 Experiments
# 5.1 Experiment Settings
Our experiments utilize PyTorch 1.9.0 with Python 3.9.6 on Ubuntu 20.04.1 LTS, running on a PC equipped with an Intel Core i9-10900K CPU and two RTX 3090 GPUs. The training, validation, and test sets are divided according to an 8:1:1 ratio, with identical answer distributions.
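The paper does not name its split tooling, so here is a minimal per-label (stratified) 8:1:1 split that keeps the answer distribution identical across the three sets:

```python
from collections import defaultdict

def stratified_split(items, label_of, ratios=(8, 1, 1)):
    """Split items 8:1:1 within each answer label."""
    by_label = defaultdict(list)
    for item in items:
        by_label[label_of(item)].append(item)
    splits, total = ([], [], []), sum(ratios)
    for group in by_label.values():
        n = len(group)
        cut1 = n * ratios[0] // total
        cut2 = cut1 + n * ratios[1] // total
        splits[0].extend(group[:cut1])
        splits[1].extend(group[cut1:cut2])
        splits[2].extend(group[cut2:])
    return splits

items = [(i, "ABCD"[i % 4]) for i in range(40)]  # 10 problems per answer
train, dev, test = stratified_split(items, label_of=lambda x: x[1])
print(len(train), len(dev), len(test))  # 32 4 4
```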
# 5.2 Difficulty Ratings
This paper uses the log probabilities of an LLM's correct answer predictions, i.e., those of Qwen (Qwen2.5-72b-instruct-AWQ), to determine problem difficulty. The cross-entropy loss between the probability distribution over the four options and the actual label (see Appx. B, Eq. 2 $\sim$ 3) represents the difficulty of a problem. Following previous studies (Wang et al., 2024), we established three difficulty levels: simple, medium, and complex (in a 3:5:2 ratio, respectively).
# 5.3 Compared Models
In our evaluation, we compare our model with a representative Non-LLM model, EVERGREEN,
Algorithm 1: Evidence Extraction Algorithm
Input: $P = \{s_1,\dots,s_n\}$, sentences in the passage; $T = \{t_{1},\dots,t_{n}\}$, translations of sentences in the passage; opt, the option (clause); #s, number of evidence sentences from original sentences; #t, number of evidence sentences from translated sentences; withAnnotations, with or without annotations for keywords in the option. Output: $E = \{s_1^{\prime},\dots,s_m^{\prime}\}$, evidence sentences $(m = \# s + \# t)$
begin
Initialize $E\gets \emptyset$ .
for $i = 1$ to $\# s$ do
$s_i^\prime \gets$ most similar (unselected) $s\in P$ for opt; if withAnnotations then keywords $\leftarrow$ tokenize$(s_i^\prime)$; keywordsAnnotations $\leftarrow$ annotations of keywords in online literary Chinese dictionaries; $s_i^\prime \gets$ concat$(s_i^\prime,$ keywordsAnnotations$)$;
$E\gets E\cup s_i'$
end
for $i = 1$ to $\# t$ do
$s_i^\prime \gets$ most similar (unselected) $t\in T$ for opt; $E\gets E\cup s_i'$
end
return $E$
end
# INSTRUCTION:
You are an expert in literary Chinese. After reading a passage, you aim to answer the question and select the correct option.
Instructions: Answer the question and select the correct option after reading a passage.
# INPUT:
Passage: {Passage in literary Chinese.}
Question: {Question.}
Options: {Options.}
Evidence: {Evidences.}
# OUTPUT:
**Final Judgment**: Judgment (A/B/C/D)
Table 4: The zero-shot prompt for LLMs. Section names are in brown, and text variables are in curly brackets.
and five top-performing LLMs showing proficiency in Chinese RC. The results are in Tab. 5.
EVERGREEN is an ensemble-based model that combines a BERT (Devlin et al., 2019) encoder with a convolution layer. It outperforms many non-LLM models for CRISIS (Rao et al., 2023). The compared models in Rao et al. (2023) include Longformer (Beltagy et al., 2020), T5 (Raffel et al., 2020), AnchiBERT (Tian et al., 2021), GuwenBERT, and MacBERT (Cui et al., 2020). We put the parameter settings of EVERGREEN in Appx. A, Tab. 11.
The five top-performing large language models (LLMs), which are recognized for their exceptional performance in various tasks, are:
- Qwen-plus-0806. The best version of Alibaba's Qwen-plus series for our task.
- ERNIE-4.0-8K $^{10}$ . A foundation model from Baidu, which was released in June 2024.
- GPT-4o<sup>11</sup>: The GPT model, which was released in November 2024.
- GLM-4: The latest generation of the open-source ChatGLM (GLM et al., 2024) models.
- o1-mini $^{12}$ . It is the newest cost-efficient reasoning model from OpenAI.
# 5.4 Model Comparison
Experimental results on CRISIS are presented in Tab. 5, which exhibits four findings. First, using accuracy as the metric, Tab. 5 substantiates the effectiveness of VIRTUAL: it improves the accuracy of Qwen by $7\%$, and it is even more effective for complex problems (a 10% increase). Second, the motivation for our option shuffling lies in Tab. 5: all models except VIRTUAL show order bias. Third, we demonstrated that our Qwen-oriented difficulty ratings apply to all tested LLMs. Fourth, our experiments confirm the advantage of LLMs over non-LLM models.
We also test the performance of using VIRTUAL with smaller LMs and the results are in Appx. E.
10https://cloud.baidu.com
11 https://openai.com/index/hello-gpt-4o/
12https://openai.com/index/introducing-openai-o1-review/
<table><tr><td rowspan="2">Category</td><td rowspan="2">Model</td><td colspan="8">Accuracy (%)</td></tr><tr><td>A</td><td>B</td><td>C</td><td>D</td><td>Simple</td><td>Medium</td><td>Complex</td><td>Average</td></tr><tr><td>Non-LLM</td><td>EVERGREEN</td><td>14.2</td><td>22.3</td><td>15.2</td><td>42.0</td><td>22.1</td><td>25.1</td><td>21.5</td><td>23.5</td></tr><tr><td rowspan="5">LLM</td><td>o1-mini</td><td>22.6</td><td>44.6</td><td>42.9</td><td>35.7</td><td>44.2</td><td>37.2</td><td>23.8</td><td>36.7</td></tr><tr><td>GLM-4</td><td>50.9</td><td>57.1</td><td>44.6</td><td>55.4</td><td>73.3</td><td>50.7</td><td>21.6</td><td>52.0</td></tr><tr><td>GPT-4o</td><td>70.8</td><td>66.1</td><td>67.0</td><td>51.8</td><td>87.7</td><td>62.7</td><td>30.6</td><td>63.8</td></tr><tr><td>ERNIE-4.0-8K</td><td>49.0</td><td>68.8</td><td>69.0</td><td>66.7</td><td>81.0</td><td>68.0</td><td>32.9</td><td>64.9</td></tr><tr><td>Qwen-plus</td><td>61.3</td><td>75.9</td><td>77.7</td><td>76.8</td><td>93.1</td><td>74.4</td><td>39.7</td><td>73.1</td></tr><tr><td>Ours</td><td>VIRTUAL</td><td>78.3</td><td>83.0</td><td>83.9</td><td>77.7</td><td>98.4</td><td>83.4</td><td>47.7</td><td>80.8</td></tr></table>
Table 5: Model comparison. Best results are highlighted in bold.
<table><tr><td>Method</td><td>Accuracy</td></tr><tr><td>VIRTUAL</td><td>0.808</td></tr><tr><td>w/o keyword annotation</td><td>0.803</td></tr><tr><td>w/o literary Chinese evidence</td><td>0.799</td></tr><tr><td>w/o translated evidence</td><td>0.787</td></tr><tr><td>w/o AMR-based segmentation</td><td>0.785</td></tr><tr><td>w/o shuffling & voting</td><td>0.781</td></tr></table>
Table 6: Ablation test. Best results are highlighted in bold. The w/o stands for without.
<table><tr><td rowspan="2"># evidence sentences in literary Chinese</td><td colspan="4"># evidence sentences in modern Chinese</td></tr><tr><td>0</td><td>1</td><td>2</td><td>3</td></tr><tr><td>0</td><td>0.787</td><td>0.796</td><td>0.799</td><td>0.787</td></tr><tr><td>1</td><td>0.787</td><td>0.799</td><td>0.803</td><td>0.808</td></tr><tr><td>2</td><td>0.799</td><td>0.792</td><td>0.796</td><td>0.792</td></tr><tr><td>3</td><td>0.790</td><td>0.792</td><td>0.794</td><td>0.785</td></tr></table>
However, as VIRTUAL prolongs the input, there is a performance degradation.
# 5.5 Ablation Test
To identify the effectiveness of the components of VIRTUAL, we perform ablation tests. The results are in Tab. 6. They show that, although all components contribute, option shuffling is the most important one, and AMR-based option segmenting is also important. However, keyword annotations (e.g., using literary Chinese dictionaries) are less critical than we expected. It is likely that LLMs are already adept at recalling such facts.
# 5.6 Evidence Combination Test
Our evidence combination test, Tab. 7, shows that using three evidence sentences in modern Chinese and one evidence sentence in literary Chinese leads to the best result.
# 5.7 Generalizability Test
To demonstrate the generalizability of VIRTUAL, we evaluate it on a modern Chinese reading comprehension dataset, utilizing C3 (Sun et al., 2020). We compare VIRTUAL's performance to Qwen-
Table 7: Evidence combination test. Best results are highlighted in bold.
<table><tr><td rowspan="2">model</td><td colspan="5">Accuracy (%)</td></tr><tr><td>A</td><td>B</td><td>C</td><td>D</td><td>Average</td></tr><tr><td>Qwen-plus</td><td>95.8</td><td>96.1</td><td>95.6</td><td>95.6</td><td>96.7</td></tr><tr><td>VIRTUAL</td><td>96.4</td><td>95.6</td><td>94.9</td><td>96.4</td><td>98.7</td></tr></table>
Table 8: Generalizability test. Best results are highlighted in bold.
<table><tr><td>Strategy</td><td>Accuracy</td></tr><tr><td>Zero-shot</td><td>0.787</td></tr><tr><td>One-shot</td><td>0.808</td></tr><tr><td>w/ Celebrity Profiles</td><td>0.803</td></tr><tr><td>w/ D&R&J1</td><td>0.803</td></tr><tr><td>Chain of Thought</td><td>0.745</td></tr></table>
$^{1}$ D&R&J: Dynasty & Reign Mottos & Government Job.
plus in Tab. 8. While Tab. 5 previously showed Qwen-plus as the best LLM on our main task, Tab. 8 highlights VIRTUAL's advantages over Qwen-plus in this context.
# 5.8 Prompting Test
We report the test results of different prompt strategies in Tab. 9, which show that a one-shot strategy with no augmented data is the best choice.
# 5.9 More Analysis
# 5.9.1 Accuracy in Different Time Spans
Tab. 10 lists the accuracy in different periods. Surprisingly, questions about near-ancient Chinese passages, which are the closest to us in time, are more challenging than we thought. The deliberate design of examinations, which increases the difficulty level of the questions, might contribute to this.
Table 9: Comparison of different prompting strategies. Best results are highlighted in bold. The w/ means with.
<table><tr><td>Temporal Stage</td><td>Accuracy</td><td>#</td></tr><tr><td>Ancient China</td><td>0.824</td><td>74</td></tr><tr><td>Middle China</td><td>0.818</td><td>143</td></tr><tr><td>Near Ancient China</td><td>0.795</td><td>225</td></tr></table>
Table 10: Time span test. Best results are highlighted in bold. See Sec. 3 for details of stages.
# 5.9.2 Does Perplexity Matter?
Counter-intuitively, the perplexity of passages does not affect the difficulty of the question, see Fig. 6 in Appx. K for details. This could also result from the deliberate design of examinations.
# 5.10 Computational Cost
We conducted experiments with the Qwen API (qwen-plus-0806), at a cost of \$0.4 per 1 million input tokens and \$1 per 1 million output tokens; we spent \$300 in total. Additionally, we spent 15 minutes training the EVERGREEN model.
# 6 Conclusion
Through an empirical study on a newly curated literary Chinese reading comprehension task, we identified and validated the effectiveness of a novel evidence extraction approach. Specifically, we built CRISIS, the largest and most comprehensive literary Chinese reading comprehension dataset to date, and proposed VIRTUAL, which leverages an evidence extraction algorithm that utilizes evidence sentences in both literary and modern Chinese, along with two techniques (i.e., option shuffling and AMR-based segmenting). Future efforts will include conducting more theoretical analyses to provide a solid foundation for understanding literary Chinese.
# 7 Limitations
Despite our best efforts, our study may still have at least two limitations.
First, our dataset has at least four biases.
- The labels could be wrong, as humans make mistakes (and have disagreements);
- Translation and annotations might induce errors;
- LLMs-generated augmentations are only silver-standard, which is further discussed in Appx. D;
- Because the stability of LLMs is out of the scope of this paper, each LLM experiment is a single run.
Second, due to our limited resources, we could neither conduct local experiments on models larger than EVERGREEN nor test all available LLMs.
# 8 Ethical Considerations
First, licenses. The licenses for most source datasets are unspecified, except that AGIEval and AC-Eval use the MIT license, and CCLUE follows the Apache-2.0 license. Additionally, some examination data that is available for free online has been included.
Second, safety prompts. The proposed prompts do not involve collecting or using personal information, nor do they target other individuals.
Third, annotation. Volunteers from our research lab supported our annotation effort, and we compensated them at a market rate. All annotators are Chinese graduate students who are native speakers of the Chinese language. They are asked to "Select the passage understanding questions, and ignore the questions only about rote facts" from existing problems.
# References
Elie Antoine, Frédéric Bechet, Géraldine Damnati, and Philippe Langlais. 2024. A linguistically-motivated evaluation methodology for unraveling model's abilities in reading comprehension tasks. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 18376-18392.
Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
Xiao Bi, Deli Chen, Guanting Chen, Shanhuang Chen, Damai Dai, Chengqi Deng, Honghui Ding, Kai Dong, Qiushi Du, Zhe Fu, et al. 2024. Deepseek llm: Scaling open-source language models with longtermism. arXiv preprint arXiv:2401.02954.
Samuel Cahyawijaya, Holy Lovenia, and Pascale Fung. 2024. LLMs are few-shot in-context low-resource language learners. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 405-433.
Jiahuan Cao, Dezhi Peng, Peirong Zhang, Yongxin Shi, Yang Liu, Kai Ding, and Lianwen Jin. 2024. Tonggu: Mastering classical chinese understanding with knowledge-grounded large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 4196-4210.
Liang Chen, Bofei Gao, and Baobao Chang. 2022. A two-stage method for chinese amr parsing. arXiv preprint arXiv:2209.14512.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained
models for chinese natural language processing. In Findings of the Association for Computational Linguistics:EMNLP 2020, pages 657-668.
Emily Daddo. 2024. An introduction to ancient china. Teaching History (0040-0602), 58(3).
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, Pierre-Emmanuel Mazaré, Maria Lomeli, Lucas Hosseini, and Hervé Jégou. 2024. The faiss library.
Daria Dzendzik, Jennifer Foster, and Carl Vogel. 2021. English machine reading comprehension datasets: A survey. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8784-8804.
Team GLM, Aohan Zeng, Bin Xu, Bowen Wang, Chenhui Zhang, Da Yin, Dan Zhang, Diego Rojas, Guanyu Feng, Hanlin Zhao, et al. 2024. Chatglm: A family of large language models from glm-130b to glm-4 all tools. arXiv preprint arXiv:2406.12793.
Jinchang Hou, Chang Ao, Haihong Wu, Xiangtao Kong, Zhigang Zheng, Daijia Tang, Chengming Li, Xiping Hu, Ruifeng Xu, Shiwen Ni, et al. 2024. E-eval: A comprehensive chinese k-12 education evaluation benchmark for large language models. arXiv preprint arXiv:2401.15927.
Akira Kawabata and Saku Sugawara. 2023. Evaluating the rationale understanding of critical reasoning in logical reading comprehension. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 116-143.
Yingji Li, Mengnan Du, Rui Song, Xin Wang, and Ying Wang. 2024. Data-centric explainable debiasing for improving fairness in pre-trained language models. In *Findings of the Association for Computational Linguistics ACL* 2024, pages 3773-3786.
Jerry Norman. 1988. Chinese. Cambridge University Press.
Simon Ostermann, Michael Roth, Ashutosh Modi, Stefan Thater, and Manfred Pinkal. 2018. Semeval-2018 task 11: Machine comprehension using commonsense knowledge. In Proceedings of the 12th International Workshop on semantic evaluation, pages 747-757.
P David Pearson and Margaret C Gallagher. 1983. The instruction of reading comprehension. Contemporary educational psychology, 8(3):317-344.
Letian Peng, Yuwei Zhang, and Jingbo Shang. 2024. Controllable data augmentation for few-shot text mining with chain-of-thought attribute manipulation. In *Findings of the Association for Computational Linguistics ACL* 2024, pages 1-16.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551.
Dongning Rao, Guanju Huang, and Zhihua Jiang. 2023. Ancient chinese machine reading comprehension exception question dataset with a non-trivial model. In Pacific Rim International Conference on Artificial Intelligence, pages 145-158. Springer.
Maarten Sap, Vered Shwartz, Antoine Bosselut, Yejin Choi, and Dan Roth. 2020. Commonsense reasoning for natural language processing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 27-33.
Thea Sommerschield, Yannis Assael, John Pavlopoulos, Vanessa Stefanak, Andrew Senior, Chris Dyer, John Bodel, Jonathan Prag, Ion Androutsopoulos, and Nando de Freitas. 2023. Machine learning for ancient languages: A survey. Computational Linguistics, 49(3):703-747.
Robert J Stevens, Robert E Slavin, and Anna M Farnish. 1991. The effects of cooperative learning and direct instruction in reading comprehension strategies on main idea identification. Journal of Educational Psychology, 83(1):8.
Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2020. Investigating prior knowledge for challenging Chinese machine reading comprehension. Transactions of the Association for Computational Linguistics, 8:141-155.
Huishuang Tian, Kexin Yang, Dayiheng Liu, and Jiancheng Lv. 2021. AnchiBERT: A pre-trained model for ancient chinese language understanding and generation. In Proceedings of the International Joint Conference on Neural Networks.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Heng Wang, Shangbin Feng, Tianxing He, Zhaoxuan Tan, Xiaochuang Han, and Yulia Tsvetkov. 2024. Can language models solve graph problems in natural language? Advances in Neural Information Processing Systems, 36.
Yuxuan Wang, Jack Wang, Dongyan Zhao, and Zilong Zheng. 2023. Rethinking dictionaries and glyphs
for chinese language pre-training. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1089-1101.
Yuting Wei, Yuanxing Xu, Xinru Wei, Yangsimin Yangsimin, Yangfu Zhu, Yuqing Li, Di Liu, and Bin Wu. 2024. AC-EVAL: Evaluating Ancient Chinese language understanding in large language models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 1600-1617, Miami, Florida, USA. Association for Computational Linguistics.
Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, et al. 2020. Clue: A chinese language understanding evaluation benchmark. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4762-4772.
Shusheng Xu, Yichen Liu, Xiaoyu Yi, Siyuan Zhou, Huizi Li, and Yi Wu. 2022. Native chinese reader: A dataset towards native-level chinese machine reading comprehension. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
Xiaohan Xu, Chongyang Tao, Tao Shen, Can Xu, Hongbo Xu, Guodong Long, Jian-guang Lou, and Shuai Ma. 2024. Re-reading improves reasoning in large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15549-15575.
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. 2024. Qwen2 technical report. arXiv preprint arXiv:2407.10671.
Jinyi Zhang, Ke Su, Haowei Li, Jiannan Mao, Ye Tian, Feng Wen, Chong Guo, and Tadahiro Matsumoto. 2024a. Neural machine translation for low-resource languages from a chinese-centric perspective: A survey. ACM Transactions on Asian and Low-Resource Language Information Processing.
Yixuan Zhang and Haonan Li. 2023. Can large language model comprehend ancient chinese? a preliminary test on a clue. In Proceedings of the Ancient Language Processing Workshop, pages 80-87.
Yuqing Zhang, Baoyi He, Yihan Chen, Hangqi Li, Han Yue, Shengyu Zhang, Huaiyong Dou, Junchi Yan, Zemin Liu, Yongquan Zhang, et al. 2024b. Philogpt: A philology-oriented large language model for ancient chinese manuscripts with dunhuang as case study. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2784-2801.
Chenrong Zhao. 2024. A feature-based approach to annotate the syntax of ancient chinese. In Proceedings of the 5th Workshop on Computational Approaches to Historical Language Change, pages 62-71.
<table><tr><td></td><td>EVERGREEN</td><td>BERT<sup>2</sup></td></tr><tr><td>train batch size</td><td>4</td><td>4</td></tr><tr><td>dev batch size</td><td>4</td><td>4</td></tr><tr><td>test batch size</td><td>4</td><td>4</td></tr><tr><td>input length</td><td>512</td><td>512</td></tr><tr><td>epoch</td><td>6</td><td>6</td></tr><tr><td>learning rate</td><td>3e-5</td><td>1e-5</td></tr><tr><td>g.a.<sup>1</sup> steps</td><td>8</td><td>8</td></tr><tr><td>seed</td><td>42</td><td>42</td></tr></table>
1 g.a. $=$ gradient accumulation.
2 BERT is part of EVERGREEN.
Table 11: Hyper-parameters settings of the EVERGREEN Model.
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2024. Agieval: A human-centric benchmark for evaluating foundation models. In *Findings of the Association for Computational Linguistics: NAACL* 2024, pages 2299-2314.
|
| 463 |
+
|
| 464 |
+
Bo Zhou, Qianglong Chen, Tianyu Wang, Xiaomi Zhong, and Yin Zhang. 2023. Wyweb: A nlp evaluation benchmark for classical chinese. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3294-3319.
# A Settings of Hyper-parameters

The hyper-parameter settings are listed in Tab. 11.
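The values in Tab. 11 can be collected into a plain configuration sketch. The key names below follow common Hugging Face Trainer conventions and are our own labeling (the paper only lists the values); the arithmetic shows the effective batch size implied by gradient accumulation.

```python
# Hyper-parameters from Tab. 11 as a plain config dict. Key names are
# assumptions in Hugging Face Trainer style; the values come from the table.
config = {
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 4,
    "max_input_length": 512,
    "num_train_epochs": 6,
    "learning_rate": 3e-5,             # 1e-5 for the BERT component
    "gradient_accumulation_steps": 8,  # "g.a. steps" in Tab. 11
    "seed": 42,
}

# Gradient accumulation multiplies the effective batch size per optimizer step.
effective_batch = (
    config["per_device_train_batch_size"] * config["gradient_accumulation_steps"]
)
print(effective_batch)  # → 32
```

With a per-device batch of 4 and 8 accumulation steps, each optimizer update effectively sees 32 examples.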
# B Details of Difficulty Ratings

Eq. 2 defines the normalized probability distribution over the four options ($p_k$). The exponential normalization (softmax) is applied to the log probabilities of the four options ($\mathrm{logprob}_k$, $1 \leq k \leq 4$). Eq. 3 calculates the difference between the probability distribution over the four options and the ground-truth label.
$$
p_{k} = \frac{\exp(\mathrm{logprob}_{k})}{\sum_{j=1}^{4} \exp(\mathrm{logprob}_{j})}, \quad 1 \leq k \leq 4 \tag{2}
$$

$$
Loss_{CE} = -\sum_{k=1}^{4} y_{k} \log\left(p_{k}\right) \tag{3}
$$
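Eq. 2 and Eq. 3 amount to a softmax over the four option log probabilities followed by a cross-entropy against the one-hot label. A minimal sketch, with made-up log probabilities for illustration:

```python
import math

def option_distribution(logprobs):
    """Eq. 2: exponential normalization (softmax) of the option log probabilities."""
    exps = [math.exp(lp) for lp in logprobs]
    total = sum(exps)
    return [e / total for e in exps]

def ce_loss(p, y):
    """Eq. 3: cross-entropy between the option distribution p and the one-hot label y."""
    return -sum(yk * math.log(pk) for pk, yk in zip(p, y))

logprobs = [-1.2, -0.3, -2.5, -1.8]   # illustrative log probabilities for options A-D
p = option_distribution(logprobs)
loss = ce_loss(p, [0, 1, 0, 0])       # ground-truth answer is option B
print(round(sum(p), 6))  # → 1.0
```

With a one-hot label, the loss reduces to the negative log probability assigned to the correct option, so a confident correct prediction yields a low difficulty score.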
# C Statistical Analysis of Text Length

Fig. 3 and Fig. 4 show statistical analyses of the lengths of the options and passages.

Moreover, we provide an additional table (Tab. 12) with the overall statistics.


Figure 3: Length distribution of options.



Figure 4: Length distribution of passages.
# D Data Augmentation Quality

We add a small-scale error analysis to show the reliability of the data augmentation. Specifically, there are two types of noise. First, there are redundant celebrity profiles: we found 45 redundant celebrity profiles among the 4,415 items. The source of a passage could also be redundant (i.e., wrong); a total of 79 sources containing redundant passages were identified. Second, the Government Jobs and Reign Mottos suggested by Qwen could be wrong. Across all 4,415 items, there are 5,956 government jobs in 7,110 different outputs and 381 Reign Mottos in 532 different outputs. The refinement is based on cross-references to historical records of Ancient China. Fortunately, all dynasty names in the passages generated by Qwen are correct.
<table><tr><td></td><td>Average length</td></tr><tr><td>Content</td><td>637.6</td></tr><tr><td>Question</td><td>22.4</td></tr><tr><td>Option</td><td>53.5</td></tr><tr><td>Personal information</td><td>121.6</td></tr><tr><td>Translation</td><td>959.9</td></tr></table>

Table 12: Length statistics of all 4,415 items in our dataset.
# E Leveraging Open-sourced LLMs and Smaller LMs

To test whether our method can leverage more open-sourced LLMs and smaller LMs, we perform tests with DeepSeek-R1-Distill-Qwen-7B (Bi et al., 2024), DeepSeek-R1-Distill-Llama-8B, and Llama-2-7B (Touvron et al., 2023). The results are listed in Tab. 13.

A possible reason for the performance degradation is that small models may not handle long texts well, and VIRTUAL lengthens the input.
# F A Qualitative Example

Tab. 14 illustrates the results of solving the CRISIS example in Fig. 1.

# G A Celebrity Profile Example

Tab. 15 illustrates a celebrity profile.

# H Prompts Used in VIRTUAL

We tried three prompting strategies: zero-shot, one-shot, and chain-of-thought (COT). While the zero-shot prompt is shown in Sec. 4.6 (Tab. 4), Tab. 16 and Tab. 17 illustrate the remaining prompts.
Further, the prompt used for AMR-based segmenting is in Tab. 18.

# I AMR Results Illustration

Fig. 5 illustrates the AMR of an option generated by HanLP$^{13}$.
<table><tr><td rowspan="2">Model</td><td colspan="8">Accuracy (%)</td></tr><tr><td>A</td><td>B</td><td>C</td><td>D</td><td>Simple</td><td>Medium</td><td>Complex</td><td>Average</td></tr><tr><td>DeepSeek-R1-Distill-Qwen-7B</td><td>34</td><td>28.6</td><td>42.9</td><td>23.2</td><td>38.1</td><td>30.4</td><td>27.2</td><td>32.1</td></tr><tr><td>DeepSeek-R1-Distill-Qwen-7B+VIRTUAL</td><td>23.6</td><td>17.9</td><td>70.5</td><td>11.6</td><td>31.2</td><td>30.9</td><td>30.6</td><td>31</td></tr><tr><td>DeepSeek-R1-Distill-Llama-8B</td><td>89.6</td><td>8.0</td><td>0.9</td><td>6.2</td><td>35.8</td><td>22.8</td><td>15.9</td><td>25.3</td></tr><tr><td>DeepSeek-R1-Distill-Llama-8B+VIRTUAL</td><td>13.2</td><td>30.4</td><td>25.9</td><td>33</td><td>20.6</td><td>26.9</td><td>30.6</td><td>26</td></tr><tr><td>Llama-2-7B</td><td>57.5</td><td>35.7</td><td>2.7</td><td>15.2</td><td>34.3</td><td>26.5</td><td>19.3</td><td>27.4</td></tr><tr><td>Llama-2-7B+VIRTUAL</td><td>96.2</td><td>2.7</td><td>0.9</td><td>0.9</td><td>36.6</td><td>21.1</td><td>13.6</td><td>24.2</td></tr></table>

Table 13: Experiments of leveraging open-sourced LLMs and smaller LMs.
# Passage:

人才莫盛于三国,亦惟三国之主各能用人,故得众力相扶,以成鼎足之势...

(Talents are most abundant in the Three Kingdoms, and the rulers of the Three Kingdoms could use their talents to support each other and form a tripartite situation...)

# Option:

A.臧霸曾为吕布效力,曹操擒捉吕布以后,臧霸为避祸藏匿起来;后来他又被曹操捕获,曹操不计前嫌,对他委以重任,任命他为琅邪相。
(Zang Ba once worked for Lv Bu. After Cao Cao captured Lv Bu, Zang Ba hid to avoid disaster. Later, he was captured by Cao Cao again, who put aside their past grudges, entrusted him with an important task, and appointed him as the Prime Minister of Langya.)

# Literary Chinese evidences (top 2):

-臧霸先从陶谦,后助吕布,布为操所擒,霸藏匿,操募得之,即以霸为琅邪相。
(Zang Ba first followed Tao Qian and later helped Lv Bu. When Cao captured Lv Bu, Zang Ba hid himself. Cao recruited him and immediately made him the Prime Minister of Langya.)

- 盖操当初起时,方欲藉众力以成事,故以此奔走天下。

(When Cao first rose, he wanted to rely on everyone's strength to accomplish his cause, so he used this approach throughout the realm.)

# Translated evidences (top 1):

当吕布被曹操擒获后,臧霸藏了起来,曹操找到臧霸后,立刻任命他为琅琊相。
(When Lv Bu was captured by Cao Cao, Zang Ba went into hiding. Cao found Zang Ba and immediately appointed him as the Prime Minister of Langya.)

# AMR:

{臧霸曾经效力于吕布,之后曹操擒捉了吕布,后来曹操又捕获了臧霸,曹操不计前嫌,曹操因此委以重任于臧霸,曹操任命臧霸为琅邪相}

(Zang Ba once served Lv Bu; later Cao Cao captured Lv Bu; later Cao Cao captured Zang Ba; Cao Cao forgot the past grudges; Cao Cao therefore entrusted an important task to Zang Ba; Cao Cao appointed Zang Ba as the Prime Minister of Langya)

# Sub-sentence evidences:
臧霸曾经效力于吕布-{evi:后助吕布}

(Zang Ba once served Lv Bu)-(evi: Assisted Lv Bu)

之后曹操擒捉了吕布-{evi:当吕布被曹操擒获后}

(After that, Cao Cao captured Lv Bu)-(evi: When Lv Bu was captured by Cao Cao)

后来曹操又捕获了臧霸-{evi:曹操找到臧霸后}

(Later, Cao Cao captured Zang Ba)-(evi: After Cao Cao found Zang Ba)

曹操不计前嫌-{evi:无}

(Cao Cao let bygones be bygones)-(evi: N/A)

曹操因此委以重任于臧霸-{evi:无}

(Cao Cao therefore entrusted an important task to Zang Ba)-(evi: N/A)

曹操任命臧霸为琅邪相-{evi:即以霸为琅邪相。}

(Cao Cao appointed Zang Ba as the Prime Minister of Langya)-(evi: That is, Ba was appointed as the Prime Minister of Langya.)
# Answer:

A. “后来他又被曹操捕获”理解错误,根据原文“霸藏匿,操募得之,即以霸为琅邪相”可知,臧霸并不是被曹操捕获,而是曹操通过招募的方式找到臧霸,并任命他为琅邪相。

("Later he was captured by Cao Cao" is a wrong understanding. According to the original text, "Zang Ba hid, and Cao recruited him, and then appointed him as the Prime Minister of Langya", Cao Cao did not capture Zang Ba; rather, Cao Cao found Zang Ba through recruitment and appointed him as the Prime Minister of Langya.)

Table 14: Example of evidence extraction for option A of the literary Chinese reading comprehension in the 2024 National College Entrance Examination Chinese Language Paper A. The English translation is enclosed in parentheses; 'evi' indicates the evidence, and N/A means no evidence is available.
<table><tr><td>Feature</td><td>Details</td></tr><tr><td>Name</td><td>Liu Yuxi</td></tr><tr><td>Personality</td><td>Perseverant, open-minded, and talented.</td></tr><tr><td>Ability</td><td>Deep literary attainments, good at poetry, with profound life philosophy and insight into the ups and downs of official career.</td></tr><tr><td>Background</td><td>In the middle of the Tang Dynasty, the society was turbulent, and the fate of scholars was unfortunate.</td></tr><tr><td>Summary</td><td>Liu Yuxi, a literary giant in the Tang Dynasty, was exiled, but he was able to relieve his feelings with poetry and wine, adhered to the ambition of a Confucian man, was optimistic and tenacious, and his person and his poems all showed an open-minded life.</td></tr></table>

Table 15: An example of a celebrity profile.
<table><tr><td>INSTRUCTION: You are an expert in ancient Chinese. Please classify the type of question in literary Chinese and choose the correct answer.
##### Instruction: Please classify the question in literary Chinese as an original text comprehension question or a general comprehension question.
INPUT:
##### Passage: {Passage in literary Chinese.}
##### Question: {Question.}
##### Options: {Options.}
OUTPUT:
Type: original text comprehension question / general comprehension question
##### Instruction: Please study the examples, read the literary Chinese, and choose the options that meet the meaning of the questions.
INPUT:
##### Sample: {Samples[Type]}
##### Passage: {Passage in literary Chinese.}
##### Question: {Question.}
##### Options: {Options.}
##### Evidence: {Evidences.}
OUTPUT:
**Final Judgment**: Judgment (A/B/C/D)</td></tr></table>

Table 16: The one-shot prompt for LLMs. Section names are in brown, and text variables are in curly brackets.
<table><tr><td>INSTRUCTION:
You are an expert in literary Chinese. Please analyze whether the AMR clauses are correct based on the provided information. Return one if correct and zero if incorrect.
for op in [A, B, C, D]:
for sub in [segment 1, segment 2, ...]:
INPUT:
## Passage: {Passage in literary Chinese.}
## Question: {Question.}
## Options: {Options.}
## Evidence: {Evidences.}
## Sub: {sub-sentence.}
OUTPUT: 0/1
Sentence Correctness: [1,0,...]
OPTION SCORE: [A_score,B_score,C_score,D_score]
OUTPUT:
**Final Judgment**: Judgment (A/B/C/D)</td></tr></table>

Table 17: The COT prompt for LLMs. Section names are in brown, and text variables are in curly brackets.
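The loops in the COT prompt imply a simple aggregation step: every AMR sub-sentence of every option receives a 0/1 correctness judgment from the LLM, and each option's score summarizes those judgments. The sketch below is our reading of that step, not the authors' released code; the use of the fraction of correct sub-sentences and the choice of the minimum (for a "choose the incorrect option" question) are assumptions, and the 0/1 values are illustrative, not model outputs.

```python
# Hypothetical aggregation of per-sub-sentence 0/1 judgments into option scores.
def judge(correctness_per_option):
    scores = {
        option: sum(flags) / len(flags)  # fraction of sub-sentences judged correct
        for option, flags in correctness_per_option.items()
    }
    # For a "choose the incorrect option" question, the option least supported
    # by the passage (lowest score) is returned as the answer.
    return min(scores, key=scores.get), scores

correctness = {
    "A": [1, 1, 1, 0, 0, 1],  # some clauses lack supporting evidence
    "B": [1, 1, 1, 1, 1, 1],
    "C": [1, 1, 1, 1],
    "D": [1, 1, 1, 1, 1, 1, 1],
}
answer, scores = judge(correctness)
print(answer)  # → A
```

Under this reading, an option whose clauses are all verified against the passage scores 1.0, while an option with unsupported or contradicted clauses scores lower and is flagged as the incorrect one.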
# INSTRUCTION:

You are a Chinese expert. Please help me convert the AMR abstract semantic analysis results. Instruction: Please convert the following AMR triples into fluent Chinese clauses. These triples come from the same sentence and must be combined into a meaningful clause array.

# Criteria:

1. :arg0 indicates the performer of the action, and :arg1 indicates the recipient of the action.

2. :time indicates time information.

3. :manner indicates the way of action.

4. :mod indicates a modification relationship.

5. :aspect indicates dynamic auxiliary words, such as "了", "着", etc.

6. :poss indicates a belonging relationship.

7. :location indicates location information.

8. :op1, :op2, etc. indicate parallel relationships.

9. Pay attention to maintaining the logical relationships between concepts.

10. Each clause should retain the subject as much as possible and avoid vague references, such as he, she, it, this, and that.

# INPUT:

Triple text: {triples}

# OUTPUT:

First output: ['clause1', 'clause2', ...]
Instruction: Please check whether the following clause array meets the requirements: each clause should have a clear subject and should not use vague references (such as 'he', 'she', 'it', 'this', 'that', etc.). Only one clause that conveys the same meaning should be kept, to avoid redundancy.

# INPUT:

AMR clauses: {First output}

# OUTPUT:

Final output: ['clause1', 'clause2', ...]

Table 18: Prompts for AMR segmentation. Section names are in brown, and text variables are in curly brackets.


Figure 5: The AMR of an option. HanLP generates the visualization.

We further list detailed results in Tab. 19.

Table 20 contains the final segmented sentences.

# J One-Shot Samples

Tab. 21 shows the samples used as examples in VIRTUAL when applying the one-shot strategy.
# K Perplexity Does Not Affect Difficulty

To further explore the relationship between prediction accuracy and passage perplexity, we divide CRISIS into ten subsets with an almost identical number of items according to their perplexity. We find that the perplexity of the passages does not affect the difficulty of the questions.

Fig. 6 reports the experimental results that support this claim: perplexity does not affect difficulty.

The perplexity is the exponential of the average negative log-likelihood of the words in a sequence, given their previous context. In this paper, we use an $N$-gram-based perplexity, defined in Eq. 4. In Eq. 4, $N$ is the number of words in the sentence, and $P(w_{i}|w_{i-1})$ (see Eq. 5) is the conditional probability of the model predicting the $i^{th}$ word $(w_{i})$ given the $(i-1)^{th}$ word $(w_{i-1})$. In Eq. 5, $\text{Count}(\cdot)$ counts word occurrences, $|V|$ is the vocabulary size, and $\lambda$ is the smoothing parameter, set to 1 in this paper. This additive smoothing adjusts $N$-gram probabilities by adding a small constant $(\lambda)$ to each count, guaranteeing non-zero probabilities for all $N$-grams.

While a low perplexity score reflects familiar language patterns and structure, so the passage is easy to read, a straightforward passage does not necessarily reduce the difficulty of the question, because the questions are deliberately designed.
$$
Perplexity = 2^{-\frac{1}{N} \sum_{i=1}^{N} \log_{2} P\left(w_{i} \mid w_{i-1}\right)} \tag{4}
$$

$$
P\left(w_{i} \mid w_{i-1}\right) = \frac{\operatorname{Count}\left(w_{i-1}, w_{i}\right) + \lambda}{\operatorname{Count}\left(w_{i-1}\right) + \lambda \cdot |V|} \tag{5}
$$
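Eq. 4 and Eq. 5 can be computed directly from bigram and unigram counts. A minimal character-level sketch (the toy input is taken from the example passage; counting the context over the whole token list is a simplification):

```python
import math
from collections import Counter

def bigram_perplexity(tokens, vocab_size, lam=1.0):
    """Eq. 4 with add-lambda smoothed bigram probabilities (Eq. 5)."""
    unigrams = Counter(tokens)                  # Count(w_{i-1})
    bigrams = Counter(zip(tokens, tokens[1:]))  # Count(w_{i-1}, w_i)
    log_sum, n = 0.0, 0
    for prev, cur in zip(tokens, tokens[1:]):
        p = (bigrams[(prev, cur)] + lam) / (unigrams[prev] + lam * vocab_size)
        log_sum += math.log2(p)
        n += 1
    return 2 ** (-log_sum / n)

tokens = list("人才莫盛于三国")  # 7 distinct characters from the example passage
ppl = bigram_perplexity(tokens, vocab_size=len(set(tokens)))
print(ppl)  # → 4.0
```

For this toy input every smoothed bigram probability is (1+1)/(1+7) = 0.25, so the perplexity is exactly 4.0; a real passage would mix frequent and rare transitions.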
<table><tr><td>sentence number</td><td>node number 1</td><td>concept 1</td><td>co-referencing node 1</td><td>relation</td><td>relation number</td><td>relation alignment word</td><td>node number 2</td><td>concept 2</td><td>co-referencing node 2</td></tr><tr><td>10755</td><td>x0</td><td>root</td><td>-</td><td>:top</td><td>-</td><td>-</td><td>x1002</td><td>and</td><td>-</td></tr><tr><td>10755</td><td>x5</td><td>效力 (work for)</td><td>-</td><td>:time</td><td>-</td><td>-</td><td>x2</td><td>曾 (was)</td><td>-</td></tr><tr><td>10755</td><td>x5</td><td>效力 (work for)</td><td>-</td><td>:arg0</td><td>-</td><td>-</td><td>x1000</td><td>person</td><td>-</td></tr><tr><td>10755</td><td>x5</td><td>效力</td><td>-</td><td>:beneficiary</td><td>-</td><td>-</td><td>x1001</td><td>person</td><td>-</td></tr><tr><td>10755</td><td>x8</td><td>擒捉 (capture)</td><td>-</td><td>:arg0</td><td>-</td><td>-</td><td>x7</td><td>曹操 (Cao Cao)</td><td>-</td></tr><tr><td>10755</td><td>x8</td><td>擒捉 (capture)</td><td>-</td><td>:arg1</td><td>-</td><td>-</td><td>x9</td><td>吕布 (Lv Bu)</td><td>-</td></tr><tr><td>10755</td><td>x8</td><td>擒捉 (capture)</td><td>-</td><td>:arg0</td><td>-</td><td>-</td><td>x1001</td><td>person</td><td>-</td></tr><tr><td>10755</td><td>x10</td><td>以后 (later)</td><td>-</td><td>:op1</td><td>-</td><td>-</td><td>x8</td><td>擒捉 (capture)</td><td>-</td></tr><tr><td>10755</td><td>x23</td><td>捕获 (capture)</td><td>-</td><td>:time</td><td>-</td><td>-</td><td>x18</td><td>后来</td><td>-</td></tr><tr><td>10755</td><td>x23</td><td>捕获 (capture)</td><td>-</td><td>:arg1</td><td>-</td><td>-</td><td>x19</td><td>他 (he)</td><td>-</td></tr><tr><td>10755</td><td>x23</td><td>捕获 (capture)</td><td>-</td><td>:mod</td><td>-</td><td>-</td><td>x20</td><td>又</td><td>-</td></tr><tr><td>10755</td><td>x23</td><td>捕获 (capture)</td><td>-</td><td>:arg0</td><td>x21</td><td>被 (be)</td><td>x22</td><td>曹操 (Cao Cao)</td><td>-</td></tr><tr><td>10755</td><td>x26</td><td>不计前嫌 (let bygones be 
bygones)</td><td>-</td><td>:arg0</td><td>-</td><td>-</td><td>x25</td><td>曹操 (Cao Cao)</td><td>-</td></tr><tr><td>10755</td><td>x30</td><td>委以重任 (entrusted with important tasks)</td><td>-</td><td>:arg0</td><td>-</td><td>-</td><td>x25</td><td>曹操 (Cao Cao)</td><td>-</td></tr><tr><td>10755</td><td>x30</td><td>委以重任 (entrusted with important tasks)</td><td>-</td><td>:cause</td><td>-</td><td>-</td><td>x26</td><td>不计前嫌 (let bygones be bygones)</td><td>-</td></tr><tr><td>10755</td><td>x30</td><td>委以重任 (entrusted with important tasks)</td><td>-</td><td>:arg1</td><td>x28</td><td>对 (against)</td><td>x29</td><td>他 (he)</td><td>-</td></tr><tr><td>10755</td><td>x32</td><td>任命 (nominate)</td><td>-</td><td>:arg1</td><td>-</td><td>-</td><td>x33</td><td>他 (he)</td><td>-</td></tr><tr><td>10755</td><td>x32</td><td>任命 (nominate)</td><td>-</td><td>:arg2</td><td>x34</td><td>为 (be)</td><td>x36</td><td>邪 (N/A)</td><td>-</td></tr><tr><td>10755</td><td>x32</td><td>任命 (nominate)</td><td>-</td><td>:arg2</td><td>x37</td><td>相 (be Prime Minister)</td><td>x36</td><td>邪 (N/A)</td><td>-</td></tr><tr><td>10755</td><td>x32</td><td>任命 (nominate)</td><td>-</td><td>:arg2</td><td>x34</td><td>为 (be)</td><td>x37</td><td>相 (Prime Minister)</td><td>-</td></tr><tr><td>10755</td><td>x32</td><td>任命 (nominate)</td><td>-</td><td>:arg2</td><td>-</td><td>-</td><td>x1005</td><td>local-region</td><td>-</td></tr><tr><td>10755</td><td>x37</td><td>相 (Prime Minister)</td><td>-</td><td>:mod</td><td>-</td><td>-</td><td>x1005</td><td>local-region</td><td>-</td></tr><tr><td>10755</td><td>x1000</td><td>person</td><td>-</td><td>:name</td><td>-</td><td>-</td><td>x1</td><td>臧霸 (Zang Ba)</td><td>-</td></tr><tr><td>10755</td><td>x1001</td><td>person</td><td>-</td><td>:name</td><td>-</td><td>-</td><td>x4</td><td>吕布 (Lv Bu)</td><td>-</td></tr><tr><td>10755</td><td>x1002</td><td>and</td><td>-</td><td>:op1</td><td>-</td><td>-</td><td>x5</td><td>效力 (work 
for)</td><td>-</td></tr><tr><td>10755</td><td>x1002</td><td>and</td><td>-</td><td>:op2</td><td>-</td><td>-</td><td>x1003</td><td>temporal</td><td>-</td></tr><tr><td>10755</td><td>x1003</td><td>temporal</td><td>-</td><td>:arg1</td><td>x10</td><td>以后 (later)</td><td>x8</td><td>擒捉 (capture)</td><td>-</td></tr><tr><td>10755</td><td>x1003</td><td>temporal</td><td>-</td><td>:arg2</td><td>-</td><td>-</td><td>x1004</td><td>and</td><td>-</td></tr><tr><td>10755</td><td>x1004</td><td>and</td><td>-</td><td>:op1</td><td>-</td><td>-</td><td>x23</td><td>捕获 (capture)</td><td>-</td></tr><tr><td>10755</td><td>x1004</td><td>and</td><td>-</td><td>:op2</td><td>-</td><td>-</td><td>x26</td><td>不计前嫌 (let bygones be bygones)</td><td>-</td></tr><tr><td>10755</td><td>x1004</td><td>and</td><td>-</td><td>:op2</td><td>-</td><td>-</td><td>x30</td><td>委以重任 (entrusted with important tasks)</td><td>-</td></tr><tr><td>10755</td><td>x1004</td><td>and</td><td>-</td><td>:op3</td><td>-</td><td>-</td><td>x32</td><td>任命 (nominate)</td><td>-</td></tr><tr><td>10755</td><td>x1005</td><td>local-region</td><td>-</td><td>:name</td><td>-</td><td>-</td><td>x35</td><td>珉 (Lang)</td><td>-</td></tr><tr><td>10755</td><td>x12</td><td>臧霸 (Zang Ba)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>x12</td><td>臧霸 (Zang Ba)</td><td>-</td></tr><tr><td>10755</td><td>x14</td><td>避祸 (avoid disaster)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>x14</td><td>避祸 (avoid disaster)</td><td>-</td></tr><tr><td>10755</td><td>x15</td><td>藏匿 (hide)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>x15</td><td>藏匿 (hide)</td><td>-</td></tr></table>
Table 19: Example of AMR abstract semantic relationship extraction results for option A of the literary Chinese reading comprehension in the 2024 National College Entrance Examination Chinese Language Paper A. The English translation is enclosed in parentheses.
# Passage:

人才莫盛于三国,亦惟三国之主各能用人,故得众力相扶,以成鼎足之势...

(Talents are abundant in the Three Kingdoms, and the rulers of the Three Kingdoms can use talents to support each other and form a tripartite situation.)

# Options:

The following summary of the relevant content of the original text is incorrect:

A.臧霸曾为吕布效力,曹操擒捉吕布以后,臧霸为避祸藏匿起来;后来他又被曹操捕获,曹操不计前嫌,对他委以重任,任命他为琅邪相。

(Zang Ba once worked for Lu Bu. After Cao Cao captured Lu Bu, Zang Ba hid to avoid disaster; later, he was captured by Cao Cao again, and Cao Cao put aside his previous grudges and entrusted him with an important task, appointing him as the Prime Minister of Langya.)

B.曹操初起时为图霸业,能笼络人才,甚至能任用曾与己有怨者;势位已定时则猜忌异己,滥杀无辜。这正是其用人“以权术相驭”的表现。
(When Cao Cao first started, he was able to win over talents and even employ those who had grudges against him to achieve hegemony; when his position was established, he was suspicious of dissidents and killed innocent people indiscriminately. This action manifests in his "controlling people with power and tactics" when employing people.)

C. 刘备以性情结交忠义之士,以诚待人,故能深得人心;刘备创业过程中多次遭遇挫折,但诸葛亮及关、张、赵云等人患难相随,忠贞不渝。

(Liu Bei made friends with loyal and righteous people with his temperament and treated people with sincerity, so he was deeply popular; Liu Bei encountered many setbacks in the process of starting a business, but Zhuge Liang, Guan, Zhang, Zhao Yun and others accompanied him through thick and thin and remained loyal.)

D.陆逊镇守西陵时,深得孙权信任,孙权给刘禅、诸葛亮写信,常常给陆逊看,有不妥之处就让他改定;到了晚年,陆逊遭到谗害,郁郁而终。

(When Lu Xun was stationed in Xiling, Sun Quan deeply trusted him. Sun Quan often showed Lu Xun the letters he wrote to Liu Chan and Zhuge Liang and asked him to revise them if there were any inappropriate parts. In his later years, Lu Xun was slandered and died of depression.)
# Result:

A.{'臧霸曾经效力于吕布', '之后曹操擒捉了吕布', '后来曹操又捕获了臧霸', '曹操不计前嫌', '曹操因此委以重任于臧霸', '曹操任命臧霸为琅邪相'}

{'Zang Ba once served Lv Bu', 'Later Cao Cao captured Lv Bu', 'Later Cao Cao captured Zang Ba', 'Cao Cao did not bear grudges', 'Cao Cao therefore entrusted an important task to Zang Ba', 'Cao Cao appointed Zang Ba as the governor of Langya'}

B.{'曹操能笼络人才', '曹操能任用人才', '曹操曾经有势位怨者', '曹操定时猜忌异己', '曹操滥杀无辜', '曹操的用人表现正是其权术的体现'}

{'Cao Cao was able to win over talents', 'Cao Cao was able to employ talents', 'Cao Cao once had people who resented him for his position', 'Cao Cao was always suspicious of those who were different from him', 'Cao Cao killed innocent people', "Cao Cao's performance in employing people was a reflection of his political tactics"}

C.{'刘备结交了很多人,如诸葛亮、张飞和赵云等', '刘备在创业过程中多次遭遇挫折', '刘备结交的人忠贞不渝,患难相随', '刘备以性情结交了很多人'}

{'Liu Bei made friends with many people, such as Zhuge Liang, Zhang Fei and Zhao Yun', 'Liu Bei encountered many setbacks in the process of starting a business', 'The people Liu Bei made friends with were loyal, and they accompanied him through thick and thin', 'Liu Bei made friends with many people because of his personality'}

D.{'陆逊镇守西陵', '陆逊认为有不妥之处', '陆逊改定不妥之处', '陆逊在晚年遭到谗害', '陆逊郁郁而终', '陆逊深得孙权的信任', '孙权常常给陆逊写信'}

{'Lu Xun guarded Xiling', 'Lu Xun identified the inappropriate parts', 'Lu Xun corrected the inappropriate parts', 'Lu Xun was slandered in his later years', 'Lu Xun died in depression', 'Lu Xun was deeply trusted by Sun Quan', 'Sun Quan often wrote to Lu Xun'}

Table 20: Results of the AMR-based segmentation of options for the literary Chinese reading comprehension in the 2024 National College Entrance Examination paper. The English translation is enclosed in parentheses.
<table><tr><td>Question type</td><td>Content</td></tr><tr><td>Detail question</td><td>Passage: 曹雄, 西安左卫人。弘治末, 历官都指挥佥事, 为延绥副总兵。武宗即位, 用总督杨一清荐, 擢署都督佥事, 充总兵官, 镇固原...(省略)... 瑱败, 言官交劾, 降指挥佥事, 寻征下狱, 以党逆论死, 籍其家。(Cao Xiong was from Zuowei, Xi'an. At the end of the Hongzhi reign, he served as Assistant Regional Commander and deputy general of Yansui. When Wuzong ascended the throne, on the recommendation of Governor-General Yang Yiqing, he was promoted to acting Commissioner-in-Chief and appointed general officer to garrison Guyuan...(omitted)... After Jin's defeat, the censors jointly impeached him; he was demoted to Assistant Commander, soon arrested and imprisoned, sentenced to death for treason, and his family property was confiscated.)
Question: Which of the following is a [wrong] understanding of the article content:
Options:
A. The enemy killed Cao Xiong because he held his troops but did not rescue them.
B. 曹雄建议改进军令传递方式...(Cao Xiong suggested a better system for passing military orders...)
C. 曹雄对部下持奖惩并施的态度...(Cao Xiong adopted an attitude of both rewarding and punishing his subordinates...)
D. 皇帝认可他的建议...(The emperor approved his suggestion...)
Answer: D
Explanation: The analysis of 'the emperor approved his suggestion' is wrong. According to the original text, it was not the emperor who approved; rather, the Minister of War agreed to his request following Liu Jin's opinion.</td></tr><tr><td>Summary question</td><td>Passage: 赏者, 所以辨情也; 评者, 所以绳理也。赏而不正, 则情乱于实; 评而不均, 则理失其真...(省略)...采其制意之本, 略其文外之华, 不没纤芥之善, 不掩萤烛之光, 可谓千载一遇也。(Appreciation is how we discern feelings; evaluation is how we judge the truth. If appreciation is flawed, emotions will be confused with reality; if evaluation lacks balance, reasoning will lose its essence... Adopting the essence of the intended meaning, ignoring the extravagance beyond the text, not burying the goodness of a mustard seed, and not covering a firefly's light can be called a once-in-a-thousand-years encounter.)
Question: Which of the following is a [wrong] understanding of the content of the article:
Options:
A. 文章强调赏评应注重实质而非形式。(The article emphasizes that appreciation and evaluation should focus on substance rather than form.)
B. 以历史实例批判喜新厌旧的态度。(It criticizes the attitude of liking the new and disliking the old with historical examples.)
C. 主张依照客观标准衡量事物价值。(It advocates measuring the value of things according to objective standards.)
D. 借类比说明人云亦云的弊端。(It uses analogy to illustrate the drawbacks of unthinkingly following others.)
Answer: D
Explanation: The saying 'liking the new and disliking the old' is wrong. The fourth paragraph uses an analogy to explain that appreciation can only be achieved correctly by not worshipping names, not destroying reality, not following the crowd, and not unthinkingly following others.</td></tr></table>

Table 21: Samples used in the one-shot strategy. The English translation is enclosed in parentheses.


Figure 6: Prediction accuracy (Y-Axis) vs. perplexity. We divide CRISIS into ten subsets with almost identical numbers of items according to their perplexity score (X-Axis, Perplexity Range).
|
2025/A Comprehensive Literary Chinese Reading Comprehension Dataset with an Evidence Curation Based Solution/images.zip
ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:e1c582bdf989399efa35cbe39e2dcbd3c21cf4bd43ab8da97def8125ce91a9e2
size 1213252

2025/A Comprehensive Literary Chinese Reading Comprehension Dataset with an Evidence Curation Based Solution/layout.json
ADDED

The diff for this file is too large to render.

2025/A Computational Simulation of Language Production in First Language Acquisition/63198e81-ac5b-4894-8c54-27e3c6f8a917_content_list.json
ADDED

The diff for this file is too large to render.