Chelsea707 committed
Commit 415c0ac · verified · 1 Parent(s): 2eedfef

Add Batch ad09a5f7-dad3-4921-af9b-60f7dad00958 data

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. .gitattributes +64 -0
  2. 2024/A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning/43284e62-f62c-4088-9544-f4ff108eb4c5_content_list.json +0 -0
  3. 2024/A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning/43284e62-f62c-4088-9544-f4ff108eb4c5_model.json +0 -0
  4. 2024/A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning/43284e62-f62c-4088-9544-f4ff108eb4c5_origin.pdf +3 -0
  5. 2024/A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning/full.md +0 -0
  6. 2024/A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning/images.zip +3 -0
  7. 2024/A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning/layout.json +0 -0
  8. 2024/A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models/8f8aca71-294d-4b9e-9495-82fce99e28eb_content_list.json +0 -0
  9. 2024/A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models/8f8aca71-294d-4b9e-9495-82fce99e28eb_model.json +0 -0
  10. 2024/A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models/8f8aca71-294d-4b9e-9495-82fce99e28eb_origin.pdf +3 -0
  11. 2024/A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models/full.md +306 -0
  12. 2024/A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models/images.zip +3 -0
  13. 2024/A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models/layout.json +0 -0
  14. 2024/A Likelihood Ratio Test of Genetic Relationship among Languages/6adb42de-14e6-4986-ab30-294d02f67dca_content_list.json +1864 -0
  15. 2024/A Likelihood Ratio Test of Genetic Relationship among Languages/6adb42de-14e6-4986-ab30-294d02f67dca_model.json +0 -0
  16. 2024/A Likelihood Ratio Test of Genetic Relationship among Languages/6adb42de-14e6-4986-ab30-294d02f67dca_origin.pdf +3 -0
  17. 2024/A Likelihood Ratio Test of Genetic Relationship among Languages/full.md +357 -0
  18. 2024/A Likelihood Ratio Test of Genetic Relationship among Languages/images.zip +3 -0
  19. 2024/A Likelihood Ratio Test of Genetic Relationship among Languages/layout.json +0 -0
  20. 2024/A Preference-driven Paradigm for Enhanced Translation with Large Language Models/d7f9322a-d243-4010-965b-89c497fd221b_content_list.json +0 -0
  21. 2024/A Preference-driven Paradigm for Enhanced Translation with Large Language Models/d7f9322a-d243-4010-965b-89c497fd221b_model.json +0 -0
  22. 2024/A Preference-driven Paradigm for Enhanced Translation with Large Language Models/d7f9322a-d243-4010-965b-89c497fd221b_origin.pdf +3 -0
  23. 2024/A Preference-driven Paradigm for Enhanced Translation with Large Language Models/full.md +490 -0
  24. 2024/A Preference-driven Paradigm for Enhanced Translation with Large Language Models/images.zip +3 -0
  25. 2024/A Preference-driven Paradigm for Enhanced Translation with Large Language Models/layout.json +0 -0
  26. 2024/A Pretrainer’s Guide to Training Data_ Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity/f3255e0a-3bee-47d2-8ffc-17ec6292afa6_content_list.json +0 -0
  27. 2024/A Pretrainer’s Guide to Training Data_ Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity/f3255e0a-3bee-47d2-8ffc-17ec6292afa6_model.json +0 -0
  28. 2024/A Pretrainer’s Guide to Training Data_ Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity/f3255e0a-3bee-47d2-8ffc-17ec6292afa6_origin.pdf +3 -0
  29. 2024/A Pretrainer’s Guide to Training Data_ Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity/full.md +0 -0
  30. 2024/A Pretrainer’s Guide to Training Data_ Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity/images.zip +3 -0
  31. 2024/A Pretrainer’s Guide to Training Data_ Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity/layout.json +0 -0
  32. 2024/A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution/3de756a6-bbf7-4cfb-ae83-0ff5c60e7686_content_list.json +0 -0
  33. 2024/A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution/3de756a6-bbf7-4cfb-ae83-0ff5c60e7686_model.json +0 -0
  34. 2024/A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution/3de756a6-bbf7-4cfb-ae83-0ff5c60e7686_origin.pdf +3 -0
  35. 2024/A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution/full.md +0 -0
  36. 2024/A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution/images.zip +3 -0
  37. 2024/A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution/layout.json +0 -0
  38. 2024/A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality/8156c041-3083-4221-9fa2-c2116611fd69_content_list.json +1945 -0
  39. 2024/A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality/8156c041-3083-4221-9fa2-c2116611fd69_model.json +0 -0
  40. 2024/A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality/8156c041-3083-4221-9fa2-c2116611fd69_origin.pdf +3 -0
  41. 2024/A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality/full.md +356 -0
  42. 2024/A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality/images.zip +3 -0
  43. 2024/A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality/layout.json +0 -0
  44. 2024/A Study on the Calibration of In-context Learning/d325c270-6a74-49da-b92f-90bde3697c69_content_list.json +0 -0
  45. 2024/A Study on the Calibration of In-context Learning/d325c270-6a74-49da-b92f-90bde3697c69_model.json +0 -0
  46. 2024/A Study on the Calibration of In-context Learning/d325c270-6a74-49da-b92f-90bde3697c69_origin.pdf +3 -0
  47. 2024/A Study on the Calibration of In-context Learning/full.md +380 -0
  48. 2024/A Study on the Calibration of In-context Learning/images.zip +3 -0
  49. 2024/A Study on the Calibration of In-context Learning/layout.json +0 -0
  50. 2024/A Survey of Confidence Estimation and Calibration in Large Language Models/2f5c88ab-d105-40df-8549-a83aaa8adb25_content_list.json +0 -0
.gitattributes CHANGED
@@ -1509,3 +1509,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2024/Tailoring[[:space:]]Vaccine[[:space:]]Messaging[[:space:]]with[[:space:]]Common-Ground[[:space:]]Opinions/0e389c0c-5a97-4e0f-8b32-1446aa9622f1_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2024/Targeted[[:space:]]Augmentation[[:space:]]for[[:space:]]Low-Resource[[:space:]]Event[[:space:]]Extraction/f2db35be-1c0c-4ae5-bb31-1210e73210a9_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2024/Task-Agnostic[[:space:]]Detector[[:space:]]for[[:space:]]Insertion-Based[[:space:]]Backdoor[[:space:]]Attacks/0102c997-ccfd-4dcf-a0a7-920c0150efe8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Closer[[:space:]]Look[[:space:]]at[[:space:]]the[[:space:]]Self-Verification[[:space:]]Abilities[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]in[[:space:]]Logical[[:space:]]Reasoning/43284e62-f62c-4088-9544-f4ff108eb4c5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Comprehensive[[:space:]]Study[[:space:]]of[[:space:]]Gender[[:space:]]Bias[[:space:]]in[[:space:]]Chemical[[:space:]]Named[[:space:]]Entity[[:space:]]Recognition[[:space:]]Models/8f8aca71-294d-4b9e-9495-82fce99e28eb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Likelihood[[:space:]]Ratio[[:space:]]Test[[:space:]]of[[:space:]]Genetic[[:space:]]Relationship[[:space:]]among[[:space:]]Languages/6adb42de-14e6-4986-ab30-294d02f67dca_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Preference-driven[[:space:]]Paradigm[[:space:]]for[[:space:]]Enhanced[[:space:]]Translation[[:space:]]with[[:space:]]Large[[:space:]]Language[[:space:]]Models/d7f9322a-d243-4010-965b-89c497fd221b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Pretrainer’s[[:space:]]Guide[[:space:]]to[[:space:]]Training[[:space:]]Data_[[:space:]]Measuring[[:space:]]the[[:space:]]Effects[[:space:]]of[[:space:]]Data[[:space:]]Age,[[:space:]]Domain[[:space:]]Coverage,[[:space:]]Quality,[[:space:]]&[[:space:]]Toxicity/f3255e0a-3bee-47d2-8ffc-17ec6292afa6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Rationale-centric[[:space:]]Counterfactual[[:space:]]Data[[:space:]]Augmentation[[:space:]]Method[[:space:]]for[[:space:]]Cross-Document[[:space:]]Event[[:space:]]Coreference[[:space:]]Resolution/3de756a6-bbf7-4cfb-ae83-0ff5c60e7686_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]School[[:space:]]Student[[:space:]]Essay[[:space:]]Corpus[[:space:]]for[[:space:]]Analyzing[[:space:]]Interactions[[:space:]]of[[:space:]]Argumentative[[:space:]]Structure[[:space:]]and[[:space:]]Quality/8156c041-3083-4221-9fa2-c2116611fd69_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Study[[:space:]]on[[:space:]]the[[:space:]]Calibration[[:space:]]of[[:space:]]In-context[[:space:]]Learning/d325c270-6a74-49da-b92f-90bde3697c69_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Survey[[:space:]]of[[:space:]]Confidence[[:space:]]Estimation[[:space:]]and[[:space:]]Calibration[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/2f5c88ab-d105-40df-8549-a83aaa8adb25_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Survey[[:space:]]of[[:space:]]Meaning[[:space:]]Representations[[:space:]]–[[:space:]]From[[:space:]]Theory[[:space:]]to[[:space:]]Practical[[:space:]]Utility/358c66a0-08f5-42ff-b3dd-361e6ba62f35_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Symbolic[[:space:]]Framework[[:space:]]for[[:space:]]Evaluating[[:space:]]Mathematical[[:space:]]Reasoning[[:space:]]and[[:space:]]Generalisation[[:space:]]with[[:space:]]Transformers/789e958a-8502-40c0-909e-3e0ef66acfe4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Systematic[[:space:]]Comparison[[:space:]]of[[:space:]]Contextualized[[:space:]]Word[[:space:]]Embeddings[[:space:]]for[[:space:]]Lexical[[:space:]]Semantic[[:space:]]Change/bbf9efe2-18bc-4b0f-916e-ebc8de3f600d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Systematic[[:space:]]Comparison[[:space:]]of[[:space:]]Syllogistic[[:space:]]Reasoning[[:space:]]in[[:space:]]Humans[[:space:]]and[[:space:]]Language[[:space:]]Models/55c3705d-6a3c-4290-b783-26e6c9d86004_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Theory[[:space:]]Guided[[:space:]]Scaffolding[[:space:]]Instruction[[:space:]]Framework[[:space:]]for[[:space:]]LLM-Enabled[[:space:]]Metaphor[[:space:]]Reasoning/6fcdb673-33ad-403d-b3e1-27b731e001f3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Universal[[:space:]]Dependencies[[:space:]]Treebank[[:space:]]for[[:space:]]Highland[[:space:]]Puebla[[:space:]]Nahuatl/ceada196-521f-4147-8a8c-151dc38bd3f6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Wolf[[:space:]]in[[:space:]]Sheep’s[[:space:]]Clothing_[[:space:]]Generalized[[:space:]]Nested[[:space:]]Jailbreak[[:space:]]Prompts[[:space:]]can[[:space:]]Fool[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]Easily/62c9a462-81ec-46ca-a872-e55f4ccd1df4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/A[[:space:]]Zero-Shot[[:space:]]Monolingual[[:space:]]Dual[[:space:]]Stage[[:space:]]Information[[:space:]]Retrieval[[:space:]]System[[:space:]]for[[:space:]]Spanish[[:space:]]Biomedical[[:space:]]Systematic[[:space:]]Literature[[:space:]]Reviews/c9f112de-e2e5-4456-8390-38fd52262dee_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/ACLSum_[[:space:]]A[[:space:]]New[[:space:]]Dataset[[:space:]]for[[:space:]]Aspect-based[[:space:]]Summarization[[:space:]]of[[:space:]]Scientific[[:space:]]Publications/46b5e395-42f2-467e-92f6-1d33fdf49d9b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/ALBA_[[:space:]]Adaptive[[:space:]]Language-Based[[:space:]]Assessments[[:space:]]for[[:space:]]Mental[[:space:]]Health/7f91b187-4a95-4f06-b982-71485c88ef37_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/ALoRA_[[:space:]]Allocating[[:space:]]Low-Rank[[:space:]]Adaptation[[:space:]]for[[:space:]]Fine-tuning[[:space:]]Large[[:space:]]Language[[:space:]]Models/34645583-ebd0-44f7-80a6-b4de415828c6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/AMRFact_[[:space:]]Enhancing[[:space:]]Summarization[[:space:]]Factuality[[:space:]]Evaluation[[:space:]]with[[:space:]]AMR-Driven[[:space:]]Negative[[:space:]]Samples[[:space:]]Generation/c3692226-d852-4742-aa6f-be022e969565_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/ARES_[[:space:]]An[[:space:]]Automated[[:space:]]Evaluation[[:space:]]Framework[[:space:]]for[[:space:]]Retrieval-Augmented[[:space:]]Generation[[:space:]]Systems/811bf4b8-0830-44db-b790-dd0f4352134a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/ARM_[[:space:]]Alignment[[:space:]]with[[:space:]]Residual[[:space:]]Energy-Based[[:space:]]Model/bd5e23a7-df8b-4971-bbf0-ac381d5b9f69_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/AWESOME_[[:space:]]GPU[[:space:]]Memory-constrained[[:space:]]Long[[:space:]]Document[[:space:]]Summarization[[:space:]]using[[:space:]]Memory[[:space:]]Mechanism[[:space:]]and[[:space:]]Global[[:space:]]Salient[[:space:]]Content/882ae052-0474-4133-b61b-e83ff4fa2988_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Accurate[[:space:]]Knowledge[[:space:]]Distillation[[:space:]]via[[:space:]]n-best[[:space:]]Reranking/644ad9ce-2420-4341-aa40-1fc08b3573a1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Teaching[[:space:]]Llama[[:space:]]a[[:space:]]New[[:space:]]Language[[:space:]]Through[[:space:]]Cross-Lingual[[:space:]]Knowledge[[:space:]]Transfer/5d5740fb-d146-456c-a458-d298a92cbe89_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Teaching[[:space:]]a[[:space:]]Multilingual[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]to[[:space:]]Understand[[:space:]]Multilingual[[:space:]]Speech[[:space:]]via[[:space:]]Multi-Instructional[[:space:]]Training/132708b8-1889-4926-b6bd-b61aba47c376_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Testing[[:space:]]the[[:space:]]Effect[[:space:]]of[[:space:]]Code[[:space:]]Documentation[[:space:]]on[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Code[[:space:]]Understanding/6c03624d-8e8a-4c27-a5d6-01c3aad326f4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Testing[[:space:]]the[[:space:]]limits[[:space:]]of[[:space:]]logical[[:space:]]reasoning[[:space:]]in[[:space:]]neural[[:space:]]and[[:space:]]hybrid[[:space:]]models/2b7f5d77-3dcb-4be7-b118-428c24052db5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/The[[:space:]]Curious[[:space:]]Decline[[:space:]]of[[:space:]]Linguistic[[:space:]]Diversity_[[:space:]]Training[[:space:]]Language[[:space:]]Models[[:space:]]on[[:space:]]Synthetic[[:space:]]Text/8eed7714-8a66-41a1-abfe-a2b15e8d649d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/The[[:space:]]Impact[[:space:]]of[[:space:]]Differential[[:space:]]Privacy[[:space:]]on[[:space:]]Group[[:space:]]Disparity[[:space:]]Mitigation/80a60177-251c-476e-bfb3-24f3b6e015f6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/The[[:space:]]Whole[[:space:]]is[[:space:]]Better[[:space:]]than[[:space:]]the[[:space:]]Sum_[[:space:]]Using[[:space:]]Aggregated[[:space:]]Demonstrations[[:space:]]in[[:space:]]In-Context[[:space:]]Learning[[:space:]]for[[:space:]]Sequential[[:space:]]Recommendation/ed21fb3d-ba42-488c-bdcf-3be2afe726fc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Think[[:space:]]Before[[:space:]]You[[:space:]]Speak_[[:space:]]Cultivating[[:space:]]Communication[[:space:]]Skills[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]via[[:space:]]Inner[[:space:]]Monologue/aa51f82d-24c9-4bf1-a93d-092a4785c9e1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Think[[:space:]]While[[:space:]]You[[:space:]]Write_[[:space:]]Hypothesis[[:space:]]Verification[[:space:]]Promotes[[:space:]]Faithful[[:space:]]Knowledge-to-Text[[:space:]]Generation/16f04ac0-4023-4759-a395-0962abdc35d9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Time[[:space:]]Machine[[:space:]]GPT/8dba88ee-fa07-4c8c-9065-c170cf495818_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Tokenization[[:space:]]Matters_[[:space:]]Navigating[[:space:]]Data-Scarce[[:space:]]Tokenization[[:space:]]for[[:space:]]Gender[[:space:]]Inclusive[[:space:]]Language[[:space:]]Technologies/f26bd3b0-feea-46b5-8ae8-da63abd26cad_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Tokenizer[[:space:]]Choice[[:space:]]For[[:space:]]LLM[[:space:]]Training_[[:space:]]Negligible[[:space:]]or[[:space:]]Crucial_/9e24ef0b-46b8-4f92-9290-ab7a710a564e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Towards[[:space:]]Better[[:space:]]Generalization[[:space:]]in[[:space:]]Open-Domain[[:space:]]Question[[:space:]]Answering[[:space:]]by[[:space:]]Mitigating[[:space:]]Context[[:space:]]Memorization/feaad982-edd4-4d19-9089-45432fe91a4c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Towards[[:space:]]an[[:space:]]On-device[[:space:]]Agent[[:space:]]for[[:space:]]Text[[:space:]]Rewriting/c88fe251-6a2a-4416-8fa5-37739f30e335_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Tram_[[:space:]]A[[:space:]]Token-level[[:space:]]Retrieval-augmented[[:space:]]Mechanism[[:space:]]for[[:space:]]Source[[:space:]]Code[[:space:]]Summarization/c1f88a2e-e251-42eb-84dd-25a7e10bdb2b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/UEGP_[[:space:]]Unified[[:space:]]Expert-Guided[[:space:]]Pre-training[[:space:]]for[[:space:]]Knowledge[[:space:]]Rekindle/2f6eefff-ed14-46d4-89f2-47da1ee46068_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/UGIF-DataSet_[[:space:]]A[[:space:]]New[[:space:]]Dataset[[:space:]]for[[:space:]]Cross-lingual,[[:space:]]Cross-modal[[:space:]]Sequential[[:space:]]actions[[:space:]]on[[:space:]]the[[:space:]]UI/8a1be90e-f604-41fe-8859-3cd3eb508a8b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/UNO-DST_[[:space:]]Leveraging[[:space:]]Unlabelled[[:space:]]Data[[:space:]]in[[:space:]]Zero-Shot[[:space:]]Dialogue[[:space:]]State[[:space:]]Tracking/14cfc691-6100-4048-8050-d5e521df42d3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Uncertainty[[:space:]]Estimation[[:space:]]on[[:space:]]Sequential[[:space:]]Labeling[[:space:]]via[[:space:]]Uncertainty[[:space:]]Transmission/e50a08f3-a479-468a-9ac6-67a5a8140164_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Unleashing[[:space:]]the[[:space:]]Power[[:space:]]of[[:space:]]LLMs[[:space:]]in[[:space:]]Court[[:space:]]View[[:space:]]Generation[[:space:]]by[[:space:]]Stimulating[[:space:]]Internal[[:space:]]Knowledge[[:space:]]and[[:space:]]Incorporating[[:space:]]External[[:space:]]Knowledge/431fcc9e-ef01-455c-b5ca-4b91d7534a86_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Unlocking[[:space:]]Parameter-Efficient[[:space:]]Fine-Tuning[[:space:]]for[[:space:]]Low-Resource[[:space:]]Language[[:space:]]Translation/b6edb24b-b744-48b9-acfb-b6fb0ec917f7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/VLUE_[[:space:]]A[[:space:]]New[[:space:]]Benchmark[[:space:]]and[[:space:]]Multi-task[[:space:]]Knowledge[[:space:]]Transfer[[:space:]]Learning[[:space:]]for[[:space:]]Vietnamese[[:space:]]Natural[[:space:]]Language[[:space:]]Understanding/938312d9-e38f-4ede-8adc-d7aae9e70062_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/VOLTA_[[:space:]]Improving[[:space:]]Generative[[:space:]]Diversity[[:space:]]by[[:space:]]Variational[[:space:]]Mutual[[:space:]]Information[[:space:]]Maximizing[[:space:]]Autoencoder/1bb2cb4b-4ef7-4572-9b73-75569b0d5abc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/ViGLUE_[[:space:]]A[[:space:]]Vietnamese[[:space:]]General[[:space:]]Language[[:space:]]Understanding[[:space:]]Benchmark[[:space:]]and[[:space:]]Analysis[[:space:]]of[[:space:]]Vietnamese[[:space:]]Language[[:space:]]Models/e30a3fa9-00e7-4723-b4e7-3272f33c3bbf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Visual[[:space:]]Enhanced[[:space:]]Entity-Level[[:space:]]Interaction[[:space:]]Network[[:space:]]for[[:space:]]Multimodal[[:space:]]Summarization/bf611172-49d7-46a6-a009-aafdc8c2a682_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/WaterJudge_[[:space:]]Quality-Detection[[:space:]]Trade-off[[:space:]]when[[:space:]]Watermarking[[:space:]]Large[[:space:]]Language[[:space:]]Models/0652d0ca-9731-49c3-8c6c-d345c1407f55_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/WebWISE_[[:space:]]Unlocking[[:space:]]Web[[:space:]]Interface[[:space:]]Control[[:space:]]for[[:space:]]LLMs[[:space:]]via[[:space:]]Sequential[[:space:]]Exploration/5781e518-888b-49e5-9014-d417e4f7b182_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Weight-Inherited[[:space:]]Distillation[[:space:]]for[[:space:]]Task-Agnostic[[:space:]]BERT[[:space:]]Compression/c9b77bb9-f16a-4b6e-bb32-fbd7a0628ba8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/What[[:space:]]Makes[[:space:]]Math[[:space:]]Word[[:space:]]Problems[[:space:]]Challenging[[:space:]]for[[:space:]]LLMs_/3f2cbc27-439d-4ea9-b505-75ecb2c5bba4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/When[[:space:]]Hindsight[[:space:]]is[[:space:]]Not[[:space:]]20_20_[[:space:]]Testing[[:space:]]Limits[[:space:]]on[[:space:]]Reflective[[:space:]]Thinking[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/37253066-e433-497d-bef5-b4608fdeadd8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/When[[:space:]]Quantization[[:space:]]Affects[[:space:]]Confidence[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models_/70047ee9-db75-4cb2-9df3-25afd102acf5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Which[[:space:]]Modality[[:space:]]should[[:space:]]I[[:space:]]use[[:space:]]-[[:space:]]Text,[[:space:]]Motif,[[:space:]]or[[:space:]]Image_[[:space:]]_[[:space:]]Understanding[[:space:]]Graphs[[:space:]]with[[:space:]]Large[[:space:]]Language[[:space:]]Models/77ceabd5-a8d0-43df-bc96-dae4647bee4e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Why[[:space:]]So[[:space:]]Gullible_[[:space:]]Enhancing[[:space:]]the[[:space:]]Robustness[[:space:]]of[[:space:]]Retrieval-Augmented[[:space:]]Models[[:space:]]against[[:space:]]Counterfactual[[:space:]]Noise/864e71f7-42b5-4566-ba87-a3a67a628b70_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/X-LLaVA_[[:space:]]Optimizing[[:space:]]Bilingual[[:space:]]Large[[:space:]]Vision-Language[[:space:]]Alignment/27ed7f81-73c4-4094-b306-f72f91e25cb0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/Z-GMOT_[[:space:]]Zero-shot[[:space:]]Generic[[:space:]]Multiple[[:space:]]Object[[:space:]]Tracking/5880eb2c-990b-4669-94d9-70b4d61873fc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/ZSEE_[[:space:]]A[[:space:]]Dataset[[:space:]]based[[:space:]]on[[:space:]]Zeolite[[:space:]]Synthesis[[:space:]]Event[[:space:]]Extraction[[:space:]]for[[:space:]]Automated[[:space:]]Synthesis[[:space:]]Platform/4b2b99e2-5ca7-4456-a08f-67fb914e7973_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/i-Code[[:space:]]V2_[[:space:]]An[[:space:]]Autoregressive[[:space:]]Generation[[:space:]]Framework[[:space:]]over[[:space:]]Vision,[[:space:]]Language,[[:space:]]and[[:space:]]Speech[[:space:]]Data/8827dfda-97f2-4efc-9a81-adfd59822051_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/mOthello_[[:space:]]When[[:space:]]Do[[:space:]]Cross-Lingual[[:space:]]Representation[[:space:]]Alignment[[:space:]]and[[:space:]]Cross-Lingual[[:space:]]Transfer[[:space:]]Emerge[[:space:]]in[[:space:]]Multilingual[[:space:]]Models_/2dfaac5c-a455-474b-bd0a-2c8a8afff933_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2024/“Tell[[:space:]]me[[:space:]]who[[:space:]]you[[:space:]]are[[:space:]]and[[:space:]]I[[:space:]]tell[[:space:]]you[[:space:]]how[[:space:]]you[[:space:]]argue”_[[:space:]]Predicting[[:space:]]Stances[[:space:]]and[[:space:]]Arguments[[:space:]]for[[:space:]]Stakeholder[[:space:]]Groups/18cf9790-d8a5-43fe-b1d5-81f2f34255e6_origin.pdf filter=lfs diff=lfs merge=lfs -text
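The `[[:space:]]` sequences above are how Git escapes literal spaces in `.gitattributes` patterns: a POSIX character class that matches whitespace, so paths with spaces in their names still match the LFS rules. As a minimal sketch (the helper name `to_gitattributes_pattern` is hypothetical, not part of Git), producing such a pattern from a plain path could look like:

```python
# Hypothetical helper: escape literal spaces in a repository path
# using the POSIX [[:space:]] character class, as seen in the
# .gitattributes rules above.
def to_gitattributes_pattern(path: str) -> str:
    """Replace each literal space with the [[:space:]] class."""
    return path.replace(" ", "[[:space:]]")

pattern = to_gitattributes_pattern("2024/Time Machine GPT/origin.pdf")
print(pattern)  # 2024/Time[[:space:]]Machine[[:space:]]GPT/origin.pdf
```

Each rule then ends with `filter=lfs diff=lfs merge=lfs -text`, which tells Git to route the matched file through the LFS clean/smudge filters and treat it as binary.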
2024/A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning/43284e62-f62c-4088-9544-f4ff108eb4c5_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning/43284e62-f62c-4088-9544-f4ff108eb4c5_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning/43284e62-f62c-4088-9544-f4ff108eb4c5_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2568d4150486082b20a837ba744eea5bf0e01bd10a8e1fdd2d803c70785fa587
+ size 664161
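What Git checks in for each PDF is not the file itself but a small LFS pointer like the one above: three `key value` lines giving the spec version, the SHA-256 object ID, and the byte size. A minimal sketch of reading such a pointer into a dict (the function name `parse_lfs_pointer` is an illustrative choice, not a Git LFS API):

```python
# Minimal sketch: parse a Git LFS pointer file (key-value lines,
# separated by a single space) into a dict.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:2568d4150486082b20a837ba744eea5bf0e01bd10a8e1fdd2d803c70785fa587\n"
    "size 664161\n"
)
info = parse_lfs_pointer(pointer)
print(info["size"])  # 664161
```

The `oid` lets the LFS client fetch the real blob from the remote object store at checkout time, which is why the rendered diff shows only `+3` lines for each multi-megabyte PDF.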
2024/A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:841f62fd5c5ebb0d0040571a54788eadfea3e1e37430853f62d46b1393718b4a
+ size 4677661
2024/A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models/8f8aca71-294d-4b9e-9495-82fce99e28eb_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models/8f8aca71-294d-4b9e-9495-82fce99e28eb_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models/8f8aca71-294d-4b9e-9495-82fce99e28eb_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a413d71027c1e1bd6506e9d40da6f37706077e5a988ddeb9e95a1319419ee34a
+ size 318626
2024/A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models/full.md ADDED
@@ -0,0 +1,306 @@
1
+ # A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models
2
+
3
+ Xingmeng Zhao, Ali Niazi, and Anthony Rios
4
+
5
+ Department of Information Systems and Cyber Security
6
+
7
+ The University of Texas at San Antonio
8
+
9
+ {xingmeng.zhao,ali.niazi,anthony.rios}@utsa.edu
10
+
11
+ # Abstract
12
+
13
+ Chemical named entity recognition (NER) models are used in many downstream tasks, from adverse drug reaction identification to pharmacoepidemiology. However, it is unknown whether these models work the same for everyone. Performance disparities can potentially cause harm rather than the intended good. This paper assesses gender-related performance disparities in chemical NER systems. We develop a framework for measuring gender bias in chemical NER models using synthetic data and a newly annotated corpus of over 92,405 words with self-identified gender information from Reddit. Our evaluation of multiple biomedical NER models reveals evident biases. For instance, synthetic data suggests that female names are frequently misclassified as chemicals, especially when it comes to brand name mentions. Additionally, we observe performance disparities between female- and male-associated data in both datasets. Many systems fail to detect contraceptives such as birth control. Our findings emphasize the biases in chemical NER models, urging practitioners to account for these biases in downstream applications.
14
+
15
+ # 1 INTRODUCTION
16
+
17
+ Chemical named entity recognition (NER) is the extraction of chemical mentions (e.g., drug names) from the text. Chemical NER is essential in many downstream tasks, from pharmacovigilance (O'Connor et al., 2014) to facilitating drug discovery by mining biomedical research articles (Agarwal and Searls, 2008). For instance, Chemical NER systems are the first step in pipelines developed to mine adverse drug reactions (ADRs) (Farrugia and Abela, 2020; Mammiet al., 2013). However, it is unknown whether these systems perform the same for everyone. Who benefits from these systems, and who can be harmed? In this paper, we present a comprehensive analysis of
18
+
19
+ gender-related performance disparities of Chemical NER Systems.
20
+
21
+ Performance disparities have recently received substantial attention in the field of NLP. For example, there are differences in text classification models across sub-populations such as gender, race, and minority dialects (Dixon et al., 2018; Park et al., 2018; Badjatiya et al., 2019; Rios, 2020; Lwowski and Rios, 2021; Mozafari et al., 2020). Performance disparities can manifest in multiple parts of NLP systems, including the pre-trained models (e.g., word embeddings) and their downstream applications (Zhao et al., 2019; Goldfarb-Tarrant et al., 2021; Zhao et al., 2017). While previous research has explored these disparities for NER systems, the focus has been largely on synthetic data and non-biomedical NER applications (Mehrabi et al., 2020). Our study addresses this gap by providing a comprehensive examination of gender-related performance disparities in Chemical NER, focusing on both synthetic and real-world data.
22
+
23
This paper is most similar to Mehrabi et al. (2020), with two primary distinctions. First, our focus is on chemical NER, a less studied area of biomedical NLP despite its major bias implications. Second, while Mehrabi et al. (2020) uses synthetic data and templates (e.g., NAME in LOCATION) for bias analysis, we delve deeper into potential sources of bias, including an analysis of how morphological patterns interact with bias. For instance, Lieven et al. (2015) highlighted a market preference for linguistically feminine brand names, leading drug companies to adopt such naming conventions. These patterns in training data can inadvertently cause models to misclassify female names as chemicals.
We also examine real-world data, looking at the performance of chemical NER systems on posts written by people who identify as male or female. For instance, Sundbom et al. (2017) shows that women are more frequently prescribed antidepressants than men. Other studies, like Riley III et al. (1998), reveal gender differences in pain sensitivity and opioid prescriptions, with women receiving opioids twice as often. If chemical NER models struggle to detect the drugs one group mentions most often, this may cause gender-specific biases in their performance. Our analysis identifies some of these patterns in real data.
Overall, this paper presents a dual approach: we explore template data, but we also assemble and annotate a novel real-world dataset with self-identified gender information. Synthetic data allows us to target specific biases in the models (e.g., morphological issues). Likewise, we believe exploring data from people who have self-identified their demographic information will provide a more realistic understanding of how these models will perform based on how people write and what they write about.
Our main contributions are two-fold:
1. We introduce a novel annotated Chemical NER dataset for social media data. Moreover, the dataset contains self-identified gender information that can be used to measure gender bias in Chemical NER models. To the best of our knowledge, this is the first Reddit-based Chemical NER dataset and the first Chemical NER dataset with self-identified gender information.
2. We provide a comprehensive testing framework for gender bias in Chemical NER using both synthetic and real-world data. To the best of our knowledge, ours is the first bias analysis of chemical NER models, which allows a better understanding of modern chemical NER techniques.
# 2 RELATED WORK
Prior work has extensively curated labeled data for chemical NER and developed domain-specific models. For example, the CHEMDNER corpus (Krallinger et al., 2015) was created for the 2014 BioCreative shared task on chemical extraction from text. Likewise, the CDR dataset (Li et al., 2016) was developed to detect chemical-disease relations for the 2015 BioCreative shared task. Researchers recognize the importance of these systems and are working to make them as fair and accurate as possible. Similar to traditional NER tasks (Li et al., 2020), a broad range of approaches has been proposed to detect chemicals (Rocktäschel et al., 2012; Chiu et al., 2021; Lee et al., 2020; Sun et al., 2021; López-Úbeda et al., 2021; Weber et al., 2021), from traditional conditional random fields to deep learning methods. Many recent neural network-based advances fall into three main groups: word-, character-, and contextual-embedding-based models. For instance, Lee et al. (2020) trained a biomedical-specific BERT model that improved on many prior state-of-the-art results. HunFlair (Weber et al., 2021) introduced a method that combines word, contextual, and character embeddings in a unified framework to achieve state-of-the-art performance. In this paper, we evaluate several state-of-the-art systems. In particular, we focus on systems that use word embeddings, sub-word embeddings, and character embeddings, which allows us to understand the impact of the morphological features of chemical names on gender bias.
Several previous works have measured and highlighted bias in different NLP tasks. For instance, Sap et al. (2019) measures the bias of offensive language detection models on African American English. Likewise, Park et al. (2018) measures the gender bias of abusive language detection models and evaluates methods such as word embedding debiasing and data augmentation to improve biased models. Davidson et al. (2019) shows racial and ethnic bias in online hate speech identification, finding that tweets in a black-aligned corpus are more likely to be labeled as hate speech. Gaut et al. (2020) creates the WikiGenderBias dataset to evaluate gender bias in relation extraction (RE) models, confirming that RE systems behave differently when the target entities are of different genders. Cirillo et al. (2020) demonstrate that biases in biomedical applications can stem from various sources, such as skewed diagnoses resulting from clinical depression scales that measure symptoms more prevalent in women, potentially leading to a higher reported incidence of depression among this group (Martin et al., 2013). Other sources include the underrepresentation of minority populations such as pregnant women (Organization and for Women's Health in Society, 2009), non-representative samples in AI training data, and inherent algorithmic discrimination, all of which can contribute to inaccurate and unfair results.
Recent research has shown that although Large Language Models (LLMs) are now increasingly being used for tasks such as named entity recognition (Ashok and Lipton, 2023; Wang et al., 2023) and relation classification (Wan et al., 2023), they also have the potential to reinforce or exacerbate gender biases, which emphasizes the importance of careful deployment to prevent the reinforcement of stereotypes (Kotek et al., 2023).
Overall, several metrics have been proposed to measure gender bias. One of the most commonly used approaches measures bias by examining model performance disparities on male and female data points (Kiritchenko and Mohammad, 2018). Performance disparities have been observed across a wide array of NLP tasks, such as detecting virus-related text (Lwowski and Rios, 2021), language generation (Sheng et al., 2019), coreference resolution (Zhao et al., 2018), named entity recognition (Mehrabi et al., 2020), and machine translation (Font and Costa-jussà, 2019). Most related to this study, researchers have shown that traditional NER systems (i.e., those that detect people, locations, and organizations) are biased with respect to gender (Mehrabi et al., 2020). Specifically, Mehrabi et al. (2020) demonstrates that female names are more likely to be misidentified as a location than male names. This stream of research underscores the importance of our investigation into performance disparities in NLP.
Finally, while not directly studied in prior NER experiments, it is important to discuss some background on the morphological elements of chemical names. Morphological elements representing masculinity or femininity are frequently used in chemical naming conventions. According to Lieven et al. (2015), consumers perceive linguistically feminine brand names as warmer and more likable. For instance, adding a diminutive suffix to the masculine form of a name usually feminizes it: masculine names such as Robert, Julius, Antonio, and Carolus (more commonly Charles today) are feminized by adding the suffixes "a", "ia", "ina", or "ine" to generate Roberta, Julia, Antonia, and Caroline, respectively. The suffixes "ia" and "a" are commonly used for inorganic oxides such as magnesia, zirconia, silica, and titania (Hepler-Smith, 2015). Likewise, "ine" is used as the suffix in many organic bases and base substances such as quinine, morphine, guanidine, xanthine, pyrimidine, and pyridine. Hence, while these practices were not "biased" in their original usage, they can potentially impact model performance (e.g., feminine names can be detected as chemicals). Therefore, these patterns can cause biased models. As part of our approach to investigating this potential source of bias, we propose using synthetic data to quantify this phenomenon.

<table><tr><td></td><td># of Chems.</td><td># Sentences</td><td># Words</td></tr><tr><td>CDR</td><td>4,409</td><td>14,306</td><td>346,001</td></tr><tr><td>CHEMDNER</td><td>84,355</td><td>87,125</td><td>2,431,247</td></tr><tr><td>CHEBI</td><td>24,121</td><td>12,913</td><td>423,577</td></tr><tr><td>AskDoc MALE</td><td>1,501</td><td>2,862</td><td>52,221</td></tr><tr><td>AskDoc FEMALE</td><td>1,774</td><td>2,151</td><td>40,184</td></tr><tr><td>AskDoc ALL</td><td>3,275</td><td>5,013</td><td>92,405</td></tr><tr><td>Synthetic MALE</td><td>2,800,000</td><td>2,800,000</td><td>25,760,000</td></tr><tr><td>Synthetic FEMALE</td><td>2,800,000</td><td>2,800,000</td><td>25,760,000</td></tr><tr><td>Synthetic ALL</td><td>5,600,000</td><td>5,600,000</td><td>51,520,000</td></tr></table>

Table 1: Dataset statistics.
# 3 DATASETS
We use five main datasets in our experiments: three are publicly released datasets based on PubMed (CDR (Li et al., 2016), CHEMDNER (Krallinger et al., 2015), and CHEBI (Shardlow et al., 2018)), and two are newly curated datasets, one using social media data and another based on templates. Table 1 provides their statistics. We selected the PubMed datasets for their prominence in chemical NER research, while the r/AskDocs subreddit was chosen for its large community, diverse health discussions, and consistent gender identification format, such as "I [25 M]". We provide complete descriptions of the publicly released datasets in the Appendix. In this section, we focus on describing the newly collected and annotated data.
Synthetic (Template) Data We designed a new synthetic dataset to quantify gender bias in Chemical NER models. Intuitively, the synthetic dataset is meant to measure two things. First, do gender-related names and pronouns get incorrectly classified as chemicals (i.e., cause false positives)? Second, does the appearance of gender-related names/pronouns impact the prediction of other words (i.e., cause false negatives)? Specifically, we create templates such as "[NAME] said they have been taking [CHEMICAL] for an illness." The "[NAME]" slot is filled with names associated with the male and female genders, based on the 200 most popular baby names provided by the Social Security Administration $^{2}$ . Hence, we refer to these as "gender-related" names in this paper. We recognize that gender is not binary and that names do not equal gender. We also recognize that the names do not accurately capture immigrants. This is a similar framework to that used by Mishra et al. (2020) and other gender bias papers (Kiritchenko and Mohammad, 2018). The "[CHEMICAL]" slot is filled with the chemicals listed in the Unified Medical Language System (UMLS) (Bodenreider, 2004). For example, completed templates include "John said they have been taking citalopram for an illness." and "Karen said they have been taking citalopram for an illness." We created examples using five templates, 200 chemicals, and 200 names for each gender for each decade from 1880 to 2010, generating 200,000 examples per gender for each of the 14 decades. The full list of templates is shown in Table 2. This dataset is only used for evaluation.

<table><tr><td>Templates</td></tr><tr><td>[NAME] said they have been taking [CHEMICAL] for an illness.</td></tr><tr><td>Did you hear that [NAME] has been using [CHEMICAL].</td></tr><tr><td>[CHEMICAL] has really been harming [NAME], I hope they stop.</td></tr><tr><td>I think [NAME] is addicted to [CHEMICAL].</td></tr><tr><td>[NAME], please stop taking [CHEMICAL], it is bad for you.</td></tr></table>

Table 2: Templates used to create the synthetic dataset.
AskDocs We develop a new corpus using data from the Reddit community r/AskDocs, which provides a platform for peer-to-peer and patient-provider interactions on social media, where users ask medical-related questions. The providers are generally verified medical professionals. We collected all the posts from the community with self-identified gender mentions. To identify self-identified gender, we use a simple regular expression that looks for mentions of "I" or "My" followed by gender, and optionally age, e.g., "I [F34]", "My (23F)", "I [M]". Next, following general annotation recommendations for NLP (Pustejovsky and Stubbs, 2012), the annotation process was completed in two stages to increase the reliability of the labels. First, two graduate students annotated chemicals in the dataset, resulting in an inter-annotator agreement of .874, a similar agreement score to those of CDR and CHEMDNER. Second, a graduate student manually reviewed all disagreements to adjudicate the label and generate the gold standard. All students followed the annotation guidelines developed for the CHEMDNER corpus.
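A minimal version of such a pattern might look like the following. This is our illustrative reconstruction, not necessarily the exact expression used in the study:

```python
import re

# Matches self-identifications like "I [F34]", "My (23F)", or "I [M]":
# "I" or "My", then a bracketed/parenthesised token containing a gender
# letter (M/F), with an optional 1-2 digit age before or after it.
GENDER_RE = re.compile(
    r"\b(?:I|My)\s*[\[\(]\s*(?:(?P<age1>\d{1,2})\s*)?(?P<gender>[MF])"
    r"(?:\s*(?P<age2>\d{1,2}))?\s*[\]\)]",
    re.IGNORECASE,
)

def extract_gender(text):
    """Return (gender, age) from a self-identification, or None if absent."""
    m = GENDER_RE.search(text)
    if not m:
        return None
    age = m.group("age1") or m.group("age2")
    return m.group("gender").upper(), int(age) if age else None
```

For example, `extract_gender("My (23F) sister has a rash")` yields `("F", 23)`, while a post with no bracketed self-identification yields `None`.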
In contrast to the synthetic dataset, the real-world data allows us to measure biases arising from differences in text content across posts with different self-identified gender mentions.
# 4 METHODS
The goal of NER is to classify words into a sequence of labels. Formally, given an input sequence $\mathcal{X} = [x_1,x_2,\dots ,x_N]$ with $N$ tokens, the goal of NER is to output the corresponding label sequence $\mathcal{Y} = [y_{1},y_{2},\ldots ,y_{N}]$ of the same length, thus modeling the probability $p(\mathcal{Y}\mid \mathcal{X})$. For this task, we conducted an experiment evaluating out-of-domain models on the AskDoc corpus. Specifically, models were trained and optimized on the CHEMDNER and CDR datasets and then applied to the AskDoc dataset. All models are evaluated using precision, recall, and F1. To measure bias, we use precision, recall, and F1 differences (Czarnowska et al., 2021). Specifically, let $m$ be the male group's performance metric (e.g., F1) and $f$ the female group's metric; bias is measured using the difference $f - m$.
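The difference-based bias measure can be sketched directly from per-group confusion counts. The token-level counts below are a simplification of the span-level NER evaluation actually used, and the numbers are hypothetical:

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from confusion counts."""
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

def gender_gap(counts_f, counts_m):
    """Female-minus-male differences for precision, recall, and F1."""
    pf, rf, ff = prf(*counts_f)
    pm, rm, fm = prf(*counts_m)
    return pf - pm, rf - rm, ff - fm

# Hypothetical (tp, fp, fn) counts per group; negative gaps mean the
# model performs worse on the female group's data.
gaps = gender_gap((80, 20, 20), (90, 10, 10))
```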
# 4.1 MODELS
We evaluate three distinct model types: word embedding models (Mikolov et al., 2013b), Flair embedding models (Akbik et al., 2018), and BERT-based models (Devlin et al., 2019a). While the embeddings vary across model types, the sequence processing component is the same for each method. Specifically, following best practices for state-of-the-art NER models (Akbik et al., 2019a), we use a bidirectional long short-term memory network (Bi-LSTM) (Hochreiter and Schmidhuber, 1997) due to its sequential nature and capability to capture long-term dependencies. Recent research has shown that Bi-LSTM models can produce state-of-the-art performance when combined with contextual embeddings and conditional random fields (CRFs) (Mueller et al., 2020; Veyseh et al., 2022). Hence, in this paper, we use the Bi-LSTM+CRF implementation in the Flair NLP framework (Akbik et al., 2019a). The Bi-LSTM+CRF model is flexible because it can accept arbitrary embeddings as input; it is not constrained to traditional word embeddings (e.g., Word2Vec).
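The CRF layer's decoding step can be illustrated with a toy Viterbi implementation. This is a didactic sketch, not the Flair implementation, and the label set and scores below are made up:

```python
def viterbi(emissions, transitions, labels):
    """Most likely label sequence given per-token emission log-scores and
    label-transition log-scores -- the decoding step of a CRF layer."""
    # emissions: list (one entry per token) of {label: log-score}
    # transitions: {(prev_label, label): log-score}
    best = {lab: emissions[0][lab] for lab in labels}
    backptrs = []
    for em in emissions[1:]:
        new_best, ptr = {}, {}
        for lab in labels:
            prev = max(labels, key=lambda p: best[p] + transitions[(p, lab)])
            new_best[lab] = best[prev] + transitions[(prev, lab)] + em[lab]
            ptr[lab] = prev
        best = new_best
        backptrs.append(ptr)
    last = max(labels, key=best.get)
    path = [last]
    for ptr in reversed(backptrs):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Toy example: "took citalopram daily" with made-up emission scores
labels = ["O", "CHEM"]
emissions = [
    {"O": 1.0, "CHEM": -1.0},   # "took"
    {"O": -1.0, "CHEM": 1.0},   # "citalopram"
    {"O": 1.0, "CHEM": -1.0},   # "daily"
]
transitions = {(p, l): 0.0 for p in labels for l in labels}
```

In the real model, the emission scores come from the Bi-LSTM over the chosen embeddings, and the transition scores are learned jointly with the rest of the network.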
# 4.2 EMBEDDINGS
We explore three sets of embeddings: Word2Vec, Flair, and BERT. For all embeddings, we experiment with domain-specific (e.g., trained on PubMed) and general embeddings (e.g., trained on the Google News corpus). We chose these three embedding types because they cover word-, subword-, and character-level methods. Social media texts are brief and informal, and drugs and chemicals are typically described in informal, nontechnical language, often with spelling errors. These issues are challenging for chemical NER models applied to social media data. Moreover, some medications, like "all-trans-retinoic acid", contain morphologically difficult parts, yet similarly structured phrases still generally represent similar things (Zhang et al., 2021). How we represent words can directly impact performance and bias. We describe each embedding we use below:
Word2Vec. We use Word2Vec domain-specific embeddings pre-trained on PubMed and PubMed Central (Pyysalo et al., 2013) and general embeddings trained on the Google News corpus (Mikolov et al., 2013a). The embeddings are publicly released as part of the Flair package. It is important to note that word embeddings have a major limitation: they use a distinct vector to represent each word and ignore a word's internal structure (morphology). This can result in models that are not particularly good at handling rare or out-of-vocabulary (OOV) words. The growing number of emerging chemicals/drugs with diverse morphological forms makes recognizing chemical entities on social media platforms particularly challenging. Another challenge posed by user-generated content is its unique characteristics and use of informal language: typically short, noisy, sparse, and ambiguous. Hence, we hypothesize that word embeddings will perform worse than the other methods. However, it is unclear how these differences impact bias.
Flair/HunFlair. Weber et al. (2021) and Akbik et al. (2019b) recently proposed Flair contextual string embeddings (a character-level language model). Specifically, we use two versions of the embeddings from the HunFlair extension of the Flair package (Weber et al., 2021). The domain-specific embeddings are pre-trained on a corpus of three million full-text articles from the PubMed Central BioC text mining collection (Comeau et al., 2019) and about twenty-five million abstracts from PubMed. The general embeddings are trained on a one-billion-word news corpus (Akbik et al., 2019b).
Unlike the word embeddings mentioned above, Flair embeddings are contextualized character-level representations, obtained from the hidden states of a bidirectional recurrent neural network (BiRNN). They are trained without any explicit notion of a word; instead, Flair models a word as a sequence of characters. Moreover, these embeddings are determined by the surrounding text, i.e., the same word will have different embeddings depending on its contextual usage. The variant used in this study is the Pooled Flair embedding (Weber et al., 2021; Akbik et al., 2018), and we use both the forward and backward representations returned by the BiRNN. Intuitively, character-level embeddings can potentially improve model predictions through better OOV handling.
(Bio)BERT. We also evaluate two transformer-based embeddings: BERT and BioBERT. Specifically, we use the BERT variant "bert-base-uncased" available in Flair and HuggingFace (Wolf et al., 2020). BERT was pre-trained on the BooksCorpus (800M words) and English Wikipedia (2,500M words) (Devlin et al., 2019b). BioBERT was produced by further pre-training BERT on PubMed (Lee et al., 2020).
BERT embeddings are based on subword tokenization, so BERT can potentially handle OOV words better than word embeddings alone. Intuitively, it sits somewhere between Flair (which generates word embeddings from character representations) and Word2Vec (which independently learns an embedding for each word). Likewise, each word representation is context-dependent; hence, BERT is better at handling word polysemy by capturing word semantics in context.
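To see why subword tokenization can handle unseen chemical names better than whole-word lookup, consider a toy WordPiece-style greedy tokenizer. The vocabulary below is tiny and hypothetical; real BERT vocabularies contain roughly 30k pieces learned from data:

```python
def wordpiece(word, vocab):
    """Greedy longest-match subword split (WordPiece-style); continuation
    pieces carry a '##' prefix. Returns ['[UNK]'] if no split exists."""
    pieces, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while end > start:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]
        pieces.append(piece)
        start = end
    return pieces

# Toy vocabulary: an unseen drug name is split into known fragments
vocab = {"cita", "##lo", "##pram", "morph", "##ine", "aspirin"}
```

With this vocabulary, `wordpiece("citalopram", vocab)` returns `['cita', '##lo', '##pram']`: the model never needs a dedicated vector for the whole drug name, whereas a Word2Vec lookup table would map it to a single unknown token.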
# 5 RESULTS
CDR, CHEMDNER, and CHEBI Results. Table 3 reports the recall, precision, and F1 scores for each embedding type on the CDR, CHEMDNER, and CHEBI datasets. The reported scores are for the best model-hyperparameter combinations on their original validation datasets. Overall, we find that the Flair- and BERT-based methods outperform word embeddings. The BERT embeddings result in the best performance on the CDR dataset, while on the CHEMDNER corpus, the PubMed Flair embeddings outperform the BERT embeddings (.9018 vs. .8938). For CHEBI, the BioBERT embeddings work best (.7780 vs. .7322 and .6372).
<table><tr><td></td><td>Prec.</td><td>Rec.</td><td>F1</td></tr><tr><td>CDR + PubMed Word</td><td>.8962</td><td>.8797</td><td>.8615</td></tr><tr><td>CDR + PubMed Flair</td><td>.9090</td><td>.8984</td><td>.8920</td></tr><tr><td>CDR + BioBERT</td><td>.9030</td><td>.8913</td><td>.8971</td></tr><tr><td>CDR + General Word</td><td>.8046</td><td>.8006</td><td>.8026</td></tr><tr><td>CDR + General Flair</td><td>.8794</td><td>.8580</td><td>.8686</td></tr><tr><td>CDR + BERT</td><td>.9181</td><td>.9174</td><td>.9100</td></tr><tr><td>CHEMDNER + PubMed Word</td><td>.8963</td><td>.8887</td><td>.8846</td></tr><tr><td>CHEMDNER + PubMed Flair</td><td>.9133</td><td>.9112</td><td>.9018</td></tr><tr><td>CHEMDNER + BioBERT</td><td>.9112</td><td>.8861</td><td>.8985</td></tr><tr><td>CHEMDNER + General Word</td><td>.8267</td><td>.7570</td><td>.7903</td></tr><tr><td>CHEMDNER + General Flair</td><td>.8985</td><td>.8696</td><td>.8838</td></tr><tr><td>CHEMDNER + BERT</td><td>.9122</td><td>.8840</td><td>.8938</td></tr><tr><td>CHEBI + PubMed Word</td><td>.7384</td><td>.7123</td><td>.7251</td></tr><tr><td>CHEBI + PubMed Flair</td><td>.8051</td><td>.7384</td><td>.7703</td></tr><tr><td>CHEBI + BioBERT</td><td>.7858</td><td>.7703</td><td>.7780</td></tr><tr><td>CHEBI + General Word</td><td>.5999</td><td>.6793</td><td>.6372</td></tr><tr><td>CHEBI + General Flair</td><td>.7454</td><td>.7196</td><td>.7322</td></tr><tr><td>CHEBI + BERT</td><td>.7740</td><td>.7700</td><td>.7720</td></tr></table>
Table 3: CDR, CHEMDNER, and CHEBI Results.
Synthetic (Template) Results. We evaluated the NER models across multiple datasets and embeddings to assess gender bias, as summarized in Table 4. Specifically, the aggregate measures in the bottom section of Table 4 highlight the overall trends in bias across embedding training data sources (PubMed vs. General) and embedding types (Word, Flair, and BERT). The bias analysis reveals that models generally perform differently on male versus female templates. In particular, the PubMed-trained (including BioBERT) embeddings show an average precision bias of .0242 against female names across all datasets, while the General embeddings exhibit substantially more bias, with an average precision difference of .0407. Moreover, while the average scores for the Word embeddings show less bias, the Flair and (Bio)BERT embeddings indicate more substantial bias in precision and F1 scores. These aggregate measures underscore the pervasive nature of gender bias in NER systems and the importance of addressing it in future work.
Overall, the major source of bias is that female names are being classified as chemicals. Intuitively, the word embeddings are less biased than the Flair and (Bio)BERT-based embeddings because gender-related names are treated independently by word embeddings, or better, do not appear in the embedding vocabulary at all. This is particularly evident in the performance differences between the general word embeddings and the PubMed-based word embeddings. The PubMed embeddings generally do not contain direct mentions of names (e.g., John or Jane); hence, they are generally less biased than the general-domain embeddings.
The finding that female names are classified as chemicals is consistent with prior research on gendered brand naming conventions (Lieven et al., 2015). To further investigate this, we randomly sampled 100 chemicals from all three datasets and counted the brand name mentions. Overall, we found one brand name in the CHEMDNER dataset, 19 in the CDR dataset, and 32 in the ASKDOC dataset, which generally matches the bias differences in Table 4 (i.e., biases are generally worse on the CDR and ASKDOC datasets than on the CHEMDNER dataset).
AskDoc Results. The AskDoc results, shown in Table 5, highlight various biases in chemical NER systems on real-world data. The table presents results from models trained on the CDR, CHEMDNER, and CHEBI datasets using the Word, Flair, and (Bio)BERT embeddings. Again, the embeddings are trained on both general and domain-specific corpora (e.g., PubMed).
For the fine-grained results, we note that bias and performance can vary depending on the specific combination of dataset and embedding type. For the aggregate results, however, we have two major findings. First, general-domain embeddings are more biased when applied to the chemical NER task (e.g., a .0330 vs. .0056 precision difference). This further reinforces the results of the synthetic data study. Second, word embeddings are generally fairer than the Flair and BERT/BioBERT embeddings in terms of precision (.0071 vs. .0156 and .0352) and F1 (.0158 vs. .0242 and .0245).
What does this mean in real-world terms? Considering a sample of 1,000,000 chemical mentions across male and female posts (a relatively small number for social media), a $4\%$ recall difference results in an additional 40,000 false negatives for the female group. For example, there are well-known health disparities between men and women for depression, with absolute differences of less than $3\%$ (Salk et al., 2017). Hence, a $4\%$ recall difference can substantially impact findings if applied researchers or practitioners use out-of-domain models to understand medications for this disease. Such a considerable gap can markedly affect the utility and trustworthiness of these predictive outcomes in practical scenarios.

<table><tr><td rowspan="2">Dataset + Embeddings</td><td colspan="3">Male</td><td colspan="3">Female</td><td colspan="3">Difference</td></tr><tr><td>Prec.</td><td>Rec.</td><td>F1</td><td>Prec.</td><td>Rec.</td><td>F1</td><td>Prec.</td><td>Rec.</td><td>F1</td></tr><tr><td>CDR + PubMed Word</td><td>1</td><td>.8230</td><td>.9029</td><td>1</td><td>.8230</td><td>.9029</td><td>.0000</td><td>.0000</td><td>.0000</td></tr><tr><td>CDR + PubMed Flair</td><td>.9711</td><td>.9486</td><td>.9597</td><td>.9344</td><td>.9494</td><td>.9418</td><td>.0367</td><td>-.0008</td><td>.0179</td></tr><tr><td>CDR + BioBERT</td><td>.8446</td><td>.9044</td><td>.8733</td><td>.7764</td><td>.9036</td><td>.8352</td><td>.0682</td><td>.0007</td><td>.0381</td></tr><tr><td>CDR + General Word</td><td>.9536</td><td>.6756</td><td>.7907</td><td>.8530</td><td>.6756</td><td>.7539</td><td>.1006</td><td>.0000</td><td>.0368</td></tr><tr><td>CDR + General Flair</td><td>.8325</td><td>.9400</td><td>.8827</td><td>.7610</td><td>.9397</td><td>.8408</td><td>.0715</td><td>.0003</td><td>.0419</td></tr><tr><td>CDR + BERT</td><td>.9867</td><td>.8493</td><td>.9128</td><td>.9728</td><td>.8444</td><td>.9041</td><td>.0138</td><td>.0048</td><td>.0087</td></tr><tr><td>CHEMDNER + PubMed Word</td><td>.9990</td><td>.8625</td><td>.9257</td><td>.9968</td><td>.8622</td><td>.9246</td><td>.0021</td><td>.0003</td><td>.0011</td></tr><tr><td>CHEMDNER + PubMed Flair</td><td>.9982</td><td>.8836</td><td>.9374</td><td>.9885</td><td>.8852</td><td>.9340</td><td>.0097</td><td>-.007</td><td>.0034</td></tr><tr><td>CHEMDNER + BioBERT</td><td>.8847</td><td>.8968</td><td>.8907</td><td>.8625</td><td>.8963</td><td>.8790</td><td>.0222</td><td>.0005</td><td>.0116</td></tr><tr><td>CHEMDNER + General Word</td><td>.9614</td><td>.1966</td><td>.3264</td><td>.9311</td><td>.1957</td><td>.3233</td><td>.0302</td><td>.0009</td><td>.0030</td></tr><tr><td>CHEMDNER + General Flair</td><td>.9559</td><td>.8437</td><td>.8963</td><td>.9105</td><td>.8433</td><td>.8755</td><td>.0454</td><td>.0004</td><td>.0208</td></tr><tr><td>CHEMDNER + BERT</td><td>.9913</td><td>.8768</td><td>.9306</td><td>.9680</td><td>.8762</td><td>.9198</td><td>.0233</td><td>-.0006</td><td>.0107</td></tr><tr><td>ASKDOC + PubMed Word</td><td>.9739</td><td>.9330</td><td>.9530</td><td>.9739</td><td>.9330</td><td>.9530</td><td>.0000</td><td>.0000</td><td>.0000</td></tr><tr><td>ASKDOC + PubMed Flair</td><td>.8833</td><td>.9523</td><td>.9164</td><td>.8278</td><td>.9519</td><td>.8852</td><td>.0555</td><td>.0005</td><td>.0312</td></tr><tr><td>ASKDOC + BioBERT</td><td>.8026</td><td>.9444</td><td>.8677</td><td>.7703</td><td>.9443</td><td>.8483</td><td>.0323</td><td>.0001</td><td>.0194</td></tr><tr><td>ASKDOC + General Word</td><td>.9681</td><td>.6607</td><td>.7854</td><td>.9711</td><td>.6604</td><td>.7862</td><td>-.0030</td><td>.0003</td><td>-.0008</td></tr><tr><td>ASKDOC + General Flair</td><td>.8707</td><td>.9491</td><td>.9079</td><td>.8166</td><td>.9468</td><td>.8765</td><td>.0542</td><td>.0023</td><td>.0315</td></tr><tr><td>ASKDOC + BERT</td><td>.9394</td><td>.9288</td><td>.9340</td><td>.8967</td><td>.9282</td><td>.9121</td><td>.0427</td><td>.0006</td><td>.0220</td></tr><tr><td>CHEBI + PubMed Word</td><td>.9999</td><td>.8758</td><td>.9337</td><td>.9979</td><td>.8715</td><td>.9305</td><td>.0019</td><td>.0042</td><td>.0033</td></tr><tr><td>CHEBI + PubMed Flair</td><td>.9689</td><td>.9016</td><td>.9340</td><td>.9545</td><td>.9031</td><td>.9281</td><td>.0144</td><td>-.0015</td><td>.0060</td></tr><tr><td>CHEBI + BioBERT</td><td>.9170</td><td>.8673</td><td>.8914</td><td>.8690</td><td>.8689</td><td>.8690</td><td>.0480</td><td>-.0016</td><td>.0225</td></tr><tr><td>CHEBI + General Word</td><td>.9538</td><td>.5073</td><td>.6620</td><td>.9147</td><td>.4956</td><td>.6424</td><td>.0391</td><td>.0118</td><td>.0196</td></tr><tr><td>CHEBI + General Flair</td><td>.9832</td><td>.8720</td><td>.9242</td><td>.9677</td><td>.8701</td><td>.9163</td><td>.0155</td><td>.0019</td><td>.0079</td></tr><tr><td>CHEBI + BERT</td><td>.9779</td><td>.8892</td><td>.9314</td><td>.9223</td><td>.8882</td><td>.9048</td><td>.0556</td><td>.0011</td><td>.0266</td></tr><tr><td colspan="10">Aggregate Measures</td></tr><tr><td>AVERAGE PubMed/BioBERT</td><td>.9370</td><td>.8994</td><td>.9155</td><td>.9126</td><td>.8994</td><td>.9026</td><td>.0242</td><td>.0002</td><td>.0129</td></tr><tr><td>AVERAGE General</td><td>.9479</td><td>.7658</td><td>.8238</td><td>.9071</td><td>.7637</td><td>.8047</td><td>.0407</td><td>.0020</td><td>.0191</td></tr><tr><td>AVERAGE Word</td><td>.9763</td><td>.6919</td><td>.7850</td><td>.9548</td><td>.6897</td><td>.7771</td><td>.0214</td><td>.0022</td><td>.0079</td></tr><tr><td>AVERAGE Flair</td><td>.9329</td><td>.9114</td><td>.9199</td><td>.8951</td><td>.9112</td><td>.8998</td><td>.0378</td><td>-.0002</td><td>.0201</td></tr><tr><td>AVERAGE (Bio)BERT</td><td>.9181</td><td>.8946</td><td>.9040</td><td>.8797</td><td>.8938</td><td>.8840</td><td>.0382</td><td>.0011</td><td>.0199</td></tr></table>

Table 4: Synthetic (Template) Data Results. We **bold** the more biased aggregate measures and all differences greater than .01 to easily read the main findings.
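The back-of-the-envelope impact estimate from the preceding discussion can be checked directly (the mention count is the hypothetical figure from the text):

```python
# Hypothetical scenario from the text: 1,000,000 chemical mentions per
# gender group and a 4% recall difference between groups.
mentions_per_group = 1_000_000
recall_gap = 0.04
extra_false_negatives = round(mentions_per_group * recall_gap)
# => 40,000 additional missed chemical mentions for the disadvantaged group
```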
AskDoc Error Analysis. Our experiments show that chemical NER systems are biased. However, what specifically is causing the errors? For the synthetic data, the answer is gender-related names. To understand the errors in the AskDoc data, we analyzed the errors made by the best NER models trained on the out-of-domain corpora (CHEMDNER and CDR) and tested on the male and female splits of the AskDocs corpus. In Figure 1, we report the ratio of false negatives for different categories of drugs/chemicals. For every false negative made by the top models of each dataset-model combination, we manually categorized them into a general chemical class (e.g., Contraceptives, Analgesics/Pain Killers, and Stimulants). Formally, let $FN_{m}^{k}$ represent the number of false negatives for the male group in chemical category $k$, and $FN_{f}^{k}$ the corresponding count for the female group.
![](images/05fb9e7d1472480f9eb95f3ab2ed1b0cc934f6ff9278399d5fc6c3dfd20c27dd.jpg)
Figure 1: Ratio of false negatives for various drug categories. The ratio is shown next to each bar. For female-leaning errors, the female false negative count $(FN_{f}^{k})$ is in the numerator; for male-leaning errors, the male false negative count $(FN_{m}^{k})$ is in the numerator.
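The per-category ratios reported in Figure 1 can be computed as in the sketch below. The counts are hypothetical, and the paper's exact aggregation may differ:

```python
def fn_leaning(fn_female, fn_male):
    """Leaning label and ratio for one drug category, with the larger
    false-negative count in the numerator (as in Figure 1's bars).
    Assumes both counts are nonzero."""
    if fn_female >= fn_male:
        return "female-leaning", fn_female / fn_male
    return "male-leaning", fn_male / fn_female

# Hypothetical counts of missed contraceptive mentions per group
leaning, ratio = fn_leaning(fn_female=30, fn_male=10)
```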
+ <table><tr><td rowspan="2">Dataset + Embeddings</td><td colspan="3">Male</td><td colspan="3">Female</td><td colspan="3">Difference</td></tr><tr><td>Prec.</td><td>Rec.</td><td>F1</td><td>Prec.</td><td>Rec.</td><td>F1</td><td>Prec.</td><td>Rec.</td><td>F1</td></tr><tr><td>CDR + PubMed Word</td><td>.8375</td><td>.6023</td><td>.7007</td><td>.8206</td><td>.6249</td><td>.7095</td><td>-.0169</td><td>.0226</td><td>.0088</td></tr><tr><td>CDR + PubMed Flair</td><td>.8614</td><td>.6160</td><td>.7183</td><td>.8778</td><td>.6702</td><td>.7601</td><td>.0164</td><td>.0542</td><td>.0418</td></tr><tr><td>CDR + BioBERT</td><td>.8303</td><td>.6352</td><td>.7198</td><td>.8042</td><td>.6693</td><td>.7306</td><td>-.0261</td><td>.0341</td><td>.0108</td></tr><tr><td>CDR + General Word</td><td>.7538</td><td>.6724</td><td>.7108</td><td>.7489</td><td>.6986</td><td>.7229</td><td>-.0049</td><td>.0262</td><td>.0121</td></tr><tr><td>CDR + General Flair</td><td>.8479</td><td>.6501</td><td>.7359</td><td>.8542</td><td>.6707</td><td>.7514</td><td>.0063</td><td>.0206</td><td>.0155</td></tr><tr><td>CDR + BERT</td><td>.8742</td><td>.6453</td><td>.7425</td><td>.8638</td><td>.6589</td><td>.7475</td><td>-.0104</td><td>.0136</td><td>.0050</td></tr><tr><td>CHEMDNER + PubMed Word</td><td>.8057</td><td>.5966</td><td>.6855</td><td>.8158</td><td>.6049</td><td>.6947</td><td>.0101</td><td>.0083</td><td>.0092</td></tr><tr><td>CHEMDNER + PubMed Flair</td><td>.8891</td><td>.6155</td><td>.7274</td><td>.8871</td><td>.6282</td><td>.7356</td><td>-.0020</td><td>.0127</td><td>.0082</td></tr><tr><td>CHEMDNER + BioBERT</td><td>.8537</td><td>.6238</td><td>.7208</td><td>.8735</td><td>.6434</td><td>.7410</td><td>.0198</td><td>.0196</td><td>.0202</td></tr><tr><td>CHEMDNER + General Word</td><td>.7490</td><td>.5546</td><td>.6373</td><td>.7975</td><td>.5842</td><td>.6743</td><td>.0485</td><td>.0296</td><td>.0370</td></tr><tr><td>CHEMDNER + General Flair</td><td>.8159</td><td>.5678</td><td>.6696</td><td>.8821</td><td>.6021</td><td>.7157</td><td>.0662</td><td>.0343</td><td>.0461</td></tr><tr><td>CHEMDNER + BERT</td><td>.7165</td><td>.6315</td><td>.6713</td><td>.8309</td><td>.6349</td><td>.7198</td><td>.1144</td><td>.0034</td><td>.0485</td></tr><tr><td>CHEBI + PubMed Word</td><td>.7574</td><td>.5998</td><td>.6694</td><td>.7548</td><td>.6287</td><td>.6860</td><td>-.0026</td><td>.0289</td><td>.0166</td></tr><tr><td>CHEBI + PubMed Flair</td><td>.7540</td><td>.6415</td><td>.6932</td><td>.7571</td><td>.6740</td><td>.7131</td><td>.0031</td><td>.0325</td><td>.0199</td></tr><tr><td>CHEBI + BioBERT</td><td>.6896</td><td>.5969</td><td>.6399</td><td>.7380</td><td>.6148</td><td>.6708</td><td>.0484</td><td>.0179</td><td>.0309</td></tr><tr><td>CHEBI + General Word</td><td>.6047</td><td>.6541</td><td>.6284</td><td>.6132</td><td>.6687</td><td>.6397</td><td>.0085</td><td>.0146</td><td>.0113</td></tr><tr><td>CHEBI + General Flair</td><td>.6066</td><td>.5775</td><td>.5917</td><td>.6103</td><td>.6001</td><td>.6052</td><td>.0037</td><td>.0226</td><td>.0135</td></tr><tr><td>CHEBI + BERT</td><td>.6274</td><td>.6478</td><td>.6374</td><td>.6923</td><td>.6467</td><td>.6687</td><td>.0649</td><td>-.0011</td><td>.0313</td></tr><tr><td colspan="10">Aggregate Measures</td></tr><tr><td>AVERAGE PubMed/BioBERT</td><td>.8087</td><td>.6142</td><td>.6972</td><td>.8143</td><td>.6398</td><td>.7157</td><td>.0056</td><td>.0256</td><td>.0185</td></tr><tr><td>AVERAGE General</td><td>.7329</td><td>.6223</td><td>.6694</td><td>.7659</td><td>.6405</td><td>.6939</td><td>.0330</td><td>.0182</td><td>.0245</td></tr><tr><td>AVERAGE Word</td><td>.7514</td><td>.6133</td><td>.6720</td><td>.7585</td><td>.6350</td><td>.6879</td><td>.0071</td><td>.0217</td><td>.0158</td></tr><tr><td>AVERAGE Flair</td><td>.7958</td><td>.6114</td><td>.6894</td><td>.8114</td><td>.6409</td><td>.7135</td><td>.0156</td><td>.0295</td><td>.0242</td></tr><tr><td>AVERAGE (Bio)BERT</td><td>.7653</td><td>.6301</td><td>.6886</td><td>.8005</td><td>.6447</td><td>.7131</td><td>.0352</td><td>.0146</td><td>.0245</td></tr></table>
+
+ Table 5: AskDoc results. We bold the more biased aggregate measures and all differences greater than .01 to make the main findings easy to read.
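+ The Difference columns in Table 5 are the female score minus the male score, so positive values indicate the model scores higher on the female dataset. As a minimal check (the helper name is ours, not from the paper):

```python
def gender_gap(male_score: float, female_score: float) -> float:
    """Difference column of Table 5: female score minus male score.

    Positive values mean the model performs better on the female dataset.
    """
    return round(female_score - male_score, 4)

# CDR + PubMed Word, F1:        .7095 (female) - .8375? no -- .7007 (male) = .0088
# CDR + PubMed Word, Precision: .8206 (female) - .8375 (male) = -.0169
```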
+
+ sent the total number of false negatives for chemical type $k$ and male data $m$ . Let $FN_{f}^{k}$ represent the female false negatives. If $FN_{m}^{k}$ is larger than $FN_{f}^{k}$ , we define the ratio as $1 - (FN_{m}^{k} / FN_{f}^{k})$ . Likewise, if $FN_{f}^{k}$ is greater than $FN_{m}^{k}$ , then we define the ratio as $-(1 - FN_{f}^{k} / FN_{m}^{k})$ . Hence, when the male false negatives are higher, the score is negative; otherwise, it is positive.
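+ Concretely, the signed ratio can be sketched as follows (the function and argument names are ours; the behavior matches the convention that male-leaning errors are negative and female-leaning errors positive):

```python
def signed_fn_ratio(fn_male: int, fn_female: int) -> float:
    """Signed false-negative ratio for one chemical category.

    Negative when the male dataset accumulates more false negatives,
    positive when the female dataset does (assumes both counts are nonzero).
    """
    if fn_male > fn_female:
        return 1 - fn_male / fn_female   # male-leaning: negative
    return -(1 - fn_female / fn_male)    # female-leaning: positive (zero if equal)

# e.g., a category with many more female FNs (such as Contraceptives)
# yields a positive score, while a male-leaning category (such as
# Sexual Function drugs) yields a negative one.
```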
+
+ Overall, we make several important findings. First, we find that the models produce slightly more false negatives on the female dataset for the chemical categories Contraceptives (e.g., birth control and Plan B One-Step), Hormones (e.g., Megace, used to treat loss of appetite and wasting syndrome in people with illnesses such as breast cancer), Analgesics (i.e., pain killers such as Tylenol), and Antibiotics. In contrast, the models make slightly more errors on the male dataset in the categories Anxiolytics (i.e., drugs used to treat anxiety), Antipsychotics (i.e., chemicals used to manage psychosis, principally in schizophrenia), and Sexual Function drugs (e.g., Viagra). Furthermore, while the ratios for the most male- and female-leaning errors (Sexual Function and Contraceptives, respectively) are similar, the absolute magnitudes are substantially different. For instance, there are 397 Contraceptive $FNs$ in the female dataset, but only 75 Sexual Function $FNs$ appear in the male dataset. This helps explain the large differences in recall between the male and female datasets on the AskDoc corpus.
+
+ # 6 LIMITATIONS
+
+ There were several limitations to our study. First, the adjudication of disagreeing items depended on the judgment of a single graduate student, potentially introducing human error and bias compared to a multi-adjudicator approach. Second, the sheer volume of data from the active r/AskDocs subreddit community makes a comprehensive review by one person debatable. Although our annotation method is in line with standard practices, a more multi-faceted approach involving multiple annotators and adjudicators might offer improved accuracy and consistency in future datasets. Third, our study focuses on binary representations of gender (ignoring non-binary people). Moreover, the Social Security's Most Popular Baby Names (SSN) list may not adequately cover immigrant-related names. Hence, the results may be specific to names of European origin.
+
+ # 7 ETHICAL CONSIDERATIONS
+
+ In this study, we consider binary gender biases. While binary gender is a common area of study in the NLP literature (Mehrabi et al., 2020), and we follow the best practice of using self-identified gender (Larson, 2017), it leaves a large portion of individuals out of the study (i.e., they are not counted). Moreover, we also follow prior work (Mehrabi et al., 2020) by relating names to gender. Nevertheless, names are not directly related to gender identity. Hence, in future work, we intend to explore data collection methods beyond binary gender. Specifically, we plan to collect data from other groups for detailed studies of model performance.
+
+ Additionally, using data from platforms like Reddit's r/AskDocs, where individuals share personal health experiences, raises ethical concerns about the potential exposure of personally identifiable information (PII) and sensitive personal health information (PHI). While our research aims to assess gender bias without examining personal details, the potential for identifiable information necessitates careful handling to protect privacy and confidentiality, following established ethical guidelines for internet research (Fiesler et al., 2024).
+
+ # 8 CONCLUSION
+
+ In this paper, we evaluate the gender bias of Chemical NER systems. Moreover, we compare bias measurements from synthetic data with real-world self-identified data. We make two major findings. First, Chemical NER systems are biased with regard to gender on synthetic data. Specifically, our study found that female name-like patterns feature prominently in chemical naming conventions. This characteristic leads to a notable bias in NER systems, where female names are disproportionately identified as chemicals, inadvertently escalating the gender bias in these systems. Second, we explored the performance of these models in real-world scenarios and found that most models perform better on male-related data than on female-related data. A striking finding was the systems' poor performance when identifying chemicals frequently found in female-related data, such as mentions of contraceptives.
+
+ In conclusion, the results of our study emphasize the urgent need for deliberate bias mitigation strategies in Chemical NER systems. Our findings spotlight the necessity of incorporating both synthetic and real-world data considerations to develop models that are both fair and reliable. There are two major paths for future research. First, while large language models still lag behind dedicated NER systems in performance (Wang et al., 2023), they are becoming more common. Future work should explore biases in prompting-based NER solutions. Second, we plan to explore how chemical NER biases impact downstream tasks such as relation classification and question answering.
+
+ # ACKNOWLEDGEMENTS
+
+ This material is based upon work supported by the National Science Foundation (NSF) under Grant No. 1947697 and NSF award No. 2145357.
+
+ # References
+
+ Pankaj Agarwal and David B. Searls. 2008. Literature mining in support of drug discovery. *Briefings in Bioinformatics*, 9(6):479-492.
+ Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019a. Flair: An easy-to-use framework for state-of-the-art nlp. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59.
+ Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019b. Pooled contextualized embeddings for named entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 724-728.
+ Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In COLING 2018, 27th International Conference on Computational Linguistics, pages 1638-1649.
+ Dhananjay Ashok and Zachary C Lipton. 2023. Prompter: Prompting for named entity recognition. arXiv preprint arXiv:2305.15444.
+ Pinkesh Badjatiya, Manish Gupta, and Vasudeva Varma. 2019. Stereotypical bias removal for hate speech detection task using knowledge-based generalizations. In The World Wide Web Conference, pages 49-59.
+ James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of machine learning research, 13(2).
+ Olivier Bodenreider. 2004. The unified medical language system (UMLS): integrating biomedical terminology. *Nucleic Acids Research*, 32(suppl_1):D267-D270.
+ Elisa Chilet-Rosell. 2014. Gender bias in clinical research, pharmaceutical marketing, and the prescription of drugs. *Global Health Action*, 7(1):25484.
+ Yu-Wen Chiu, Wen-Chao Yeh, Sheng-Jie Lin, and Yung-Chun Chang. 2021. Recognizing chemical entity in biomedical literature using a bert-based ensemble learning methods for the biocreative 2021 nlm-chem track. In Proceedings of the seventh BioCreative challenge evaluation workshop.
+ Davide Cirillo, Silvina Catuara-Solarz, Czuee Morey, Emre Guney, Laia Subirats, Simona Mellino, Annalisa Gigante, Alfonso Valencia, María José Rementeria, Antonella Santuccione Chadha, et al. 2020. Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. NPJ digital medicine, 3(1):81.
+ Donald C Comeau, Chih-Hsuan Wei, Rezarta Islamaj Dogan, and Zhiyong Lu. 2019. Pmc text mining subset in bioc: about three million full-text articles and growing. Bioinformatics, 35(18):3533-3535.
+ Paula Czarnowska, Yogarshi Vyas, and Kashif Shah. 2021. Quantifying social biases in nlp: A generalization and empirical comparison of extrinsic fairness metrics. Transactions of the Association for Computational Linguistics, 9:1249-1267.
+ Thomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Third Workshop on Abusive Language Online, pages 25-35, Florence, Italy. Association for Computational Linguistics.
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019a. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019b. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
+ Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigating unintended bias in text classification. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73.
+ Lizzy Farrugia and Charlie Abela. 2020. Mining drug-drug interactions for healthcare professionals. Proceedings of the 3rd International Conference on Applications of Intelligent Systems.
+ Casey Fiesler, Michael Zimmer, Nicholas Proferes, Sarah Gilbert, and Naiyan Jones. 2024. Remember the human: A systematic review of ethical considerations in reddit research. Proceedings of the ACM on Human-Computer Interaction, 8(GROUP):1-33.
+ Joel Escudé Font and Marta R Costa-jussà. 2019. Equalizing gender bias in neural machine translation with word embeddings techniques. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 147-154.
+ Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, et al. 2020. Towards understanding gender bias in relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2943-2953.
+ Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sánchez, Mugdha Pandya, and Adam Lopez. 2021. Intrinsic bias metrics do not correlate with application bias. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1926-1940.
+ Evan Hepler-Smith. 2015. "just as the structural formula does": Names, diagrams, and the structure of organic chemistry at the 1892 Geneva nomenclature congress. Ambix, 62(1):1-28.
+ Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
+ Svetlana Kiritchenko and Saif Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 43-53.
+ Hadas Kotek, Rikker Dockum, and David Sun. 2023. Gender bias and stereotypes in large language models. In Proceedings of The ACM Collective Intelligence Conference, pages 12-24.
+ Martin Krallinger, Obdulia Rabal, Florian Leitner, Miguel Vazquez, David Salgado, Zhiyong Lu, Robert Leaman, Yanan Lu, Donghong Ji, Daniel M Lowe, et al. 2015. The chemdner corpus of chemicals and drugs and its annotation principles. Journal of cheminformatics, 7(1):1-17.
+ Brian Larson. 2017. Gender as a variable in natural-language processing: Ethical considerations. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 1-11.
+ Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
+ Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016.
+ Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li. 2020. A survey on deep learning for named entity recognition. IEEE Transactions on Knowledge and Data Engineering, 34(1):50-70.
+ Theo Lieven, Bianca Grohmann, Andreas Herrmann, Jan R Landwehr, and Miriam Van Tilburg. 2015. The effect of brand design on brand gender perceptions and brand preference. European Journal of Marketing.
+ Pilar López-Úbeda, Manuel Carlos Díaz-Galiano, L. Alfonso Ureña-López, and M. Teresa Martín-Valdivia. 2021. Combining word embeddings to extract chemical and drug entities in biomedical literature. BMC Bioinformatics, 22(1):1-18.
+ Brandon Lwowski and Anthony Rios. 2021. The risk of racial bias while tracking influenza-related content on social media using machine learning. Journal of the American Medical Informatics Association, 28(4):839-849.
+ Maria Mammi, Rita Citraro, Giovanni Torcasio, Gennaro Cusato, Caterina Palleria, and Eugenio Donato di Paola. 2013. Pharmacovigilance in pharmaceutical companies: An overview. Journal of Pharmacology & Pharmacotherapeutics, 4:S33 - S37.
+ Lisa A Martin, Harold W Neighbors, and Derek M Griffith. 2013. The experience of symptoms of depression in men vs women: analysis of the national comorbidity survey replication. JAMA psychiatry, 70(10):1100-1106.
+ Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, and A. G. Galstyan. 2020. Man is to person as woman is to location: Measuring gender bias in named entity recognition. Proceedings of the 31st ACM Conference on Hypertext and Social Media.
+ Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.
+ Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.
+ Shubhanshu Mishra, Sijun He, and Luca Belli. 2020. Assessing demographic bias in named entity recognition. arXiv preprint arXiv:2008.03415.
+ Marzieh Mozafari, Reza Farahbakhsh, and Noel Crespi. 2020. Hate speech detection and racial bias mitigation in social media based on bert model. *PloS one*, 15(8):e0237861.
+ David Mueller, Nicholas Andrews, and Mark Dredze. 2020. Sources of transfer in multilingual named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8093-8104.
+ World Health Organization and Key Centre for Women's Health in Society. 2009. Mental health aspects of women's reproductive health: a global review of the literature.
+ Karen O'Connor, Pranoti Pimpalkhute, Azadeh Nikfarjam, Rachel Ginn, Karen L Smith, and Graciela Gonzalez. 2014. Pharmacovigilance on twitter? mining tweets for adverse drug reactions. In AMIA annual symposium proceedings, volume 2014, page 924. American Medical Informatics Association.
+ Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2799-2804.
+ James Pustejovsky and Amber Stubbs. 2012. Natural Language Annotation for Machine Learning: A guide to corpus-building for applications. "O'Reilly Media, Inc."
+ S Pyysalo, F Ginter, H Moen, T Salakoski, and S Ananiadou. 2013. Distributional semantics resources for biomedical text processing. In Proceedings of LBM 2013, pages 39-44.
+ Joseph L Riley III, Michael E Robinson, Emily A Wise, Cynthia D Myers, and Roger B Fillingim. 1998. Sex differences in the perception of noxious experimental stimuli: a meta-analysis. Pain, 74(2-3):181-187.
+ Anthony Rios. 2020. Fuzzy: Fuzzy fairness evaluation of offensive language classifiers on african-american english. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 881-889.
+ Tim Rocktäschel, Michael Weidlich, and Ulf Leser. 2012. ChemSpot: a hybrid system for chemical named entity recognition. Bioinformatics, 28(12):1633-1640.
+ Rachel H Salk, Janet S Hyde, and Lyn Y Abramson. 2017. Gender differences in depression in representative national samples: Meta-analyses of diagnoses and symptoms. Psychological bulletin, 143(8):783.
+ Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 1668-1678.
+ Mirsada Serdarevic, Catherine W Striley, and Linda B Cottler. 2017. Gender differences in prescription opioid use. Current opinion in psychiatry, 30(4):238.
+ Matthew Shardlow, Nhung Nguyen, Gareth Owen, Claire O'Donovan, Andrew Leach, John McNaught, Steve Turner, and Sophia Ananiadou. 2018. A new corpus to support text mining for the curation of metabolites in the chebi database. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
+ Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407-3412.
+ David RM Smith, F Christiaan K Dolk, Timo Smieszek, Julie V Robotham, and Koen B Pouwels. 2018. Understanding the gender gap in antibiotic prescribing: a cross-sectional analysis of english primary care. BMJ open, 8(2):e020203.
+ Cong Sun, Zhihao Yang, Lei Wang, Yin Zhang, Hongfei Lin, and Jian Wang. 2021. Deep learning with language models improves named entity recognition for pharmaconer. BMC bioinformatics, 22(1):1-16.
+ Lena Thunander Sundbom, Kerstin Bingefors, Kerstin Hedborg, and Dag Isacson. 2017. Are men undertreated and women over-treated with antidepressants? findings from a cross-sectional survey in Sweden. *BJPsych bulletin*, 41(3):145-150.
+ Amir Pouran Ben Veyseh, Franck Dernoncourt, Bonan Min, and Thien Huu Nguyen. 2022. Generating complement data for aspect term extraction with gpt-2. In Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, pages 203-213.
+ Zhen Wan, Fei Cheng, Zhuoyuan Mao, Qianying Liu, Haiyue Song, Jiwei Li, and Sadao Kurohashi. 2023. Gpt-re: In-context learning for relation extraction using large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3534-3547.
+ Shuhe Wang, Xiaofei Sun, Xiaoya Li, Rongbin Ouyang, Fei Wu, Tianwei Zhang, Jiwei Li, and Guoyin Wang. 2023. Gpt-ner: Named entity recognition via large language models. arXiv preprint arXiv:2304.10428.
+ Leon Weber, Mario Sänger, Jannes Münchmeyer, Maryam Habibi, Ulf Leser, and Alan Akbik. 2021. Hunflair: an easy-to-use tool for state-of-the-art biomedical named entity recognition. Bioinformatics, 37(17):2792-2794.
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
+ Tongxuan Zhang, Hongfei Lin, Yuqi Ren, Zhihao Yang, Jian Wang, Xiaodong Duan, and Bo Xu. 2021. Identifying adverse drug reaction entities from social media with adversarial transfer learning model. Neurocomputing, 453:254-262.
+ Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629–634, Minneapolis, Minnesota. Association for Computational Linguistics.
+ Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2979–2989.
+ Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20.
+
+ # A Appendix
+
+ # A.1 Datasets
+
+ CDR (Li et al., 2016) We use the BioCreative V CDR shared task corpus. The CDR corpus comprises 1,500 PubMed articles with 4,409 annotated chemicals, 5,818 diseases, and 3,116 chemical-disease interactions. The corpus is designed to address two distinct tasks: relation classification and NER. For this study, we focus on NER for chemical entities. The annotator agreement for this corpus was .87. Finally, we use the same train, validation, and test splits from the shared task for our experiments.
+
+ <table><tr><td>Data</td><td>Embedding</td><td>Fine-tuning</td><td>hidden_size</td><td>dropout</td><td>lr</td></tr><tr><td rowspan="6">CDR</td><td>general word</td><td>TRUE</td><td>128</td><td>0.4</td><td>0.1</td></tr><tr><td>general flair</td><td>TRUE</td><td>256</td><td>0.3</td><td>0.1</td></tr><tr><td>BERT</td><td>TRUE</td><td>256</td><td>0.2</td><td>0.05</td></tr><tr><td>PubMed word</td><td>FALSE</td><td>128</td><td>0.2</td><td>0.1</td></tr><tr><td>PubMed flair</td><td>FALSE</td><td>128</td><td>0.4</td><td>0.1</td></tr><tr><td>BioBERT</td><td>TRUE</td><td>1024</td><td>0.5</td><td>0.05</td></tr><tr><td rowspan="6">CHEMD</td><td>general word</td><td>TRUE</td><td>1024</td><td>0.2</td><td>0.1</td></tr><tr><td>general flair</td><td>TRUE</td><td>512</td><td>0.5</td><td>0.1</td></tr><tr><td>BERT</td><td>TRUE</td><td>1024</td><td>0.3</td><td>0.025</td></tr><tr><td>PubMed word</td><td>TRUE</td><td>256</td><td>0.3</td><td>0.1</td></tr><tr><td>PubMed flair</td><td>FALSE</td><td>128</td><td>0.2</td><td>0.05</td></tr><tr><td>BioBERT</td><td>TRUE</td><td>1024</td><td>0.2</td><td>0.025</td></tr><tr><td rowspan="6">AskDoc</td><td>general word</td><td>TRUE</td><td>1024</td><td>0.2</td><td>0.1</td></tr><tr><td>general flair</td><td>TRUE</td><td>512</td><td>0.5</td><td>0.1</td></tr><tr><td>BERT</td><td>TRUE</td><td>1024</td><td>0.3</td><td>0.025</td></tr><tr><td>PubMed word</td><td>TRUE</td><td>256</td><td>0.3</td><td>0.1</td></tr><tr><td>PubMed flair</td><td>FALSE</td><td>128</td><td>0.2</td><td>0.05</td></tr><tr><td>BioBERT</td><td>TRUE</td><td>128</td><td>0.2</td><td>0.01</td></tr><tr><td rowspan="6">CHEBI</td><td>general word</td><td>TRUE</td><td>128</td><td>0.4</td><td>0.1</td></tr><tr><td>general flair</td><td>TRUE</td><td>128</td><td>0.3</td><td>0.1</td></tr><tr><td>BERT</td><td>TRUE</td><td>1024</td><td>0.5</td><td>0.05</td></tr><tr><td>PubMed word</td><td>TRUE</td><td>128</td><td>0.4</td><td>0.1</td></tr><tr><td>PubMed flair</td><td>FALSE</td><td>512</td><td>0.3</td><td>0.1</td></tr><tr><td>BioBERT</td><td>TRUE</td><td>256</td><td>0.4</td><td>0.05</td></tr></table>
+
+ Table 6: Best hyperparameter configuration found for each dataset and embedding combination.
+
+ CHEMDNER (Krallinger et al., 2015) The CHEMDNER corpus includes 10,000 abstracts from chemistry-related journals published in 2013 on PubMed. Each abstract was manually annotated for chemical mentions. These mentions were categorized into seven subtypes: abbreviation, family, formula, identifier, multiple, systematic, and trivial. The BioCreative organizers divided the corpus into training (3,500 abstracts), development (3,500 abstracts), and test (3,000 abstracts) sets. In total, the BioCreative IV CHEMDNER corpus comprises 84,355 chemical mention annotations, with an inter-annotator agreement of .91 (Krallinger et al., 2015). For this study, we only use the top-level Chemical annotations and ignore the subtypes for consistency across corpora. Finally, we use the same train, validation, and test splits used in the shared task for our experiments.
+
+ CHEBI (Shardlow et al., 2018) We also use the ChEBI corpus, an extensive dataset consisting of 199 annotated abstracts and 100 full papers. This corpus contains over 15,000 named entity annotations and more than 6,000 inter-entity relations, specifically aligned with the needs of the ChEBI database curators. The dataset annotates chemicals, proteins, species, biological activities, and spectral data, and has a high inter-annotator agreement of .80-.89 (F1 score, strict matching). It also categorizes relationships into several types, such as Isolated From, Associated With, Binds With, and Metabolite Of, offering a detailed view of the interactions between metabolites and other entities. The corpus is thus a rich source for exploring the lexical characteristics of metabolites and associated entities, as well as a critical resource for training machine learning models to recognize these entities and their relations in a biochemical context.
+
+ <table><tr><td></td><td>Total Male</td><td>FNR Male</td><td>Total Female</td><td>FNR Female</td></tr><tr><td>Contraceptives</td><td>33</td><td>1.0000</td><td>408</td><td>.9730</td></tr><tr><td>Hormones</td><td>170</td><td>.0882</td><td>230</td><td>.1565</td></tr><tr><td>Analgesics</td><td>571</td><td>.1489</td><td>952</td><td>.2048</td></tr><tr><td>Antibiotics</td><td>326</td><td>.2454</td><td>347</td><td>.4438</td></tr><tr><td>Antihistamines</td><td>270</td><td>.5593</td><td>295</td><td>.6780</td></tr><tr><td>Stimulants</td><td>522</td><td>.3065</td><td>390</td><td>.5051</td></tr><tr><td>Antidepressants</td><td>781</td><td>.4110</td><td>1043</td><td>.3365</td></tr><tr><td>Minerals</td><td>605</td><td>.3983</td><td>785</td><td>.3312</td></tr><tr><td>Opioids</td><td>43</td><td>.5814</td><td>95</td><td>.2316</td></tr><tr><td>Organic Chemical</td><td>441</td><td>.3764</td><td>346</td><td>.3902</td></tr><tr><td>Illicit drug</td><td>353</td><td>.5666</td><td>311</td><td>.5048</td></tr><tr><td>Vaccine</td><td>108</td><td>1.0000</td><td>78</td><td>1.0000</td></tr><tr><td>Stomach Drug</td><td>55</td><td>.5455</td><td>44</td><td>.4545</td></tr><tr><td>Antipsychotics</td><td>47</td><td>.6170</td><td>95</td><td>.1368</td></tr><tr><td>Anxiolytics</td><td>126</td><td>.5603</td><td>100</td><td>.2300</td></tr><tr><td>Sexual Function Drug</td><td>78</td><td>.9615</td><td>8</td><td>1.0000</td></tr><tr><td>PCC between Total and FNR</td><td colspan="2">-.58</td><td colspan="2">-.26</td></tr></table>
+
+ # A.2 Hyper-Parameter Settings
+
+ In this section, we report the best hyperparameters for each model, shown in Table 6. Similar to random hyperparameter search (Bergstra and Bengio, 2012), we generate 100 samples using different parameters for each dataset-model combination (e.g., we generate 100 versions of BERT for the CDR dataset). For the specific hyperparameters, we sampled dropout from .1 to .9, hidden layer sizes from \{128, 256, 512, 1024\}, learning rates selected at random from 1e-4 to 1e-1, and the option of whether to fine-tune the embedding layers (i.e., True vs. False). In addition, we trained all models for 25 epochs with a mini-batch size of 32, where only the best model on the validation dataset is saved after each epoch. Finally, all experiments were run on four NVIDIA GeForce GTX 1080 Ti GPUs.
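+ The sampling procedure above can be sketched as follows (the names are illustrative, and we assume a log-uniform draw for the learning rate, which the text does not specify):

```python
import random

def sample_config(rng: random.Random) -> dict:
    """Draw one random hyperparameter configuration from the search space."""
    return {
        "dropout": rng.uniform(0.1, 0.9),
        "hidden_size": rng.choice([128, 256, 512, 1024]),
        "lr": 10 ** rng.uniform(-4, -1),  # between 1e-4 and 1e-1
        "fine_tune_embeddings": rng.choice([True, False]),
    }

rng = random.Random(0)
# 100 sampled configurations per dataset-model combination
configs = [sample_config(rng) for _ in range(100)]
```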
+
+ # A.3 Error Analysis and Discussion
+
+ Interestingly, we find that the prevalence of chemicals across gender-related posts matches the prevalence found in traditional biomedical studies. Previous research report that women have been prescribed analgesics (e.g., pain killers such as opioids) twice as often as men (Chilet-Rosell, 2014; Serdarevic et al., 2017). While there is still limited understanding about whether men are under
277
+
278
+ Table 7: False negative rate (FNR) for female and male-related AskDoc datasets. The pearson correlation coefficient (PCC) between the frequency of each chemical type and the FNR for teach group is marked in the last row.
279
+
280
+ <table><tr><td></td><td>FNR</td><td>wFNR</td></tr><tr><td>Male</td><td>.3948</td><td>.6875</td></tr><tr><td>Female</td><td>.4064</td><td>.8088</td></tr><tr><td>Gap</td><td>.0116</td><td>.1213</td></tr><tr><td>Ratio</td><td>1.0294</td><td>1.1764</td></tr></table>
281
+
282
+ Table 8: FNR and weighted FNR (wFNR) results.
283
+
284
+ prescribed or women are over-prescribed, the disparities in prescriptions are evident. Thus, the finding in Figure 1 that we observe twice as many analgesic $FNs$ for the female data is important. Depending on the downstream application of the Chemical NER system, these performance disparities may increase harm to women. For example, if more varieties of drugs are prescribed to women, but our system does not detect them, then a downstream ADR detection system will be unable to detect important harms.
285
+
286
+ We also find differences in Antibiotic $FNs$ in Figure 1. Medical studies have likewise shown gender differences in antibiotic prescriptions. For example, a recent meta-analysis of primary care found that women received more antibiotics than men, especially women aged 16-54,
287
+
288
+ receiving $36\%-40\%$ more than males of the same age (Smith et al., 2018). Again, if we fail to detect many of the antibiotics prescribed to women, this can cause potential health disparities in downstream ADR (and other) systems.
289
+
290
+ Next, in Table 7, we report the false negative rate (FNR) for each category along with the general frequency of each category. Using the Pearson correlation coefficient, we relate the frequency of each category with the false negative rate for the male and female groups, respectively. Intuitively, we would expect the false negative rate to go down as the frequency increases, which matches our findings. However, we find that the correlation is much stronger for the male group than the female group.
291
+
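The PCC values in the last row of Table 7 are plain Pearson correlations between the per-category totals and FNRs. A minimal sketch, using four of the male-column entries from Table 7 to illustrate the negative trend; the helper function is ours, not the paper's:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Four (Total Male, FNR Male) pairs from Table 7:
# Contraceptives, Hormones, Analgesics, Antidepressants
totals = [33, 170, 571, 781]
fnrs = [1.0000, 0.0882, 0.1489, 0.4110]
r = pearson(totals, fnrs)  # negative: more frequent categories tend to have lower FNR
```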
292
+ In Table 8, we report the FNR for the female and male groups, respectively. We also introduce a new metric, weighted FNR (wFNR), which assigns an importance score to each per-category FNR to create a macro-averaged metric. Intuitively, the distribution of categories differs between the male and female groups. So, we want to test whether the FNR scores are distributed uniformly across all categories or whether the errors are more concentrated in gender-specific categories. More errors in gender-specific categories can adversely impact a group in a way that is not captured by the global FNR metric. Formally, we define wFNR for the
293
+
294
+ female group as
295
+
296
+ $$
297
+ \mathrm{wFNR}^{f} = \sum_{i=1}^{N} w_{i}^{f}\, \mathrm{FNR}_{i}^{f}
298
+ $$
299
+
300
+ where $FNR_{i}^{f}$ represents the female false negative rate for category $i$. Likewise, $w_{i}^{f}$ is defined as
301
+
302
+ $$
303
+ w_{i}^{f} = \frac{1}{\sum_{i} w_{i}^{f}} \cdot \frac{N_{i}^{f} / N^{f}}{N_{i}^{m} / N^{m}}
304
+ $$
305
+
306
+ where $N_{i}^{f}$ and $N_{i}^{m}$ represent the total number of times category $i$ appears for the female and male groups, respectively. Intuitively, we are dividing each category's proportion in the female group by its proportion in the male group. So, if a category appears proportionally more often for females than for males, its score will be higher. We normalize these scores for each group so they sum to one. Overall, we find an absolute gap of more than $1\%$ (a 3% relative difference) between the FNRs for the male and female groups. But, even worse, there is a much larger gap (.1213 vs. .0116) when using wFNR. This result suggests that many of the false negatives are concentrated in gender-specific categories (e.g., contraceptives) for the female group more than for the male group.
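A direct transcription of the wFNR definition above, as we read it; the category names and numbers in the example are illustrative, not taken from the paper's data:

```python
def weighted_fnr(counts_own, counts_other, fnr_own):
    """wFNR: per-category FNRs weighted by how over-represented each category
    is in this group relative to the other group; weights sum to one."""
    n_own, n_other = sum(counts_own.values()), sum(counts_other.values())
    # unnormalized weight: ratio of the category's proportions in the two groups
    raw = {c: (counts_own[c] / n_own) / (counts_other[c] / n_other)
           for c in counts_own}
    z = sum(raw.values())  # normalization constant
    return sum((raw[c] / z) * fnr_own[c] for c in counts_own)

# Toy example: category "a" is 3x over-represented in this group's counts,
# so its FNR dominates the weighted average.
wfnr_example = weighted_fnr(
    {"a": 30, "b": 10},    # counts in this group
    {"a": 10, "b": 30},    # counts in the other group
    {"a": 0.5, "b": 0.1},  # per-category FNRs for this group
)
# weights work out to a -> 0.9, b -> 0.1, so wFNR = 0.9*0.5 + 0.1*0.1 = 0.46
```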
2024/A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:542e3afcb5783c3199ff98043134a8390660b6eedb9a930e8c07ade6df2e3974
3
+ size 829097
2024/A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Likelihood Ratio Test of Genetic Relationship among Languages/6adb42de-14e6-4986-ab30-294d02f67dca_content_list.json ADDED
@@ -0,0 +1,1864 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "A Likelihood Ratio Test of Genetic Relationship among Languages",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 154,
8
+ 89,
9
+ 842,
10
+ 111
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "V.S.D.S. Mahesh Akavarapu and Arnab Bhattacharya \nDept. of Computer Science and Engineering \nIndian Institute of Technology Kanpur \nmaheshak@cse.iitk.ac.in, arnabb@cse.iitk.ac.in",
17
+ "bbox": [
18
+ 263,
19
+ 139,
20
+ 739,
21
+ 206
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "Abstract",
28
+ "text_level": 1,
29
+ "bbox": [
30
+ 260,
31
+ 260,
32
+ 339,
33
+ 275
34
+ ],
35
+ "page_idx": 0
36
+ },
37
+ {
38
+ "type": "text",
39
+ "text": "Lexical resemblances among a group of languages indicate that the languages could be genetically related, i.e., they could have descended from a common ancestral language. However, such resemblances can arise by chance and, hence, need not always imply an underlying genetic relationship. Many tests of significance based on permutation of wordlists and word similarity measures appeared in the past to determine the statistical significance of such relationships. We demonstrate that although existing tests may work well for bilateral comparisons, i.e., on pairs of languages, they are either infeasible by design or are prone to yield false positives when applied to groups of languages or language families. To this end, inspired by molecular phylogenetics, we propose a likelihood ratio test to determine if given languages are related based on the proportion of invariant character sites in the aligned wordlists applied during tree inference. Further, we evaluate some language families and show that the proposed test solves the problem of false positives. Finally, we demonstrate that the test supports the existence of macro language families such as Nostratic and Macro-Mayan.",
40
+ "bbox": [
41
+ 144,
42
+ 287,
43
+ 460,
44
+ 657
45
+ ],
46
+ "page_idx": 0
47
+ },
48
+ {
49
+ "type": "text",
50
+ "text": "1 Introduction",
51
+ "text_level": 1,
52
+ "bbox": [
53
+ 114,
54
+ 668,
55
+ 258,
56
+ 684
57
+ ],
58
+ "page_idx": 0
59
+ },
60
+ {
61
+ "type": "text",
62
+ "text": "Languages that descend from a common ancestral language are termed to be genetically related. The existence of lexical resemblances between the two languages is a preliminary indication that they could be related. Such resembling lexicons that truly have a common origin are called cognates. For instance, Sanskrit nama and English name are cognates that can be traced to Proto-Indo-European *h₃nómn. However, such resemblances can also occur out of sheer chance. For instance, Persian bad and behtar accidentally resemble English bad and better respectively, but are not true cognates<sup>1</sup>.",
63
+ "bbox": [
64
+ 112,
65
+ 694,
66
+ 489,
67
+ 887
68
+ ],
69
+ "page_idx": 0
70
+ },
71
+ {
72
+ "type": "text",
73
+ "text": "Hence, it is necessary to show statistical significance on any appropriate measure that captures the lexical relatedness before arguing for a genetic relationship among any group of languages or language families (Campbell, 2013).",
74
+ "bbox": [
75
+ 507,
76
+ 261,
77
+ 884,
78
+ 341
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "text",
84
+ "text": "Several significance tests appeared in the past to address this problem, with the majority of them based on permutation tests, starting from Oswalt (1970). Given wordlists of a group of languages to be evaluated for a genetic relationship, these tests obtain the null distribution of a certain measure capturing similarity between word pairs by random permutations of the wordlists. Such tests either act bilaterally, i.e., on a pair of languages or proto-languages, or multilaterally on a group of languages. Among these, the multilateral comparison, which was made famous by Greenberg (1963, 1971, 1987, 2000) in traditional historical linguistics, has been a subject of much criticism (Poser and Campbell, 2008). Hence, the preferred way of comparing two language families has been to compare their reconstructed proto-forms bilaterally. However, Greenberg (2005) argues that genetic classification should precede proto-language reconstruction. Moreover, there is often a lack of agreement on reconstructed proto-forms both in terms of phonology and semantics which gives room for sufficient manipulation of wordlists that can in turn alter the results of significance tests (Kessler, 2015). Further, we demonstrate that multilateral permutation tests (Kessler and Lehtonen, 2006; Kessler, 2007) yield false negatives even after incorporating complex word similarity metrics such as SCA and LexStat (List, 2010, 2012).",
85
+ "bbox": [
86
+ 507,
87
+ 343,
88
+ 884,
89
+ 809
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "To overcome these issues, we turn to phylogenetic analysis (Wiley and Lieberman, 2011) that is known to approximately capture the ancestral states and has been applied to phonological reconstruction tasks such as proto-language and cognate",
96
+ "bbox": [
97
+ 507,
98
+ 812,
99
+ 884,
100
+ 892
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "page_footnote",
106
+ "text": "1Persian bad is of uncertain origin while behtar ultimately derives from PIE ${}^{*}h_{1}$ wesus. On the other hand, English better",
107
+ "bbox": [
108
+ 112,
109
+ 894,
110
+ 487,
111
+ 921
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "page_footnote",
117
+ "text": "derives from PIE $* b^{h}$ edrós and is cognate with Sanskrit bhadrá",
118
+ "bbox": [
119
+ 509,
120
+ 906,
121
+ 882,
122
+ 920
123
+ ],
124
+ "page_idx": 0
125
+ },
126
+ {
127
+ "type": "page_number",
128
+ "text": "2559",
129
+ "bbox": [
130
+ 480,
131
+ 927,
132
+ 519,
133
+ 940
134
+ ],
135
+ "page_idx": 0
136
+ },
137
+ {
138
+ "type": "footer",
139
+ "text": "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 2559-2570 June 16-21, 2024 ©2024 Association for Computational Linguistics",
140
+ "bbox": [
141
+ 139,
142
+ 945,
143
+ 857,
144
+ 986
145
+ ],
146
+ "page_idx": 0
147
+ },
148
+ {
149
+ "type": "text",
150
+ "text": "reflex prediction tasks (Jäger, 2019, 2022) with reasonably good results. Specifically, we propose a likelihood ratio test (LRT) where we expect the difference in likelihoods of the best trees under null and alternate hypotheses to capture genetic relatedness. The null hypothesis assumes negligible proportion of invariant sites while the alternate hypothesis assumes significant proportion of invariant sites. Intuitively, related languages should have more positions where a character or a sound class is invariant than unrelated languages. Hence, we essentially capture the notion of relatedness as possessing a relatively high proportion of invariant sites. Further in this test, reconstructed proto-forms are not required and at the same time, the evolutionary tree structure is strictly imposed by design, unlike the multilateral model, thereby effectively circumventing the aforementioned methodological problems. Although inspired by similar tests from molecular phylogenetics, the test we propose is novel in the sense that the problem of testing common descent never arises in biology since monogenesis is accepted as a fact therein (Kessler, 2008). We further evaluate the test on various language families and demonstrate that the test does not misclassify unrelated languages as related.",
151
+ "bbox": [
152
+ 115,
153
+ 84,
154
+ 490,
155
+ 501
156
+ ],
157
+ "page_idx": 1
158
+ },
159
+ {
160
+ "type": "text",
161
+ "text": "We finally show that the test supports the existence of the macro-families Nostratic (Bomhard and Kerns, 1994) and Macro-Mayan (Campbell, 1997). While such an attempt to justify the existence of macro-families using bootstrap analysis of distance-based phylogeny is found in Jäger (2015), expressing statistical significance in terms of likelihood ratio is preferred over bootstrap support values whose interpretation is debated in molecular phylogenetics (Anisimova and Gascuel, 2006).",
162
+ "bbox": [
163
+ 115,
164
+ 502,
165
+ 489,
166
+ 663
167
+ ],
168
+ "page_idx": 1
169
+ },
170
+ {
171
+ "type": "text",
172
+ "text": "Our contributions are summarized as follows.",
173
+ "bbox": [
174
+ 132,
175
+ 664,
176
+ 472,
177
+ 678
178
+ ],
179
+ "page_idx": 1
180
+ },
181
+ {
182
+ "type": "list",
183
+ "sub_type": "text",
184
+ "list_items": [
185
+ "- We have proposed a likelihood ratio test to determine the genetic relatedness of a group of languages based on invariant site proportions.",
186
+ "- We have demonstrated by applying various language sets that the test does not exhibit the problem of false positives nor requires reconstructed proto-forms, unlike the previously proposed tests.",
187
+ "- We have found through the test some supporting evidence for the existence of macrofamilies namely Nostratic and Macro-Mayan"
188
+ ],
189
+ "bbox": [
190
+ 136,
191
+ 686,
192
+ 489,
193
+ 882
194
+ ],
195
+ "page_idx": 1
196
+ },
197
+ {
198
+ "type": "text",
199
+ "text": "The rest of the paper is summarized as follows. Related work is discussed in $\\S 2$ . The methodology",
200
+ "bbox": [
201
+ 114,
202
+ 889,
203
+ 489,
204
+ 921
205
+ ],
206
+ "page_idx": 1
207
+ },
208
+ {
209
+ "type": "text",
210
+ "text": "of the test is presented in §3. Evaluation details such as datasets and details of previous methods and variants are discussed in §4. The results are discussed in §5. The application of the method on long-range comparisons is discussed in §6. The paper is concluded in §7.",
211
+ "bbox": [
212
+ 507,
213
+ 84,
214
+ 884,
215
+ 181
216
+ ],
217
+ "page_idx": 1
218
+ },
219
+ {
220
+ "type": "text",
221
+ "text": "2 Related Work",
222
+ "text_level": 1,
223
+ "bbox": [
224
+ 509,
225
+ 193,
226
+ 665,
227
+ 209
228
+ ],
229
+ "page_idx": 1
230
+ },
231
+ {
232
+ "type": "text",
233
+ "text": "Permutation test for bilateral language relationship comparisons was introduced by Oswalt (1970). The significance of sound correspondences by brute force probability calculation was proposed by Ringe (1992, 1996). This approach was however criticized for not being able to show significance for known related pairs of languages like Latin-English and also for accounting phonologically implausible sound correspondences (Kessler, 2001). Multilateral permutation tests were proposed by (Kessler and Lehtonen, 2006; Kessler, 2007). Several applications of permutation tests exist such as (Turchin et al., 2010; Kassian et al., 2015).",
234
+ "bbox": [
235
+ 507,
236
+ 219,
237
+ 884,
238
+ 429
239
+ ],
240
+ "page_idx": 1
241
+ },
242
+ {
243
+ "type": "text",
244
+ "text": "Some notable likelihood ratio tests in molecular phylogenetics, mostly on topologies, include (Huelsenbeck and Bull, 1996; Huelsenbeck et al., 1996; Goldman et al., 2000; Anisimova and Gascuel, 2006) where bootstrap analysis is argued to be not so optimal to establish statistical significance on phylogenies. Otherwise, support for macrofamilies through bootstrap analysis for distance-based trees is shown in Jäger (2015). Comparisons of various methods of phylogenetic reconstruction such as distance-based and binary-character-based are given by Jäger (2018). Sound-class character-based phylogenetic analysis is found in (Jäger, 2019, 2022). Usually, Bayesian phylogenetic inference on binary cognate encodings gives good results (Rama et al., 2018; Rama and List, 2019).",
245
+ "bbox": [
246
+ 507,
247
+ 430,
248
+ 884,
249
+ 687
250
+ ],
251
+ "page_idx": 1
252
+ },
253
+ {
254
+ "type": "text",
255
+ "text": "Although the likelihood ratio metric is common for both past and present-day language models, the utility of this test using invariant sites outside computational historical linguistics is unknown.",
256
+ "bbox": [
257
+ 507,
258
+ 688,
259
+ 882,
260
+ 753
261
+ ],
262
+ "page_idx": 1
263
+ },
264
+ {
265
+ "type": "text",
266
+ "text": "3 Methodology",
267
+ "text_level": 1,
268
+ "bbox": [
269
+ 509,
270
+ 766,
271
+ 658,
272
+ 782
273
+ ],
274
+ "page_idx": 1
275
+ },
276
+ {
277
+ "type": "text",
278
+ "text": "The key concept revolves around the idea that any hypothesis, in this case, a hypothesis on a phylogeny, is preferred over a competing null hypothesis if it is significantly more likely, i.e., has a higher likelihood than the latter. Given the wordlist data encoded as an aligned character matrix, related languages are expected to have a higher number of invariant columns. Thus, our null hypothesis con",
279
+ "bbox": [
280
+ 507,
281
+ 791,
282
+ 884,
283
+ 921
284
+ ],
285
+ "page_idx": 1
286
+ },
287
+ {
288
+ "type": "page_number",
289
+ "text": "2560",
290
+ "bbox": [
291
+ 480,
292
+ 927,
293
+ 521,
294
+ 940
295
+ ],
296
+ "page_idx": 1
297
+ },
298
+ {
299
+ "type": "image",
300
+ "img_path": "images/00777638efd1bfdabc87a320a657feae8c20321468817a5119c4a0594e13c4a7.jpg",
301
+ "image_caption": [
302
+ "Figure 1: A section of character matrix for Uto-Aztecan family consisting of concatenated Multiple Sequence Alignments (MSAs) of consonant classes, one from each concept"
303
+ ],
304
+ "image_footnote": [],
305
+ "bbox": [
306
+ 112,
307
+ 80,
308
+ 884,
309
+ 174
310
+ ],
311
+ "page_idx": 2
312
+ },
313
+ {
314
+ "type": "text",
315
+ "text": "sists of a phylogeny with a small proportion (fixed at $1\\%$ ) of invariant sites, whereas the alternative hypothesis consists of a phylogeny with a larger but reasonable proportion (fixed at $6\\%$ ) of invariant sites. The observed difference in their likelihood of real data is compared with that of data simulated from the null hypothesis through parametric bootstrapping and, accordingly, one of the hypotheses is rejected. The steps are elaborated next.",
316
+ "bbox": [
317
+ 112,
318
+ 234,
319
+ 490,
320
+ 380
321
+ ],
322
+ "page_idx": 2
323
+ },
324
+ {
325
+ "type": "text",
326
+ "text": "3.1 Character Matrix",
327
+ "text_level": 1,
328
+ "bbox": [
329
+ 112,
330
+ 392,
331
+ 302,
332
+ 406
333
+ ],
334
+ "page_idx": 2
335
+ },
336
+ {
337
+ "type": "text",
338
+ "text": "The wordlists of a given group of languages, as mentioned previously, are encoded in the form of a character matrix. It consists of concatenated aligned words per concept, i.e., meaning. Thus, each row represents a language or taxon, and each column, also referred to as site in this paper, consists of phoneme classes, e.g., Dolgopolsky classes. Formally, let the input language set be $\\{L_1,\\ldots ,L_m\\}$ , whose genetic relatedness is to be verified statistically. Let there be $n$ concepts $C_1,\\dots,C_n$ in the wordlists. Each language $L_{i}$ should have for each concept $C_j$ a single word, say $w_{ij}$ . If a language has multiple words for a single semantic slot, only the one with fundamental or core meaning is retained, following the recipe by Kessler (2001). For instance, if the meaning 'dull' has words dull and unsharp, dull is of core or fundamental meaning. Another example would be for the meaning 'belly', Latin venter is more fundamental than abdomen. If it so happens that it still remains unresolved after this step, a single word is randomly picked up. In case a language has no word for a semantic slot, it is represented as a gap $-$ . For each concept $C_j$ and alphabet set $\\mathbb{A}$ , let $W^{j}\\in \\mathbb{A}^{m\\times l_{j}}$ represent a multiple sequence alignment (MSA) of words where $l_{j}$ is the length or the number of phonemes with vowels removed² in",
339
+ "bbox": [
340
+ 112,
341
+ 414,
342
+ 489,
343
+ 848
344
+ ],
345
+ "page_idx": 2
346
+ },
347
+ {
348
+ "type": "table",
349
+ "img_path": "images/8f54a2388cb5383a06a21d7c37da6c4fca99e2268b269628b5d07e0278645211.jpg",
350
+ "table_caption": [],
351
+ "table_footnote": [],
352
+ "table_body": "<table><tr><td>Greek_Anc</td><td>K</td><td>R</td><td>-</td><td>S</td><td></td></tr><tr><td>Latin</td><td>K</td><td>R</td><td>N</td><td>-</td><td>-</td></tr><tr><td>English</td><td>H</td><td>R</td><td>N</td><td>-</td><td>-</td></tr><tr><td>Sanskrit</td><td>S</td><td>R</td><td>N</td><td>K</td><td>-</td></tr></table>",
353
+ "bbox": [
354
+ 579,
355
+ 231,
356
+ 815,
357
+ 297
358
+ ],
359
+ "page_idx": 2
360
+ },
361
+ {
362
+ "type": "text",
363
+ "text": "Table 1: Example of a Multiple Sequence Alignment (MSA) of consonant classes for a single concept 'horn'.",
364
+ "bbox": [
365
+ 507,
366
+ 307,
367
+ 882,
368
+ 338
369
+ ],
370
+ "page_idx": 2
371
+ },
372
+ {
373
+ "type": "text",
374
+ "text": "each word. The final character matrix $X\\in \\mathbb{A}^{m\\times N}$ is concatenation of $W^{j}$ , i.e., $[W^1\\dots W^n ]$ across columns and $N = \\sum_{j = 1}^{n}l_{j}$ .",
375
+ "bbox": [
376
+ 507,
377
+ 363,
378
+ 882,
379
+ 414
380
+ ],
381
+ "page_idx": 2
382
+ },
383
+ {
384
+ "type": "text",
385
+ "text": "For example, consider a cognate set meaning 'horn' from a few Indo-European languages namely, Ancient Greek keras, Latin cornu, English horn, and Sanskrit sýnga. The resultant character matrix for this single meaning is a multiple sequence alignment with vowels removed and consonants encoded as Dolgopolsky classes as illustrated in Table 1. The final character matrix is the concatenation of such matrices across all the concepts. For an illustration of a final character matrix, see Figure 1, which is generated by MEGA11 (Tamura et al., 2021). In general, multiple sequence alignment is a fundamental step in several state-of-the-art methods in computational historical linguistics (Akavarapu and Bhattacharya, 2023, 2024).",
386
+ "bbox": [
387
+ 505,
388
+ 414,
389
+ 884,
390
+ 656
391
+ ],
392
+ "page_idx": 2
393
+ },
394
+ {
395
+ "type": "text",
396
+ "text": "3.2 Substitution Model",
397
+ "text_level": 1,
398
+ "bbox": [
399
+ 507,
400
+ 671,
401
+ 707,
402
+ 686
403
+ ],
404
+ "page_idx": 2
405
+ },
406
+ {
407
+ "type": "text",
408
+ "text": "A substitution model describes the evolution of a character at a site assuming a Markovian process. Various substitution models have been described for various alphabets such as nucleotides, amino acids, etc. In this paper, we assume the simplest possible model where substitution rates are assumed to be equal between all the pairs of distinct characters. The resultant model is known as the Jukes-Cantor model (Jukes et al., 1969) in case of nucleotide substitutions and as Poisson (Bishop and Friday, 1987) in case of amino-acid substitutions. Formally, let the number of characters in the alphabet $\\mathbb{A}$ be $N$ . An element $q_{ij}$ of the rate matrix $Q$ , which denotes the rate at which character $i$",
409
+ "bbox": [
410
+ 505,
411
+ 696,
412
+ 884,
413
+ 921
414
+ ],
415
+ "page_idx": 2
416
+ },
417
+ {
418
+ "type": "page_footnote",
419
+ "text": "<sup>2</sup>Since the root form CVC is universal, including vowels results in spurious relationships. Further, languages of Caucasianus like Georgian are rich in consonant clusters and, as a result, comparing them to others becomes difficult when vowels are considered.",
420
+ "bbox": [
421
+ 112,
422
+ 858,
423
+ 489,
424
+ 920
425
+ ],
426
+ "page_idx": 2
427
+ },
428
+ {
429
+ "type": "page_number",
430
+ "text": "2561",
431
+ "bbox": [
432
+ 480,
433
+ 928,
434
+ 517,
435
+ 940
436
+ ],
437
+ "page_idx": 2
438
+ },
439
+ {
440
+ "type": "text",
441
+ "text": "mutates to character $j$ is defined as follows:",
442
+ "bbox": [
443
+ 112,
444
+ 84,
445
+ 440,
446
+ 99
447
+ ],
448
+ "page_idx": 3
449
+ },
450
+ {
451
+ "type": "equation",
452
+ "text": "\n$$\nq _ {i j} = \\mu \\cdot \\pi_ {i}, i \\neq j \\text {(e q u a l r a t e s)} \\tag {1}\n$$\n",
453
+ "text_format": "latex",
454
+ "bbox": [
455
+ 179,
456
+ 112,
457
+ 487,
458
+ 130
459
+ ],
460
+ "page_idx": 3
461
+ },
462
+ {
463
+ "type": "text",
464
+ "text": "where $\\pi_{i}$ denotes the frequency of character $i$ at the site and $\\mu$ is the rate of mutation. The diagonal element should satisfy the normalization constraint:",
465
+ "bbox": [
466
+ 112,
467
+ 142,
468
+ 489,
469
+ 190
470
+ ],
471
+ "page_idx": 3
472
+ },
473
+ {
474
+ "type": "equation",
475
+ "text": "\n$$\nq _ {i i} = - \\sum_ {j \\neq i} q _ {i j} \\tag {2}\n$$\n",
476
+ "text_format": "latex",
477
+ "bbox": [
478
+ 240,
479
+ 200,
480
+ 487,
481
+ 236
482
+ ],
483
+ "page_idx": 3
484
+ },
485
+ {
486
+ "type": "text",
487
+ "text": "The probability of transition $i\\rightarrow j$ in time $t$ is given by the matrix $P(t) = \\{p_{ij}\\} = e^{Qt}$ . Likelihood of an evolutionary tree with topology $T$ can be, thus, calculated from the substitution matrix where branch lengths $V$ would denote the time.",
488
+ "bbox": [
489
+ 112,
490
+ 247,
491
+ 487,
492
+ 326
493
+ ],
494
+ "page_idx": 3
495
+ },
496
+ {
497
+ "type": "text",
498
+ "text": "3.3 Maximum Likelihood Tree (ML-tree)",
499
+ "text_level": 1,
500
+ "bbox": [
501
+ 112,
502
+ 338,
503
+ 453,
504
+ 354
505
+ ],
506
+ "page_idx": 3
507
+ },
508
+ {
509
+ "type": "text",
510
+ "text": "For any phylogenetic tree with topology $T$ , branch lengths $V$ , other parameters such as shape parameter of heterogeneous rate, the proportion of invariant sites denoted by $\\Theta$ , and with the observed data i.e., character matrix $X$ , the likelihood is defined as the product of likelihoods at each site as given by the following equation, assuming independence for simplicity:",
511
+ "bbox": [
512
+ 112,
513
+ 357,
514
+ 487,
515
+ 487
516
+ ],
517
+ "page_idx": 3
518
+ },
519
+ {
520
+ "type": "equation",
521
+ "text": "\n$$\n\\mathcal {L} (T, V, \\Theta | X) = \\prod_ {i = 1} ^ {N} P \\left(X _ {i} | T, V, \\Theta\\right) \\tag {3}\n$$\n",
522
+ "text_format": "latex",
523
+ "bbox": [
524
+ 164,
525
+ 498,
526
+ 487,
527
+ 542
528
+ ],
529
+ "page_idx": 3
530
+ },
531
+ {
532
+ "type": "text",
533
+ "text": "The site independence assumption also restricts the number of parameters. Given the limited amount of data, which is restricted to 100-200 wordlists, this is, thus, more suitable. Complex models such as bigram-based ones may be employed if sufficient data is available.",
534
+ "bbox": [
535
+ 112,
536
+ 552,
537
+ 487,
538
+ 646
539
+ ],
540
+ "page_idx": 3
541
+ },
542
+ {
543
+ "type": "text",
544
+ "text": "The parameters that maximize the likelihood, $\\hat{T},\\hat{V}$ and $\\hat{\\Theta}$ define the maximum likelihood tree which is usually obtained by heuristic search in the parameter space. Typically, a tree is initialized either randomly or by some heuristic means, and from there, the tree space is explored through tree modifying operations to get the \"best\" tree. For a given tree, likelihood is computed using the well-known Felsenstein's pruning algorithm from phylogenetics (Felsenstein, 1973, 1981).",
545
+ "bbox": [
546
+ 112,
547
+ 649,
548
+ 487,
549
+ 809
550
+ ],
551
+ "page_idx": 3
552
+ },
553
+ {
554
+ "type": "text",
555
+ "text": "3.4 Invariant Sites",
556
+ "text_level": 1,
557
+ "bbox": [
558
+ 112,
559
+ 820,
560
+ 275,
561
+ 834
562
+ ],
563
+ "page_idx": 3
564
+ },
565
+ {
566
+ "type": "text",
567
+ "text": "Invariant sites are those sites that are constant or evolve very slowly. These can be estimated through a maximum likelihood search along with other parameters. The proportion of invariant sites, $P_{inv}$ may be known beforehand or estimated. Given the",
568
+ "bbox": [
569
+ 112,
570
+ 841,
571
+ 487,
572
+ 920
573
+ ],
574
+ "page_idx": 3
575
+ },
576
+ {
577
+ "type": "text",
578
+ "text": "invariant sites, the likelihood defined in §3.3 is only the product of likelihoods across the variant sites.",
579
+ "bbox": [
580
+ 507,
581
+ 84,
582
+ 880,
583
+ 115
584
+ ],
585
+ "page_idx": 3
586
+ },
587
+ {
588
+ "type": "text",
589
+ "text": "Our observation is that estimated $P_{inv}$ is higher ( $>0.06$ ) among related languages while lower ( $\\approx 0.01$ ) among (possibly) unrelated languages. Based on this observation and preliminaries, we now describe the likelihood ratio test.",
590
+ "bbox": [
591
+ 507,
592
+ 116,
593
+ 882,
594
+ 196
595
+ ],
596
+ "page_idx": 3
597
+ },
598
+ {
599
+ "type": "text",
600
+ "text": "3.5 Likelihood Ratio Test (LRT)",
601
+ "text_level": 1,
602
+ "bbox": [
603
+ 507,
604
+ 208,
605
+ 779,
606
+ 223
607
+ ],
608
+ "page_idx": 3
609
+ },
610
+ {
611
+ "type": "text",
612
+ "text": "Given a null hypothesis $H_0$ and a competing alternative hypothesis $H_{a}$ , the latter is preferred if it is more likely than the former i.e., $\\mathcal{L}_{H_a} > \\mathcal{L}_{H_0}$ . In our case, the hypotheses consist of respective phylogenetic tree parameters estimated for ML-trees, i.e., $H_0$ consists of $\\hat{T}_0,\\hat{V}_0,\\hat{\\Theta}_0$ and $H_{a}$ consists of $\\hat{T}_a,\\hat{V}_a,\\hat{\\Theta}_a$ . The likelihood ratio test defines the following metric to decide whether to reject the null hypothesis:",
613
+ "bbox": [
614
+ 507,
615
+ 229,
616
+ 882,
617
+ 373
618
+ ],
619
+ "page_idx": 3
620
+ },
621
+ {
622
+ "type": "equation",
623
+ "text": "\n$$\n\\delta = 2 \\cdot \\ln \\left(\\frac {\\mathcal {L} \\left(\\hat {T} _ {a} , \\hat {V} _ {a} , \\hat {\\Theta} _ {a}\\right)}{\\mathcal {L} \\left(\\hat {T} _ {0} , \\hat {V} _ {0} , \\hat {\\Theta} _ {0}\\right)}\\right) \\tag {4}\n$$\n",
624
+ "text_format": "latex",
625
+ "bbox": [
626
+ 586,
627
+ 385,
628
+ 882,
629
+ 426
630
+ ],
631
+ "page_idx": 3
632
+ },
633
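Since phylogenetic software reports log-likelihoods, Eq. (4) reduces to a simple difference. A minimal sketch (the log-likelihood values in the test below are made up for illustration):

```python
def lrt_statistic(log_lik_alt, log_lik_null):
    # Eq. (4): delta = 2 * ln(L_a / L_0), computed directly from the
    # log-likelihoods returned by the tree search under H_a and H_0.
    return 2.0 * (log_lik_alt - log_lik_null)
```

Positive values favour the alternative hypothesis; negative values favour the null.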
+ {
634
+ "type": "text",
635
+ "text": "The Likelihood Ratio Test (LRT) metric $\\delta$ was shown to asymptotically follow a chi-squared distribution when the null hypothesis is assumed with the degrees of freedom $p - q$ , where $p$ and $q$ respectively are the numbers of free parameters in the alternate and the null hypotheses (Wilks, 1938). However, it was argued that this may not hold in general for phylogenetic problems due to the discrete nature of tree topology (see (Huesenbeck and Bull, 1996; Huelsenbeck et al., 1996; Anisimova and Gascuel, 2006) for relevant work). As a result, the distribution of $\\delta$ is determined by a parametric bootstrapping method where it is measured on the data simulated by the parameters estimated assuming the null hypothesis $H_0$ to hold, i.e., using the parameters $\\hat{T}_0$ , $\\hat{V}_0$ and $\\hat{\\Theta}_0$ .",
636
+ "bbox": [
637
+ 507,
638
+ 438,
639
+ 882,
640
+ 694
641
+ ],
642
+ "page_idx": 3
643
+ },
644
+ {
645
+ "type": "text",
646
+ "text": "As mentioned in §3.4, we propose LRT to test the relatedness of a group of languages using varying proportions of invariant sites. In other words words the null hypothesis $H_0$ consists of invariant site proportion $P_{inv}^0$ and alternate hypothesis $H_a$ consists of $P_{inv}^a$ where $P_{inv}^0 < P_{inv}^a$ as per the observations discussed in §3.4.",
647
+ "bbox": [
648
+ 507,
649
+ 696,
650
+ 882,
651
+ 807
652
+ ],
653
+ "page_idx": 3
654
+ },
655
+ {
656
+ "type": "text",
657
+ "text": "The typical way of obtaining the distribution for $\\delta$ under $H_0$ involves finding the parameters $\\{\\hat{T}_0, \\hat{V}_0, \\hat{\\Theta}_0\\}$ and $\\{\\hat{T}_a, \\hat{V}_a, \\hat{\\Theta}_a\\}$ for the best trees respectively under $H_0$ and $H_a$ along with observed $\\delta$ , say $\\hat{\\delta}$ . Further, several, say $k$ , bootstrap replicates are generated from the topology, branch lengths, and other parameters defined by $\\{\\hat{T}_0, \\hat{V}_0, \\hat{\\Theta}_0\\}$ , i.e.,",
658
+ "bbox": [
659
+ 507,
660
+ 809,
661
+ 882,
662
+ 921
663
+ ],
664
+ "page_idx": 3
665
+ },
666
+ {
667
+ "type": "page_number",
668
+ "text": "2562",
669
+ "bbox": [
670
+ 480,
671
+ 927,
672
+ 519,
673
+ 940
674
+ ],
675
+ "page_idx": 3
676
+ },
677
+ {
678
+ "type": "table",
679
+ "img_path": "images/cafbdbeafe3d324475777dc6d077e6783121c422e6238c9853bf35ee1c4efdf3.jpg",
680
+ "table_caption": [],
681
+ "table_footnote": [],
682
+ "table_body": "<table><tr><td>Family</td><td>Abbrv.</td><td>Languages</td><td>Concepts</td><td>Words</td></tr><tr><td>Afrasian</td><td>AfA</td><td>21</td><td>39</td><td>770</td></tr><tr><td>Dravidian</td><td>Drav</td><td>4</td><td>183</td><td>716</td></tr><tr><td>Indo-European</td><td>IE</td><td>12</td><td>185</td><td>2209</td></tr><tr><td>Kartvelian</td><td>Kart</td><td>1</td><td>180</td><td>180</td></tr><tr><td>Lolo-Burmese</td><td>LoBur</td><td>15</td><td>39</td><td>565</td></tr><tr><td>Mayan</td><td>May</td><td>30</td><td>94</td><td>2667</td></tr><tr><td>Mixe-Zoque</td><td>MZ</td><td>10</td><td>94</td><td>905</td></tr><tr><td>Mon-Khmer</td><td>MKh</td><td>9</td><td>199</td><td>1701</td></tr><tr><td>Mon-Khmer</td><td>MKh</td><td>16</td><td>94</td><td>1332</td></tr><tr><td>Munda</td><td>Mun</td><td>4</td><td>199</td><td>759</td></tr><tr><td>Uto-Aztecan</td><td>UAz</td><td>9</td><td>94</td><td>803</td></tr></table>",
683
+ "bbox": [
684
+ 119,
685
+ 80,
686
+ 484,
687
+ 233
688
+ ],
689
+ "page_idx": 4
690
+ },
691
+ {
692
+ "type": "text",
693
+ "text": "assuming $H_0$ . Next, the maximum likelihood search is run again on these replicates to obtain several samples for $\\delta$ , say $\\{\\delta_1,\\dots ,\\delta_k\\}$ . However, we found considerable variation in $\\hat{\\delta}$ , since the maximum likelihood search is only a heuristic and is affected by initialization. As a result, we obtain several samples for $\\hat{\\delta}$ , say $\\{\\hat{\\delta}_1,\\dots ,\\hat{\\delta}_k\\}$ by running the search $k$ times and based on the null parameters, a single bootstrap replicate is generated for each search to consequently obtain $\\{\\delta_1,\\dots ,\\delta_k\\}$ for corresponding $k$ searches. Finally the $p$ -value for $\\mathbb{E}[\\delta ] < \\mathbb{E}[\\hat{\\delta} ]$ is obtained by one-sided paired t-test. If the $p$ -value is less than a threshold (usually 0.05), we conclude that $H_{a}$ may hold or, in other words, there are at least $P_{inv}^{a}$ proportions of sites that are significantly invariant and, thus, the languages under consideration are likely to be related.",
694
+ "bbox": [
695
+ 112,
696
+ 282,
697
+ 489,
698
+ 557
699
+ ],
700
+ "page_idx": 4
701
+ },
702
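The paired comparison of the observed samples $\{\hat{\delta}_i\}$ against the bootstrap samples $\{\delta_i\}$ can be sketched as follows. This is a minimal stdlib-only sketch: obtaining the samples themselves is left abstract, and the quoted critical value is the standard one-sided 5% point of the t-distribution with $k - 1 = 14$ degrees of freedom.

```python
import math

def paired_t_statistic(delta_hat, delta_boot):
    # One-sided paired t-test of E[delta_boot] < E[delta_hat]: the t
    # statistic is computed on the per-search differences.
    diffs = [h - b for h, b in zip(delta_hat, delta_boot)]
    k = len(diffs)
    mean = sum(diffs) / k
    var = sum((d - mean) ** 2 for d in diffs) / (k - 1)  # sample variance
    return mean / math.sqrt(var / k)

# With k = 15 searches (df = 14), reject H0 at the 5% level when the
# statistic exceeds the one-sided critical value t_{0.05,14} ~= 1.761.
T_CRIT_05_DF14 = 1.761
```

A library t-test (e.g. one with an exact p-value rather than a critical-value cutoff) would serve equally well; the point is only that the test is paired across the $k$ search/replicate runs.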
+ {
703
+ "type": "text",
704
+ "text": "4 Experimental Setup",
705
+ "text_level": 1,
706
+ "bbox": [
707
+ 112,
708
+ 570,
709
+ 321,
710
+ 586
711
+ ],
712
+ "page_idx": 4
713
+ },
714
+ {
715
+ "type": "text",
716
+ "text": "The section discusses the details of the experiments including datasets, baseline models, and implementation details.",
717
+ "bbox": [
718
+ 112,
719
+ 596,
720
+ 489,
721
+ 644
722
+ ],
723
+ "page_idx": 4
724
+ },
725
+ {
726
+ "type": "text",
727
+ "text": "4.1 Datasets",
728
+ "text_level": 1,
729
+ "bbox": [
730
+ 112,
731
+ 658,
732
+ 228,
733
+ 671
734
+ ],
735
+ "page_idx": 4
736
+ },
737
+ {
738
+ "type": "text",
739
+ "text": "The data for evaluating the tests consists of wordlists from multiple language (sub-)families and their combinations. Combinations of related sub-families serve as positive examples while those of unrelated serve as negative examples. Evaluating the macro-families also consists of language groups whose relationship is only distantly suggested such as Nostratic (Bomhard and Kerns, 1994).",
740
+ "bbox": [
741
+ 112,
742
+ 678,
743
+ 487,
744
+ 806
745
+ ],
746
+ "page_idx": 4
747
+ },
748
+ {
749
+ "type": "text",
750
+ "text": "The details of data from each family are shown in Table 2. Out of these, Mon-Khmer and Munda (200 wordlists) are extracted from the Austro-Asiatic data from Rama et al. (2018). Data for Old languages of Nostratic comprising Indo-European, Dravidian, and Kartvelian are prepared by us from the Swadesh 200-wordlists available at Wik",
751
+ "bbox": [
752
+ 112,
753
+ 808,
754
+ 489,
755
+ 920
756
+ ],
757
+ "page_idx": 4
758
+ },
759
+ {
760
+ "type": "table",
761
+ "img_path": "images/1fe0e2f023343a46801d06169b220e9ff871a10a0f568350f4e79355f6cbf135.jpg",
762
+ "table_caption": [
763
+ "Table 2: Language families considered in this study."
764
+ ],
765
+ "table_footnote": [],
766
+ "table_body": "<table><tr><td>Family</td><td>Abbrv.</td><td>Languages</td><td>Concepts</td><td>Words</td></tr><tr><td>Austro-Asiatic</td><td>AA</td><td>58</td><td>200</td><td>11001</td></tr><tr><td>Austronesian</td><td>AN</td><td>45</td><td>210</td><td>8309</td></tr><tr><td>Indo-European</td><td>IE</td><td>42</td><td>208</td><td>8478</td></tr><tr><td>Pama-Nyungan</td><td>PN</td><td>67</td><td>183</td><td>11503</td></tr><tr><td>Sino-Tibetan</td><td>ST</td><td>64</td><td>110</td><td>6762</td></tr></table>",
767
+ "bbox": [
768
+ 512,
769
+ 80,
770
+ 878,
771
+ 159
772
+ ],
773
+ "page_idx": 4
774
+ },
775
+ {
776
+ "type": "text",
777
+ "text": "Table 3: Language family datasets for tree construction.",
778
+ "bbox": [
779
+ 507,
780
+ 168,
781
+ 880,
782
+ 183
783
+ ],
784
+ "page_idx": 4
785
+ },
786
+ {
787
+ "type": "text",
788
+ "text": "tionary<sup>3</sup>. Data for all the other families are obtained from Rama (2018) which were, in turn, collected from various publicly available sources. The datasets are the same as those found in related tasks such as automated cognate detection and protolanguage reconstruction.",
789
+ "bbox": [
790
+ 505,
791
+ 208,
792
+ 884,
793
+ 304
794
+ ],
795
+ "page_idx": 4
796
+ },
797
+ {
798
+ "type": "text",
799
+ "text": "In the Nostratic grouping, we considered the languages that are surviving or have surviving descendants and were attested by the 10th century CE. The motivation behind this choice is that older languages should be closer to the ancestral language and each other if at all there is any relationship. Several languages including literary Dravidian languages, Georgian, and Armenian are mostly conservative and deviate little from their old forms. The data is pre-processed by excluding motivated word forms including onomatopoeia, and nursery forms, listed in Kessler (2001). Short forms, i.e., words consisting of single syllables are also excluded. Such cleaning is necessary to avoid the appearance of spurious relationships. In the case of Nostratic, we were also careful to exclude borrowings by tracing etymologies from Wiktionary<sup>3</sup>. This step could not be extended to other language families due to a lack of readily available etymological information.",
800
+ "bbox": [
801
+ 505,
802
+ 305,
803
+ 884,
804
+ 611
805
+ ],
806
+ "page_idx": 4
807
+ },
808
+ {
809
+ "type": "text",
810
+ "text": "All the methods employed in this work, including both the proposed one and baseline ones described in §4.2, involve the construction of a phylogenetic tree. Hence, we also compare the methods on a tree construction task where we see how well the trees match the golden truth trees wherever available. The data for this task is taken from Rama et al. (2018) as summarized in Table 3.",
811
+ "bbox": [
812
+ 507,
813
+ 611,
814
+ 885,
815
+ 740
816
+ ],
817
+ "page_idx": 4
818
+ },
819
+ {
820
+ "type": "text",
821
+ "text": "4.2 Multilateral Permutation Test",
822
+ "text_level": 1,
823
+ "bbox": [
824
+ 507,
825
+ 752,
826
+ 791,
827
+ 766
828
+ ],
829
+ "page_idx": 4
830
+ },
831
+ {
832
+ "type": "text",
833
+ "text": "As mentioned in §1, most previous methods compare languages bilaterally, i.e., a pair at a time. As a result, the only possible way to compare the language families in this approach is to compare their reconstructed proto-languages. However, protoforms of a proto-language are not often universally agreed which leads to considerable allowance of",
834
+ "bbox": [
835
+ 507,
836
+ 772,
837
+ 884,
838
+ 885
839
+ ],
840
+ "page_idx": 4
841
+ },
842
+ {
843
+ "type": "page_footnote",
844
+ "text": "3https://en.wiktionary.org/wiki/Category: SwadeshLists_by(Language",
845
+ "bbox": [
846
+ 509,
847
+ 894,
848
+ 842,
849
+ 920
850
+ ],
851
+ "page_idx": 4
852
+ },
853
+ {
854
+ "type": "page_number",
855
+ "text": "2563",
856
+ "bbox": [
857
+ 480,
858
+ 928,
859
+ 519,
860
+ 940
861
+ ],
862
+ "page_idx": 4
863
+ },
864
+ {
865
+ "type": "text",
866
+ "text": "manipulation that can affect the results (Kessler, 2015). An alternate solution to determine the significance of the relationship among multiple languages was proposed by Kessler and Lehtonen (2006) and Kessler (2007) who employ a permutation test based on multilateral comparison. This has been well received in historical linguistics (Ringe and Eska, 2013).",
867
+ "bbox": [
868
+ 112,
869
+ 84,
870
+ 487,
871
+ 211
872
+ ],
873
+ "page_idx": 5
874
+ },
875
+ {
876
+ "type": "text",
877
+ "text": "The test is based on nearest-neighbour hierarchical clustering where at any point two closest clusters are lumped into one cluster. The basic distance measure, $\\hat{d}(A,B)$ , between any two clusters $A$ and $B$ is the average of distances between all possible pairs of languages in these clusters, i.e.,",
878
+ "bbox": [
879
+ 112,
880
+ 212,
881
+ 489,
882
+ 309
883
+ ],
884
+ "page_idx": 5
885
+ },
886
+ {
887
+ "type": "equation",
888
+ "text": "\n$$\n\\hat {d} (A, B) = \\frac {1}{| A | \\cdot | B |} \\sum_ {a \\in A} \\sum_ {b \\in B} d (a, b) \\tag {5}\n$$\n",
889
+ "text_format": "latex",
890
+ "bbox": [
891
+ 164,
892
+ 319,
893
+ 487,
894
+ 357
895
+ ],
896
+ "page_idx": 5
897
+ },
898
+ {
899
+ "type": "text",
900
+ "text": "where the distance $d(a,b)$ between any two languages $a$ and $b$ is the mean distance between the pairs of words over all concepts. Following the notations of $\\S 3.1$ where $w_{aj}$ and $w_{bj}$ are words in languages $a$ and $b$ respectively from concept $C_j$ ,",
901
+ "bbox": [
902
+ 112,
903
+ 369,
904
+ 487,
905
+ 450
906
+ ],
907
+ "page_idx": 5
908
+ },
909
+ {
910
+ "type": "equation",
911
+ "text": "\n$$\nd (a, b) = \\frac {\\sum_ {C _ {j} , w _ {a j} \\neq \\emptyset , w _ {b j} \\neq \\emptyset} d \\left(w _ {a j} , w _ {b j}\\right)}{\\left| \\left\\{C _ {j}: w _ {a j} \\neq \\emptyset , w _ {b j} \\neq \\emptyset \\right\\} \\right|} \\tag {6}\n$$\n",
912
+ "text_format": "latex",
913
+ "bbox": [
914
+ 139,
915
+ 461,
916
+ 487,
917
+ 500
918
+ ],
919
+ "page_idx": 5
920
+ },
921
+ {
922
+ "type": "text",
923
+ "text": "Taking an average over all languages essentially enforces multilateral comparison, i.e., multiple languages are being considered equally to compute the outcome. Further, the algorithm thus described is the same as UPGMA tree construction method (Sokal and Michener, 1958) where at any bifurcating node, a uniform rate of evolution is assumed across daughter clades. The final similarity metric $\\hat{s}(A,B)$ is determined by the following statistic that is computed based on a random permutation of words across each column (taxon) which yields random distances $d(A,B)$ :",
924
+ "bbox": [
925
+ 112,
926
+ 508,
927
+ 487,
928
+ 702
929
+ ],
930
+ "page_idx": 5
931
+ },
932
+ {
933
+ "type": "equation",
934
+ "text": "\n$$\n\\hat {s} (A, B) = \\frac {\\mathbb {E} [ d (A , B) ] - \\hat {d} (A , B)}{\\mathbb {E} [ d (A , B) ]} \\tag {7}\n$$\n",
935
+ "text_format": "latex",
936
+ "bbox": [
937
+ 169,
938
+ 714,
939
+ 487,
940
+ 750
941
+ ],
942
+ "page_idx": 5
943
+ },
944
+ {
945
+ "type": "text",
946
+ "text": "The $p$ -value of two language clusters $A$ and $B$ is the frequency of the event $\\hat{d}(A, B) \\geq d(A, B)$ relative to the total number of random permutations. Language clusters $A$ and $B$ are considered to be related if the $p$ -value is less than 0.05. The given languages are termed related if the final two clusters that are merged at the root are related (Kessler and Lehtonen, 2006).",
947
+ "bbox": [
948
+ 112,
949
+ 760,
950
+ 487,
951
+ 887
952
+ ],
953
+ "page_idx": 5
954
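The permutation procedure can be sketched for two single-language clusters. This is a minimal illustration: the binary word distance below (initial-letter match, a stand-in for consonant-class metrics like P1-Dolgo) and the toy wordlists in the test are assumptions, not the paper's data.

```python
import random

def lang_distance(words_a, words_b, d):
    # Eq. (6): mean word distance over concepts attested in both languages
    # (a falsy entry such as None marks a concept missing in a language).
    pairs = [(wa, wb) for wa, wb in zip(words_a, words_b) if wa and wb]
    return sum(d(wa, wb) for wa, wb in pairs) / len(pairs)

def permutation_p_value(words_a, words_b, d, n_perm=1000, seed=42):
    # Fraction of permutations whose random distance is at most the
    # observed one; a small value suggests the languages are related.
    rng = random.Random(seed)
    observed = lang_distance(words_a, words_b, d)
    hits = 0
    for _ in range(n_perm):
        shuffled = list(words_b)
        rng.shuffle(shuffled)  # permute one column (taxon) across concepts
        if lang_distance(words_a, shuffled, d) <= observed:
            hits += 1
    return hits / n_perm

def toy_d(w1, w2):
    # Toy stand-in for a consonant-class metric: 0 if initial letters match.
    return 0.0 if w1[0] == w2[0] else 1.0
```

Extending this to clusters amounts to averaging `lang_distance` over all cross-cluster language pairs, as in Eq. (5).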
+ },
955
+ {
956
+ "type": "text",
957
+ "text": "Kessler (2007) ran this test using various word similarity metrics which almost give similar results.",
958
+ "bbox": [
959
+ 112,
960
+ 889,
961
+ 487,
962
+ 920
963
+ ],
964
+ "page_idx": 5
965
+ },
966
+ {
967
+ "type": "text",
968
+ "text": "Among these metrics, we ran on P1-dolgo which is a binary metric that determines whether the consonant class of the word's initial consonant matches or not. Additionally, we employ the binary similarity measure introduced by Turchin et al. (2010) to test the significance of the Altaic family where the first two consonants are considered. We further test continuous word distances introduced by List (2010) (SCA) and List (2012) (LexStat) that are based on sequence alignment techniques which were introduced in the context of automated cognate detection.",
969
+ "bbox": [
970
+ 507,
971
+ 84,
972
+ 884,
973
+ 275
974
+ ],
975
+ "page_idx": 5
976
+ },
977
+ {
978
+ "type": "text",
979
+ "text": "4.3 Implementation",
980
+ "text_level": 1,
981
+ "bbox": [
982
+ 507,
983
+ 288,
984
+ 680,
985
+ 304
986
+ ],
987
+ "page_idx": 5
988
+ },
989
+ {
990
+ "type": "text",
991
+ "text": "We mapped the consonant classes to the protein alphabet since phylogenetic software expects input as either nucleotide or amino acid sequences. Moreover, most of the amino acid letters and Dolgopolsky classes are identical. In this regard, there is only one exception, namely, 'J' which is absent in the former but present in the latter and is, hence, simply replaced with 'I', which is in turn absent in Dolgopolsky classes. The multiple sequence alignments are obtained from CLUSTALW2 (Larkin et al., 2007) while the best trees and their corresponding likelihoods were computed using IQTREE (Nguyen et al., 2015). As described in §3.4 and §3.5, the proportions of invariant sites $P_{inv}^{0}$ and $P_{inv}^{a}$ are set to 0.01 and 0.06 respectively for null $(H_0)$ and alternate $(H_a)$ hypotheses. The parametric bootstrap replicates are generated using Al-iSim (Ly-Trong et al., 2022), an extension of IQTREE. To replicate as closely as possible, gaps present in the original character matrices are retained in the replicates. We calculate the p-value based on a sample size of $k = 15$ . The outcomes are observed to be stable beyond this size. The word similarity metrics used in the baseline models are computed by using Lingpy (List and Forkel, 2021). For the phylogenetic tree construction task, MEGA11 (Tamura et al., 2021) was used to deduce the maximim likelihood tree (ML-tree) with the aforementioned model with an additional gamma rate heterogeneity parameter with two distinct rates whose shape is estimated. We name this method $ML-P + I + G2$ .",
992
+ "bbox": [
993
+ 507,
994
+ 309,
995
+ 884,
996
+ 822
997
+ ],
998
+ "page_idx": 5
999
+ },
1000
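The alphabet remapping described above is a one-character substitution; a minimal sketch (the example class string in the test is hypothetical, and the gap handling is an assumption):

```python
# The protein alphabet lacks 'J', while the Dolgopolsky classes lack 'I',
# so the two inventories are reconciled by a single substitution.
PROTEIN_LETTERS = set("ACDEFGHIKLMNPQRSTVWY")

def classes_to_protein(seq: str) -> str:
    # Map a consonant-class string onto the amino-acid alphabet; gaps
    # ('-') are assumed to pass through unchanged.
    out = seq.replace("J", "I")
    bad = set(out) - PROTEIN_LETTERS - {"-"}
    if bad:
        raise ValueError(f"characters outside the protein alphabet: {bad}")
    return out
```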
+ {
1001
+ "type": "text",
1002
+ "text": "The generalized quartet distances (GQD) (Pompei et al., 2011) between the predicted and the gold trees are computed from quartet distances obtained using qdist (Mailund and Pedersen, 2004). The quartet distance between two trees measures the number of four-leaf-subsets that have dissimilar",
1003
+ "bbox": [
1004
+ 507,
1005
+ 825,
1006
+ 882,
1007
+ 919
1008
+ ],
1009
+ "page_idx": 5
1010
+ },
1011
+ {
1012
+ "type": "page_number",
1013
+ "text": "2564",
1014
+ "bbox": [
1015
+ 480,
1016
+ 928,
1017
+ 519,
1018
+ 940
1019
+ ],
1020
+ "page_idx": 5
1021
+ },
1022
+ {
1023
+ "type": "table",
1024
+ "img_path": "images/e15fa0f73357313be96eb5f36409adce94017eb353cb72b6deaa45c35dda52c6.jpg",
1025
+ "table_caption": [],
1026
+ "table_footnote": [],
1027
+ "table_body": "<table><tr><td>Method</td><td>MKh</td><td>Mun</td><td>MKh-Mun</td><td>IE</td><td>Drav</td><td>May</td><td>MZ</td><td>UAz</td><td>MKh-May</td><td>MKh-UAz</td><td>AfA-LoBur</td></tr><tr><td>Related</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✗</td><td>✗</td><td>✗</td></tr><tr><td>P1-Dolgo</td><td>0.123(&lt;0.001)</td><td>0.243(&lt;0.001)</td><td>0.080(&lt;0.001)</td><td>0.071(&lt;0.001)</td><td>0.440(&lt;0.001)</td><td>0.228(&lt;0.001)</td><td>0.412(&lt;0.001)</td><td>0.572(&lt;0.001)</td><td>0.007(&lt;0.001)</td><td>0.005(0.063)</td><td>0.017(&lt;0.001)</td></tr><tr><td>Turchin</td><td>0.019(&lt;0.001)</td><td>0.124(&lt;0.001)</td><td>0.019(&lt;0.001)</td><td>0.028(&lt;0.001)</td><td>0.292(&lt;0.001)</td><td>0.126(&lt;0.001)</td><td>0.256(&lt;0.001)</td><td>0.402(&lt;0.001)</td><td>0.003(&lt;0.001)</td><td>0.003(0.005)</td><td>0.004(&lt;0.001)</td></tr><tr><td>LexStat</td><td>0.065(&lt;0.01)</td><td>0.138(&lt;0.01)</td><td>0.048(&lt;0.01)</td><td>0.036(&lt;0.01)</td><td>0.197(&lt;0.01)</td><td>0.129(&lt;0.01)</td><td>0.244(&lt;0.01)</td><td>0.306(&lt;0.01)</td><td>0.028(&lt;0.01)</td><td>0.018(&lt;0.01)</td><td>0.033(&lt;0.01)</td></tr><tr><td>SCA</td><td>0.087(&lt;0.01)</td><td>0.187(&lt;0.01)</td><td>0.074(&lt;0.01)</td><td>0.056(&lt;0.01)</td><td>0.296(&lt;0.01)</td><td>0.177(&lt;0.01)</td><td>0.304(&lt;0.01)</td><td>0.400(&lt;0.01)</td><td>0.015(&lt;0.01)</td><td>0.006(&lt;0.01)</td><td>0.031(&lt;0.01)</td></tr><tr><td>LRT</td><td>9.205(&lt;0.001)</td><td>1.58(&lt;0.001)</td><td>14.18(&lt;0.001)</td><td>26.154(&lt;0.001)</td><td>1.78(&lt;0.001)</td><td>68.212(&lt;0.001)</td><td>7.192(&lt;0.001)</td><td>10.448(&lt;0.001)</td><td>-14.359(0.280)</td><td>-12.188(0.065)</td><td>-10.768(0.979)</td></tr></table>",
1028
+ "bbox": [
1029
+ 117,
1030
+ 80,
1031
+ 880,
1032
+ 225
1033
+ ],
1034
+ "page_idx": 6
1035
+ },
1036
+ {
1037
+ "type": "text",
1038
+ "text": "topologies. Unlike biological phylogenetic trees, language trees are often multifurcated. Hence, GQD excludes penalties over the order of bifurcations. The code and relevant data have been made publicly available<sup>4</sup>. Further implementation details can be found in README.md therein.",
1039
+ "bbox": [
1040
+ 112,
1041
+ 302,
1042
+ 487,
1043
+ 398
1044
+ ],
1045
+ "page_idx": 6
1046
+ },
1047
+ {
1048
+ "type": "text",
1049
+ "text": "5 Results",
1050
+ "text_level": 1,
1051
+ "bbox": [
1052
+ 112,
1053
+ 414,
1054
+ 213,
1055
+ 429
1056
+ ],
1057
+ "page_idx": 6
1058
+ },
1059
+ {
1060
+ "type": "text",
1061
+ "text": "The primary results of the paper are tabulated in Table 4, where the results of LRT (last row) are compared with those of the multilateral permutation tests. Except for LRT, the column 'Method' indicates the distance metric employed in the permutation test. The row 'Related' indicates the current consensus about the relatedness of the language families. For the permutation test, the values indicate the similarity metric $\\hat{s}$ defined in Eq. (7), as measured at the root. On the other hand, for LRT the values indicate the mean of observed $\\hat{\\delta}$ (see §3.5). The p-values are indicated in parentheses. The standard threshold of 0.05 is assumed for p-values. Please refer to Table 2 and Table 3 for abbreviations of various language families.",
1062
+ "bbox": [
1063
+ 112,
1064
+ 443,
1065
+ 487,
1066
+ 684
1067
+ ],
1068
+ "page_idx": 6
1069
+ },
1070
+ {
1071
+ "type": "text",
1072
+ "text": "One can observe that false positives, indicated in red, are absent for LRT, in contrast with multilateral permutation tests which exhibit false positives in all cases (except P1-Dolgo for MKh-UAz). However, we note that the similarity scores of the Turchin measure are consistently small $(< 0.005)$ for negatives irrespective of the significance implied by the p-value. Hence, it may be noted that Turchin could be a good measure for permutation tests when similarity scores are taken into consideration.",
1073
+ "bbox": [
1074
+ 112,
1075
+ 686,
1076
+ 487,
1077
+ 845
1078
+ ],
1079
+ "page_idx": 6
1080
+ },
1081
+ {
1082
+ "type": "text",
1083
+ "text": "Further, one can observe from Table 4 that mean $\\hat{\\delta}$ values are small for valid families such as Mun and Drav. This has to do with the fact that the data",
1084
+ "bbox": [
1085
+ 112,
1086
+ 848,
1087
+ 487,
1088
+ 894
1089
+ ],
1090
+ "page_idx": 6
1091
+ },
1092
+ {
1093
+ "type": "table",
1094
+ "img_path": "images/2d6c4f0a59882ea082ce51433634912ac4b87b00700b0097f9c5dcd49338d32b.jpg",
1095
+ "table_caption": [
1096
+ "Table 4: Significance testing on various existent and non-existent families. The values indicate the similarity measure $\\hat{s}$ in the case of permutation tests and in the case of LRT they indicate the mean of statistic $\\hat{\\delta}$ . Values in parentheses indicate p-value. False positives are marked in red."
1097
+ ],
1098
+ "table_footnote": [],
1099
+ "table_body": "<table><tr><td>Method</td><td>AA</td><td>AN</td><td>IE</td><td>PN</td><td>ST</td><td>Avg</td></tr><tr><td>P1-Dolgo</td><td>0.060</td><td>0.208</td><td>0.033</td><td>0.175</td><td>0.188</td><td>0.133</td></tr><tr><td>Turchin</td><td>0.069</td><td>0.195</td><td>0.058</td><td>0.175</td><td>0.275</td><td>0.154</td></tr><tr><td>LexStat</td><td>0.051</td><td>0.178</td><td>0.020</td><td>0.164</td><td>0.096</td><td>0.102</td></tr><tr><td>SCA</td><td>0.049</td><td>0.119</td><td>0.025</td><td>0.166</td><td>0.087</td><td>0.089</td></tr><tr><td>ML-P+I+G2</td><td>0.026</td><td>0.065</td><td>0.033</td><td>0.145</td><td>0.125</td><td>0.079</td></tr></table>",
1100
+ "bbox": [
1101
+ 512,
1102
+ 299,
1103
+ 880,
1104
+ 375
1105
+ ],
1106
+ "page_idx": 6
1107
+ },
1108
+ {
1109
+ "type": "text",
1110
+ "text": "Table 5: Comparison of the methods on phylogenetic tree construction task provided as GQD scores. The best results are in bold.",
1111
+ "bbox": [
1112
+ 507,
1113
+ 384,
1114
+ 882,
1115
+ 426
1116
+ ],
1117
+ "page_idx": 6
1118
+ },
1119
+ {
1120
+ "type": "text",
1121
+ "text": "for these families consists of a lower number of taxa (see Table 2). Hence, although the $\\hat{\\delta}$ measure need not imply strength, its sign implies which hypothesis is to be preferred, i.e., the one with a larger proportion of invariant sites in case of a positive value and the one with a smaller proportion of invariant sites in case of a negative value.",
1122
+ "bbox": [
1123
+ 507,
1124
+ 453,
1125
+ 882,
1126
+ 565
1127
+ ],
1128
+ "page_idx": 6
1129
+ },
1130
+ {
1131
+ "type": "text",
1132
+ "text": "5.1 Tree Construction",
1133
+ "text_level": 1,
1134
+ "bbox": [
1135
+ 507,
1136
+ 577,
1137
+ 697,
1138
+ 592
1139
+ ],
1140
+ "page_idx": 6
1141
+ },
1142
+ {
1143
+ "type": "text",
1144
+ "text": "As mentioned in §4.1, both the methods output a tree, and, therefore, the methods have been evaluated on the tree construction task. The purpose of this task is to ensure that the proposed methods have indeed a good sense of phylogenetic inference and are, hence, appropriate to carry out significance tests over phylogenies. The results are tabulated in Table 5. By comparing with the mean scores of state-of-the-art language phylogeny inference methods on this data, ML-P+I+G2 (0.079) is a few steps behind Bayesian inferred tree (0.066) (Rama et al., 2018) and maximum a posteriori tree (0.051) (Rama and List, 2019). Hence, it can be concluded that consonant-class-based character matrix encoding is almost as good as cognate-based binary character matrix encoding while probabilistic methods based on character matrices are superior to distance-based methods for this task. Among the distance-based approaches, one with the SCA metric performs best. A similar situation was ob",
1145
+ "bbox": [
1146
+ 505,
1147
+ 599,
1148
+ 882,
1149
+ 920
1150
+ ],
1151
+ "page_idx": 6
1152
+ },
1153
+ {
1154
+ "type": "page_footnote",
1155
+ "text": "<sup>4</sup>https://github.com/mahesh-ak/PhyloVal",
1156
+ "bbox": [
1157
+ 134,
1158
+ 904,
1159
+ 425,
1160
+ 920
1161
+ ],
1162
+ "page_idx": 6
1163
+ },
1164
+ {
1165
+ "type": "page_number",
1166
+ "text": "2565",
1167
+ "bbox": [
1168
+ 480,
1169
+ 927,
1170
+ 519,
1171
+ 940
1172
+ ],
1173
+ "page_idx": 6
1174
+ },
1175
+ {
1176
+ "type": "table",
1177
+ "img_path": "images/04682f21704b6707b80d116b2c61fc4823c304e97ed13a35acfcface08310366.jpg",
1178
+ "table_caption": [],
1179
+ "table_footnote": [],
1180
+ "table_body": "<table><tr><td>Method</td><td>Drav-IE</td><td>Drav-IE-Kart</td><td>May-MZ</td><td>May-UAz</td><td>May-MZ-UAz</td></tr><tr><td>P1-Dolgo</td><td>0.046(&lt;0.001)</td><td>0.038(&lt;0.001)</td><td>0.033(&lt;0.001)</td><td>0.046(&lt;0.001)</td><td>0.036(&lt;0.001)</td></tr><tr><td>Turchin</td><td>0.017(&lt;0.001)</td><td>0.002(0.197)</td><td>0.012(&lt;0.001)</td><td>0.012(&lt;0.001)</td><td>0.008(&lt;0.001)</td></tr><tr><td>LexStat</td><td>0.024(&lt;0.01)</td><td>0.014(&lt;0.01)</td><td>0.033(&lt;0.01)</td><td>0.027(&lt;0.01)</td><td>0.024(&lt;0.01)</td></tr><tr><td>SCA</td><td>0.024(&lt;0.01)</td><td>0.007(0.01)</td><td>0.019(&lt;0.01)</td><td>0.024(&lt;0.01)</td><td>0.015(&lt;0.01)</td></tr><tr><td>LRT</td><td>24.882(&lt;0.001)</td><td>0.316(&lt;0.001)</td><td>20.988(&lt;0.001)</td><td>-1.035(&lt;0.001)</td><td>-9.819(&lt;0.001)</td></tr></table>",
1181
+ "bbox": [
1182
+ 117,
1183
+ 80,
1184
+ 485,
1185
+ 193
1186
+ ],
1187
+ "page_idx": 7
1188
+ },
1189
+ {
1190
+ "type": "text",
1191
+ "text": "Table 6: Results of evaluation of macro families. Parentheses contain p-values.",
1192
+ "bbox": [
1193
+ 112,
1194
+ 202,
1195
+ 489,
1196
+ 231
1197
+ ],
1198
+ "page_idx": 7
1199
+ },
1200
+ {
1201
+ "type": "text",
1202
+ "text": "served in Rama et al. (2018) and Rama and List (2019) where SCA-based cognates yield the best performance. However, it should be noted that SCA and LexStat-based measures yield false positives on significance testing (Table 4) despite their performance on this task.",
1203
+ "bbox": [
1204
+ 112,
1205
+ 256,
1206
+ 489,
1207
+ 353
1208
+ ],
1209
+ "page_idx": 7
1210
+ },
1211
+ {
1212
+ "type": "text",
1213
+ "text": "6 Evaluation of Macro Families",
1214
+ "text_level": 1,
1215
+ "bbox": [
1216
+ 112,
1217
+ 365,
1218
+ 405,
1219
+ 382
1220
+ ],
1221
+ "page_idx": 7
1222
+ },
1223
+ {
1224
+ "type": "text",
1225
+ "text": "We apply the tests on groupings of a few families from proposed macro families, namely Nostratic, Macro-Mayan, and Amerind. Under Nostratic, we test for groupings Dravidian-Indo-European (Drav-IE) and Dravidian-Indo-European-Kartvelian (Drav-IE-Kart) while we test Mayan-Mixe-Zoque (May-MZ) under Macro-Mayan and Mayan-Uto-Aztecan (May-UAz), Mayan-Mixe-Zoque-Uto-Aztecan (May-MZ-UAz) under Amerind. The results are tabulated in Table 6. While going by the p-values, the LRT test seems to support all of the mentioned families. However, the mean LRT statistic $\\hat{\\delta}$ is weak (negative or close to 0) for Drav-IE-Kart (Nostratic) and May-UAz, MayMZ-UAz (Amerind). In other words, by looking at Eq. (4), the alternate hypothesis $H_{a}$ , i.e., having higher invariant sites is not preferred. Thus, it may be concluded that LRT is a highly sensitive test since the mere addition of a single language (Georgian) to a strongly supported group of 16 languages (Drav-IE) alters the outcome drastically. This is a desirable property since the presence of even a single anomaly, an unrelated language in this case, can be detected. Note that other combinations in Nostratic such as Drav-Kart or IE-Kart are much weaker and not well supported by the permutation test itself, which is elaborated as follows.",
1226
+ "bbox": [
1227
+ 112,
1228
+ 391,
1229
+ 489,
1230
+ 825
1231
+ ],
1232
+ "page_idx": 7
1233
+ },
1234
+ {
1235
+ "type": "text",
1236
+ "text": "6.1 Analysis of Permutation tests on Nostratic",
1237
+ "text_level": 1,
1238
+ "bbox": [
1239
+ 112,
1240
+ 835,
1241
+ 487,
1242
+ 852
1243
+ ],
1244
+ "page_idx": 7
1245
+ },
1246
+ {
1247
+ "type": "text",
1248
+ "text": "Bilateral significances on Nostratic grouping Drav-IE-Kart for various distance metrics are reported in Figure 2, where the pairwise relationships based on p-value (with threshold 0.05) are color-coded. The",
1249
+ "bbox": [
1250
+ 112,
1251
+ 857,
1252
+ 489,
1253
+ 921
1254
+ ],
1255
+ "page_idx": 7
1256
+ },
1257
+ {
1258
+ "type": "text",
1259
+ "text": "computation follows the same steps as defined in §4.2 except that distances and similarities are calculated over pairs of languages instead of language clusters. This indeed forms the first iteration of a complete multilateral test.",
1260
+ "bbox": [
1261
+ 507,
1262
+ 84,
1263
+ 884,
1264
+ 164
1265
+ ],
1266
+ "page_idx": 7
1267
+ },
1268
+ {
1269
+ "type": "text",
1270
+ "text": "The languages are abbreviated in Fig. 2 as follows: Old Georgian (Ge), Old Kannada (Ka), Old Telugu (Te), Old Tamil (Ta), Old Malayalam (Ma), Ancient Greek (Gr), Old Armenian (Ar), Middle Persian (Pe), Sanskrit (Sa), Pali (Pa), Old Church Slavonic (CS), Old Irish (Ir), Latin (La), Old French (Fr), Old High German (HG), Old English (En) and Old Norse (No).",
1271
+ "bbox": [
1272
+ 507,
1273
+ 165,
1274
+ 885,
1275
+ 293
1276
+ ],
1277
+ "page_idx": 7
1278
+ },
1279
+ {
1280
+ "type": "text",
1281
+ "text": "It is visible that for each metric, languages of the same family (IE and Drav) are almost always related pairwise. Secondly, many pairs from Drav-IE appear related. However, except for LexStat, Georgian shows to be related to at most two languages from the Drav-IE grouping. Yet, in the permutation tests for these metrics, except for Turchin (Table 6), Drav-IE-Kart appears significantly related with sometimes even good similarity scores (in the case of P1-Dolgo). All that can be concluded here is that, except for the LexStat metric, permutation tests are very sensitive to pairwise language comparisons and may not yield false positives. However, if Drav-IE-Kart is to be considered a valid grouping, these tests may be said to yield false negatives.",
1282
+ "bbox": [
1283
+ 507,
1284
+ 294,
1285
+ 885,
1286
+ 535
1287
+ ],
1288
+ "page_idx": 7
1289
+ },
1290
+ {
1291
+ "type": "text",
1292
+ "text": "6.2 Analysis of ML-trees of Nostratic",
1293
+ "text_level": 1,
1294
+ "bbox": [
1295
+ 507,
1296
+ 546,
1297
+ 816,
1298
+ 561
1299
+ ],
1300
+ "page_idx": 7
1301
+ },
1302
+ {
1303
+ "type": "text",
1304
+ "text": "Unrooted maximum likelihood trees (ML-trees) are drawn in Figure 3 on various sub-groupings of Nostratic using MEGA11 assuming the Poisson+I model. For the IE tree (Figure 3(a)), the sub-families, except for the position of Old Church Slavonic, are highly faithful reflecting the existing notions. For instance, the topology of the Germanic family, i.e., (Old Norse, (Old English, Old High German)) contains the valid West-Germanic branch (Old English, Old High German). Similarly, the Italo-Celtic group (Old Irish, (Latin, Old French)) is visible. Also, one can distinguish a clear boundary between Western and Eastern IE languages reflecting the geographical distribution. However, the position of Old Church Slavonic intruded into Indo-Iranian appears problematic.",
1305
+ "bbox": [
1306
+ 507,
1307
+ 567,
1308
+ 885,
1309
+ 824
1310
+ ],
1311
+ "page_idx": 7
1312
+ },
1313
+ {
1314
+ "type": "text",
1315
+ "text": "Further, the addition of the Dravidian family in Drav-IE does not alter the IE topology (Figure 3(b)). It is intriguing to note the western inclination of Dravidian given its eastern geographical location in the present day. However, this is in line with the observation of Caldwell (1875),",
1316
+ "bbox": [
1317
+ 507,
1318
+ 825,
1319
+ 885,
1320
+ 921
1321
+ ],
1322
+ "page_idx": 7
1323
+ },
1324
+ {
1325
+ "type": "page_number",
1326
+ "text": "2566",
1327
+ "bbox": [
1328
+ 480,
1329
+ 928,
1330
+ 521,
1331
+ 940
1332
+ ],
1333
+ "page_idx": 7
1334
+ },
1335
+ {
1336
+ "type": "image",
1337
+ "img_path": "images/abd26860e50c55760c001b84d361d4ad8fc5c0e5469f79f83226302d24aabcd9.jpg",
1338
+ "image_caption": [
1339
+ "(a) P1-Dolgo"
1340
+ ],
1341
+ "image_footnote": [],
1342
+ "bbox": [
1343
+ 117,
1344
+ 80,
1345
+ 305,
1346
+ 181
1347
+ ],
1348
+ "page_idx": 8
1349
+ },
1350
+ {
1351
+ "type": "image",
1352
+ "img_path": "images/3827a97e3d28e7e721cf0f5e8194cc6b786d291ef128550b5a88dbc501a81cfa.jpg",
1353
+ "image_caption": [
1354
+ "(b) Turchin"
1355
+ ],
1356
+ "image_footnote": [],
1357
+ "bbox": [
1358
+ 309,
1359
+ 80,
1360
+ 497,
1361
+ 181
1362
+ ],
1363
+ "page_idx": 8
1364
+ },
1365
+ {
1366
+ "type": "image",
1367
+ "img_path": "images/db94984a81c2c8e35574fad1ba2a1dc0add7fe6e80e130526d38c1b1d8dd9eb4.jpg",
1368
+ "image_caption": [
1369
+ "(c) SCA"
1370
+ ],
1371
+ "image_footnote": [],
1372
+ "bbox": [
1373
+ 500,
1374
+ 80,
1375
+ 687,
1376
+ 181
1377
+ ],
1378
+ "page_idx": 8
1379
+ },
1380
+ {
1381
+ "type": "image",
1382
+ "img_path": "images/d703f13869afbd8d5e6bea6f588c9f857fd915eee0fca7b9d109b228a7d76a6c.jpg",
1383
+ "image_caption": [
1384
+ "(d) LexStat"
1385
+ ],
1386
+ "image_footnote": [],
1387
+ "bbox": [
1388
+ 694,
1389
+ 80,
1390
+ 880,
1391
+ 181
1392
+ ],
1393
+ "page_idx": 8
1394
+ },
1395
+ {
1396
+ "type": "image",
1397
+ "img_path": "images/a1c797f14e0ee5cd69bd79bbb87bedcaec674ab1a59060fcbfbf2e35cc808ec0.jpg",
1398
+ "image_caption": [
1399
+ "(a) IE",
1400
+ "Figure 3: Comparison of unrooted ML-trees on various groupings of Nostratic language families"
1401
+ ],
1402
+ "image_footnote": [],
1403
+ "bbox": [
1404
+ 146,
1405
+ 261,
1406
+ 371,
1407
+ 432
1408
+ ],
1409
+ "page_idx": 8
1410
+ },
1411
+ {
1412
+ "type": "image",
1413
+ "img_path": "images/23483a68b68089edb8270bc150e68810efc6dfd351d7f7ab42777e35961f739a.jpg",
1414
+ "image_caption": [
1415
+ "(b) Drav-IE"
1416
+ ],
1417
+ "image_footnote": [],
1418
+ "bbox": [
1419
+ 400,
1420
+ 261,
1421
+ 608,
1422
+ 432
1423
+ ],
1424
+ "page_idx": 8
1425
+ },
1426
+ {
1427
+ "type": "image",
1428
+ "img_path": "images/0d50c1032d6c70828c0c828841673d1f89575216a80dae39fd53e640b536da3f.jpg",
1429
+ "image_caption": [
1430
+ "Figure 2: Bilateral (pairwise) significance among the languages of Nostratic grouping. The yellow shade implies that the relationship is statistically significant $(p < 0.05)$ , while the purple shade implies otherwise.",
1431
+ "(c) Drav-IE-Kart"
1432
+ ],
1433
+ "image_footnote": [],
1434
+ "bbox": [
1435
+ 640,
1436
+ 259,
1437
+ 850,
1438
+ 432
1439
+ ],
1440
+ "page_idx": 8
1441
+ },
1442
+ {
1443
+ "type": "text",
1444
+ "text": "the founder of comparative Dravidian linguistics himself. Finally, the addition of Georgian invalidates the West-Germanic branch as well as pushes Old Greek problematically into the Western group (Figure 3(c)). However, much of the topology is undisturbed and one can also notice how the languages/families that are located south of the Caucasus namely, Armenian, Georgian, and Dravidian are grouped. Overall, it may be concluded that the addition of unrelated or weakly related languages can alter the actual topology.",
1445
+ "bbox": [
1446
+ 112,
1447
+ 502,
1448
+ 487,
1449
+ 678
1450
+ ],
1451
+ "page_idx": 8
1452
+ },
1453
+ {
1454
+ "type": "text",
1455
+ "text": "Similar analyses in case of Macro-Mayan and Amerind families are provided in Appendix A where one can observe similar perturbations in topology (see Fig. 5) of one family (Mayan) in presence of others (Mixe-Zoque and Uto-Aztecan).",
1456
+ "bbox": [
1457
+ 112,
1458
+ 681,
1459
+ 489,
1460
+ 760
1461
+ ],
1462
+ "page_idx": 8
1463
+ },
1464
+ {
1465
+ "type": "text",
1466
+ "text": "7 Conclusions",
1467
+ "text_level": 1,
1468
+ "bbox": [
1469
+ 112,
1470
+ 778,
1471
+ 253,
1472
+ 793
1473
+ ],
1474
+ "page_idx": 8
1475
+ },
1476
+ {
1477
+ "type": "text",
1478
+ "text": "In this paper, we have presented a likelihood ratio test based on the proportions of invariant sites to determine the genetic relatedness of a group of languages. Our proposed test does not yield false positives, which is in contrast with previous permutation-based tests that proved to be good only for pairwise language comparisons and not",
1479
+ "bbox": [
1480
+ 112,
1481
+ 808,
1482
+ 489,
1483
+ 921
1484
+ ],
1485
+ "page_idx": 8
1486
+ },
1487
+ {
1488
+ "type": "text",
1489
+ "text": "for validating a language group. By applying this test, we have found strong supporting evidence for macro-families such as Dravidian-Indo-European, Macro-Mayan (for Mayan-Mixe-Zoque, and weak evidence for Nostratic (Dravidian-Indo-European-Kartvelian) and Amerind (for Mayan-Uto-Aztecan). Through secondary analyses, we have also shown that probabilistic-based methods are superior to distance-based ones based on tree construction and the correlation of topologies with geography. In this work we did not touch upon semantic shifts, i.e., words changing meaning over time; for example, the word quick initially meant 'lively'. While considering semantic shifts may provide room for data manipulation favoring any particular hypothesis, few semantic slots such as 'bark'-'skin' are often found to have common words. In such cases, the slots may be merged into one as suggested by Kessler (2001).",
1490
+ "bbox": [
1491
+ 505,
1492
+ 502,
1493
+ 882,
1494
+ 807
1495
+ ],
1496
+ "page_idx": 8
1497
+ },
1498
+ {
1499
+ "type": "text",
1500
+ "text": "In summary, before constructing phylogenies of a group of languages, the relatedness of the group should be established through a significance test such as the one we have presented. Otherwise, the phylogenic grouping would not only be questionable but may also alter the topology of a related sub-group.",
1501
+ "bbox": [
1502
+ 507,
1503
+ 809,
1504
+ 882,
1505
+ 921
1506
+ ],
1507
+ "page_idx": 8
1508
+ },
1509
+ {
1510
+ "type": "page_number",
1511
+ "text": "2567",
1512
+ "bbox": [
1513
+ 480,
1514
+ 927,
1515
+ 519,
1516
+ 940
1517
+ ],
1518
+ "page_idx": 8
1519
+ },
1520
+ {
1521
+ "type": "text",
1522
+ "text": "Limitations",
1523
+ "text_level": 1,
1524
+ "bbox": [
1525
+ 114,
1526
+ 84,
1527
+ 220,
1528
+ 98
1529
+ ],
1530
+ "page_idx": 9
1531
+ },
1532
+ {
1533
+ "type": "text",
1534
+ "text": "The values of $P_{inv}^{0}$ and $P_{inv}^{a}$ (§3.5) are roughly decided based on the estimated ones from two examples, namely, Afrasian-Lolo-Burmese as a negative example and Indo-European as a positive example. The question of what should be the most appropriate values that should make the test optimal is not addressed here. Ideally, to address this question, more data is needed with several positive and negative examples to search for optimal values of these parameters. Also, the exact values may require calibration according to the phylogenetic software used since there could be significant differences in the implementations. Secondly, while analyzing Nostratic languages, Uralic, an important language family, has not been included due to the selection criteria (§4.1) that the languages should have been attested before 10th century CE. To include Uralic, the (Nostratic) languages that are attested around the same period as the earliest attested ones from Uralic (roughly 1300 CE onwards) should be considered to make 'fair' comparisons.",
1535
+ "bbox": [
1536
+ 115,
1537
+ 109,
1538
+ 489,
1539
+ 447
1540
+ ],
1541
+ "page_idx": 9
1542
+ },
1543
+ {
1544
+ "type": "text",
1545
+ "text": "Ethics Statement",
1546
+ "text_level": 1,
1547
+ "bbox": [
1548
+ 114,
1549
+ 460,
1550
+ 265,
1551
+ 474
1552
+ ],
1553
+ "page_idx": 9
1554
+ },
1555
+ {
1556
+ "type": "text",
1557
+ "text": "All the datasets are obtained from publicly available sources. Thus, there are no foreseen ethical considerations or conflicts of interest.",
1558
+ "bbox": [
1559
+ 112,
1560
+ 486,
1561
+ 489,
1562
+ 533
1563
+ ],
1564
+ "page_idx": 9
1565
+ },
1566
+ {
1567
+ "type": "text",
1568
+ "text": "References",
1569
+ "text_level": 1,
1570
+ "bbox": [
1571
+ 114,
1572
+ 561,
1573
+ 213,
1574
+ 576
1575
+ ],
1576
+ "page_idx": 9
1577
+ },
1578
+ {
1579
+ "type": "list",
1580
+ "sub_type": "ref_text",
1581
+ "list_items": [
1582
+ "V.S.D.S.Mahesh Akavarapu and Arnab Bhattacharya. 2023. Cognate Transformer for Automated Phonological Reconstruction and Cognate Reflex Prediction. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6852-6862, Singapore. Association for Computational Linguistics.",
1583
+ "V.S.D.S.Mahesh Akavarapu and Arnab Bhattacharya. 2024. Automated Cognate Detection as a Supervised Link Prediction Task with Cognate Transformer. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 965-975, St. Julian's, Malta. Association for Computational Linguistics.",
1584
+ "Maria Anisimova and Olivier Gascuel. 2006. Approximate likelihood-ratio test for branches: A fast, accurate, and powerful alternative. Systematic Biology, 55(4):539-552.",
1585
+ "M J Bishop and A E Friday. 1987. Tetrapod relationships: The molecular evidence. Molecules and morphology in evolution: Conflict or compromise, pages 123-139."
1586
+ ],
1587
+ "bbox": [
1588
+ 115,
1589
+ 585,
1590
+ 489,
1591
+ 919
1592
+ ],
1593
+ "page_idx": 9
1594
+ },
1595
+ {
1596
+ "type": "list",
1597
+ "sub_type": "ref_text",
1598
+ "list_items": [
1599
+ "Allan R Bomhard and John C Kerns. 1994. The Nostratic macrofamily: A study in distant linguistic relationship. De Gruyter Mouton.",
1600
+ "Robert Caldwell. 1875. A comparative grammar of the Dravidian or South-Indian family of languages. Trübner.",
1601
+ "Lyle Campbell. 1997. American Indian languages: The historical linguistics of Native America, volume 4. Oxford University Press, USA.",
1602
+ "Lyle Campbell. 2013. *Historical linguistics*. Edinburgh University Press.",
1603
+ "Joseph Felsenstein. 1973. Maximum likelihood and minimum-steps methods for estimating evolutionary trees from data on discrete characters. Systematic Biology, 22(3):240-249.",
1604
+ "Joseph Felsenstein. 1981. Evolutionary trees from DNA sequences: A maximum likelihood approach. Journal of Molecular Evolution, 17:368-376.",
1605
+ "Nick Goldman, Jon P Anderson, and Allen G Rodrigo. 2000. Likelihood-based tests of topologies in phylogenetics. Systematic Biology, 49(4):652-670.",
1606
+ "Joseph H Greenberg. 1963. The languages of Africa. International Journal of American Linguistics.",
1607
+ "Joseph H Greenberg. 1971. The Indo-Pacific hypothesis. Current Trends in Linguistics, 8:807-871.",
1608
+ "Joseph H Greenberg. 1987. Language in the Americas. Stanford University Press.",
1609
+ "Joseph H Greenberg. 2000. Indo-European and its closest relatives: The Eurasiatic language family, volume 1, grammar, volume 1. Stanford University Press.",
1610
+ "Joseph H Greenberg. 2005. Genetic linguistics: Essays on theory and method. OUP Oxford.",
1611
+ "John P Huelsenbeck and JJ Bull. 1996. A likelihood ratio test to detect conflicting phylogenetic signal. Systematic Biology, 45(1):92-98.",
1612
+ "John P Huelsenbeck, David M Hillis, and Rasmus Nielsen. 1996. A likelihood-ratio test of monophyly. Systematic Biology, 45(4):546-558.",
1613
+ "Gerhard Jäger. 2015. Support for linguistic macrofamilies from weighted sequence alignment. Proceedings of the National Academy of Sciences, 112(41):12752-12757.",
1614
+ "Gerhard Jäger. 2018. Global-scale phylogenetic linguistic inference from lexical resources. Scientific Data, 5(1).",
1615
+ "Gerhard Jäger. 2019. Computational Historical Linguistics. Theoretical Linguistics, 45(3-4):151-182.",
1616
+ "Gerhard Jäger. 2022. Bayesian Phylogenetic Cognate Prediction. In Proceedings of the 4th Workshop on Research in Computational Linguistic Typology and Multilingual NLP, pages 63-69, Seattle, Washington. Association for Computational Linguistics."
1617
+ ],
1618
+ "bbox": [
1619
+ 510,
1620
+ 85,
1621
+ 884,
1622
+ 920
1623
+ ],
1624
+ "page_idx": 9
1625
+ },
1626
+ {
1627
+ "type": "page_number",
1628
+ "text": "2568",
1629
+ "bbox": [
1630
+ 480,
1631
+ 928,
1632
+ 519,
1633
+ 940
1634
+ ],
1635
+ "page_idx": 9
1636
+ },
1637
+ {
1638
+ "type": "list",
1639
+ "sub_type": "ref_text",
1640
+ "list_items": [
1641
+ "Thomas H Jukes, Charles R Cantor, et al. 1969. Evolution of protein molecules. *Mammalian protein metabolism*, 3:21-132.",
1642
+ "Alexei Kassian, Mikhail Zhivlov, and George Starostin. 2015. Proto-Indo-European-Uralic comparison from the probabilistic point of view. Journal of Indo-European Studies, 43(3-4):301-347.",
1643
+ "Brett Kessler. 2001. The significance of word lists. Stanford.",
1644
+ "Brett Kessler. 2007. Word Similarity Metrics and Multilateral Comparison. In Proceedings of Ninth Meeting of the ACL Special Interest Group in Computational Morphology and Phonology, pages 6-14, Prague, Czech Republic. Association for Computational Linguistics.",
1645
+ "Brett Kessler. 2008. The Mathematical Assessment of Long-Range Linguistic Relationships. Language and Linguistics Compass, 2(5):821-839.",
1646
+ "Brett Kessler. 2015. Response to Kassian et al., Proto-Indo-European-Uralic comparison from the probabilistic point of view. Journal of Indo-European Studies, 43(3-4):357-367.",
1647
+ "Brett Kessler and Annukka Lehtonen. 2006. Multilateral comparison and significance testing of the Indo-Uralic question. Phylogenetic methods and the prehistory of languages, pages 33-42.",
1648
+ "Mark A Larkin, Gordon Blackshields, Nigel P Brown, R Chenna, Paul A McGettigan, Hamish McWilliam, Franck Valentin, Iain M Wallace, Andreas Wilm, Rodrigo Lopez, et al. 2007. Clustal W and Clustal X version 2.0. Bioinformatics, 23(21):2947-2948.",
1649
+ "Johann-Mattis List. 2010. SCA: Phonetic alignment based on sound classes. In European Summer School in Logic, Language and Information, pages 32-51. Springer.",
1650
+ "Johann-Mattis List. 2012. LexStat: Automatic Detection of Cognates in Multilingual Wordlists. In Proceedings of the EACL 2012 Joint Workshop of LINGVIS & UNCLH, pages 117-125, Avignon, France. Association for Computational Linguistics.",
1651
+ "Johann-Mattis List and Robert Forkel. 2021. LingPy: A Python library for historical linguistics. Version 2.6.9.",
1652
+ "Nhan Ly-Trong, Suha Naser-Khdour, Robert Lanfear, and Bui Quang Minh. 2022. AliSim: a fast and versatile phylogenetic sequence simulator for the genomic era. Molecular Biology and Evolution, 39(5):msac092.",
1653
+ "Thomas Mailund and Christian NS Pedersen. 2004. QDist—Quartet distance between evolutionary trees. Bioinformatics, 20(10):1636-1637."
1654
+ ],
1655
+ "bbox": [
1656
+ 115,
1657
+ 85,
1658
+ 485,
1659
+ 919
1660
+ ],
1661
+ "page_idx": 10
1662
+ },
1663
+ {
1664
+ "type": "list",
1665
+ "sub_type": "ref_text",
1666
+ "list_items": [
1667
+ "Lam-Tung Nguyen, Heiko A Schmidt, Arndt Von Haeseler, and Bui Quang Minh. 2015. IQ-TREE: a fast and effective stochastic algorithm for estimating maximum-likelihood phylogenies. Molecular Biology and Evolution, 32(1):268-274.",
1668
+ "Robert L Oswalt. 1970. The detection of remote linguistic relationships. Computer Studies in the Humanities and Verbal Behavior, 3(3):117-129.",
1669
+ "Simone Pompei, Vittorio Loreto, and Francesca Tria. 2011. On the accuracy of language trees. *PloS One*, 6(6):e20109.",
1670
+ "William Poser and Lyle Campbell. 2008. Language Classification: History and Methods.",
1671
+ "Taraka Rama. 2018. Similarity Dependent Chinese Restaurant Process for Cognate Identification in Multilingual Wordlists. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 271-281, Brussels, Belgium. Association for Computational Linguistics.",
1672
+ "Taraka Rama and Johann-Mattis List. 2019. An Automated Framework for Fast Cognate Detection and Bayesian Phylogenetic Inference in Computational Historical Linguistics. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6225-6235, Florence, Italy. Association for Computational Linguistics.",
1673
+ "Taraka Rama, Johann-Mattis List, Johannes Wahle, and Gerhard Jäger. 2018. Are Automatic Methods for Cognate Detection Good Enough for Phylogenetic Reconstruction in Historical Linguistics? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 393-400, New Orleans, Louisiana. Association for Computational Linguistics.",
1674
+ "Donald A Ringe. 1992. On calculating the factor of chance in language comparison. Transactions of the American Philosophical Society, 82(1):1-110.",
1675
+ "Donald A Ringe. 1996. The mathematics of 'Amerind'. Diachronica, 13(1):135-154.",
1676
+ "Donald A Ringe and Joseph F Eska. 2013. *Historical linguistics: Toward a twenty-first century reintegration*. Cambridge University Press.",
1677
+ "Robert R. Sokal and Charles Duncan Michener. 1958. A statistical method for evaluating systematic relationships. University of Kansas Science Bulletin, 38:1409-1438.",
1678
+ "Koichiro Tamura, Glen Stecher, and Sudhir Kumar. 2021. MEGA11: molecular evolutionary genetics analysis version 11. Molecular Biology and Evolution, 38(7):3022-3027."
1679
+ ],
1680
+ "bbox": [
1681
+ 510,
1682
+ 85,
1683
+ 880,
1684
+ 919
1685
+ ],
1686
+ "page_idx": 10
1687
+ },
1688
+ {
1689
+ "type": "page_number",
1690
+ "text": "2569",
1691
+ "bbox": [
1692
+ 480,
1693
+ 928,
1694
+ 519,
1695
+ 940
1696
+ ],
1697
+ "page_idx": 10
1698
+ },
1699
+ {
1700
+ "type": "text",
1701
+ "text": "Peter Turchin, Ilia Peiros, and Murray Gell-Mann. 2010. Analyzing genetic connections between languages by matching consonant classes. Journal of Language Relationship, (5 (48)):117-126.",
1702
+ "bbox": [
1703
+ 114,
1704
+ 85,
1705
+ 487,
1706
+ 137
1707
+ ],
1708
+ "page_idx": 11
1709
+ },
1710
+ {
1711
+ "type": "text",
1712
+ "text": "Edward Orlando Wiley and Bruce S Lieberman. 2011. Phylogenetics: Theory and practice of phylogenetic systematics. John Wiley & Sons.",
1713
+ "bbox": [
1714
+ 114,
1715
+ 148,
1716
+ 487,
1717
+ 187
1718
+ ],
1719
+ "page_idx": 11
1720
+ },
1721
+ {
1722
+ "type": "text",
1723
+ "text": "S. S. Wilks. 1938. The Large-Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses. The Annals of Mathematical Statistics, 9(1):60-62.",
1724
+ "bbox": [
1725
+ 114,
1726
+ 197,
1727
+ 487,
1728
+ 237
1729
+ ],
1730
+ "page_idx": 11
1731
+ },
1732
+ {
1733
+ "type": "text",
1734
+ "text": "A Analysis of Macro-Mayan and Amerind",
1735
+ "text_level": 1,
1736
+ "bbox": [
1737
+ 114,
1738
+ 250,
1739
+ 413,
1740
+ 280
1741
+ ],
1742
+ "page_idx": 11
1743
+ },
1744
+ {
1745
+ "type": "image",
1746
+ "img_path": "images/53ef3ba50779dea390c00bcea2e4d677c58fa6c1d400b3fe0a9cd6b54abd78a6.jpg",
1747
+ "image_caption": [],
1748
+ "image_footnote": [],
1749
+ "bbox": [
1750
+ 117,
1751
+ 302,
1752
+ 300,
1753
+ 399
1754
+ ],
1755
+ "page_idx": 11
1756
+ },
1757
+ {
1758
+ "type": "image",
1759
+ "img_path": "images/dc17731f06288d3a733c2fb1ae677bca5d9d2bbee4d588a172b17c66dfae5caa.jpg",
1760
+ "image_caption": [
1761
+ "(b) Turchin"
1762
+ ],
1763
+ "image_footnote": [],
1764
+ "bbox": [
1765
+ 307,
1766
+ 302,
1767
+ 490,
1768
+ 399
1769
+ ],
1770
+ "page_idx": 11
1771
+ },
1772
+ {
1773
+ "type": "image",
1774
+ "img_path": "images/4f13ce05c99a7bb5f989195d8742e9f6090722aa2c1928e6df5652ff638fdb83.jpg",
1775
+ "image_caption": [
1776
+ "(a) P1-Dolgo",
1777
+ "(c) SCA"
1778
+ ],
1779
+ "image_footnote": [],
1780
+ "bbox": [
1781
+ 117,
1782
+ 420,
1783
+ 300,
1784
+ 517
1785
+ ],
1786
+ "page_idx": 11
1787
+ },
1788
+ {
1789
+ "type": "image",
1790
+ "img_path": "images/80cbe19fe9247911461cebbdd5c015c8dd0f77340be4cf24537b16543cdc4be3.jpg",
1791
+ "image_caption": [
1792
+ "(d) LexStat"
1793
+ ],
1794
+ "image_footnote": [],
1795
+ "bbox": [
1796
+ 309,
1797
+ 422,
1798
+ 487,
1799
+ 517
1800
+ ],
1801
+ "page_idx": 11
1802
+ },
1803
+ {
1804
+ "type": "image",
1805
+ "img_path": "images/dde95fc3a2981526374725e602e99e77456619ceb4bc3eb1f47d1ed004fc060a.jpg",
1806
+ "image_caption": [
1807
+ "(a) Mayan"
1808
+ ],
1809
+ "image_footnote": [],
1810
+ "bbox": [
1811
+ 547,
1812
+ 84,
1813
+ 843,
1814
+ 277
1815
+ ],
1816
+ "page_idx": 11
1817
+ },
1818
+ {
1819
+ "type": "image",
1820
+ "img_path": "images/2057a29f7660d21194bffbc6faacc7e9e0cd39bd5b84253d515c410c9d634b05.jpg",
1821
+ "image_caption": [],
1822
+ "image_footnote": [],
1823
+ "bbox": [
1824
+ 549,
1825
+ 303,
1826
+ 843,
1827
+ 511
1828
+ ],
1829
+ "page_idx": 11
1830
+ },
1831
+ {
1832
+ "type": "image",
1833
+ "img_path": "images/b9ef2d1cbbd5dfa956520e3ca41a10ee27efdbe59ef0a63c1548c5eb601d52eb.jpg",
1834
+ "image_caption": [
1835
+ "(b) Mayan-Mixe-Zoque"
1836
+ ],
1837
+ "image_footnote": [],
1838
+ "bbox": [
1839
+ 547,
1840
+ 538,
1841
+ 843,
1842
+ 670
1843
+ ],
1844
+ "page_idx": 11
1845
+ },
1846
+ {
1847
+ "type": "image",
1848
+ "img_path": "images/2f7bef32fc8dbdddf037a2936479ccbffa830197d1435ef096be9384d8c60ade.jpg",
1849
+ "image_caption": [
1850
+ "Figure 4: Bilateral (pairwise) significance among the languages of Macro-Mayan/Amerind grouping. The yellow shade implies that the relationship is statistically significant $(p < 0.05)$ , while the purple shade implies otherwise. While moving across the diagonal, the first cluster of significantly related languages is that of Mayan, the second is that of Mixe-Zoque and the thrid, Uto-Aztecan",
1851
+ "(c) Mayan-Uto-Aztecan",
1852
+ "(d) Mayan-Mixe-Zoque-Uto-Aztecan",
1853
+ "Figure 5: Comparison of unrooted ML-trees on various groupings of Macro-Mayan/Amerind language families"
1854
+ ],
1855
+ "image_footnote": [],
1856
+ "bbox": [
1857
+ 547,
1858
+ 695,
1859
+ 843,
1860
+ 873
1861
+ ],
1862
+ "page_idx": 11
1863
+ }
1864
+ ]
2024/A Likelihood Ratio Test of Genetic Relationship among Languages/6adb42de-14e6-4986-ab30-294d02f67dca_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Likelihood Ratio Test of Genetic Relationship among Languages/6adb42de-14e6-4986-ab30-294d02f67dca_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:42bc2dac965367432c41e97f1ca54bfcf59addf0e4c8ecf8ceef73f3f2841915
3
+ size 1118120
2024/A Likelihood Ratio Test of Genetic Relationship among Languages/full.md ADDED
@@ -0,0 +1,357 @@
1
+ # A Likelihood Ratio Test of Genetic Relationship among Languages
2
+
3
+ V.S.D.S. Mahesh Akavarapu and Arnab Bhattacharya
4
+ Dept. of Computer Science and Engineering
5
+ Indian Institute of Technology Kanpur
6
+ maheshak@cse.iitk.ac.in, arnabb@cse.iitk.ac.in
7
+
8
+ # Abstract
9
+
10
+ Lexical resemblances among a group of languages indicate that the languages could be genetically related, i.e., they could have descended from a common ancestral language. However, such resemblances can arise by chance and, hence, need not always imply an underlying genetic relationship. Many significance tests based on permutation of wordlists and word similarity measures have appeared in the past to assess such relationships. We demonstrate that although existing tests may work well for bilateral comparisons, i.e., on pairs of languages, they are either infeasible by design or are prone to yield false positives when applied to groups of languages or language families. To this end, inspired by molecular phylogenetics, we propose a likelihood ratio test to determine if given languages are related, based on the proportion of invariant character sites in the aligned wordlists used during tree inference. Further, we evaluate the test on several language families and show that it solves the problem of false positives. Finally, we demonstrate that the test supports the existence of macro language families such as Nostratic and Macro-Mayan.
11
+
12
+ # 1 Introduction
13
+
14
+ Languages that descend from a common ancestral language are termed to be genetically related. The existence of lexical resemblances between the two languages is a preliminary indication that they could be related. Such resembling lexicons that truly have a common origin are called cognates. For instance, Sanskrit nama and English name are cognates that can be traced to Proto-Indo-European *h₃nómn. However, such resemblances can also occur out of sheer chance. For instance, Persian bad and behtar accidentally resemble English bad and better respectively, but are not true cognates<sup>1</sup>.
15
+
16
+ Hence, it is necessary to show statistical significance on any appropriate measure that captures the lexical relatedness before arguing for a genetic relationship among any group of languages or language families (Campbell, 2013).
17
+
18
+ Several significance tests have appeared in the past to address this problem, the majority of them based on permutation tests, starting from Oswalt (1970). Given wordlists of a group of languages to be evaluated for a genetic relationship, these tests obtain the null distribution of a measure capturing similarity between word pairs by randomly permuting the wordlists. Such tests act either bilaterally, i.e., on a pair of languages or proto-languages, or multilaterally, on a group of languages. Among these, multilateral comparison, made famous by Greenberg (1963, 1971, 1987, 2000) in traditional historical linguistics, has been a subject of much criticism (Poser and Campbell, 2008). Hence, the preferred way of comparing two language families has been to compare their reconstructed proto-forms bilaterally. However, Greenberg (2005) argues that genetic classification should precede proto-language reconstruction. Moreover, there is often a lack of agreement on reconstructed proto-forms, both in terms of phonology and semantics, which gives room for manipulation of wordlists that can, in turn, alter the results of significance tests (Kessler, 2015). Further, we demonstrate that multilateral permutation tests (Kessler and Lehtonen, 2006; Kessler, 2007) yield false negatives even after incorporating complex word similarity metrics such as SCA and LexStat (List, 2010, 2012).
19
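As a rough illustration of how such a permutation test operates, the sketch below estimates a p-value by repeatedly shuffling one wordlist to destroy any genuine slot-wise correspondence. The wordlists and the `toy_similarity` function are hypothetical toys for illustration only, not the measures used in the cited works.

```python
import random

def permutation_pvalue(words_a, words_b, similarity, n_perm=999, seed=42):
    """Estimate how likely the observed similarity between two aligned
    wordlists is under the null hypothesis of no relationship, by
    permuting the second list."""
    rng = random.Random(seed)
    observed = similarity(words_a, words_b)
    hits = 0
    for _ in range(n_perm):
        shuffled = words_b[:]
        rng.shuffle(shuffled)  # break the slot-wise alignment
        if similarity(words_a, shuffled) >= observed:
            hits += 1
    # Add-one smoothing so the estimated p-value is never exactly zero.
    return (hits + 1) / (n_perm + 1)

# Toy similarity: number of slots whose words share an initial sound.
def toy_similarity(xs, ys):
    return sum(x[0] == y[0] for x, y in zip(xs, ys))

p = permutation_pvalue(["nama", "dva", "trayas"],
                       ["name", "two", "three"], toy_similarity)
```

With only three slots the null distribution is coarse, so even a perfect-looking alignment cannot reach significance here; real tests use wordlists of 40-200 semantic slots and far richer similarity metrics (SCA, LexStat).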
+
20
+ To overcome these issues, we turn to phylogenetic analysis (Wiley and Lieberman, 2011), which is known to approximately capture ancestral states and has been applied to phonological reconstruction tasks such as proto-language and cognate
21
+
22
+ reflex prediction tasks (Jäger, 2019, 2022) with reasonably good results. Specifically, we propose a likelihood ratio test (LRT) where we expect the difference in likelihoods of the best trees under the null and alternate hypotheses to capture genetic relatedness. The null hypothesis assumes a negligible proportion of invariant sites, while the alternate hypothesis assumes a significant proportion of invariant sites. Intuitively, related languages should have more positions where a character or a sound class is invariant than unrelated languages. Hence, we essentially capture the notion of relatedness as possessing a relatively high proportion of invariant sites. Further, this test requires no reconstructed proto-forms, and at the same time the evolutionary tree structure is strictly imposed by design, unlike the multilateral model, thereby effectively circumventing the aforementioned methodological problems. Although inspired by similar tests from molecular phylogenetics, the test we propose is novel in the sense that the problem of testing common descent never arises in biology, since monogenesis is accepted as a fact therein (Kessler, 2008). We further evaluate the test on various language families and demonstrate that it does not misclassify unrelated languages as related.
23
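A minimal sketch of the likelihood-ratio machinery behind such a test, assuming maximized log-likelihoods for the two hypotheses have already been obtained from a phylogenetic inference tool, with the classical chi-square approximation of Wilks (1938) at one degree of freedom (the invariant-site proportion being the single extra free parameter). The numeric log-likelihoods below are made up for illustration; the statistic and calibration actually used in the paper may differ.

```python
import math

def likelihood_ratio_test(loglik_null, loglik_alt):
    """LRT statistic and chi-square p-value with df=1.

    loglik_null: log-likelihood of the best tree under H0
                 (negligible proportion of invariant sites).
    loglik_alt:  log-likelihood of the best tree under Ha
                 (substantial proportion of invariant sites).
    """
    delta = 2.0 * (loglik_alt - loglik_null)
    # Chi-square survival function for df=1: P(X > d) = 1 - erf(sqrt(d/2))
    p_value = 1.0 - math.erf(math.sqrt(max(delta, 0.0) / 2.0))
    return delta, p_value

# Hypothetical log-likelihoods from two tree searches:
delta, p = likelihood_ratio_test(loglik_null=-1523.4, loglik_alt=-1516.9)
# A large positive delta with a small p-value favors Ha, i.e., a
# substantial proportion of invariant sites and hence relatedness.
```

A negative or near-zero delta, as reported above for Drav-IE-Kart and the Amerind groupings, means the extra invariant-site parameter buys essentially no likelihood, so Ha is not preferred regardless of the nominal p-value.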
+
24
+ We finally show that the test supports the existence of the macro-families Nostratic (Bomhard and Kerns, 1994) and Macro-Mayan (Campbell, 1997). While such an attempt to justify the existence of macro-families using bootstrap analysis of distance-based phylogeny is found in Jäger (2015), expressing statistical significance in terms of likelihood ratio is preferred over bootstrap support values whose interpretation is debated in molecular phylogenetics (Anisimova and Gascuel, 2006).
25
+
26
+ Our contributions are summarized as follows.
27
+
28
+ - We have proposed a likelihood ratio test to determine the genetic relatedness of a group of languages based on invariant site proportions.
29
+ - We have demonstrated, by applying the test to various language sets, that it neither exhibits the problem of false positives nor requires reconstructed proto-forms, unlike previously proposed tests.
+ - We have found through the test some supporting evidence for the existence of the macro-families Nostratic and Macro-Mayan.
+
+ The rest of the paper is organized as follows. Related work is discussed in §2. The methodology of the test is presented in §3. Evaluation details, such as datasets and details of previous methods and variants, are discussed in §4. The results are discussed in §5. The application of the method to long-range comparisons is discussed in §6. The paper is concluded in §7.
+
+ # 2 Related Work
+
+ A permutation test for bilateral language comparisons was introduced by Oswalt (1970). Testing the significance of sound correspondences by brute-force probability calculation was proposed by Ringe (1992, 1996). This approach was, however, criticized for failing to show significance for known related language pairs such as Latin and English, and for admitting phonologically implausible sound correspondences (Kessler, 2001). Multilateral permutation tests were proposed by Kessler and Lehtonen (2006) and Kessler (2007). Several applications of permutation tests exist, such as Turchin et al. (2010) and Kassian et al. (2015).
+
+ Some notable likelihood ratio tests in molecular phylogenetics, mostly on topologies, include Huelsenbeck and Bull (1996); Huelsenbeck et al. (1996); Goldman et al. (2000); Anisimova and Gascuel (2006), where bootstrap analysis is argued to be suboptimal for establishing statistical significance on phylogenies. Separately, support for macro-families through bootstrap analysis of distance-based trees is shown in Jäger (2015). Comparisons of various methods of phylogenetic reconstruction, such as distance-based and binary-character-based ones, are given by Jäger (2018). Sound-class character-based phylogenetic analysis is found in Jäger (2019, 2022). Usually, Bayesian phylogenetic inference on binary cognate encodings gives good results (Rama et al., 2018; Rama and List, 2019).
+
+ Although the likelihood ratio metric is common to both past and present-day language models, the utility of this test based on invariant sites outside computational historical linguistics is unknown.
+
+ # 3 Methodology
+
+ The key concept revolves around the idea that any hypothesis, in this case a hypothesis on a phylogeny, is preferred over a competing null hypothesis if it is significantly more likely, i.e., has a higher likelihood than the latter. Given the wordlist data encoded as an aligned character matrix, related languages are expected to have a higher number of invariant columns. Thus, our null hypothesis consists of a phylogeny with a small proportion (fixed at $1\%$ ) of invariant sites, whereas the alternative hypothesis consists of a phylogeny with a larger but reasonable proportion (fixed at $6\%$ ) of invariant sites. The observed difference in their likelihoods on the real data is compared with that on data simulated from the null hypothesis through parametric bootstrapping and, accordingly, one of the hypotheses is rejected. The steps are elaborated next.
+
+ ![](images/00777638efd1bfdabc87a320a657feae8c20321468817a5119c4a0594e13c4a7.jpg)
+ Figure 1: A section of the character matrix for the Uto-Aztecan family, consisting of concatenated Multiple Sequence Alignments (MSAs) of consonant classes, one from each concept.
+
+ # 3.1 Character Matrix
+
+ The wordlists of a given group of languages, as mentioned previously, are encoded in the form of a character matrix. It consists of concatenated aligned words per concept, i.e., meaning. Thus, each row represents a language or taxon, and each column, also referred to as a site in this paper, consists of phoneme classes, e.g., Dolgopolsky classes. Formally, let the input language set be $\{L_1,\ldots ,L_m\}$ , whose genetic relatedness is to be verified statistically. Let there be $n$ concepts $C_1,\dots,C_n$ in the wordlists. Each language $L_{i}$ should have for each concept $C_j$ a single word, say $w_{ij}$ . If a language has multiple words for a single semantic slot, only the one with the fundamental or core meaning is retained, following the recipe of Kessler (2001). For instance, if the meaning 'dull' has the words dull and unsharp, dull is of core or fundamental meaning. As another example, for the meaning 'belly', Latin venter is more fundamental than abdomen. If the choice still remains unresolved after this step, a single word is picked at random. In case a language has no word for a semantic slot, it is represented as a gap $-$ . For each concept $C_j$ and alphabet set $\mathbb{A}$ , let $W^{j}\in \mathbb{A}^{m\times l_{j}}$ represent a multiple sequence alignment (MSA) of words, where $l_{j}$ is the length, i.e., the number of phonemes with vowels removed², in each word. The final character matrix $X\in \mathbb{A}^{m\times N}$ is the concatenation of the $W^{j}$ across columns, i.e., $[W^1\dots W^n ]$ , with $N = \sum_{j = 1}^{n}l_{j}$ .
+
+ <table><tr><td>Greek_Anc</td><td>K</td><td>R</td><td>-</td><td>S</td><td>-</td></tr><tr><td>Latin</td><td>K</td><td>R</td><td>N</td><td>-</td><td>-</td></tr><tr><td>English</td><td>H</td><td>R</td><td>N</td><td>-</td><td>-</td></tr><tr><td>Sanskrit</td><td>S</td><td>R</td><td>N</td><td>K</td><td>-</td></tr></table>
+
+ Table 1: Example of a Multiple Sequence Alignment (MSA) of consonant classes for a single concept 'horn'.
+
+ For example, consider a cognate set meaning 'horn' from a few Indo-European languages, namely Ancient Greek keras, Latin cornu, English horn, and Sanskrit śṛṅga. The resultant character matrix for this single meaning is a multiple sequence alignment with vowels removed and consonants encoded as Dolgopolsky classes, as illustrated in Table 1. The final character matrix is the concatenation of such matrices across all the concepts. For an illustration of a final character matrix, see Figure 1, which was generated by MEGA11 (Tamura et al., 2021). In general, multiple sequence alignment is a fundamental step in several state-of-the-art methods in computational historical linguistics (Akavarapu and Bhattacharya, 2023, 2024).
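For concreteness, the construction of the character matrix can be sketched as follows. The consonant-class mapping and the toy, pre-aligned words are illustrative assumptions (only a handful of classes are shown, and real alignments come from an MSA tool, cf. §4.3), not the exact recipe used in our experiments.

```python
# Sketch of Section 3.1: per-concept MSAs of consonant classes are
# concatenated column-wise into one character matrix, one row per language.
DOLGO = {"k": "K", "c": "K", "g": "K", "h": "H", "r": "R",
         "n": "N", "m": "N", "s": "S", "t": "T", "d": "T"}  # partial, assumed
VOWELS = set("aeiou")

def to_classes(word):
    """Strip vowels and map the remaining consonants to sound classes."""
    return [DOLGO.get(ch, "-") for ch in word if ch not in VOWELS]

def concat_msas(msas, languages):
    """Concatenate per-concept alignments into one character matrix.

    msas: list of dicts {language: list of aligned class symbols};
    a language missing a concept contributes an all-gap row.
    """
    matrix = {lang: [] for lang in languages}
    for msa in msas:
        length = max(len(row) for row in msa.values())
        for lang in languages:
            row = msa.get(lang, [])
            matrix[lang].extend(row + ["-"] * (length - len(row)))
    return {lang: "".join(sites) for lang, sites in matrix.items()}

langs = ["Latin", "English"]
# One concept ('horn'), already trivially "aligned" for this sketch:
horn = {"Latin": to_classes("cornu"), "English": to_classes("horn")}
X = concat_msas([horn], langs)
print(X)  # {'Latin': 'KRN', 'English': 'HRN'}
```

With more concepts, further per-concept dicts are appended to the list and their aligned columns are concatenated, exactly as $[W^1\dots W^n]$ above.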
+
+ # 3.2 Substitution Model
+
+ A substitution model describes the evolution of a character at a site assuming a Markovian process. Various substitution models have been described for various alphabets such as nucleotides, amino acids, etc. In this paper, we assume the simplest possible model, in which substitution rates are equal between all pairs of distinct characters. The resultant model is known as the Jukes-Cantor model (Jukes et al., 1969) in the case of nucleotide substitutions and as Poisson (Bishop and Friday, 1987) in the case of amino-acid substitutions. Formally, let the alphabet $\mathbb{A}$ contain $K$ characters. An element $q_{ij}$ of the $K \times K$ rate matrix $Q$ , which denotes the rate at which character $i$ mutates to character $j$ , is defined as follows:
+
+ $$
72
+ q _ {i j} = \mu \cdot \pi_ {i}, i \neq j \text {(e q u a l r a t e s)} \tag {1}
73
+ $$
+
+ where $\pi_{i}$ denotes the frequency of character $i$ at the site and $\mu$ is the rate of mutation. The diagonal elements satisfy the normalization constraint:
+
+ $$
78
+ q _ {i i} = - \sum_ {j \neq i} q _ {i j} \tag {2}
79
+ $$
+
+ The probability of transition $i\rightarrow j$ in time $t$ is given by the matrix $P(t) = \{p_{ij}\} = e^{Qt}$ . The likelihood of an evolutionary tree with topology $T$ can thus be calculated from the substitution matrix, with the branch lengths $V$ denoting time.
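Under uniform character frequencies, $e^{Qt}$ for this equal-rates model has a closed form analogous to the Jukes-Cantor solution, which the following sketch illustrates; the alphabet size, rate, and time are arbitrary example values.

```python
import math

def transition_prob(i, j, t, K, alpha):
    """p_ij(t) of P(t) = exp(Qt) for the equal-rates model with
    q_ij = alpha for i != j and q_ii = -(K - 1) * alpha.
    Closed form analogous to the Jukes-Cantor solution."""
    decay = math.exp(-K * alpha * t)
    if i == j:
        return 1.0 / K + (K - 1) / K * decay
    return 1.0 / K - decay / K

K, alpha, t = 20, 0.05, 1.5   # e.g. a 20-letter sound-class alphabet
row_sum = sum(transition_prob(0, j, t, K, alpha) for j in range(K))
print(round(row_sum, 10))  # 1.0 -- each row of P(t) is a distribution
```

At $t = 0$ the matrix is the identity, and as $t \to \infty$ every entry decays to the uniform value $1/K$, matching the intuition that an unbounded amount of change erases the signal at a site.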
+
+ # 3.3 Maximum Likelihood Tree (ML-tree)
+
+ For any phylogenetic tree with topology $T$ , branch lengths $V$ , and other parameters such as the shape parameter of rate heterogeneity and the proportion of invariant sites, collectively denoted by $\Theta$ , and with the observed data, i.e., the character matrix $X$ , the likelihood is defined as the product of likelihoods at each site, assuming independence for simplicity:
+
+ $$
88
+ \mathcal {L} (T, V, \Theta | X) = \prod_ {i = 1} ^ {N} P \left(X _ {i} | T, V, \Theta\right) \tag {3}
89
+ $$
+
+ The site-independence assumption also restricts the number of parameters. Given the limited amount of data, which is restricted to wordlists of 100-200 items, this is thus more suitable. Complex models, such as bigram-based ones, may be employed if sufficient data is available.
+
+ The parameters that maximize the likelihood, $\hat{T},\hat{V}$ and $\hat{\Theta}$ define the maximum likelihood tree which is usually obtained by heuristic search in the parameter space. Typically, a tree is initialized either randomly or by some heuristic means, and from there, the tree space is explored through tree modifying operations to get the "best" tree. For a given tree, likelihood is computed using the well-known Felsenstein's pruning algorithm from phylogenetics (Felsenstein, 1973, 1981).
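Felsenstein's pruning algorithm can be sketched for a single site on a toy three-taxon tree as follows. The alphabet size, rate, branch lengths, and observed states are illustrative assumptions, and the closed-form equal-rates transition probability of §3.2 (with uniform frequencies) is reused.

```python
import math

K = 4  # toy alphabet size

def p_trans(i, j, t, alpha=0.1):
    """Equal-rates transition probability (closed form, uniform frequencies)."""
    decay = math.exp(-K * alpha * t)
    return (1.0 / K + (K - 1) / K * decay) if i == j else (1.0 / K - decay / K)

def partial(node, site):
    """Felsenstein's pruning: conditional likelihoods L_v[x] of the data
    below node v given state x at v. A leaf is a taxon name (str); an
    internal node is a pair of (child, branch_length) pairs."""
    if isinstance(node, str):
        obs = site[node]
        return [1.0 if x == obs else 0.0 for x in range(K)]
    (left, t_l), (right, t_r) = node
    L_l, L_r = partial(left, site), partial(right, site)
    return [
        sum(p_trans(x, y, t_l) * L_l[y] for y in range(K))
        * sum(p_trans(x, y, t_r) * L_r[y] for y in range(K))
        for x in range(K)
    ]

def site_likelihood(tree, site):
    root = partial(tree, site)
    return sum(root) / K  # uniform distribution over root states

# Tree ((A, B), C) with branch lengths, and one observed site
tree = (((("A", 0.1), ("B", 0.1)), 0.2), ("C", 0.3))
lik = site_likelihood(tree, {"A": 0, "B": 0, "C": 1})
print(lik)
```

The per-site values are multiplied (in practice, their logarithms summed) over all sites to obtain Eq. (3).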
+
+ # 3.4 Invariant Sites
+
+ Invariant sites are those sites that are constant or evolve very slowly. These can be estimated through a maximum likelihood search along with the other parameters. The proportion of invariant sites, $P_{inv}$ , may be known beforehand or estimated. Given the invariant sites, the likelihood defined in §3.3 is the product of likelihoods across the variant sites only.
+
+ Our observation is that estimated $P_{inv}$ is higher ( $>0.06$ ) among related languages while lower ( $\approx 0.01$ ) among (possibly) unrelated languages. Based on this observation and preliminaries, we now describe the likelihood ratio test.
+
+ # 3.5 Likelihood Ratio Test (LRT)
+
+ Given a null hypothesis $H_0$ and a competing alternative hypothesis $H_{a}$ , the latter is preferred if it is more likely than the former i.e., $\mathcal{L}_{H_a} > \mathcal{L}_{H_0}$ . In our case, the hypotheses consist of respective phylogenetic tree parameters estimated for ML-trees, i.e., $H_0$ consists of $\hat{T}_0,\hat{V}_0,\hat{\Theta}_0$ and $H_{a}$ consists of $\hat{T}_a,\hat{V}_a,\hat{\Theta}_a$ . The likelihood ratio test defines the following metric to decide whether to reject the null hypothesis:
+
+ $$
108
+ \delta = 2 \cdot \ln \left(\frac {\mathcal {L} \left(\hat {T} _ {a} , \hat {V} _ {a} , \hat {\Theta} _ {a}\right)}{\mathcal {L} \left(\hat {T} _ {0} , \hat {V} _ {0} , \hat {\Theta} _ {0}\right)}\right) \tag {4}
109
+ $$
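Since likelihoods over hundreds of sites underflow, Eq. (4) is computed from log-likelihoods in practice; the values below are made up for illustration.

```python
def lrt_statistic(loglik_alt, loglik_null):
    """delta = 2 * ln(L_a / L_0), computed from log-likelihoods directly
    to avoid underflow on products over hundreds of sites."""
    return 2.0 * (loglik_alt - loglik_null)

# Made-up log-likelihoods of the best trees under H_a and H_0
delta = lrt_statistic(-4321.7, -4334.8)
print(round(delta, 1))  # 26.2 -> positive: H_a (more invariant sites) preferred
```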
+
+ The Likelihood Ratio Test (LRT) metric $\delta$ was shown to asymptotically follow a chi-squared distribution under the null hypothesis, with degrees of freedom $p - q$ , where $p$ and $q$ are respectively the numbers of free parameters in the alternate and the null hypotheses (Wilks, 1938). However, it has been argued that this may not hold in general for phylogenetic problems due to the discrete nature of tree topology (see Huelsenbeck and Bull (1996); Huelsenbeck et al. (1996); Anisimova and Gascuel (2006) for relevant work). As a result, the distribution of $\delta$ is determined by a parametric bootstrapping method, where it is measured on data simulated with the parameters estimated under the null hypothesis $H_0$ , i.e., using $\hat{T}_0$ , $\hat{V}_0$ and $\hat{\Theta}_0$ .
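The comparison of observed and bootstrap statistics described further below reduces to a one-sided paired t-test. A dependency-free sketch follows; the $\delta$ samples are made-up numbers, and the t statistic is compared against the one-sided 5% critical value for $k - 1 = 14$ degrees of freedom rather than computing an exact p-value.

```python
import math

def paired_t_statistic(obs, boot):
    """t statistic of the one-sided paired t-test for
    H1: E[observed delta] > E[bootstrap delta]."""
    assert len(obs) == len(boot) and len(obs) > 1
    diffs = [o - b for o, b in zip(obs, boot)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Made-up delta samples from k = 15 heuristic searches (illustrative only)
observed = [9.1, 8.7, 9.6, 9.0, 8.9, 9.4, 9.2, 8.8, 9.3, 9.1,
            8.6, 9.5, 9.0, 9.2, 8.9]
bootstrap = [0.4, -0.2, 0.1, 0.3, -0.1, 0.2, 0.0, 0.5, -0.3, 0.1,
             0.2, -0.4, 0.3, 0.0, 0.1]
t = paired_t_statistic(observed, bootstrap)
T_CRIT_05_DF14 = 1.761   # one-sided 5% critical value, df = 14
print(t > T_CRIT_05_DF14)  # True -> reject H_0: languages likely related
```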
+
+ As mentioned in §3.4, we propose the LRT to test the relatedness of a group of languages using varying proportions of invariant sites. In other words, the null hypothesis $H_0$ assumes an invariant-site proportion $P_{inv}^0$ and the alternate hypothesis $H_a$ assumes $P_{inv}^a$ , where $P_{inv}^0 < P_{inv}^a$ , as per the observations discussed in §3.4.
+
+ The typical way of obtaining the distribution of $\delta$ under $H_0$ involves finding the parameters $\{\hat{T}_0, \hat{V}_0, \hat{\Theta}_0\}$ and $\{\hat{T}_a, \hat{V}_a, \hat{\Theta}_a\}$ for the best trees respectively under $H_0$ and $H_a$ , along with the observed $\delta$ , say $\hat{\delta}$ . Further, several, say $k$ , bootstrap replicates are generated from the topology, branch lengths, and other parameters defined by $\{\hat{T}_0, \hat{V}_0, \hat{\Theta}_0\}$ , i.e., assuming $H_0$ . Next, the maximum likelihood search is run again on these replicates to obtain several samples of $\delta$ , say $\{\delta_1,\dots ,\delta_k\}$ . However, we found considerable variation in $\hat{\delta}$ , since the maximum likelihood search is only a heuristic and is affected by initialization. As a result, we obtain several samples of $\hat{\delta}$ , say $\{\hat{\delta}_1,\dots ,\hat{\delta}_k\}$ , by running the search $k$ times; based on the null parameters, a single bootstrap replicate is generated for each search to consequently obtain $\{\delta_1,\dots ,\delta_k\}$ for the corresponding $k$ searches. Finally, the $p$ -value for $\mathbb{E}[\delta ] < \mathbb{E}[\hat{\delta} ]$ is obtained by a one-sided paired t-test. If the $p$ -value is less than a threshold (usually 0.05), we conclude that $H_{a}$ may hold or, in other words, that at least a proportion $P_{inv}^{a}$ of the sites is significantly invariant and, thus, that the languages under consideration are likely to be related.
+
+ # 4 Experimental Setup
+
+ This section discusses the details of the experiments, including datasets, baseline models, and implementation details.
+
+ # 4.1 Datasets
+
+ The data for evaluating the tests consists of wordlists from multiple language (sub-)families and their combinations. Combinations of related sub-families serve as positive examples, while combinations of unrelated ones serve as negative examples. The evaluation of macro-families also involves language groups whose relationship is only distantly suggested, such as Nostratic (Bomhard and Kerns, 1994).
+
+ The details of the data from each family are shown in Table 2. Out of these, Mon-Khmer and Munda (200 wordlists) are extracted from the Austro-Asiatic data of Rama et al. (2018). Data for the old languages of Nostratic, comprising Indo-European, Dravidian, and Kartvelian, are prepared by us from the Swadesh 200-wordlists available at Wiktionary<sup>3</sup>. Data for all the other families are obtained from Rama (2018), which were, in turn, collected from various publicly available sources. The datasets are the same as those found in related tasks such as automated cognate detection and proto-language reconstruction.
+
+ <table><tr><td>Family</td><td>Abbrv.</td><td>Languages</td><td>Concepts</td><td>Words</td></tr><tr><td>Afrasian</td><td>AfA</td><td>21</td><td>39</td><td>770</td></tr><tr><td>Dravidian</td><td>Drav</td><td>4</td><td>183</td><td>716</td></tr><tr><td>Indo-European</td><td>IE</td><td>12</td><td>185</td><td>2209</td></tr><tr><td>Kartvelian</td><td>Kart</td><td>1</td><td>180</td><td>180</td></tr><tr><td>Lolo-Burmese</td><td>LoBur</td><td>15</td><td>39</td><td>565</td></tr><tr><td>Mayan</td><td>May</td><td>30</td><td>94</td><td>2667</td></tr><tr><td>Mixe-Zoque</td><td>MZ</td><td>10</td><td>94</td><td>905</td></tr><tr><td>Mon-Khmer</td><td>MKh</td><td>9</td><td>199</td><td>1701</td></tr><tr><td>Mon-Khmer</td><td>MKh</td><td>16</td><td>94</td><td>1332</td></tr><tr><td>Munda</td><td>Mun</td><td>4</td><td>199</td><td>759</td></tr><tr><td>Uto-Aztecan</td><td>UAz</td><td>9</td><td>94</td><td>803</td></tr></table>
+
+ Table 2: Language families considered in this study.
+
+ <table><tr><td>Family</td><td>Abbrv.</td><td>Languages</td><td>Concepts</td><td>Words</td></tr><tr><td>Austro-Asiatic</td><td>AA</td><td>58</td><td>200</td><td>11001</td></tr><tr><td>Austronesian</td><td>AN</td><td>45</td><td>210</td><td>8309</td></tr><tr><td>Indo-European</td><td>IE</td><td>42</td><td>208</td><td>8478</td></tr><tr><td>Pama-Nyungan</td><td>PN</td><td>67</td><td>183</td><td>11503</td></tr><tr><td>Sino-Tibetan</td><td>ST</td><td>64</td><td>110</td><td>6762</td></tr></table>
+
+ Table 3: Language family datasets for tree construction.
+
+ In the Nostratic grouping, we considered the languages that are surviving or have surviving descendants and were attested by the 10th century CE. The motivation behind this choice is that older languages should be closer to the ancestral language and to each other, if at all there is any relationship. Several languages, including the literary Dravidian languages, Georgian, and Armenian, are mostly conservative and deviate little from their old forms. The data is pre-processed by excluding motivated word forms, including onomatopoeia and nursery forms, as listed in Kessler (2001). Short forms, i.e., words consisting of a single syllable, are also excluded. Such cleaning is necessary to avoid the appearance of spurious relationships. In the case of Nostratic, we were also careful to exclude borrowings by tracing etymologies from Wiktionary<sup>3</sup>. This step could not be extended to other language families due to a lack of readily available etymological information.
+
+ All the methods employed in this work, including both the proposed one and the baselines described in §4.2, involve the construction of a phylogenetic tree. Hence, we also compare the methods on a tree construction task, where we see how well the inferred trees match the gold-standard trees wherever available. The data for this task is taken from Rama et al. (2018) and is summarized in Table 3.
+
+ # 4.2 Multilateral Permutation Test
+
+ As mentioned in §1, most previous methods compare languages bilaterally, i.e., a pair at a time. As a result, the only possible way to compare language families in this approach is to compare their reconstructed proto-languages. However, the proto-forms of a proto-language are often not universally agreed upon, which leaves considerable room for manipulation that can affect the results (Kessler, 2015). An alternate solution to determine the significance of the relationship among multiple languages was proposed by Kessler and Lehtonen (2006) and Kessler (2007), who employ a permutation test based on multilateral comparison. This has been well received in historical linguistics (Ringe and Eska, 2013).
+
+ The test is based on nearest-neighbour hierarchical clustering, where at each step the two closest clusters are merged into one. The basic distance measure, $\hat{d}(A,B)$ , between any two clusters $A$ and $B$ is the average of distances between all possible pairs of languages in these clusters, i.e.,
+
+ $$
152
+ \hat {d} (A, B) = \frac {1}{| A | \cdot | B |} \sum_ {a \in A} \sum_ {b \in B} d (a, b) \tag {5}
153
+ $$
+
+ where the distance $d(a,b)$ between any two languages $a$ and $b$ is the mean distance between the pairs of words over all concepts. Following the notations of $\S 3.1$ where $w_{aj}$ and $w_{bj}$ are words in languages $a$ and $b$ respectively from concept $C_j$ ,
+
+ $$
158
+ d (a, b) = \frac {\sum_ {C _ {j} , w _ {a j} \neq \emptyset , w _ {b j} \neq \emptyset} d \left(w _ {a j} , w _ {b j}\right)}{\left| \left\{C _ {j}: w _ {a j} \neq \emptyset , w _ {b j} \neq \emptyset \right\} \right|} \tag {6}
159
+ $$
+
+ Taking an average over all languages essentially enforces multilateral comparison, i.e., multiple languages are considered equally in computing the outcome. Further, the algorithm thus described is the same as the UPGMA tree construction method (Sokal and Michener, 1958), where at any bifurcating node a uniform rate of evolution is assumed across daughter clades. The final similarity metric $\hat{s}(A,B)$ is determined by the following statistic, computed from random permutations of words within each taxon, which yield random distances $d(A,B)$ :
+
+ $$
164
+ \hat {s} (A, B) = \frac {\mathbb {E} [ d (A , B) ] - \hat {d} (A , B)}{\mathbb {E} [ d (A , B) ]} \tag {7}
165
+ $$
+
+ The $p$ -value of two language clusters $A$ and $B$ is the frequency of the event $\hat{d}(A, B) \geq d(A, B)$ relative to the total number of random permutations. Language clusters $A$ and $B$ are considered to be related if the $p$ -value is less than 0.05. The given languages are termed related if the final two clusters that are merged at the root are related (Kessler and Lehtonen, 2006).
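The core loop of such a permutation test can be sketched as follows, using a P1-Dolgo-style binary distance (match of the initial consonant class) and a single permuted cluster; the class-encoded wordlists are invented for illustration, and only one side is shuffled to keep the sketch short.

```python
import random

def p1_distance(w1, w2):
    """Binary P1-style distance: 0 if the initial sound classes match."""
    return 0.0 if w1 and w2 and w1[0] == w2[0] else 1.0

def cluster_distance(A, B):
    """Mean pairwise language distance (Eq. 5); a language is a list of
    class-encoded words indexed by concept (assumed fully attested here)."""
    total, count = 0.0, 0
    for a in A:
        for b in B:
            pairs = [(x, y) for x, y in zip(a, b) if x and y]
            total += sum(p1_distance(x, y) for x, y in pairs) / len(pairs)
            count += 1
    return total / count

def permutation_p_value(A, B, trials=1000, seed=0):
    """Frequency with which concept-shuffled data looks at least as close."""
    rng = random.Random(seed)
    observed = cluster_distance(A, B)
    hits = 0
    for _ in range(trials):
        shuffled = [rng.sample(lang, len(lang)) for lang in B]
        if cluster_distance(A, shuffled) <= observed:
            hits += 1
    return hits / trials

# Toy class-encoded wordlists over 6 concepts (invented for illustration)
A = [["KR", "TN", "SM", "PK", "RT", "NS"],
     ["KR", "TN", "SM", "PK", "RT", "NT"]]
B = [["KT", "TM", "SK", "PN", "RS", "NK"]]
print(permutation_p_value(A, B))  # small: initial classes align by concept
```

In the full procedure this p-value is evaluated at every merge of the hierarchical clustering, with the decision taken at the final merge at the root.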
+
+ Kessler (2007) ran this test using various word similarity metrics, which give largely similar results.
+
+ Among these metrics, we ran P1-Dolgo, a binary metric that determines whether the consonant classes of the words' initial consonants match. Additionally, we employ the binary similarity measure introduced by Turchin et al. (2010) to test the significance of the Altaic family, where the first two consonant classes are considered. We further test the continuous word distances introduced by List (2010) (SCA) and List (2012) (LexStat), which are based on sequence alignment techniques introduced in the context of automated cognate detection.
+
+ # 4.3 Implementation
+
+ We mapped the consonant classes to the protein alphabet, since phylogenetic software expects input as either nucleotide or amino-acid sequences. Moreover, most of the amino-acid letters and Dolgopolsky classes are identical. There is only one exception, namely 'J', which is absent in the former but present in the latter; it is hence simply replaced with 'I', which is in turn absent in the Dolgopolsky classes. The multiple sequence alignments are obtained from CLUSTALW2 (Larkin et al., 2007), while the best trees and their corresponding likelihoods are computed using IQTREE (Nguyen et al., 2015). As described in §3.4 and §3.5, the proportions of invariant sites $P_{inv}^{0}$ and $P_{inv}^{a}$ are set to 0.01 and 0.06 respectively for the null $(H_0)$ and alternate $(H_a)$ hypotheses. The parametric bootstrap replicates are generated using AliSim (Ly-Trong et al., 2022), an extension of IQTREE. To replicate the data as closely as possible, gaps present in the original character matrices are retained in the replicates. We calculate the p-value based on a sample size of $k = 15$ ; the outcomes are observed to be stable beyond this size. The word similarity metrics used in the baseline models are computed using LingPy (List and Forkel, 2021). For the phylogenetic tree construction task, MEGA11 (Tamura et al., 2021) was used to deduce the maximum likelihood tree (ML-tree) with the aforementioned model and an additional gamma rate heterogeneity parameter with two distinct rates whose shape is estimated. We name this method ML-P+I+G2.
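The remapping of the character matrix onto the protein alphabet amounts to a single-character substitution, which can be sketched as:

```python
# Map Dolgopolsky-class rows onto the amino-acid alphabet expected by
# phylogenetic software: only 'J' (absent among the 20 amino-acid letters)
# needs remapping, and 'I' (absent among the classes) stands in for it.
AMINO = set("ACDEFGHIKLMNPQRSTVWY")

def to_protein_alphabet(row):
    out = row.replace("J", "I")
    assert set(out) <= AMINO | {"-"}, "unexpected symbol in character matrix"
    return out

print(to_protein_alphabet("KRJ-S"))  # KRI-S
```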
+
+ The generalized quartet distances (GQD) (Pompei et al., 2011) between the predicted and the gold trees are computed from quartet distances obtained using qdist (Mailund and Pedersen, 2004). The quartet distance between two trees measures the number of four-leaf subsets that have dissimilar topologies. Unlike biological phylogenetic trees, language trees are often multifurcated. Hence, GQD excludes penalties over the order of bifurcations. The code and relevant data have been made publicly available<sup>4</sup>. Further implementation details can be found in the README.md therein.
+
+ # 5 Results
+
+ <table><tr><td>Method</td><td>MKh</td><td>Mun</td><td>MKh-Mun</td><td>IE</td><td>Drav</td><td>May</td><td>MZ</td><td>UAz</td><td>MKh-May</td><td>MKh-UAz</td><td>AfA-LoBur</td></tr><tr><td>Related</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✗</td><td>✗</td><td>✗</td></tr><tr><td>P1-Dolgo</td><td>0.123(&lt;0.001)</td><td>0.243(&lt;0.001)</td><td>0.080(&lt;0.001)</td><td>0.071(&lt;0.001)</td><td>0.440(&lt;0.001)</td><td>0.228(&lt;0.001)</td><td>0.412(&lt;0.001)</td><td>0.572(&lt;0.001)</td><td>0.007(&lt;0.001)</td><td>0.005(0.063)</td><td>0.017(&lt;0.001)</td></tr><tr><td>Turchin</td><td>0.019(&lt;0.001)</td><td>0.124(&lt;0.001)</td><td>0.019(&lt;0.001)</td><td>0.028(&lt;0.001)</td><td>0.292(&lt;0.001)</td><td>0.126(&lt;0.001)</td><td>0.256(&lt;0.001)</td><td>0.402(&lt;0.001)</td><td>0.003(&lt;0.001)</td><td>0.003(0.005)</td><td>0.004(&lt;0.001)</td></tr><tr><td>LexStat</td><td>0.065(&lt;0.01)</td><td>0.138(&lt;0.01)</td><td>0.048(&lt;0.01)</td><td>0.036(&lt;0.01)</td><td>0.197(&lt;0.01)</td><td>0.129(&lt;0.01)</td><td>0.244(&lt;0.01)</td><td>0.306(&lt;0.01)</td><td>0.028(&lt;0.01)</td><td>0.018(&lt;0.01)</td><td>0.033(&lt;0.01)</td></tr><tr><td>SCA</td><td>0.087(&lt;0.01)</td><td>0.187(&lt;0.01)</td><td>0.074(&lt;0.01)</td><td>0.056(&lt;0.01)</td><td>0.296(&lt;0.01)</td><td>0.177(&lt;0.01)</td><td>0.304(&lt;0.01)</td><td>0.400(&lt;0.01)</td><td>0.015(&lt;0.01)</td><td>0.006(&lt;0.01)</td><td>0.031(&lt;0.01)</td></tr><tr><td>LRT</td><td>9.205(&lt;0.001)</td><td>1.58(&lt;0.001)</td><td>14.18(&lt;0.001)</td><td>26.154(&lt;0.001)</td><td>1.78(&lt;0.001)</td><td>68.212(&lt;0.001)</td><td>7.192(&lt;0.001)</td><td>10.448(&lt;0.001)</td><td>-14.359(0.280)</td><td>-12.188(0.065)</td><td>-10.768(0.979)</td></tr></table>
+
+ Table 4: Significance testing on various existent and non-existent families. The values indicate the similarity measure $\hat{s}$ in the case of permutation tests; in the case of LRT, they indicate the mean of the statistic $\hat{\delta}$ . Values in parentheses indicate p-values. False positives are marked in red.
+
+ The primary results of the paper are tabulated in Table 4, where the results of LRT (last row) are compared with those of the multilateral permutation tests. Except for LRT, the column 'Method' indicates the distance metric employed in the permutation test. The row 'Related' indicates the current consensus about the relatedness of the language families. For the permutation tests, the values indicate the similarity metric $\hat{s}$ defined in Eq. (7), as measured at the root. For LRT, the values indicate the mean of the observed $\hat{\delta}$ (see §3.5). The p-values are indicated in parentheses, with the standard threshold of 0.05 assumed. Please refer to Table 2 and Table 3 for the abbreviations of the various language families.
+
+ One can observe that false positives, indicated in red, are absent for LRT, in contrast with the multilateral permutation tests, which exhibit false positives in all cases (except P1-Dolgo for MKh-UAz). However, we note that the similarity scores of the Turchin measure are consistently small $(< 0.005)$ for negatives, irrespective of the significance implied by the p-value. Hence, Turchin could be a good measure for permutation tests when similarity scores are taken into consideration.
+
+ Further, one can observe from Table 4 that the mean $\hat{\delta}$ values are small for valid families such as Mun and Drav. This has to do with the fact that the data for these families consists of a lower number of taxa (see Table 2). Hence, although the $\hat{\delta}$ measure need not imply strength, its sign implies which hypothesis is to be preferred: the one with a larger proportion of invariant sites in case of a positive value, and the one with a smaller proportion of invariant sites in case of a negative value.
+
+ <table><tr><td>Method</td><td>AA</td><td>AN</td><td>IE</td><td>PN</td><td>ST</td><td>Avg</td></tr><tr><td>P1-Dolgo</td><td>0.060</td><td>0.208</td><td>0.033</td><td>0.175</td><td>0.188</td><td>0.133</td></tr><tr><td>Turchin</td><td>0.069</td><td>0.195</td><td>0.058</td><td>0.175</td><td>0.275</td><td>0.154</td></tr><tr><td>LexStat</td><td>0.051</td><td>0.178</td><td>0.020</td><td>0.164</td><td>0.096</td><td>0.102</td></tr><tr><td>SCA</td><td>0.049</td><td>0.119</td><td>0.025</td><td>0.166</td><td>0.087</td><td>0.089</td></tr><tr><td>ML-P+I+G2</td><td>0.026</td><td>0.065</td><td>0.033</td><td>0.145</td><td>0.125</td><td>0.079</td></tr></table>
+
+ Table 5: Comparison of the methods on the phylogenetic tree construction task, provided as GQD scores. The best results are in bold.
+
+ # 5.1 Tree Construction
+
+ As mentioned in §4.1, both methods output a tree, and, therefore, the methods have been evaluated on the tree construction task. The purpose of this task is to ensure that the proposed methods indeed have a good sense of phylogenetic inference and are, hence, appropriate for carrying out significance tests over phylogenies. The results are tabulated in Table 5. Comparing with the mean scores of state-of-the-art language phylogeny inference methods on this data, ML-P+I+G2 (0.079) is a few steps behind the Bayesian inferred tree (0.066) (Rama et al., 2018) and the maximum a posteriori tree (0.051) (Rama and List, 2019). Hence, it can be concluded that consonant-class-based character matrix encoding is almost as good as cognate-based binary character matrix encoding, while probabilistic methods based on character matrices are superior to distance-based methods for this task. Among the distance-based approaches, the one with the SCA metric performs best. A similar situation was observed in Rama et al. (2018) and Rama and List (2019), where SCA-based cognates yield the best performance. However, it should be noted that the SCA and LexStat-based measures yield false positives in significance testing (Table 4) despite their performance on this task.
+
+ <table><tr><td>Method</td><td>Drav-IE</td><td>Drav-IE-Kart</td><td>May-MZ</td><td>May-UAz</td><td>May-MZ-UAz</td></tr><tr><td>P1-Dolgo</td><td>0.046(&lt;0.001)</td><td>0.038(&lt;0.001)</td><td>0.033(&lt;0.001)</td><td>0.046(&lt;0.001)</td><td>0.036(&lt;0.001)</td></tr><tr><td>Turchin</td><td>0.017(&lt;0.001)</td><td>0.002(0.197)</td><td>0.012(&lt;0.001)</td><td>0.012(&lt;0.001)</td><td>0.008(&lt;0.001)</td></tr><tr><td>LexStat</td><td>0.024(&lt;0.01)</td><td>0.014(&lt;0.01)</td><td>0.033(&lt;0.01)</td><td>0.027(&lt;0.01)</td><td>0.024(&lt;0.01)</td></tr><tr><td>SCA</td><td>0.024(&lt;0.01)</td><td>0.007(0.01)</td><td>0.019(&lt;0.01)</td><td>0.024(&lt;0.01)</td><td>0.015(&lt;0.01)</td></tr><tr><td>LRT</td><td>24.882(&lt;0.001)</td><td>0.316(&lt;0.001)</td><td>20.988(&lt;0.001)</td><td>-1.035(&lt;0.001)</td><td>-9.819(&lt;0.001)</td></tr></table>
+
+ Table 6: Results of the evaluation of macro-families. Parentheses contain p-values.
+
+ # 6 Evaluation of Macro Families
+
+ We apply the tests to groupings of a few families from proposed macro-families, namely Nostratic, Macro-Mayan, and Amerind. Under Nostratic, we test the groupings Dravidian-Indo-European (Drav-IE) and Dravidian-Indo-European-Kartvelian (Drav-IE-Kart); we test Mayan-Mixe-Zoque (May-MZ) under Macro-Mayan, and Mayan-Uto-Aztecan (May-UAz) and Mayan-Mixe-Zoque-Uto-Aztecan (May-MZ-UAz) under Amerind. The results are tabulated in Table 6. Going by the p-values, the LRT seems to support all of the mentioned families. However, the mean LRT statistic $\hat{\delta}$ is weak (negative or close to 0) for Drav-IE-Kart (Nostratic) and for May-UAz and May-MZ-UAz (Amerind). In other words, looking at Eq. (4), the alternate hypothesis $H_{a}$ , i.e., having a higher proportion of invariant sites, is not preferred. Thus, it may be concluded that LRT is a highly sensitive test, since the mere addition of a single language (Georgian) to a strongly supported group of 16 languages (Drav-IE) alters the outcome drastically. This is a desirable property, since the presence of even a single anomaly, an unrelated language in this case, can be detected. Note that other combinations within Nostratic, such as Drav-Kart or IE-Kart, are much weaker and not well supported even by the permutation test, as elaborated next.
+
+ # 6.1 Analysis of Permutation tests on Nostratic
+
+ Bilateral significances on the Nostratic grouping Drav-IE-Kart for various distance metrics are reported in Figure 2, where the pairwise relationships based on p-values (with threshold 0.05) are color-coded. The computation follows the same steps as defined in §4.2, except that distances and similarities are calculated over pairs of languages instead of language clusters. This indeed forms the first iteration of a complete multilateral test.
218
+
219
+ The languages are abbreviated in Fig. 2 as follows: Old Georgian (Ge), Old Kannada (Ka), Old Telugu (Te), Old Tamil (Ta), Old Malayalam (Ma), Ancient Greek (Gr), Old Armenian (Ar), Middle Persian (Pe), Sanskrit (Sa), Pali (Pa), Old Church Slavonic (CS), Old Irish (Ir), Latin (La), Old French (Fr), Old High German (HG), Old English (En) and Old Norse (No).
220
+
221
+ It is visible that, for each metric, languages of the same family (IE and Drav) are almost always related pairwise. Secondly, many pairs from Drav-IE appear related. However, except under LexStat, Georgian appears related to at most two languages from the Drav-IE grouping. Yet, in the permutation tests for these metrics, except for Turchin (Table 6), Drav-IE-Kart appears significantly related, sometimes even with good similarity scores (in the case of P1-Dolgo). All that can be concluded here is that, except for the LexStat metric, permutation tests are very sensitive in pairwise language comparisons and may not yield false positives there. However, if Drav-IE-Kart is to be considered a valid grouping, these tests may be said to yield false negatives.
222
+
223
+ # 6.2 Analysis of ML-trees of Nostratic
224
+
225
+ Unrooted maximum likelihood trees (ML-trees) for various sub-groupings of Nostratic are drawn in Figure 3 using MEGA11, assuming the Poisson+I model. In the IE tree (Figure 3(a)), the sub-families are recovered faithfully, reflecting established classifications, except for the position of Old Church Slavonic. For instance, the topology of the Germanic family, i.e., (Old Norse, (Old English, Old High German)), contains the valid West-Germanic branch (Old English, Old High German). Similarly, the Italo-Celtic group (Old Irish, (Latin, Old French)) is visible. One can also distinguish a clear boundary between Western and Eastern IE languages, reflecting their geographical distribution. However, the position of Old Church Slavonic, intruding into Indo-Iranian, appears problematic.
226
+
227
+ Further, the addition of the Dravidian family in Drav-IE does not alter the IE topology (Figure 3(b)). It is intriguing to note the western inclination of Dravidian given its eastern geographical location in the present day. However, this is in line with the observation of Caldwell (1875),
228
+
229
+ ![](images/abd26860e50c55760c001b84d361d4ad8fc5c0e5469f79f83226302d24aabcd9.jpg)
230
+ (a) P1-Dolgo
231
+
232
+ ![](images/3827a97e3d28e7e721cf0f5e8194cc6b786d291ef128550b5a88dbc501a81cfa.jpg)
233
+ (b) Turchin
234
+
235
+ ![](images/db94984a81c2c8e35574fad1ba2a1dc0add7fe6e80e130526d38c1b1d8dd9eb4.jpg)
236
+ (c) SCA
237
+
238
+ ![](images/d703f13869afbd8d5e6bea6f588c9f857fd915eee0fca7b9d109b228a7d76a6c.jpg)
239
+ (d) LexStat
240
+
241
+ ![](images/a1c797f14e0ee5cd69bd79bbb87bedcaec674ab1a59060fcbfbf2e35cc808ec0.jpg)
242
+ (a) IE
243
+ Figure 3: Comparison of unrooted ML-trees on various groupings of Nostratic language families
244
+
245
+ ![](images/23483a68b68089edb8270bc150e68810efc6dfd351d7f7ab42777e35961f739a.jpg)
246
+ (b) Drav-IE
247
+
248
+ ![](images/0d50c1032d6c70828c0c828841673d1f89575216a80dae39fd53e640b536da3f.jpg)
249
+ Figure 2: Bilateral (pairwise) significance among the languages of Nostratic grouping. The yellow shade implies that the relationship is statistically significant $(p < 0.05)$ , while the purple shade implies otherwise.
250
+ (c) Drav-IE-Kart
251
+
252
+ the founder of comparative Dravidian linguistics himself. Finally, the addition of Georgian invalidates the West-Germanic branch and problematically pushes Ancient Greek into the Western group (Figure 3(c)). However, much of the topology is undisturbed, and one can also notice how the languages/families located south of the Caucasus, namely Armenian, Georgian, and Dravidian, are grouped together. Overall, it may be concluded that the addition of unrelated or weakly related languages can alter the true topology.
253
+
254
+ Similar analyses for the Macro-Mayan and Amerind families are provided in Appendix A, where one can observe analogous perturbations in the topology of one family (Mayan) in the presence of others (Mixe-Zoque and Uto-Aztecan); see Fig. 5.
255
+
256
+ # 7 Conclusions
257
+
258
+ In this paper, we have presented a likelihood ratio test based on the proportions of invariant sites to determine the genetic relatedness of a group of languages. Our proposed test does not yield false positives, in contrast with previous permutation-based tests, which proved to be good only for pairwise language comparisons and not for validating a language group. By applying this test, we have found strong supporting evidence for macro-families such as Dravidian-Indo-European and Macro-Mayan (for Mayan-Mixe-Zoque), and weak evidence for Nostratic (Dravidian-Indo-European-Kartvelian) and Amerind (for Mayan-Uto-Aztecan). Through secondary analyses, we have also shown that probabilistic methods are superior to distance-based ones, based on tree construction and the correlation of topologies with geography. In this work, we did not touch upon semantic shifts, i.e., words changing meaning over time; for example, the word quick initially meant 'lively'. While considering semantic shifts may provide room for data manipulation favoring a particular hypothesis, a few semantic slots such as 'bark'-'skin' are often found to share common words. In such cases, the slots may be merged into one, as suggested by Kessler (2001).
261
+
262
+ In summary, before constructing phylogenies for a group of languages, the relatedness of the group should be established through a significance test such as the one we have presented. Otherwise, the phylogenetic grouping would not only be questionable but may also alter the topology of a related sub-group.
263
+
264
+ # Limitations
265
+
266
+ The values of $P_{inv}^{0}$ and $P_{inv}^{a}$ (§3.5) are decided roughly, based on the values estimated from two examples, namely, Afrasian-Lolo-Burmese as a negative example and Indo-European as a positive example. The question of which values make the test optimal is not addressed here. Ideally, answering it would require more data, with several positive and negative examples, to search for optimal parameter values. The exact values may also require calibration according to the phylogenetic software used, since there could be significant differences between implementations. Secondly, while analyzing Nostratic languages, Uralic, an important language family, has not been included due to the selection criterion (§4.1) that the languages should have been attested before the 10th century CE. To include Uralic, the (Nostratic) languages attested around the same period as the earliest attested Uralic languages (roughly 1300 CE onwards) should be considered, to make 'fair' comparisons.
267
+
268
+ # Ethics Statement
269
+
270
+ All the datasets are obtained from publicly available sources. Thus, there are no foreseen ethical considerations or conflicts of interest.
271
+
272
+ # References
273
+
274
+ V.S.D.S.Mahesh Akavarapu and Arnab Bhattacharya. 2023. Cognate Transformer for Automated Phonological Reconstruction and Cognate Reflex Prediction. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6852-6862, Singapore. Association for Computational Linguistics.
275
+ V.S.D.S.Mahesh Akavarapu and Arnab Bhattacharya. 2024. Automated Cognate Detection as a Supervised Link Prediction Task with Cognate Transformer. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 965-975, St. Julian's, Malta. Association for Computational Linguistics.
276
+ Maria Anisimova and Olivier Gascuel. 2006. Approximate likelihood-ratio test for branches: A fast, accurate, and powerful alternative. Systematic Biology, 55(4):539-552.
277
+ M J Bishop and A E Friday. 1987. Tetrapod relationships: The molecular evidence. Molecules and morphology in evolution: Conflict or compromise, pages 123-139.
278
+
279
+ Allan R Bomhard and John C Kerns. 1994. The Nostratic macrofamily: A study in distant linguistic relationship. De Gruyter Mouton.
280
+ Robert Caldwell. 1875. A comparative grammar of the Dravidian or South-Indian family of languages. Trübner.
281
+ Lyle Campbell. 1997. American Indian languages: The historical linguistics of Native America, volume 4. Oxford University Press, USA.
282
+ Lyle Campbell. 2013. Historical linguistics. Edinburgh University Press.
283
+ Joseph Felsenstein. 1973. Maximum likelihood and minimum-steps methods for estimating evolutionary trees from data on discrete characters. Systematic Biology, 22(3):240-249.
284
+ Joseph Felsenstein. 1981. Evolutionary trees from DNA sequences: A maximum likelihood approach. Journal of Molecular Evolution, 17:368-376.
285
+ Nick Goldman, Jon P Anderson, and Allen G Rodrigo. 2000. Likelihood-based tests of topologies in phylogenetics. Systematic Biology, 49(4):652-670.
286
+ Joseph H Greenberg. 1963. The languages of Africa. International Journal of American Linguistics.
287
+ Joseph H Greenberg. 1971. The Indo-Pacific hypothesis. Current Trends in Linguistics, 8:807-871.
288
+ Joseph H Greenberg. 1987. Language in the Americas. Stanford University Press.
289
+ Joseph H Greenberg. 2000. Indo-European and its closest relatives: The Eurasiatic language family, volume 1, grammar, volume 1. Stanford University Press.
290
+ Joseph H Greenberg. 2005. Genetic linguistics: Essays on theory and method. OUP Oxford.
291
+ John P Huelsenbeck and JJ Bull. 1996. A likelihood ratio test to detect conflicting phylogenetic signal. Systematic Biology, 45(1):92-98.
292
+ John P Huelsenbeck, David M Hillis, and Rasmus Nielsen. 1996. A likelihood-ratio test of monophyly. Systematic Biology, 45(4):546-558.
293
+ Gerhard Jäger. 2015. Support for linguistic macrofamilies from weighted sequence alignment. Proceedings of the National Academy of Sciences, 112(41):12752-12757.
294
+ Gerhard Jäger. 2018. Global-scale phylogenetic linguistic inference from lexical resources. Scientific Data, 5(1).
295
+ Gerhard Jäger. 2019. Computational Historical Linguistics. Theoretical Linguistics, 45(3-4):151-182.
296
+ Gerhard Jäger. 2022. Bayesian Phylogenetic Cognate Prediction. In Proceedings of the 4th Workshop on Research in Computational Linguistic Typology and Multilingual NLP, pages 63-69, Seattle, Washington. Association for Computational Linguistics.
297
+
298
+ Thomas H Jukes, Charles R Cantor, et al. 1969. Evolution of protein molecules. Mammalian protein metabolism, 3:21-132.
299
+ Alexei Kassian, Mikhail Zhivlov, and George Starostin. 2015. Proto-Indo-European-Uralic comparison from the probabilistic point of view. Journal of Indo-European Studies, 43(3-4):301-347.
300
+ Brett Kessler. 2001. The significance of word lists. Stanford.
301
+ Brett Kessler. 2007. Word Similarity Metrics and Multilateral Comparison. In Proceedings of Ninth Meeting of the ACL Special Interest Group in Computational Morphology and Phonology, pages 6-14, Prague, Czech Republic. Association for Computational Linguistics.
302
+ Brett Kessler. 2008. The Mathematical Assessment of Long-Range Linguistic Relationships. Language and Linguistics Compass, 2(5):821-839.
303
+ Brett Kessler. 2015. Response to Kassian et al., Proto-Indo-European-Uralic comparison from the probabilistic point of view. Journal of Indo-European Studies, 43(3-4):357-367.
304
+ Brett Kessler and Annukka Lehtonen. 2006. Multilateral comparison and significance testing of the Indo-Uralic question. Phylogenetic methods and the prehistory of languages, pages 33-42.
305
+ Mark A Larkin, Gordon Blackshields, Nigel P Brown, R Chenna, Paul A McGettigan, Hamish McWilliam, Franck Valentin, Iain M Wallace, Andreas Wilm, Rodrigo Lopez, et al. 2007. Clustal W and Clustal X version 2.0. Bioinformatics, 23(21):2947-2948.
306
+ Johann-Mattis List. 2010. SCA: Phonetic alignment based on sound classes. In European Summer School in Logic, Language and Information, pages 32-51. Springer.
307
+ Johann-Mattis List. 2012. LexStat: Automatic Detection of Cognates in Multilingual Wordlists. In Proceedings of the EACL 2012 Joint Workshop of LINGVIS & UNCLH, pages 117-125, Avignon, France. Association for Computational Linguistics.
308
+ Johann-Mattis List and Robert Forkel. 2021. LingPy: A Python library for historical linguistics. Version 2.6.9.
309
+ Nhan Ly-Trong, Suha Naser-Khdour, Robert Lanfear, and Bui Quang Minh. 2022. AliSim: a fast and versatile phylogenetic sequence simulator for the genomic era. Molecular Biology and Evolution, 39(5):msac092.
310
+ Thomas Mailund and Christian NS Pedersen. 2004. QDist—Quartet distance between evolutionary trees. Bioinformatics, 20(10):1636-1637.
311
+
312
+ Lam-Tung Nguyen, Heiko A Schmidt, Arndt Von Haeseler, and Bui Quang Minh. 2015. IQ-TREE: a fast and effective stochastic algorithm for estimating maximum-likelihood phylogenies. Molecular Biology and Evolution, 32(1):268-274.
313
+ Robert L Oswalt. 1970. The detection of remote linguistic relationships. Computer Studies in the Humanities and Verbal Behavior, 3(3):117-129.
314
+ Simone Pompei, Vittorio Loreto, and Francesca Tria. 2011. On the accuracy of language trees. PloS One, 6(6):e20109.
315
+ William Poser and Lyle Campbell. 2008. Language Classification: History and Methods.
316
+ Taraka Rama. 2018. Similarity Dependent Chinese Restaurant Process for Cognate Identification in Multilingual Wordlists. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 271-281, Brussels, Belgium. Association for Computational Linguistics.
317
+ Taraka Rama and Johann-Mattis List. 2019. An Automated Framework for Fast Cognate Detection and Bayesian Phylogenetic Inference in Computational Historical Linguistics. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6225-6235, Florence, Italy. Association for Computational Linguistics.
318
+ Taraka Rama, Johann-Mattis List, Johannes Wahle, and Gerhard Jäger. 2018. Are Automatic Methods for Cognate Detection Good Enough for Phylogenetic Reconstruction in Historical Linguistics? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 393-400, New Orleans, Louisiana. Association for Computational Linguistics.
319
+ Donald A Ringe. 1992. On calculating the factor of chance in language comparison. Transactions of the American Philosophical Society, 82(1):1-110.
320
+ Donald A Ringe. 1996. The mathematics of 'Amerind'. Diachronica, 13(1):135-154.
321
+ Donald A Ringe and Joseph F Eska. 2013. Historical linguistics: Toward a twenty-first century reintegration. Cambridge University Press.
322
+ Robert R. Sokal and Charles Duncan Michener. 1958. A statistical method for evaluating systematic relationships. University of Kansas Science Bulletin, 38:1409-1438.
323
+ Koichiro Tamura, Glen Stecher, and Sudhir Kumar. 2021. MEGA11: molecular evolutionary genetics analysis version 11. Molecular Biology and Evolution, 38(7):3022-3027.
324
+
325
+ Peter Turchin, Ilia Peiros, and Murray Gell-Mann. 2010. Analyzing genetic connections between languages by matching consonant classes. Journal of Language Relationship, (5 (48)):117-126.
326
+
327
+ Edward Orlando Wiley and Bruce S Lieberman. 2011. Phylogenetics: Theory and practice of phylogenetic systematics. John Wiley & Sons.
328
+
329
+ S. S. Wilks. 1938. The Large-Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses. The Annals of Mathematical Statistics, 9(1):60-62.
330
+
331
+ # A Analysis of Macro-Mayan and Amerind
332
+
333
+ ![](images/53ef3ba50779dea390c00bcea2e4d677c58fa6c1d400b3fe0a9cd6b54abd78a6.jpg)
334
+
335
+ ![](images/dc17731f06288d3a733c2fb1ae677bca5d9d2bbee4d588a172b17c66dfae5caa.jpg)
336
+ (b) Turchin
337
+
338
+ ![](images/4f13ce05c99a7bb5f989195d8742e9f6090722aa2c1928e6df5652ff638fdb83.jpg)
339
+ (a) P1-Dolgo
340
+ (c) SCA
341
+
342
+ ![](images/80cbe19fe9247911461cebbdd5c015c8dd0f77340be4cf24537b16543cdc4be3.jpg)
343
+ (d) LexStat
344
+
345
+ ![](images/dde95fc3a2981526374725e602e99e77456619ceb4bc3eb1f47d1ed004fc060a.jpg)
346
+ (a) Mayan
347
+
348
+ ![](images/2057a29f7660d21194bffbc6faacc7e9e0cd39bd5b84253d515c410c9d634b05.jpg)
349
+
350
+ ![](images/b9ef2d1cbbd5dfa956520e3ca41a10ee27efdbe59ef0a63c1548c5eb601d52eb.jpg)
351
+ (b) Mayan-Mixe-Zoque
352
+
353
+ ![](images/2f7bef32fc8dbdddf037a2936479ccbffa830197d1435ef096be9384d8c60ade.jpg)
354
+ Figure 4: Bilateral (pairwise) significance among the languages of the Macro-Mayan/Amerind grouping. The yellow shade implies that the relationship is statistically significant $(p < 0.05)$, while the purple shade implies otherwise. Moving along the diagonal, the first cluster of significantly related languages is Mayan, the second Mixe-Zoque, and the third Uto-Aztecan.
355
+ (c) Mayan-Uto-Aztecan
356
+ (d) Mayan-Mixe-Zoque-Uto-Aztecan
357
+ Figure 5: Comparison of unrooted ML-trees on various groupings of Macro-Mayan/Amerind language families
2024/A Likelihood Ratio Test of Genetic Relationship among Languages/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:220e83c63c18e3dd690f335c01d6aedfeac0535a55638711d9efd9d9e7089445
3
+ size 662263
2024/A Likelihood Ratio Test of Genetic Relationship among Languages/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Preference-driven Paradigm for Enhanced Translation with Large Language Models/d7f9322a-d243-4010-965b-89c497fd221b_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Preference-driven Paradigm for Enhanced Translation with Large Language Models/d7f9322a-d243-4010-965b-89c497fd221b_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Preference-driven Paradigm for Enhanced Translation with Large Language Models/d7f9322a-d243-4010-965b-89c497fd221b_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:33f5549bfcd4a3afe5da93c3cdb5c5dcd28d7c27343626866a08310a730f0df3
3
+ size 863556
2024/A Preference-driven Paradigm for Enhanced Translation with Large Language Models/full.md ADDED
@@ -0,0 +1,490 @@
 
 
 
 
1
+ # A Preference-driven Paradigm for Enhanced Translation with Large Language Models
2
+
3
+ Dawei Zhu $^{1,2*}$ Sony Trenous $^{1}$ Xiaoyu Shen $^{1\dagger}$ Dietrich Klakow $^{2}$ Bill Byrne $^{1}$ Eva Hasler $^{1}$
4
+
5
+ $^{1}$ Amazon AGI
6
+
7
+ $^{2}$ Saarland University, Saarland Informatics Campus
8
+
9
+ {daweizhu,trenous,willbyrn,ehasler}@amazon.com
10
+
11
+ # Abstract
12
+
13
+ Recent research has shown that large language models (LLMs) can achieve remarkable translation performance through supervised fine-tuning (SFT) using only a small amount of parallel data. However, SFT simply instructs the model to imitate the reference translations at the token level, making it vulnerable to the noise present in the references. Hence, the assistance from SFT often reaches a plateau once the LLMs have achieved a certain level of translation capability, and further increasing the size of parallel data does not provide additional benefits. To overcome this plateau associated with imitation-based SFT, we propose a preference-based approach built upon the Plackett-Luce model. The objective is to steer LLMs towards a more nuanced understanding of translation preferences from a holistic view, while also being more resilient in the absence of gold translations. We further build a dataset named MAPLE to verify the effectiveness of our approach, which includes multiple translations of varying quality for each source sentence. Extensive experiments demonstrate the superiority of our approach in "breaking the plateau" across diverse LLMs and test settings. Our in-depth analysis underscores the pivotal role of diverse translations and accurate preference scores in the success of our approach.<sup>1</sup>
14
+
15
+ # 1 Introduction
16
+
17
+ The emergence of Large Language Models (LLMs) has significantly transformed the landscape of NLP, showcasing outstanding capabilities across a spectrum of NLP tasks (Brown et al., 2020; Scao et al., 2022; Chowdhery et al., 2023; Touvron et al., 2023a). This transformation extends to machine translation (MT) (OpenAI, 2023; Jiao et al., 2023b; Hendy et al., 2023). Through supervised fine-tuning (SFT) using a small amount of parallel data, LLMs demonstrate the capability to compete with established commercial translation services such as Google Translate, particularly in high-resource languages (Jiao et al., 2023a; Zhang et al., 2023b).
20
+
21
+ Nevertheless, SFT trains the model to imitate reference translations token by token, making it vulnerable to the noise present within the data (Ott et al., 2018; Zhou et al., 2023; Touvron et al., 2023b). The noise can stem not only from the lack of attention by annotators, but also from the inherent challenge of achieving perfect translations due to the intricate interplay of language, culture, and vocabulary. As an adept translator requires not only linguistic proficiency but also a deep understanding of cultural contexts and nuances in both the source and target, it is nearly unattainable to gather extensive parallel translations of top-notch quality (Khayrallah and Koehn, 2018; Herold et al., 2022; Maillard et al., 2023). As a result, the performance enhancement achieved through SFT often quickly reaches a plateau. Further increasing the volume of parallel translations typically yields minimal additional benefits, and may instead impair the translation capabilities of LLMs (Xu et al., 2023).
22
+
23
+ To alleviate the aforementioned limitation of SFT, efforts have been made to provide LLMs with a holistic assessment of contrasting examples rather than token-level imitations. Jiao et al. (2023a); Chen et al. (2023) pair a flawed translation with the reference translation in the model input, encouraging the target LLM to recognize their quality difference. Zeng et al. (2023) also use a pair of translations, but additionally optimize the LLM to favor better translations through a ranking loss. Nevertheless, these works share limitations. First, the flawed translations are generated either by adding artificial noise to the reference translations or by other (smaller) MT systems. These imperfections can be obvious and easy for the LLM to distinguish, weakening the learning signal. Second, they only provide the relative ranking of the two translations, without quantifying the extent of their quality differences.
26
+
27
+ In this work, we present a framework based on the Plackett-Luce model to explicitly align the generation probability of the target LLM with human preferences (Plackett, 1975). Instead of using artificial noise, we collect contrasting translations generated by our target LLM, directing our optimization efforts toward "hard negative examples" (Robinson et al., 2021). Human preferences are denoted with precise scores rather than general ranking orders to teach LLMs about the nuances in different translations. LLMs are then trained to enhance their capabilities incrementally from the learnt nuances without depending solely on the existence of "gold references", so as to effectively break the plateau associated with SFT.
28
+
29
+ We build a dataset, which we refer to as MAPLE, to facilitate preference learning. It equips each source sentence with five translations of diverse quality, scored by professional translators. By performing preference learning on MAPLE, our final MT model outperforms other MT models based on the same foundation LLM by up to 3.96 COMET points. We further show that while MAPLE was created to enhance our target LLM, it can be reused to improve other LLMs, helping them break the performance plateau reached with up to 1.4M parallel sentence pairs. Finally, we analyze the key factors that make preference learning effective.
30
+
31
+ Our contributions are as follows. (1) We leverage preference learning to teach LLMs a holistic notion of translation quality. Extensive experiments show that our model consistently outperforms strong baselines on two test sets across four translation directions. (2) We revisit the underlying modelling assumptions leading to the Bradley-Terry and Plackett-Luce ranking models and discuss how preference distances can be incorporated directly into the ranking models. (3) We meticulously construct an MT-oriented preference dataset, MAPLE, employing professional human translators to obtain quality scores for multiple translations corresponding to the same source sentence. We release our dataset to facilitate future MT research. (4) Our in-depth analysis reveals that high-contrast pairs and accurate quality scores are crucial in enhancing the effectiveness of our approach, providing guidance for maximizing the benefits of preference learning.
32
+
33
+ # 2 Related Work
34
+
35
+ LLM-based MT. One simple and effective approach to using LLMs for translation is prompting. Research in this field examines the impact of model size, the number of examples ("shots") used, and template choices (Zhang et al., 2023a; Bawden and Yvon, 2023; Mu et al., 2023; Zhang et al., 2024). Moreover, Ghazvininejad et al. (2023); He et al. (2023) highlight that better translations can be achieved by adding supplementary information to prompts or by engaging LLMs in related tasks prior to translation. Alternatively, another research direction seeks to fully tailor LLMs to MT tasks. Jiao et al. (2023a); Zeng et al. (2023); Chen et al. (2023); Alves et al. (2023); Zhang et al. (2023b) further train LLMs on parallel data via (parameter-efficient) fine-tuning. Xu et al. (2023) show that increasing the size of the parallel data may not further improve the LLM. The diminished returns from increasing data volume are likely due to data noise: recent analyses suggest that quality trumps quantity when it comes to data effectiveness (Zhu et al., 2023; Zhou et al., 2023). Leveraging these insights, we go beyond merely fitting the reference translations. Instead, we aim to enhance the LLM's ability to discern translations of varying quality, encouraging the generation of more precise translations while suppressing flawed outputs.
36
+
37
+ Human preference alignment. Ouyang et al. (2022) align LLMs with human intentions and values by training a reward model for preference ranking and optimizing the LLMs through the PPO algorithm (Schulman et al., 2017). However, the online reinforcement learning nature of PPO leads to considerable computational costs and is known for its sensitivity to hyperparameters (Islam et al., 2017; Huang et al., 2022). To ease the alignment, Hu et al. (2023); Dong et al. (2023) suggest offline RL algorithms where samples are pre-generated. Further research goes a step beyond by directly employing the target LLMs as reward models. Yuan et al. (2023) use a ranking loss to steer LLMs towards generating helpful responses and avoiding harmful ones. In a similar vein, Rafailov et al. (2023); Song et al. (2023); Hejna et al. (2023) use the Plackett-Luce model (Plackett, 1975) to capture human preferences in alignment. In this work, we adapt the Plackett-Luce model to MT, teaching the model to discern nuances in different translations and to prefer accurate translations.
38
+
39
+ # 3 Methodology
40
+
41
+ We aim to enhance the LLM on MT tasks via a two-stage optimization process. We first fine-tune the target LLM with a small set of high-quality parallel data to elicit its translation ability (Section 3.1). This mirrors the supervised fine-tuning approach used in prior work, where LLMs were tailored to follow instructions (Taori et al., 2023; Zheng et al., 2023). We then use preference learning to guide the LLM to prioritize generating accurate translations over flawed ones (Section 3.2).
42
+
43
+ # 3.1 Supervised fine-tuning
44
+
45
+ We begin by optimizing our target LLM on parallel data to specialize it for translation. Let $x$ and $y$ denote the source and target sentence, respectively. Following Jiao et al. (2023a), we first construct a prompt by applying an instruction template $\mathcal{I}$ to $x$. The instruction template is randomly sampled from an instruction pool for each training sample. The target LLM, denoted by $\pi_{\theta}$, is optimized through the log-likelihood loss:
46
+
47
+ $$
48
+ \mathcal{L}_{SFT}(\pi_\theta) = -\log \pi_\theta(x, y) = -\sum_{t} \log P_{\pi_\theta}\left(y_t \mid y_{1,\dots,t-1}, \mathcal{I}(x)\right) \tag{1}
49
+ $$
50
+
51
+ where $\pi_{\theta}(x,y)$ denotes the likelihood of $\pi_{\theta}$ generating output $y$ given input $x$. Note that in a standard implementation, a decoder-only LLM will also predict the tokens within $\mathcal{I}(x)$; we zero out the loss on these tokens, as our main goal is to teach translation, not to model the input distribution (Touvron et al., 2023b).
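Zeroing out the loss on prompt tokens amounts to summing the negative log-likelihood over target positions only. A minimal sketch of Eq. (1) with this masking (pure Python over precomputed token log-probabilities; function and argument names are illustrative, and real implementations apply the mask to tensors):

```python
import math

def masked_sft_loss(token_logprobs, prompt_len):
    """Negative log-likelihood over target tokens only.

    `token_logprobs[t]` is log P(token_t | tokens_<t) for the full
    sequence I(x) + y; the first `prompt_len` positions belong to the
    prompt I(x) and are excluded from the loss.
    """
    target = token_logprobs[prompt_len:]
    return -sum(target)
```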
52
+
53
+ # 3.2 Preference learning
54
+
55
+ The goal of the preference learning stage is to explicitly optimize the target LLM to favor accurate translations over erroneous ones. Formally, consider a set of translations $y^{1}, \dots, y^{L}$ corresponding to a source sentence $x$. We assume that these translations are ordered by preference: $y^{i} \succ_{x} y^{j}$ for $i < j$; that is, translation $y^{i}$ is preferred over $y^{j}$ as a translation of the source sentence $x$. We further assume that there is some underlying reward model $r^{*}$ that reflects the quality of the translations, which we cannot access but can approximate. Under the Plackett-Luce ranking model (Plackett, 1975), the distribution of preferences can be formulated as follows:
58
+
59
+ $$
60
+ p^{*}\left(y_{\succ_x}^{1:L} \mid x\right) = \prod_{i=1}^{L-1} \frac{\exp\left(r^{*}(x, y^{i})\right)}{\sum_{j=i}^{L} \exp\left(r^{*}(x, y^{j})\right)} \tag{2}
61
+ $$
62
+
63
+ where $y_{\succ_x}^{1:L}$ is shorthand for the complete preference ranking $y^1 \succ_x \dots \succ_x y^L$. In practice, given a training set $\mathcal{D}$ of translations equipped with preference rankings, a reward model $r_\theta$ can be trained via maximum likelihood estimation (Cheng et al., 2010):
64
+
65
+ $$
66
+ \mathcal{L}_{PL}(r_\theta) = -\mathbb{E}_{(x,\, y_{\succ_x}^{1:L}) \in \mathcal{D}} \sum_{i=1}^{L-1} \left[ r_\theta(x, y^{i}) - \log \sum_{j=i}^{L} \exp\left(r_\theta(x, y^{j})\right) \right] \tag{3}
67
+ $$

Following recent work (Rafailov et al., 2023; Song et al., 2023; Hejna et al., 2023), we parameterize the reward model using the target LLM $\pi_{\theta}$ and rewrite the above objective as:

$$
\mathcal{L}_{PL}\left(\pi_{\theta}\right) = -\mathbb{E}_{x, y_{\succ_x}^{1:L} \in \mathcal{D}} \sum_{i=1}^{L-1} \log \frac{\pi_{\theta}\left(x, y^{i}\right)}{\sum_{j=i}^{L} \pi_{\theta}\left(x, y^{j}\right)} \tag{4}
$$

where $r_{\theta} \coloneqq \log(\pi_{\theta})$. By optimizing Equation 4, we explicitly align the LLM generation probability with the translation quality.

A caveat when optimizing Equation 4 is that the ranking information omits any measure of absolute translation quality, which may lead to inadvertent suppression of the likelihood of good translations. Consider a case where we have a pair of translations, $y^{1}$ and $y^{2}$, which are both acceptable but have different word orders that cause a minor difference in preference. Optimizing Equation 4 may cause the model to raise the probability of $y^{1}$ and suppress the probability of $y^{2}$, which may damage the model. To address this issue, we follow Song et al. (2023) and incorporate the preference distance into $\mathcal{L}_{PL}$:

$$
\mathcal{L}_{PLD}(\pi_{\theta}) = -\mathbb{E}_{x, y_{\succ_x}^{1:L} \in \mathcal{D}} \sum_{i=1}^{L-1} \log \frac{\pi_{\theta}^{d_{i}^{i}}(x, y^{i})}{\sum_{j=i}^{L} \pi_{\theta}^{d_{i}^{j}}(x, y^{j})}
$$

where

$$
d_{i}^{j} = r^{*}(x, y^{i}) - r^{*}(x, y^{j}) \ \text{ for } j > i, \qquad d_{i}^{i} = \max_{j > i}\left(d_{i}^{j}\right) \tag{5}
$$

We obtain the ground-truth preference value $r^{*}(x,y)$ through human annotation, which will be detailed in Section 4. Finally, we combine an SFT loss calculated on the best translation $y^{1}$ with $\mathcal{L}_{PLD}$, giving the complete loss function:

$$
\mathcal{L} = \mathcal{L}_{PLD} + \beta \mathcal{L}_{SFT} \tag{6}
$$

where the hyperparameter $\beta$ balances the strengths of preference learning and SFT. We use PL as an abbreviation for our preference learning method (i.e., optimizing Equation 6) in the subsequent text.
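Under the definitions above, Equations 5 and 6 can be sketched for a single example as follows; `pld_loss`, `logps`, and `scores` are our illustrative names, the log-probabilities stand in for $\log \pi_\theta(x, y^i)$, and the expectation over $\mathcal{D}$ is dropped:

```python
import math

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def pld_loss(logps, scores, beta=1.0):
    """Distance-weighted Plackett-Luce loss (Equation 5) plus a
    beta-weighted SFT term on the best translation (Equation 6).
    Both lists are sorted best-first by human preference."""
    L = len(logps)
    loss = 0.0
    for i in range(L - 1):
        d = [scores[i] - scores[j] for j in range(L)]  # d_i^j = r*(x,y^i) - r*(x,y^j)
        d[i] = max(d[i + 1:])                          # d_i^i = max_{j>i} d_i^j
        num = d[i] * logps[i]                          # log of pi_theta^{d_i^i}(x, y^i)
        den = logsumexp([d[j] * logps[j] for j in range(i, L)])
        loss -= num - den
    return loss + beta * (-logps[0])  # SFT term: NLL of the best translation

loss = pld_loss([-1.0, -2.0, -3.0], [5.0, 4.0, 2.0], beta=0.5)
```

A model whose likelihoods agree with the human ranking incurs a lower loss than one whose likelihoods are inverted, which is the behavior the objective is meant to reward.
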

We now provide some justification for directly incorporating preference distances into the Plackett-Luce model by studying the original derivation of the binary case $(L = 2)$ (Thurstone, 1927; Mosteller, 1951; Bradley, 1953; Hamilton et al., 2023). Denote the preferences for $y^{i}$ and $y^{j}$ by random variables $X_{i}$ and $X_{j}$ such that the probability that $y^{i}$ is preferred to $y^{j}$ is $\pi_{ij} = P(X_i > X_j)$. Assuming that $X_{i}$ and $X_{j}$ follow Gumbel distributions with locations $s_i$ and $s_j$ and a common scale parameter $\gamma$, the difference between the two random variables $d_{ij} = X_i - X_j$ follows a logistic distribution with location $s_i - s_j$ and scale $\gamma$:

$$
d_{ij} \sim \frac{1}{4\gamma} \operatorname{sech}^{2}\left(\frac{d_{ij} - \left(s_{i} - s_{j}\right)}{2\gamma}\right) \tag{7}
$$

By defining $\pi_i = e^{s_i}$, it follows that

$$
\begin{aligned}
\pi_{ij} &= P\left(d_{ij} > 0\right) \\
&= \int_{0}^{\infty} \frac{1}{4\gamma} \operatorname{sech}^{2}\left(\frac{d_{ij} - \left(s_{i} - s_{j}\right)}{2\gamma}\right) \, d d_{ij} \\
&= \frac{\pi_{i}^{\frac{1}{\gamma}}}{\pi_{i}^{\frac{1}{\gamma}} + \pi_{j}^{\frac{1}{\gamma}}}
\end{aligned} \tag{8}
$$

Usually the scale parameter $\gamma$ is set to 1, which yields the Bradley-Terry model (Bradley and Terry, 1952) (and Equation 13 of Bradley (1953)).
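The closed form in Equation 8 can be checked numerically: the probability that a logistic variable with location $s_i - s_j$ and scale $\gamma$ is positive equals the generalized Bradley-Terry expression. A small sketch (function names are ours):

```python
import math

def logistic_win_prob(s_i, s_j, gamma=1.0):
    """P(d_ij > 0) for a logistic variable with location s_i - s_j
    and scale gamma, i.e. the integral in Equation 8."""
    return 1.0 / (1.0 + math.exp(-(s_i - s_j) / gamma))

def bradley_terry(s_i, s_j, gamma=1.0):
    """pi_i^(1/gamma) / (pi_i^(1/gamma) + pi_j^(1/gamma)) with pi_k = exp(s_k)."""
    a, b = math.exp(s_i / gamma), math.exp(s_j / gamma)
    return a / (a + b)
```

The two functions agree for any locations and any positive scale, which is exactly the equivalence the derivation relies on.
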

To introduce distance information for the binary preference case, we first note that $d_1^1 = d_1^2$ for $L = 2$ (from Equation 5). We then take $\gamma = \frac{1}{d_1^2}$ and $\pi_i = \pi_\theta(x, y^i)$, which yields:

$$
\pi_{12} = \frac{\pi_{\theta}^{d_{1}^{2}}(x, y^{1})}{\pi_{\theta}^{d_{1}^{1}}(x, y^{1}) + \pi_{\theta}^{d_{1}^{2}}(x, y^{2})} \tag{9}
$$

This shows that, for the binary case, preference distances based on the ground-truth preferences can be incorporated exactly into the Bradley-Terry distribution by assuming that $X_{1}$ and $X_{2}$ have Gumbel distributions with location parameters $s_i = \log \pi_\theta(x, y^i)$ and scale parameter $\gamma = \frac{1}{r^{*}(x,y^{1}) - r^{*}(x,y^{2})}$.

We derive and discuss the more general case of Equation 5 $(L > 2)$ in Appendix A.

Connections with DPO The preference learning framework investigated here shares a common origin with DPO (Rafailov et al., 2023) in the Bradley-Terry and Plackett-Luce models over rankings (Equation 2, and Equation 18 of Rafailov et al. (2023)). Here, the target LLM $\pi_{\theta}$ serves directly as the reward function $(r_{\theta} = \log(\pi_{\theta}))$, whereas the DPO reward function also includes a reference distribution $\pi_{ref}$ that arises from the KL-divergence constraint term in its RL objective function. By contrast, regularization in this work is through an external SFT term (Equation 6) distinct from the reward function. We note also that the use of distance functions based on ground-truth preference values brings additional information into our ranking model beyond preference order alone.

# 4 Human preference data collection

We build MAPLE (MAchine translation dataset for Preference LEarning), a dataset derived from the WMT20/21 test sets. It contains multiple translations per source sentence, each assigned a real-valued human preference score. MAPLE covers four translation directions: German-to-English $(\mathrm{de}\rightarrow \mathrm{en})$, Chinese-to-English $(\mathrm{zh}\rightarrow \mathrm{en})$, English-to-German $(\mathrm{en}\rightarrow \mathrm{de})$, and English-to-Chinese $(\mathrm{en}\rightarrow \mathrm{zh})$. For each direction, 1.1K source sentences are sampled from the test sets of WMT20/21. Each source sentence is associated with five translations: one reference translation from WMT20/21, and four translations generated by VicunaMT, the target LLM that we aim to improve through preference learning (see training details of VicunaMT in Section 5.1). Among the four generated translations, one is produced using beam search with a beam size of four, and three are obtained through nucleus sampling (Holtzman et al., 2020) with $p = 0.9$. We also build a development set containing 200 source sentences per direction, sourced from News Crawl 2022. Altogether, MAPLE contains 5.2K source sentences and 26K translations with preference scores. See Appendix B.1 for more detail on the translation collection process.

![](images/b2724cd8e3a1abf26bad96ddcd2292338279fed9c025c0bb46407b30bc8620e2.jpg)
Figure 1: Human score distribution of translations by rank (left) and source (right).

<table><tr><td>Source</td><td>Zu einem großen Tuning-Treffen ist es am Samstagabend (25. Juli 2020) in Nürnberg Südstadt gekommen. (A large tuning meeting took place on Saturday evening (July 25, 2020) in Nuremberg's Südstadt district.)</td></tr><tr><td>Reference translation</td><td>A large tuning meetup took place in a city south of Nürnberg this Saturday evening.</td></tr><tr><td>Best translation</td><td>On Saturday evening (25th July 2020) a large tuning meeting took place in Nuremberg's south district.</td></tr></table>

Table 1: An example where the reference translation is less accurate than the best model prediction. More examples are in Appendix B.4.

Annotation guidance. We send both the source sentence and the corresponding five translations to a panel of translators for evaluation. Each example (a source sentence and its translations) is assigned to two different professional translators. They observe the source and the five translations at the same time, and assign scores between 1 (worst) and 6 (best) in increments of 0.2 using a slider. See Appendix B.2 for the full scoring rubric.

Dataset statistics. The score distribution is shown in Fig. 1. The left side shows the score distribution by rank: MAPLE contains translations that exhibit a wide range of qualities. The right side shows the score distribution by translation type; as expected, the reference is ranked highest, followed by the beam search outputs and the nucleus samples. Nonetheless, there is considerable overlap in the score distributions, and we find that in $21\%$ of the cases, the beam search predictions are scored higher than the reference translation. Table 1 shows an example where the reference translation contains an error.

# 5 Experiments

In this section, we present our MT model trained using the proposed two-stage framework and compare it with strong LLM-based MT systems.

Datasets. We train and evaluate the model on four translation directions: $\text{en} \leftrightarrow \text{de}$ and $\text{en} \leftrightarrow \text{zh}$. In the SFT stage, we use high-quality test sets from WMT17/18/19 for training, containing 30K parallel sentences in total across the four directions. The WMT21 test set is used for validation. In the preference learning stage, we train on MAPLE, and validation is done on the remaining data from the WMT20/21 test sets that was not selected for inclusion in MAPLE. We evaluate trained models on the test sets of WMT22 (Kocmi et al., 2022) and FLORES-200 (Costa-jussà et al., 2022). Refer to Appendix C.1 for detailed data statistics.

Training. In both the SFT and PL stages, we use a learning rate of 5e-6, an effective batch size of 96, and a linear learning rate schedule with a warmup ratio of 0.1. For each training instance, one MT instruction is randomly selected from a pool of 31 MT instructions. See Appendix C.2 for the complete list of instructions.

Evaluation. At inference time, a fixed MT instruction is used. The maximum generation length is set to 512. We use a beam size of 4 for decoding and report BLEU (Papineni et al., 2002) and COMET (Rei et al., 2022) scores.

# 5.1 SFT makes good translation models

The SFT stage seeks to train a well-performing foundation MT model using parallel data. When applying SFT, we can either select a pre-trained LLM or its instruction-tuned version. Prior research uses both types of LLMs interchangeably, leaving it unclear which is preferable in practice. To address this gap, we explore three popular families of open-access LLMs, performing SFT on both their raw (i.e., only pre-trained) and instruction-tuned versions. Specifically, we consider LLaMA-1 (Touvron et al., 2023a), Mistral (Jiang et al., 2023), and BLOOM (Scao et al., 2022), and their instruction-tuned versions, which are Vicuna (Zheng et al., 2023), Mistral-Instruct, and BLOOMZ (Muennighoff et al., 2023). The 7B parameter variants of these models are used here.

<table><tr><td></td><td>de→en</td><td>en→de</td><td>en→zh</td><td>zh→en</td><td>Avg.</td></tr><tr><td colspan="6">WMT22</td></tr><tr><td>BLOOM</td><td>49.86</td><td>41.95</td><td>51.59</td><td>55.21</td><td>49.65</td></tr><tr><td>+SFT</td><td>77.21</td><td>69.17</td><td>84.60</td><td>78.76</td><td>77.44</td></tr><tr><td>BLOOMZ</td><td>74.58</td><td>62.52</td><td>83.10</td><td>78.29</td><td>74.62</td></tr><tr><td>+SFT</td><td>77.24</td><td>69.32</td><td>84.95</td><td>78.77</td><td>77.57</td></tr><tr><td>Mistral</td><td>54.18</td><td>49.08</td><td>49.10</td><td>55.47</td><td>51.96</td></tr><tr><td>+SFT</td><td>83.15</td><td>81.10</td><td>81.48</td><td>78.05</td><td>80.95</td></tr><tr><td>Mistral-Ins.</td><td>82.45</td><td>80.39</td><td>76.57</td><td>77.73</td><td>79.28</td></tr><tr><td>+SFT</td><td>82.68</td><td>81.23</td><td>82.49</td><td>77.73</td><td>81.03</td></tr><tr><td>LLaMA-1</td><td>63.29</td><td>55.29</td><td>45.80</td><td>55.17</td><td>54.89</td></tr><tr><td>+SFT</td><td>83.30</td><td>82.54</td><td>77.58</td><td>75.78</td><td>79.80</td></tr><tr><td>Vicuna</td><td>82.55</td><td>82.02</td><td>81.42</td><td>74.81</td><td>80.20</td></tr><tr><td>+SFT</td><td>83.55</td><td>82.79</td><td>81.27</td><td>77.39</td><td>81.25</td></tr><tr><td colspan="6">FLORES-200</td></tr><tr><td>BLOOM</td><td>55.03</td><td>42.36</td><td>53.82</td><td>60.25</td><td>52.86</td></tr><tr><td>+SFT</td><td>83.69</td><td>67.43</td><td>86.06</td><td>85.45</td><td>80.66</td></tr><tr><td>Mistral</td><td>42.36</td><td>32.74</td><td>33.35</td><td>42.10</td><td>37.64</td></tr><tr><td>+SFT</td><td>88.63</td><td>84.49</td><td>80.97</td><td>85.17</td><td>84.81</td></tr><tr><td>Mistral-Ins.</td><td>88.04</td><td>82.55</td><td>73.20</td><td>83.70</td><td>81.87</td></tr><tr><td>+SFT</td><td>88.21</td><td>83.73</td><td>82.41</td><td>84.77</td><td>84.78</td></tr><tr><td>LLaMA-1</td><td>58.89</td><td>52.71</td><td>42.77</td><td>49.92</td><td>51.07</td></tr><tr><td>+SFT</td><td>88.50</td><td>84.82</td><td>76.73</td><td>83.09</td><td>83.29</td></tr><tr><td>Vicuna</td><td>87.82</td><td>84.17</td><td>81.52</td><td>81.53</td><td>83.76</td></tr><tr><td>+SFT</td><td>88.66</td><td>86.27</td><td>80.62</td><td>84.44</td><td>85.00</td></tr></table>
Table 2: Model performance (in COMET score) before and after performing SFT on parallel data. Rows in blue indicate instruction-tuned LLMs. Best results are in bold. Instruction-tuned LLMs yield high COMET scores even without SFT. Raw LLMs benefit the most from SFT. Vicuna performs the best on average on both test sets. We exclude BLOOMZ on FLORES-200, as FLORES-200 is part of BLOOMZ's training data. Performance measured by BLEU score is reported in Appendix D.

Results. Table 2 presents the results before and after SFT. LLMs without instruction tuning, e.g., BLOOM, perform poorly; we observe that they tend to overgenerate and repeat tokens from the source sentences. In contrast, instruction-tuned models work out of the box and exhibit decent performance. It can also be observed that SFT dramatically boosts the performance of raw LLMs and slightly benefits instruction-tuned LLMs. For BLOOM and Mistral, the performance gap between the raw and instruction-tuned models is mostly lost after SFT. An interesting case is Vicuna, which shows a considerable improvement on $\mathsf{en} \leftrightarrow \mathsf{zh}$ over its base model LLaMA-1. This implies that instruction-tuned LLMs may serve as better base models for SFT. In addition, different LLMs excel in different translation directions, and their instruction-tuned versions do not deviate from this pattern. For example, both BLOOM and BLOOMZ perform quite well on $\mathsf{en} \rightarrow \mathsf{zh}$ but have a deficiency in $\mathsf{en} \rightarrow \mathsf{de}$; for the LLaMA-based models, the opposite holds. This could be because German and Chinese are not included (at least, not intentionally) in BLOOM's and LLaMA's pre-training corpora, respectively.

The Vicuna+SFT model has the best overall performance, so we select it as our target LLM to be improved through preference learning. We call this model VicunaMT. The generated translations in the MAPLE dataset are produced by this model.
# 5.2 Refining through preference learning

Baselines. We continue training our VicunaMT model on MAPLE through preference learning and compare it with the following competitive systems from recent work: (1) ParroT (Jiao et al., 2023a) adds a "Hint" field to the model input, prompting the model to generate both correct and incorrect translations; at inference time, the "correct" version of the translations is used for evaluation. (2) TIM (Zeng et al., 2023) combines standard SFT with a ranking loss computed on a pair of correct and incorrect translations. (3) SWIE (Chen et al., 2023) attaches an instruction adapter to enhance LLMs' long-term attention for better translation. (4) ALMA (Xu et al., 2023) first continues pre-training the LLM on monolingual data, and then performs SFT on parallel data. Furthermore, as the preference learning stage introduces additional data, a performance gain could trivially come from exposing the model to more samples. To establish a fair comparison, we design two additional baselines: (5) REF trains VicunaMT on the reference translations in MAPLE. (6) BEST trains VicunaMT on the translations scored highest by our annotators. See Table 1 for an example comparison of the reference and best translations. All aforementioned baselines are built on 7B LLMs (based either on BLOOM-7B or LLaMA-7B). Finally, we also compare our model against commercial LLMs, including ChatGPT and GPT-4.

<table><tr><td rowspan="2">System</td><td colspan="5">WMT22</td><td colspan="5">FLORES-200</td></tr><tr><td>de→en</td><td>en→de</td><td>en→zh</td><td>zh→en</td><td>Avg.</td><td>de→en</td><td>en→de</td><td>en→zh</td><td>zh→en</td><td>Avg.</td></tr><tr><td colspan="11">Commercial LLMs &amp; LLaMA-2-7B based MT systems</td></tr><tr><td>ChatGPT(3.5-turbo-0613)</td><td>85.38</td><td>86.92</td><td>87.00</td><td>82.42</td><td>85.43</td><td>89.58</td><td>88.68</td><td>88.56</td><td>86.91</td><td>88.02</td></tr><tr><td>GPT-4(gpt-4-0613)</td><td>85.57</td><td>87.36</td><td>87.29</td><td>82.88</td><td>85.78</td><td>89.66</td><td>88.89</td><td>88.91</td><td>87.25</td><td>88.68</td></tr><tr><td>ALMA-7B(LLaMA-2)</td><td>83.98</td><td>85.59</td><td>85.05</td><td>79.73</td><td>83.59</td><td>-⊗</td><td>-⊗</td><td>-⊗</td><td>-⊗</td><td>-⊗</td></tr><tr><td colspan="11">BLOOMZ-mt-7B based LLMs</td></tr><tr><td>ParroT(BLOOMZ-mt)</td><td>78.00</td><td>73.60</td><td>83.50</td><td>79.00</td><td>78.53</td><td>-*</td><td>-*</td><td>-*</td><td>-*</td><td>-*</td></tr><tr><td>TIM(BLOOMZ-mt)</td><td>77.65</td><td>74.16</td><td>84.89</td><td>79.50</td><td>79.05</td><td>-*</td><td>-*</td><td>-*</td><td>-*</td><td>-*</td></tr><tr><td>SWIE(BLOOMZ-mt)</td><td>78.80</td><td>75.17</td><td>84.53</td><td>79.15</td><td>79.41</td><td>-*</td><td>-*</td><td>-*</td><td>-*</td><td>-*</td></tr><tr><td colspan="11">LLaMA-1-7B based LLMs</td></tr><tr><td>ParroT(LLaMA-1)</td><td>82.40</td><td>81.60</td><td>80.30</td><td>75.90</td><td>80.05</td><td>88.40</td><td>84.60</td><td>81.20</td><td>83.40</td><td>84.40</td></tr><tr><td>TIM(LLaMA-1)</td><td>82.80</td><td>82.32</td><td>80.03</td><td>75.46</td><td>80.15</td><td>88.08</td><td>85.00</td><td>80.93</td><td>83.18</td><td>84.30</td></tr><tr><td>SWIE(LLaMA-1)</td><td>82.97</td><td>81.89</td><td>80.14</td><td>76.14</td><td>80.29</td><td>88.39</td><td>85.21</td><td>81.14</td><td>83.50</td><td>84.56</td></tr><tr><td>VicunaMT(LLaMA-1)</td><td>83.55</td><td>82.79</td><td>81.27</td><td>77.39</td><td>81.25</td><td>88.66</td><td>86.27</td><td>80.62</td><td>84.44</td><td>85.00</td></tr><tr><td>+ REF</td><td>83.88</td><td>83.37</td><td>82.86</td><td>78.19</td><td>82.07</td><td>88.48</td><td>86.11</td><td>83.35</td><td>84.54</td><td>85.62</td></tr><tr><td>+ BEST</td><td>83.61</td><td>83.08</td><td>83.20</td><td>78.35</td><td>82.06</td><td>88.67</td><td>85.87</td><td>84.02</td><td>84.55</td><td>85.78</td></tr><tr><td>+ PL</td><td>84.23</td><td>84.43</td><td>84.26</td><td>79.07</td><td>83.00</td><td>88.83</td><td>86.73</td><td>84.88</td><td>84.76</td><td>86.30</td></tr></table>

Table 3: Model performance in COMET scores. Best results of LLaMA-1 based models are in **bold**. Applying preference learning (+PL) on top of our VicunaMT model consistently leads to improvements in all cases, achieving the highest average performance among all BLOOM and LLaMA-1 based MT models. Performance in BLEU scores is reported in Appendix E. ⊗: LLaMA-2 based models were not evaluated due to license constraints; WMT22 results are extracted from the original paper. *: BLOOMZ-family models use FLORES-200 for training.

Results. We report the MT performance of various baselines in Table 3. Our VicunaMT model performs well compared to recent MT systems, and PL further increases its performance advantage. Our final model, VicunaMT+PL, achieves the highest average performance (83.00 on WMT22 and 86.30 on FLORES-200), consistently outperforming all LLaMA-1 based models across all directions, with the largest improvement being a 3.96-point increase in COMET score ($\mathrm{en} \rightarrow \mathrm{zh}$ on WMT22). Notably, LLaMA-based models are originally much weaker in directions involving Chinese; through preference learning, VicunaMT reaches a translation performance close to that of BLOOM-based LLMs. This becomes practically significant when the goal is to deploy a single LLM to handle multiple translation directions. Also, the PL model scores higher than the VicunaMT models fine-tuned on the reference and best translations, indicating that the performance gain does not just come from having more data. Compared to the ALMA model, which is based on LLaMA-2 (Touvron et al., 2023b), a widely recognized superior open-access LLM, our model shows only a slight deficit of 0.59 COMET points. Note that our strategy is orthogonal to ALMA's approach, which leverages monolingual data; combining both strategies should lead to even better performance.

We supplement our assessment with a human evaluation, contrasting VicunaMT+PL with SFT-only Vicuna variants, including VicunaMT and VicunaMT+REF, as illustrated in Table 4. The human evaluation confirms the trend observed with automatic metrics: PL substantially outperforms the SFT-only variants.

# 6 Analysis

Reuse of preference data. MAPLE contains translations generated by VicunaMT, which is also the target LLM we aim to improve. There would be additional value if this data could be reused to improve other LLMs. To investigate this, we train both Mistral-Instruct and BLOOMZ on MAPLE using PL. As shown in Table 5, PL improves both models, suggesting that MAPLE is not limited to use with VicunaMT and can be reused for improving other LLMs.

<table><tr><td></td><td>de→en</td><td>en→de</td><td>en→zh</td><td>zh→en</td></tr><tr><td></td><td></td><td colspan="3">VicunaMT+PL vs.</td></tr><tr><td>VicunaMT</td><td>+3.7%</td><td>+4.4%</td><td>+5.6%</td><td>+5.7%</td></tr><tr><td>VicunaMT+REF</td><td>+3.7%</td><td>+2.5%</td><td>+5.0%</td><td>+3.5%</td></tr></table>

Table 4: Relative improvements of VicunaMT+PL over SFT-only models (VicunaMT and VicunaMT+REF), assessed through human evaluation on the WMT22 test set, employing the same scoring criteria as those specified in MAPLE. A two-sided t-test was conducted, with $95\%$ confidence intervals noted as $\pm 1.7\%$. Positive values indicate the improvement achieved by VicunaMT+PL compared to the other models.

<table><tr><td></td><td colspan="5">WMT22</td></tr><tr><td></td><td>de→en</td><td>en→de</td><td>en→zh</td><td>zh→en</td><td>Avg.</td></tr><tr><td>BLOOMZ†</td><td>77.24</td><td>69.32</td><td>84.95</td><td>78.77</td><td>77.57</td></tr><tr><td>+REF</td><td>77.41</td><td>68.47</td><td>84.76</td><td>79.50</td><td>77.53</td></tr><tr><td>+BEST</td><td>77.48</td><td>68.64</td><td>85.15</td><td>79.59</td><td>77.72</td></tr><tr><td>+PL</td><td>77.83</td><td>69.84</td><td>85.36</td><td>80.67</td><td>78.42</td></tr><tr><td>Mistral-Ins.†</td><td>82.68</td><td>81.23</td><td>82.49</td><td>77.73</td><td>81.03</td></tr><tr><td>+REF</td><td>83.06</td><td>82.63</td><td>83.39</td><td>78.07</td><td>81.79</td></tr><tr><td>+BEST</td><td>82.98</td><td>81.84</td><td>83.34</td><td>78.33</td><td>81.62</td></tr><tr><td>+PL</td><td>83.35</td><td>82.94</td><td>84.71</td><td>79.25</td><td>82.56</td></tr></table>

Table 5: Model performance in COMET scores. Best results are in bold. MAPLE can be reused to improve BLOOMZ and Mistral-Instruct. See results on FLORES-200 and in BLEU scores in Appendix F. $\dagger$: the SFT stage has already been applied to these models.

Limited gains with additional parallel data. Section 5.2 shows that the MAPLE dataset, which contains $4.4\mathrm{K}$ preference examples, can be more valuable than an equivalent amount of parallel data with either the reference or the best translations. A natural follow-up question is whether adding more parallel data can close the gap. To answer this question, we collect more data by concatenating the WMT20 and WMT21 test data with News Commentary v16, yielding 1.4M parallel sentences in total. We fine-tune VicunaMT and Mistral-InstructMT (i.e., Mistral-Instruct after the SFT stage) on different proportions of this data and plot the performance curves in Figure 2. In both cases, similar to the observations of Xu et al. (2023), adding more parallel data does not always improve these models, and they never attain the performance level reached by using PL with MAPLE.

![](images/19fcf874a21386ea43149e17a6d9cdb1147d2928bcfa67d2134c6c8e30080407.jpg)
Figure 2: Performance comparison between PL using 4.4K examples from MAPLE and SFT using up to 1.4M parallel sentences. Evaluation is done on WMT22, and COMET scores are averaged across four translation directions. Performing SFT on more parallel data does not always lead to a performance gain. PL consistently outperforms SFT in all cases.

Diverse translations help more. By default, we perform PL using all five translations provided by MAPLE. We now study the relation between final model performance and the number of preference translations used. We select $K = \{2,3,4\}$ translations and rerun the PL algorithm on VicunaMT and Mistral-InstructMT. We explore two modes for selecting the $K$ translations. Given five translations sorted by human preference score in descending order, the forward mode selects the first $K$ translations (i.e., the best $K$), while the reverse mode selects the first translation and the last $K - 1$ translations. We compare both modes varying $K$ and present the results in Figure 3. There is a clear disparity in performance between the two selection modes. The reverse mode consistently outperforms the forward mode given the same number of translations, with a larger advantage in low-resource cases, such as when $K = 2$. This is intuitive, since the reverse mode always includes the highest- and lowest-scored translations and thus PL has a better chance to see "hard negatives" that have low human preference scores but high generation probability. The general trend shows that including more preference samples is better, and using all available samples yields the best performance.
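The two selection modes can be sketched as follows (function and variable names are ours); `ranked` is assumed sorted by human score, best first:

```python
def select_translations(ranked, k, mode="reverse"):
    """forward: the k best translations.
    reverse: the single best plus the k-1 worst."""
    if mode == "forward":
        return ranked[:k]
    return [ranked[0]] + ranked[len(ranked) - (k - 1):]

ranked = ["t1", "t2", "t3", "t4", "t5"]  # best first
```

For $K = 2$, reverse mode pairs the best translation with the worst one, which is exactly the "hard negative" contrast discussed above.
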

Distance information is crucial. Our framework incorporates the distance information in preference scores (Equation 5). We now investigate whether this information can be replaced by the ranking information alone. That is, we set $d_i^j = 1$ for all translations and rerun the PL algorithm. Table 6 shows that when the distance information is available, excluding the SFT loss does not harm the performance much; in fact, we achieve the best performance when setting $\beta = 0$ for VicunaMT. However, when the distance information is withheld, we see a clear degradation in performance. We find that a larger $\beta$ value is required when relying only on the ranking information, but this makes the PL algorithm closer to SFT. As a result, when only the ranking information is provided, VicunaMT performs similarly to the $\mathcal{L}_{SFT}$-only baseline. Finally, disabling both $\mathcal{L}_{SFT}$ and the distance information causes a large performance drop.

![](images/b4e19f30d8afea01759be52c4deb4fb1044d1d52d27edce9132477c90b56b63c.jpg)
Figure 3: Model performance varying the number of translations $(K)$ per source sentence. Evaluation is conducted on WMT22, and COMET scores averaged across four translation directions are reported. Reverse mode selects more diverse translations and achieves better performance, especially when fewer translations are provided.

<table><tr><td></td><td>VicunaMT</td><td>Mistral-InstructMT</td></tr><tr><td>SFT stage</td><td>81.25</td><td>81.03</td></tr><tr><td>PL stage</td><td>83.00</td><td>82.56</td></tr><tr><td>w/o $\mathcal{L}_{SFT}$</td><td>83.00</td><td>82.54</td></tr><tr><td>w/o distance</td><td>82.22</td><td>81.92</td></tr><tr><td>w/o $\mathcal{L}_{SFT}$/dist.</td><td>74.65</td><td>60.70</td></tr><tr><td>$\mathcal{L}_{SFT}$ only</td><td>82.07</td><td>81.79</td></tr></table>

Table 6: Ablation study. PL is less sensitive to $\mathcal{L}_{SFT}$ than to the distance information. Disabling both factors leads to substantial model degradation.

Better model calibration. In our preference learning framework, the model learns both to translate and to differentiate between translations of different quality. We analyze whether PL has successfully transferred human preferences to the model. Using the held-out set of MAPLE, we examine the sentence-level correlation between the scores assigned by the human annotators and the model generation probability. Specifically, we compute the average Pearson and Kendall's tau correlations, varying the number of preference samples (reverse mode). The results are presented in Figure 4. Compared to the SFT baseline, VicunaMT, PL substantially improves the correlation, suggesting that our final model aligns better with human preferences.

![](images/0f55a7d3b059ee5cdb6501c3393af6e2adb627f0f0d6baefe7a045e0bf9a665d.jpg)
Figure 4: Sentence-level correlation between model generation probability and human preference scores, varying the number of translations $(K)$. PL helps the model align better with human judgement.
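The two correlation measures can be computed per source sentence (over its candidate translations) and then averaged; below is a minimal pure-Python sketch with our own function names, using Kendall's tau-a (no tie correction):

```python
import math
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def kendall_tau(xs, ys):
    """Kendall's tau-a: (concordant - discordant) / total pairs."""
    pairs = list(combinations(range(len(xs)), 2))
    def sign(v):
        return (v > 0) - (v < 0)
    s = sum(sign((xs[i] - xs[j]) * (ys[i] - ys[j])) for i, j in pairs)
    return s / len(pairs)
```

Here `xs` would hold the model log-probabilities of a sentence's translations and `ys` the human scores; a well-calibrated model yields correlations near 1.
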

# 7 Conclusion

We present a preference learning framework to break the performance plateau faced when performing SFT. It enhances the translation capabilities of LLMs by encouraging them to differentiate the nuances among different translations. To support this framework, we have carefully curated a preference dataset, named MAPLE, featuring translations of varying quality, each scored by professional translators. Extensive experiments, including human evaluations, confirm the effectiveness of this framework. In addition, we demonstrate that MAPLE can be reused to enhance other LLMs, further bolstering its practical usability. Future research could consider extending our framework into an iterative process for continuous improvement of LLMs' translation capabilities.

# Limitations

This work demonstrates that preference learning can effectively improve LLMs' translation capabilities. However, our study is not exhaustive and has the following limitations.

Low-resource languages. This work centers on translation directions involving high-resource languages where LLMs already exhibit proficiency. The extent to which translation for low-resource languages can benefit from our framework remains uncertain. Nevertheless, it is important to emphasize that our framework is language- and model-agnostic, implying its potential applicability to low-resource languages. We leave the investigation into this aspect to future work.

Annotation cost. Assigning preference scores to five translations per sentence can be costly, which may hinder the scaling up of the preference dataset. However, as we show in Section 6, a preference learning dataset such as MAPLE offers a distinct learning signal that is not covered by massive parallel data. In addition, we highlight that the preference data can be reused to benefit other LLMs. Thus, the collected data is a valuable and reusable resource, rather than a one-time expense.

Noise in human judgement. Inevitably, human preference scores can be subjective, and annotators may not always agree. Additionally, there is a risk of annotators finding shortcuts in the annotation process (Ipeirotis et al., 2010; Hosking et al., 2023). To reduce potential annotation mistakes, we average the scores of two translators for each sample, and all translators we employ are experienced in translation assessment.
+
+ # Acknowledgements
+
+ Bill Byrne holds concurrent appointments as an Amazon Scholar and as Professor of Information Engineering at the University of Cambridge. This paper describes work performed at Amazon.
+
+ We thank Felix Hieber, Lei Sun, and Tobias Domhan for their thoughtful advice, in-depth discussions, and implementation support. We would also like to thank our anonymous reviewers for their constructive feedback.
+
+ # References
+
+ Duarte M. Alves, Nuno Miguel Guerreiro, João Alves, José Pombal, Ricardo Rei, José G. C. de Souza, Pierre Colombo, and André F. T. Martins. 2023. Steering large language models for machine translation with finetuning and in-context learning. CoRR, abs/2310.13448.
+ Rachel Bawden and François Yvon. 2023. Investigating the translation performance of a large multilingual language model: the case of BLOOM. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, EAMT 2023, Tampere, Finland, 12-15 June 2023, pages 157-170. European Association for Machine Translation.
+ Ralph Allan Bradley. 1953. Some statistical methods in taste testing and quality evaluation. Biometrics, 9(1):22-38.
+ Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324-345.
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
+ Yijie Chen, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, and Jie Zhou. 2023. Improving translation faithfulness of large language models via augmenting instructions. arXiv preprint arXiv:2308.12674.
+ Weiwei Cheng, Krzysztof Dembczynski, and Eyke Hüllermeier. 2010. Label ranking methods based on the plackett-luce model. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pages 215-222. Omnipress.
+ Weiwei Cheng and Eyke Hüllermeier. 2008. Learning similarity functions from qualitative feedback. In LNAI 5239 Advances in Case-Based Reasoning: The 9th European Conference on Case-Based Reasoning (ECCBR-08), pages 129–134, Trier, Germany. Springer.
+ Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2023. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1-113.
+ Marta R. Costa-jussa, James Cross, Onur Celebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation. CoRR, abs/2207.04672.
+ Yi Dong, Zhilin Wang, Makesh Narsimhan Sreedhar, Xianchao Wu, and Oleksii Kuchaiev. 2023. Steerlm: Attribute conditioned SFT as an (user-steerable) alternative to RLHF. CoRR, abs/2310.05344.
+ Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, Eleftherios Avramidis, Tom Kocmi, George Foster, Alon Lavie, and Andre F. T. Martins. 2022. Results of WMT22 metrics shared task: Stop using BLEU – neural metrics are better and more robust. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 46–68, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
+ Marjan Ghazvininejad, Hila Gonen, and Luke Zettlemoyer. 2023. Dictionary-based phrase-level prompting of large language models for machine translation. CoRR, abs/2302.07856.
+ Ian Hamilton, Nick Tawn, and David Firth. 2023. The many routes to the ubiquitous Bradley-Terry model. CoRR, abs/2312.13619.
+ Zhiwei He, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu, Shuming Shi, and Xing Wang. 2023. Exploring human-like translation strategy with large language models. CoRR, abs/2305.04118.
+ Joey Hejna, Rafael Rafailov, Harshit Sikchi, Chelsea Finn, Scott Niekum, W. Bradley Knox, and Dorsa Sadigh. 2023. Contrastive preference learning: Learning from human feedback without RL. CoRR, abs/2310.13639.
+ Amr Hendy, Mohamed Abdelrehim, Amr Sharaf, Vikas Raunak, Mohamed Gabr, Hitokazu Matsushita, Young Jin Kim, Mohamed Afify, and Hany Hassan Awadalla. 2023. How good are GPT models at machine translation? a comprehensive evaluation. arXiv preprint arXiv:2302.09210.
+ Christian Herold, Jan Rosendahl, Joris Vanvinckenroye, and Hermann Ney. 2022. Detecting various types of noise for neural machine translation. In *Findings of the Association for Computational Linguistics: ACL* 2022, Dublin, Ireland, May 22-27, 2022, pages 2542-2551. Association for Computational Linguistics.
+ Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
+ Tom Hosking, Phil Blunsom, and Max Bartolo. 2023. Human feedback is not gold standard. CoRR, abs/2309.16349.
+ Jian Hu, Li Tao, June Yang, and Chandler Zhou. 2023. Aligning language models with offline reinforcement learning from human feedback. CoRR, abs/2308.12050.
+ Shengyi Huang, Rousslan Fernand Julien Dossa, Antonin Raffin, Anssi Kanervisto, and Weixun Wang. 2022. The 37 implementation details of proximal policy optimization. In ICLR Blog Track. https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/.
+ Panagiotis G Ipeirotis, Foster Provost, and Jing Wang. 2010. Quality management on Amazon Mechanical Turk. In Proceedings of the ACM SIGKDD workshop on human computation, pages 64-67.
+ Riashat Islam, Peter Henderson, Maziar Gomrokchi, and Doina Precup. 2017. Reproducibility of benchmarked deep reinforcement learning tasks for continuous control. CoRR, abs/1708.04133.
+ Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. CoRR, abs/2310.06825.
+ Wenxiang Jiao, Jen-tse Huang, Wenxuan Wang, Xing Wang, Shuming Shi, and Zhaopeng Tu. 2023a. Parrot: Translating during chat using large language models. arXiv preprint arXiv:2304.02426.
+ Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. 2023b. Is ChatGPT a good translator? yes with GPT-4 as the engine. arXiv preprint arXiv:2301.08745.
+ Huda Khayrallah and Philipp Koehn. 2018. On the impact of various types of noise on neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, NMT@ACL 2018, Melbourne, Australia, July 20, 2018, pages 74-83. Association for Computational Linguistics.
+ Tom Kocmi, Rachel Bawden, Ondrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Rebecca Knowles, Philipp Koehn, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Michal Novak, Martin Popel, and Maja Popovic. 2022. Findings of the 2022 conference on machine translation (WMT22). In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 1-45, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
+ R. Duncan Luce. 1959. Individual choice behaviour. John Wiley.
+ Jean Maillard, Cynthia Gao, Elahe Kalbassi, Kaushik Ram Sadagopan, Vedanuj Goswami, Philipp Koehn, Angela Fan, and Francisco Guzman. 2023. Small data, big impact: Leveraging minimal data for effective machine translation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2740-2756, Toronto, Canada. Association for Computational Linguistics.
+ Lucas Maystre and Matthias Grossglauser. 2015. Fast and accurate inference of Plackett-Luce models. In Advances in Neural Information Processing Systems, volume 28.
+ Frederick Mosteller. 1951. Remarks on the method of paired comparisons: I. the least squares solution assuming equal standard deviations and equal correlations. Psychometrika, 16(1):3-9.
+ Yongyu Mu, Abudurexiti Reheman, Zhiquan Cao, Yuchun Fan, Bei Li, Yinqiao Li, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. 2023. Augmenting large language model translators via translation memories. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10287-10299, Toronto, Canada. Association for Computational Linguistics.
+ Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M. Saiful Bari, Sheng Shen, Zheng Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev, Alham Fikri Aji, Khalid Almubarak, Samuel Albanie, Zaid Alyafeai, Albert Webson, Edward Raff, and Colin Raffel. 2023. Crosslingual generalization through multitask finetuning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 15991-16111. Association for Computational Linguistics.
+ OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774.
+ Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 3953-3962. PMLR.
+ Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730-27744. Curran Associates, Inc.
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
+ Robin L Plackett. 1975. The analysis of permutations. Journal of the Royal Statistical Society Series C: Applied Statistics, 24(2):193-202.
+ Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. CoRR, abs/2305.18290.
+ Ricardo Rei, José G. C. de Souza, Duarte M. Alves, Chrysoula Zerva, Ana C. Farinha, Taisiya Glushkova, Alon Lavie, Luísa Coheur, and André F. T. Martins. 2022. COMET-22: Unbabel-IST 2022 submission for the metrics shared task. In Proceedings of the Seventh Conference on Machine Translation, WMT 2022, Abu Dhabi, United Arab Emirates (Hybrid), December 7-8, 2022, pages 578-585. Association for Computational Linguistics.
+ Joshua David Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. 2021. Contrastive learning with hard negative samples. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
+ Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Galle, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoit Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunj Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, and et al. 2022. BLOOM: A 176b-parameter open-access multilingual language model. CoRR, abs/2211.05100.
+ John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. CoRR, abs/1707.06347.
+ Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. 2023. Preference ranking optimization for human alignment. CoRR, abs/2306.17492.
+ Saurabh Srivastava, Chengyue Huang, Weiguo Fan, and Ziyu Yao. 2023. Instance needs more care: Rewriting prompts for instances yields better zero-shot performance. CoRR, abs/2310.02107.
+ Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca.
+ Louis L Thurstone. 1927. Psychophysical analysis. The American journal of psychology, 38(3):368-389.
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. LLaMA: Open and efficient foundation language models. CoRR, abs/2302.13971.
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288.
+ Kenneth Train. 2003. Discrete Choice Method With Simulation. Cambridge University Press.
+ Haoran Xu, Young Jin Kim, Amr Sharaf, and Hany Hassan Awadalla. 2023. A paradigm shift in machine translation: Boosting translation performance of large language models. CoRR, abs/2309.11674.
+ Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. 2023. RRHF: rank responses to align language models with human feedback without tears. CoRR, abs/2304.05302.
+ Jiali Zeng, Fandong Meng, Yongjing Yin, and Jie Zhou. 2023. Tim: Teaching large language models to translate with comparison. arXiv preprint arXiv:2307.04408.
+ Biao Zhang, Barry Haddow, and Alexandra Birch. 2023a. Prompting large language model for machine translation: A case study. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 41092-41110. PMLR.
+ Miaoran Zhang, Vagrant Gautam, Mingyang Wang, Jesujoba O. Alabi, Xiaoyu Shen, Dietrich Klakow, and Marius Mosbach. 2024. The impact of demonstrations on multilingual in-context learning: A multidimensional analysis.
+ Xuan Zhang, Navid Rajabi, Kevin Duh, and Philipp Koehn. 2023b. Machine translation with large language models: Prompting, few-shot learning, and fine-tuning with QLoRA. In Proceedings of the Eighth Conference on Machine Translation, pages 466-479, Singapore. Association for Computational Linguistics.
+ Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. CoRR, abs/2306.05685.
+ Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2023. Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206.
+ Dawei Zhu, Xiaoyu Shen, Marius Mosbach, Andreas Stephan, and Dietrich Klakow. 2023. Weaker than you think: A critical look at weakly supervised learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14229-14253, Toronto, Canada. Association for Computational Linguistics.
+
+ # A Incorporating multiple preferences with distance information
+
+ In Section 3.2, we demonstrated how the distance information of two preferences can be integrated into preference modeling, as illustrated in Equation 9. A similar analysis can be done for the Plackett-Luce ranking model to incorporate distance metrics across multiple preferences. Specifically, we model the probability of a particular ordering $X_{1},\dots ,X_{L}$ as follows:
+
+ $$
+ P(X_1 \geq X_2 \geq \dots \geq X_L) = \prod_{i=1}^{L-1} P_i(X_i > X_j,\ \forall j > i)
+ $$
+
+ For each distribution $P_{i}$, let $X_{j} = s_{j} + \varepsilon_{j}$ for $j\geq i$, with the $\varepsilon_{j}$ independent standard Gumbel variables, so that (following Train (2003), Section 3)
+
+ $$
+ P_i(X_i > X_j,\ \forall j > i) = \frac{e^{s_i}}{\sum_{j \geq i} e^{s_j}}
+ $$
+
+ This ranking can be interpreted as a sequence of $L - 1$ independent choices: choose the first item, then choose the second among the remaining alternatives, etc. (Maystre and Grossglauser, 2015). It is usually assumed that each independent choice is made by the same judge whose underlying preferences do not change. If we assume $s_j = \log \pi_\theta(x, y^j)$ for this judge then Equation 4 results.
+
+ Suppose instead that, rather than a single judge, a succession of $L - 1$ different judges each make one of the sequence of independent choices. The distributions $P_{i}$ should change to reflect the changing preferences of the judges. In particular, if we introduce the preference distances $d_{i}^{j}$ for the $i^{th}$ judge, then we obtain Equation 5 if for each $P_{i}$ the location parameters are set to $s_{j} = d_{i}^{j}\log \pi_{\theta}(x,y^{j})$ for $j\geq i$. We find that this modified version of the Plackett-Luce model can work well in practice, although we note that these modifications may violate Luce's Choice Axiom (Luce, 1959; Hamilton et al., 2023).
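+ As a concrete sketch, the distance-weighted Plackett-Luce negative log-likelihood described above can be computed as follows. This is a minimal Python illustration, not the paper's implementation; the function and argument names are ours.

```python
import math

def plackett_luce_nll(log_probs, distances=None):
    """Negative log-likelihood of one ranking under the (distance-weighted)
    Plackett-Luce model. log_probs[j] holds log pi_theta(x, y^j) for the
    j-th ranked translation (best first); distances[i][j] is the weight
    d_i^j used by the i-th judge. distances=None recovers the standard model."""
    L = len(log_probs)
    nll = 0.0
    for i in range(L - 1):
        # location parameters s_j = d_i^j * log pi_theta(x, y^j) for j >= i
        s = [(distances[i][j] if distances is not None else 1.0) * log_probs[j]
             for j in range(i, L)]
        # log-probability of choosing item i among the remaining alternatives
        log_norm = math.log(sum(math.exp(v) for v in s))
        nll -= s[0] - log_norm
    return nll
```

With all distances set to 1 this reduces to the standard Plackett-Luce objective (Equation 4); passing the ground-truth preference distances gives the weighted variant (Equation 5).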
+
+ Consider the case of $L = 3$. The Choice Axiom requires that the odds of choosing $X_{2}$ over $X_{3}$ be independent of the presence of $X_{1}$ as an option, i.e., the odds should not depend on whether this is a choice for the first or the second position:
+
+ $$
+ \frac{P_1(X_2 > X_j,\ j = 1, 3)}{P_1(X_3 > X_j,\ j = 1, 2)} = \frac{P_2(X_2 > X_3)}{P_2(X_3 > X_2)}
+ $$
+
+ With the location parameters from above, the Choice Axiom requires
+
+ $$
+ \frac{\pi_\theta(x, y^2)^{d_1^2}}{\pi_\theta(x, y^3)^{d_1^3}} = \frac{\pi_\theta(x, y^2)^{d_2^2}}{\pi_\theta(x, y^3)^{d_2^3}}
+ $$
+
+ or that $\pi_{\theta}(x,y^{2})^{(d_{1}^{2} - d_{2}^{2})} = \pi_{\theta}(x,y^{3})^{(d_{1}^{3} - d_{2}^{3})}$ . This holds for the default setting, $d_{i}^{j} = 1$ , leading to Equation 4, but appears not to hold in general.
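+ The violation is easy to check numerically. The snippet below compares the two odds ratios of the $L = 3$ example; the probability values and distances are invented purely for illustration.

```python
def choice_axiom_gap(pi2, pi3, d1, d2):
    """Difference between the two odds ratios in the L = 3 example;
    zero iff the Choice Axiom constraint holds for these distances.
    d1 and d2 map item index -> preference distance for judges 1 and 2."""
    odds_first = pi2 ** d1[2] / pi3 ** d1[3]    # choice for the first position
    odds_second = pi2 ** d2[2] / pi3 ** d2[3]   # choice for the second position
    return odds_first - odds_second

# in the default setting d_i^j = 1 the two odds ratios coincide
print(choice_axiom_gap(0.4, 0.2, {2: 1, 3: 1}, {2: 1, 3: 1}))  # prints 0.0
```

With unequal distances, e.g. `d1 = {2: 1, 3: 2}`, the gap is nonzero, matching the observation that the modified model generally violates the Choice Axiom.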
+
+ We find that the ground truth preference values can be introduced as preference distances in the binary comparison case, but that doing so in the more general case, while useful, may not satisfy the Choice Axiom.
+
+ # B More details on MAPLE
+
+ # B.1 Data Construction
+
+ The source sentences in the training data of MAPLE are sampled from the test sets of WMT20 and WMT21. As mentioned in Section 4, four of the five translations are produced by VicunaMT. Since VicunaMT is already a strong MT system that often produces accurate, mistake-free translations, randomly selecting source sentences from the WMT data would predominantly yield translations that are trivial for VicunaMT, resulting in many uninformative samples with high human preference scores. To mitigate this, we prioritize source sentences that present difficulties for VicunaMT. Specifically, we use reference translations as a proxy to assess the quality of the model translations through COMET scores. We give priority to samples where the beam search output falls within a COMET score range of [75, 85] and where there is a large standard deviation in COMET scores among the four translations. Following these criteria, we select 1.1K samples for each translation direction. For the development set in MAPLE, we use monolingual data from News Crawl 2022. The sampling and selection process is the same as for the training set, except that reference translations are unavailable; instead, we use a strong commercial MT system to generate pseudo "reference" translations.
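+ The selection heuristic above can be sketched as follows. This is a simplified illustration; the field names and the exact ranking rule are our own, with the COMET range taken from the description.

```python
import statistics

def select_hard_samples(samples, low=75.0, high=85.0):
    """Prioritise source sentences that are hard for the MT system.
    Each sample holds the COMET score of the beam-search output
    ("beam_comet") and the COMET scores of the four sampled
    translations ("comet_scores"); both field names are illustrative."""
    # keep only samples whose beam-search output is of middling quality
    pool = [s for s in samples if low <= s["beam_comet"] <= high]
    # rank by disagreement among the four sampled translations
    pool.sort(key=lambda s: statistics.stdev(s["comet_scores"]), reverse=True)
    return pool
```

The top of the returned list would then be annotated first, so the 1.1K selected samples per direction concentrate on informative, non-trivial sentences.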
+
+ # B.2 Scoring Rubric
+
+ The annotators are asked to judge each translation on a scale of 1 to 6, following the scoring rubric below. They can assign scores in increments of 0.2, allowing for more fine-grained assessments, such as 1.2, 1.4, and so on.
+
+ - Score it a 1 when the translation has nothing to do with the source; or when the translation has many unknown words; or when the translation looks like word salad.
+ - Score it a 2 when you can understand why some of the words in the translation are there, but when the meaning of the source sentence is lost.
+ - Score it a 3 when you understand why all or almost all the words in the translation are there and when some of the meaning of the source sentence is adequately transferred into the target language, but when the main meaning of the source sentence is lost.
+ - Score it a 4 when the meaning of the source sentence is generally preserved, but when the translation is mechanical and possibly has vocabulary, grammatical, or date / numbering errors.
+ - Score it a 5 when the meaning of the source sentence is fully preserved and the translation has no grammatical errors, but when the translation does not sound like the translation a native target language speaker would produce given the style and register of the source sentence.
+ - Score it a 6 when the translation is perfect in every sense of the word – something a professional translator/interpreter who fully understands the context in which the source sentence was produced would come up with.
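+ As an illustration, a score on this rubric can be validated and combined across the two annotators as follows. These are hypothetical helpers, not part of the actual annotation tooling.

```python
def is_valid_score(x, lo=1.0, hi=6.0, step=0.2):
    """Check that a rubric score lies in [1, 6] on the 0.2 grid."""
    if not lo <= x <= hi:
        return False
    k = round((x - lo) / step)
    # tolerate floating-point noise when snapping to the grid
    return abs((lo + k * step) - x) < 1e-9

def final_score(a, b):
    """Average the scores the two annotators gave one translation."""
    assert is_valid_score(a) and is_valid_score(b)
    return (a + b) / 2
```

For example, `final_score(4.0, 5.0)` yields 4.5, the kind of averaged preference score stored in MAPLE.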
+
+ # B.3 Annotation UI
+
+ The UI shows the different translations in a blind and randomized order. All translations are scored simultaneously. A screenshot of the UI is shown in Figure 5.
+
+ # B.4 More Examples
+
+ Table 7 shows two additional examples in which the model's translation scores higher than the reference translation. This once again highlights the presence of noise in parallel datasets.
+
+ # C More implementation details
+
+ # C.1 Dataset statistics
+
+ The data statistics are presented in Table 8. We use different validation sets in different training stages because MAPLE contains a subset of the parallel data in WMT20/21.
+
+ <table><tr><td>Source</td><td>Other MPs criticised Twitter for allowing the tweets to remain visible.</td></tr><tr><td>Reference translation</td><td>其他议员也批评 Twitter 未能及时删贴。(Other MPs have also criticized Twitter for failing to promptly delete the tweets.)</td></tr><tr><td>Best of five translation</td><td>其他议员批评了推特允许这些推文仍然可见。(Other MPs have criticized Twitter for allowing these tweets to remain visible.)</td></tr><tr><td>Source</td><td>When he refused, the officials tipped his cart over, destroying all the eggs, the boy alleged.</td></tr><tr><td>Reference translation</td><td>男孩说,他拒绝交出100卢比后,那些官员就把他的小车掀翻,把所有鸡蛋砸碎。(The boy said that after he refused to hand over 100 rupees, the officials overturned his cart and smashed all the eggs.)</td></tr><tr><td>Best of five translation</td><td>当他拒绝时,官员将他的车子推倒,破坏了所有的蛋,男孩称。(When he refused, officials pushed his cart over and broke all the eggs, the boy said.)</td></tr></table>
+
+ Table 7: Two additional examples showing that the reference translation can be less accurate than the best model prediction.
+
+ # C.2 Prompt format
+
+ For each source sentence, we attach an MT instruction asking the LLM to generate the translation. The MT instructions come from an instruction pool based on the list of MT instructions released by Jiao et al. (2023a)<sup>7</sup>. We list all 31 instructions in our instruction pool in Table 9. During training (in both the SFT and PL stages), an instruction is randomly sampled from the instruction pool. During evaluation, the first instruction from Table 9 is always used. In addition to instructions, instruction-tuned models like Vicuna require specific prompt formats. Table 10 illustrates the conversion from raw data points to the final model input.
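+ A minimal sketch of this prompt construction for Vicuna-style models follows. The function is illustrative, only two of the 31 pool entries are shown, and the exact whitespace of the template is an assumption based on Table 10.

```python
import random

# two entries from the instruction pool (Table 9); [SRC]/[TGT] are
# placeholders for the source and target language names
INSTRUCTION_POOL = [
    "Translate the following text from [SRC] to [TGT]:",
    "Translate from [SRC] to [TGT]:",
]

def build_vicuna_prompt(src_lang, tgt_lang, source, train=True):
    """Assemble the final model input for Vicuna-style models: sample an
    instruction during training, always use the first one at evaluation."""
    instruction = random.choice(INSTRUCTION_POOL) if train else INSTRUCTION_POOL[0]
    instruction = instruction.replace("[SRC]", src_lang).replace("[TGT]", tgt_lang)
    return f"USER: {instruction} {source}\nASSISTANT:\n"
```

At evaluation time, `build_vicuna_prompt("English", "German", "Hello, world.", train=False)` reproduces the shape of the example in Table 10(b).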
+
+ # C.3 Hyper-parameter search
+
+ Hyper-parameter search is performed over $\beta \in \{0.0, 0.05, 0.1\}$, and the best value is selected according to the validation loss.
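+ The search itself is a small grid search, sketched below; `train_and_eval` is a stand-in for a full training run that returns the validation loss.

```python
def grid_search_beta(betas, train_and_eval):
    """Select beta by validation loss. `train_and_eval` is assumed to
    train the model with the given beta and return its validation loss."""
    losses = {beta: train_and_eval(beta) for beta in betas}
    return min(losses, key=losses.get)
```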
+
+ ![](images/a8e4dbad8bb21ba56f90c25d46a2fdef2a1a5ad951890058fdfef854b804f278.jpg)
+ Figure 5: User interface of translation assessment.
+
+ <table><tr><td rowspan="2"></td><td rowspan="2">Training stage</td><td rowspan="2">Data source</td><td colspan="4">Number of samples</td></tr><tr><td>de→en</td><td>en→de</td><td>en→zh</td><td>zh→en</td></tr><tr><td rowspan="4">Training</td><td rowspan="3">SFT stage</td><td>WMT17</td><td>3004</td><td>3004</td><td>2001</td><td>2001</td></tr><tr><td>WMT18</td><td>2998</td><td>2998</td><td>3981</td><td>3981</td></tr><tr><td>WMT19</td><td>2000</td><td>1997</td><td>1997</td><td>2000</td></tr><tr><td>PL stage</td><td>MAPLE</td><td>1100</td><td>1100</td><td>1100</td><td>1100</td></tr><tr><td rowspan="2">Validation</td><td>SFT stage</td><td>WMT21</td><td>1000</td><td>1002</td><td>1002</td><td>1948</td></tr><tr><td>PL stage</td><td>WMT20 &amp; 21*</td><td>500</td><td>500</td><td>500</td><td>500</td></tr><tr><td rowspan="2">Test</td><td rowspan="2">-</td><td>WMT22</td><td>1984</td><td>2037</td><td>2037</td><td>1875</td></tr><tr><td>FLORES-200</td><td>1012</td><td>1012</td><td>1012</td><td>1012</td></tr><tr><td>Preference testing</td><td>-</td><td>MAPLE-dev</td><td>217</td><td>195</td><td>208</td><td>180</td></tr></table>
+
+ Table 8: Datasets used for training, validation and testing. *: a subset of WMT20 and WMT21 is used.
+
+ # C.4 Evaluation packages
+
+ We use the Unbabel/wmt22-comet-da model to compute COMET scores and the sacreBLEU package to compute BLEU scores. The sacreBLEU signature is nrefs:1, case:mixed, eff:no, tok:13a, smooth:exp, version:2.0.0 for all translation directions except en→zh, for which we use tok:zh.
+
+ # C.5 Hardware specifications and runtime
+
+ All experiments are run on a host with either eight NVIDIA A100-40GB GPUs or eight H100-80GB GPUs. Mixed precision with bfloat16 is used in both SFT and PL. Deepspeed<sup>10</sup> ZeRO stage 3 is used when running PL with five preference samples. Each experiment runs for no longer than 15 minutes on H100 GPUs.
+
+ # D SFT results in BLEU score
+
+ We present model performance after the SFT stage measured by BLEU score in Table 11. While the general trend remains consistent with the performance evaluated by COMET, there are some exceptions. For example, although VicunaMT still achieves the top average score on FLORES-200, it is outperformed by MistralMT (i.e., Mistral + SFT) on WMT22.
+
+ # E Model comparison in BLEU score
+
+ We present model performance measured by BLEU score in Table 12. In this case, there is no clear
451
+
452
+ # Instruction pool
453
+
454
+ <table><tr><td>Translate the following text from [SRC] to [TGT]:</td></tr><tr><td>Please provide the [TGT] translation for the following text</td></tr><tr><td>Convert the subsequent sentences from [SRC] into [TGT]:</td></tr><tr><td>Render the listed sentences in [TGT] from their original [SRC] form:</td></tr><tr><td>Transform the upcoming sentences from [SRC] language to [TGT] language:</td></tr><tr><td>Translate the given text from [SRC] to [TGT]:</td></tr><tr><td>Turn the following sentences from their [SRC] version to the [TGT] version:</td></tr><tr><td>Adapt the upcoming text from [SRC] to [TGT]:</td></tr><tr><td>Transpose the next sentences from the [SRC] format to the [TGT] format.</td></tr><tr><td>Reinterpret the ensuing text from [SRC] to [TGT] language.</td></tr><tr><td>Modify the forthcoming sentences, converting them from [SRC] to [TGT].</td></tr><tr><td>What is the meaning of these sentences when translated to [TGT]?</td></tr><tr><td>In the context of [TGT], what do the upcoming text signify? 
The text is:</td></tr><tr><td>How would you express the meaning of the following sentences in [TGT]?</td></tr><tr><td>What is the significance of the mentioned sentences in [TGT]?</td></tr><tr><td>In [TGT], what do the following text convey?</td></tr><tr><td>When translated to [TGT], what message do these sentences carry?</td></tr><tr><td>What is the intended meaning of the ensuing sentences in [TGT]?</td></tr><tr><td>How should the following sentences be comprehended in [TGT]?</td></tr><tr><td>In terms of [TGT], what do the next sentences imply?</td></tr><tr><td>Kindly furnish the [TGT] translation of the subsequent sentences.</td></tr><tr><td>Could you supply the [TGT] translation for the upcoming sentences?</td></tr><tr><td>Please offer the [TGT] rendition for the following statements.</td></tr><tr><td>I&#x27;d appreciate it if you could present the [TGT] translation for the following text:</td></tr><tr><td>Can you deliver the [TGT] translation for the mentioned sentences?</td></tr><tr><td>Please share the [TGT] version of the given sentences.</td></tr><tr><td>It would be helpful if you could provide the [TGT] translation of the ensuing sentences.</td></tr><tr><td>Kindly submit the [TGT] interpretation for the next sentences.</td></tr><tr><td>Please make available the [TGT] translation for the listed sentences.</td></tr><tr><td>Can you reveal the [TGT] translation of the forthcoming sentences?</td></tr><tr><td>Translate from [SRC] to [TGT]:</td></tr></table>
+
+ Table 9: An instruction pool containing 31 MT prompts. An instruction is randomly sampled from this pool to form a training sample. At inference time, the first instruction is always used. [SRC] and [TGT] are replaced by the source and target language of the dataset, respectively.
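The sampling scheme described in the caption can be sketched as follows; the function name and the abbreviated three-prompt pool are illustrative stand-ins, not the paper's actual code:

```python
import random

# Abbreviated stand-in for the 31-prompt pool of Table 9.
INSTRUCTION_POOL = [
    "Translate the following text from [SRC] to [TGT]:",
    "Please provide the [TGT] translation for the following text",
    "Translate from [SRC] to [TGT]:",
]

def build_instruction(src_lang, tgt_lang, training, rng=random):
    """Sample a prompt at random during training; always use the first
    prompt at inference time. Then fill in the language placeholders."""
    template = rng.choice(INSTRUCTION_POOL) if training else INSTRUCTION_POOL[0]
    return template.replace("[SRC]", src_lang).replace("[TGT]", tgt_lang)
```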
+
+ winner. Interestingly, VicunaMT+PL attains lower BLEU scores than VicunaMT on en→de and zh→en when evaluated on WMT22. However, both the COMET scores and our human evaluation in Table 4 show the opposite, highlighting that BLEU scores may be less correlated with human judgment, as also noted by Freitag et al. (2022).
+
+ # F Data reuse in BLEU score and Results on FLORES-200
+
+ We reuse MAPLE to enhance BLOOMZMT and MistralInstructMT (i.e., BLOOMZ and MistralInstruct after the SFT stage) and report model performance on WMT22 in BLEU scores in Table 13. In addition, we evaluate MistralInstructMT on FLORES-200 and present the results in Table 14.
+
+ <table><tr><td>Model</td><td colspan="2">Instruction template</td></tr><tr><td>Vicuna</td><td>USER:</td><td>[MT Instruction] \nASSISTANT:\n</td></tr><tr><td>Mistral-Instruct</td><td>[INST]</td><td>[MT Instruction] \n[INST]</td></tr><tr><td>BLOOMZ</td><td>USER:</td><td>[MT Instruction] \nASSISTANT:\n</td></tr></table>
+
+ (a)
+
+ # Example
+
+ USER: Translate the following text from English to German: Hello, world.
+ \nASSISTANT:\n Hallo, Welt.
+
+ (b)
+ Table 10: (a) Instruction templates used for Vicuna, Mistral-Instruct, and BLOOMZ. The raw template is marked in red. BLOOMZ shares the same template as Vicuna at the SFT and PL stages. When evaluating BLOOMZ on zero-shot tasks, we directly use the first instruction from Table 9 without any instruction template. (b) An example that converts the raw input (marked in green) to the final input.
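The conversion shown in (b) can be sketched as a small helper; the function name is ours, and the exact whitespace around the target text is an assumption based on the example above:

```python
def build_chat_input(mt_instruction, source_text, target_text=None):
    """Wrap an MT instruction and source sentence in the Vicuna-style
    template of Table 10(a). Appending the reference translation turns
    the prompt into a training sample; omitting it gives the inference input."""
    prompt = f"USER: {mt_instruction} {source_text}\nASSISTANT:\n"
    if target_text is not None:
        prompt += f" {target_text}"
    return prompt
```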
+
+ <table><tr><td></td><td>de→en</td><td>en→de</td><td>en→zh</td><td>zh→en</td><td>Avg.</td></tr><tr><td colspan="6">WMT22</td></tr><tr><td>BLOOM</td><td>1.51</td><td>0.53</td><td>1.74</td><td>5.43</td><td>2.30</td></tr><tr><td>+SFT</td><td>23.73</td><td>16.15</td><td>35.15</td><td>21.64</td><td>24.17</td></tr><tr><td>BLOOMZ</td><td>21.59</td><td>6.79</td><td>28.72</td><td>18.54</td><td>18.91</td></tr><tr><td>+SFT</td><td>23.89</td><td>16.79</td><td>35.41</td><td>21.01</td><td>24.28</td></tr><tr><td>Mistral</td><td>4.32</td><td>2.65</td><td>4.93</td><td>7.01</td><td>4.73</td></tr><tr><td>+SFT</td><td>29.39</td><td>24.60</td><td>31.51</td><td>22.09</td><td>26.90</td></tr><tr><td>Mistral-Ins.</td><td>28.04</td><td>21.27</td><td>21.85</td><td>17.77</td><td>22.23</td></tr><tr><td>+SFT</td><td>28.26</td><td>24.61</td><td>31.90</td><td>20.60</td><td>26.35</td></tr><tr><td>LLaMA-1</td><td>6.30</td><td>4.00</td><td>0.88</td><td>3.01</td><td>3.55</td></tr><tr><td>+SFT</td><td>28.28</td><td>19.09</td><td>25.31</td><td>20.27</td><td>23.24</td></tr><tr><td>Vicuna</td><td>26.16</td><td>22.11</td><td>26.26</td><td>13.91</td><td>22.11</td></tr><tr><td>+SFT</td><td>29.26</td><td>25.70</td><td>29.98</td><td>20.61</td><td>26.39</td></tr><tr><td 
colspan="6">FLORES-200</td></tr><tr><td>BLOOM</td><td>3.88</td><td>1.48</td><td>7.00</td><td>3.75</td><td>4.03</td></tr><tr><td>+SFT</td><td>31.85</td><td>16.26</td><td>34.66</td><td>23.78</td><td>26.64</td></tr><tr><td>Mistral</td><td>3.58</td><td>1.37</td><td>0.16</td><td>1.06</td><td>1.54</td></tr><tr><td>+SFT</td><td>40.48</td><td>29.18</td><td>29.43</td><td>24.67</td><td>30.94</td></tr><tr><td>Mistral-Ins.</td><td>36.81</td><td>25.64</td><td>19.81</td><td>19.25</td><td>25.38</td></tr><tr><td>+SFT</td><td>39.16</td><td>27.79</td><td>29.77</td><td>23.10</td><td>29.96</td></tr><tr><td>LLaMA-1</td><td>4.08</td><td>2.80</td><td>1.73</td><td>1.60</td><td>2.55</td></tr><tr><td>+SFT</td><td>40.70</td><td>29.95</td><td>20.21</td><td>20.66</td><td>27.88</td></tr><tr><td>Vicuna</td><td>35.07</td><td>26.86</td><td>26.09</td><td>17.53</td><td>26.39</td></tr><tr><td>+SFT</td><td>41.90</td><td>30.63</td><td>28.52</td><td>23.34</td><td>31.10</td></tr></table>
+
+ Table 11: Model performance (in BLEU scores) before and after performing SFT on parallel data. Rows in blue indicate instruction-tuned LLMs. Best results are in bold. Instruction-tuned LLMs perform well even without SFT; raw LLMs benefit the most from SFT. We exclude BLOOMZ on FLORES-200 because FLORES-200 is part of BLOOMZ's training data.
+
+ <table><tr><td rowspan="2">System</td><td colspan="5">WMT22</td><td colspan="5">FLORES-200</td></tr><tr><td>de→en</td><td>en→de</td><td>en→zh</td><td>zh→en</td><td>Avg.</td><td>de→en</td><td>en→de</td><td>en→zh</td><td>zh→en</td><td>Avg.</td></tr><tr><td colspan="11">Commercial &amp; LLaMA-2-7B based MT systems</td></tr><tr><td>ChatGPT(3.5-turbo-0613)</td><td>33.13</td><td>33.56</td><td>44.59</td><td>25.63</td><td>31.62</td><td>43.06</td><td>40.07</td><td>45.69</td><td>25.57</td><td>36.55</td></tr><tr><td>GPT-4(gpt-4-0613)</td><td>33.72</td><td>34.84</td><td>42.75</td><td>26.33</td><td>34.41</td><td>43.79</td><td>41.81</td><td>46.10</td><td>27.39</td><td>39.77</td></tr><tr><td>ALMA-7B(LLaMA-2)</td><td>29.49</td><td>30.31</td><td>36.48</td><td>23.52</td><td>29.95</td><td>-⊗</td><td>-⊗</td><td>-⊗</td><td>-⊗</td><td>-⊗</td></tr><tr><td colspan="11">BLOOMZ-mt-7B based LLMs</td></tr><tr><td>ParroT(BLOOMZ-mt)</td><td>24.90</td><td>20.50</td><td>34.50</td><td>22.70</td><td>25.65</td><td>-*</td><td>-*</td><td>-*</td><td>-*</td><td>-*</td></tr><tr><td>TIM(BLOOMZ-mt)</td><td>24.31</td><td>20.63</td><td>37.20</td><td>23.42</td><td>26.39</td><td>-*</td><td>-*</td><td>-*</td><td>-*</td><td>-*</td></tr><tr><td>SWIE(BLOOMZ-mt)</td><td>25.95</td><td>21.83</td><td>36.88</td><td>23.33</td><td>27.00</td><td>-*</td><td>-*</td><td>-*</td><td>-*</td><td>-*</td></tr><tr><td colspan="11">LLaMA-1-7B based 
LLMs</td></tr><tr><td>ParroT(LLaMA-1)</td><td>27.30</td><td>26.10</td><td>30.30</td><td>20.20</td><td>25.98</td><td>39.40</td><td>30.70</td><td>29.10</td><td>21.30</td><td>32.38</td></tr><tr><td>TIM(LLaMA-1)</td><td>27.91</td><td>25.02</td><td>30.07</td><td>19.33</td><td>25.58</td><td>39.15</td><td>29.31</td><td>28.43</td><td>22.30</td><td>29.80</td></tr><tr><td>SWIE(LLaMA-1)</td><td>30.48</td><td>27.10</td><td>31.08</td><td>21.19</td><td>27.47</td><td>40.20</td><td>31.41</td><td>29.07</td><td>21.59</td><td>30.57</td></tr><tr><td>VicunaMT(LLaMA-1)</td><td>29.26</td><td>25.70</td><td>29.98</td><td>20.61</td><td>26.39</td><td>41.90</td><td>30.63</td><td>28.52</td><td>23.34</td><td>31.10</td></tr><tr><td>+ REF</td><td>31.12</td><td>24.72</td><td>30.07</td><td>20.38</td><td>26.58</td><td>39.03</td><td>29.36</td><td>28.87</td><td>22.84</td><td>30.03</td></tr><tr><td>+ BEST</td><td>29.44</td><td>24.93</td><td>30.91</td><td>20.39</td><td>26.16</td><td>41.29</td><td>29.34</td><td>30.07</td><td>23.48</td><td>31.05</td></tr><tr><td>+ PL</td><td>30.63</td><td>24.63</td><td>31.52</td><td>20.44</td><td>26.81</td><td>40.07</td><td>29.33</td><td>30.50</td><td>21.99</td><td>30.47</td></tr></table>
+
+ Table 12: Model performance in BLEU scores. Best results with LLaMA-1-based models are in bold. $\otimes$: LLaMA-2-based models were not evaluated due to license constraints. WMT22 results are extracted from the original papers. *: BLOOMZ-family models use FLORES-200 for training.
+
+ <table><tr><td></td><td colspan="5">WMT22</td></tr><tr><td></td><td>de→en</td><td>en→de</td><td>en→zh</td><td>zh→en</td><td>Avg.</td></tr><tr><td>BLOOMZ†</td><td>23.89</td><td>16.79</td><td>35.41</td><td>21.01</td><td>24.28</td></tr><tr><td>+REF</td><td>24.51</td><td>15.26</td><td>33.43</td><td>21.80</td><td>23.75</td></tr><tr><td>+BEST</td><td>23.80</td><td>16.33</td><td>34.99</td><td>21.49</td><td>24.15</td></tr><tr><td>+PL</td><td>24.84</td><td>16.81</td><td>36.48</td><td>23.15</td><td>25.32</td></tr><tr><td>Mistral-Ins.†</td><td>28.26</td><td>24.61</td><td>31.90</td><td>20.60</td><td>26.35</td></tr><tr><td>+REF</td><td>30.94</td><td>25.62</td><td>31.66</td><td>21.52</td><td>27.44</td></tr><tr><td>+BEST</td><td>29.76</td><td>24.30</td><td>31.12</td><td>20.83</td><td>26.50</td></tr><tr><td>+PL</td><td>29.32</td><td>24.78</td><td>33.00</td><td>21.76</td><td>27.47</td></tr></table>
+
+ Table 13: Model performance on WMT22 in BLEU scores. Best results are in bold. †: SFT stage has already been applied to these models.
+
+ <table><tr><td></td><td colspan="5">FLORES-200</td></tr><tr><td></td><td>de→en</td><td>en→de</td><td>en→zh</td><td>zh→en</td><td>Avg.</td></tr><tr><td colspan="6">COMET</td></tr><tr><td>Mistral-Ins.†</td><td>88.21</td><td>83.73</td><td>82.41</td><td>84.77</td><td>84.78</td></tr><tr><td>+REF</td><td>88.10</td><td>85.04</td><td>83.59</td><td>84.74</td><td>85.37</td></tr><tr><td>+BEST</td><td>88.41</td><td>84.55</td><td>83.46</td><td>84.94</td><td>85.34</td></tr><tr><td>+PL</td><td>88.56</td><td>84.98</td><td>83.86</td><td>85.34</td><td>85.67</td></tr><tr><td colspan="6">BLEU</td></tr><tr><td>Mistral-Ins.†</td><td>39.16</td><td>27.79</td><td>29.77</td><td>23.10</td><td>29.96</td></tr><tr><td>+REF</td><td>38.10</td><td>28.39</td><td>31.24</td><td>23.09</td><td>30.21</td></tr><tr><td>+BEST</td><td>39.35</td><td>28.33</td><td>30.46</td><td>22.98</td><td>30.28</td></tr><tr><td>+PL</td><td>39.80</td><td>27.97</td><td>31.00</td><td>23.44</td><td>30.55</td></tr></table>
+
+ Table 14: Model performance on FLORES-200 in COMET and BLEU scores. Best results are in bold. †: SFT stage has already been applied to these models.
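BLEU scores like those in Tables 11-14 are typically computed with a toolkit such as sacreBLEU. The following is only a simplified, single-reference corpus BLEU to make the metric concrete, not the evaluation code used in the paper:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus-level BLEU with uniform weights and one reference per segment."""
    clipped = [0] * max_n   # clipped n-gram matches per order
    totals = [0] * max_n    # hypothesis n-gram counts per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_counts = Counter(ngrams(h, n))
            r_counts = Counter(ngrams(r, n))
            clipped[n - 1] += sum(min(c, r_counts[g]) for g, c in h_counts.items())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(clipped) == 0:  # no overlap at some order: BLEU is zero
        return 0.0
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    brevity = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * brevity * math.exp(log_prec)
```

Real evaluations should use sacreBLEU, which additionally standardizes tokenization and reports a reproducible signature; this sketch skips both.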
2024/A Preference-driven Paradigm for Enhanced Translation with Large Language Models/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc85f60fdec5a68500d69f3199ee4a4f88518e44ebc8bcd5cf752319a72fe536
+ size 1351475
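The three lines above are a Git LFS pointer file: for large binaries the repository stores only this `version`/`oid`/`size` record, so the diff shows the pointer rather than the binary itself. A minimal parser sketch (the function name is ours):

```python
def parse_lfs_pointer(pointer_text):
    """Parse a Git LFS pointer file (one 'key value' pair per line) into a dict."""
    fields = {}
    for line in pointer_text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer committed for images.zip above.
info = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:fc85f60fdec5a68500d69f3199ee4a4f88518e44ebc8bcd5cf752319a72fe536\n"
    "size 1351475"
)
```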
2024/A Preference-driven Paradigm for Enhanced Translation with Large Language Models/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Pretrainer’s Guide to Training Data_ Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity/f3255e0a-3bee-47d2-8ffc-17ec6292afa6_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Pretrainer’s Guide to Training Data_ Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity/f3255e0a-3bee-47d2-8ffc-17ec6292afa6_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Pretrainer’s Guide to Training Data_ Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity/f3255e0a-3bee-47d2-8ffc-17ec6292afa6_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e580f6f10da3b56b26cedc96f58f580fb465425ec0a8d88269fe63ad1ceb47b7
+ size 710510
2024/A Pretrainer’s Guide to Training Data_ Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Pretrainer’s Guide to Training Data_ Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2b6a3d2c7e82d4af0778060aafeb1d94250a2c0e0a340ac18390b2eb1ce340d6
+ size 2087877
2024/A Pretrainer’s Guide to Training Data_ Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution/3de756a6-bbf7-4cfb-ae83-0ff5c60e7686_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution/3de756a6-bbf7-4cfb-ae83-0ff5c60e7686_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution/3de756a6-bbf7-4cfb-ae83-0ff5c60e7686_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a2bb8dd812205c19dc48cac30ff432e0cb4b29d537d52c40e194d7ee576371b
+ size 4525356
2024/A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5de3000d999b9fcd7cc93d6168ceff9fe041d7c38b9b36434c4c0736c4ac0da8
+ size 639583
2024/A Rationale-centric Counterfactual Data Augmentation Method for Cross-Document Event Coreference Resolution/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality/8156c041-3083-4221-9fa2-c2116611fd69_content_list.json ADDED
@@ -0,0 +1,1945 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 169,
8
+ 83,
9
+ 826,
10
+ 121
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Maja Stahl<sup>1</sup>, Nadine Michel<sup>2</sup>, Sebastian Kilsbach<sup>2</sup>, Julian Schmidtke<sup>1</sup>, Sara Rezat<sup>2</sup>, and Henning Wachsmuth<sup>1</sup>",
17
+ "bbox": [
18
+ 243,
19
+ 130,
20
+ 752,
21
+ 162
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "<sup>1</sup>Leibniz University Hannover, Institute of Artificial Intelligence",
28
+ "bbox": [
29
+ 240,
30
+ 164,
31
+ 761,
32
+ 181
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "$^{2}$ Paderborn University, Institute for German Language and Comparative Literature",
39
+ "bbox": [
40
+ 168,
41
+ 181,
42
+ 833,
43
+ 198
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "{m.stahl,h.wachsmuth}@ai.uni-hannover.de, julian.schmidtke@stud.uni-hannover.de",
50
+ "bbox": [
51
+ 200,
52
+ 200,
53
+ 800,
54
+ 212
55
+ ],
56
+ "page_idx": 0
57
+ },
58
+ {
59
+ "type": "text",
60
+ "text": "{nadine.michel, sebastian.kilsbach, sara.rezat}@uni-paderborn.de",
61
+ "bbox": [
62
+ 257,
63
+ 217,
64
+ 742,
65
+ 229
66
+ ],
67
+ "page_idx": 0
68
+ },
69
+ {
70
+ "type": "text",
71
+ "text": "Abstract",
72
+ "text_level": 1,
73
+ "bbox": [
74
+ 260,
75
+ 252,
76
+ 339,
77
+ 266
78
+ ],
79
+ "page_idx": 0
80
+ },
81
+ {
82
+ "type": "text",
83
+ "text": "Learning argumentative writing is challenging. Besides writing fundamentals such as syntax and grammar, learners must select and arrange argument components meaningfully to create high-quality essays. To support argumentative writing computationally, one step is to mine the argumentative structure. When combined with automatic essay scoring, interactions of the argumentative structure and quality scores can be exploited for comprehensive writing support. Although studies have shown the usefulness of using information about the argumentative structure for essay scoring, no argument mining corpus with ground-truth essay quality annotations has been published yet. Moreover, none of the existing corpora contain essays written by school students specifically. To fill this research gap, we present a German corpus of 1,320 essays from school students of two age groups. Each essay has been manually annotated for argumentative structure and quality on multiple levels of granularity. We propose baseline approaches to argument mining and essay scoring, and we analyze interactions between both tasks, thereby laying the ground for quality-oriented argumentative writing support.",
84
+ "bbox": [
85
+ 141,
86
+ 279,
87
+ 460,
88
+ 650
89
+ ],
90
+ "page_idx": 0
91
+ },
92
+ {
93
+ "type": "text",
94
+ "text": "1 Introduction",
95
+ "text_level": 1,
96
+ "bbox": [
97
+ 114,
98
+ 661,
99
+ 258,
100
+ 676
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "text",
106
+ "text": "Writing argumentative texts, in particular argumentative essays, constitutes an essential part of school students' writing education. However, learning to write arguments of high quality can be challenging (Zhu, 2001; Ferretti et al., 2007; Peloghitis, 2017; Alexander et al., 2023). It requires various skills, from writing fundamentals, such as syntax and grammar, to argumentation-specific skills, such as meaningfully organizing and structuring arguments and counter-considerations (Rezat, 2011). This takes time and effort to master (Ka-kan-dee and Kaur, 2014; Dang et al., 2020). Given teachers' limited time to give students feedback on their writing, automatic argumentative writing support could",
107
+ "bbox": [
108
+ 112,
109
+ 687,
110
+ 489,
111
+ 912
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "image",
117
+ "img_path": "images/e3a5db9bb6469fbbb0a1b29c4febaca96c341f9b6c938e09c55a0a56e5bfd8ff.jpg",
118
+ "image_caption": [
119
+ "Figure 1: Exemplary annotated school student essay on the use of school funding, taken from our corpus. The text is from the FD-LEX corpus (Becker-Mrotzek and Grabowski, 2018), translated from German for display."
120
+ ],
121
+ "image_footnote": [],
122
+ "bbox": [
123
+ 509,
124
+ 249,
125
+ 884,
126
+ 494
127
+ ],
128
+ "page_idx": 0
129
+ },
130
+ {
131
+ "type": "text",
132
+ "text": "benefit students as it offers guidance at their own pace and convenience (Wambsganss et al., 2022a).",
133
+ "bbox": [
134
+ 507,
135
+ 585,
136
+ 882,
137
+ 615
138
+ ],
139
+ "page_idx": 0
140
+ },
141
+ {
142
+ "type": "text",
143
+ "text": "Argumentative writing support systems employ argument mining to analyze input texts (Stab, 2017; Wambsganss and Niklaus, 2022; Weber et al., 2023), that is, computational methods that identify argumentative components and their relations. Common components are major claim (main standpoint of the text, also known as thesis), claim (controversial statement), and premise (reason for justifying or refuting the claim) along with their argumentative relations support and attack. This knowledge enables the systems to give feedback on the structure of a text, e.g., by highlighting unwarranted claims (Stab and Gurevych, 2017a), or by analyzing the number of argumentative components quantitatively (Stab and Gurevych, 2017a; Wambsganss and Niklaus, 2022; Weber et al., 2023).",
144
+ "bbox": [
145
+ 505,
146
+ 618,
147
+ 882,
148
+ 873
149
+ ],
150
+ "page_idx": 0
151
+ },
152
+ {
153
+ "type": "text",
154
+ "text": "Unlike argument mining, automated essay scoring explicitly evaluates essay quality, either holis",
155
+ "bbox": [
156
+ 507,
157
+ 875,
158
+ 882,
159
+ 906
160
+ ],
161
+ "page_idx": 0
162
+ },
163
+ {
164
+ "type": "page_number",
165
+ "text": "2661",
166
+ "bbox": [
167
+ 480,
168
+ 928,
169
+ 517,
170
+ 940
171
+ ],
172
+ "page_idx": 0
173
+ },
174
+ {
175
+ "type": "footer",
176
+ "text": "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics:",
177
+ "bbox": [
178
+ 139,
179
+ 945,
180
+ 857,
181
+ 957
182
+ ],
183
+ "page_idx": 0
184
+ },
185
+ {
186
+ "type": "footer",
187
+ "text": "Human Language Technologies (Volume 1: Long Papers), pages 2661-2674",
188
+ "bbox": [
189
+ 267,
190
+ 958,
191
+ 729,
192
+ 971
193
+ ],
194
+ "page_idx": 0
195
+ },
196
+ {
197
+ "type": "footer",
198
+ "text": "June 16-21, 2024 ©2024 Association for Computational Linguistics",
199
+ "bbox": [
200
+ 290,
201
+ 972,
202
+ 705,
203
+ 985
204
+ ],
205
+ "page_idx": 0
206
+ },
207
+ {
208
+ "type": "text",
209
+ "text": "tically (Uto et al., 2020; Yang et al., 2020; Wang et al., 2023) or specific linguistic aspects, such as coherence (Li et al., 2018; Farag et al., 2018), grammar (Ajit Tambe and Kulkarni, 2022), and organization (Persing et al., 2010; Rahimi et al., 2015). Combining argument mining with essay scoring may enable support systems to give students comprehensive feedback on their writing. In addition, it helps identify how different argumentative structures influence the overall essay quality and which structures are common for different levels of quality (Wachsmuth et al., 2016). However, student essay corpora for argument mining are scarce and do not include ground-truth essay quality annotations (Stab and Gurevych, 2017a). Moreover, no corpus with structure annotations for essays written by school students has been published yet.",
210
+ "bbox": [
211
+ 110,
212
+ 84,
213
+ 492,
214
+ 357
215
+ ],
216
+ "page_idx": 1
217
+ },
218
+ {
219
+ "type": "text",
220
+ "text": "To fill this research gap, we present a German corpus of 1,320 school student essays with manual annotations for argumentative structure and essay quality. The essays have been systematically selected from an existing corpus (Becker-Mrotzek and Grabowski, 2018), equally distributed over two age groups (fifth-graders and ninth-graders) and binary genders, three per student. We present an extensive annotation scheme focused on school student essays that covers argumentative structure on four levels of granularity as well as five essay quality aspects, as shown in Figure 1. To achieve consistent annotations, we developed annotation guidelines in close dialogue with our expert annotators from the field of language education. This led to high agreement between the annotations.",
221
+ "bbox": [
222
+ 110,
223
+ 357,
224
+ 489,
225
+ 615
226
+ ],
227
+ "page_idx": 1
228
+ },
229
+ {
230
+ "type": "text",
231
+ "text": "Our analyses of the corpus provide various insights into the correlation between the different levels of argumentative structure and essay quality, as well as the interaction between these two types of annotation. We experiment with fine-tuned transformers and adapters as baseline approaches to mining argumentative structure and scoring essay quality. Moreover, we demonstrate that the information on argumentative structure helps predicting the essay quality, which is in line with what previous studies showed on other corpora (Wachsmuth et al., 2016; Beigman Klebanov et al., 2016; Nguyen and Litman, 2018). This result underlines the usefulness of our corpus annotations for quality-oriented argumentative writing support.",
232
+ "bbox": [
233
+ 110,
234
+ 615,
235
+ 490,
236
+ 857
237
+ ],
238
+ "page_idx": 1
239
+ },
240
+ {
241
+ "type": "text",
242
+ "text": "More explicitly, this work aims to answer (i) how the argumentative structure and essay quality of school student essays can be modeled, (ii)",
243
+ "bbox": [
244
+ 112,
245
+ 857,
246
+ 489,
247
+ 904
248
+ ],
249
+ "page_idx": 1
250
+ },
251
+ {
252
+ "type": "text",
253
+ "text": "how different levels of argumentative structure and essay quality correlate for school student essays, and (iii) how this correlation can be exploited to automatically score the essay quality.",
254
+ "bbox": [
255
+ 507,
256
+ 84,
257
+ 884,
258
+ 149
259
+ ],
260
+ "page_idx": 1
261
+ },
262
+ {
263
+ "type": "text",
264
+ "text": "Altogether, this paper's main contributions are:",
265
+ "bbox": [
266
+ 527,
267
+ 149,
268
+ 878,
269
+ 164
270
+ ],
271
+ "page_idx": 1
272
+ },
273
+ {
274
+ "type": "list",
275
+ "sub_type": "text",
276
+ "list_items": [
277
+ "- A corpus for studying argumentative structure and essay quality on school student essays",
278
+ "- Empirical insights into the interactions of argumentative structure and essay quality",
279
+ "- Baseline approaches to argument mining and essay scoring<sup>1</sup>"
280
+ ],
281
+ "bbox": [
282
+ 531,
283
+ 175,
284
+ 882,
285
+ 282
286
+ ],
287
+ "page_idx": 1
288
+ },
289
+ {
290
+ "type": "text",
291
+ "text": "2 Related Work",
292
+ "text_level": 1,
293
+ "bbox": [
294
+ 509,
295
+ 294,
296
+ 665,
297
+ 309
298
+ ],
299
+ "page_idx": 1
300
+ },
301
+ {
302
+ "type": "text",
303
+ "text": "Argumentative writing is a key capability that is taught in school across age groups and disciplines (Becker-Mrotzek et al., 2010; Rezat, 2011). A common educational form of argumentative text is the essay, where school students should introduce a thesis, to which they provide pro and con arguments, and finally conclude (Townsend et al., 1993; Schröter, 2021). The components of an argumentative text take on different roles (e.g., claim or premises), and they may operationalize different actions (e.g., conceding or reasoning) (Feilke, 2017). Learning to write argumentative text is complex and requires continuous and detailed feedback (Kellogg et al., 2010; Wambsganss et al., 2022b).",
304
+ "bbox": [
305
+ 507,
306
+ 319,
307
+ 884,
308
+ 545
309
+ ],
310
+ "page_idx": 1
311
+ },
312
+ {
313
+ "type": "text",
314
+ "text": "Analyzing the argumentative structure of texts computationally, also known as argument(ation) mining, is a crucial and widely-studied step in providing automatic support for argumentative writing (Stede and Schneider, 2019). Student essays are a prominent domain for argument mining. A respective annotated corpus of 402 English student essays is available (Stab and Gurevych, 2017a), for which also quality issues such as insufficient claim support have been modeled (Stab and Gurevych, 2017b; Gurcke et al., 2021). Additionally, student corpora are available for more specific domains, such as argumentative legal texts (Weber et al., 2023), persuasive peer reviews on business models (Wambsganss et al., 2020), and business model pitches (Wambsganss and Niklaus, 2022).",
315
+ "bbox": [
316
+ 507,
317
+ 546,
318
+ 882,
319
+ 801
320
+ ],
321
+ "page_idx": 1
322
+ },
323
+ {
324
+ "type": "text",
325
+ "text": "Some research has used argumentative structure to assess essay quality. In consecutive works, Persing et al. (2010) and Persing and Ng (2013, 2014, 2015) graded different argumentation-related quality aspects for the well-known essay corpus ICLE",
326
+ "bbox": [
327
+ 507,
328
+ 803,
329
+ 884,
330
+ 883
331
+ ],
332
+ "page_idx": 1
333
+ },
334
+ {
335
+ "type": "page_footnote",
336
+ "text": "<sup>1</sup>Our corpus and experiment code can be found under: https://github.com/webis-de/NAACL-24.",
337
+ "bbox": [
338
+ 507,
339
+ 891,
340
+ 882,
341
+ 916
342
+ ],
343
+ "page_idx": 1
344
+ },
345
+ {
346
+ "type": "page_number",
347
+ "text": "2662",
348
+ "bbox": [
349
+ 480,
350
+ 927,
351
+ 519,
352
+ 940
353
+ ],
354
+ "page_idx": 1
355
+ },
356
+ {
357
+ "type": "text",
358
+ "text": "(Granger et al., 2009), namely organization, thesis clarity, prompt adherence, and argument strength. In contrast, Horbach et al. (2017) targeted different quality aspects of argumentative writing at once. Some works further investigated the interaction between argumentative structure and essay quality. Wachsmuth et al. (2016) found that multiple argumentation-related essay scoring tasks benefit from argument mining, underlining the impact of argumentative structure on essay quality. The analyses by Beigman Klebanov et al. (2016) and Nguyen and Litman (2018) suggest that this finding also holds for predicting holistic essay scores, while Persing et al. (2010) observed similar for the quality aspect organization. However, these studies relied on automatically assigned quality scores only, due to the lack of ground-truth annotations.",
359
+ "bbox": [
360
+ 110,
361
+ 84,
362
+ 492,
363
+ 357
364
+ ],
365
+ "page_idx": 2
366
+ },
367
+ {
368
+ "type": "text",
369
+ "text": "In a related line of research, approaches have been proposed to suggest revisions for argumentative essays (Afrin and Litman, 2018), to assess the need for and the quality of revisions (Skitalinskaya and Wachsmuth, 2023; Liu et al., 2023), as well as to perform argument revisions computationally (Skitalinskaya et al., 2023). Other works towards writing support for argumentative essays presented a prototypical system that gives simple feedback in terms of missed criteria (Stab, 2017), design principles for an adaptive learning tool (Wambsganss and Rietsche, 2019), visual feedback to the learner to prompt them to repair broken argument structures (Wambsganss et al., 2022a), or point to enthymematic gaps in arguments and make suggestions on how to fill these gaps (Stahl et al., 2023). Most recently, Britner et al. (2023) proposed a tool that not only analyzes issues with argument quality but also generates an explanation for its prediction. Our corpus supports these steps towards support systems for argumentative writing by providing detailed ground-truth annotations for both the argumentative structure and the quality of essays.",
370
+ "bbox": [
371
+ 110,
372
+ 357,
373
+ 490,
374
+ 728
375
+ ],
376
+ "page_idx": 2
377
+ },
378
+ {
379
+ "type": "text",
380
+ "text": "However, all the works mentioned deal with texts written by university students, while our work targets argumentative essays written by school students, fifth-graders and ninth-graders specifically. To the best of our knowledge, the only other published corpus with school student essays is not openly available (Currenti et al., 2013) and has been analyzed for essay-level quality aspects only, such as the integration of evidence (Rahimi et al., 2014) and the essay's organization (Rahimi et al., 2015). We recently came across another school",
381
+ "bbox": [
382
+ 110,
383
+ 728,
384
+ 492,
385
+ 904
386
+ ],
387
+ "page_idx": 2
388
+ },
389
+ {
390
+ "type": "text",
391
+ "text": "student essay corpus in English with annotations for argumentative structure and quality, which has yet to be published.2 We go beyond it in this work by assessing the quality of school student essays in terms of five aspects derived from language education literature while incorporating their interaction with annotated argumentative structures. This may foster the development of effective methods for helping school students improve their argumentative writing skills.",
392
+ "bbox": [
393
+ 507,
394
+ 84,
395
+ 885,
396
+ 247
397
+ ],
398
+ "page_idx": 2
399
+ },
400
+ {
401
+ "type": "text",
402
+ "text": "3 School Student Essay Corpus",
403
+ "text_level": 1,
404
+ "bbox": [
405
+ 507,
406
+ 256,
407
+ 800,
408
+ 273
409
+ ],
410
+ "page_idx": 2
411
+ },
412
+ {
413
+ "type": "text",
414
+ "text": "This section presents the source data and annotation of our corpus for analyzing the argumentative structure and quality of school student essays.",
415
+ "bbox": [
416
+ 507,
417
+ 282,
418
+ 885,
419
+ 331
420
+ ],
421
+ "page_idx": 2
422
+ },
423
+ {
424
+ "type": "text",
425
+ "text": "3.1 Source Data",
426
+ "text_level": 1,
427
+ "bbox": [
428
+ 507,
429
+ 341,
430
+ 653,
431
+ 355
432
+ ],
433
+ "page_idx": 2
434
+ },
435
+ {
436
+ "type": "text",
437
+ "text": "As the basis, we systematically selected 1,320 German school student essays from the FD-LEX corpus (Becker-Mrotzek and Grabowski, 2018). The authors instructed students to each write three argumentative essays on topics pertinent to school students: (a) a letter to a school funding organization on the possible use of funding, (b) a statement on how to deal with the misbehavior of a fellow student, (c) a statement on who is at fault in a bike accident.",
438
+ "bbox": [
439
+ 507,
440
+ 361,
441
+ 885,
442
+ 506
443
+ ],
444
+ "page_idx": 2
445
+ },
446
+ {
447
+ "type": "text",
448
+ "text": "We seek to enable analyses of differences between groups of school student essays in the corpus. Therefore, we pseudo-randomly chose 440 school students equally distributed across genders (only male and female exist in the corpus) and age groups (fifth-graders and ninth-graders). Subsequently, we included all three essays written by each selected school student from the source data.",
449
+ "bbox": [
450
+ 507,
451
+ 507,
452
+ 885,
453
+ 634
454
+ ],
455
+ "page_idx": 2
456
+ },
457
+ {
458
+ "type": "text",
459
+ "text": "3.2 Annotation Scheme",
460
+ "text_level": 1,
461
+ "bbox": [
462
+ 507,
463
+ 646,
464
+ 710,
465
+ 661
466
+ ],
467
+ "page_idx": 2
468
+ },
469
+ {
470
+ "type": "text",
471
+ "text": "Our annotation scheme goes beyond existing corpora for argument mining, covering the macro and micro structure of argumentative essays on four levels in total. In addition, we evaluate the quality of the essays overall and in terms of four quality aspects. Figure 2 overviews our annotation scheme.",
472
+ "bbox": [
473
+ 507,
474
+ 667,
475
+ 885,
476
+ 764
477
+ ],
478
+ "page_idx": 2
479
+ },
480
+ {
481
+ "type": "text",
482
+ "text": "Argumentative Structure On the broadest level of granularity for argumentative structure, we annotate discourse functions (Persing et al., 2010):",
483
+ "bbox": [
484
+ 507,
485
+ 772,
486
+ 885,
487
+ 821
488
+ ],
489
+ "page_idx": 2
490
+ },
491
+ {
492
+ "type": "page_footnote",
493
+ "text": "2The unpublished pre-print is available at https:// zenodo.org/records/8221504.",
494
+ "bbox": [
495
+ 507,
496
+ 828,
497
+ 885,
498
+ 853
499
+ ],
500
+ "page_idx": 2
501
+ },
502
+ {
503
+ "type": "page_footnote",
504
+ "text": "3Essays and metadata (grade, school form, age, gender, language background, and age group) are available here: https://fd-lex.uni-koeln.de",
505
+ "bbox": [
506
+ 507,
507
+ 853,
508
+ 885,
509
+ 889
510
+ ],
511
+ "page_idx": 2
512
+ },
513
+ {
514
+ "type": "page_number",
515
+ "text": "2663",
516
+ "bbox": [
517
+ 480,
518
+ 928,
519
+ 519,
520
+ 940
521
+ ],
522
+ "page_idx": 2
523
+ },
524
+ {
525
+ "type": "image",
526
+ "img_path": "images/54267ec2022435022e8d4c15d4d89d78eb89c3417e6dfc36b5e66a23256beaf6.jpg",
527
+ "image_caption": [
528
+ "Figure 2: Proposed annotation scheme for argumentative school student essays: Four levels of argumentative macro and micro structure (discourse functions, arguments, components, discourse modes) and five essay quality aspects."
529
+ ],
530
+ "image_footnote": [],
531
+ "bbox": [
532
+ 117,
533
+ 80,
534
+ 884,
535
+ 215
536
+ ],
537
+ "page_idx": 3
538
+ },
539
+ {
540
+ "type": "list",
541
+ "sub_type": "text",
542
+ "list_items": [
543
+ "- Introduction. Initiates an essay by presenting its topic and possibly its context. This section is usually non-argumentative and placed at the beginning of the essay.",
544
+ "- Body. Core of the essay, containing the majority of argumentative components.",
545
+ "- Conclusion. Summary of main points, often with a final evaluation of the topic. This section is typically found at the end of the essay."
546
+ ],
547
+ "bbox": [
548
+ 134,
549
+ 278,
550
+ 489,
551
+ 434
552
+ ],
553
+ "page_idx": 3
554
+ },
555
+ {
556
+ "type": "text",
557
+ "text": "Next, we annotate arguments that comprise one point in an argumentative text, following Walton et al. (2008). We differentiate them by stance towards the main standpoint (thesis) of an essay:",
558
+ "bbox": [
559
+ 112,
560
+ 438,
561
+ 489,
562
+ 502
563
+ ],
564
+ "page_idx": 3
565
+ },
566
+ {
567
+ "type": "list",
568
+ "sub_type": "text",
569
+ "list_items": [
570
+ "- Argument. Ideally a claim (conclusion) and premises (reasons) supporting the claim.",
571
+ "- Counter-argument. An argument that attacks the thesis of an essay."
572
+ ],
573
+ "bbox": [
574
+ 134,
575
+ 508,
576
+ 485,
577
+ 576
578
+ ],
579
+ "page_idx": 3
580
+ },
581
+ {
582
+ "type": "text",
583
+ "text": "For analyzing the micro structure, we annotate argumentative and non-argumentative components. Like Stab and Gurevych (2017a), we also mark support and attack relations between them (see Figure 2):",
584
+ "bbox": [
585
+ 112,
586
+ 582,
587
+ 489,
588
+ 646
589
+ ],
590
+ "page_idx": 3
591
+ },
592
+ {
593
+ "type": "list",
594
+ "sub_type": "text",
595
+ "list_items": [
596
+ "- Topic. Non-argumentative component that describes the subject or purpose of the essay.",
597
+ "- Thesis. Main standpoint of the whole argumentative text towards the topic. Repetitions of the thesis are also annotated as such.",
598
+ "- Antithesis. Thesis contrary to the actual thesis.",
599
+ "- Modified Thesis. Modified version of the actual thesis (e.g., more detailed or restricted).",
600
+ "- Claim. Statement that conveys a stance towards the topic.",
601
+ "- Premise. Reason that is given to support or attack a claim or another premise.<sup>4</sup>"
602
+ ],
603
+ "bbox": [
604
+ 134,
605
+ 651,
606
+ 487,
607
+ 870
608
+ ],
609
+ "page_idx": 3
610
+ },
611
+ {
612
+ "type": "text",
613
+ "text": "On the finest level of granularity, we annotate discourse modes (Smith, 2003) specific to school student essays. They are derived from language education literature, where they are used for developing and analyzing argumentative writing skills (Gattje et al., 2012; Rezat, 2018; Feilke and Rezat, 2021):",
614
+ "bbox": [
615
+ 507,
616
+ 278,
617
+ 884,
618
+ 375
619
+ ],
620
+ "page_idx": 3
621
+ },
622
+ {
623
+ "type": "list",
624
+ "sub_type": "text",
625
+ "list_items": [
626
+ "- Comparing. Contrasting supporting and attacking points to a statement.",
627
+ "- Conceding. Addressing a counter-consideration and refuting it to support the own stance.",
628
+ "- Concluding. Drawing logical inferences using consecutive or final clauses (so that, if... then).",
629
+ "- Describing. Providing additional information, such as facts, statistics, and background data.",
630
+ "- Exemplifying. Providing examples or reporting on experiences.",
631
+ "- Instructing. Providing explicit instructions that recommend a specific course of action.",
632
+ "- Positioning. Expressing the own standpoint.",
633
+ "- Reasoning. Providing causal links to support a claim/thesis using markers (because, then).",
634
+ "- Referencing. Mentioning statements made by others, for example, by authorities.",
635
+ "- Qualifying. Presenting a variation of the all-or-nothing standpoints."
636
+ ],
637
+ "bbox": [
638
+ 531,
639
+ 379,
640
+ 882,
641
+ 733
642
+ ],
643
+ "page_idx": 3
644
+ },
645
+ {
646
+ "type": "text",
647
+ "text": "Essay Quality Like Persing et al. (2010), we score essay quality on a 7-point scale from 1 (unsuccessful) through 2 (rather unsuccessful) and 3 (rather successful) to 4 (completely successful), with half points in between. We adapted the quality aspects of Kruse et al. (2012) for assessing school student essays in general to argumentative essays as follows:",
648
+ "bbox": [
649
+ 507,
650
+ 741,
651
+ 884,
652
+ 854
653
+ ],
654
+ "page_idx": 3
655
+ },
656
+ {
657
+ "type": "list",
658
+ "sub_type": "text",
659
+ "list_items": [
660
+ "- Relevance. The essay fits the prompt.",
661
+ "- Content. The selection of content helps to reach the essay's goal."
662
+ ],
663
+ "bbox": [
664
+ 531,
665
+ 860,
666
+ 880,
667
+ 913
668
+ ],
669
+ "page_idx": 3
670
+ },
671
+ {
672
+ "type": "page_footnote",
673
+ "text": "4The components in our annotation scheme are similar to those of Stab and Gurevych (2017a). We added antithesis and modified thesis to reflect the changes in position in the essays.",
674
+ "bbox": [
675
+ 112,
676
+ 878,
677
+ 487,
678
+ 917
679
+ ],
680
+ "page_idx": 3
681
+ },
682
+ {
683
+ "type": "page_number",
684
+ "text": "2664",
685
+ "bbox": [
686
+ 480,
687
+ 928,
688
+ 521,
689
+ 940
690
+ ],
691
+ "page_idx": 3
692
+ },
693
+ {
694
+ "type": "list",
695
+ "sub_type": "text",
696
+ "list_items": [
697
+ "- Structure. The selected points are coherent and well-connected.",
698
+ "- Style. The use of language is adequate.",
699
+ "- Overall. The overall impression of the rater."
700
+ ],
701
+ "bbox": [
702
+ 136,
703
+ 84,
704
+ 485,
705
+ 158
706
+ ],
707
+ "page_idx": 4
708
+ },
709
+ {
710
+ "type": "text",
711
+ "text": "3.3 Annotation Process",
712
+ "text_level": 1,
713
+ "bbox": [
714
+ 114,
715
+ 168,
716
+ 312,
717
+ 184
718
+ ],
719
+ "page_idx": 4
720
+ },
721
+ {
722
+ "type": "text",
723
+ "text": "We carried out the following process for both argumentative structure and essay quality:",
724
+ "bbox": [
725
+ 112,
726
+ 190,
727
+ 487,
728
+ 222
729
+ ],
730
+ "page_idx": 4
731
+ },
732
+ {
733
+ "type": "text",
734
+ "text": "To test and refine our annotation guidelines, we conducted pilot studies in which all annotators worked on the same 30 texts. We then discussed their understanding of the guidelines and annotation differences. We integrated their feedback into the guidelines and then tested the reliability of the annotations in an inter-annotator agreement (IAA) study, where all annotators independently worked on the same 120 texts. Finally, each annotator annotated a set of 1,200 essays in the main annotation study.",
735
+ "bbox": [
736
+ 112,
737
+ 223,
738
+ 487,
739
+ 382
740
+ ],
741
+ "page_idx": 4
742
+ },
743
+ {
744
+ "type": "text",
745
+ "text": "As annotators, we employed experts in German language education from our lab and started the pilot and IAA studies on argumentative structure with three annotators. For the main part, only the two annotators with the most reliable annotations proceeded. The same annotators then annotated the essay quality, since they had already been trained in argumentative texts and our general procedure. However, we acknowledge that the annotators may have been predisposed to view essays in a certain way after the first annotation.",
746
+ "bbox": [
747
+ 112,
748
+ 384,
749
+ 487,
750
+ 558
751
+ ],
752
+ "page_idx": 4
753
+ },
754
+ {
755
+ "type": "text",
756
+ "text": "To assemble the final corpus, we combined the 1,200 essays from the main study with the 120 essays from the IAA study after resolving annotation conflicts. For conflicts between the three structure annotations per IAA essay, we kept the annotations that had the highest agreement across all levels with the other two. For conflicts between essay quality scores, we used their mean as the final score.[5]",
757
+ "bbox": [
758
+ 112,
759
+ 561,
760
+ 487,
761
+ 688
762
+ ],
763
+ "page_idx": 4
764
+ },
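The score-merging rule described above (mean of the two annotators' scores, then the rounding given in footnote 5) can be sketched as follows. This is a minimal illustration, not the authors' code; `resolve_quality_score` is a hypothetical helper name.

```python
import math

def resolve_quality_score(a: float, b: float) -> float:
    """Merge two annotators' quality scores (1.0-4.0 in 0.5 steps).

    Takes the mean; if the mean falls between two valid half-point
    scores, it is rounded down below 2.5 and up above 2.5, as the
    paper's footnote describes, to avoid an upward distortion.
    """
    mean = (a + b) / 2
    doubled = mean * 2
    if doubled == int(doubled):  # already a valid half-point score
        return mean
    return math.floor(doubled) / 2 if mean < 2.5 else math.ceil(doubled) / 2
```

For example, scores 2.0 and 2.5 average to 2.25 and are merged to 2.0, while 2.5 and 3.0 average to 2.75 and are merged to 3.0.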
765
+ {
766
+ "type": "text",
767
+ "text": "3.4 Inter-Annotator Agreement",
768
+ "text_level": 1,
769
+ "bbox": [
770
+ 112,
771
+ 700,
772
+ 379,
773
+ 715
774
+ ],
775
+ "page_idx": 4
776
+ },
777
+ {
778
+ "type": "text",
779
+ "text": "For the components, we follow Stab and Gurevych (2017a) in that we evaluate the agreement per essay at the token level, so the token labels are the unit of analysis. This way, overlaps of annotations are taken into account. For relations, we determined the component-level spans that at least two annotators agreed on with a relative overlap $\\geq 75\\%$. For all pairs of these, we then compared the relation labels (no relation, support, or attack) between the",
780
+ "bbox": [
781
+ 112,
782
+ 720,
783
+ 487,
784
+ 865
785
+ ],
786
+ "page_idx": 4
787
+ },
788
+ {
789
+ "type": "table",
790
+ "img_path": "images/1343bdf030c1a1b0b3dfe6d523dffab3d97cc768a37b2b44cc2418ef2821fa65.jpg",
791
+ "table_caption": [],
792
+ "table_footnote": [],
793
+ "table_body": "<table><tr><td>Argumentative Structure</td><td>α</td><td>Essay Quality</td><td>α</td></tr><tr><td>Discourse Functions</td><td>0.89</td><td>Relevance</td><td>0.77</td></tr><tr><td>Arguments</td><td>0.86</td><td>Content</td><td>0.95</td></tr><tr><td>Components</td><td>0.81</td><td>Structure</td><td>0.84</td></tr><tr><td>Discourse Modes</td><td>0.74</td><td>Style</td><td>0.92</td></tr><tr><td>Relations</td><td>0.58</td><td>Overall</td><td>0.95</td></tr></table>",
794
+ "bbox": [
795
+ 514,
796
+ 80,
797
+ 878,
798
+ 173
799
+ ],
800
+ "page_idx": 4
801
+ },
802
+ {
803
+ "type": "text",
804
+ "text": "Table 1: Krippendorff's $\\alpha$ agreement in the IAA study between three annotators for argumentative structure and two annotators for essay quality. The high values stress the high reliability of our annotations.",
805
+ "bbox": [
806
+ 507,
807
+ 183,
808
+ 882,
809
+ 241
810
+ ],
811
+ "page_idx": 4
812
+ },
813
+ {
814
+ "type": "text",
815
+ "text": "annotators. The mean Krippendorff's $\\alpha$ scores over the 120 IAA essays are reported in Table 1. For essay quality, we computed the $\\alpha$ -value per quality aspect with essays as the unit of analysis.",
816
+ "bbox": [
817
+ 505,
818
+ 266,
819
+ 882,
820
+ 330
821
+ ],
822
+ "page_idx": 4
823
+ },
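The token-level agreement computation can be sketched as below: Krippendorff's alpha for nominal labels with one label per token and no missing values. This is an illustrative reimplementation under those assumptions, not the authors' evaluation code.

```python
from collections import Counter
from itertools import combinations

def krippendorff_alpha_nominal(annotations):
    """Krippendorff's alpha for nominal labels, no missing values.

    annotations: one label sequence per annotator, all the same length
    (e.g., one structure label per token, as in the token-level IAA).
    """
    m = len(annotations)          # number of annotators
    coincidences = Counter()      # ordered label-pair counts
    for unit_labels in zip(*annotations):
        for a, b in combinations(unit_labels, 2):
            # Each pair within a unit contributes 1/(m-1) per direction
            coincidences[(a, b)] += 1 / (m - 1)
            coincidences[(b, a)] += 1 / (m - 1)
    n = sum(coincidences.values())
    # Observed disagreement: share of non-matching label pairs
    d_o = sum(c for (a, b), c in coincidences.items() if a != b) / n
    # Expected disagreement from the marginal label frequencies
    marginals = Counter()
    for (a, _), c in coincidences.items():
        marginals[a] += c
    d_e = sum(f * (n - f) for f in marginals.values()) / (n * (n - 1))
    return 1 - d_o / d_e
```

Perfect agreement yields 1.0; values near 0 indicate chance-level agreement.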
824
+ {
825
+ "type": "text",
826
+ "text": "The agreement is high for argumentative structure spans with values between 0.74 and 0.89. The agreement for relations is lower but reasonable, given that disagreement from the component annotations is propagated to the relations. The agreement for essay quality is high, too, ranging from 0.77 to 0.95. Overall, we conclude that the annotations can mostly be seen as very reliable. Content and style quality annotations are very consistent between annotators, while assessing the relevance and structure seems slightly more subjective.",
827
+ "bbox": [
828
+ 505,
829
+ 331,
830
+ 882,
831
+ 507
832
+ ],
833
+ "page_idx": 4
834
+ },
835
+ {
836
+ "type": "text",
837
+ "text": "3.5 Corpus Statistics",
838
+ "text_level": 1,
839
+ "bbox": [
840
+ 507,
841
+ 518,
842
+ 690,
843
+ 533
844
+ ],
845
+ "page_idx": 4
846
+ },
847
+ {
848
+ "type": "text",
849
+ "text": "Table 2 gives insights into the label distribution for argumentative structure. Body occurs most frequently among the discourse functions (1,335 times). With 56.75 tokens on average, bodies are also notably longer than introductions and conclusions. On the argument level, we note that counter-arguments are rather sparse in student essays. Among the components, claims are most frequent in total and per essay, followed by theses. Furthermore, we notice that modified theses are usually longer than theses, which matches our expectation that students add more details or restrictions to the thesis there. The most used discourse modes are positioning, describing, and reasoning, while referencing, comparing, and exemplifying occur rarely. Also notable are the differences in span length; e.g., positioning spans have on average only about half as many tokens as comparing spans.",
850
+ "bbox": [
851
+ 505,
852
+ 538,
853
+ 882,
854
+ 828
855
+ ],
856
+ "page_idx": 4
857
+ },
858
+ {
859
+ "type": "text",
860
+ "text": "Table 3 gives the frequency of annotated relations in our corpus. Most relations outgoing from claims are directed towards theses (92.6%), while most relations outgoing from premises are directed towards claims (85.8%). Overall, 96.2% of the re",
861
+ "bbox": [
862
+ 507,
863
+ 829,
864
+ 882,
865
+ 908
866
+ ],
867
+ "page_idx": 4
868
+ },
869
+ {
870
+ "type": "page_footnote",
871
+ "text": "5We rounded down to the next valid quality score for low values $(< 2.5)$ and rounded up for high values $(>2.5)$ to prevent an upward distortion of the distribution.",
872
+ "bbox": [
873
+ 112,
874
+ 873,
875
+ 487,
876
+ 910
877
+ ],
878
+ "page_idx": 4
879
+ },
880
+ {
881
+ "type": "page_number",
882
+ "text": "2665",
883
+ "bbox": [
884
+ 480,
885
+ 928,
886
+ 519,
887
+ 940
888
+ ],
889
+ "page_idx": 4
890
+ },
891
+ {
892
+ "type": "table",
893
+ "img_path": "images/a5caaf7219fcaf1f1d075507b5d111020ab5f3c9a4d57462734fe43310f94c71.jpg",
894
+ "table_caption": [],
895
+ "table_footnote": [],
896
+ "table_body": "<table><tr><td>Label</td><td># Spans</td><td># Tokens</td><td>Tokens/Span</td><td>Spans/Essay</td></tr><tr><td>Introduction</td><td>114</td><td>2329</td><td>20.43</td><td>0.09</td></tr><tr><td>Body</td><td>1335</td><td>75766</td><td>56.75</td><td>1.01</td></tr><tr><td>Conclusion</td><td>191</td><td>2938</td><td>15.38</td><td>0.14</td></tr><tr><td>Argument</td><td>2692</td><td>51560</td><td>19.15</td><td>2.04</td></tr><tr><td>Counter-arg.</td><td>34</td><td>514</td><td>15.12</td><td>0.03</td></tr><tr><td>Topic</td><td>101</td><td>1656</td><td>16.40</td><td>0.08</td></tr><tr><td>Thesis</td><td>1687</td><td>19581</td><td>11.61</td><td>1.28</td></tr><tr><td>Modified T.</td><td>267</td><td>4490</td><td>16.82</td><td>0.20</td></tr><tr><td>Antithesis</td><td>14</td><td>174</td><td>12.43</td><td>0.01</td></tr><tr><td>Claim</td><td>3137</td><td>39096</td><td>12.46</td><td>2.38</td></tr><tr><td>Premise</td><td>1020</td><td>12533</td><td>12.29</td><td>0.77</td></tr><tr><td>Comparing</td><td>20</td><td>431</td><td>21.55</td><td>0.02</td></tr><tr><td>Conceding</td><td>142</td><td>2874</td><td>20.24</td><td>0.11</td></tr><tr><td>Concluding</td><td>868</td><td>11654</td><td>13.43</td><td>0.66</td></tr><tr><td>Describing</td><td>1692</td><td>22258</td><td>13.15</td><td>1.28</td></tr><tr><td>Exemplifying</td><td>63</td><td>926</td><td>14.70</td><td>0.05</td></tr><tr><td>Instructing</td><td>176</td><td>2174</td><td>12.35</td><td>0.13</td></tr><tr><td>Positioning</td><td>1758</td><td>19178</td><td>10.91</td><td>1.33</td></tr><tr><td>Reasoning</td><td>1553</td><td>17204</td><td>11.08</td><td>1.18</td></tr><tr><td>Referencing</td><td>16</td><td>197</td><td>12.31</td><td>0.01</td></tr><tr><td>Qualifying</td><td>147</td><td>2344</td><td>15.95</td><td>0.11</td></tr></table>",
897
+ "bbox": [
898
+ 117,
899
+ 82,
900
+ 485,
901
+ 394
902
+ ],
903
+ "page_idx": 5
904
+ },
905
+ {
906
+ "type": "table",
907
+ "img_path": "images/05ea7ad3eb8819f69411aa15f3f11d756410a92c43c171fdccd59ab8e54f368a.jpg",
908
+ "table_caption": [
909
+ "Table 2: Argumentative structure annotations in the corpus: Total number of spans and tokens per label, average span length in number of tokens (Tokens/Span) and average number of spans per essay (Spans/Essay). The highest value per column and level is marked bold."
910
+ ],
911
+ "table_footnote": [],
912
+ "table_body": "<table><tr><td>From Claim to</td><td>#</td><td>%</td></tr><tr><td>Thesis</td><td>2844</td><td>92.6</td></tr><tr><td>Modified Thesis</td><td>218</td><td>7.1</td></tr><tr><td>Antithesis</td><td>9</td><td>0.3</td></tr></table>",
913
+ "bbox": [
914
+ 115,
915
+ 487,
916
+ 297,
917
+ 555
918
+ ],
919
+ "page_idx": 5
920
+ },
921
+ {
922
+ "type": "table",
923
+ "img_path": "images/b1677bcc40226d5f18f3e9d6ae1de88989e2c7835d1b6126052b95e4d75ff908.jpg",
924
+ "table_caption": [],
925
+ "table_footnote": [],
926
+ "table_body": "<table><tr><td>From Premise to</td><td>#</td><td>%</td></tr><tr><td>Thesis</td><td>6</td><td>0.6</td></tr><tr><td>Claim</td><td>872</td><td>85.8</td></tr><tr><td>Premise</td><td>138</td><td>13.6</td></tr></table>",
927
+ "bbox": [
928
+ 305,
929
+ 488,
930
+ 485,
931
+ 555
932
+ ],
933
+ "page_idx": 5
934
+ },
935
+ {
936
+ "type": "text",
937
+ "text": "lations were labeled as support and $3.8\\%$ as attack.",
938
+ "bbox": [
939
+ 112,
940
+ 618,
941
+ 487,
942
+ 633
943
+ ],
944
+ "page_idx": 5
945
+ },
946
+ {
947
+ "type": "text",
948
+ "text": "The distribution of quality scores is shown in Table 4. We can see that relevance, structure, and style have a similar score distribution, while the distribution of content scores is shifted towards the higher scores, with the highest mean (2.80). Overall quality has the lowest mean score (2.20) and was most often scored with the lowest score of 1.0. These results suggest that overall quality is not just the average of the other annotated quality aspects but that it emerges from the annotators' perception and possibly other aspects.",
949
+ "bbox": [
950
+ 112,
951
+ 634,
952
+ 487,
953
+ 812
954
+ ],
955
+ "page_idx": 5
956
+ },
957
+ {
958
+ "type": "text",
959
+ "text": "4 Analysis",
960
+ "text_level": 1,
961
+ "bbox": [
962
+ 112,
963
+ 822,
964
+ 223,
965
+ 839
966
+ ],
967
+ "page_idx": 5
968
+ },
969
+ {
970
+ "type": "text",
971
+ "text": "This section reports on our corpus analysis of the interaction between the argumentative structure on micro vs. macro level and component vs. discourse mode level, and the different essay quality aspects.",
972
+ "bbox": [
973
+ 112,
974
+ 848,
975
+ 489,
976
+ 913
977
+ ],
978
+ "page_idx": 5
979
+ },
980
+ {
981
+ "type": "table",
982
+ "img_path": "images/cdef616b081451c5938d7a100d3161e4ef24031153c46c8376d2003931be2acd.jpg",
983
+ "table_caption": [
984
+ "Table 3: Absolute and relative frequency of annotated relations outgoing from claim (left) or premise (right)."
985
+ ],
986
+ "table_footnote": [],
987
+ "table_body": "<table><tr><td>Quality Aspect</td><td>1.0</td><td>1.5</td><td>2.0</td><td>2.5</td><td>3.0</td><td>3.5</td><td>4.0</td><td>Mean</td></tr><tr><td>Relevance</td><td>64</td><td>131</td><td>548</td><td>358</td><td>190</td><td>23</td><td>6</td><td>2.22</td></tr><tr><td>Content</td><td>62</td><td>33</td><td>123</td><td>312</td><td>510</td><td>173</td><td>107</td><td>2.80</td></tr><tr><td>Structure</td><td>59</td><td>91</td><td>423</td><td>414</td><td>292</td><td>24</td><td>17</td><td>2.35</td></tr><tr><td>Style</td><td>75</td><td>159</td><td>396</td><td>436</td><td>233</td><td>12</td><td>9</td><td>2.25</td></tr><tr><td>Overall</td><td>90</td><td>142</td><td>478</td><td>390</td><td>195</td><td>23</td><td>2</td><td>2.20</td></tr></table>",
988
+ "bbox": [
989
+ 512,
990
+ 83,
991
+ 882,
992
+ 174
993
+ ],
994
+ "page_idx": 5
995
+ },
996
+ {
997
+ "type": "text",
998
+ "text": "Table 4: Distribution and mean of scores per quality aspect. The highest value per column is marked bold.",
999
+ "bbox": [
1000
+ 507,
1001
+ 183,
1002
+ 882,
1003
+ 212
1004
+ ],
1005
+ "page_idx": 5
1006
+ },
1007
+ {
1008
+ "type": "image",
1009
+ "img_path": "images/605f0e0adab7c8cbd6f8460294660b87a0282019a2ffc0171ad99814e4c060bf.jpg",
1010
+ "image_caption": [
1011
+ "Figure 3: Cooccurrence matrices: Relative token-level overlap of (a) macro and micro structure and (b) component and discourse mode labels in percent. For example, $68\\%$ of all tokens labeled as Introduction on the macro level are also labeled as Topic on the micro level."
1012
+ ],
1013
+ "image_footnote": [],
1014
+ "bbox": [
1015
+ 515,
1016
+ 225,
1017
+ 887,
1018
+ 438
1019
+ ],
1020
+ "page_idx": 5
1021
+ },
1022
+ {
1023
+ "type": "text",
1024
+ "text": "4.1 Macro vs. Micro structure",
1025
+ "text_level": 1,
1026
+ "bbox": [
1027
+ 507,
1028
+ 545,
1029
+ 761,
1030
+ 558
1031
+ ],
1032
+ "page_idx": 5
1033
+ },
1034
+ {
1035
+ "type": "text",
1036
+ "text": "Figure 3(a) shows the overlap between macro structure (discourse functions and arguments) and micro structure (components and discourse mode) labels. The introduction mainly includes the topic $(68\\%)$ , while every second token in the body is a claim on average. The thesis cooccurs with all three discourse functions. We see that the proportion of claims and premises differs for arguments and counter-arguments. Counter-arguments contain more claim tokens than arguments, while fewer counter-argument tokens are part of a premise.",
1037
+ "bbox": [
1038
+ 505,
1039
+ 565,
1040
+ 882,
1041
+ 741
1042
+ ],
1043
+ "page_idx": 5
1044
+ },
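The relative token-level overlap behind these matrices can be computed in a few lines. The sketch below is an assumption about the computation (one label per token in each annotation layer), not the authors' code.

```python
from collections import Counter

def cooccurrence_matrix(layer_a, layer_b):
    """Row-normalized token-level overlap between two annotation layers.

    Returns {a_label: {b_label: share of a_label tokens that also carry
    b_label}}, i.e., the rows of a matrix as in Figure 3.
    """
    pair_counts = Counter(zip(layer_a, layer_b))  # joint label counts
    row_totals = Counter(layer_a)                 # per-row normalizer
    matrix = {}
    for (a, b), count in pair_counts.items():
        matrix.setdefault(a, {})[b] = count / row_totals[a]
    return matrix
```

Each row sums to 1, so a cell directly reads as "x% of tokens with label A also have label B".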
1045
+ {
1046
+ "type": "text",
1047
+ "text": "The usage of discourse modes differs between the macro-structure levels, too. In the introduction, students mostly describe (68%) and position (24%). The discourse modes are more diverse for the body, also including a notable portion of reasoning. As expected, in the conclusion, students focus more on concluding. While describing and reasoning are prevalent in arguments and counter-arguments, a notable portion of argument tokens is used for concluding. At the same time, more conceding and",
1048
+ "bbox": [
1049
+ 505,
1050
+ 743,
1051
+ 884,
1052
+ 903
1053
+ ],
1054
+ "page_idx": 5
1055
+ },
1056
+ {
1057
+ "type": "page_number",
1058
+ "text": "2666",
1059
+ "bbox": [
1060
+ 480,
1061
+ 928,
1062
+ 519,
1063
+ 940
1064
+ ],
1065
+ "page_idx": 5
1066
+ },
1067
+ {
1068
+ "type": "table",
1069
+ "img_path": "images/8b1e44ac9b896f887e25359eb543b05a244599589d54e806a5e007b1f2be8954.jpg",
1070
+ "table_caption": [],
1071
+ "table_footnote": [],
1072
+ "table_body": "<table><tr><td></td><td>Relevance</td><td>Content</td><td>Structure</td><td>Style</td><td>Overall</td></tr><tr><td>Relevance</td><td></td><td>.53</td><td>.61</td><td>.47</td><td>.75</td></tr><tr><td>Content</td><td>.53</td><td></td><td>.48</td><td>.41</td><td>.60</td></tr><tr><td>Structure</td><td>.61</td><td>.48</td><td></td><td>.51</td><td>.71</td></tr><tr><td>Style</td><td>.47</td><td>.41</td><td>.51</td><td></td><td>.61</td></tr><tr><td>Overall</td><td>.75</td><td>.60</td><td>.71</td><td>.61</td><td></td></tr></table>",
1073
+ "bbox": [
1074
+ 115,
1075
+ 82,
1076
+ 485,
1077
+ 173
1078
+ ],
1079
+ "page_idx": 6
1080
+ },
1081
+ {
1082
+ "type": "text",
1083
+ "text": "qualifying tokens occur in counter-arguments. This is expected, since especially in counter-arguments other points of view should be varied or refuted.",
1084
+ "bbox": [
1085
+ 112,
1086
+ 237,
1087
+ 485,
1088
+ 285
1089
+ ],
1090
+ "page_idx": 6
1091
+ },
1092
+ {
1093
+ "type": "text",
1094
+ "text": "4.2 Components vs. Discourse modes",
1095
+ "text_level": 1,
1096
+ "bbox": [
1097
+ 112,
1098
+ 297,
1099
+ 420,
1100
+ 311
1101
+ ],
1102
+ "page_idx": 6
1103
+ },
1104
+ {
1105
+ "type": "text",
1106
+ "text": "The cooccurrences between components and discourse modes can be seen in Figure 3(b). While the topic is mostly described (90%) and the thesis consists primarily of positioning (85%), the remaining components include more diverse discourse modes. In contrast to theses, modified theses also feature describing and qualifying tokens, while antitheses additionally cover conceding and concluding. Claims and premises mainly cooccur with describing, reasoning, and concluding. However, the proportions differ slightly. The cooccurrence matrix between all structure labels can be found in Appendix A.",
1107
+ "bbox": [
1108
+ 112,
1109
+ 317,
1110
+ 489,
1111
+ 510
1112
+ ],
1113
+ "page_idx": 6
1114
+ },
1115
+ {
1116
+ "type": "text",
1117
+ "text": "4.3 Essay Quality",
1118
+ "text_level": 1,
1119
+ "bbox": [
1120
+ 112,
1121
+ 521,
1122
+ 270,
1123
+ 536
1124
+ ],
1125
+ "page_idx": 6
1126
+ },
1127
+ {
1128
+ "type": "text",
1129
+ "text": "To further assess the interaction between the quality aspects, Table 5 shows all pairwise Kendall's $\\tau$ correlations. All aspects correlate most with overall quality, most strongly relevance (.75). The correlation between content and style is lowest (.41), which underlines their distinctive nature.",
1130
+ "bbox": [
1131
+ 112,
1132
+ 541,
1133
+ 489,
1134
+ 638
1135
+ ],
1136
+ "page_idx": 6
1137
+ },
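Since the quality scores are half-point values with many ties, a tie-corrected variant such as Kendall's tau-b is the natural choice; whether the authors used tau-b or another variant is not stated here, so treat this O(n^2) sketch as an assumption.

```python
import math
from collections import Counter
from itertools import combinations

def kendall_tau_b(x, y):
    """Kendall's tau-b rank correlation with tie correction."""
    assert len(x) == len(y)
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
        # pairs tied in x or y count as neither
    n0 = len(x) * (len(x) - 1) // 2
    ties_x = sum(t * (t - 1) // 2 for t in Counter(x).values())
    ties_y = sum(t * (t - 1) // 2 for t in Counter(y).values())
    return (concordant - discordant) / math.sqrt((n0 - ties_x) * (n0 - ties_y))
```

Identical rankings yield 1.0; fully reversed rankings yield -1.0.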
1138
+ {
1139
+ "type": "text",
1140
+ "text": "5 Experiments",
1141
+ "text_level": 1,
1142
+ "bbox": [
1143
+ 112,
1144
+ 650,
1145
+ 260,
1146
+ 665
1147
+ ],
1148
+ "page_idx": 6
1149
+ },
1150
+ {
1151
+ "type": "text",
1152
+ "text": "This section presents baseline approaches to the two main tasks our corpus enables: Predicting the argumentative structure (argument mining) and the essay quality (essay scoring). Additionally, we investigate whether information about the argumentative structure helps to predict the essay quality.",
1153
+ "bbox": [
1154
+ 112,
1155
+ 675,
1156
+ 489,
1157
+ 772
1158
+ ],
1159
+ "page_idx": 6
1160
+ },
1161
+ {
1162
+ "type": "text",
1163
+ "text": "5.1 Argument Mining",
1164
+ "text_level": 1,
1165
+ "bbox": [
1166
+ 112,
1167
+ 783,
1168
+ 304,
1169
+ 797
1170
+ ],
1171
+ "page_idx": 6
1172
+ },
1173
+ {
1174
+ "type": "text",
1175
+ "text": "We treat argument mining as a token classification task: Given a school student essay and a structure level, predict the label of each token on that structure level. The IOB2 format is used for the labels to separate adjacent spans of the same type. We performed 5-fold cross-validation for each structure level. In each iteration, we used four folds",
1176
+ "bbox": [
1177
+ 112,
1178
+ 803,
1179
+ 489,
1180
+ 915
1181
+ ],
1182
+ "page_idx": 6
1183
+ },
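The span-to-IOB2 conversion described above can be sketched as follows; `spans_to_iob2` is a hypothetical helper, and end-exclusive, non-overlapping token spans are assumed.

```python
def spans_to_iob2(n_tokens, spans):
    """Convert (start, end, label) spans to one IOB2 tag per token.

    Spans are non-overlapping and end-exclusive. The "B-" tag on the
    first token of each span keeps adjacent same-type spans separate,
    which plain IO tagging cannot do.
    """
    tags = ["O"] * n_tokens  # tokens outside any span
    for start, end, label in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags
```

Two adjacent claims over tokens 0-1 and 2-3, for instance, stay distinguishable because the second span restarts with a "B-" tag.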
1184
+ {
1185
+ "type": "table",
1186
+ "img_path": "images/11994751dfa8e78001ccdaeb3bf56c7768e228a59859551f0ca5968a2d949189.jpg",
1187
+ "table_caption": [
1188
+ "Table 5: Kendall's $\\tau$ correlation between the quality aspects. The highest value per column is marked bold."
1189
+ ],
1190
+ "table_footnote": [],
1191
+ "table_body": "<table><tr><td rowspan=\"2\">Approach</td><td colspan=\"2\">D. Func.</td><td colspan=\"2\">Argum.</td><td colspan=\"2\">Compon.</td><td colspan=\"2\">D. Mode</td></tr><tr><td>Acc.</td><td>F1</td><td>Acc.</td><td>F1</td><td>Acc.</td><td>F1</td><td>Acc.</td><td>F1</td></tr><tr><td>Random</td><td>.14</td><td>.00</td><td>.20</td><td>.00</td><td>.08</td><td>.00</td><td>.05</td><td>.00</td></tr><tr><td>Majority</td><td>.86</td><td>.52</td><td>.56</td><td>.00</td><td>.41</td><td>.00</td><td>.24</td><td>.00</td></tr><tr><td>mDeBERTaV3</td><td>.92</td><td>.46</td><td>.86</td><td>.29</td><td>.66</td><td>.21</td><td>.63</td><td>.21</td></tr><tr><td>-adapter</td><td>.95</td><td>.68</td><td>.92</td><td>.52</td><td>.76</td><td>.49</td><td>.73</td><td>.46</td></tr><tr><td>Human</td><td>.98</td><td>.94</td><td>.96</td><td>.85</td><td>.93</td><td>.89</td><td>.89</td><td>.84</td></tr></table>",
1192
+ "bbox": [
1193
+ 510,
1194
+ 82,
1195
+ 882,
1196
+ 205
1197
+ ],
1198
+ "page_idx": 6
1199
+ },
1200
+ {
1201
+ "type": "text",
1202
+ "text": "Table 6: Argument mining results: Macro $\\mathrm{F}_1$ -score and accuracy of each approach in 5-fold cross-validation on all four argumentative structure dimensions. The best value per column is marked bold.",
1203
+ "bbox": [
1204
+ 507,
1205
+ 215,
1206
+ 882,
1207
+ 273
1208
+ ],
1209
+ "page_idx": 6
1210
+ },
1211
+ {
1212
+ "type": "text",
1213
+ "text": "(80%) for training and divided the fifth fold in half: one half (10%) for selecting the best-performing checkpoint in terms of macro-averaged $\\mathrm{F}_1$ -score, and the remaining half (10%) for testing.",
1214
+ "bbox": [
1215
+ 507,
1216
+ 298,
1217
+ 882,
1218
+ 362
1219
+ ],
1220
+ "page_idx": 6
1221
+ },
1222
+ {
1223
+ "type": "text",
1224
+ "text": "Models We used the multilingual model mDeBERTaV3 (He et al., 2023) (microsoft/mdeberta-v3-base) from Huggingface (Wolf et al., 2020). Besides, we tested the effect of training adapters (Houlsby et al., 2019), a set of task-specific parameters that are added to every transformer layer of mDeBERTaV3 and fine-tuned on the task while the model weights are fixed. To quantify the impact of learning, we compare against a random baseline that chooses a token label pseudo-randomly and a majority baseline that always predicts the majority token label from the training set. As upper bound, we report the human performance in terms of the average of each annotator in isolation on the 120 IAA texts annotated by all annotators.",
1225
+ "bbox": [
1226
+ 507,
1227
+ 370,
1228
+ 882,
1229
+ 612
1230
+ ],
1231
+ "page_idx": 6
1232
+ },
1233
+ {
1234
+ "type": "text",
1235
+ "text": "Experimental Setup We train mDeBERTaV3 for 30 epochs (1,980 steps) using the suggested hyperparameter values: a learning rate of $3e - 5$ , batch size 16, and 500 warmup steps. For mDeBERTaV3-adapter, we follow Pfeiffer et al. (2020) who recommend to use a higher learning rate of $1e - 4$ and train longer, here 50 epochs (3,300 steps).",
1236
+ "bbox": [
1237
+ 507,
1238
+ 621,
1239
+ 882,
1240
+ 734
1241
+ ],
1242
+ "page_idx": 6
1243
+ },
1244
+ {
1245
+ "type": "text",
1246
+ "text": "Results Table 6 presents the token classification results for all levels of argumentative structure, averaged over all folds. Noteworthy, mDeBERTaV3-adapter outperforms training the whole model (mDeBERTaV3) in all cases. Given that the $\\mathrm{F_1}$ -scores improve more than the accuracy, the",
1247
+ "bbox": [
1248
+ 507,
1249
+ 743,
1250
+ 882,
1251
+ 839
1252
+ ],
1253
+ "page_idx": 6
1254
+ },
1255
+ {
1256
+ "type": "page_footnote",
1257
+ "text": "<sup>6</sup>Explorative experiments using instruction fine-tuned models such as Alpaca (Taori et al., 2023) did not lead to promising results for our token classification task.",
1258
+ "bbox": [
1259
+ 507,
1260
+ 848,
1261
+ 882,
1262
+ 883
1263
+ ],
1264
+ "page_idx": 6
1265
+ },
1266
+ {
1267
+ "type": "page_footnote",
1268
+ "text": "<sup>7</sup>Note that the test set used for the model performance is a different subset of the dataset.",
1269
+ "bbox": [
1270
+ 507,
1271
+ 884,
1272
+ 880,
1273
+ 908
1274
+ ],
1275
+ "page_idx": 6
1276
+ },
1277
+ {
1278
+ "type": "page_number",
1279
+ "text": "2667",
1280
+ "bbox": [
1281
+ 480,
1282
+ 928,
1283
+ 519,
1284
+ 940
1285
+ ],
1286
+ "page_idx": 6
1287
+ },
1288
+ {
1289
+ "type": "table",
1290
+ "img_path": "images/26b5bae4cba33f5bf35224a5b25a8862169379aa951917634ec173b6ad5b9fd7.jpg",
1291
+ "table_caption": [],
1292
+ "table_footnote": [],
1293
+ "table_body": "<table><tr><td>Approach</td><td>Relevance</td><td>Content</td><td>Structure</td><td>Style</td><td>Overall</td></tr><tr><td>Random</td><td>-0.013 ±0.084</td><td>-0.011 ±0.071</td><td>-0.014 ±0.073</td><td>0.017 ±0.084</td><td>-0.004 ±0.083</td></tr><tr><td>Majority</td><td>0.000 ±0.000</td><td>0.000 ±0.000</td><td>0.000 ±0.000</td><td>0.000 ±0.000</td><td>0.000 ±0.000</td></tr><tr><td>mDeBERTaV3</td><td>0.530 ±0.069</td><td>0.295 ±0.109</td><td>0.513 ±0.044</td><td>0.492 ±0.059</td><td>0.616 ±0.040</td></tr><tr><td>-adapter</td><td>0.564 ±0.018</td><td>0.431 ±0.098</td><td>0.575 ±0.038</td><td>0.579 ±0.077</td><td>0.648 ±0.054</td></tr><tr><td>-fusion-w/-discourse-functions</td><td>0.599 ±0.043</td><td>0.381 ±0.134</td><td>0.559 ±0.036</td><td>0.569 ±0.069</td><td>0.668 ±0.049</td></tr><tr><td>-fusion-w/-arguments</td><td>0.593 ±0.030</td><td>0.448 ±0.105</td><td>0.575 ±0.019</td><td>0.581 ±0.054</td><td>0.668 ±0.036</td></tr><tr><td>-fusion-w/-components</td><td>†0.600 ±0.025</td><td>0.437 ±0.137</td><td>0.543 ±0.044</td><td>0.585 ±0.053</td><td>0.663 ±0.046</td></tr><tr><td>-fusion-w/-discourse-modes</td><td>0.544 ±0.028</td><td>0.420 ±0.118</td><td>0.535 ±0.041</td><td>0.583 ±0.064</td><td>0.645 ±0.023</td></tr><tr><td>-fusion-w/-all</td><td>0.574 ±0.039</td><td>0.454 ±0.142</td><td>0.546 ±0.013</td><td>0.617 ±0.057</td><td>†0.686 ±0.031</td></tr><tr><td>Human</td><td>0.636 ±0.055</td><td>0.632 ±0.003</td><td>0.734 ±0.007</td><td>0.766 ±0.005</td><td>0.746 ±0.003</td></tr></table>",
1294
+ "bbox": [
1295
+ 117,
1296
+ 80,
1297
+ 880,
1298
+ 256
1299
+ ],
1300
+ "page_idx": 7
1301
+ },
1302
+ {
1303
+ "type": "text",
1304
+ "text": "Table 7: Essay scoring results: QWK of each approach in 5-fold cross-validation on all five quality dimensions. The best value per column is marked bold. We mark significant gains over mDeBERTaV3-adapter at $p < .05$ with †.",
1305
+ "bbox": [
1306
+ 112,
1307
+ 266,
1308
+ 882,
1309
+ 296
1310
+ ],
1311
+ "page_idx": 7
1312
+ },
1313
+ {
1314
+ "type": "text",
1315
+ "text": "adapters seem less prone to overfitting to the majority label. This learning success suggests the possibility of predicting all argumentative structure levels on our corpus. However, further improvements using more advanced approaches are expected.",
1316
+ "bbox": [
1317
+ 112,
1318
+ 321,
1319
+ 489,
1320
+ 401
1321
+ ],
1322
+ "page_idx": 7
1323
+ },
1324
+ {
1325
+ "type": "text",
1326
+ "text": "5.2 Essay Scoring",
1327
+ "text_level": 1,
1328
+ "bbox": [
1329
+ 112,
1330
+ 411,
1331
+ 272,
1332
+ 426
1333
+ ],
1334
+ "page_idx": 7
1335
+ },
1336
+ {
1337
+ "type": "text",
1338
+ "text": "We treat predicting the essay quality as a text classification task: Given a school student essay and a quality aspect, predict the corresponding quality score. As before, we performed 5-fold cross-validation for each quality aspect using the same folds. We selected the best-performing checkpoint on the validation set using quadratic weighted kappa (QWK), the most widely adopted metric for automatic essay scoring (Ke and Ng, 2019).",
1339
+ "bbox": [
1340
+ 112,
1341
+ 432,
1342
+ 489,
1343
+ 577
1344
+ ],
1345
+ "page_idx": 7
1346
+ },
1347
+ {
1348
+ "type": "text",
1349
+ "text": "Models We adopted the previous approaches by changing the head to a text classification head. To analyze the interaction between argumentative structure and essay quality, we employed Adapter-Fusion (Pfeiffer et al., 2021), a multi-task learning framework that can be used to investigate relations between different dimensions by learning how to combine model weights with one or more adapters. We used the mDeBERTaV3-adapters trained on argumentative structure from the previous experiment. As the final adapter, we chose the one trained on the folding that performed most representative for all folds ( $F_1$ -score closest to the reported averaged $F_1$ -score across folds). To measure the impact of each level of argumentative structure on the scoring performance, we used each adapter individually and a combination of all of them.",
1350
+ "bbox": [
1351
+ 112,
1352
+ 586,
1353
+ 489,
1354
+ 858
1355
+ ],
1356
+ "page_idx": 7
1357
+ },
1358
+ {
1359
+ "type": "text",
1360
+ "text": "Experimental Setup The experimental setup for mDeBERTaV3 and mDeBERTaV3-adapter was adopted from before. For training the Adapter-",
1361
+ "bbox": [
1362
+ 112,
1363
+ 868,
1364
+ 489,
1365
+ 917
1366
+ ],
1367
+ "page_idx": 7
1368
+ },
1369
+ {
1370
+ "type": "text",
1371
+ "text": "Fusion, we followed Pfeiffer et al. (2021) to use a learning rate of $5e - 5$ and trained shorter than the adapters, in our case for 20 epochs (1,320 steps).",
1372
+ "bbox": [
1373
+ 507,
1374
+ 319,
1375
+ 882,
1376
+ 369
1377
+ ],
1378
+ "page_idx": 7
1379
+ },
1380
+ {
1381
+ "type": "text",
1382
+ "text": "Results Table 7 shows the scoring results. All models outperform the lower-bound baselines (random and majority), suggesting that the quality scoring can be learned from our corpus. Furthermore, fusing all adapters trained on argumentative structure (mDeBERTaV3-fusion-w/-all) performs best for three out of five quality aspects, significantly beating mDeBERTaV3-adapter in predicting overall quality. This underlines the need for all four levels of argumentative structure together in order to improve scoring overall quality (0.686 vs. 0.648). In addition, using only the adapter trained on the component level (mDeBERTaV3-fusion-w/-components) helps to significantly improve over mDeBERTaV3-adapter in predicting relevance (0.600 vs. 0.564), indicating an interaction between the structure on component level and this quality aspect. QWK scores greater or equal to 0.6 suggest substantial agreement between the predicted and ground-truth quality scoring of essays.",
1383
+ "bbox": [
1384
+ 505,
1385
+ 376,
1386
+ 884,
1387
+ 700
1388
+ ],
1389
+ "page_idx": 7
1390
+ },
1391
+ {
1392
+ "type": "text",
1393
+ "text": "AdapterFusion Activations AdapterFusion extracts information from adapters only if they benefit the target task. Similar to Falk and Lapesa (2023), we visualize the average activations of our model mDeBERTaV3-fusion-w/-all over the layers in Figure 4 to investigate the influence of each level of argumentative structure on the quality scoring. All adapters are activated fairly evenly for all quality aspects, with slight deviations. This aligns with our previous results and underlines that all annotated",
1394
+ "bbox": [
1395
+ 507,
1396
+ 708,
1397
+ 885,
1398
+ 869
1399
+ ],
1400
+ "page_idx": 7
1401
+ },
1402
+ {
1403
+ "type": "page_footnote",
1404
+ "text": "<sup>8</sup>We use Wilcoxon signed-rank tests at $p < .05$ for testing the significance.",
1405
+ "bbox": [
1406
+ 507,
1407
+ 876,
1408
+ 882,
1409
+ 903
1410
+ ],
1411
+ "page_idx": 7
1412
+ },
1413
+ {
1414
+ "type": "page_number",
1415
+ "text": "2668",
1416
+ "bbox": [
1417
+ 480,
1418
+ 928,
1419
+ 519,
1420
+ 940
1421
+ ],
1422
+ "page_idx": 7
1423
+ },
1424
+ {
1425
+ "type": "image",
1426
+ "img_path": "images/17b8a484584cbf846f79aaa9dadf1ff7e0b853ac0f1eb37d917a62ce9c798e4a.jpg",
1427
+ "image_caption": [
1428
+ "Figure 4: AdapterFusion activation on average over the layers for each mDeBERTaV3-fusion-w/-all model per quality aspect. We average the activation for each fused adapter (discourse functions, arguments, components, discourse modes) over all instances in the most representative test set folding."
1429
+ ],
1430
+ "image_footnote": [],
1431
+ "bbox": [
1432
+ 176,
1433
+ 80,
1434
+ 823,
1435
+ 146
1436
+ ],
1437
+ "page_idx": 8
1438
+ },
1439
+ {
1440
+ "type": "text",
1441
+ "text": "structure levels are helpful for quality scoring. The activations per layer can be found in Appendix B.",
1442
+ "bbox": [
1443
+ 112,
1444
+ 223,
1445
+ 485,
1446
+ 256
1447
+ ],
1448
+ "page_idx": 8
1449
+ },
1450
+ {
1451
+ "type": "text",
1452
+ "text": "6 Conclusion",
1453
+ "text_level": 1,
1454
+ "bbox": [
1455
+ 112,
1456
+ 267,
1457
+ 245,
1458
+ 282
1459
+ ],
1460
+ "page_idx": 8
1461
+ },
1462
+ {
1463
+ "type": "text",
1464
+ "text": "Argumentative writing support of school students presupposes that the quality of their arguments can be assessed. Until now, no argument mining corpus with school student essays has been published, let alone any essay corpus with both argument and quality annotations. With this work, we fill both research gaps with a new corpus of 1,320 German school student essays, annotated by experts for argumentative structure and essay quality.",
1465
+ "bbox": [
1466
+ 112,
1467
+ 292,
1468
+ 487,
1469
+ 436
1470
+ ],
1471
+ "page_idx": 8
1472
+ },
1473
+ {
1474
+ "type": "text",
1475
+ "text": "Our corpus analysis has provided various insights into the correlation between the different levels of argumentative structure and essay quality. In our experiments with fine-tuned transformers and adapters for mining argumentative structure and scoring essay quality we have demonstrated that combining information on all four argumentative structure levels helps the prediction of essay quality. This shows the usefulness of our corpus for research on quality-oriented argumentative writing support, which we seek to enable with this paper.",
1476
+ "bbox": [
1477
+ 112,
1478
+ 438,
1479
+ 487,
1480
+ 613
1481
+ ],
1482
+ "page_idx": 8
1483
+ },
1484
+ {
1485
+ "type": "text",
1486
+ "text": "We point out that our corpus contains various information yet to be explored, such as argumentative relations and school student metadata. It thus lays the ground for further analyses—like identifying unwarranted claims and studying differences across age groups and genders.",
1487
+ "bbox": [
1488
+ 112,
1489
+ 615,
1490
+ 487,
1491
+ 709
1492
+ ],
1493
+ "page_idx": 8
1494
+ },
1495
+ {
1496
+ "type": "text",
1497
+ "text": "7 Limitations",
1498
+ "text_level": 1,
1499
+ "bbox": [
1500
+ 112,
1501
+ 722,
1502
+ 248,
1503
+ 737
1504
+ ],
1505
+ "page_idx": 8
1506
+ },
1507
+ {
1508
+ "type": "text",
1509
+ "text": "Aside from the still-improvable performance of the presented baseline models for argument mining and essay scoring, we see two notable limitations of our work: the restriction to German texts, and the pending utilization of the corpus for quality-oriented argumentative writing support.",
1510
+ "bbox": [
1511
+ 112,
1512
+ 747,
1513
+ 487,
1514
+ 844
1515
+ ],
1516
+ "page_idx": 8
1517
+ },
1518
+ {
1519
+ "type": "text",
1520
+ "text": "First, we point out the specific language background of our work. The essays were written by German school students, and the annotations were developed in close communication with German",
1521
+ "bbox": [
1522
+ 112,
1523
+ 845,
1524
+ 487,
1525
+ 908
1526
+ ],
1527
+ "page_idx": 8
1528
+ },
1529
+ {
1530
+ "type": "text",
1531
+ "text": "experts from the field of language education, while the discourse modes and essay quality aspects are, to a considerable extent, derived from work on German texts. This means that our findings may not perfectly align with argumentative writing in other countries or languages with different expectations for argumentative essays.",
1532
+ "bbox": [
1533
+ 507,
1534
+ 223,
1535
+ 884,
1536
+ 335
1537
+ ],
1538
+ "page_idx": 8
1539
+ },
1540
+ {
1541
+ "type": "text",
1542
+ "text": "Second, while our analyses suggest that our corpus helps to enable quality-oriented argumentative writing support, the perceived usefulness of such a tool is still to be evaluated. We expect and encourage future work to utilize our corpus for such writing support tools, for example, by further analyzing which exact argumentative structures influence the essay quality and to what extent. Interpretable essay quality scoring based on the structure might generate helpful insights that can be used as writing feedback by school students.",
1543
+ "bbox": [
1544
+ 507,
1545
+ 336,
1546
+ 884,
1547
+ 512
1548
+ ],
1549
+ "page_idx": 8
1550
+ },
1551
+ {
1552
+ "type": "text",
1553
+ "text": "8 Ethical Considerations",
1554
+ "text_level": 1,
1555
+ "bbox": [
1556
+ 507,
1557
+ 524,
1558
+ 741,
1559
+ 539
1560
+ ],
1561
+ "page_idx": 8
1562
+ },
1563
+ {
1564
+ "type": "text",
1565
+ "text": "We see no apparent risk of the corpus or the methods presented in this paper being misused for ethically doubtful purposes. The authors of the FD-LEX corpus (Becker-Mrotzek and Grabowski, 2018) have already pseudonymized the author of each essay. Therefore, it is not possible to identify the individual school student from the provided data. However, we want to point out that one might find differences in the essays across gender or age groups that do not reflect reality but are rather due to an unintentional bias in the data selection.",
1566
+ "bbox": [
1567
+ 507,
1568
+ 550,
1569
+ 882,
1570
+ 726
1571
+ ],
1572
+ "page_idx": 8
1573
+ },
1574
+ {
1575
+ "type": "text",
1576
+ "text": "Acknowledgments",
1577
+ "text_level": 1,
1578
+ "bbox": [
1579
+ 509,
1580
+ 739,
1581
+ 672,
1582
+ 753
1583
+ ],
1584
+ "page_idx": 8
1585
+ },
1586
+ {
1587
+ "type": "text",
1588
+ "text": "We would like to thank the participants of our study and the anonymous reviewers for the valuable feedback and their time. This work was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the project ArgSchool, project number 453073654.",
1589
+ "bbox": [
1590
+ 507,
1591
+ 764,
1592
+ 882,
1593
+ 860
1594
+ ],
1595
+ "page_idx": 8
1596
+ },
1597
+ {
1598
+ "type": "page_number",
1599
+ "text": "2669",
1600
+ "bbox": [
1601
+ 480,
1602
+ 928,
1603
+ 519,
1604
+ 940
1605
+ ],
1606
+ "page_idx": 8
1607
+ },
1608
+ {
1609
+ "type": "text",
1610
+ "text": "References",
1611
+ "text_level": 1,
1612
+ "bbox": [
1613
+ 115,
1614
+ 84,
1615
+ 213,
1616
+ 98
1617
+ ],
1618
+ "page_idx": 9
1619
+ },
1620
+ {
1621
+ "type": "list",
1622
+ "sub_type": "ref_text",
1623
+ "list_items": [
1624
+ "Tazin Afrin and Diane Litman. 2018. Annotation and classification of sentence-level revision improvement. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 240-246, New Orleans, Louisiana. Association for Computational Linguistics.",
1625
+ "Aniket Ajit Tambe and Manasi Kulkarni. 2022. Automated essay scoring system with grammar score analysis. In 2022 Smart Technologies, Communication and Robotics (STCR), pages 1-7.",
1626
+ "Patricia A. Alexander, Jannah Fusenig, Eric C. Schoute, Anisha Singh, Yuting Sun, and Julianne E. van Meerten. 2023. Confronting the challenges of undergraduates' argumentation writing in a \"learning how to learn\" course. Written Communication, 40(2):482-517.",
1627
+ "Michael Becker-Mrotzek and Joachim Grabowski. 2018. Textkorpus Scriptoria. In Michael Becker-Mrotzek and Joachim Grabowski, editors, FD-LEX (Forschungsdatenbank Lernertexte). Mercator-Institut für Sprachförderung und Deutsch als Zweitsprache, Köln. Available at: https://fd-lex.uni-koeln.de.",
1628
+ "Michael Becker-Mrotzek, Frank Schneider, and Klaus Tetling. 2010. Argumentierendes Schreiben - lehren und lernen. Vorschläge für einen systematischen Kompetenzaufbau in den Stufen 5 bis 8.",
1629
+ "Beata Beigman Klebanov, Christian Stab, Jill Burstein, Yi Song, Binod Gyawali, and Iryna Gurevych. 2016. Argumentation: Content, structure, and relationship with essay quality. In Proceedings of the Third Workshop on Argument Mining (ArgMining2016), pages 70-75, Berlin, Germany. Association for Computational Linguistics.",
1630
+ "Sebastian Britner, Lorik Dumani, and Ralf Schenkel. 2023. Aquaplane: The argument quality explainer app. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, CIKM '23, page 5015-5020, New York, NY, USA. Association for Computing Machinery.",
1631
+ "Richard Correnti, Lindsay Clare Matsumura, Laura Hamilton, and Elaine Wang. 2013. Assessing students' skills at writing analytically in response to texts. The Elementary School Journal, 114(2):142-177.",
1632
+ "Scott Crossley, Perpetual Baffour, Tian Yu, Alex Franklin, Meg Benner, and Ulrich Boser. 2023a. A large-scale corpus for assessing written argumentation: PERSUADE 2.0.",
1633
+ "Scott Crossley, Yu Tian, Perpetual Baffour, Alex Franklin, Youngmeen Kim, Wesley Morris, Meg Benner, Aigner Picou, and Ulrich Boser. 2023b. The English Language Learner Insight, Proficiency and Skills Evaluation (ELLIPSE) corpus. International Journal of Learner Corpus Research, 9(2):248-269."
1634
+ ],
1635
+ "bbox": [
1636
+ 115,
1637
+ 105,
1638
+ 489,
1639
+ 917
1640
+ ],
1641
+ "page_idx": 9
1642
+ },
1643
+ {
1644
+ "type": "list",
1645
+ "sub_type": "ref_text",
1646
+ "list_items": [
1647
+ "Thi Hanh Dang, Thanh Hai Chau, and To Quyen Tra. 2020. A study on the difficulties in writing argumentative essays of English-majored sophomores at Tay Do University, Vietnam. European Journal of English Language Teaching, 6(1).",
1648
+ "Neele Falk and Gabriella Lapesa. 2023. Bridging argument quality and deliberative quality annotations with adapters. In Findings of the Association for Computational Linguistics: EACL 2023, pages 2469-2488, Dubrovnik, Croatia. Association for Computational Linguistics.",
1649
+ "Youmna Farag, Helen Yannakoudakis, and Ted Briscoe. 2018. Neural automated essay scoring and coherence modeling for adversarially crafted input. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 263-271, New Orleans, Louisiana. Association for Computational Linguistics.",
1650
+ "Helmuth Feilke. 2017. Schreib- und Textprozeduren. In Jürgen Baurmann, Clemens Kammler, and Astrid Müller, editors, Handbuch Deutschunterricht. Theorie und Praxis des Lehrens und Lernens, 1 edition, pages 51-57. Reihe Praxis Deutsch.",
1651
+ "Helmuth Feilke and Sara Rezat. 2021. Textprozeduren und der Erwerb literaler Kompetenz. In Nikolas Koch and Barbara Kozikowski, editors, Sprach(en)erwerb, pages 69-79. Der Deutschunterricht.",
1652
+ "Ralph P. Ferretti, Scott Andrews-Weckerly, and William E. Lewis. 2007. Improving the argumentative writing of students with learning disabilities: Descriptive and normative considerations. Reading & Writing Quarterly, 23(3):267-285.",
1653
+ "Sylviane Granger, Estelle Dagneaux, Fanny Meunier, and Magali Paquot. 2009. The International Corpus of Learner English. Presses universitaires de Louvain, Louvain-la-Neuve.",
1654
+ "Timon Gurcke, Milad Alshomary, and Henning Wachsmuth. 2021. Assessing the sufficiency of arguments through conclusion generation. In Proceedings of the 8th Workshop on Argument Mining, pages 67-77, Punta Cana, Dominican Republic. Association for Computational Linguistics.",
1655
+ "Olaf Gätje, Sara Rezat, and Torsten Steinhoff. 2012. Positionierung. Zur Entwicklung des Gebrauchs modalisierender Prozeduren in argumentativen Texten von Schülern und Studenten. In Helmuth Feilke and Katrin Lehn, editors, TextROUTinen. Theorie, Erwerb und didaktisch-mediale Modellierung, pages 125-153. Lang, Frankfurt/Main.",
1656
+ "Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2023. DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. In The Eleventh International Conference on Learning Representations."
1657
+ ],
1658
+ "bbox": [
1659
+ 510,
1660
+ 85,
1661
+ 884,
1662
+ 917
1663
+ ],
1664
+ "page_idx": 9
1665
+ },
1666
+ {
1667
+ "type": "page_number",
1668
+ "text": "2670",
1669
+ "bbox": [
1670
+ 480,
1671
+ 928,
1672
+ 519,
1673
+ 940
1674
+ ],
1675
+ "page_idx": 9
1676
+ },
1677
+ {
1678
+ "type": "list",
1679
+ "sub_type": "ref_text",
1680
+ "list_items": [
1681
+ "Andrea Horbach, Dirk Scholten-Akoun, Yuning Ding, and Torsten Zesch. 2017. Fine-grained essay scoring of a complex writing task for native speakers. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 357-366, Copenhagen, Denmark. Association for Computational Linguistics.",
1682
+ "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2790-2799. PMLR.",
1683
+ "Maleerat Ka-kan-dee and Sarjit Kaur. 2014. Argumentative writing difficulties of Thai English major students. In The 2014 WEI International Academic Conference Proceedings, pages 193-207.",
1684
+ "Zixuan Ke and Vincent Ng. 2019. Automated essay scoring: A survey of the state of the art. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 6300-6308. International Joint Conferences on Artificial Intelligence Organization.",
1685
+ "Ronald T. Kellogg, Alison P. Whiteford, and Thomas Quinlan. 2010. Does automated feedback help students learn to write? Journal of Educational Computing Research, 42(2):173-196.",
1686
+ "Norbert Kruse, Anke Reichardt, Maik Herrmann, Friederike Heinzel, and Frank Lipowsky. 2012. Zur Qualität von Kindertexten. Entwicklung eines Bewertungsinstrumentes in der Grundschule. Didaktik Deutsch: Halbjahresschrift für die Didaktik der deutschen Sprache und Literatur, 17(32):87-110.",
1687
+ "Xia Li, Minping Chen, Jianyun Nie, Zhenxing Liu, Ziheng Feng, and Yingdan Cai. 2018. Coherence-based automated essay scoring using self-attention. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data, pages 386-397, Cham. Springer International Publishing.",
1688
+ "Zhexiong Liu, Diane Litman, Elaine Wang, Lindsay Matsumura, and Richard Correnti. 2023. Predicting the quality of revisions in argumentative writing. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 275-287, Toronto, Canada. Association for Computational Linguistics.",
1689
+ "Huy Nguyen and Diane Litman. 2018. Argument mining for improving the automated scoring of persuasive essays. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).",
1690
+ "John Peloghitis. 2017. Difficulties and strategies in argumentative writing: A qualitative analysis. Transformation in language education. JALT."
1691
+ ],
1692
+ "bbox": [
1693
+ 115,
1694
+ 85,
1695
+ 485,
1696
+ 910
1697
+ ],
1698
+ "page_idx": 10
1699
+ },
1700
+ {
1701
+ "type": "list",
1702
+ "sub_type": "ref_text",
1703
+ "list_items": [
1704
+ "Isaac Persing, Alan Davis, and Vincent Ng. 2010. Modeling organization in student essays. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 229-239, Cambridge, MA. Association for Computational Linguistics.",
1705
+ "Isaac Persing and Vincent Ng. 2013. Modeling thesis clarity in student essays. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 260-269, Sofia, Bulgaria. Association for Computational Linguistics.",
1706
+ "Isaac Persing and Vincent Ng. 2014. Modeling prompt adherence in student essays. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1534-1543, Baltimore, Maryland. Association for Computational Linguistics.",
1707
+ "Isaac Persing and Vincent Ng. 2015. Modeling argument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 543-552, Beijing, China. Association for Computational Linguistics.",
1708
+ "Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021. AdapterFusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487-503, Online. Association for Computational Linguistics.",
1709
+ "Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020. AdapterHub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46-54, Online. Association for Computational Linguistics.",
1710
+ "Zahra Rahimi, Diane Litman, Elaine Wang, and Richard Correnti. 2015. Incorporating coherence of topics as a criterion in automatic response-to-text assessment of the organization of writing. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 20-30, Denver, Colorado. Association for Computational Linguistics.",
1711
+ "Zahra Rahimi, Diane J. Litman, Richard Correnti, Lindsay Clare Matsumura, Elaine Wang, and Zahid Kisa. 2014. Automatic scoring of an analytical response-to-text assessment. In Intelligent Tutoring Systems, pages 601-610, Cham. Springer International Publishing.",
1712
+ "Sara Rezat. 2011. Schriftliches Argumentieren. Zur Ontogenese konzessiver Argumentationskompetenz."
1713
+ ],
1714
+ "bbox": [
1715
+ 510,
1716
+ 85,
1717
+ 880,
1718
+ 913
1719
+ ],
1720
+ "page_idx": 10
1721
+ },
1722
+ {
1723
+ "type": "page_number",
1724
+ "text": "2671",
1725
+ "bbox": [
1726
+ 480,
1727
+ 928,
1728
+ 517,
1729
+ 940
1730
+ ],
1731
+ "page_idx": 10
1732
+ },
1733
+ {
1734
+ "type": "list",
1735
+ "sub_type": "ref_text",
1736
+ "list_items": [
1737
+ "Didaktik Deutsch: Halbjahresschrift für die Didaktik der deutschen Sprache und Literatur, 16(31):50-67.",
1738
+ "Sara Rezat. 2018. Argumentative Textprozeduren als Instrumente zur Anbahnung wissenschaftlicher Textkompetenz. In Sabine Schmölzer-Eibinger, Bora Bushati, Christopher Ebner, and Lisa Niederdorfer, editors, Wissenschaftliches Schreiben lehren und lernen. Diagnose und Förderung wissenschaftlicher Textkompetenz in Schule und Universität, pages 125-146. Waxmann, Münster.",
1739
+ "Juliane Schröter. 2021. Linguistische Argumentationsanalyse. Kurze Einführungen in die germanistische Linguistik. Universitätsverlag Winter, Heidelberg.",
1740
+ "Gabriella Skitalinskaya, Maximilian Spliethöver, and Henning Wachsmuth. 2023. Claim optimization in computational argumentation. In Proceedings of the 16th International Natural Language Generation Conference, pages 134-152, Prague, Czechia. Association for Computational Linguistics.",
1741
+ "Gabriella Skitalinskaya and Henning Wachsmuth. 2023. To revise or not to revise: Learning to detect improvable claims for argumentative writing support. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15799-15816, Toronto, Canada. Association for Computational Linguistics.",
1742
+ "Carlota S. Smith. 2003. Modes of Discourse: The Local Structure of Texts. Cambridge Studies in Linguistics. Cambridge University Press.",
1743
+ "Christian Stab. 2017. Argumentative Writing Support by means of Natural Language Processing. Ph.D. thesis, Technische Universität Darmstadt, Darmstadt.",
1744
+ "Christian Stab and Iryna Gurevych. 2017a. Parsing argumentation structures in persuasive essays. Computational Linguistics, 43(3):619-659.",
1745
+ "Christian Stab and Iryna Gurevych. 2017b. Recognizing insufficiently supported arguments in argumentative essays. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 980-990, Valencia, Spain. Association for Computational Linguistics.",
1746
+ "Maja Stahl, Nick Düsterhus, Mei-Hua Chen, and Henning Wachsmuth. 2023. Mind the gap: Automated corpus creation for enthymeme detection and reconstruction in learner arguments. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4703-4717, Singapore. Association for Computational Linguistics.",
1747
+ "Manfred Stede and Jodi Schneider. 2019. Argumentation Mining. Springer International Publishing.",
1748
+ "Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca."
1749
+ ],
1750
+ "bbox": [
1751
+ 115,
1752
+ 85,
1753
+ 489,
1754
+ 917
1755
+ ],
1756
+ "page_idx": 11
1757
+ },
1758
+ {
1759
+ "type": "list",
1760
+ "sub_type": "ref_text",
1761
+ "list_items": [
1762
+ "Michael A. R. Townsend, Lynley Hicks, Jacquilyn D. M. Thompson, Keri M. Wilton, Bryan F. Tuck, and Dennis W. Moore. 1993. Effects of introductions and conclusions in assessment of student essays. Journal of Educational Psychology, 85(4):670-678.",
1763
+ "Masaki Uto, Yikuan Xie, and Maomi Ueno. 2020. Neural automated essay scoring incorporating handcrafted features. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6077-6088, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
1764
+ "Henning Wachsmuth, Khalid Al-Khatib, and Benno Stein. 2016. Using argument mining to assess the argumentation quality of essays. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1680-1691, Osaka, Japan. The COLING 2016 Organizing Committee.",
1765
+ "Douglas Walton, Christopher Reed, and Fabrizio Macagno. 2008. Argumentation Schemes. Cambridge University Press.",
1766
+ "Thiemo Wambsganss, Andrew Caines, and Paula Buttery. 2022a. ALEN app: Argumentative writing support to foster English language learning. In Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), pages 134-140, Seattle, Washington. Association for Computational Linguistics.",
1767
+ "Thiemo Wambsganss, Andreas Janson, and Jan Marco Leimeister. 2022b. Enhancing argumentative writing with automated feedback and social comparison nudging. Computers and Education, 191:104644.",
1768
+ "Thiemo Wambsganss and Christina Niklaus. 2022. Modeling persuasive discourse to adaptively support students' argumentative writing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8748-8760, Dublin, Ireland. Association for Computational Linguistics.",
1769
+ "Thiemo Wambsganss, Christina Niklaus, Matthias Sollenner, Siegfried Handschuh, and Jan Marco Leimeister. 2020. A corpus for argumentative writing support in German. In Proceedings of the 28th International Conference on Computational Linguistics, pages 856-869, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
1770
+ "Thiemo Wambsganss and Roman Rietsche. 2019. Towards designing an adaptive argumentation learning tool. In International Conference on Interaction Sciences.",
1771
+ "Cong Wang, Zhiwei Jiang, Yafeng Yin, Zifeng Cheng, Shiping Ge, and Qing Gu. 2023. Aggregating multiple heuristic signals as supervision for unsupervised automated essay scoring. In Proceedings of the 61st Annual Meeting of the Association for Computational"
1772
+ ],
1773
+ "bbox": [
1774
+ 510,
1775
+ 85,
1776
+ 884,
1777
+ 898
1778
+ ],
1779
+ "page_idx": 11
1780
+ },
1781
+ {
1782
+ "type": "page_number",
1783
+ "text": "2672",
1784
+ "bbox": [
1785
+ 480,
1786
+ 928,
1787
+ 519,
1788
+ 940
1789
+ ],
1790
+ "page_idx": 11
1791
+ },
1792
+ {
1793
+ "type": "text",
1794
+ "text": "Linguistics (Volume 1: Long Papers), pages 1399-14013, Toronto, Canada. Association for Computational Linguistics.",
1795
+ "bbox": [
1796
+ 132,
1797
+ 85,
1798
+ 489,
1799
+ 124
1800
+ ],
1801
+ "page_idx": 12
1802
+ },
1803
+ {
1804
+ "type": "text",
1805
+ "text": "Florian Weber, Thiemo Wambsganss, Seyed Parsa Neshaei, and Matthias Soellner. 2023. Structured persuasive writing support in legal education: A model and tool for German legal case solutions. In *Findings of the Association for Computational Linguistics: ACL* 2023, pages 2296-2313, Toronto, Canada. Association for Computational Linguistics.",
1806
+ "bbox": [
1807
+ 115,
1808
+ 135,
1809
+ 489,
1810
+ 227
1811
+ ],
1812
+ "page_idx": 12
1813
+ },
1814
+ {
1815
+ "type": "text",
1816
+ "text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
1817
+ "bbox": [
1818
+ 115,
1819
+ 237,
1820
+ 489,
1821
+ 394
1822
+ ],
1823
+ "page_idx": 12
1824
+ },
1825
+ {
1826
+ "type": "text",
1827
+ "text": "Ruosong Yang, Jiannong Cao, Zhiyuan Wen, Youzheng Wu, and Xiaodong He. 2020. Enhancing automated essay scoring performance via fine-tuning pre-trained language models with combination of regression and ranking. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 1560–1569, Online. Association for Computational Linguistics.",
1828
+ "bbox": [
1829
+ 115,
1830
+ 404,
1831
+ 489,
1832
+ 495
1833
+ ],
1834
+ "page_idx": 12
1835
+ },
1836
+ {
1837
+ "type": "text",
1838
+ "text": "Wei Zhu. 2001. Performing argumentative writing in english: Difficulties, processes, and strategies. TESL Canada Journal, 19(1):34-50.",
1839
+ "bbox": [
1840
+ 115,
1841
+ 505,
1842
+ 487,
1843
+ 545
1844
+ ],
1845
+ "page_idx": 12
1846
+ },
1847
+ {
1848
+ "type": "text",
1849
+ "text": "A Cooccurrence Matrix",
1850
+ "text_level": 1,
1851
+ "bbox": [
1852
+ 114,
1853
+ 558,
1854
+ 339,
1855
+ 571
1856
+ ],
1857
+ "page_idx": 12
1858
+ },
1859
+ {
1860
+ "type": "text",
1861
+ "text": "The cooccurrence matrix between all argumentative structure levels (discourse functions, arguments, components, and discourse modes) is shown in Figure 5.",
1862
+ "bbox": [
1863
+ 112,
1864
+ 583,
1865
+ 489,
1866
+ 646
1867
+ ],
1868
+ "page_idx": 12
1869
+ },
1870
+ {
1871
+ "type": "text",
1872
+ "text": "B AdapterFusion Activations",
1873
+ "text_level": 1,
1874
+ "bbox": [
1875
+ 114,
1876
+ 659,
1877
+ 386,
1878
+ 675
1879
+ ],
1880
+ "page_idx": 12
1881
+ },
1882
+ {
1883
+ "type": "text",
1884
+ "text": "Similar to Pfeiffer et al. (2021), we visualize the activations of our model mDeBERTaV3-fusion-w/ all per layer in Figure 6 to further investigate the influence of each level of argumentative structure on the quality scoring. The first activation layers show for all five quality aspects that all structure adapters are activated quite diversely. In contrast, the later layers have a clear tendency towards activating only one or two adapters. Notable is the similar activation pattern between relevance and overall quality, which could come from their value correlation.",
1885
+ "bbox": [
1886
+ 112,
1887
+ 684,
1888
+ 489,
1889
+ 876
1890
+ ],
1891
+ "page_idx": 12
1892
+ },
1893
+ {
1894
+ "type": "page_number",
1895
+ "text": "2673",
1896
+ "bbox": [
1897
+ 480,
1898
+ 928,
1899
+ 519,
1900
+ 940
1901
+ ],
1902
+ "page_idx": 12
1903
+ },
1904
+ {
1905
+ "type": "image",
1906
+ "img_path": "images/d9d300d88c6589612ddc97390a040d409d3636021c2ec3e945fcb8f7c87a82b8.jpg",
1907
+ "image_caption": [
1908
+ "Figure 5: Relative token-level overlap of all argumentative structure labels, seperated into the four levels of granularity. For example, $68\\%$ of all tokens labeled as Introduction are also labeled Topic."
1909
+ ],
1910
+ "image_footnote": [],
1911
+ "bbox": [
1912
+ 127,
1913
+ 140,
1914
+ 823,
1915
+ 445
1916
+ ],
1917
+ "page_idx": 13
1918
+ },
1919
+ {
1920
+ "type": "image",
1921
+ "img_path": "images/98075f5f25716aa54f1e32f4f01db49a371f6a36dbeb485a29fc47b05899ee58.jpg",
1922
+ "image_caption": [
1923
+ "Figure 6: AdapterFusion activation per layer (1-12) and on average over the layers $(Avg)$ for each mDeBERTaV3-fusion-w/-all model per quality aspect. We average the activation for each fused adapter (for discourse functions, arguments, components, or discourse modes) over all instances in the test set of the most representative folding."
1924
+ ],
1925
+ "image_footnote": [],
1926
+ "bbox": [
1927
+ 126,
1928
+ 627,
1929
+ 875,
1930
+ 802
1931
+ ],
1932
+ "page_idx": 13
1933
+ },
1934
+ {
1935
+ "type": "page_number",
1936
+ "text": "2674",
1937
+ "bbox": [
1938
+ 480,
1939
+ 928,
1940
+ 521,
1941
+ 940
1942
+ ],
1943
+ "page_idx": 13
1944
+ }
1945
+ ]
2024/A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality/8156c041-3083-4221-9fa2-c2116611fd69_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality/8156c041-3083-4221-9fa2-c2116611fd69_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c6e5424c1fefa8e2c104c94df1d6af9ea63ff86036608c7d19d48c63ab97ab81
3
+ size 993442
2024/A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality/full.md ADDED
@@ -0,0 +1,356 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality
2
+
3
+ Maja Stahl<sup>1</sup>, Nadine Michel<sup>2</sup>, Sebastian Kilsbach<sup>2</sup>, Julian Schmidtke<sup>1</sup>, Sara Rezat<sup>2</sup>, and Henning Wachsmuth<sup>1</sup>
4
+
5
+ <sup>1</sup>Leibniz University Hannover, Institute of Artificial Intelligence
6
+
7
+ <sup>2</sup>Paderborn University, Institute for German Language and Comparative Literature
8
+
9
+ {m.stahl,h.wachsmuth}@ai.uni-hannover.de, julian.schmidtke@stud.uni-hannover.de
10
+
11
+ {nadine.michel, sebastian.kilsbach, sara.rezat}@uni-paderborn.de
12
+
13
+ # Abstract
14
+
15
+ Learning argumentative writing is challenging. Besides writing fundamentals such as syntax and grammar, learners must select and arrange argument components meaningfully to create high-quality essays. To support argumentative writing computationally, one step is to mine the argumentative structure. When combined with automatic essay scoring, interactions of the argumentative structure and quality scores can be exploited for comprehensive writing support. Although studies have shown the usefulness of using information about the argumentative structure for essay scoring, no argument mining corpus with ground-truth essay quality annotations has been published yet. Moreover, none of the existing corpora contain essays written by school students specifically. To fill this research gap, we present a German corpus of 1,320 essays from school students of two age groups. Each essay has been manually annotated for argumentative structure and quality on multiple levels of granularity. We propose baseline approaches to argument mining and essay scoring, and we analyze interactions between both tasks, thereby laying the ground for quality-oriented argumentative writing support.
16
+
17
+ # 1 Introduction
18
+
19
+ Writing argumentative texts, in particular argumentative essays, constitutes an essential part of school students' writing education. However, learning to write arguments of high quality can be challenging (Zhu, 2001; Ferretti et al., 2007; Peloghitis, 2017; Alexander et al., 2023). It requires various skills, from writing fundamentals, such as syntax and grammar, to argumentation-specific skills, such as meaningfully organizing and structuring arguments and counter-considerations (Rezat, 2011). This takes time and effort to master (Ka-kan-dee and Kaur, 2014; Dang et al., 2020). Given teachers' limited time to give students feedback on their writing, automatic argumentative writing support could benefit students as it offers guidance at their own pace and convenience (Wambsganss et al., 2022a).
20
+
21
+ ![](images/e3a5db9bb6469fbbb0a1b29c4febaca96c341f9b6c938e09c55a0a56e5bfd8ff.jpg)
22
+ Figure 1: Exemplary annotated school student essay on the use of school funding, taken from our corpus. The text is from the FD-LEX corpus (Becker-Mrotzek and Grabowski, 2018), translated from German for display.
25
+
26
+ Argumentative writing support systems employ argument mining to analyze input texts (Stab, 2017; Wambsganss and Niklaus, 2022; Weber et al., 2023), that is, computational methods that identify argumentative components and their relations. Common components are major claim (main standpoint of the text, also known as thesis), claim (controversial statement), and premise (reason for justifying or refuting the claim) along with their argumentative relations support and attack. This knowledge enables the systems to give feedback on the structure of a text, e.g., by highlighting unwarranted claims (Stab and Gurevych, 2017a), or by analyzing the number of argumentative components quantitatively (Stab and Gurevych, 2017a; Wambsganss and Niklaus, 2022; Weber et al., 2023).
27
+
28
+ Unlike argument mining, automated essay scoring explicitly evaluates essay quality, either holistically (Uto et al., 2020; Yang et al., 2020; Wang et al., 2023) or in terms of specific linguistic aspects, such as coherence (Li et al., 2018; Farag et al., 2018), grammar (Ajit Tambe and Kulkarni, 2022), and organization (Persing et al., 2010; Rahimi et al., 2015). Combining argument mining with essay scoring may enable support systems to give students comprehensive feedback on their writing. In addition, it helps identify how different argumentative structures influence the overall essay quality and which structures are common for different levels of quality (Wachsmuth et al., 2016). However, student essay corpora for argument mining are scarce and do not include ground-truth essay quality annotations (Stab and Gurevych, 2017a). Moreover, no corpus with structure annotations for essays written by school students has been published yet.
31
+
32
+ To fill this research gap, we present a German corpus of 1,320 school student essays with manual annotations for argumentative structure and essay quality. The essays have been systematically selected from an existing corpus (Becker-Mrotzek and Grabowski, 2018), equally distributed over two age groups (fifth-graders and ninth-graders) and binary genders, three per student. We present an extensive annotation scheme focused on school student essays that covers argumentative structure on four levels of granularity as well as five essay quality aspects, as shown in Figure 1. To achieve consistent annotations, we developed annotation guidelines in close dialogue with our expert annotators from the field of language education. This led to high agreement between the annotations.
33
+
34
+ Our analyses of the corpus provide various insights into the correlation between the different levels of argumentative structure and essay quality, as well as the interaction between these two types of annotation. We experiment with fine-tuned transformers and adapters as baseline approaches to mining argumentative structure and scoring essay quality. Moreover, we demonstrate that the information on argumentative structure helps predict the essay quality, which is in line with what previous studies showed on other corpora (Wachsmuth et al., 2016; Beigman Klebanov et al., 2016; Nguyen and Litman, 2018). This result underlines the usefulness of our corpus annotations for quality-oriented argumentative writing support.
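Token-level argument mining of this kind is commonly cast as sequence tagging. The sketch below (our own illustration, not the authors' implementation) shows how character-level span annotations such as those in Figure 1 can be projected onto BIO token labels; later spans overwrite earlier ones:

```python
# Illustrative sketch (not the authors' code): project character-level span
# annotations onto BIO labels for token-level argument mining.

def spans_to_bio(tokens, spans):
    """tokens: list of (start, end) character offsets;
    spans: list of (start, end, label) annotations."""
    bio = ["O"] * len(tokens)
    for s_start, s_end, label in spans:
        inside = False
        for i, (t_start, t_end) in enumerate(tokens):
            if t_start >= s_start and t_end <= s_end:
                bio[i] = ("I-" if inside else "B-") + label
                inside = True
            else:
                inside = False  # a gap ends the current span

    return bio

tokens = [(0, 4), (5, 7), (8, 14), (15, 21)]  # four toy token offsets
spans = [(0, 14, "Claim")]                    # a claim covering the first three tokens
print(spans_to_bio(tokens, spans))            # ['B-Claim', 'I-Claim', 'I-Claim', 'O']
```

The resulting label sequence can then be fed to any standard token classifier.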
35
+
36
+ More explicitly, this work aims to answer (i) how the argumentative structure and essay quality of school student essays can be modeled, (ii) how different levels of argumentative structure and essay quality correlate for school student essays, and (iii) how this correlation can be exploited to automatically score the essay quality.
39
+
40
+ Altogether, this paper's main contributions are:
41
+
42
+ - A corpus for studying argumentative structure and essay quality on school student essays
43
+ - Empirical insights into the interactions of argumentative structure and essay quality
44
+ - Baseline approaches to argument mining and essay scoring<sup>1</sup>
45
+
46
+ # 2 Related Work
47
+
48
+ Argumentative writing is a key capability that is taught in school across age groups and disciplines (Becker-Mrotzek et al., 2010; Rezat, 2011). A common educational form of argumentative text is the essay, where school students should introduce a thesis, to which they provide pro and con arguments, and finally conclude (Townsend et al., 1993; Schröter, 2021). The components of an argumentative text take on different roles (e.g., claim or premises), and they may operationalize different actions (e.g., conceding or reasoning) (Feilke, 2017). Learning to write argumentative text is complex and requires continuous and detailed feedback (Kellogg et al., 2010; Wambsganss et al., 2022b).
49
+
50
+ Analyzing the argumentative structure of texts computationally, also known as argument(ation) mining, is a crucial and widely-studied step in providing automatic support for argumentative writing (Stede and Schneider, 2019). Student essays are a prominent domain for argument mining. A respective annotated corpus of 402 English student essays is available (Stab and Gurevych, 2017a), for which also quality issues such as insufficient claim support have been modeled (Stab and Gurevych, 2017b; Gurcke et al., 2021). Additionally, student corpora are available for more specific domains, such as argumentative legal texts (Weber et al., 2023), persuasive peer reviews on business models (Wambsganss et al., 2020), and business model pitches (Wambsganss and Niklaus, 2022).
51
+
52
+ Some research has used argumentative structure to assess essay quality. In consecutive works, Persing et al. (2010) and Persing and Ng (2013, 2014, 2015) graded different argumentation-related quality aspects for the well-known essay corpus ICLE (Granger et al., 2009), namely organization, thesis clarity, prompt adherence, and argument strength. In contrast, Horbach et al. (2017) targeted different quality aspects of argumentative writing at once. Some works further investigated the interaction between argumentative structure and essay quality. Wachsmuth et al. (2016) found that multiple argumentation-related essay scoring tasks benefit from argument mining, underlining the impact of argumentative structure on essay quality. The analyses by Beigman Klebanov et al. (2016) and Nguyen and Litman (2018) suggest that this finding also holds for predicting holistic essay scores, while Persing et al. (2010) observed similar effects for the quality aspect organization. However, these studies relied on automatically assigned quality scores only, due to the lack of ground-truth annotations.
55
+
56
+ In a related line of research, approaches have been proposed to suggest revisions for argumentative essays (Afrin and Litman, 2018), to assess the need for and the quality of revisions (Skitalinskaya and Wachsmuth, 2023; Liu et al., 2023), as well as to perform argument revisions computationally (Skitalinskaya et al., 2023). Other works towards writing support for argumentative essays presented a prototypical system that gives simple feedback in terms of missed criteria (Stab, 2017), design principles for an adaptive learning tool (Wambsganss and Rietsche, 2019), visual feedback to the learner to prompt them to repair broken argument structures (Wambsganss et al., 2022a), or point to enthymematic gaps in arguments and make suggestions on how to fill these gaps (Stahl et al., 2023). Most recently, Britner et al. (2023) proposed a tool that not only analyzes issues with argument quality but also generates an explanation for its prediction. Our corpus supports these steps towards support systems for argumentative writing by providing detailed ground-truth annotations for both the argumentative structure and the quality of essays.
57
+
58
+ However, all the works mentioned deal with texts written by university students, while our work targets argumentative essays written by school students, fifth-graders and ninth-graders specifically. To the best of our knowledge, the only other published corpus with school student essays is not openly available (Currenti et al., 2013) and has been analyzed for essay-level quality aspects only, such as the integration of evidence (Rahimi et al., 2014) and the essay's organization (Rahimi et al., 2015). We recently came across another school student essay corpus in English with annotations for argumentative structure and quality, which has yet to be published.<sup>2</sup> We go further in this work by assessing the quality of school student essays in terms of five aspects derived from language education literature while incorporating their interaction with annotated argumentative structures. This may foster the development of effective methods for helping school students improve their argumentative writing skills.
61
+
62
+ # 3 School Student Essay Corpus
63
+
64
+ This section presents the source data and annotation of our corpus for analyzing the argumentative structure and quality of school student essays.
65
+
66
+ # 3.1 Source Data
67
+
68
+ As the basis, we systematically selected 1,320 German school student essays from the FD-LEX corpus (Becker-Mrotzek and Grabowski, 2018). The authors instructed students to each write three argumentative essays on topics pertinent to school students: (a) a letter to a school funding organization on the possible use of funding, (b) a statement on how to deal with the misbehavior of a fellow student, and (c) a statement on who is at fault in a bike accident.
69
+
70
+ We seek to enable analyses of differences across groups of school student essays on the corpus. Therefore, we pseudo-randomly chose 440 school students, equally distributed across genders (only male and female exist in the corpus) and age groups (fifth-graders and ninth-graders). Subsequently, we included all three essays written by each selected school student from the source data.
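The selection can be sketched as stratified sampling over gender × age-group cells (110 students per cell). The student records, cell size, and seed below are invented for illustration; the paper does not specify the exact procedure:

```python
import random

# Minimal sketch of the stratified selection: 440 students, balanced over
# gender x age group, 110 per cell. The pool, cell size, and seed are
# invented for illustration; the paper does not specify them.

def select_students(students, per_cell=110, seed=42):
    rng = random.Random(seed)
    cells = {}
    for s in students:
        cells.setdefault((s["gender"], s["grade"]), []).append(s)
    chosen = []
    for _, members in sorted(cells.items()):
        chosen.extend(rng.sample(members, per_cell))
    return chosen

# toy pool: 200 students per cell
pool = [{"id": f"{g}{a}{i}", "gender": g, "grade": a}
        for g in ("f", "m") for a in (5, 9) for i in range(200)]
selected = select_students(pool)
print(len(selected), len(selected) * 3)  # 440 students -> 1,320 essays (3 each)
```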
71
+
72
+ # 3.2 Annotation Scheme
73
+
74
+ Our annotation scheme goes beyond existing corpora for argument mining, covering the macro and micro structure of argumentative essays on four levels in total. In addition, we evaluate the quality of the essays overall and in terms of four quality aspects. Figure 2 overviews our annotation scheme.
75
+
76
+ Argumentative Structure On the broadest level of granularity for argumentative structure, we annotate discourse functions (Persing et al., 2010):
77
+
78
+ - Introduction. Initiates an essay by presenting the topic and possibly the context of an essay. This section is usually non-argumentative and placed at the beginning of an essay.
79
+ - Body. Core of the essay, containing the majority of argumentative components.
80
+ - Conclusion. Summary of main points, often with a final evaluation of the topic. This section is typically found at the end of the essay.
81
+
82
+ ![](images/54267ec2022435022e8d4c15d4d89d78eb89c3417e6dfc36b5e66a23256beaf6.jpg)
83
+ Figure 2: Proposed annotation scheme for argumentative school student essays: Four levels of argumentative macro and micro structure (discourse functions, arguments, components, discourse modes) and five essay quality aspects.
84
+
85
+ Next, we annotate arguments that comprise one point in an argumentative text, following Walton et al. (2008). We differentiate them by stance towards the main standpoint (thesis) of an essay:
86
+
87
+ - Argument. Ideally a claim (conclusion) and premises (reasons) supporting the claim.
88
+ - Counter-argument. An argument that attacks the thesis of an essay.
89
+
90
+ For analyzing the micro structure, we annotate argumentative and non-argumentative components. Following Stab and Gurevych (2017a), we also mark support and attack relations between them (see Figure 2):
91
+
92
+ - Topic. Non-argumentative component that describes the subject or purpose of the essay.
93
+ - Thesis. Main standpoint of the whole argumentative text towards the topic. Repetitions of the thesis are also annotated as such.
94
+ - Antithesis. Thesis contrary to the actual thesis.
95
+ - Modified Thesis. Modified version of the actual thesis (e.g., more detailed or restricted).
96
+ - Claim. Statement that conveys a stance towards the topic.
97
+ - Premise. Reason that is given to support or attack a claim or another premise.<sup>4</sup>
98
+
99
+ On the finest level of granularity, we annotate discourse modes (Smith, 2003) specific to school student essays. They are derived from language education literature, where they are used for developing and analyzing argumentative writing skills (Gattje et al., 2012; Rezat, 2018; Feilke and Rezat, 2021):
100
+
101
+ - Comparing. Contrasting supporting and attacking points to a statement.
102
+ - Conceding. Addressing a counter-consideration and refuting it to support the own stance.
103
+ - Concluding. Drawing logical inferences using consecutive or final clauses (so that, if... then).
104
+ - Describing. Providing additional information, such as facts, statistics, and background data.
105
+ - Exemplifying. Providing examples or reporting on experiences.
106
+ - Instructing. Providing explicit instructions that recommend a specific course of action.
107
+ - Positioning. Expressing the own standpoint.
108
+ - Reasoning. Providing causal links to support a claim/thesis using markers (because, then).
109
+ - Referencing. Mentioning statements made by others, for example, by authorities.
110
+ - Qualifying. Presenting a variation of the all-or-nothing standpoints.
111
+
112
+ Essay Quality Following Persing et al. (2010), we score essay quality on a 7-point scale from 1 (unsuccessful), 2 (rather unsuccessful), and 3 (rather successful) to 4 (completely successful), with half points in between. We adapted the quality aspects of Kruse et al. (2012) for assessing school student essays in general to argumentative essays as follows:
113
+
114
+ - Relevance. The essay fits the prompt.
115
+ - Content. The selection of content helps to reach the essay's goal.
116
+
117
+ - Structure. The selected points are coherent and well-connected.
118
+ - Style. The use of language is adequate.
119
+ - Overall. The overall impression of the rater.
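For illustration, the full annotation scheme can be captured in a small data model; all class and field names below are our own and not part of the corpus release:

```python
from dataclasses import dataclass, field

# Hypothetical data model for the annotation scheme (names are our own,
# not from the corpus release).

LEVELS = {
    "discourse_function": {"Introduction", "Body", "Conclusion"},
    "argument": {"Argument", "Counter-argument"},
    "component": {"Topic", "Thesis", "Antithesis", "Modified Thesis",
                  "Claim", "Premise"},
    "discourse_mode": {"Comparing", "Conceding", "Concluding", "Describing",
                       "Exemplifying", "Instructing", "Positioning",
                       "Reasoning", "Referencing", "Qualifying"},
}
QUALITY_ASPECTS = ("Relevance", "Content", "Structure", "Style", "Overall")
VALID_SCORES = {1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0}  # 7-point scale

@dataclass
class Span:
    level: str
    label: str
    start: int  # token offsets
    end: int

    def __post_init__(self):
        assert self.label in LEVELS[self.level], f"unknown label {self.label}"

@dataclass
class Essay:
    text: str
    spans: list = field(default_factory=list)
    quality: dict = field(default_factory=dict)  # aspect -> score

    def add_score(self, aspect, score):
        assert aspect in QUALITY_ASPECTS and score in VALID_SCORES
        self.quality[aspect] = score

essay = Essay("...")
essay.spans.append(Span("component", "Claim", 0, 7))
essay.add_score("Overall", 2.5)
```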
120
+
121
+ # 3.3 Annotation Process
122
+
123
+ We followed the process below for both the argumentative structure and essay quality annotations:
124
+
125
+ To test and refine our annotation guidelines, we conducted pilot studies in which all annotators worked on the same 30 texts. We then discussed their understanding of the guidelines and annotation differences. We integrated their feedback into the guidelines to then test the reliability of the annotations in an inter-annotator agreement (IAA) study, where all annotators independently worked on the same 120 texts. Finally, each annotator annotated a set of 1,200 essays in the main annotation study.
126
+
127
+ As annotators, we employed experts in German language education from our lab and started the pilot and IAA studies on argumentative structure with three annotators. For the main part, only the two annotators with the most reliable annotations proceeded. The same annotators then annotated the essay quality, since they had already been trained in argumentative texts and our general procedure. However, we acknowledge that the annotators may have been predisposed to view essays in a certain way after the first annotation.
128
+
129
+ To assemble the final corpus, we combined the 1,200 essays from the main study with the 120 essays from the IAA study after solving annotation conflicts. For conflicts between the three structure annotations per IAA essay, we kept the annotations that had the highest agreement across all levels with the other two. For conflicts between essay quality scores, we used their mean as the final score.<sup>5</sup>
130
+
131
+ # 3.4 Inter-Annotator Agreement
132
+
133
+ For the components, we follow Stab and Gurevych (2017a) in that we evaluate the agreement per essay at the token level, so the token labels are the unit of analysis. Thereby, overlaps of annotations are taken into account. For relations, we determined the component-level spans that at least two annotators agreed on with a relative overlap $\geq 75\%$. For all pairs of these, we then compared the relation labels (no relation, support, or attack) between the annotators. The mean Krippendorff's $\alpha$ scores over the 120 IAA essays are reported in Table 1. For essay quality, we computed the $\alpha$-value per quality aspect with essays as the unit of analysis.
134
+
135
+ <table><tr><td>Argumentative Structure</td><td>α</td><td>Essay Quality</td><td>α</td></tr><tr><td>Discourse Functions</td><td>0.89</td><td>Relevance</td><td>0.77</td></tr><tr><td>Arguments</td><td>0.86</td><td>Content</td><td>0.95</td></tr><tr><td>Components</td><td>0.81</td><td>Structure</td><td>0.84</td></tr><tr><td>Discourse Modes</td><td>0.74</td><td>Style</td><td>0.92</td></tr><tr><td>Relations</td><td>0.58</td><td>Overall</td><td>0.95</td></tr></table>
136
+
137
+ Table 1: Krippendorff's $\alpha$ agreement in the IAA study between three annotators for argumentative structure and two annotators for essay quality. The high values stress the high reliability of our annotations.
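The relative-overlap criterion for matching component spans can be computed as follows; measuring overlap relative to the longer span is our own reading of the $\geq 75\%$ threshold, not a detail confirmed by the paper:

```python
def relative_overlap(a, b):
    """Overlap of token spans a=(start, end) and b, relative to the
    longer span (one plausible reading of the paper's criterion)."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    return inter / max(a[1] - a[0], b[1] - b[0])

def matched(a, b, threshold=0.75):
    return relative_overlap(a, b) >= threshold

print(relative_overlap((0, 10), (2, 10)))  # 0.8 -> counts as the same component
print(matched((0, 10), (8, 20)))           # False: only 2 of 12 tokens overlap
```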
140
+
141
+ The agreement is high for argumentative structure spans with values between 0.74 and 0.89. The agreement for relations is lower but reasonable, given that disagreement from the component annotations is propagated to the relations. The agreement for essay quality is high, too, ranging from 0.77 to 0.95. Overall, we conclude that the annotations can mostly be seen as very reliable. Content and style quality annotations are very consistent between annotators, while assessing the relevance and structure seems slightly more subjective.
142
+
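Nominal Krippendorff's $\alpha$, as reported in Table 1, can be computed from the coincidence matrix over units (e.g., tokens). This is a generic sketch, not the authors' evaluation script:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: list of label lists, one per unit (e.g., per token),
    each containing the labels assigned by the coders."""
    coincidence = Counter()
    for labels in units:
        m = len(labels)
        if m < 2:
            continue  # units coded by fewer than two coders carry no information
        for c, k in permutations(labels, 2):
            coincidence[(c, k)] += 1 / (m - 1)
    totals = Counter()
    for (c, _), v in coincidence.items():
        totals[c] += v
    n = sum(totals.values())
    # observed and expected disagreement over the coincidence matrix
    d_o = sum(v for (c, k), v in coincidence.items() if c != k)
    d_e = sum(totals[c] * totals[k] for c in totals for k in totals if c != k) / (n - 1)
    return 1.0 if d_e == 0 else 1 - d_o / d_e

print(krippendorff_alpha_nominal([["A", "A"], ["B", "B"], ["A", "A"]]))  # 1.0
```

Perfect agreement yields α = 1; two coders who systematically disagree on four binary-labeled units yield α = −0.75, matching the standard formulation.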
143
+ # 3.5 Corpus Statistics
144
+
145
+ Table 2 gives insights into the label distribution for argumentative structure. Body occurs most frequently among the discourse functions (1,335 times). With 56.75 tokens on average, bodies are also notably longer than introductions and conclusions. On the argument level, we note that counter-arguments are rather sparse in student essays. Among the components, claims are most frequent in total and per essay, followed by theses. Furthermore, we notice that modified theses are usually longer than theses, which matches our expectation that students add more details or restrictions to the thesis there. The most used discourse modes are positioning, describing, and reasoning, while referencing, comparing, and exemplifying occur rarely. Notable are also the differences in span length: e.g., positioning spans have on average only about half as many tokens as comparing spans.
146
+
147
+ Table 3 gives the frequency of annotated relations in our corpus. Most relations outgoing from claims are directed towards theses (92.6%), while most relations outgoing from premises are directed towards claims (85.8%). Overall, 96.2% of the relations were labeled as support and 3.8% as attack.
+
+ <table><tr><td>Label</td><td># Spans</td><td># Tokens</td><td>Tokens/Span</td><td>Spans/Essay</td></tr><tr><td>Introduction</td><td>114</td><td>2329</td><td>20.43</td><td>0.09</td></tr><tr><td>Body</td><td>1335</td><td>75766</td><td>56.75</td><td>1.01</td></tr><tr><td>Conclusion</td><td>191</td><td>2938</td><td>15.38</td><td>0.14</td></tr><tr><td>Argument</td><td>2692</td><td>51560</td><td>19.15</td><td>2.04</td></tr><tr><td>Counter-arg.</td><td>34</td><td>514</td><td>15.12</td><td>0.03</td></tr><tr><td>Topic</td><td>101</td><td>1656</td><td>16.40</td><td>0.08</td></tr><tr><td>Thesis</td><td>1687</td><td>19581</td><td>11.61</td><td>1.28</td></tr><tr><td>Modified T.</td><td>267</td><td>4490</td><td>16.82</td><td>0.20</td></tr><tr><td>Antithesis</td><td>14</td><td>174</td><td>12.43</td><td>0.01</td></tr><tr><td>Claim</td><td>3137</td><td>39096</td><td>12.46</td><td>2.38</td></tr><tr><td>Premise</td><td>1020</td><td>12533</td><td>12.29</td><td>0.77</td></tr><tr><td>Comparing</td><td>20</td><td>431</td><td>21.55</td><td>0.02</td></tr><tr><td>Conceding</td><td>142</td><td>2874</td><td>20.24</td><td>0.11</td></tr><tr><td>Concluding</td><td>868</td><td>11654</td><td>13.43</td><td>0.66</td></tr><tr><td>Describing</td><td>1692</td><td>22258</td><td>13.15</td><td>1.28</td></tr><tr><td>Exemplifying</td><td>63</td><td>926</td><td>14.70</td><td>0.05</td></tr><tr><td>Instructing</td><td>176</td><td>2174</td><td>12.35</td><td>0.13</td></tr><tr><td>Positioning</td><td>1758</td><td>19178</td><td>10.91</td><td>1.33</td></tr><tr><td>Reasoning</td><td>1553</td><td>17204</td><td>11.08</td><td>1.18</td></tr><tr><td>Referencing</td><td>16</td><td>197</td><td>12.31</td><td>0.01</td></tr><tr><td>Qualifying</td><td>147</td><td>2344</td><td>15.95</td><td>0.11</td></tr></table>
+
+ Table 2: Argumentative structure annotations in the corpus: Total number of spans and tokens per label, average span length in number of tokens (Tokens/Span) and average number of spans per essay (Spans/Essay). The highest value per column and level is marked bold.
+
+ <table><tr><td>From Claim to</td><td>#</td><td>%</td></tr><tr><td>Thesis</td><td>2844</td><td>92.6</td></tr><tr><td>Modified Thesis</td><td>218</td><td>7.1</td></tr><tr><td>Antithesis</td><td>9</td><td>0.3</td></tr></table>
+
+ <table><tr><td>From Premise to</td><td>#</td><td>%</td></tr><tr><td>Thesis</td><td>6</td><td>0.6</td></tr><tr><td>Claim</td><td>872</td><td>85.8</td></tr><tr><td>Premise</td><td>138</td><td>13.6</td></tr></table>
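+ The derived columns of Table 2 follow directly from the raw counts and the corpus size of 1,320 essays; a quick sanity check in Python:

```python
# Recompute Tokens/Span and Spans/Essay for a few rows of Table 2.
N_ESSAYS = 1320  # corpus size stated in the paper

rows = [  # (label, number of spans, number of tokens)
    ("Introduction", 114, 2329),
    ("Body", 1335, 75766),
    ("Claim", 3137, 39096),
    ("Premise", 1020, 12533),
]

for label, spans, tokens in rows:
    print(f"{label}: {tokens / spans:.2f} tokens/span, "
          f"{spans / N_ESSAYS:.2f} spans/essay")
```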
+
+ The distribution of quality scores is shown in Table 4. We can see that relevance, structure, and style have a similar score distribution, while the distribution of content scores is shifted towards the higher scores, with the highest mean (2.80). Overall quality has the lowest mean score (2.20) and was most often given the lowest score of 1.0. These results suggest that overall quality is not simply the average of the other annotated quality aspects but that it emerges from the annotators' holistic perception and possibly further aspects.
+
+ # 4 Analysis
+
+ This section reports on our corpus analysis of the interactions between argumentative structure (on the macro vs. micro level, and on the component vs. discourse mode level) and the different essay quality aspects.
+
+ Table 3: Absolute and relative frequency of annotated relations outgoing from claim (left) or premise (right).
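+ The percentages in Table 3 are each count normalized by the total number of relations outgoing from that component type:

```python
# Reproduce the relative frequencies in Table 3 from the absolute counts.
from_claim = {"Thesis": 2844, "Modified Thesis": 218, "Antithesis": 9}
from_premise = {"Thesis": 6, "Claim": 872, "Premise": 138}

for source, counts in [("Claim", from_claim), ("Premise", from_premise)]:
    total = sum(counts.values())
    for target, n in counts.items():
        print(f"From {source} to {target}: {100 * n / total:.1f}%")
```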
+
+ <table><tr><td>Quality Aspect</td><td>1.0</td><td>1.5</td><td>2.0</td><td>2.5</td><td>3.0</td><td>3.5</td><td>4.0</td><td>Mean</td></tr><tr><td>Relevance</td><td>64</td><td>131</td><td>548</td><td>358</td><td>190</td><td>23</td><td>6</td><td>2.22</td></tr><tr><td>Content</td><td>62</td><td>33</td><td>123</td><td>312</td><td>510</td><td>173</td><td>107</td><td>2.80</td></tr><tr><td>Structure</td><td>59</td><td>91</td><td>423</td><td>414</td><td>292</td><td>24</td><td>17</td><td>2.35</td></tr><tr><td>Style</td><td>75</td><td>159</td><td>396</td><td>436</td><td>233</td><td>12</td><td>9</td><td>2.25</td></tr><tr><td>Overall</td><td>90</td><td>142</td><td>478</td><td>390</td><td>195</td><td>23</td><td>2</td><td>2.20</td></tr></table>
+
+ Table 4: Distribution and mean of scores per quality aspect. The highest value per column is marked bold.
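+ The mean scores in Table 4 can be reproduced from the score distributions (each row sums to the 1,320 essays):

```python
# Recompute the mean quality scores from the distributions in Table 4.
SCORES = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
DIST = {
    "Relevance": [64, 131, 548, 358, 190, 23, 6],
    "Content":   [62, 33, 123, 312, 510, 173, 107],
    "Overall":   [90, 142, 478, 390, 195, 23, 2],
}

for aspect, counts in DIST.items():
    mean = sum(s * n for s, n in zip(SCORES, counts)) / sum(counts)
    print(f"{aspect}: n={sum(counts)}, mean={mean:.2f}")
```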
+
+ ![](images/605f0e0adab7c8cbd6f8460294660b87a0282019a2ffc0171ad99814e4c060bf.jpg)
+ Figure 3: Cooccurrence matrices: Relative token-level overlap of (a) macro and micro structure and (b) component and discourse mode labels in percent. For example, $68\%$ of all tokens labeled as Introduction on the macro level are also labeled as Topic on the micro level.
+
+ # 4.1 Macro vs. Micro structure
+
+ Figure 3(a) shows the overlap between macro structure (discourse functions and arguments) and micro structure (components and discourse mode) labels. The introduction mainly includes the topic $(68\%)$ , while every second token in the body is a claim on average. The thesis cooccurs with all three discourse functions. We see that the proportion of claims and premises differs for arguments and counter-arguments. Counter-arguments contain more claim tokens than arguments, while fewer counter-argument tokens are part of a premise.
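+ Such a cooccurrence matrix can be derived from two aligned token-label sequences; a minimal sketch with invented toy labels (not corpus data):

```python
from collections import Counter, defaultdict

def cooccurrence(macro_labels, micro_labels):
    """For each macro label, the share of its tokens (in percent)
    that carries each micro label; each row sums to 100."""
    pair_counts = Counter(zip(macro_labels, micro_labels))
    macro_totals = Counter(macro_labels)
    matrix = defaultdict(dict)
    for (mac, mic), n in pair_counts.items():
        matrix[mac][mic] = 100 * n / macro_totals[mac]
    return matrix

# Toy example: three introduction tokens, two body tokens
macro = ["Introduction", "Introduction", "Introduction", "Body", "Body"]
micro = ["Topic", "Topic", "Thesis", "Claim", "Claim"]
print(dict(cooccurrence(macro, micro)["Introduction"]))
```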
+
+ The usage of discourse modes differs between the macro-structure levels, too. In the introduction, students mostly describe (68%) and position (24%). The discourse modes are more diverse for the body, also including a notable portion of reasoning. As expected, in the conclusion, students focus more on concluding. While describing and reasoning are prevalent in arguments and counter-arguments, a notable portion of argument tokens is used for concluding. At the same time, more conceding and qualifying tokens occur in counter-arguments. This is expected, since especially in counter-arguments other points of view should be varied or refuted.
+
+ <table><tr><td></td><td>Relevance</td><td>Content</td><td>Structure</td><td>Style</td><td>Overall</td></tr><tr><td>Relevance</td><td></td><td>.53</td><td>.61</td><td>.47</td><td>.75</td></tr><tr><td>Content</td><td>.53</td><td></td><td>.48</td><td>.41</td><td>.60</td></tr><tr><td>Structure</td><td>.61</td><td>.48</td><td></td><td>.51</td><td>.71</td></tr><tr><td>Style</td><td>.47</td><td>.41</td><td>.51</td><td></td><td>.61</td></tr><tr><td>Overall</td><td>.75</td><td>.60</td><td>.71</td><td>.61</td><td></td></tr></table>
+
+ # 4.2 Components vs. Discourse modes
+
+ The cooccurrences between components and discourse modes can be seen in Figure 3(b). While the topic is mostly described (90%) and the thesis consists primarily of positioning (85%), the remaining components include more diverse discourse modes. In contrast to theses, modified theses also feature describing and qualifying tokens, while antitheses additionally cover conceding and concluding. Claims and premises mainly cooccur with describing, reasoning, and concluding. However, the proportions differ slightly. The cooccurrence matrix between all structure labels can be found in Appendix A.
+
+ # 4.3 Essay Quality
+
+ To further assess the interaction between the quality aspects, Table 5 shows all pairwise Kendall's $\tau$ correlations. All aspects correlate most with overall quality, most strongly relevance (.75). The correlation between content and style is lowest (.41), which underlines their distinctive nature.
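+ Kendall's $\tau$ with tie correction (the $\tau$-b variant, appropriate for the discrete 1.0 to 4.0 scores) can be computed directly from rating pairs; a self-contained sketch:

```python
from itertools import combinations
from math import sqrt

def kendall_tau_b(x, y):
    """Kendall's tau-b: concordant minus discordant pairs,
    normalized with a correction for tied ratings."""
    concordant = discordant = ties_x = ties_y = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        dx, dy = xi - xj, yi - yj
        if dx == 0:
            ties_x += 1
        if dy == 0:
            ties_y += 1
        if dx * dy > 0:
            concordant += 1
        elif dx * dy < 0:
            discordant += 1
    n0 = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / sqrt((n0 - ties_x) * (n0 - ties_y))

print(kendall_tau_b([1.0, 2.0, 2.0, 3.0], [1.0, 2.0, 3.0, 3.0]))  # 0.8
```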
+
+ # 5 Experiments
+
+ This section presents baseline approaches to the two main tasks our corpus enables: Predicting the argumentative structure (argument mining) and the essay quality (essay scoring). Additionally, we investigate whether information about the argumentative structure helps to predict the essay quality.
+
+ # 5.1 Argument Mining
+
+ We treat argument mining as a token classification task: Given a school student essay and a structure level, predict the label of each token on that structure level. The IOB2 format is used for the labels to separate adjacent spans of the same type. We performed 5-fold cross-validation for each structure level. For each folding, we used four folds (80%) for training and divided the fifth fold in half: one half (10%) for selecting the best-performing checkpoint in terms of macro-averaged $\mathrm{F}_1$-score, and the remaining half (10%) for testing.
+
+ Table 5: Kendall's $\tau$ correlation between the quality aspects. The highest value per column is marked bold.
+
+ <table><tr><td rowspan="2">Approach</td><td colspan="2">D. Func.</td><td colspan="2">Argum.</td><td colspan="2">Compon.</td><td colspan="2">D. Mode</td></tr><tr><td>Acc.</td><td>F1</td><td>Acc.</td><td>F1</td><td>Acc.</td><td>F1</td><td>Acc.</td><td>F1</td></tr><tr><td>Random</td><td>.14</td><td>.00</td><td>.20</td><td>.00</td><td>.08</td><td>.00</td><td>.05</td><td>.00</td></tr><tr><td>Majority</td><td>.86</td><td>.52</td><td>.56</td><td>.00</td><td>.41</td><td>.00</td><td>.24</td><td>.00</td></tr><tr><td>mDeBERTaV3</td><td>.92</td><td>.46</td><td>.86</td><td>.29</td><td>.66</td><td>.21</td><td>.63</td><td>.21</td></tr><tr><td>-adapter</td><td>.95</td><td>.68</td><td>.92</td><td>.52</td><td>.76</td><td>.49</td><td>.73</td><td>.46</td></tr><tr><td>Human</td><td>.98</td><td>.94</td><td>.96</td><td>.85</td><td>.93</td><td>.89</td><td>.89</td><td>.84</td></tr></table>
+
+ Table 6: Argument mining results: Macro $\mathrm{F}_1$-score and accuracy of each approach in 5-fold cross-validation on all four argumentative structure dimensions. The best value per column is marked bold.
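+ In IOB2, the first token of every span receives a B- tag, so adjacent spans of the same type stay separable; a minimal conversion sketch:

```python
def spans_to_iob2(num_tokens, spans):
    """Convert (start, end, label) spans (end exclusive) into IOB2 tags."""
    tags = ["O"] * num_tokens
    for start, end, label in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

# Two adjacent Claim spans remain distinguishable:
print(spans_to_iob2(6, [(0, 2, "Claim"), (2, 5, "Claim")]))
# ['B-Claim', 'I-Claim', 'B-Claim', 'I-Claim', 'I-Claim', 'O']
```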
+
+ Models We used the multilingual model mDeBERTaV3 (He et al., 2023) (microsoft/mdeberta-v3-base) from Huggingface (Wolf et al., 2020). In addition, we tested the effect of training adapters (Houlsby et al., 2019): a set of task-specific parameters that are added to every transformer layer of mDeBERTaV3 and fine-tuned on the task while the pretrained model weights stay fixed. To quantify the impact of learning, we compare against a random baseline that chooses a token label pseudo-randomly and a majority baseline that always predicts the majority token label from the training set. As an upper bound, we report human performance as the average performance of each annotator in isolation on the 120 IAA texts annotated by all annotators.
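+ Reporting both metrics matters: on imbalanced token labels, a majority baseline can reach high accuracy while macro-F1 exposes that minority labels are never predicted (cf. the Majority row in Table 6). A toy illustration with invented labels, not corpus data:

```python
from collections import Counter

def macro_f1(gold, pred):
    """Unweighted mean of per-label F1 scores."""
    f1s = []
    for lab in sorted(set(gold) | set(pred)):
        tp = sum(g == p == lab for g, p in zip(gold, pred))
        fp = sum(p == lab and g != lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)

gold = ["Body"] * 8 + ["Introduction", "Conclusion"]
majority = Counter(gold).most_common(1)[0][0]
pred = [majority] * len(gold)
accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
print(accuracy, macro_f1(gold, pred))  # high accuracy, low macro-F1
```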
+
+ Experimental Setup We train mDeBERTaV3 for 30 epochs (1,980 steps) using the suggested hyperparameter values: a learning rate of 3e-5, batch size 16, and 500 warmup steps. For mDeBERTaV3-adapter, we follow Pfeiffer et al. (2020), who recommend using a higher learning rate of 1e-4 and training longer, here 50 epochs (3,300 steps).
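+ The paper states only the hyperparameter values; assuming the standard Huggingface linear schedule (linear warmup followed by linear decay, which is an assumption here, not the authors' verified code), the learning rate over the 1,980 steps would behave as follows:

```python
def linear_schedule_lr(step, base_lr=3e-5, warmup=500, total=1980):
    """Assumed schedule: linear warmup to base_lr, then linear decay to 0."""
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * max(0.0, (total - step) / (total - warmup))

print(linear_schedule_lr(250))   # halfway through warmup
print(linear_schedule_lr(500))   # peak learning rate
print(linear_schedule_lr(1980))  # end of training
```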
+
+ Results Table 6 presents the token classification results for all levels of argumentative structure, averaged over all folds. Notably, mDeBERTaV3-adapter outperforms fine-tuning the whole model (mDeBERTaV3) in all cases. Given that the $\mathrm{F_1}$-scores improve more than the accuracy, the adapters seem less prone to overfitting to the majority label. This learning success suggests that all argumentative structure levels can be predicted on our corpus. However, we expect further improvements from more advanced approaches.
+
+ <table><tr><td>Approach</td><td>Relevance</td><td>Content</td><td>Structure</td><td>Style</td><td>Overall</td></tr><tr><td>Random</td><td>-0.013 ±0.084</td><td>-0.011 ±0.071</td><td>-0.014 ±0.073</td><td>0.017 ±0.084</td><td>-0.004 ±0.083</td></tr><tr><td>Majority</td><td>0.000 ±0.000</td><td>0.000 ±0.000</td><td>0.000 ±0.000</td><td>0.000 ±0.000</td><td>0.000 ±0.000</td></tr><tr><td>mDeBERTaV3</td><td>0.530 ±0.069</td><td>0.295 ±0.109</td><td>0.513 ±0.044</td><td>0.492 ±0.059</td><td>0.616 ±0.040</td></tr><tr><td>-adapter</td><td>0.564 ±0.018</td><td>0.431 ±0.098</td><td>0.575 ±0.038</td><td>0.579 ±0.077</td><td>0.648 ±0.054</td></tr><tr><td>-fusion-w/-discourse-functions</td><td>0.599 ±0.043</td><td>0.381 ±0.134</td><td>0.559 ±0.036</td><td>0.569 ±0.069</td><td>0.668 ±0.049</td></tr><tr><td>-fusion-w/-arguments</td><td>0.593 ±0.030</td><td>0.448 ±0.105</td><td>0.575 ±0.019</td><td>0.581 ±0.054</td><td>0.668 ±0.036</td></tr><tr><td>-fusion-w/-components</td><td>†0.600 ±0.025</td><td>0.437 ±0.137</td><td>0.543 ±0.044</td><td>0.585 ±0.053</td><td>0.663 ±0.046</td></tr><tr><td>-fusion-w/-discourse-modes</td><td>0.544 ±0.028</td><td>0.420 ±0.118</td><td>0.535 ±0.041</td><td>0.583 ±0.064</td><td>0.645 ±0.023</td></tr><tr><td>-fusion-w/-all</td><td>0.574 ±0.039</td><td>0.454 ±0.142</td><td>0.546 ±0.013</td><td>0.617 ±0.057</td><td>†0.686 ±0.031</td></tr><tr><td>Human</td><td>0.636 ±0.055</td><td>0.632 ±0.003</td><td>0.734 ±0.007</td><td>0.766 ±0.005</td><td>0.746 ±0.003</td></tr></table>
+
+ Table 7: Essay scoring results: QWK of each approach in 5-fold cross-validation on all five quality dimensions. The best value per column is marked bold. We mark significant gains over mDeBERTaV3-adapter at $p < .05$ with †.
+
+ # 5.2 Essay Scoring
+
+ We treat predicting the essay quality as a text classification task: Given a school student essay and a quality aspect, predict the corresponding quality score. As before, we performed 5-fold cross-validation for each quality aspect using the same folds. We selected the best-performing checkpoint on the validation set using quadratic weighted kappa (QWK), the most widely adopted metric for automatic essay scoring (Ke and Ng, 2019).
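+ QWK penalizes disagreements by the squared distance between ordinal categories; a compact reference implementation (intended to match scikit-learn's cohen_kappa_score with weights='quadratic'):

```python
def quadratic_weighted_kappa(gold, pred, categories):
    """1 minus the ratio of observed to chance-expected quadratic disagreement."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(gold)
    obs = [[0.0] * k for _ in range(k)]
    for g, p in zip(gold, pred):
        obs[idx[g]][idx[p]] += 1
    hist_gold = [sum(row) for row in obs]
    hist_pred = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 / (k - 1) ** 2
            num += w * obs[i][j]
            den += w * hist_gold[i] * hist_pred[j] / n
    return 1.0 - num / den

scores = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
print(quadratic_weighted_kappa([2.0, 2.5, 3.0], [2.0, 2.5, 3.0], scores))  # 1.0
```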
+
+ Models We adopted the previous approaches by changing the head to a text classification head. To analyze the interaction between argumentative structure and essay quality, we employed AdapterFusion (Pfeiffer et al., 2021), a multi-task learning framework that learns how to combine the representations of one or more task adapters, which allows investigating relations between tasks. We used the mDeBERTaV3-adapters trained on argumentative structure in the previous experiment. For each structure level, we chose the adapter from the folding whose performance was most representative of all folds ($F_1$-score closest to the average $F_1$-score across folds). To measure the impact of each level of argumentative structure on the scoring performance, we used each adapter individually as well as a combination of all of them.
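+ Conceptually, AdapterFusion computes an attention distribution over the task adapters at each layer and mixes their outputs accordingly. A scalar toy version (the real method learns query/key/value projections per layer; all names and numbers here are illustrative):

```python
from math import exp

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    exps = [exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(hidden, adapter_keys, adapter_outputs):
    """Weight each adapter's output by its attention score against
    the layer's hidden state, then sum (toy AdapterFusion step)."""
    weights = softmax([dot(hidden, k) for k in adapter_keys])
    fused = [sum(w * out[i] for w, out in zip(weights, adapter_outputs))
             for i in range(len(hidden))]
    return weights, fused

# Four toy adapters (discourse functions, arguments, components, modes):
keys = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0], [0.5, -0.5]]
outputs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
weights, fused = fuse([1.0, 0.0], keys, outputs)
print([round(w, 3) for w in weights])
```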
+
+ Experimental Setup The experimental setup for mDeBERTaV3 and mDeBERTaV3-adapter was adopted from before. For training the AdapterFusion, we followed Pfeiffer et al. (2021) in using a learning rate of 5e-5 and trained for fewer steps than the adapters, in our case 20 epochs (1,320 steps).
+
+ Results Table 7 shows the scoring results. All models outperform the lower-bound baselines (random and majority), suggesting that quality scoring can be learned from our corpus. Furthermore, fusing all adapters trained on argumentative structure (mDeBERTaV3-fusion-w/-all) performs best for three out of five quality aspects, significantly beating mDeBERTaV3-adapter in predicting overall quality. This underlines the benefit of combining all four levels of argumentative structure for scoring overall quality (0.686 vs. 0.648). In addition, using only the adapter trained on the component level (mDeBERTaV3-fusion-w/-components) significantly improves over mDeBERTaV3-adapter in predicting relevance (0.600 vs. 0.564), indicating an interaction between the structure on the component level and this quality aspect. QWK scores greater than or equal to 0.6 suggest substantial agreement between the predicted and ground-truth quality scores.
+
+ AdapterFusion Activations AdapterFusion extracts information from adapters only if they benefit the target task. Similar to Falk and Lapesa (2023), we visualize the average activations of our model mDeBERTaV3-fusion-w/-all over the layers in Figure 4 to investigate the influence of each level of argumentative structure on the quality scoring. All adapters are activated fairly evenly for all quality aspects, with slight deviations. This aligns with our previous results and underlines that all annotated structure levels are helpful for quality scoring. The activations per layer can be found in Appendix B.
+
+ ![](images/17b8a484584cbf846f79aaa9dadf1ff7e0b853ac0f1eb37d917a62ce9c798e4a.jpg)
+ Figure 4: AdapterFusion activation on average over the layers for each mDeBERTaV3-fusion-w/-all model per quality aspect. We average the activation for each fused adapter (discourse functions, arguments, components, discourse modes) over all instances in the most representative test set folding.
+
+ # 6 Conclusion
+
+ Argumentative writing support of school students presupposes that the quality of their arguments can be assessed. Until now, no argument mining corpus with school student essays has been published, let alone any essay corpus with both argument and quality annotations. With this work, we fill both research gaps with a new corpus of 1,320 German school student essays, annotated by experts for argumentative structure and essay quality.
+
+ Our corpus analysis has provided various insights into the correlation between the different levels of argumentative structure and essay quality. In our experiments with fine-tuned transformers and adapters for mining argumentative structure and scoring essay quality, we have demonstrated that combining information on all four argumentative structure levels helps the prediction of essay quality. This shows the usefulness of our corpus for research on quality-oriented argumentative writing support, which we seek to enable with this paper.
+
+ We point out that our corpus contains various information yet to be explored, such as argumentative relations and school student metadata. It thus lays the ground for further analyses, like identifying unwarranted claims and studying differences across age groups and genders.
+
+ # 7 Limitations
+
+ Aside from the still-improvable performance of the presented baseline models for argument mining and essay scoring, we see two notable limitations of our work: the restriction to German texts, and the pending utilization of the corpus for quality-oriented argumentative writing support.
+
+ First, we point out the specific language background of our work. The essays were written by German school students, and the annotations were developed in close communication with German experts from the field of language education, while the discourse modes and essay quality aspects are, to a considerable extent, derived from work on German texts. This means that our findings may not perfectly align with argumentative writing in other countries or languages with different expectations for argumentative essays.
+
+ Second, while our analyses suggest that our corpus helps to enable quality-oriented argumentative writing support, the perceived usefulness of such a tool is still to be evaluated. We expect and encourage future work to utilize our corpus for such writing support tools, for example, by further analyzing which exact argumentative structures influence the essay quality and to what extent. Interpretable essay quality scoring based on the structure might generate helpful insights that can be used as writing feedback by school students.
+
+ # 8 Ethical Considerations
+
+ We see no apparent risk of the corpus or the methods presented in this paper being misused for ethically doubtful purposes. The authors of the FD-LEX corpus (Becker-Mrotzek and Grabowski, 2018) have already pseudonymized the author of each essay. Therefore, it is not possible to identify the individual school student from the provided data. However, we want to point out that one might find differences in the essays across gender or age groups that do not reflect reality but are rather due to an unintentional bias in the data selection.
+
+ # Acknowledgments
+
+ We would like to thank the participants of our study and the anonymous reviewers for the valuable feedback and their time. This work was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the project ArgSchool, project number 453073654.
+
+ # References
+
+ Tazin Afrin and Diane Litman. 2018. Annotation and classification of sentence-level revision improvement. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 240-246, New Orleans, Louisiana. Association for Computational Linguistics.
+ Aniket Ajit Tambe and Manasi Kulkarni. 2022. Automated essay scoring system with grammar score analysis. In 2022 Smart Technologies, Communication and Robotics (STCR), pages 1-7.
+ Patricia A. Alexander, Jannah Fusenig, Eric C. Schoute, Anisha Singh, Yuting Sun, and Julianne E. van Meerten. 2023. Confronting the challenges of undergraduates' argumentation writing in a "learning how to learn" course. Written Communication, 40(2):482-517.
+ Michael Becker-Mrotzek and Joachim Grabowski. 2018. Textkorpus Scriptoria. In Michael Becker-Mrotzek and Joachim Grabowski, editors, FD-LEX (Forschungsdatenbank Lernertexte). Mercator-Institut für Sprachförderung und Deutsch als Zweitsprache, Köln. Available at: https://fd-lex.uni-koeln.de.
+ Michael Becker-Mrotzek, Frank Schneider, and Klaus Tetling. 2010. Argumentierendes Schreiben - lehren und lernen. Vorschläge für einen systematischen Kompetenzaufbau in den Stufen 5 bis 8.
+ Beata Beigman Klebanov, Christian Stab, Jill Burstein, Yi Song, Binod Gyawali, and Iryna Gurevych. 2016. Argumentation: Content, structure, and relationship with essay quality. In Proceedings of the Third Workshop on Argument Mining (ArgMining2016), pages 70-75, Berlin, Germany. Association for Computational Linguistics.
+ Sebastian Britner, Lorik Dumani, and Ralf Schenkel. 2023. Aquaplane: The argument quality explainer app. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, CIKM '23, pages 5015-5020, New York, NY, USA. Association for Computing Machinery.
+ Richard Correnti, Lindsay Clare Matsumura, Laura Hamilton, and Elaine Wang. 2013. Assessing students' skills at writing analytically in response to texts. The Elementary School Journal, 114(2):142-177.
+ Scott Crossley, Perpetual Baffour, Tian Yu, Alex Franklin, Meg Benner, and Ulrich Boser. 2023a. A large-scale corpus for assessing written argumentation: PERSUADE 2.0.
+ Scott Crossley, Yu Tian, Perpetual Baffour, Alex Franklin, Youngmeen Kim, Wesley Morris, Meg Benner, Aigner Picou, and Ulrich Boser. 2023b. The English Language Learner Insight, Proficiency and Skills Evaluation (ELLIPSE) corpus. International Journal of Learner Corpus Research, 9(2):248-269.
+ Thi Hanh Dang, Thanh Hai Chau, and To Quyen Tra. 2020. A study on the difficulties in writing argumentative essays of English-majored sophomores at Tay Do University, Vietnam. European Journal of English Language Teaching, 6(1).
+ Neele Falk and Gabriella Lapesa. 2023. Bridging argument quality and deliberative quality annotations with adapters. In Findings of the Association for Computational Linguistics: EACL 2023, pages 2469-2488, Dubrovnik, Croatia. Association for Computational Linguistics.
+ Youmna Farag, Helen Yannakoudakis, and Ted Briscoe. 2018. Neural automated essay scoring and coherence modeling for adversarially crafted input. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 263-271, New Orleans, Louisiana. Association for Computational Linguistics.
+ Helmuth Feilke. 2017. Schreib- und Textprozeduren. In Jürgen Baurmann, Clemens Kammler, and Astrid Müller, editors, Handbuch Deutschunterricht. Theorie und Praxis des Lehrens und Lernens, 1 edition, pages 51-57. Reihe Praxis Deutsch.
+ Helmuth Feilke and Sara Rezat. 2021. Textprozeduren und der Erwerb literaler Kompetenz. In Nikolas Koch and Barbara Kozikowski, editors, Sprach(en)erwerb, pages 69-79. Der Deutschunterricht.
+ Ralph P. Ferretti, Scott Andrews-Weckerly, and William E. Lewis. 2007. Improving the argumentative writing of students with learning disabilities: Descriptive and normative considerations. Reading & Writing Quarterly, 23(3):267-285.
+ Sylviane Granger, Estelle Dagneaux, Fanny Meunier, and Magali Paquot. 2009. The International Corpus of Learner English. Presses universitaires de Louvain, Louvain-la-Neuve.
+ Timon Gurcke, Milad Alshomary, and Henning Wachsmuth. 2021. Assessing the sufficiency of arguments through conclusion generation. In Proceedings of the 8th Workshop on Argument Mining, pages 67-77, Punta Cana, Dominican Republic. Association for Computational Linguistics.
+ Olaf Gätje, Sara Rezat, and Torsten Steinhoff. 2012. Positionierung. Zur Entwicklung des Gebrauchs modalisierender Prozeduren in argumentativen Texten von Schülern und Studenten. In Helmuth Feilke and Katrin Lehn, editors, TextROUTinen. Theorie, Erwerb und didaktisch-mediale Modellierung, pages 125-153. Lang, Frankfurt/Main.
+ Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2023. DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. In The Eleventh International Conference on Learning Representations.
+ Andrea Horbach, Dirk Scholten-Akoun, Yuning Ding, and Torsten Zesch. 2017. Fine-grained essay scoring of a complex writing task for native speakers. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 357-366, Copenhagen, Denmark. Association for Computational Linguistics.
+ Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2790-2799. PMLR.
+ Maleerat Ka-kan-dee and Sarjit Kaur. 2014. Argumentative writing difficulties of Thai English major students. In The 2014 WEI International Academic Conference Proceedings, pages 193-207.
+ Zixuan Ke and Vincent Ng. 2019. Automated essay scoring: A survey of the state of the art. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 6300-6308. International Joint Conferences on Artificial Intelligence Organization.
+ Ronald T. Kellogg, Alison P. Whiteford, and Thomas Quinlan. 2010. Does automated feedback help students learn to write? Journal of Educational Computing Research, 42(2):173-196.
+ Norbert Kruse, Anke Reichardt, Maik Herrmann, Friederike Heinzel, and Frank Lipowsky. 2012. Zur Qualität von Kindertexten. Entwicklung eines Bewertungsinstrumentes in der Grundschule. Didaktik Deutsch: Halbjahresschrift für die Didaktik der deutschen Sprache und Literatur, 17(32):87-110.
+ Xia Li, Minping Chen, Jianyun Nie, Zhenxing Liu, Ziheng Feng, and Yingdan Cai. 2018. Coherence-based automated essay scoring using self-attention. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data, pages 386-397, Cham. Springer International Publishing.
+ Zhexiong Liu, Diane Litman, Elaine Wang, Lindsay Matsumura, and Richard Correnti. 2023. Predicting the quality of revisions in argumentative writing. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 275-287, Toronto, Canada. Association for Computational Linguistics.
+ Huy Nguyen and Diane Litman. 2018. Argument mining for improving the automated scoring of persuasive essays. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).
+ John Peloghitis. 2017. Difficulties and strategies in argumentative writing: A qualitative analysis. Transformation in language education. JALT.
+ Isaac Persing, Alan Davis, and Vincent Ng. 2010. Modeling organization in student essays. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 229-239, Cambridge, MA. Association for Computational Linguistics.
+ Isaac Persing and Vincent Ng. 2013. Modeling thesis clarity in student essays. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 260-269, Sofia, Bulgaria. Association for Computational Linguistics.
+ Isaac Persing and Vincent Ng. 2014. Modeling prompt adherence in student essays. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1534-1543, Baltimore, Maryland. Association for Computational Linguistics.
+ Isaac Persing and Vincent Ng. 2015. Modeling argument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 543-552, Beijing, China. Association for Computational Linguistics.
+ Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021. AdapterFusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487-503, Online. Association for Computational Linguistics.
+ Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020. AdapterHub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 46-54, Online. Association for Computational Linguistics.
+ Zahra Rahimi, Diane Litman, Elaine Wang, and Richard Correnti. 2015. Incorporating coherence of topics as a criterion in automatic response-to-text assessment of the organization of writing. In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 20-30, Denver, Colorado. Association for Computational Linguistics.
+ Zahra Rahimi, Diane J. Litman, Richard Correnti, Lindsay Clare Matsumura, Elaine Wang, and Zahid Kisa. 2014. Automatic scoring of an analytical response-to-text assessment. In Intelligent Tutoring Systems, pages 601-610, Cham. Springer International Publishing.
+ Sara Rezat. 2011. Schriftliches Argumentieren. Zur Ontogenese konzessiver Argumentationskompetenz. Didaktik Deutsch: Halbjahresschrift für die Didaktik der deutschen Sprache und Literatur, 16(31):50-67.
+ Sara Rezat. 2018. Argumentative Textprozeduren als Instrumente zur Anbahnung wissenschaftlicher Textkompetenz. In Sabine Schmölzer-Eibinger, Bora Bushati, Christopher Ebner, and Lisa Niederdorfer, editors, Wissenschaftliches Schreiben lehren und lernen. Diagnose und Förderung wissenschaftlicher Textkompetenz in Schule und Universität, pages 125-146. Waxmann, Münster.
+ Juliane Schröter. 2021. Linguistische Argumentationsanalyse. Kurze Einführungen in die germanistische Linguistik. Universitätsverlag Winter, Heidelberg.
+ Gabriella Skitalinskaya, Maximilian Spliethöver, and Henning Wachsmuth. 2023. Claim optimization in computational argumentation. In Proceedings of the 16th International Natural Language Generation Conference, pages 134-152, Prague, Czechia. Association for Computational Linguistics.
+ Gabriella Skitalinskaya and Henning Wachsmuth. 2023. To revise or not to revise: Learning to detect improvable claims for argumentative writing support. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15799-15816, Toronto, Canada. Association for Computational Linguistics.
+ Carlota S. Smith. 2003. Modes of Discourse: The Local Structure of Texts. Cambridge Studies in Linguistics. Cambridge University Press.
+ Christian Stab. 2017. Argumentative Writing Support by means of Natural Language Processing. Ph.D. thesis, Technische Universität Darmstadt, Darmstadt.
+ Christian Stab and Iryna Gurevych. 2017a. Parsing argumentation structures in persuasive essays. Computational Linguistics, 43(3):619-659.
+ Christian Stab and Iryna Gurevych. 2017b. Recognizing insufficiently supported arguments in argumentative essays. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 980-990, Valencia, Spain. Association for Computational Linguistics.
+ Maja Stahl, Nick Düsterhus, Mei-Hua Chen, and Henning Wachsmuth. 2023. Mind the gap: Automated corpus creation for enthymeme detection and reconstruction in learner arguments. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4703-4717, Singapore. Association for Computational Linguistics.
+ Manfred Stede and Jodi Schneider. 2019. Argumentation Mining. Springer International Publishing.
+ Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca.
+ Michael A. R. Townsend, Lynley Hicks, Jacquilyn D. M. Thompson, Keri M. Wilton, Bryan F. Tuck, and Dennis W. Moore. 1993. Effects of introductions and conclusions in assessment of student essays. Journal of Educational Psychology, 85(4):670-678.
+ Masaki Uto, Yikuan Xie, and Maomi Ueno. 2020. Neural automated essay scoring incorporating handcrafted features. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6077-6088, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+ Henning Wachsmuth, Khalid Al-Khatib, and Benno Stein. 2016. Using argument mining to assess the argumentation quality of essays. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1680-1691, Osaka, Japan. The COLING 2016 Organizing Committee.
+ Douglas Walton, Christopher Reed, and Fabrizio Macagno. 2008. Argumentation Schemes. Cambridge University Press.
+ Thiemo Wambsganss, Andrew Caines, and Paula Buttery. 2022a. ALEN app: Argumentative writing support to foster English language learning. In Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022), pages 134-140, Seattle, Washington. Association for Computational Linguistics.
+ Thiemo Wambsganss, Andreas Janson, and Jan Marco Leimeister. 2022b. Enhancing argumentative writing with automated feedback and social comparison nudging. Computers and Education, 191:104644.
+ Thiemo Wambsganss and Christina Niklaus. 2022. Modeling persuasive discourse to adaptively support students' argumentative writing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8748-8760, Dublin, Ireland. Association for Computational Linguistics.
+ Thiemo Wambsganss, Christina Niklaus, Matthias Söllner, Siegfried Handschuh, and Jan Marco Leimeister. 2020. A corpus for argumentative writing support in German. In Proceedings of the 28th International Conference on Computational Linguistics, pages 856-869, Barcelona, Spain (Online). International Committee on Computational Linguistics.
+ Thiemo Wambsganss and Roman Rietsche. 2019. Towards designing an adaptive argumentation learning tool. In International Conference on Interaction Sciences.
332
+ Cong Wang, Zhiwei Jiang, Yafeng Yin, Zifeng Cheng, Shiping Ge, and Qing Gu. 2023. Aggregating multiple heuristic signals as supervision for unsupervised automated essay scoring. In Proceedings of the 61st Annual Meeting of the Association for Computational
333
+
334
+ Linguistics (Volume 1: Long Papers), pages 1399-14013, Toronto, Canada. Association for Computational Linguistics.
335
+
336
+ Florian Weber, Thiemo Wambsganss, Seyed Parsa Neshaei, and Matthias Soellner. 2023. Structured persuasive writing support in legal education: A model and tool for German legal case solutions. In *Findings of the Association for Computational Linguistics: ACL* 2023, pages 2296-2313, Toronto, Canada. Association for Computational Linguistics.
337
+
338
+ Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
339
+
340
+ Ruosong Yang, Jiannong Cao, Zhiyuan Wen, Youzheng Wu, and Xiaodong He. 2020. Enhancing automated essay scoring performance via fine-tuning pre-trained language models with combination of regression and ranking. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 1560–1569, Online. Association for Computational Linguistics.
341
+
342
+ Wei Zhu. 2001. Performing argumentative writing in english: Difficulties, processes, and strategies. TESL Canada Journal, 19(1):34-50.
343
+
344
+ # A Cooccurrence Matrix
+
+ The cooccurrence matrix between all argumentative structure levels (discourse functions, arguments, components, and discourse modes) is shown in Figure 5.
+
+ # B AdapterFusion Activations
+
+ Similar to Pfeiffer et al. (2021), we visualize the activations of our model mDeBERTaV3-fusion-w/-all per layer in Figure 6 to further investigate the influence of each level of argumentative structure on the quality scoring. For all five quality aspects, the first activation layers activate all structure adapters quite diversely. In contrast, the later layers have a clear tendency towards activating only one or two adapters. Notably, relevance and overall quality show similar activation patterns, which may stem from the correlation between their scores.
+
+ ![](images/d9d300d88c6589612ddc97390a040d409d3636021c2ec3e945fcb8f7c87a82b8.jpg)
+ Figure 5: Relative token-level overlap of all argumentative structure labels, separated into the four levels of granularity. For example, $68\%$ of all tokens labeled as Introduction are also labeled Topic.
+
+ ![](images/98075f5f25716aa54f1e32f4f01db49a371f6a36dbeb485a29fc47b05899ee58.jpg)
+ Figure 6: AdapterFusion activation per layer (1-12) and on average over the layers $(Avg)$ for each mDeBERTaV3-fusion-w/-all model per quality aspect. We average the activation for each fused adapter (for discourse functions, arguments, components, or discourse modes) over all instances in the test set of the most representative folding.
2024/A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b51edd7f06ff7ced04c7998fc400fd249e47332698bcc13b8e37cdebb0c6f99
+ size 795181
2024/A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality/layout.json ADDED
The diff for this file is too large to render. See raw diff
2024/A Study on the Calibration of In-context Learning/d325c270-6a74-49da-b92f-90bde3697c69_content_list.json ADDED
The diff for this file is too large to render. See raw diff
2024/A Study on the Calibration of In-context Learning/d325c270-6a74-49da-b92f-90bde3697c69_model.json ADDED
The diff for this file is too large to render. See raw diff
2024/A Study on the Calibration of In-context Learning/d325c270-6a74-49da-b92f-90bde3697c69_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dc4a9cc8cd107a8d265e1d69eff905b7af8d900fb6ad7fc3050bd0959c1d9cb3
+ size 744529
2024/A Study on the Calibration of In-context Learning/full.md ADDED
@@ -0,0 +1,380 @@
+ # A Study on the Calibration of In-context Learning
+
+ Hanlin Zhang $^{1}$ Yi-Fan Zhang $^{2}$ Yaodong Yu $^{3}$ Dhruv Madeka $^{4}$ Dean Foster $^{4}$ Eric Xing $^{2,5}$ Himabindu Lakkaraju $^{1}$ Sham Kakade $^{1,4}$
+
+ $^{1}$ Harvard University $^{2}$ MBZUAI $^{3}$ UC Berkeley
+
+ $^{4}$ Amazon $^{5}$ Carnegie Mellon University
+
+ # Abstract
+
+ Accurate uncertainty quantification is crucial for the safe deployment of machine learning models, and prior research has demonstrated improvements in the calibration of modern language models (LMs). We study in-context learning (ICL), a prevalent method for adapting static LMs through tailored prompts, and examine the balance between performance and calibration across a broad spectrum of natural language understanding and reasoning tasks. Through comprehensive experiments, we observe that, with an increasing number of ICL examples, models initially exhibit increased miscalibration before achieving better calibration, and that miscalibration tends to arise in low-shot settings. Moreover, we find that methods aimed at improving usability, such as fine-tuning and chain-of-thought (CoT) prompting, can lead to miscalibration and unreliable natural language explanations. Furthermore, we explore recalibration techniques and find that a scaling-binning calibrator can reduce calibration errors consistently.
+
+ # 1 Introduction
+
+ Language models (LMs) that encompass transformer-based architectures (Brown et al., 2020; Chowdhery et al., 2023; OpenAI, 2023) can generate coherent and contextually relevant texts for various use cases. Despite their impressive performance, these models occasionally produce erroneous or overconfident outputs, leading to concerns about their calibration (Dawid, 1982; DeGroot and Fienberg, 1983), which measures how faithful a model's prediction uncertainty is. Such a problem is pressing when users adapt them using a recent paradigm called in-context learning (Brown et al., 2020) to construct performant predictors, especially for applications in safety-critical domains (Bhatt et al., 2021; Pan et al., 2023).
+
+ We provide an in-depth evaluation and analysis of how well these models are calibrated - that is, the alignment between the model's confidence in its predictions and the actual correctness of those predictions. This token-level calibration assessment enables us to measure the discrepancy between the model's perceived and actual performance to assess its accuracy and reliability through a Bayesian uncertainty lens.
+
+ We find that LMs such as LLaMA (Touvron et al., 2023a) are poorly calibrated in performant settings, and there exists a calibration-accuracy trade-off (Fig. 1) in low-shot settings $(k < 4)$: as we increase the number of in-context samples, both prediction accuracy and calibration error increase. Such a trade-off can be improved using more ICL examples $(k = 8)$ and larger models. Crucially, this calibration degradation worsens when fine-tuning occurs using specialized data to improve usability, such as curated instructions (Dubois et al., 2023), dialogues (Zheng et al., 2023), or human preference data (Ziegler et al., 2019). Though previous common practice suggests recalibrating models' logits via temperature scaling (Guo et al., 2017), we show that, in contrast to classic regimes, the miscalibration issue in ICL cannot be easily addressed using such well-established scaling approaches (Platt et al., 1999). Thus we propose to use scaling-binning (Kumar et al., 2019), which fits a scaling function, bins its outputs, and then outputs the average of the function values in each bin, to reduce the expected calibration error below 0.1.
+
+ Furthermore, we study the trade-off in reasoning tasks that involve generating explanations (Camburu et al., 2018; Nye et al., 2021; Wei et al., 2022) before the answer, showing that the model can produce confidently wrong answers (using confidence histograms and reliability plots) when prompted with explanations on Strategy QA (Geva et al., 2021), Commonsense QA (Talmor et al., 2018), OpenBook QA (Mihaylov et al., 2018), and World Tree (Jansen et al., 2018). We carefully design our human evaluation and observe that, with the increase in model sizes and the quantity of ICL examples, there is a corresponding rise in the proportion of confidently predicted examples among those incorrectly forecast. Moreover, we find that a high proportion of wrong predictions are of high confidence, and we showcase typical confidently wrong examples of LMs.
+
+ ![](images/b72fe5971922f5b136799f312f9e1e5e5b0952dada7c5e1070a2932ac49f6845.jpg)
+ (a) Demonstration of In-context Learning
+
+ ![](images/4de6fc91f5dd8a8044052e711c9bc1f0249e5360fc8edc373d4b56ea4792be4d.jpg)
+ Figure 1: The accuracy-calibration trade-off of in-context learning. (a) ICL concerns taking task-specific examples as the prompt to adapt a frozen LLM to predict the answer. (b) Classification accuracy and expected calibration error of ICL. As the number of ICL samples increases, the prediction accuracy improves (Left); at the same time, the calibration first worsens $(k < 3)$ and then becomes better (Right).
+
+ ![](images/48e9921e792052e307b5c9517d012990255c303a406cdd4b31ae8e2d263e8efe.jpg)
+ (b) The accuracy and calibration of LLaMA-7B
+
+ Moreover, we find that choosing ICL samples from the validation set does not naturally lead to calibrated predictions, showing that ICL learns in a fairly different way from stochastic gradient descent, a learning mechanism that previous works hypothesize it implements (Von Oswald et al., 2023). Motivated by this difficulty, we design controlled experiments to illustrate that learning performance improves when the examples in the prompt are sampled from the same task rather than repeating a given example in various ways.
+
+ # 2 Related Work
+
+ Calibration of language models. Calibration is a safety property that measures the faithfulness of machine learning models' uncertainty, especially for error-prone tasks using LMs. Previous works find that pre-training (Desai and Durrett, 2020) and explanation (Zhang et al., 2020; González et al., 2021) improve calibration. Models can be very poorly calibrated when we prompt LMs (Jiang et al., 2021), while calibration can also depend on model size (Kadavath et al., 2022). Braverman et al. (2020) assess the long-term dependencies in a language model's generations compared to those of the underlying language and find that entropy drifts as models such as GPT-2 generate text. The intricacy of explanations' effect on complementary team performance poses additional challenges due to users' overreliance on explanations regardless of their correctness (Bansal et al., 2021). Mielke et al. (2022) give a framework for linguistic calibration, a concept that emphasizes the alignment of a model's expressed confidence or doubt with the actual accuracy of its responses. The process involves annotating generations with $\langle DK\rangle$, $\langle LO\rangle$, $\langle HI\rangle$ for confidence levels, then training the confidence-controlled model by appending the control token $\langle DK/LO/HI\rangle$ at the start of the output, followed by training a calibrator to predict these confidence levels, and finally predicting confidence when generating new examples. Tian et al. (2023) find that asking LMs for their probabilities can be better than using conditional probabilities in a traditional way. LHTS (Shih et al., 2023) is a simple amortized inference trick for temperature-scaled sampling from LMs and diffusion models. To aggregate log probabilities across semantically equivalent outputs, Kuhn et al. (2023) utilize bidirectional entailment through a model to identify outputs that are semantically similar, thereby refining the uncertainty estimation process. Cole et al. (2023) identify the calibration challenge in ambiguous QA and distinguish uncertainty about the answer (epistemic uncertainty) from uncertainty about the meaning of the question (denotational uncertainty), proposing sampling and self-verification methods. Kamath et al. (2020) train a calibrator to identify inputs on which the QA model errs and abstain when it predicts an error is likely. Zhao et al. (2023) propose the Pareto optimal learning assessed risk score for calibration and error correction, but it requires additional training. Kalai and Vempala (2023) show the trade-off between calibration and hallucination, but they did not study it in a realistic setting or how the predicted answer's accuracy would impact those two safety aspects.
+
+ In-context learning. Large models such as GPT-3 (Brown et al., 2020) have demonstrated the potential of in-context learning, a method where the model infers the task at hand from the context provided in the input, without requiring explicit retraining or fine-tuning for each new task. Some recent works attempt to understand ICL through meta-learning (Von Oswald et al., 2023), Bayesian inference (Xie et al., 2021), mechanistic interpretability (Olsson et al., 2022), algorithm selection (Bai et al., 2023), and synthetic data and simple function classes (Garg et al., 2022; Akyurek et al., 2022; Raventós et al., 2023). Notably, unlike previous works (Zhao et al., 2021; Han et al., 2023; Fei et al., 2023; Zhou et al., 2023a) that focus on improving task accuracy using the same "calibration" terminology, we study the uncertainty of ICL and measure its trade-off with accuracy.
+
+ # 3 Background
+
+ Setting. Given a pre-trained language model $\mathcal{P}_{\theta}(w_t|w_{<t})$, we seek to adapt it using the prompt $w_0 = [x_1, y_1, x_2, y_2, \ldots, x_{n-1}, y_{n-1}, x_n]$ to generate a predicted answer $y_n = \mathcal{P}_{\theta}(w_0)$. In the context of reasoning, a popular approach is to hand-craft some explanations/rationales/chains-of-thought $e$ in the prompt $w_0 = [x_1, e_1, y_1, x_2, e_2, y_2, \ldots, x_{n-1}, e_{n-1}, y_{n-1}, x_n]$ to generate an explanation $e_n$ and answer $y_n$ for the test sample: $\overbrace{w_1,w_2,\ldots,w_k}^{e_n},y_n = \mathcal{P}_\theta(w_0)$.
+
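+ The prompt assembly described above can be sketched concretely. The `Question:`/`Answer:` template and the toy demonstrations below are illustrative assumptions, not the paper's exact prompt format:

```python
# Assemble a k-shot ICL prompt w_0 = [x_1, y_1, ..., x_k, y_k, x_test].
# The Question/Answer template and the demos are illustrative assumptions.

def build_icl_prompt(demos, x_test):
    """demos: list of (question, answer) pairs sampled from the training set."""
    blocks = [f"Question: {x}\nAnswer: {y}" for x, y in demos]
    blocks.append(f"Question: {x_test}\nAnswer:")
    return "\n\n".join(blocks)

demos = [
    ("Is the sky blue on a clear day?", "Yes"),
    ("Do fish breathe air directly?", "No"),
]
prompt = build_icl_prompt(demos, "Is water a liquid at room temperature?")
print(prompt)
```

+ The frozen LM is then queried once with this prompt, and only the probability it assigns to the answer token (e.g. "Yes" vs. "No") is read off.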
+ We extract answer token probabilities of LMs; e.g., for binary classification tasks, we filter and extract the probabilities $P(\text{"Yes"})$ and $P(\text{"No"})$, based on which we calculate the following statistics for studying the confidence and calibration of LMs:
+
+ Confidence and feature norm. We record the maximum probability of the answer token as its confidence $\mathrm{Conf} = \mathcal{P}_{\theta}(y_n|w_{< n})$ and the feature norm $z_{n}$ as the intermediate hidden state before the linear prediction layer.
+
+ Entropy rate. We denote the entropy of a token $w_{t}$ at position $t$ as $H(w_{t}|w_{<t}) = -\mathbb{E}_{w_{t}\sim \mathcal{P}_{\theta}(\cdot |w_{<t})}[\log \mathcal{P}_{\theta}(w_{t}|w_{<t})]$. We typically measure it on the answer token via setting $w_{t} = y_{n}$. Note that auto-regressive LMs are trained by minimizing the negative log-likelihood objective $\mathcal{L} = -\mathbb{E}_t[\log \mathcal{P}_{\theta}(w_t|w_{<t})]$ on massive corpora.
+
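+ Given the raw logits of the candidate answer tokens, the confidence and entropy statistics above can be computed as in this sketch (the two logit values are made-up numbers for illustration):

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical logits for the answer tokens "Yes" and "No".
probs = softmax([2.0, 0.5])                     # [P("Yes"), P("No")]
conf = max(probs)                               # Conf = P_theta(y_n | w_<n)
entropy = -sum(p * math.log(p) for p in probs)  # H(w_t | w_<t), in nats
```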
+ Empirical estimate of the expected calibration error (ECE). In the realm of probabilistic classifiers, calibration is a crucial concept. A classifier, denoted as $\mathcal{P}_{\theta}$ with parameters $\theta$ and operating over $C$ classes, is said to be "canonically calibrated" (Kull and Flach, 2015) when, for every probability distribution $p$ over the $C$ classes and for every label $y$, the probability that the label is $y$ given that the classifier's prediction is $p$ matches the component of $p$ corresponding to $y$. This is mathematically represented as, $\forall p \in \Delta^{C-1}, \forall y \in Y$:
+
+ $$
+ P(Y = y \mid \mathcal{P}_{\theta}(X) = p) = p_{y}. \tag{1}
+ $$
+
+ Here, $\Delta^{C - 1}$ symbolizes the $(C - 1)$-dimensional simplex, which encompasses all potential probability distributions over the $C$ classes.
+
+ A simpler calibration criterion is "confidence calibration." In this case, a classifier is deemed calibrated if, for every top predicted probability $p^*$, the probability that the true label belongs to the class with the highest predicted probability, given that this maximum predicted probability is $p^*$, equals $p^*$. Formally, $\forall p^* \in [0,1]$:
+
+ $$
+ P(Y = c(X) \mid \max \mathcal{P}_{\theta}(X) = p^{*}) = p^{*}, \tag{2}
+ $$
+
+ where $c(X) = \arg\max_{y} \mathcal{P}_{\theta}(X)_{y}$ is the predicted class and ties are broken arbitrarily. To gauge the calibration of a model, we adopt the Expected Calibration Error (ECE; Guo et al., 2017), defined as:
+
+ $$
+ \mathbb{E}\left[\left|p^{*} - \mathbb{E}[Y = c(X) \mid \max \mathcal{P}_{\theta}(X) = p^{*}]\right|\right]. \tag{3}
+ $$
+
+ In real-world applications, this quantity cannot be computed without quantization. So, the ECE is approximated by segmenting predicted confidences into $M$ distinct bins, $B_{1},\ldots,B_{M}$. The approximation is then computed as:
+
+ $$
+ \widehat{\mathrm{ECE}} = \sum_{m = 1}^{M} \frac{|B_{m}|}{n} \left|\operatorname{acc}\left(B_{m}\right) - \operatorname{conf}\left(B_{m}\right)\right|.
+ $$
+
+ Here, $\operatorname{acc}(B_m)$ is the accuracy within bin $B_{m}$, and $\operatorname{conf}(B_m)$ is the average confidence of predictions in bin $B_{m}$. The total number of samples is represented by $n$, and the dataset consists of $n$ independent and identically distributed samples, $\{(x_i,y_i)\}_{i=1}^n$. In our work, we use this estimator to approximate the ECE.
+
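+ A minimal pure-Python implementation of this binned estimator, using $M$ equal-width confidence bins as in the definition (the toy confidences are made up for illustration):

```python
def expected_calibration_error(confidences, corrects, n_bins=10):
    """Binned ECE: sum_m |B_m|/n * |acc(B_m) - conf(B_m)|."""
    n = len(confidences)
    ece = 0.0
    for m in range(n_bins):
        lo, hi = m / n_bins, (m + 1) / n_bins
        # Bin B_m holds confidences in (lo, hi]; bin 0 also takes c == 0.
        in_bin = [i for i, c in enumerate(confidences)
                  if lo < c <= hi or (m == 0 and c == 0.0)]
        if not in_bin:
            continue
        acc = sum(corrects[i] for i in in_bin) / len(in_bin)
        avg_conf = sum(confidences[i] for i in in_bin) / len(in_bin)
        ece += len(in_bin) / n * abs(acc - avg_conf)
    return ece

# Toy example: four predictions with confidences and 0/1 correctness.
ece = expected_calibration_error([0.95, 0.9, 0.8, 0.6], [1, 1, 0, 1])
```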
+ # 4 Experiments
+
+ We briefly summarize our results and findings before explaining the experimental settings.
+
+ <table><tr><td rowspan="3">Dataset</td><td colspan="12">LLaMA-30B</td></tr><tr><td colspan="2">0-shot</td><td colspan="2">1-shot</td><td colspan="2">2-shot</td><td colspan="2">3-shot</td><td colspan="2">4-shot</td><td colspan="2">8-shot</td></tr><tr><td>ECE</td><td>Acc</td><td>ECE</td><td>Acc</td><td>ECE</td><td>Acc</td><td>ECE</td><td>Acc</td><td>ECE</td><td>Acc</td><td>ECE</td><td>Acc</td></tr><tr><td></td><td colspan="12">Text Classification</td></tr><tr><td>AGNews</td><td>0.261</td><td>0.37</td><td>0.043</td><td>0.830</td><td>0.049</td><td>0.817</td><td>0.067</td><td>0.810</td><td>0.049</td><td>0.821</td><td>0.047</td><td>0.855</td></tr><tr><td>RTE</td><td>0.023</td><td>0.672</td><td>0.051</td><td>0.742</td><td>0.060</td><td>0.747</td><td>0.050</td><td>0.738</td><td>0.048</td><td>0.748</td><td>0.058</td><td>0.752</td></tr><tr><td>CB</td><td>0.069</td><td>0.500</td><td>0.312</td><td>0.696</td><td>0.216</td><td>0.789</td><td>0.217</td><td>0.834</td><td>0.192</td><td>0.814</td><td>0.181</td><td>0.796</td></tr><tr><td>SST-2</td><td>0.083</td><td>0.607</td><td>0.163</td><td>0.930</td><td>0.139</td><td>0.940</td><td>0.126</td><td>0.961</td><td>0.112</td><td>0.964</td><td>0.080</td><td>0.964</td></tr><tr><td></td><td colspan="12">Reasoning with Scratchpad</td></tr><tr><td>Strategy QA</td><td>0.204</td><td>0.450</td><td>0.154</td><td>0.619</td><td>0.174</td><td>0.654</td><td>0.172</td><td>0.660</td><td>0.161</td><td>0.672</td><td>0.152</td><td>0.665</td></tr><tr><td>Commonsense QA</td><td>0.048</td><td>0.356</td><td>0.232</td><td>0.589</td><td>0.290</td><td>0.608</td><td>0.253</td><td>0.675</td><td>0.283</td><td>0.644</td><td>0.289</td><td>0.653</td></tr><tr><td>World Tree</td><td>0.112</td><td>0.534</td><td>0.211</td><td>0.570</td><td>0.251</td><td>0.621</td><td>0.185</td><td>0.680</td><td>0.206</td><td>0.646</td><td>-</td><td>-</td></tr><tr><td>OpenBook QA</td><td>0.036</td><td>0.386</td><td>0.231</td><td>0.561</td><td>0.255</td><td>0.604</td><td>0.207</td><td>0.644</td><td>0.206</td><td>0.648</td><td>0.191</td><td>0.662</td></tr></table>
+
+ Table 1: Accuracy and calibration of the LLaMA-30B model across four text classification datasets and four reasoning datasets. Results are excluded when the data exceeds the context length limit.
+
+ ![](images/afa54d688b97f4bddf5c31a6920842555922375c2f43f37cdea3236dc35b39aa.jpg)
+
+ ![](images/d180a8ee2b7664ab35eb10aecbde0c920cc4b41f1b2f9affd894c3071efa3be7.jpg)
+
+ ![](images/3ed14e895ef5599aafb8d051d841d950290ba7f703521dbdc2fb39600ad2f0fe.jpg)
+
+ ![](images/8323ab3bd0a260ff35d28815c1575b3ab00d88379d1cd63de73371f93ed7b872.jpg)
+ Figure 2: Reliability plots and confidence histograms of LLaMA models on 4-shot learning tasks. Results of different sizes 7B (left), 13B (middle), and 30B (right) are plotted.
+
+ ![](images/5a8af816852de76108ac73cc52054b09010ab198c3af3b4c7903287fea9bee77.jpg)
+
+ ![](images/6ed95aad632bbad82848234357db61fd935e4a230c1d22c357847a9d2db856f2.jpg)
+
+ - The base LMs we considered are calibrated when prompted with a sufficient number of ICL examples to obtain non-trivial performance.
+ - As we increase the number of ICL examples, models tend to first become more miscalibrated and then better calibrated. In low-shot settings $(k < 4)$, models can be miscalibrated, in part due to poor data (aleatoric) uncertainty.
+ - Interventions that improve usability, such as fine-tuning and chain-of-thought (CoT) prompting, can lead to miscalibration. The generated explanations from CoT can improve predictive results but may not be reliable according to human evaluation.
+
+ # 4.1 Experimental Settings
+
+ Models. We study decoder-only autoregressive LMs involving LLaMA (Touvron et al., 2023a), ranging from 7B to 30B, and its variants fine-tuned with instructions, dialog, or RLHF, like Alpaca (Dubois et al., 2023), Vicuna (Zheng et al., 2023), and LLaMA2-Chat (Touvron et al., 2023b).
+
+ Datasets and tasks. We use both traditional NLU tasks such as AGNews (Zhang et al., 2015), TREC (Voorhees and Tice, 2000), CB (Schick and Schütze, 2021), SST-2 (Socher et al., 2013), and DBPedia (Zhang et al., 2015), as well as reasoning question answering tasks like Strategy QA (Geva et al., 2021), Commonsense QA (Talmor et al., 2018), OpenBook QA (Mihaylov et al., 2018), and World Tree (Jansen et al., 2018). Notably, reasoning task performance can be greatly improved in general via prompting methods like scratchpad (Nye et al., 2021; Wei et al., 2022) that enable models to generate natural language explanations before predicting an answer.
+
+ In-context learning settings. For $k$-shot learning, we prompt the model via sampling $k$ examples from the training set for each test example. Each experiment is repeated 10 times to reduce variance, and we report the mean results. We use $M = 10$ bins for calculating calibration errors.
+
+ # 4.2 Numerical Results
+
+ Model performance and calibration. We record the performance and calibration errors for $k$-shot learning ($k = 0,1,2,3,4,8$), characterizing the calibration-accuracy trade-off in both classic and realistic settings (Tab. 1). Our findings are twofold: as more in-context examples are included, we observe a concurrent rise in both accuracy and calibration error across most low-shot situations. Especially, when $k = 0$ increases to $k = 1$, there is a marked boost in both accuracy and calibration error, demonstrating the importance of in-context examples for learning performance, while one single example may not be able to reduce aleatoric uncertainty. In particular, for reasoning tasks, we explore prompting approaches that explicitly include explanations, i.e., scratchpad (Nye et al., 2021) or chain-of-thought (Wei et al., 2022), showing that calibration significantly degrades after generating a long context for reasoning and explaining the final answer. We also note that having more ICL examples does not necessarily lead to better calibration, though the predictive performance can generally improve (e.g., $k = 8$ for CB in Tab. 1). This may stem from the intrinsic limitations of transformers in effectively modeling long-term dependencies.
+
+ Post-hoc recalibration. We conducted experiments with three strategies (Algorithm 1) to address miscalibration using temperature scaling (Guo et al., 2017) and scaling-binning (Kumar et al., 2019) with a learnable parameter $w$:
+
+ 1. (0-shot) Learning $w$ from the training split and applying it to all test samples with different shot numbers.
+ 2. ($k$-shot) Learning $w$ for each $k$-shot ICL; in other words, different temperatures are learned for different shot numbers in ICL.
+ 3. (Fix $w$) Fixing the prompt for each experiment and learning $w$ corresponding to the fixed prompt. In other words, $w$ is learned for calibration for every possible ICL prompt.
+
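+ In all three strategies, the temperature can be fit by minimizing log-loss on a recalibration split. The grid search below is an illustrative simplification with made-up logits, not the paper's exact optimization procedure:

```python
import math

def temperature_scale(logits, T):
    """Soften (T > 1) or sharpen (T < 1) a logit vector before softmax."""
    scaled = [v / T for v in logits]
    m = max(scaled)
    exps = [math.exp(v - m) for v in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def fit_temperature(logit_sets, labels):
    """Grid-search T minimizing negative log-likelihood on held-out data."""
    grid = [0.5 + 0.1 * i for i in range(46)]  # T in [0.5, 5.0]
    def nll(T):
        return -sum(math.log(temperature_scale(ls, T)[y])
                    for ls, y in zip(logit_sets, labels))
    return min(grid, key=nll)

# Overconfident toy model: ~98% confidence but only 75% accuracy,
# so the fitted temperature should come out well above 1.
T = fit_temperature([[4.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 0.0]],
                    [0, 0, 1, 1])
```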
+ In Appendix Alg. 1, we introduce the recalibration algorithm employing temperature scaling. Additionally, we utilize the scaling-binning calibrator (Kumar et al., 2019), which fits a calibration function $w \in \mathcal{W}$ to the recalibration dataset: $\arg \min_{w} \sum_{(x_i, y_i)} \ell(w \cdot \mathcal{P}_\theta(x_i), y_i)$, where $\ell$ is the log-loss. Subsequently, the input space is partitioned into bins, ensuring an equal number of inputs in each bin (defaulting to 10 bins). Within each bin, the average of the fitted function's values is computed and output for recalibration.
+
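+ The binning-and-averaging step of the scaling-binning calibrator can be sketched as follows. The scaling step (fitting $w$) is assumed to have been applied already, so the inputs are already-scaled confidences; equal-mass bin edges come from the recalibration split. The toy values are made up for illustration:

```python
import bisect

def scaling_binning(scaled_train, scaled_test, n_bins=10):
    """Replace each test confidence by the mean scaled value of its
    equal-mass bin (the binning step of Kumar et al., 2019)."""
    srt = sorted(scaled_train)
    n = len(srt)
    # Equal-mass bin edges estimated from the recalibration split.
    edges = [srt[min(n - 1, (i * n) // n_bins)] for i in range(1, n_bins)]
    bins = [[] for _ in range(n_bins)]
    for c in srt:
        bins[bisect.bisect_right(edges, c)].append(c)
    means = [sum(b) / len(b) if b else None for b in bins]
    out = []
    for c in scaled_test:
        b = bisect.bisect_right(edges, c)
        out.append(means[b] if means[b] is not None else c)
    return out

# Two equal-mass bins: {0.1, 0.2} and {0.8, 0.9}.
out = scaling_binning([0.1, 0.2, 0.8, 0.9], [0.3, 0.95], n_bins=2)
```

+ Averaging within each bin discretizes the calibrator's outputs, which is what makes the resulting calibration error measurable without the bias that plagues continuous-output scaling methods.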
+ Upon examination of Table 3 and Table 4, it is evident that none of the aforementioned strategies utilizing temperature scaling achieves satisfactory calibration performance. This finding contrasts with the well-established success of scaling confidence scores in the supervised learning setting, where it effectively reduces calibration errors (Guo et al., 2017). The fact that applying a post-processing calibration method, such as temperature scaling, cannot directly resolve the miscalibration issue suggests that ICL might have different properties compared to predictions from classical supervised learning models. On the other hand, the scaling-binning method demonstrates superior performance in our experiments, successfully reducing calibration errors below 0.1.
+
+ The effect of fine-tuning. We show that Vicuna, Alpaca, and LLaMA2-Chat are all more accurate but less calibrated than their LLaMA counterpart backbones (Fig. 3); the margin is especially large for reasoning tasks and Vicuna. Our finding indicates that fine-tuning might significantly degrade calibration, corroborating the evidence reported for GPT-4 (OpenAI, 2023), albeit it can greatly improve reasoning accuracy. Our results provide evidence that though fine-tuning on carefully curated datasets can greatly improve question-answering performance, especially for hard tasks like reasoning problems, attention may need to be paid when assessing the calibration of those models' predictions. Moreover, we include results for Mistral-7B (Jiang et al., 2023), a sparse Mixture of Experts (MoEs) architecture with sliding window attention. As a base model, it shows similar performance and calibration compared with LLaMA2-7B, indicating that our conclusion still holds for the model
+
138
+ <table><tr><td rowspan="2"></td><td colspan="2">1-shot</td><td colspan="2">2-shot</td><td colspan="2">4-shot</td><td colspan="2">8-shot</td><td rowspan="2">Avg Acc</td><td rowspan="2">Avg ECE</td></tr><tr><td>ACC</td><td>ECE</td><td>ACC</td><td>ECE</td><td>ACC</td><td>ECE</td><td>ACC</td><td>ECE</td></tr><tr><td>Vanilla</td><td>0.740</td><td>0.098</td><td>0.877</td><td>0.132</td><td>0.917</td><td>0.108</td><td>0.954</td><td>0.064</td><td>0.872</td><td>0.100</td></tr><tr><td>Repeat prompt</td><td>0.740</td><td>0.098</td><td>0.693</td><td>0.155</td><td>0.801</td><td>0.117</td><td>0.820</td><td>0.111</td><td>0.764</td><td>0.120</td></tr><tr><td>Repeat context</td><td>0.740</td><td>0.098</td><td>0.668</td><td>0.208</td><td>0.657</td><td>0.220</td><td>0.607</td><td>0.219</td><td>0.668</td><td>0.186</td></tr></table>
139
+
140
+ ![](images/95e6ae072a3744b822b3dd39646d08d2ebeae54294c1943b7956b073728626e6.jpg)
141
+ (a) Classification accuracy
142
+
143
+ ![](images/388e545072565df54a43e9729f23e9a411cd4916843004499dbe3910be4e3636.jpg)
144
+ (b) Calibration error
145
+ Figure 3: Accuracy and calibration errors of base models LLaMA and Mistral, as well as fine-tuned variants. Reported Acc and ECE results are averaged across experiments conducted with $\{0,1,2,4,8\}$ shots.
+
+ Table 2: Acc and ECE of LLaMA-7B model on SST-2 with different prompt repetition strategies.
+
+ <table><tr><td>Dataset</td><td>Strategy</td><td>0-shot</td><td>1-shot</td><td>2-shot</td><td>3-shot</td><td>4-shot</td><td>8-shot</td><td>Avg</td></tr><tr><td rowspan="4">SST-2</td><td>None</td><td>0.043</td><td>0.223</td><td>0.119</td><td>0.101</td><td>0.060</td><td>0.049</td><td>0.099</td></tr><tr><td>0-shot</td><td>0.043</td><td>0.216</td><td>0.082</td><td>0.074</td><td>0.047</td><td>0.057</td><td>0.087</td></tr><tr><td>k-shot</td><td>0.034</td><td>0.197</td><td>0.101</td><td>0.079</td><td>0.041</td><td>0.038</td><td>0.139</td></tr><tr><td>Fix w</td><td>0.035</td><td>0.176</td><td>0.086</td><td>0.073</td><td>0.047</td><td>0.043</td><td>0.077</td></tr><tr><td rowspan="4">CB</td><td>None</td><td>0.125</td><td>0.316</td><td>0.177</td><td>0.202</td><td>0.221</td><td>0.210</td><td>0.203</td></tr><tr><td>0-shot</td><td>0.015</td><td>0.252</td><td>0.162</td><td>0.217</td><td>0.217</td><td>0.199</td><td>0.209</td></tr><tr><td>k-shot</td><td>0.015</td><td>0.357</td><td>0.187</td><td>0.188</td><td>0.212</td><td>0.216</td><td>0.214</td></tr><tr><td>Fix w</td><td>0.015</td><td>0.217</td><td>0.159</td><td>0.173</td><td>0.182</td><td>0.210</td><td>0.190</td></tr><tr><td rowspan="4">RTE</td><td>None</td><td>0.108</td><td>0.110</td><td>0.142</td><td>0.122</td><td>0.128</td><td>0.120</td><td>0.122</td></tr><tr><td>0-shot</td><td>0.107</td><td>0.112</td><td>0.143</td><td>0.114</td><td>0.125</td><td>0.116</td><td>0.119</td></tr><tr><td>k-shot</td><td>0.108</td><td>0.115</td><td>0.136</td><td>0.112</td><td>0.126</td><td>0.125</td><td>0.120</td></tr><tr><td>Fix w</td><td>0.101</td><td>0.082</td><td>0.097</td><td>0.068</td><td>0.076</td><td>0.097</td><td>0.088</td></tr><tr><td 
rowspan="4">AGNews</td><td>None</td><td>0.089</td><td>0.057</td><td>0.071</td><td>0.121</td><td>0.085</td><td>0.123</td><td>0.090</td></tr><tr><td>0-shot</td><td>0.067</td><td>0.087</td><td>0.098</td><td>0.160</td><td>0.107</td><td>0.130</td><td>0.114</td></tr><tr><td>k-shot</td><td>0.083</td><td>0.074</td><td>0.059</td><td>0.109</td><td>0.073</td><td>0.082</td><td>0.079</td></tr><tr><td>Fix w</td><td>0.080</td><td>0.074</td><td>0.080</td><td>0.091</td><td>0.073</td><td>0.080</td><td>0.080</td></tr></table>
+
+ Table 3: ECE for different calibration strategies using temperature scaling (Guo et al., 2017), for the base LLaMA-2-7B model across various shot settings.
+
+ <table><tr><td>Dataset</td><td>Strategy</td><td>0-shot</td><td>1-shot</td><td>2-shot</td><td>3-shot</td><td>4-shot</td><td>8-shot</td><td>Avg</td></tr><tr><td rowspan="4">SST-2</td><td>None</td><td>0.043</td><td>0.223</td><td>0.119</td><td>0.101</td><td>0.060</td><td>0.049</td><td>0.099</td></tr><tr><td>0-shot</td><td>0.015</td><td>0.062</td><td>0.055</td><td>0.060</td><td>0.062</td><td>0.057</td><td>0.052</td></tr><tr><td>k-shot</td><td>0.022</td><td>0.007</td><td>0.008</td><td>0.013</td><td>0.004</td><td>0.008</td><td>0.010</td></tr><tr><td>Fix w</td><td>0.021</td><td>0.004</td><td>0.008</td><td>0.010</td><td>0.005</td><td>0.009</td><td>0.010</td></tr><tr><td rowspan="4">CB</td><td>None</td><td>0.125</td><td>0.316</td><td>0.177</td><td>0.202</td><td>0.221</td><td>0.210</td><td>0.203</td></tr><tr><td>0-shot</td><td>0.122</td><td>0.130</td><td>0.121</td><td>0.086</td><td>0.083</td><td>0.119</td><td>0.110</td></tr><tr><td>k-shot</td><td>0.119</td><td>0.109</td><td>0.100</td><td>0.109</td><td>0.101</td><td>0.049</td><td>0.094</td></tr><tr><td>Fix w</td><td>0.119</td><td>0.088</td><td>0.085</td><td>0.110</td><td>0.121</td><td>0.069</td><td>0.099</td></tr><tr><td rowspan="4">RTE</td><td>None</td><td>0.108</td><td>0.110</td><td>0.142</td><td>0.122</td><td>0.128</td><td>0.120</td><td>0.122</td></tr><tr><td>0-shot</td><td>0.078</td><td>0.083</td><td>0.090</td><td>0.100</td><td>0.102</td><td>0.115</td><td>0.093</td></tr><tr><td>k-shot</td><td>0.089</td><td>0.084</td><td>0.089</td><td>0.095</td><td>0.101</td><td>0.112</td><td>0.096</td></tr><tr><td>Fix w</td><td>0.077</td><td>0.086</td><td>0.092</td><td>0.100</td><td>0.108</td><td>0.117</td><td>0.099</td></tr><tr><td 
rowspan="4">AGNews</td><td>None</td><td>0.089</td><td>0.057</td><td>0.071</td><td>0.121</td><td>0.085</td><td>0.123</td><td>0.090</td></tr><tr><td>0-shot</td><td>0.007</td><td>0.013</td><td>0.011</td><td>0.014</td><td>0.013</td><td>0.014</td><td>0.013</td></tr><tr><td>k-shot</td><td>0.001</td><td>0.009</td><td>0.015</td><td>0.018</td><td>0.005</td><td>0.005</td><td>0.009</td></tr><tr><td>Fix w</td><td>0.015</td><td>0.019</td><td>0.019</td><td>0.005</td><td>0.008</td><td>0.017</td><td>0.013</td></tr></table>
+
+ Table 4: ECE for different calibration strategies using the scaling-binning calibrator (Kumar et al., 2019), for the base LLaMA-2-7B model across various shot settings.
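As a point of reference, temperature scaling fits a single scalar $T$ on held-out logits by minimizing negative log-likelihood, then divides all logits by $T$ at test time. The sketch below is a minimal NumPy illustration on synthetic over-confident logits; the grid search and toy data are our own choices, not the paper's implementation.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.1, 20.0, 200)):
    """Pick the temperature T > 0 minimizing held-out negative log-likelihood.
    T > 1 softens over-confident predictions; T < 1 sharpens under-confident ones."""
    def nll(T):
        p = softmax(val_logits, T)
        return -np.log(p[np.arange(len(val_labels)), val_labels] + 1e-12).mean()
    return min(grid, key=nll)

# Toy over-confident model: ~99% confidence but only ~60% accuracy.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)
pred = np.where(rng.random(500) < 0.6, labels, 1 - labels)  # predicted class
logits = np.zeros((500, 2))
logits[np.arange(500), pred] = 5.0  # large margin regardless of correctness
T = fit_temperature(logits, labels)  # fitted T > 1, i.e. softening
```

Fitting on a held-out split and reusing the same $T$ at test time is what makes this a post-hoc calibrator: accuracy is unchanged (argmax is invariant to $T$), only the confidence is rescaled.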
+
+ pre-trained with significantly different data and architecture. Comprehensive results and variances across configurations are detailed in Appendix Table 11.
+
+ The effect of prompt formats. In our study, we explore the effects of different prompt strategies using three distinct methods for predicting the label $y_{n}$ of a test example $x_{n}$. First, the Repeat-context approach constructs the prompt as $w_{0} = [x_{1}, x_{1}, \ldots, x_{1}, y_{1}, x_{n}]$, where the context $x_{1}$ is repeated $n-1$ times but the label $y_{1}$ is not included in the repetition. Next, the Repeat-prompt strategy shapes the prompt as $w_{0} = [x_{1}, y_{1}, \ldots, x_{1}, y_{1}, x_{n}]$, where both the context $x_{1}$ and the label $y_{1}$ are repeated $n-1$ times. Finally, the Normal strategy constructs the prompt as $w_{0} = [x_{1}, y_{1}, x_{2}, y_{2}, \ldots, x_{n-1}, y_{n-1}, x_{n}]$, systematically incorporating distinct context-label pairs.
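The three constructions can be sketched as prompt-building code; the function name and newline separators below are illustrative choices, not the paper's exact templates.

```python
def build_prompt(examples, test_x, strategy="normal"):
    """examples: list of (context, label) ICL pairs; test_x: the query x_n.

    normal:         [x1, y1, x2, y2, ..., x_{n-1}, y_{n-1}, x_n]
    repeat_prompt:  [x1, y1, x1, y1, ..., x1, y1, x_n]   (pair repeated n-1 times)
    repeat_context: [x1, x1, ..., x1, y1, x_n]           (context repeated, one label)
    """
    k = len(examples)  # k = n - 1 demonstrations
    if strategy == "normal":
        demo = "".join(f"{x}\n{y}\n" for x, y in examples)
    elif strategy == "repeat_prompt":
        x1, y1 = examples[0]
        demo = f"{x1}\n{y1}\n" * k
    elif strategy == "repeat_context":
        x1, y1 = examples[0]
        demo = f"{x1}\n" * k + f"{y1}\n"
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return demo + test_x
```

Note that all three strategies consume the same token budget order, so differences in accuracy and ECE reflect the content of the demonstrations rather than prompt length alone.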
+
+ The findings, detailed in Tab. 2, reveal two insights. First, integrating labels within prompts significantly decreases uncertainty and enhances learning performance, possibly because labels help the model understand the label space, leading to better classification outcomes; in contrast, simply repeating the context without labels does not improve outcomes. Second, the diversity of ICL examples in the prompt greatly affects performance; a potential explanation is that diversity promotes better task learning (Pan, 2023). These observations corroborate that ICL is performant
+
+ ![](images/4bed0176156b86ed9ff066f549cabaa4d27faa6b0055593de4b04438a1ce38a9.jpg)
+ (a) 0-shot
+
+ ![](images/ae29fb3e8a98a7818a51cef7b7090a8d393b8899392b3411a4908264c0a55a81.jpg)
+ (b) 1-shot
+
+ ![](images/dc43a6c411a4b430871435eec02f8e6f6613445a252e6203c83de6315e7b9af9.jpg)
+ (c) 4-shot
+
+ ![](images/82d4a54d5dcffb15221e8add30bde8170011f30aacdde12cc64983e961867fc5.jpg)
+ (d) 8-shot
+
+ ![](images/0a976ed0d37379e91c2339d8b6abc78687ef42fcfdf5b2b0e477509bfb4cad7d.jpg)
+ Figure 4: Illustration of the confidence distribution: the number of samples whose confidence exceeds a given threshold on Commonsense QA.
+ (a) 0-shot
+ Figure 5: The number of wrongly classified examples whose confidence is above a threshold, for different numbers of shots, on Commonsense QA.
+
+ ![](images/46ba38491c047a233faa24d6d1d39055bce7464314952970c35ecc0eedbe3477.jpg)
+ (b) 1-shot
+
+ ![](images/db6bd9eb0b8b5956fbe5f6ead7b23aa74e7cdc0d2ac57d09f15f3c6d7669304b.jpg)
+ (c) 4-shot
+
+ ![](images/e6e96d54626a8be62498c9aeb90bc1afce2e23af9128c42242ea3f25fb223259.jpg)
+ (d) 8-shot
+
+ when the number of ICL examples is large and they demonstrate consistent task properties. Importantly, the trade-off persists across these controlled scenarios: as we increase the number of ICL examples, models first become more miscalibrated and then better calibrated.
+
+ # 4.3 Qualitative Results
+
+ Reliability diagram and confidence histogram. A reliability diagram is a graphical tool for evaluating the calibration of a model's probabilistic predictions across multiple classes: it compares predicted probabilities against actual outcomes, and a perfectly calibrated model has its values lie on the diagonal $y = x$ line. A confidence histogram, on the other hand, displays the distribution of the model's prediction confidences across all classes, showing how often the model predicts certain probabilities.
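Both plots derive from the same per-bin statistics that also define ECE; a minimal sketch follows (the bin count and function name are our choices):

```python
import numpy as np

def reliability_bins(confidences, correct, n_bins=10):
    """Return per-bin (mean confidence, accuracy, weight) and the resulting ECE.
    confidences: max predicted probability per example; correct: 0/1 outcomes."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins, ece = [], 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            conf, acc, w = confidences[mask].mean(), correct[mask].mean(), mask.mean()
            bins.append((conf, acc, w))   # one point of the reliability diagram
            ece += w * abs(acc - conf)    # weighted deviation from the diagonal
    return bins, ece
```

Plotting accuracy against mean confidence per bin gives the reliability diagram; plotting the bin weights alone gives the confidence histogram.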
+
+ Recall that we found significant miscalibration for reasoning with CoT settings; we therefore closely examine the poorly calibrated reasoning cases using these plots (Fig. 2 and Fig. 6). Our results in the 4-shot setting show that on the reasoning problems we consider (Strategy QA, Commonsense QA, OpenBook QA, WorldTree), models are consistently over-confident, with ECEs above 0.15. Larger models are better in both Acc and ECE, except that for OpenBook QA calibration worsens as the model size increases. Moreover, confidence scores tend to concentrate at high values as we enlarge the model size: in Commonsense QA and OpenBook QA especially, nearly all predictions of the 13B and 30B models have confidence exceeding 0.8.
+
+ # 4.4 Ablation Studies
+
+ As case studies, we examine how miscalibration can impact the selective classification of LMs, where models are supposed to abstain from uncertain predictions in high-stakes settings.
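A simple instantiation of selective classification abstains whenever confidence falls below a threshold; the sketch below (our notation, not the paper's protocol) returns coverage and the error rate among answered examples, a risk estimate that miscalibrated confidences would distort.

```python
import numpy as np

def selective_risk(confidences, correct, threshold):
    """Abstain on examples with confidence below `threshold`.
    Returns (coverage, risk): fraction answered, and error rate among answered."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    answered = confidences >= threshold
    coverage = float(answered.mean())
    risk = float((~correct[answered]).mean()) if answered.any() else 0.0
    return coverage, risk
```

Sweeping the threshold traces a risk-coverage curve; an over-confident model places wrong answers above the threshold, so its realized risk exceeds what the confidences suggest.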
+
+ Ablation with model sizes. As we enlarge the models, they become more confident (as measured by the confidence histogram) and more accurate (Fig. 2). Moreover, the ECE first increases and then decreases. In some settings, such as SST-2 and OpenBookQA, calibration errors may correlate negatively with model size (Appendix Tab. 9).
+
+ To better understand the miscalibration issue of ICL, we conduct fine-grained experiments to examine ICL properties: we measure the norm of the representation vectors<sup>1</sup> for different numbers of shots in ICL. Meanwhile, we also measure the confidence and entropy of the prediction for $y_{n}$ ,
+
+ <table><tr><td rowspan="2">Dataset</td><td colspan="12">LLaMA-30B</td></tr><tr><td colspan="4">Norm</td><td colspan="4">Entropy</td><td colspan="4">Confidence</td></tr><tr><td></td><td>0-shot</td><td>1-shot</td><td>4-shot</td><td>8-shot</td><td>0-shot</td><td>1-shot</td><td>4-shot</td><td>8-shot</td><td>0-shot</td><td>1-shot</td><td>4-shot</td><td>8-shot</td></tr><tr><td>AGNews</td><td>78.8</td><td>92.3</td><td>92.1</td><td>92.2</td><td>3.920</td><td>0.650</td><td>0.595</td><td>0.444</td><td>0.214</td><td>0.821</td><td>0.819</td><td>0.865</td></tr><tr><td>CB</td><td>88.4</td><td>91.7</td><td>89.2</td><td>87.9</td><td>3.857</td><td>1.266</td><td>0.935</td><td>0.823</td><td>0.193</td><td>0.566</td><td>0.629</td><td>0.577</td></tr><tr><td>DBPedia</td><td>77.9</td><td>89.5</td><td>91.0</td><td>90.1</td><td>4.105</td><td>1.438</td><td>0.848</td><td>0.718</td><td>0.078</td><td>0.578</td><td>0.705</td><td>0.671</td></tr></table>
+
+ Table 5: Norm of representation, entropy, and confidence of LLaMA-30B across three text classification datasets.
+
+ <table><tr><td>Dataset</td><td>Case</td><td>Reasoning</td><td>\( \hat{y} \)</td><td>p</td></tr><tr><td rowspan="6">Commonsense</td><td colspan="4">Question: What would you do if you have curiosity but are blind and paralyzed?</td></tr><tr><td colspan="4">Options: A) hear news, B) analyse, C) go somewhere, D) examine thing, E) see favorite show</td></tr><tr><td colspan="4">Ground Truth: B</td></tr><tr><td>7B</td><td>If you have curiosity but are blind and paralyzed, you would go somewhere to see things. You cannot hear news, analyse, go somewhere, examine thing or see favorite show.</td><td>C</td><td>0.5046</td></tr><tr><td>13B</td><td>If you are blind and paralyzed, you can&#x27;t see anything. So, you can&#x27;t go somewhere. You can&#x27;t examine thing. You can&#x27;t see favorite show. You can&#x27;t analyse. So, you can hear news.</td><td>A</td><td>0.8684</td></tr><tr><td>30B</td><td>If you are blind and paralyzed, you cannot see your favorite show. You cannot hear news as you are blind. You cannot examine thing as you are paralyzed. You can go somewhere to see things.</td><td>C</td><td>0.9720</td></tr><tr><td rowspan="6">Open Book</td><td colspan="4">Question: What would you do if you have curiosity but are blind and paralyzed?</td></tr><tr><td colspan="4">Options: A) hear news, B) analyse, C) go somewhere, D) examine thing, E) see favorite show</td></tr><tr><td colspan="4">Ground Truth: B</td></tr><tr><td>7B</td><td>A girl and her mom have the same hair length.</td><td>D</td><td>0.6365</td></tr><tr><td>13B</td><td>A girl and her mom have the same date of birth.</td><td>A</td><td>0.9782</td></tr><tr><td>30B</td><td>A girl and her mom have the same genes.</td><td>A</td><td>0.9831</td></tr></table>
+
+ Table 6: Qualitative results of LLaMA on Commonsense QA and OpenBook QA.
+
+ and the results are summarized in Tab. 5. When switching from 0-shot to 1-shot, all three measurements (representation norm, entropy, and confidence) change drastically; as $k$ increases further $(1 \to 4 \to 8)$, the changes become smoother. This shows that adding in-context examples can substantially alter model behavior, while the model behaves relatively similarly across shot counts once the task is specified $(k \neq 0)$. Meanwhile, more ICL samples lead to smaller entropy and higher confidence in most cases. Changes in the feature representation can manifest either as growth in the representation's norm or as a shift in its direction; since quantifying directional change is challenging, we examine the norm as a surrogate measure, which suggests that the model's features change systematically as the number of ICL samples increases.
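Given the logits over candidate label tokens and the last-token representation, the three measurements can be computed as follows (a sketch; the tensor shapes and function name are assumptions on our part):

```python
import numpy as np

def icl_stats(label_logits, hidden):
    """label_logits: (K,) logits over the K candidate label tokens for y_n;
    hidden: (d,) last-layer representation of the final input token.
    Returns (representation norm, predictive entropy, confidence)."""
    z = label_logits - label_logits.max()          # stable softmax
    p = np.exp(z) / np.exp(z).sum()
    entropy = float(-(p * np.log(p + 1e-12)).sum())
    confidence = float(p.max())                    # probability of the argmax label
    norm = float(np.linalg.norm(hidden))
    return norm, entropy, confidence
```

A uniform prediction over $K$ labels gives the maximal entropy $\log K$ and confidence $1/K$, matching the near-uniform 0-shot rows of Tab. 5.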
+
+ Confidence and wrongly classified reasoning examples. To inspect the failure modes of LMs, we randomly sample 100 reasoning examples from LLaMA and plot, via thresholding, the distribution of wrongly predicted samples against confidence scores. Consistent with previous observations, as model sizes and the number of ICL examples scale up, LMs generate more confident predictions (Fig. 4 (c, d)). We also observe that larger models can be more error-prone and tend to generate more confidently wrong explanatory samples (Fig. 5).
+
+ Examples of hallucinated explanations for highly confident predictions. Next, we showcase in Tab. 6 that models generate both wrong explanations and incorrect predictions with high confidence. Since most wrong predictions are highly confident, we manually examine the correctness of explanations on Commonsense QA and find that it correlates strongly with predicted-answer accuracy, the opposite of token-level calibration, which tends to worsen as accuracy improves. For additional qualitative examination of LLaMA's performance on Strategy QA and WorldTree, please refer to Table 12.
+
+ # 5 Discussion and Concluding Remarks
+
+ In our investigation of the token-level calibration of in-context learning in contemporary language models, we illustrate the intricate trade-off between ICL performance and calibration. Our findings underscore the importance of circumspection in model deployment, as maximizing ICL performance does not invariably translate to improved calibration in low-shot and reasoning settings. As LMs continue to evolve and gain capabilities, such as context windows long enough to include an entire training set as in-context examples for some downstream tasks, our results can be instructive for users who wish to assess uncertainty through prediction probabilities. Moreover, this work suggests the following future directions:
+
+ Calibration beyond classification regimes. Our findings indicate that in multi-choice or multi-class classification tasks, even though the calibration of answer tokens may deteriorate in high-performance settings, there may be a positive correlation between accuracy and the correctness of explanations in reasoning tasks. This suggests potential avenues for future research in exploring strategies such as the use of hedging words to express uncertainty and examining their relationship with predictive performance.
+
+ Implications in assessing beliefs of LMs. Previous work shows that the expected calibration error decreases monotonically as the number of ICL examples increases when querying LMs for answer probabilities (Kadavath et al., 2022). However, we find that zero-shot performance can be weak for models smaller than 30B, and in low-shot settings calibration errors can sometimes be even worse than in the zero-shot setting. This implies that close examination and careful control of epistemic and aleatoric uncertainty may be needed before drawing conclusions about truthfulness (Liu et al., 2023; Azaria and Mitchell, 2023) in low-shot settings.
+
+ Limitations. We acknowledge the need to expand our evaluation, which is primarily focused on QA and classification tasks, beyond existing open-sourced language models and datasets. Moreover, we did not consider nuances such as inherent disagreement about labels (Baan et al., 2022) or adaptive calibration error measures (Nixon et al., 2019) that may be important in certain use cases. It is also worth noting that situations may arise where multiple labels share the highest predicted probability. In such instances, the definition (Eq. (2)) does not automatically become invalid; instead, we take the first maximal probability. These cases are unlikely in most of our experimental setups, where a substantial margin consistently exists between labels.
+
+ # Acknowledgment
+
+ We thank Yu Bai, David Childers, and Jean-Stanislas Denain for their valuable feedback. HZ is supported by an Eric and Susan Dunn Graduate Fellowship. YY acknowledges support from the joint Simons Foundation-NSF DMS grant #2031899. SK acknowledges support from the Office of Naval Research under award N00014-22-1-2377 and the National Science Foundation Grant under award #IIS 2229881. This material is based upon work supported by the AI Research Institutes Program funded by the National Science Foundation under AI Institute for Societal Decision Making (AISDM), Award No. 2229881. Kempner Institute computing resources enabled this work. This work has been made possible in part by a gift from the Chan Zuckerberg Initiative Foundation to establish the Kempner Institute for the Study of Natural and Artificial Intelligence.
+
+ # References
+
+ Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. 2022. What learning algorithm is in-context learning? investigations with linear models. arXiv preprint arXiv:2211.15661.
+ Anastasios N Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, and Tal Schuster. 2022. Conformal risk control. arXiv preprint arXiv:2208.02814.
+ Amos Azaria and Tom Mitchell. 2023. The internal state of an llm knows when it's lying. arXiv preprint arXiv:2304.13734.
+ Joris Baan, Wilker Aziz, Barbara Plank, and Raquel Fernández. 2022. Stop measuring calibration when humans disagree. arXiv preprint arXiv:2210.16133.
+ Yu Bai, Fan Chen, Huan Wang, Caiming Xiong, and Song Mei. 2023. Transformers as statisticians: Provable in-context learning with in-context algorithm selection. arXiv preprint arXiv:2306.04637.
+ Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, and Daniel Weld. 2021. Does the whole exceed its parts? the effect of ai explanations on complementary team performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-16.
+ Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, et al. 2021. Uncertainty as a form of transparency: Measuring, communicating, and using uncertainty. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 401-413.
+ Mark Braverman, Xinyi Chen, Sham Kakade, Karthik Narasimhan, Cyril Zhang, and Yi Zhang. 2020. Calibration, entropy rates, and memory in language models. In International Conference on Machine Learning, pages 1089-1099. PMLR.
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
+ Zana Buçinca, Maja Barbara Malaya, and Krzysztof Z Gajos. 2021. To trust or to think: cognitive forcing functions can reduce overreliance on ai in ai-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1):1-21.
+ Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natural language inference with natural language explanations. Advances in Neural Information Processing Systems, 31.
+ Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2023. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1-113.
+ Jeremy R Cole, Michael JQ Zhang, Daniel Gillick, Julian Martin Eisenschlos, Bhuwan Dhingra, and Jacob Eisenstein. 2023. Selectively answering ambiguous questions. arXiv preprint arXiv:2305.14613.
+ A Philip Dawid. 1982. The well-calibrated bayesian. Journal of the American Statistical Association, 77(379):605-610.
+ Morris H DeGroot and Stephen E Fienberg. 1983. The comparison and evaluation of forecasters. Journal of the Royal Statistical Society: Series D (The Statistician), 32(1-2):12-22.
+ Shrey Desai and Greg Durrett. 2020. Calibration of pre-trained transformers. arXiv preprint arXiv:2003.07892.
+ Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Alpacafarm: A simulation framework for methods that learn from human feedback.
+
+ Yu Fei, Yifan Hou, Zeming Chen, and Antoine Bosselut. 2023. Mitigating label biases for in-context learning. arXiv preprint arXiv:2305.19148.
+ Adam Fisch, Tal Schuster, Tommi Jaakkola, and Regina Barzilay. 2020. Efficient conformal prediction via cascaded inference with expanded admission. arXiv preprint arXiv:2007.03114.
+ Shivam Garg, Dimitris Tsipras, Percy S Liang, and Gregory Valiant. 2022. What can transformers learn in-context? a case study of simple function classes. Advances in Neural Information Processing Systems, 35:30583-30598.
+ Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346-361.
+ Ana Valeria González, Gagan Bansal, Angela Fan, Yashar Mehdad, Robin Jia, and Srinivasan Iyer. 2021. Do explanations help users detect errors in open-domain qa? an evaluation of spoken vs. visual explanations. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1103-1116.
+ Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In International conference on machine learning, pages 1321-1330. PMLR.
+ Zhixiong Han, Yaru Hao, Li Dong, Yutao Sun, and Furu Wei. 2023. Prototypical calibration for few-shot learning of language models. In The Eleventh International Conference on Learning Representations.
+ Peter A Jansen, Elizabeth Wainwright, Steven Marmorstein, and Clayton T Morrison. 2018. Worldtree: A corpus of explanation graphs for elementary science questions supporting multi-hop inference. arXiv preprint arXiv:1802.03052.
+ Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825.
+ Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. 2021. How can we know when language models know? on the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962-977.
+ Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221.
+
+ Adam Tauman Kalai and Santosh S. Vempala. 2023. Calibrated language models must hallucinate.
+ Amita Kamath, Robin Jia, and Percy Liang. 2020. Selective question answering under domain shift. arXiv preprint arXiv:2006.09462.
+ Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. 2023. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation. arXiv preprint arXiv:2302.09664.
+ Meelis Kull and Peter Flach. 2015. Novel decompositions of proper scoring rules for classification: Score adjustment as precursor to calibration. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2015, Porto, Portugal, September 7-11, 2015, Proceedings, Part I 15, pages 68-85. Springer.
+ Ananya Kumar, Percy S Liang, and Tengyu Ma. 2019. Verified uncertainty calibration. Advances in Neural Information Processing Systems, 32.
+ Philippe Laban, Tobias Schnabel, Paul N Bennett, and Marti A Hearst. 2022. Summac: Re-visiting nlibased models for inconsistency detection in summarization. Transactions of the Association for Computational Linguistics, 10:163-177.
+ Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Teaching models to express their uncertainty in words. arXiv preprint arXiv:2205.14334.
+ Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, and Jacob Andreas. 2023. Cognitive dissonance: Why do language model outputs disagree with internal representations of truthfulness?
+ Sabrina J Mielke, Arthur Szlam, Emily Dinan, and Y-Lan Boureau. 2022. Reducing conversational agents' overconfidence through linguistic calibration. Transactions of the Association for Computational Linguistics, 10:857-872.
+ Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In EMNLP.
+ Jeremy Nixon, Michael W Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. 2019. Measuring calibration in deep learning. In CVPR workshops, volume 2.
+ Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. 2021. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114.
+ Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, et al. 2022. In-context learning and induction heads. arXiv preprint arXiv:2209.11895.
+
+ OpenAI. 2023. Gpt-4 technical report. https://cdn.openai.com/papers/gpt-4.pdf.
+ Alexander Pan, Jun Shern Chan, Andy Zou, Nathaniel Li, Steven Basart, Thomas Woodside, Hanlin Zhang, Scott Emmons, and Dan Hendrycks. 2023. Do the rewards justify the means? measuring trade-offs between rewards and ethical behavior in the machiavelli benchmark. In International Conference on Machine Learning, pages 26837-26867. PMLR.
+ Jane Pan. 2023. What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning. Ph.D. thesis, Princeton University.
+ Suzanne Petryk, Spencer Whitehead, Joseph E Gonzalez, Trevor Darrell, Anna Rohrbach, and Marcus Rohrbach. 2023. Simple token-level confidence improves caption correctness. arXiv preprint arXiv:2305.07021.
+ John Platt et al. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61-74.
+ Allan Raventós, Mansheej Paul, Feng Chen, and Surya Ganguli. 2023. Pretraining task diversity and the emergence of non-bayesian in-context learning for regression. arXiv preprint arXiv:2306.15063.
+ Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255-269.
+ Tal Schuster, Adam Fisch, Jai Gupta, Mostafa Dehghani, Dara Bahri, Vinh Q Tran, Yi Tay, and Donald Metzler. 2022. Confident adaptive language modeling. arXiv preprint arXiv:2207.07061.
+ Tal Schuster, Adam Fisch, Tommi Jaakkola, and Regina Barzilay. 2021. Consistent accelerated inference via confident adaptive transformers. arXiv preprint arXiv:2104.08803.
+ Andy Shih, Dorsa Sadigh, and Stefano Ermon. 2023. Long horizon temperature scaling. arXiv preprint arXiv:2302.03686.
+ Chenglei Si, Navita Goyal, Sherry Tongshuang Wu, Chen Zhao, Shi Feng, Hal Daumé III, and Jordan Boyd-Graber. 2023. Large language models help humans verify truthfulness-except when they are convincingly wrong. arXiv preprint arXiv:2310.12558.
+ Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.
+
+ Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. Commonsenseqa: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937.
+ Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher D Manning. 2023. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. arXiv preprint arXiv:2305.14975.
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
+ Johannes Von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, and Max Vladymyrov. 2023. Transformers learn in-context by gradient descent. In International Conference on Machine Learning, pages 35151-35174. PMLR.
+ Ellen M Voorhees and Dawn M Tice. 2000. Building a question answering test collection. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 200-207.
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903.
+ Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2021. An explanation of in-context learning as implicit bayesian inference. arXiv preprint arXiv:2111.02080.
308
+ Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28.
309
+ Yunfeng Zhang, Q Vera Liao, and Rachel KE Bellamy. 2020. Effect of confidence and explanation on accuracy and trust calibration in ai-assisted decision making. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pages 295-305.
310
+ Theodore Zhao, Mu Wei, J Samuel Preston, and Ho-fung Poon. 2023. Automatic calibration and error correction for large language models via pareto optimal self-supervision. arXiv preprint arXiv:2306.16564.
311
+
312
+ Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In ICML.
313
+ Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena.
314
+ Han Zhou, Xingchen Wan, Lev Proleev, Diana Mincu, Jilin Chen, Katherine Heller, and Subhrajit Roy. 2023a. Batch calibration: Rethinking calibration for in-context learning and prompt engineering. arXiv preprint arXiv:2309.17249.
315
+ Kaitlyn Zhou, Dan Jurafsky, and Tatsunori Hashimoto. 2023b. Navigating the grey area: Expressions of overconfidence and uncertainty in language models. arXiv preprint arXiv:2302.13439.
316
+ Daniel M Ziegler, Nisan Stiannon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593.
317
+
318
+ # A Extended Related Work
+
+ Uncertainty quantification in NLP. Uncertainty quantification in NLP, which ranges from approaches rooted in the Bayesian principle to sophisticated methods tailored for neural networks, aims to enhance the reliability of model predictions. This may involve non-trivial designs, as directly interpreting language model predictions via probabilities (Kadavath et al., 2022) or linguistic expressions (Lin et al., 2022; Mielke et al., 2022; Zhou et al., 2023b) may inadvertently lead to over-reliance on the model's uncertainties (Si et al., 2023), complicating the establishment of trustworthy common ground between humans and models (Buçinca et al., 2021). Notable recent advancements employ model confidence as a critical factor in applications such as dialogue generation (Mielke et al., 2022), cascading prediction (Schuster et al., 2021), open-domain QA (Fisch et al., 2020; Angelopoulos et al., 2022), summarization (Laban et al., 2022), language modeling (Schuster et al., 2022), and image captioning (Petryk et al., 2023).
321
+
322
+ # B Additional Experimental Details
323
+
324
+ We provide prompts we adopt for experiments in Tab.7. Additional reliability plots are shown in Fig. 6. Moreover, we provide extra results that extend those in the main text. Our implementation is open-sourced at https://github.com/hlzhang109/iccl-calibration. The greatest accuracy and ECE values are highlighted in bold and red, respectively. Extremely poor performance due to length truncation is omitted. Model performance and calibration. We present experimental results considering different model sizes for text classification and reasoning in Tables 8 and 9, respectively. With the increase in model sizes, we observed overall improvements in model performance across most datasets. However, the calibration error (ECE) did not decrease immediately: for low-shot settings where $k < 4$ , models tend to have an ECE larger than 0.1. On the other hand, ECE can decrease given more ICL examples ( $k = 8$ ) if context length is adequate. Overall, zero-shot ICL can lead to good calibration results though the predictive performance is substantially weaker. Interestingly, for some benchmarks like SST-2 and OpenBook QA, the ECE of the 30B model even surpassed that of the 7B model. Moreover, the ECE curves of the 7B and 13B models exhibited similar patterns to the 30B results as the number of ICL samples increased, as shown in the main Tab. (1).
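+ As a reference for how ECE is defined, here is a minimal sketch using equal-width confidence bins; the function name and the 10-bin default are our own choices for illustration, not necessarily the paper's implementation:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the weighted average
    of |accuracy - mean confidence| over the bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()       # empirical accuracy in the bin
            conf = confidences[mask].mean()  # mean predicted confidence
            ece += mask.mean() * abs(acc - conf)
    return ece
```

+ A perfectly calibrated model (e.g. 75% confidence with 75% accuracy) scores 0; an overconfident one scores the confidence-accuracy gap, weighted by bin mass.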
+
+ The effect of fine-tuning. We provide the full results for all fine-tuned LLMs in Table 11, complementing Fig. 3. We reach the same conclusion as in the main text: with an increasing number of ICL examples, accuracy generally improves while ECE first increases and then decreases, and miscalibration is widespread; an MoE model exhibits the same accuracy-calibration trade-off; and fine-tuning substantially improves accuracy but hurts calibration by a large margin.
+
+ Results reliability. Furthermore, since prompting is susceptible to various forms of bias and noise (Zhao et al., 2021; Han et al., 2023; Fei et al., 2023; Zhou et al., 2023a), we examine the variance across all experimental repetitions to provide a comprehensive understanding of the outcomes. Table 10 reports detailed variance metrics, affirming the stability and reliability of our findings.
+
+ ![](images/747fa2985c2dc429972df28da6990633c82aa12f1c286dd4dc0af2e9beab3d52.jpg)
+
+ ![](images/fc1fe1a88135082898a219a1b6bb969800b191931cf156052f105b23ee6a14d7.jpg)
+
+ ![](images/0946657dfe39a613bd63a0bfd21fa2b6f8f3d6768eb7dde9491949f4b858ce95.jpg)
+
+ ![](images/1d4cf78cd7980566e94da2fb7ede9443cdaf3dfcda8886212cf9e02ffbc4322d.jpg)
+ Figure 6: Reliability plots and confidence histograms of LLaMA models on 4-shot reasoning tasks. Results of different sizes 7B (left), 13B (middle), and 30B (right) are plotted.
+
+ ![](images/ae2831570079e5848380c5fc48e080905122a67fcce835bf8b057a1e30b19045.jpg)
+
+ ![](images/99c6af6fcc2afa1da69dcbb2653a258f4ea8b9e229fb931e70cf187991d0d3f8.jpg)
+
+ Algorithm 1: Pseudocode for temperature scaling
+ Data: $\mathcal{P}_{\theta}(\mathbf{w})$: original output of the classification model; $\mathcal{D}$: training dataset; $\tau$: temperature parameter; $k$: we use $k$-shot experimental settings, where at test time the ICL prompt consists of $k$ (sample, label) pairs.
+ Result: Adjusted probabilities after temperature scaling.
+ // Training process
+ // 0-shot: $(\mathbf{w}_i, y_i)$ is each training sample and its corresponding label.
+ // $k$-shot: $\mathbf{w}_i = \{x_1, y_1, \dots, x_k, y_k, x_i\}$ uses $k$ prompt pairs.
+ // Fixed prompt: the prompt in $\mathbf{w}_i = \{x_1, y_1, \dots, x_k, y_k, x_i\}$ is shared across all training instances and reused during inference.
+ for each training sample $(\mathbf{w}_i, y_i) \in \mathcal{D}$ do
+ &nbsp;&nbsp;Compute the original output: $z_i = \mathcal{P}_{\theta}(\mathbf{w}_i; \theta)$.
+ &nbsp;&nbsp;Compute the cross-entropy loss on the scaled output: $L_i = \text{CrossEntropy}(z_i / \tau, y_i)$.
+ end
+ Compute the gradient of the loss with respect to the temperature parameter: $\nabla_{\tau}\mathcal{L} = \frac{1}{|\mathcal{D}|}\sum_{i = 1}^{|\mathcal{D}|}\nabla_{\tau}L_i$.
+ Update the temperature parameter using gradient descent: $\tau \gets \tau - \eta \nabla_{\tau}\mathcal{L}$.
+ // Test time
+ for each test sample $\mathbf{x}_j$ do
+ &nbsp;&nbsp;Compute the original output with the prompt: $z_j = \mathcal{P}_{\theta}(\mathbf{x}_j; \theta)$.
+ &nbsp;&nbsp;Compute the adjusted output: $\hat{z}_j = z_j / \tau$.
+ &nbsp;&nbsp;Compute the softmax probabilities: $\hat{p}_j = \text{Softmax}(\hat{z}_j)$.
+ end
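+ The training and test loops above can be sketched in a few lines of numpy, assuming the model's per-class logits have already been extracted; the function names and the analytic gradient of the NLL with respect to $\tau$ are our own illustration, not the paper's code:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(logits, labels, lr=0.05, steps=500):
    """Learn a scalar temperature tau by gradient descent on the NLL of the
    scaled logits z / tau; the model's logits themselves stay frozen."""
    logits = np.asarray(logits, dtype=float)
    labels = np.asarray(labels)
    n = len(labels)
    tau = 1.0
    for _ in range(steps):
        p = softmax(logits / tau)
        # d NLL / d tau = (z_y - sum_j p_j z_j) / tau^2, averaged over samples
        z_y = logits[np.arange(n), labels]
        grad = np.mean((z_y - (p * logits).sum(axis=1)) / tau ** 2)
        tau -= lr * grad
    return tau

def calibrated_probs(logits, tau):
    # test time: divide logits by the learned temperature, then softmax
    return softmax(np.asarray(logits, dtype=float) / tau)
```

+ On overconfident predictions (sharp logits but mixed labels), the fitted $\tau$ exceeds 1 and softens the probabilities; on underconfident ones it drops below 1 and sharpens them. Accuracy is unchanged because dividing by a positive scalar preserves the argmax.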
+
+ <table><tr><td>Dataset</td><td>Prompt</td><td>Label</td></tr><tr><td>SST-2</td><td>Review: it may not be a great piece of filmmaking, but its power comes from its soul&#x27;s - eye view of how well-meaning patronizing masked a social injustice, at least as represented by this case. Sentiment: PositiveReview: smith&#x27;s point is simple and obvious - people&#x27;s homes are extensions of themselves, and particularly eccentric people have particularly eccentric living spaces - but his subjects are charmers .Sentiment:</td><td>Negative, Positive</td></tr><tr><td>CB</td><td>A: No, not really. I spend a lot of time with our income tax, though. especially, this year and last year. Um, I have been married for just a few years, so I&#x27;ve had to really switch around from the EZ form to the, uh, B: Schedule A. A: Right. B: Well, yeah. A: All the deductions and all that. B: Did you notice that when they passed the new simplified tax act, it seemed like it made everything harder? question: when they passed the new simplified tax act it seemed like it made everything harder. true, false, or neither? answer: trueThere was a group of curious onlookers... Marie felt her legs give way beneath her, she sat down on the edge of the pavement, feet in the gutter, doubled-up, sick and winded as if someone had punched her in the stomach. She lifted up her head and looked again. She had watched scenes like this so often in detective films and police series on television that she could hardly believe that this was real life. question: this was real life. true, false, or neither? answer:</td><td>True, False, Neither</td></tr><tr><td>RTE</td><td>The main institutionalised forms of recognition for those who have made a significant contribution in the fields of physics, chemistry, medicine, literature, as well as for those working for peace (and more recently in the area of economics), are the Nobel prizes. question: Nobel Peace Prize candidates have been chosen. True or False? 
answer: FalseEgypt on Thursday strongly criticized Israeli new Foreign Minister Avigdor Lieberman for his remarks that he refused to recognize the peace efforts initiated in 2007 in the U.S. city of Annapolis to restore the peace talks with the Palestinians, reported the state MENA news agency. Lieberman&#x27;s remarks is &quot;regrettable,&quot; Egyptian Foreign Ministry spokesman Hossam Zaki was quoted as saying, adding &quot;his remarks are the first blow to the peace efforts to come from the Israeli new government.&quot; question: Hossam Zaki is the new Foreign Minister of Israel. True or False? answer:</td><td>True, False</td></tr><tr><td>Strategy QA</td><td>Question: Can spiders help eggplant farmers control parasites? Choose the answer from True and False. Answer: The potato tuber moth is a parasite that targets the plant family Solanaceae, including eggplant Selenops radiatus is a spider genus in South Africa that effectively controls the potato tuber moth So, the answer is: TrueQuestion: Is the voice of Genie from Disney&#x27;s Aladdin still alive? Choose the answer from True and False.Answer:</td><td>True, False</td></tr><tr><td>Commonsense QA</td><td>&quot;Question: Dan was a farmer with just one heifer. But that was okay, he only kept her for milk, and he didn&#x27;t think he&#x27;d find good farmland in a place as cold as where? A arizonab farm yardC michiganD german fieldE dairy farmAnswer: Michigan is a state in the us where it precipitates throughout the year and areas, where it precipitates throughout the year, are generally cold. So the farmer thought he&#x27;d not find a good farmland in a place as cold as michigan. Enslaving heifers or other animals for their milk is wrong as they want to live free. All the places in the other options may not be cold. So, the answer is: CQuestion: From where does a snowflake form?A cloudB snow stormC billowD airE snowstormAnswer:&quot;</td><td>A, B, C, D, E</td></tr></table>
+
+ Table 7: Prompts used for text classification and reasoning tasks, with a single training example showcased per task for illustrative purposes. The right column displays corresponding labels. The prompting formats and labels for WorldTree and OpenBookQA are the same as those of the CommonsenseQA dataset.
+
+ <table><tr><td>Metric</td><td>Dataset</td><td>Model Size</td><td>0-shot</td><td>1-shot</td><td>2-shot</td><td>3-shot</td><td>4-shot</td><td>8-shot</td></tr><tr><td rowspan="12">ECE</td><td rowspan="3">AGNews</td><td>7B</td><td>0.067</td><td>0.105</td><td>0.225</td><td>0.158</td><td>0.086</td><td>0.075</td></tr><tr><td>13B</td><td>0.093</td><td>0.084</td><td>0.069</td><td>0.121</td><td>0.103</td><td>0.045</td></tr><tr><td>30B</td><td>0.261</td><td>0.043</td><td>0.049</td><td>0.067</td><td>0.049</td><td>0.047</td></tr><tr><td rowspan="3">CB</td><td>7B</td><td>0.133</td><td>0.218</td><td>0.172</td><td>0.197</td><td>0.202</td><td>0.215</td></tr><tr><td>13B</td><td>0.029</td><td>0.257</td><td>0.282</td><td>0.221</td><td>0.263</td><td>0.216</td></tr><tr><td>30B</td><td>0.069</td><td>0.312</td><td>0.216</td><td>0.217</td><td>0.192</td><td>0.181</td></tr><tr><td rowspan="3">RTE</td><td>7B</td><td>0.068</td><td>0.075</td><td>0.098</td><td>0.112</td><td>0.091</td><td>0.064</td></tr><tr><td>13B</td><td>0.042</td><td>0.104</td><td>0.048</td><td>0.048</td><td>0.049</td><td>0.050</td></tr><tr><td>30B</td><td>0.023</td><td>0.051</td><td>0.060</td><td>0.050</td><td>0.048</td><td>0.058</td></tr><tr><td rowspan="3">SST-2</td><td>7B</td><td>0.038</td><td>0.142</td><td>0.132</td><td>0.121</td><td>0.108</td><td>0.064</td></tr><tr><td>13B</td><td>0.051</td><td>0.134</td><td>0.108</td><td>0.084</td><td>0.073</td><td>0.053</td></tr><tr><td>30B</td><td>0.083</td><td>0.163</td><td>0.139</td><td>0.126</td><td>0.112</td><td>0.080</td></tr><tr><td rowspan="12">ACC</td><td rowspan="3">AGNews</td><td>7B</td><td>0.447</td><td>0.629</td><td>0.563</td><td>0.630</td><td>0.777</td><td>0.833</td></tr><tr><td>13B</td><td>0.490</td><td>0.812</td><td>0.773</td><td>0.720</td><td>0.775</td><td>0.847</td></tr><tr><td>30B</td><td>0.370</td><td>0.830</td><td>0.817</td><td>0.810</td><td>0.821</td><td>0.855</td></tr><tr><td 
rowspan="3">CB</td><td>7B</td><td>0.482</td><td>0.596</td><td>0.675</td><td>0.696</td><td>0.691</td><td>0.729</td></tr><tr><td>13B</td><td>0.554</td><td>0.627</td><td>0.659</td><td>0.691</td><td>0.611</td><td>0.709</td></tr><tr><td>30B</td><td>0.500</td><td>0.696</td><td>0.789</td><td>0.834</td><td>0.814</td><td>0.796</td></tr><tr><td rowspan="3">RTE</td><td>7B</td><td>0.552</td><td>0.668</td><td>0.653</td><td>0.646</td><td>0.653</td><td>0.698</td></tr><tr><td>13B</td><td>0.679</td><td>0.673</td><td>0.708</td><td>0.723</td><td>0.723</td><td>0.746</td></tr><tr><td>30B</td><td>0.672</td><td>0.742</td><td>0.747</td><td>0.738</td><td>0.748</td><td>0.752</td></tr><tr><td rowspan="3">SST-2</td><td>7B</td><td>0.483</td><td>0.799</td><td>0.877</td><td>0.908</td><td>0.917</td><td>0.954</td></tr><tr><td>13B</td><td>0.483</td><td>0.918</td><td>0.943</td><td>0.955</td><td>0.962</td><td>0.969</td></tr><tr><td>30B</td><td>0.607</td><td>0.930</td><td>0.940</td><td>0.961</td><td>0.964</td><td>0.964</td></tr></table>
+
+ Table 8: Accuracy and Calibration of LLaMA model with three sizes across four text classification datasets.
+
+ <table><tr><td>Metric</td><td>Dataset</td><td>Model Size</td><td>0-shot</td><td>1-shot</td><td>2-shot</td><td>3-shot</td><td>4-shot</td><td>8-shot</td></tr><tr><td rowspan="12">ECE</td><td rowspan="3">Commonsense QA</td><td>7B</td><td>0.070</td><td>0.155</td><td>0.237</td><td>0.227</td><td>0.238</td><td>-</td></tr><tr><td>13B</td><td>0.066</td><td>0.161</td><td>0.282</td><td>0.292</td><td>0.310</td><td>-</td></tr><tr><td>30B</td><td>0.048</td><td>0.232</td><td>0.290</td><td>0.253</td><td>0.283</td><td>-</td></tr><tr><td rowspan="3">OpenBook QA</td><td>7B</td><td>0.040</td><td>0.241</td><td>0.270</td><td>0.184</td><td>0.130</td><td>0.121</td></tr><tr><td>13B</td><td>0.031</td><td>0.132</td><td>0.217</td><td>0.209</td><td>0.191</td><td>0.175</td></tr><tr><td>30B</td><td>0.048</td><td>0.232</td><td>0.290</td><td>0.253</td><td>0.283</td><td>-</td></tr><tr><td rowspan="3">Strategy QA</td><td>7B</td><td>0.133</td><td>0.275</td><td>0.206</td><td>0.243</td><td>0.242</td><td>0.227</td></tr><tr><td>13B</td><td>0.051</td><td>0.154</td><td>0.170</td><td>0.192</td><td>0.188</td><td>0.190</td></tr><tr><td>30B</td><td>0.204</td><td>0.154</td><td>0.174</td><td>0.172</td><td>0.161</td><td>0.193</td></tr><tr><td rowspan="3">World Tree</td><td>13B</td><td>0.065</td><td>0.113</td><td>0.226</td><td>0.250</td><td>0.284</td><td>-</td></tr><tr><td>30B</td><td>0.112</td><td>0.211</td><td>0.251</td><td>0.185</td><td>0.206</td><td>-</td></tr><tr><td>7B</td><td>0.074</td><td>0.124</td><td>0.198</td><td>0.179</td><td>0.203</td><td>-</td></tr><tr><td rowspan="12">ACC</td><td rowspan="3">Commonsense QA</td><td>7B</td><td>0.224</td><td>0.292</td><td>0.388</td><td>0.421</td><td>0.406</td><td>-</td></tr><tr><td>13B</td><td>0.320</td><td>0.478</td><td>0.549</td><td>0.574</td><td>0.562</td><td>-</td></tr><tr><td>30B</td><td>0.356</td><td>0.589</td><td>0.608</td><td>0.675</td><td>0.644</td><td>-</td></tr><tr><td rowspan="3">OpenBook 
QA</td><td>7B</td><td>0.308</td><td>0.298</td><td>0.376</td><td>0.417</td><td>0.454</td><td>0.480</td></tr><tr><td>13B</td><td>0.362</td><td>0.454</td><td>0.509</td><td>0.551</td><td>0.580</td><td>0.611</td></tr><tr><td>30B</td><td>0.386</td><td>0.561</td><td>0.604</td><td>0.644</td><td>0.648</td><td>0.662</td></tr><tr><td rowspan="3">Strategy QA</td><td>7B</td><td>0.566</td><td>0.488</td><td>0.554</td><td>0.550</td><td>0.562</td><td>0.575</td></tr><tr><td>13B</td><td>0.554</td><td>0.598</td><td>0.621</td><td>0.595</td><td>0.618</td><td>0.612</td></tr><tr><td>30B</td><td>0.450</td><td>0.619</td><td>0.654</td><td>0.660</td><td>0.672</td><td>0.662</td></tr><tr><td rowspan="3">World Tree</td><td>7B</td><td>0.302</td><td>0.298</td><td>0.326</td><td>0.384</td><td>0.362</td><td>-</td></tr><tr><td>13B</td><td>0.444</td><td>0.437</td><td>0.495</td><td>0.519</td><td>0.492</td><td>-</td></tr><tr><td>30B</td><td>0.534</td><td>0.570</td><td>0.621</td><td>0.680</td><td>0.646</td><td>-</td></tr></table>
+
+ Table 9: Accuracy and Calibration of LLaMA models across three sizes on four reasoning datasets.
+
+ <table><tr><td>Dataset</td><td>Metric</td><td>0-shot</td><td>1-shot</td><td>2-shot</td><td>3-shot</td><td>4-shot</td><td>8-shot</td></tr><tr><td rowspan="2">CB</td><td>ACC</td><td>0.500±0.000</td><td>0.696±0.304</td><td>0.789±0.138</td><td>0.834±0.068</td><td>0.814±0.068</td><td>0.796±0.110</td></tr><tr><td>ECE</td><td>0.143±0.000</td><td>0.409±0.041</td><td>0.216±0.061</td><td>0.217±0.057</td><td>0.376±0.053</td><td>0.359±0.071</td></tr><tr><td rowspan="2">RTE</td><td>ACC</td><td>0.672±0.000</td><td>0.742±0.018</td><td>0.747±0.032</td><td>0.738±0.044</td><td>0.748±0.043</td><td>0.752±0.039</td></tr><tr><td>ECE</td><td>0.023±0.000</td><td>0.051±0.020</td><td>0.060±0.021</td><td>0.050±0.023</td><td>0.048±0.017</td><td>0.058±0.022</td></tr><tr><td rowspan="2">SST-2</td><td>ACC</td><td>0.607±0.000</td><td>0.930±0.025</td><td>0.940±0.066</td><td>0.961±0.017</td><td>0.964±0.012</td><td>0.964±0.011</td></tr><tr><td>ECE</td><td>0.106±0.000</td><td>0.339±0.026</td><td>0.139±0.058</td><td>0.126±0.053</td><td>0.310±0.022</td><td>0.287±0.014</td></tr><tr><td rowspan="2">AGnews</td><td>ACC</td><td>0.370±0.000</td><td>0.830±0.015</td><td>0.817±0.017</td><td>0.810±0.056</td><td>0.821±0.029</td><td>0.855±0.017</td></tr><tr><td>ECE</td><td>0.261±0.000</td><td>0.043±0.009</td><td>0.049±0.016</td><td>0.067±0.029</td><td>0.049±0.017</td><td>0.047±0.018</td></tr><tr><td rowspan="2">OpenBook QA</td><td>ACC</td><td>0.386±0.000</td><td>0.561±0.028</td><td>0.604±0.027</td><td>0.644±0.016</td><td>0.648±0.018</td><td>0.662±0.031</td></tr><tr><td>ECE</td><td>0.036±0.000</td><td>0.231±0.049</td><td>0.255±0.050</td><td>0.207±0.041</td><td>0.206±0.019</td><td>0.191±0.022</td></tr><tr><td rowspan="2">CommonSense 
QA</td><td>ACC</td><td>0.356±0.000</td><td>0.586±0.028</td><td>0.608±0.013</td><td>0.675±0.027</td><td>0.644±0.034</td><td>0.653±0.090</td></tr><tr><td>ECE</td><td>0.048±0.000</td><td>0.232±0.102</td><td>0.290±0.022</td><td>0.253±0.028</td><td>0.283±0.045</td><td>0.289±0.140</td></tr><tr><td rowspan="2">Strategy QA</td><td>ACC</td><td>0.450±0.000</td><td>0.619±0.030</td><td>0.654±0.033</td><td>0.660±0.022</td><td>0.672±0.015</td><td>-</td></tr><tr><td>ECE</td><td>0.204±0.000</td><td>0.154±0.029</td><td>0.174±0.070</td><td>0.172±0.025</td><td>0.161±0.008</td><td>-</td></tr><tr><td rowspan="2">World Tree</td><td>ACC</td><td>0.554±0.000</td><td>0.570±0.056</td><td>0.621±0.109</td><td>0.680±0.072</td><td>0.504±0.074</td><td>-</td></tr><tr><td>ECE</td><td>0.112±0.000</td><td>0.211±0.042</td><td>0.251±0.101</td><td>0.185±0.048</td><td>0.144±0.051</td><td>-</td></tr></table>
+
+ Table 10: The full results (mean and standard deviation) for various experimental configurations extending Table. 1.
+
+ <table><tr><td>Dataset</td><td>Metric</td><td>0-shot</td><td>1-shot</td><td>2-shot</td><td>3-shot</td><td>4-shot</td><td>8-shot</td></tr><tr><td colspan="8">CB</td></tr><tr><td rowspan="2">Alpaca-7B</td><td>ACC</td><td>0.552±0.000</td><td>0.668±0.032</td><td>0.653±0.079</td><td>0.646±0.086</td><td>0.653±0.067</td><td>0.698±0.028</td></tr><tr><td>ECE</td><td>0.016±0.000</td><td>0.119±0.018</td><td>0.123±0.044</td><td>0.122±0.031</td><td>0.115±0.017</td><td>0.127±0.020</td></tr><tr><td rowspan="2">LLama2-Chat-7B</td><td>ACC</td><td>0.375±0.000</td><td>0.566±0.129</td><td>0.643±0.107</td><td>0.670±0.126</td><td>0.677±0.113</td><td>0.677±0.111</td></tr><tr><td>ECE</td><td>0.287±0.000</td><td>0.223±0.078</td><td>0.170±0.062</td><td>0.153±0.054</td><td>0.154±0.054</td><td>0.170±0.054</td></tr><tr><td rowspan="2">LLama2-7B</td><td>ACC</td><td>0.339±0.000</td><td>0.464±0.193</td><td>0.511±0.163</td><td>0.538±0.113</td><td>0.534±0.109</td><td>0.575±0.059</td></tr><tr><td>ECE</td><td>0.125±0.000</td><td>0.222±0.190</td><td>0.174±0.029</td><td>0.206±0.066</td><td>0.226±0.071</td><td>0.222±0.058</td></tr><tr><td rowspan="2">Mistral-7B-v0.1</td><td>ACC</td><td>0.500±0.000</td><td>0.643±0.264</td><td>0.725±0.198</td><td>0.827±0.067</td><td>0.793±0.063</td><td>0.793±0.121</td></tr><tr><td>ECE</td><td>0.063±0.000</td><td>0.330±0.118</td><td>0.228±0.094</td><td>0.244±0.036</td><td>0.193±0.048</td><td>0.144±0.028</td></tr><tr><td rowspan="2">vicuna-7b-v1.5</td><td>ACC</td><td>0.571±0.000</td><td>0.668±0.049</td><td>0.663±0.052</td><td>0.668±0.058</td><td>0.675±0.061</td><td>0.648±0.073</td></tr><tr><td>ECE</td><td>0.051±0.000</td><td>0.176±0.034</td><td>0.172±0.047</td><td>0.169±0.054</td><td>0.170±0.047</td><td>0.181±0.052</td></tr><tr><td colspan="8">AGNews</td></tr><tr><td 
rowspan="2">Alpaca-7B</td><td>ACC</td><td>0.810±0.000</td><td>0.793±0.041</td><td>0.710±0.110</td><td>0.715±0.111</td><td>0.782±0.079</td><td>0.832±0.029</td></tr><tr><td>ECE</td><td>0.043±0.000</td><td>0.123±0.033</td><td>0.190±0.095</td><td>0.167±0.093</td><td>0.112±0.057</td><td>0.065±0.019</td></tr><tr><td rowspan="2">LLama2-Chat-7B</td><td>ACC</td><td>0.793±0.000</td><td>0.809±0.031</td><td>0.823±0.046</td><td>0.829±0.035</td><td>0.829±0.028</td><td>0.843±0.019</td></tr><tr><td>ECE</td><td>0.164±0.000</td><td>0.162±0.030</td><td>0.143±0.039</td><td>0.138±0.033</td><td>0.138±0.024</td><td>0.127±0.013</td></tr><tr><td rowspan="2">LLama2-7B</td><td>ACC</td><td>0.573±0.000</td><td>0.832±0.022</td><td>0.789±0.112</td><td>0.801±0.108</td><td>0.849±0.057</td><td>0.868±0.009</td></tr><tr><td>ECE</td><td>0.102±0.000</td><td>0.037±0.012</td><td>0.074±0.083</td><td>0.078±0.082</td><td>0.052±0.024</td><td>0.053±0.011</td></tr><tr><td rowspan="2">Mistral-7B-v0.1</td><td>ACC</td><td>0.780±0.000</td><td>0.847±0.017</td><td>0.842±0.028</td><td>0.820±0.056</td><td>0.808±0.085</td><td>0.867±0.004</td></tr><tr><td>ECE</td><td>0.193±0.000</td><td>0.059±0.012</td><td>0.044±0.010</td><td>0.052±0.022</td><td>0.077±0.049</td><td>0.043±0.010</td></tr><tr><td rowspan="2">vicuna-7b-v1.5</td><td>ACC</td><td>0.740±0.000</td><td>0.803±0.013</td><td>0.834±0.031</td><td>0.824±0.054</td><td>0.835±0.030</td><td>0.832±0.036</td></tr><tr><td>ECE</td><td>0.063±0.000</td><td>0.139±0.012</td><td>0.108±0.025</td><td>0.116±0.034</td><td>0.114±0.014</td><td>0.109±0.034</td></tr><tr><td colspan="8">RTE</td></tr><tr><td rowspan="2">Alpaca-7B</td><td>ACC</td><td>0.672±0.000</td><td>0.644±0.015</td><td>0.687±0.019</td><td>0.696±0.020</td><td>0.703±0.015</td><td>0.690±0.035</td></tr><tr><td>ECE</td><td>0.175±0.000</td><td>0.270±0.018</td><td>0.212±0.034</td><td>0.197±0.028</td><td>0.184±0.026</td><td>0.193±0.025</td></tr><tr><td 
rowspan="2">LLama2-Chat-7B</td><td>ACC</td><td>0.729±0.000</td><td>0.685±0.042</td><td>0.687±0.048</td><td>0.699±0.040</td><td>0.709±0.034</td><td>0.731±0.033</td></tr><tr><td>ECE</td><td>0.165±0.000</td><td>0.218±0.031</td><td>0.205±0.033</td><td>0.198±0.033</td><td>0.184±0.030</td><td>0.172±0.020</td></tr><tr><td rowspan="2">LLama2-7B</td><td>ACC</td><td>0.682±0.000</td><td>0.684±0.034</td><td>0.698±0.049</td><td>0.676±0.058</td><td>0.689±0.068</td><td>0.685±0.050</td></tr><tr><td>ECE</td><td>0.044±0.000</td><td>0.076±0.021</td><td>0.084±0.029</td><td>0.085±0.034</td><td>0.105±0.031</td><td>0.083±0.032</td></tr><tr><td rowspan="2">Mistral-7B-v0.1</td><td>ACC</td><td>0.686±0.000</td><td>0.731±0.025</td><td>0.756±0.015</td><td>0.768±0.019</td><td>0.776±0.016</td><td>0.773±0.025</td></tr><tr><td>ECE</td><td>0.054±0.000</td><td>0.121±0.047</td><td>0.080±0.042</td><td>0.084±0.033</td><td>0.087±0.025</td><td>0.085±0.035</td></tr><tr><td rowspan="2">vicuna-7b-v1.5</td><td>ACC</td><td>0.610±0.000</td><td>0.731±0.015</td><td>0.756±0.011</td><td>0.762±0.013</td><td>0.765±0.019</td><td>0.770±0.026</td></tr><tr><td>ECE</td><td>0.234±0.000</td><td>0.101±0.021</td><td>0.073±0.028</td><td>0.067±0.016</td><td>0.057±0.015</td><td>0.052±0.015</td></tr><tr><td colspan="8">SST-2</td></tr><tr><td rowspan="2">Alpaca-7B</td><td>ACC</td><td>0.730±0.000</td><td>0.868±0.088</td><td>0.939±0.018</td><td>0.949±0.015</td><td>0.955±0.012</td><td>0.952±0.014</td></tr><tr><td>ECE</td><td>0.139±0.000</td><td>0.068±0.048</td><td>0.025±0.009</td><td>0.021±0.006</td><td>0.020±0.009</td><td>0.026±0.010</td></tr><tr><td rowspan="2">LLama2-Chat-7B</td><td>ACC</td><td>0.867±0.000</td><td>0.951±0.008</td><td>0.942±0.018</td><td>0.953±0.012</td><td>0.952±0.016</td><td>0.952±0.015</td></tr><tr><td>ECE</td><td>0.039±0.000</td><td>0.033±0.006</td><td>0.044±0.015</td><td>0.032±0.012</td><td>0.035±0.014</td><td>0.037±0.014</td></tr><tr><td 
rowspan="2">LLama2-7B</td><td>ACC</td><td>0.530±0.000</td><td>0.754±0.140</td><td>0.829±0.121</td><td>0.874±0.105</td><td>0.904±0.062</td><td>0.925±0.045</td></tr><tr><td>ECE</td><td>0.018±0.000</td><td>0.180±0.058</td><td>0.119±0.076</td><td>0.085±0.072</td><td>0.062±0.027</td><td>0.040±0.012</td></tr><tr><td rowspan="2">Mistral-7B-v0.1</td><td>ACC</td><td>0.563±0.000</td><td>0.958±0.007</td><td>0.941±0.058</td><td>0.956±0.030</td><td>0.961±0.022</td><td>0.969±0.006</td></tr><tr><td>ECE</td><td>0.058±0.000</td><td>0.133±0.029</td><td>0.086±0.033</td><td>0.078±0.032</td><td>0.072±0.032</td><td>0.052±0.020</td></tr><tr><td rowspan="2">vicuna-7b-v1.5</td><td>ACC</td><td>0.893±0.000</td><td>0.946±0.011</td><td>0.956±0.008</td><td>0.961±0.012</td><td>0.964±0.008</td><td>0.968±0.009</td></tr><tr><td>ECE</td><td>0.041±0.000</td><td>0.032±0.009</td><td>0.025±0.008</td><td>0.026±0.006</td><td>0.023±0.005</td><td>0.021±0.005</td></tr></table>
+
+ Table 11: The full results (mean and standard deviation) for various experimental configurations extending Figure 3.
+
+ <table><tr><td>Dataset</td><td>Case</td><td>Reasoning</td><td>ŷ</td><td>p</td></tr><tr><td rowspan="6">Strategy QA</td><td colspan="4">Question: Is the voice of the Genie from Disney&#x27;s Aladdin still alive? Choose the answer from True and False</td></tr><tr><td colspan="4">Options: A) True, B) False</td></tr><tr><td colspan="4">Ground Truth: A</td></tr><tr><td>7B</td><td>The voice of the Genie from Disney&#x27;s Aladdin is still alive.</td><td>B</td><td>0.6249</td></tr><tr><td>13B</td><td>The voice of the Genie from Disney&#x27;s Aladdin is Robin Williams. Robin Williams died on August 11, 2014.</td><td>B</td><td>0.7654</td></tr><tr><td>30B</td><td>Robin Williams was the voice of the Genie in Disney&#x27;s Aladdin. He died on August 11, 2014.</td><td>B</td><td>0.8471</td></tr><tr><td rowspan="6">World Tree</td><td colspan="4">Question: A bird eats a brightly-colored insect that tastes bad.
+ The bird avoids preying on this type of insect again. This example is best described as ?</td></tr><tr><td colspan="4">Options: A) a physical characteristic, B) a learned behavior, C) an inherited behavior, D) an adaptation</td></tr><tr><td colspan="4">Ground Truth: B</td></tr><tr><td>7B</td><td>ing the juice from fruit is a kind of physical change. A physi- cal change is a change in the physical state of a substance. A physical change is not a change in the chemical composition of a substance. A physical change is not a change in the chemical composition of a substance.</td><td>D</td><td>0.5249</td></tr><tr><td>13B</td><td>An adaptation is a change in an organism that makes it better suited to its environment.</td><td>D</td><td>0.7651</td></tr><tr><td>30B</td><td>Adaptation means a change in the body of an organism that helps it survive in its environment. Adaptation is a kind of behavior.</td><td>D</td><td>0.8764</td></tr></table>
+
+ Table 12: Qualitative Results of LLaMA on Strategy QA and WorldTree
2024/A Study on the Calibration of In-context Learning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:70e5eff27f18b6bee8079d95a62a136b4d327536fda765e0c285927da8c17241
+ size 1961814
2024/A Study on the Calibration of In-context Learning/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Survey of Confidence Estimation and Calibration in Large Language Models/2f5c88ab-d105-40df-8549-a83aaa8adb25_content_list.json ADDED
The diff for this file is too large to render. See raw diff