Chelsea707 committed on
Commit
5047667
·
verified ·
1 Parent(s): 53c2549

Add Batch 2a7a78cf-9fb5-42b0-bc9a-22fced734deb data

This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50)
  1. .gitattributes +64 -0
  2. 2024/1+1_2_ Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators_/25427acc-0095-4e20-ae7a-6d156491d531_content_list.json +0 -0
  3. 2024/1+1_2_ Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators_/25427acc-0095-4e20-ae7a-6d156491d531_model.json +0 -0
  4. 2024/1+1_2_ Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators_/25427acc-0095-4e20-ae7a-6d156491d531_origin.pdf +3 -0
  5. 2024/1+1_2_ Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators_/full.md +500 -0
  6. 2024/1+1_2_ Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators_/images.zip +3 -0
  7. 2024/1+1_2_ Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators_/layout.json +0 -0
  8. 2024/A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution/bafb35f6-8fe7-40f9-8743-f4d30539c3ef_content_list.json +1948 -0
  9. 2024/A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution/bafb35f6-8fe7-40f9-8743-f4d30539c3ef_model.json +0 -0
  10. 2024/A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution/bafb35f6-8fe7-40f9-8743-f4d30539c3ef_origin.pdf +3 -0
  11. 2024/A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution/full.md +378 -0
  12. 2024/A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution/images.zip +3 -0
  13. 2024/A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution/layout.json +0 -0
  14. 2024/A Closer Look at Multidimensional Online Political Incivility/8e194152-6624-404c-bf7d-b3df6189c956_content_list.json +0 -0
  15. 2024/A Closer Look at Multidimensional Online Political Incivility/8e194152-6624-404c-bf7d-b3df6189c956_model.json +0 -0
  16. 2024/A Closer Look at Multidimensional Online Political Incivility/8e194152-6624-404c-bf7d-b3df6189c956_origin.pdf +3 -0
  17. 2024/A Closer Look at Multidimensional Online Political Incivility/full.md +387 -0
  18. 2024/A Closer Look at Multidimensional Online Political Incivility/images.zip +3 -0
  19. 2024/A Closer Look at Multidimensional Online Political Incivility/layout.json +0 -0
  20. 2024/A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives/979594f1-eb73-4d54-8635-879feb871523_content_list.json +1295 -0
  21. 2024/A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives/979594f1-eb73-4d54-8635-879feb871523_model.json +0 -0
  22. 2024/A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives/979594f1-eb73-4d54-8635-879feb871523_origin.pdf +3 -0
  23. 2024/A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives/full.md +236 -0
  24. 2024/A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives/images.zip +3 -0
  25. 2024/A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives/layout.json +0 -0
  26. 2024/A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery/68186560-afa6-46c7-bd58-68f345d6f141_content_list.json +0 -0
  27. 2024/A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery/68186560-afa6-46c7-bd58-68f345d6f141_model.json +0 -0
  28. 2024/A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery/68186560-afa6-46c7-bd58-68f345d6f141_origin.pdf +3 -0
  29. 2024/A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery/full.md +0 -0
  30. 2024/A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery/images.zip +3 -0
  31. 2024/A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery/layout.json +0 -0
  32. 2024/A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition/471f11c4-935b-4778-a130-ca7d58097a15_content_list.json +2165 -0
  33. 2024/A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition/471f11c4-935b-4778-a130-ca7d58097a15_model.json +0 -0
  34. 2024/A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition/471f11c4-935b-4778-a130-ca7d58097a15_origin.pdf +3 -0
  35. 2024/A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition/full.md +420 -0
  36. 2024/A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition/images.zip +3 -0
  37. 2024/A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition/layout.json +0 -0
  38. 2024/A Generic Method for Fine-grained Category Discovery in Natural Language Texts/4d06db13-9ceb-41b6-abd3-fe6498c25825_content_list.json +0 -0
  39. 2024/A Generic Method for Fine-grained Category Discovery in Natural Language Texts/4d06db13-9ceb-41b6-abd3-fe6498c25825_model.json +0 -0
  40. 2024/A Generic Method for Fine-grained Category Discovery in Natural Language Texts/4d06db13-9ceb-41b6-abd3-fe6498c25825_origin.pdf +3 -0
  41. 2024/A Generic Method for Fine-grained Category Discovery in Natural Language Texts/full.md +698 -0
  42. 2024/A Generic Method for Fine-grained Category Discovery in Natural Language Texts/images.zip +3 -0
  43. 2024/A Generic Method for Fine-grained Category Discovery in Natural Language Texts/layout.json +0 -0
  44. 2024/A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models/63a13339-dbf4-428a-9241-6178656f6af8_content_list.json +1781 -0
  45. 2024/A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models/63a13339-dbf4-428a-9241-6178656f6af8_model.json +0 -0
  46. 2024/A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models/63a13339-dbf4-428a-9241-6178656f6af8_origin.pdf +3 -0
  47. 2024/A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models/full.md +322 -0
  48. 2024/A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models/images.zip +3 -0
  49. 2024/A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models/layout.json +0 -0
  50. 2024/A Morphology-Based Investigation of Positional Encodings/b5b51b30-492b-4f55-95ec-1b80f07d62bc_content_list.json +1613 -0
.gitattributes CHANGED
@@ -4017,3 +4017,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
4017
  2024/“Any[[:space:]]Other[[:space:]]Thoughts,[[:space:]]Hedgehog_”[[:space:]]Linking[[:space:]]Deliberation[[:space:]]Chains[[:space:]]in[[:space:]]Collaborative[[:space:]]Dialogues/90f1caa3-6e84-4e0f-9155-e4da23cf5baa_origin.pdf filter=lfs diff=lfs merge=lfs -text
4018
  2024/“Knowing[[:space:]]When[[:space:]]You[[:space:]]Don’t[[:space:]]Know”_[[:space:]]A[[:space:]]Multilingual[[:space:]]Relevance[[:space:]]Assessment[[:space:]]Dataset[[:space:]]for[[:space:]]Robust[[:space:]]Retrieval-Augmented[[:space:]]Generation/b4945c16-3fb9-4cdd-8375-1c50358f077b_origin.pdf filter=lfs diff=lfs merge=lfs -text
4019
  2024/“Seeing[[:space:]]the[[:space:]]Big[[:space:]]through[[:space:]]the[[:space:]]Small”_[[:space:]]Can[[:space:]]LLMs[[:space:]]Approximate[[:space:]]Human[[:space:]]Judgment[[:space:]]Distributions[[:space:]]on[[:space:]]NLI[[:space:]]from[[:space:]]a[[:space:]]Few[[:space:]]Explanations_/001d91be-7ce3-4a39-9cdf-32ba707390b7_origin.pdf filter=lfs diff=lfs merge=lfs -text
4020
+ 2024/1+1_2_[[:space:]]Can[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]Serve[[:space:]]as[[:space:]]Cross-Lingual[[:space:]]Knowledge[[:space:]]Aggregators_/25427acc-0095-4e20-ae7a-6d156491d531_origin.pdf filter=lfs diff=lfs merge=lfs -text
4021
+ 2024/A[[:space:]]Bayesian[[:space:]]Approach[[:space:]]to[[:space:]]Harnessing[[:space:]]the[[:space:]]Power[[:space:]]of[[:space:]]LLMs[[:space:]]in[[:space:]]Authorship[[:space:]]Attribution/bafb35f6-8fe7-40f9-8743-f4d30539c3ef_origin.pdf filter=lfs diff=lfs merge=lfs -text
4022
+ 2024/A[[:space:]]Closer[[:space:]]Look[[:space:]]at[[:space:]]Multidimensional[[:space:]]Online[[:space:]]Political[[:space:]]Incivility/8e194152-6624-404c-bf7d-b3df6189c956_origin.pdf filter=lfs diff=lfs merge=lfs -text
4023
+ 2024/A[[:space:]]Comparison[[:space:]]of[[:space:]]Language[[:space:]]Modeling[[:space:]]and[[:space:]]Translation[[:space:]]as[[:space:]]Multilingual[[:space:]]Pretraining[[:space:]]Objectives/979594f1-eb73-4d54-8635-879feb871523_origin.pdf filter=lfs diff=lfs merge=lfs -text
4024
+ 2024/A[[:space:]]Comprehensive[[:space:]]Survey[[:space:]]of[[:space:]]Scientific[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]and[[:space:]]Their[[:space:]]Applications[[:space:]]in[[:space:]]Scientific[[:space:]]Discovery/68186560-afa6-46c7-bd58-68f345d6f141_origin.pdf filter=lfs diff=lfs merge=lfs -text
4025
+ 2024/A[[:space:]]Fast[[:space:]]and[[:space:]]Sound[[:space:]]Tagging[[:space:]]Method[[:space:]]for[[:space:]]Discontinuous[[:space:]]Named-Entity[[:space:]]Recognition/471f11c4-935b-4778-a130-ca7d58097a15_origin.pdf filter=lfs diff=lfs merge=lfs -text
4026
+ 2024/A[[:space:]]Generic[[:space:]]Method[[:space:]]for[[:space:]]Fine-grained[[:space:]]Category[[:space:]]Discovery[[:space:]]in[[:space:]]Natural[[:space:]]Language[[:space:]]Texts/4d06db13-9ceb-41b6-abd3-fe6498c25825_origin.pdf filter=lfs diff=lfs merge=lfs -text
4027
+ 2024/A[[:space:]]Learning[[:space:]]Rate[[:space:]]Path[[:space:]]Switching[[:space:]]Training[[:space:]]Paradigm[[:space:]]for[[:space:]]Version[[:space:]]Updates[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models/63a13339-dbf4-428a-9241-6178656f6af8_origin.pdf filter=lfs diff=lfs merge=lfs -text
4028
+ 2024/A[[:space:]]Morphology-Based[[:space:]]Investigation[[:space:]]of[[:space:]]Positional[[:space:]]Encodings/b5b51b30-492b-4f55-95ec-1b80f07d62bc_origin.pdf filter=lfs diff=lfs merge=lfs -text
4029
+ 2024/A[[:space:]]Multi-Perspective[[:space:]]Analysis[[:space:]]of[[:space:]]Memorization[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/673d3f2e-b5b0-4663-9dcd-9991690a5d7d_origin.pdf filter=lfs diff=lfs merge=lfs -text
4030
+ 2024/A[[:space:]]New[[:space:]]Pipeline[[:space:]]for[[:space:]]Knowledge[[:space:]]Graph[[:space:]]Reasoning[[:space:]]Enhanced[[:space:]]by[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]Without[[:space:]]Fine-Tuning/83235cad-af76-4b9a-8e64-7b30a0d9728d_origin.pdf filter=lfs diff=lfs merge=lfs -text
4031
+ 2024/A[[:space:]]Peek[[:space:]]into[[:space:]]Token[[:space:]]Bias_[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]Are[[:space:]]Not[[:space:]]Yet[[:space:]]Genuine[[:space:]]Reasoners/6c725ad6-005b-4cc6-b776-3a26e9149f5e_origin.pdf filter=lfs diff=lfs merge=lfs -text
4032
+ 2024/A[[:space:]]Probability–Quality[[:space:]]Trade-off[[:space:]]in[[:space:]]Aligned[[:space:]]Language[[:space:]]Models[[:space:]]and[[:space:]]its[[:space:]]Relation[[:space:]]to[[:space:]]Sampling[[:space:]]Adaptors/82a1dd27-5775-458e-93f7-2b4ac436868c_origin.pdf filter=lfs diff=lfs merge=lfs -text
4033
+ 2024/A[[:space:]]SMART[[:space:]]Mnemonic[[:space:]]Sounds[[:space:]]like[[:space:]]“Glue[[:space:]]Tonic”_[[:space:]]Mixing[[:space:]]LLMs[[:space:]]with[[:space:]]Student[[:space:]]Feedback[[:space:]]to[[:space:]]Make[[:space:]]Mnemonic[[:space:]]Learning[[:space:]]Stick/ae1e0df2-c630-40b6-8083-26f1c142f8a2_origin.pdf filter=lfs diff=lfs merge=lfs -text
4034
+ 2024/A[[:space:]]Simple[[:space:]]LLM[[:space:]]Framework[[:space:]]for[[:space:]]Long-Range[[:space:]]Video[[:space:]]Question-Answering/400b94a5-0d78-4f2a-ad2b-46e052f1541f_origin.pdf filter=lfs diff=lfs merge=lfs -text
4035
+ 2024/A[[:space:]]Simple[[:space:]]and[[:space:]]Effective[[:space:]]L_2[[:space:]]Norm-Based[[:space:]]Strategy[[:space:]]for[[:space:]]KV[[:space:]]Cache[[:space:]]Compression/cb5bf822-b03b-4c9b-86bd-41ae869b2005_origin.pdf filter=lfs diff=lfs merge=lfs -text
4036
+ 2024/A[[:space:]]Simple[[:space:]]yet[[:space:]]Effective[[:space:]]Training-free[[:space:]]Prompt-free[[:space:]]Approach[[:space:]]to[[:space:]]Chinese[[:space:]]Spelling[[:space:]]Correction[[:space:]]Based[[:space:]]on[[:space:]]Large[[:space:]]Language[[:space:]]Models/130f98a2-4aa7-49e7-85b9-28482f6959f5_origin.pdf filter=lfs diff=lfs merge=lfs -text
4037
+ 2024/A[[:space:]]Study[[:space:]]of[[:space:]]Nationality[[:space:]]Bias[[:space:]]in[[:space:]]Names[[:space:]]and[[:space:]]Perplexity[[:space:]]using[[:space:]]Off-the-Shelf[[:space:]]Affect-related[[:space:]]Tweet[[:space:]]Classifiers/a860a22b-578d-49bd-8de1-e9db804144de_origin.pdf filter=lfs diff=lfs merge=lfs -text
4038
+ 2024/A[[:space:]]Survey[[:space:]]of[[:space:]]AMR[[:space:]]Applications/75874d42-c10b-4938-9be8-2733749c89aa_origin.pdf filter=lfs diff=lfs merge=lfs -text
4039
+ 2024/A[[:space:]]Survey[[:space:]]of[[:space:]]Ontology[[:space:]]Expansion[[:space:]]for[[:space:]]Conversational[[:space:]]Understanding/a596bbb5-9da6-4f3e-be65-3e1f84ace578_origin.pdf filter=lfs diff=lfs merge=lfs -text
4040
+ 2024/A[[:space:]]Survey[[:space:]]on[[:space:]]In-context[[:space:]]Learning/7fbe3db6-6bc9-4edb-b67f-a952cbcabaeb_origin.pdf filter=lfs diff=lfs merge=lfs -text
4041
+ 2024/A[[:space:]]Systematic[[:space:]]Analysis[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]as[[:space:]]Soft[[:space:]]Reasoners_[[:space:]]The[[:space:]]Case[[:space:]]of[[:space:]]Syllogistic[[:space:]]Inferences/ea32355a-78f1-46a5-8ad1-92971003c70d_origin.pdf filter=lfs diff=lfs merge=lfs -text
4042
+ 2024/A[[:space:]]Systematic[[:space:]]Survey[[:space:]]and[[:space:]]Critical[[:space:]]Review[[:space:]]on[[:space:]]Evaluating[[:space:]]Large[[:space:]]Language[[:space:]]Models_[[:space:]]Challenges,[[:space:]]Limitations,[[:space:]]and[[:space:]]Recommendations/f6b6b596-2c3c-4fc0-ad28-c8c28df11119_origin.pdf filter=lfs diff=lfs merge=lfs -text
4043
+ 2024/A[[:space:]]Thorough[[:space:]]Examination[[:space:]]of[[:space:]]Decoding[[:space:]]Methods[[:space:]]in[[:space:]]the[[:space:]]Era[[:space:]]of[[:space:]]LLMs/08e9ce3f-829e-4673-bc82-de6d9b603107_origin.pdf filter=lfs diff=lfs merge=lfs -text
4044
+ 2024/A[[:space:]]Two-Step[[:space:]]Approach[[:space:]]for[[:space:]]Data-Efficient[[:space:]]French[[:space:]]Pronunciation[[:space:]]Learning/a41811dd-18c6-4663-badb-3eb4ca20d9d1_origin.pdf filter=lfs diff=lfs merge=lfs -text
4045
+ 2024/A[[:space:]]Usage-centric[[:space:]]Take[[:space:]]on[[:space:]]Intent[[:space:]]Understanding[[:space:]]in[[:space:]]E-Commerce/29f09a40-f559-4178-b241-0187944a140f_origin.pdf filter=lfs diff=lfs merge=lfs -text
4046
+ 2024/A[[:space:]]User-Centric[[:space:]]Multi-Intent[[:space:]]Benchmark[[:space:]]for[[:space:]]Evaluating[[:space:]]Large[[:space:]]Language[[:space:]]Models/0f990a8c-d45a-42eb-a216-1c3001bdc2cf_origin.pdf filter=lfs diff=lfs merge=lfs -text
4047
+ 2024/A[[:space:]]linguistically-motivated[[:space:]]evaluation[[:space:]]methodology[[:space:]]for[[:space:]]unraveling[[:space:]]model’s[[:space:]]abilities[[:space:]]in[[:space:]]reading[[:space:]]comprehension[[:space:]]tasks/9f7bc464-d8bc-4299-87a1-336f825853ed_origin.pdf filter=lfs diff=lfs merge=lfs -text
4048
+ 2024/ABLE_[[:space:]]Personalized[[:space:]]Disability[[:space:]]Support[[:space:]]with[[:space:]]Politeness[[:space:]]and[[:space:]]Empathy[[:space:]]Integration/f96ed249-8546-48a4-b23c-b67cd31bf9fc_origin.pdf filter=lfs diff=lfs merge=lfs -text
4049
+ 2024/ABSEval_[[:space:]]An[[:space:]]Agent-based[[:space:]]Framework[[:space:]]for[[:space:]]Script[[:space:]]Evaluation/5ede6651-5e3a-4a8f-94d6-a388c5a623ea_origin.pdf filter=lfs diff=lfs merge=lfs -text
4050
+ 2024/ACE_[[:space:]]A[[:space:]]LLM-based[[:space:]]Negotiation[[:space:]]Coaching[[:space:]]System/009d36b4-85b3-47cc-8be1-d64ca1120806_origin.pdf filter=lfs diff=lfs merge=lfs -text
4051
+ 2024/ADELIE_[[:space:]]Aligning[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]on[[:space:]]Information[[:space:]]Extraction/835a36c8-f6ca-4d5c-9eff-e64f9d4a3799_origin.pdf filter=lfs diff=lfs merge=lfs -text
4052
+ 2024/AGRaME_[[:space:]]Any-Granularity[[:space:]]Ranking[[:space:]]with[[:space:]]Multi-Vector[[:space:]]Embeddings/1e520eee-ccb9-4414-bc5f-0d638bdc955f_origin.pdf filter=lfs diff=lfs merge=lfs -text
4053
+ 2024/AKEW_[[:space:]]Assessing[[:space:]]Knowledge[[:space:]]Editing[[:space:]]in[[:space:]]the[[:space:]]Wild/8035c215-9efd-43cb-a57b-a5e8c8891738_origin.pdf filter=lfs diff=lfs merge=lfs -text
4054
+ 2024/ALVIN_[[:space:]]Active[[:space:]]Learning[[:space:]]Via[[:space:]]INterpolation/d303ddf5-2930-42ae-9c2f-f393f6f98c15_origin.pdf filter=lfs diff=lfs merge=lfs -text
4055
+ 2024/AMPO_[[:space:]]Automatic[[:space:]]Multi-Branched[[:space:]]Prompt[[:space:]]Optimization/6eab62c0-43e3-432d-a367-2ca88e0b0164_origin.pdf filter=lfs diff=lfs merge=lfs -text
4056
+ 2024/AMR-Evol_[[:space:]]Adaptive[[:space:]]Modular[[:space:]]Response[[:space:]]Evolution[[:space:]]Elicits[[:space:]]Better[[:space:]]Knowledge[[:space:]]Distillation[[:space:]]for[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]in[[:space:]]Code[[:space:]]Generation/9812783a-5fff-4a91-beb2-49682d6305c8_origin.pdf filter=lfs diff=lfs merge=lfs -text
4057
+ 2024/APPLS_[[:space:]]Evaluating[[:space:]]Evaluation[[:space:]]Metrics[[:space:]]for[[:space:]]Plain[[:space:]]Language[[:space:]]Summarization/c897545a-4395-49a9-8c2b-380fba0b0585_origin.pdf filter=lfs diff=lfs merge=lfs -text
4058
+ 2024/ARES_[[:space:]]Alternating[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]and[[:space:]]Supervised[[:space:]]Fine-Tuning[[:space:]]for[[:space:]]Enhanced[[:space:]]Multi-Modal[[:space:]]Chain-of-Thought[[:space:]]Reasoning[[:space:]]Through[[:space:]]Diverse[[:space:]]AI[[:space:]]Feedback/d3614a2a-4d15-4a43-be13-bf5a0127dfd6_origin.pdf filter=lfs diff=lfs merge=lfs -text
4059
+ 2024/ARM_[[:space:]]An[[:space:]]Alignment-and-Replacement[[:space:]]Module[[:space:]]for[[:space:]]Chinese[[:space:]]Spelling[[:space:]]Check[[:space:]]Based[[:space:]]on[[:space:]]LLMs/8c7c0801-67db-482c-a55c-5a980b77db97_origin.pdf filter=lfs diff=lfs merge=lfs -text
4060
+ 2024/ASETF_[[:space:]]A[[:space:]]Novel[[:space:]]Method[[:space:]]for[[:space:]]Jailbreak[[:space:]]Attack[[:space:]]on[[:space:]]LLMs[[:space:]]through[[:space:]]Translate[[:space:]]Suffix[[:space:]]Embeddings/4aeaa075-5535-4c35-8b0d-eb04678defd2_origin.pdf filter=lfs diff=lfs merge=lfs -text
4061
+ 2024/ASL[[:space:]]STEM[[:space:]]Wiki_[[:space:]]Dataset[[:space:]]and[[:space:]]Benchmark[[:space:]]for[[:space:]]Interpreting[[:space:]]STEM[[:space:]]Articles/751dc685-f0e0-417f-b3ae-95409a1389f0_origin.pdf filter=lfs diff=lfs merge=lfs -text
4062
+ 2024/ATAP_[[:space:]]Automatic[[:space:]]Template-Augmented[[:space:]]Commonsense[[:space:]]Knowledge[[:space:]]Graph[[:space:]]Completion[[:space:]]via[[:space:]]Pre-Trained[[:space:]]Language[[:space:]]Models/97a1c482-84ec-47bb-97d1-1a8b983e28fa_origin.pdf filter=lfs diff=lfs merge=lfs -text
4063
+ 2024/ATM_[[:space:]]Adversarial[[:space:]]Tuning[[:space:]]Multi-agent[[:space:]]System[[:space:]]Makes[[:space:]]a[[:space:]]Robust[[:space:]]Retrieval-Augmented[[:space:]]Generator/9bea4328-e0c9-4c5a-820a-5b9894e66024_origin.pdf filter=lfs diff=lfs merge=lfs -text
4064
+ 2024/Academics[[:space:]]Can[[:space:]]Contribute[[:space:]]to[[:space:]]Domain-Specialized[[:space:]]Language[[:space:]]Models/8f961a0e-5519-4e77-ad99-b5d1e1517080_origin.pdf filter=lfs diff=lfs merge=lfs -text
4065
+ 2024/Accurate[[:space:]]and[[:space:]]Data-Efficient[[:space:]]Toxicity[[:space:]]Prediction[[:space:]]when[[:space:]]Annotators[[:space:]]Disagree/4149d901-7794-481a-b333-412366837bec_origin.pdf filter=lfs diff=lfs merge=lfs -text
4066
+ 2024/ActPlan-1K_[[:space:]]Benchmarking[[:space:]]the[[:space:]]Procedural[[:space:]]Planning[[:space:]]Ability[[:space:]]of[[:space:]]Visual[[:space:]]Language[[:space:]]Models[[:space:]]in[[:space:]]Household[[:space:]]Activities/acc137ca-6ab7-4639-a715-5d686e687adf_origin.pdf filter=lfs diff=lfs merge=lfs -text
4067
+ 2024/AdaSwitch_[[:space:]]Adaptive[[:space:]]Switching[[:space:]]between[[:space:]]Small[[:space:]]and[[:space:]]Large[[:space:]]Agents[[:space:]]for[[:space:]]Effective[[:space:]]Cloud-Local[[:space:]]Collaborative[[:space:]]Learning/f00acf6d-5b02-456a-98a0-a1d4305c3e5e_origin.pdf filter=lfs diff=lfs merge=lfs -text
4068
+ 2024/AdaZeta_[[:space:]]Adaptive[[:space:]]Zeroth-Order[[:space:]]Tensor-Train[[:space:]]Adaption[[:space:]]for[[:space:]]Memory-Efficient[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]Fine-Tuning/890d4f58-30da-4f9b-b3be-dfadf8f4e730_origin.pdf filter=lfs diff=lfs merge=lfs -text
4069
+ 2024/Adaptable[[:space:]]Moral[[:space:]]Stances[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]on[[:space:]]Sexist[[:space:]]Content_[[:space:]]Implications[[:space:]]for[[:space:]]Society[[:space:]]and[[:space:]]Gender[[:space:]]Discourse/97973e94-4c85-4132-b402-3db35cbdecd8_origin.pdf filter=lfs diff=lfs merge=lfs -text
4070
+ 2024/Adaptation[[:space:]]Odyssey[[:space:]]in[[:space:]]LLMs_[[:space:]]Why[[:space:]]Does[[:space:]]Additional[[:space:]]Pretraining[[:space:]]Sometimes[[:space:]]Fail[[:space:]]to[[:space:]]Improve_/94d5c99f-8263-41f9-a6ae-db3aa90998a7_origin.pdf filter=lfs diff=lfs merge=lfs -text
4071
+ 2024/Adapters[[:space:]]Mixup_[[:space:]]Mixing[[:space:]]Parameter-Efficient[[:space:]]Adapters[[:space:]]to[[:space:]]Enhance[[:space:]]the[[:space:]]Adversarial[[:space:]]Robustness[[:space:]]of[[:space:]]Fine-tuned[[:space:]]Pre-trained[[:space:]]Text[[:space:]]Classifiers/1dd5c121-c89e-4ba6-bace-1063c29b9785_origin.pdf filter=lfs diff=lfs merge=lfs -text
4072
+ 2024/Adaption-of-Thought_[[:space:]]Learning[[:space:]]Question[[:space:]]Difficulty[[:space:]]Improves[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]for[[:space:]]Reasoning/ead39ecf-cb9d-48b4-973e-9fb3dfcc9e78_origin.pdf filter=lfs diff=lfs merge=lfs -text
4073
+ 2024/Adaptive[[:space:]]Axes_[[:space:]]A[[:space:]]Pipeline[[:space:]]for[[:space:]]In-domain[[:space:]]Social[[:space:]]Stereotype[[:space:]]Analysis/c6f0453e-9054-4407-941b-63eb91aedb28_origin.pdf filter=lfs diff=lfs merge=lfs -text
4074
+ 2024/Adaptive[[:space:]]Immune-based[[:space:]]Sound-Shape[[:space:]]Code[[:space:]]Substitution[[:space:]]for[[:space:]]Adversarial[[:space:]]Chinese[[:space:]]Text[[:space:]]Attacks/5adaf3eb-b977-443b-be92-a5550c9f39c3_origin.pdf filter=lfs diff=lfs merge=lfs -text
4075
+ 2024/Adaptive[[:space:]]Query[[:space:]]Rewriting_[[:space:]]Aligning[[:space:]]Rewriters[[:space:]]through[[:space:]]Marginal[[:space:]]Probability[[:space:]]of[[:space:]]Conversational[[:space:]]Answers/a9adfebc-488a-4324-a3ff-4ea3baabd242_origin.pdf filter=lfs diff=lfs merge=lfs -text
4076
+ 2024/Adaptive[[:space:]]Question[[:space:]]Answering_[[:space:]]Enhancing[[:space:]]Language[[:space:]]Model[[:space:]]Proficiency[[:space:]]for[[:space:]]Addressing[[:space:]]Knowledge[[:space:]]Conflicts[[:space:]]with[[:space:]]Source[[:space:]]Citations/c56dd859-4a34-41d0-9c33-fa806022b157_origin.pdf filter=lfs diff=lfs merge=lfs -text
4077
+ 2024/Advancing[[:space:]]Adversarial[[:space:]]Suffix[[:space:]]Transfer[[:space:]]Learning[[:space:]]on[[:space:]]Aligned[[:space:]]Large[[:space:]]Language[[:space:]]Models/dd2f273b-57a2-49bb-8513-a55c8e7cdf43_origin.pdf filter=lfs diff=lfs merge=lfs -text
4078
+ 2024/Advancing[[:space:]]Event[[:space:]]Causality[[:space:]]Identification[[:space:]]via[[:space:]]Heuristic[[:space:]]Semantic[[:space:]]Dependency[[:space:]]Inquiry[[:space:]]Network/c6452e4b-0e3d-4f95-b344-12485b5bd379_origin.pdf filter=lfs diff=lfs merge=lfs -text
4079
+ 2024/Advancing[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Attribution[[:space:]]through[[:space:]]Self-Improving/2f59697a-4ba5-4abc-91be-a47ba0518f28_origin.pdf filter=lfs diff=lfs merge=lfs -text
4080
+ 2024/___YesBut____[[:space:]]A[[:space:]]High-Quality[[:space:]]Annotated[[:space:]]Multimodal[[:space:]]Dataset[[:space:]]for[[:space:]]evaluating[[:space:]]Satire[[:space:]]Comprehension[[:space:]]capability[[:space:]]of[[:space:]]Vision-Language[[:space:]]Models/ee54f885-2e56-4e92-9023-5b7edddddd66_origin.pdf filter=lfs diff=lfs merge=lfs -text
4081
+ 2024/“Vorbești[[:space:]]Românește_”[[:space:]]A[[:space:]]Recipe[[:space:]]to[[:space:]]Train[[:space:]]Powerful[[:space:]]Romanian[[:space:]]LLMs[[:space:]]with[[:space:]]English[[:space:]]Instructions/abc04922-637f-4575-9d8d-72d6a32c75e1_origin.pdf filter=lfs diff=lfs merge=lfs -text
4082
+ 2024/“What[[:space:]]is[[:space:]]the[[:space:]]value[[:space:]]of[[:space:]]templates_”[[:space:]]Rethinking[[:space:]]Document[[:space:]]Information[[:space:]]Extraction[[:space:]]Datasets[[:space:]]for[[:space:]]LLMs/ad5320c4-c806-4fc8-981a-b7b84a8661cd_origin.pdf filter=lfs diff=lfs merge=lfs -text
4083
+ 2024/”I[[:space:]]Never[[:space:]]Said[[:space:]]That”_[[:space:]]A[[:space:]]dataset,[[:space:]]taxonomy[[:space:]]and[[:space:]]baselines[[:space:]]on[[:space:]]response[[:space:]]clarity[[:space:]]classification/7c605187-9a72-448f-826b-b83f60bd5e99_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/1+1_2_ Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators_/25427acc-0095-4e20-ae7a-6d156491d531_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/1+1_2_ Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators_/25427acc-0095-4e20-ae7a-6d156491d531_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/1+1_2_ Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators_/25427acc-0095-4e20-ae7a-6d156491d531_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:03712a106946fce9b0a915bff6e445b592654911d68f7884a7aa8ed3b5899765
3
+ size 3018671
2024/1+1_2_ Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators_/full.md ADDED
@@ -0,0 +1,500 @@
1
+ # $1 + 1 > 2$ : Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators?
2
+
3
+ Yue Huang $^{1*}$ , Chenrui Fan $^{2*}$ , Yuan Li $^{3}$ , Siyuan Wu $^{4}$ , Tianyi Zhou $^{2}$ , Xiangliang Zhang $^{1\dagger}$ , Lichao Sun $^{5}$
4
+
5
+ <sup>1</sup>University of Notre Dame <sup>2</sup>University of Maryland, College Park <sup>3</sup>University of Cambridge <sup>4</sup>Huazhong University of Science and Technology <sup>5</sup>Lehigh University {yhuang37, xzhang33}@nd.edu cfan42@umd.edu
6
+
7
+ # Abstract
8
+
9
+ Large Language Models (LLMs) have garnered significant attention due to their remarkable ability to process information across various languages. Despite their capabilities, they exhibit inconsistencies in handling identical queries in different languages, presenting challenges for further advancement. This paper introduces a method to enhance the multilingual performance of LLMs by aggregating knowledge from diverse languages. This approach incorporates a low-resource knowledge detector specific to a language, a language selection process, and mechanisms for answer replacement and integration. Our experiments demonstrate notable performance improvements, particularly in reducing language performance disparity. An ablation study confirms that each component of our method significantly contributes to these enhancements. This research highlights the inherent potential of LLMs to harmonize multilingual capabilities and offers valuable insights for further exploration.
10
+
11
+ # 1 Introduction
12
+
13
+ Large Language Models (LLMs) are increasingly recognized for their impressive capabilities in natural language processing (NLP). Employed across a variety of domains such as the medical sector (Liu et al., 2023c; Zhang et al., 2023a), data generation (Wu et al., 2024), scientific research (Guo et al., 2023; Li et al., 2024c), and LLM-based agents (Liu et al., 2023b; Guo et al., 2024; Huang et al., 2023b; Chen et al., 2024a), LLMs have demonstrated significant utility. Additionally, recent advancements in LLMs have expanded research (Qin et al., 2024; Li et al., 2024a; Xu et al., 2024b; Chen et al., 2024b), which focuses on enhancing their ability to process multiple languages and thereby increasing their accessibility and relevance across diverse linguistic demographics.
14
+
15
+ ![](images/3064ed230828d6021b3a3d66cdf4772ebc888faf012739f4dcae15984687a705.jpg)
16
+ Figure 1: The top is an example of distinct answers to the same questions in different languages. The bottom shows GPT-4's performance on 300 queries from HalluEval (Li et al., 2023a) in nine different languages.
17
+
18
+ Despite these advancements, LLMs demonstrate inconsistencies when processing queries in different languages with the same meaning (Li et al., 2024d), as evidenced by the results in Figure 1. This inconsistency not only diminishes the efficacy and fairness of LLMs but also signals underlying knowledge conflicts (Xu et al., 2024a) that prevent these models from achieving true intelligence (Liu et al., 2023b; Huang et al., 2023b). Furthermore, such inconsistency can erode trust in LLM applications, particularly when users from varied linguistic backgrounds cannot equally benefit from the technology (Li et al., 2023b).
19
+
20
+ To address the inconsistency problems in LLMs, we propose a novel method by leveraging the intrinsic capabilities of LLMs through integrating knowledge across different languages. Our approach begins with the development of a low-resource knowledge detector. This detector assesses whether a user's query involves knowledge that is underrepresented in the specific language. When the query does not feature low-resource knowledge, it is directly addressed by the LLMs. In contrast, if low-resource knowledge is detected, the LLMs will be required to select the most relevant target language to handle this specific knowledge.
21
+
22
+ Once the target language is selected, the query is translated into this language, and the LLMs generate a response based on the translated query. This response either replaces the original answer or is integrated with it. Finally, the response is translated back to the original language of the query and delivered to the user.
23
+
24
+ We conducted comprehensive experiments using six popular LLMs and five bilingual datasets (specifically, English and Chinese) to evaluate the effectiveness of our proposed method. The experimental results demonstrate that our approach successfully integrates knowledge from different languages, leading to the improvement in overall performance. Importantly, it significantly reduces the performance disparities across languages, thereby addressing the inconsistency issues inherent in LLMs and promoting fairness for downstream applications. Additionally, our ablation study confirms that both the low-resource knowledge detector and the language selection process are crucial to the improvements observed. Overall, our contributions are as follows:
25
+
26
+ - We pose an important challenge concerning the inconsistency of LLMs in downstream tasks, and observe that low-resource knowledge in a specific language can be brought in from another language.
27
+ - Based on this observation, we propose a method that utilizes the LLMs' internal capability to enhance their performance on datasets in different languages through a low-resource knowledge detector, a language selection process, and answer replacement & integration.
28
+ - We conduct extensive experiments on six popular LLMs and five bilingual datasets. The results show that our proposed method effectively enhances the performance of LLMs by integrating knowledge from different languages and reduces the performance gap across languages.
29
+
30
+ # 2 Related Work
31
+
32
+ # 2.1 Multilingual LLMs
33
+
34
+ There has been a surge in research and work on Multilingual Large Language Models (MLLMs) (Qin et al., 2024; Li et al., 2024a; Xu et al., 2024b; Chen et al., 2024b; Etxaniz et al., 2023). For instance, the InternLM, proposed by Team (2023), is a multilingual language model that has demonstrated excellent performance on multiple Chinese benchmarks. Similarly, PolyLM (Wei et al., 2023b) is another LLM trained using curriculum learning,
35
+
36
+ surpassing other open-source models in multilingual tasks. Besides the above multilingual LLMs, the popular LLMs also include the ChatGLM series developed by Du et al. (2022) and Zeng et al. (2022), and Baichuan series Yang et al. (2023). To improve model performance on multilingual tasks, Muennighoff et al. (2023) and Zhang et al. (2023b) focus on utilizing multilingual training data to fine-tune the parameters. Our work also connects broadly to cross-lingual methods at inference time. Liu et al. (2024) pointed out that translation into English enhances performance for some multilingual tasks, while native language prompting more effectively addresses culturally and linguistically specific questions. In addition, Huang et al. (2023a) and Qin et al. (2023) introduced cross-lingual prompting to enhance the multilingual capabilities of large language models. They focus on improving logical reasoning and task performance across diverse languages. Pourkamali and Sharifi (2024) proposed Self-Supervised Prompting (SSP), a novel method for in-context learning in low-resource languages that improves performance by using stages of noisy labeling and selective exemplar use.
37
+
38
+ In terms of evaluation, Lai et al. (2023) assessed ChatGPT's performance across 37 different languages. CulturaX (Nguyen et al., 2023) is a multilingual dataset containing 6.3 trillion tokens across 167 languages, aimed at promoting the development of multilingual LLMs. Additionally, M3Exam (Zhang et al., 2023c) introduces a dataset derived from real and official human exam questions, designed for evaluating LLMs in a multilingual, multimodal, and multilevel context. BUFFET consolidates 15 varied tasks across 54 languages into a sequence-to-sequence format, offering a standardized set of few-shot examples and instructions (Asai et al., 2023).
39
+
40
+ # 2.2 Factuality in LLMs
41
+
42
+ One way to improve the factuality of LLMs is the utilization of knowledge graphs (KGs) (Sun et al., 2024b). For instance, Abu-Rasheed et al. (2024) uses knowledge graphs to learn explainable recommendations. Yang et al. (2024b) suggests improving LLMs through the development of knowledge graph-enhanced LLMs, which offers a method to boost the factual reasoning capabilities of LLMs. Sun et al. (2024a) utilizes the LLM as an agent to interact with and navigate through the KGs.
43
+
44
+ ![](images/f4b31fe686824cbdf8f5b738e56d724452cfad1fa5635f015858033fad39dd7c.jpg)
45
+ Figure 2: The knowledge domain of a multilingual LLM can be separated into multiple sections (the figure shows two). The language-specific knowledge (pure blue or pure orange) in one language can be utilized for improving the performance in other languages.
46
+
47
+ It identifies relevant entities and relationships, and conducts reasoning with the knowledge it gathers.
48
+
49
+ Another method to enhance the factual knowledge of LLMs is the utilization of prompt engineering. Previous studies propose various prompt methods such as Chain-of-Thoughts (CoT) (Wei et al., 2023a) and Tree-of-Thoughts (ToT) (Yao et al., 2023). Moreover, some studies use knowledge injection to enhance the domain capability of LLMs (Huang and Sun, 2024; Huang et al., 2024a).
50
+
51
+ # 2.3 Hallucination Mitigation
52
+
53
+ A significant challenge associated with LLMs is their tendency to generate seemingly plausible yet fabricated responses, a phenomenon known as hallucination which is a significant concern in the trustworthiness of LLMs (Huang et al., 2024b, 2023c). To address this issue and prevent misinformation (Huang et al., 2024a), recent research has introduced various hallucination mitigation strategies (Tonmoy et al., 2024). For example, Feng et al. (2024) leverage multi-LLM collaboration to decrease hallucinations in LLM outputs. Additionally, Guan et al. (2024) have developed a novel framework called Knowledge Graph-based Retrofitting (KGR), which integrates LLMs with KGs to minimize factual hallucinations during reasoning. Similarly, Manakul et al. (2023) propose SelfCheckGPT, a sampling method that verifies the accuracy of responses from black-box models without the need for an external database.
54
+
55
+ # 3 Methodology
56
+
57
+ # 3.1 Motivation
58
+
59
+ Our proposed method draws inspiration from the distinct knowledge domains inherent to different languages.
60
+
61
+ ![](images/176ca98df4c23f86aa66eaaac891ddf6628c1b98615c6be44b3f60f1c48058c0.jpg)
62
+ Figure 3: The average performance of six LLMs in five datasets. We show the accuracy of Chinese and English domain knowledge with the query/answer in Chinese and English.
63
+
64
+ As illustrated in Figure 2, language-specific knowledge can serve as supplementary information for another language. Figure 3 demonstrates that when queries related to English domain knowledge are posed in Chinese, the performance (i.e., accuracy) of LLMs declines compared to those posed in the English language. Furthermore, Figure 8 reveals that LLMs often provide correct answers in only one of two languages for a given query, suggesting the potential to use the correct response to rectify inaccuracies in the other language. These observations underscore the potential to leverage the strengths of each language to enhance LLM performance across different languages. As shown in Figure 4, the proposed method includes three main modules: low-resource knowledge detection, target language selection, and answer replacement & integration.
65
+
66
+ # 3.2 Construction of Low-Resource Dataset
67
+
68
+ We first construct a low-resource dataset to measure current LLMs' multilingual knowledge transfer capacity. We also use this dataset to train our Low-Resource Knowledge Detector in section 3.3. We initialize the dataset by combining various existing question-answering datasets, including TriviaQA (Joshi et al., 2017), CMMLU (Li et al., 2024b), HalluEval (Li et al., 2023a), TruthfulQA (Lin et al., 2022), MKQA (Longpre et al., 2021), XQuAD (Artetxe et al., 2019), LC-QuAD (Trivedi et al., 2017), and KgCLUE (Xu et al., 2020). Moreover, we also construct a dataset that uses LLM-powered synthesized data to cover more knowledge and topics in the training corpus (we call it MULTIGEN). The details of the constructed dataset are shown in Appendix C.
69
+
70
+ To label these data items, we first use LLM-human collaboration to label the samples as Chinese-specific, English-specific, or common sense. Specifically, to confirm the correctness of the labeling, we query GPT-4 twice to label the samples with a temperature of 1.0 to enlarge the potential uncertainty of its output.
71
+
72
+ We then conduct human inspection of the samples where the labels are inconsistent between the two labeling passes, to confirm the labels and filter out samples that are too hard or ambiguous for current LLMs. The statistics of the dataset can be found in Table 1.
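A minimal sketch of this two-pass labeling step is shown below. The `call_llm` helper, the label names, and the prompt wording are illustrative assumptions standing in for whichever chat-completion client and prompts are actually used, not the paper's exact setup.

```python
# Sketch of the two-pass GPT-4 labeling with human review of disagreements.
from typing import Optional

LABELS = {"chinese-specific", "english-specific", "common-sense"}

def call_llm(prompt: str, temperature: float = 1.0) -> str:
    """Placeholder: send `prompt` to GPT-4 (or another labeler LLM) and return its reply."""
    raise NotImplementedError

def label_once(question: str) -> Optional[str]:
    prompt = (
        "Classify the knowledge required to answer this question as exactly one of: "
        "chinese-specific, english-specific, common-sense.\n"
        f"Question: {question}\nLabel:"
    )
    reply = call_llm(prompt, temperature=1.0).strip().lower()
    return reply if reply in LABELS else None

def label_sample(question: str) -> dict:
    """Label twice at temperature 1.0; route disagreements to human inspection."""
    first, second = label_once(question), label_once(question)
    if first is not None and first == second:
        return {"question": question, "label": first, "needs_human_review": False}
    return {"question": question, "label": None, "needs_human_review": True}
```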
73
+
74
+ # 3.3 Low-Resource Knowledge Detector
75
+
76
+ The multilingual misalignment stems from unbalanced training data, as knowledge with low data resources is less likely to be captured by the language model during the pretraining process. For example, queries about the details of Chinese history are not well answered by the model if asked in English, as they appear less frequently in the English pretraining corpus. This misalignment could be mitigated by fully utilizing the model's inherent capacity. To implement this process, we first adopt a low-resource knowledge detector to identify these low-resource queries and later borrow knowledge from other languages for help.
77
+
78
+ We train a classifier for each source language to identify low-resource queries for that language. This classifier separates queries about common sense and language-specific knowledge (e.g., a Spanish query about Spanish culture) from low-resource queries (e.g., a Spanish query about Turkish geography). Queries of the former class are fed into the normal language generation pipeline, while the latter are enhanced with knowledge from other languages through our other modules. Given a query $x$ in the original language $L_{o}$, the low-resource knowledge detector $F_{L_{o}}$ works as follows:
79
+
80
+ $$
81
+ F_{L_{o}}(x) = \begin{cases} 1, & x \text{ is a low-resource query of } L_{o} \\ 0, & \text{otherwise} \end{cases} \tag{1}
82
+ $$
83
+
84
+ We demonstrate in the experiment that a classifier is effective enough to distinguish low-resource queries from others. The construction of the training dataset of $F_{L}$ can be found in subsection 4.1.
85
+
86
+ The method is cost-effective as it does not require translating all queries into multiple languages, considering that low-resource queries are only a small fraction of user queries. The majority of user queries are related to common sense and knowledge specific to that language and do not need to go through the following process.
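As one possible instantiation (the paper does not specify the classifier architecture at this point), $F_{L_o}$ can be a lightweight text classifier trained on the labeled queries from the dataset above. The sketch below uses a TF-IDF plus logistic-regression pipeline purely for illustration; the toy training examples are hypothetical.

```python
# Illustrative low-resource knowledge detector F_{L_o}: a binary text classifier
# that flags queries whose knowledge is under-represented in the source language.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data for an English-language detector:
# 1 = low-resource for English, 0 = common sense / English-specific knowledge.
queries = [
    "Who wrote Pride and Prejudice?",               # English-specific -> 0
    "What is the boiling point of water?",          # common sense -> 0
    "Which dynasty built the Daming Palace?",       # Chinese-domain knowledge -> 1
    "What does the idiom 'hua she tian zu' mean?",  # Chinese-domain knowledge -> 1
]
labels = [0, 0, 1, 1]

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
detector.fit(queries, labels)

def is_low_resource(query: str) -> bool:
    """F_{L_o}(x) = 1 when the query needs knowledge scarce in the source language."""
    return bool(detector.predict([query])[0])
```

In practice a stronger text encoder could replace the TF-IDF features; the point is only that a single binary classifier per source language suffices to route queries.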
87
+
88
+ # Algorithm 1 Proposed Method
89
+
90
+ Require: Query $x$ in original language $L_{o}$
91
+ Ensure: Final answer $a_{\mathrm{final}}$
92
+
93
+ 1: Low-Resource Knowledge Detection:
94
+ 2: Train classifier $F_{L_o}$ for language $L_o$
95
+ 3: isLowResource $\leftarrow F_{L_o}(x)$
96
+ 4: if isLowResource == 1 then
97
+
98
+ 5: Target Language Selection:
99
+ 6: Define prompt $P_{\mathrm{sel}}$ for selecting target language
100
+ 7: $L_{t}\gets \mathrm{LLM}(P_{\mathrm{sel}}(x))$
101
+ 8: $x^{\prime}\gets \operatorname {Trans}(x,L_{t})$
102
+ 9: Answer Generation:
103
+ 10: $a_{t}\gets \mathrm{LLM}(x^{\prime})$
104
+ 11: $a_{o}\gets \operatorname {Trans}(a_{t},L_{o})$
105
+ 12: Answer Integration:
106
+ 13: Define prompt $P_{\mathrm{int}}$ for integrating answers
107
+ 14: $a_{\mathrm{final}} \gets \mathrm{LLM}(P_{\mathrm{int}}(a_t, a_o))$
108
+ 15: else
109
+ 16: $a_{\mathrm{final}} \gets \mathrm{LLM}(x)$
110
+ 17: end if
111
+ 18: return $a_{\text{final}}$
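For concreteness, a minimal sketch of how these steps compose is given below. The `detector`, `call_llm`, and `translate` callables are placeholders for the trained classifier $F_{L_o}$, the underlying LLM, and the translation service, and the prompts are illustrative paraphrases of $P_{\mathrm{sel}}$ and $P_{\mathrm{int}}$ rather than the paper's exact templates.

```python
# Sketch of Algorithm 1 with placeholder components.
def detector(query: str) -> int:
    """Placeholder for the trained low-resource knowledge detector F_{L_o}."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Placeholder chat-completion call to the LLM under evaluation."""
    raise NotImplementedError

def translate(text: str, target_language: str) -> str:
    """Placeholder translation API (Trans)."""
    raise NotImplementedError

def answer(query: str, original_language: str, integrate: bool = False) -> str:
    if detector(query) == 0:                      # not low-resource: answer directly
        return call_llm(query)
    p_sel = ("Which language is most likely to contain the knowledge needed to "
             f"answer this question? Reply with the language name only.\n{query}")
    target_language = call_llm(p_sel).strip()     # target language selection
    a_t = call_llm(translate(query, target_language))
    a_replaced = translate(a_t, original_language)  # answer replacement
    if not integrate:
        return a_replaced
    a_o = call_llm(query)                           # answer integration
    p_int = ("Combine the two answers below into a single consistent answer.\n"
             f"Answer A: {a_replaced}\nAnswer B: {a_o}")
    return call_llm(p_int)
```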
112
+
113
+ # 3.4 Target Language Selection
114
+
115
+ After detecting a low-resource query in the user's input, we adopt a target language selection module to find the most suitable language for that question (e.g., translating a question in English about Chinese history into Chinese). Answering the query in its most resourceful language would improve output quality in terms of correctness and may offer more useful details to the user. We implement this process by prompting the LLM itself, as the selection is model-dependent: different LLMs may select different target languages due to their pretraining corpora. Given the prompt $P_{\mathrm{sel}}$ to help select the target language and the low-resource query $x$, the procedure of Target Language Selection is defined as follows:
116
+
117
+ $$
118
+ x^{\prime} \leftarrow \operatorname{Trans}\left(x, \operatorname{LLM}\left(P_{\mathrm{sel}}(x)\right)\right), \tag{2}
119
+ $$
120
+
121
+ where translator $\mathrm{Trans}(Q,L_t)$ translates the input $Q$ into target language $L_{t}$ , and LLM is the large language model that selects the most suitable language for $x$ with prompt $P_{\mathrm{sel}}$ .
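Concretely, Eq. (2) can be implemented as a single prompt to the model followed by a translation call. In the sketch below, `call_llm` and `translate` are placeholders for the chat model and the translation API, and the selection prompt is an illustrative assumption rather than the exact $P_{\mathrm{sel}}$ used in the paper.

```python
# Sketch of target language selection (Eq. 2): the LLM names the language most
# likely to hold the relevant knowledge, then the query is translated into it.
SUPPORTED_LANGUAGES = ["English", "Chinese"]

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to the model under evaluation."""
    raise NotImplementedError

def translate(text: str, target_language: str) -> str:
    """Placeholder for the translation service Trans in Eq. (2)."""
    raise NotImplementedError

def select_and_translate(query: str) -> tuple[str, str]:
    p_sel = (
        "The following question involves knowledge that may be better covered in "
        f"another language. Choose the one of {SUPPORTED_LANGUAGES} most likely to "
        "yield a correct, detailed answer, and reply with the language name only.\n"
        f"Question: {query}"
    )
    target_language = call_llm(p_sel).strip()
    translated_query = translate(query, target_language)  # x' <- Trans(x, L_t)
    return target_language, translated_query
```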
122
+
123
+ # 3.5 Answer Replacement & Integration
124
+
125
+ After translating the original query $x$ to the query in the target language $x'$, we use it to prompt the model for the answer in the target language $a_t$.
126
+
127
+ ![](images/c35e6e153eeb8fb7e11e363a093462809e621e0d9600a5261bcef179d1dbcce7.jpg)
128
+ Figure 4: The proposed method begins with the query detection of low-resource knowledge powered by a detector. If low-resource knowledge is detected within the queries, LLMs then select the language most likely to yield the best answer. Answer replacement and integration are employed to formulate the final response.
129
+
130
+ We simply translate the answer back to the original language to get the final answer $a_{\mathrm{final}}$ for the user's understanding.
131
+
132
+ $$
133
+ a_{\mathrm{final}} \leftarrow \operatorname{Trans}\left(a_{t}, L_{o}\right), \tag{3}
134
+ $$
135
+
136
+ where $L_{o}$ is the original language of the user's query.
137
+
138
+ We also explore the integration of answers in the scenario of open-ended question answering (the prompt template is shown in Appendix F). We let the LLM combine and integrate the answer in the target language $a_{t}$ and the answer in the original language $a_{o} = \mathrm{LLM}(x)$ :
139
+
140
+ $$
141
+ a_{\mathrm{final}} = \operatorname{LLM}\left(P_{\mathrm{int}}\left(a_{t}, a_{o}\right)\right), \tag{4}
142
+ $$
143
+
144
+ where $P_{\mathrm{int}}$ is the prompt to help LLM integrate between $a_{t}$ and $a_{o}$ , and $a_{\mathrm{final}}$ is the final answer.
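The sketch below illustrates both variants; `call_llm` and `translate` are stand-ins for the chat model and the translation API, and the integration prompt is an illustrative paraphrase of $P_{\mathrm{int}}$ (the actual template is in Appendix F). Translating $a_t$ back to the original language before integration is an assumption of this sketch.

```python
# Sketch of answer replacement (Eq. 3) and answer integration (Eq. 4).
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder chat-completion call

def translate(text: str, target_language: str) -> str:
    raise NotImplementedError  # placeholder translation API

def answer_with_replacement(translated_query: str, original_language: str) -> str:
    a_t = call_llm(translated_query)          # answer in the target language
    return translate(a_t, original_language)  # a_final <- Trans(a_t, L_o)

def answer_with_integration(query: str, translated_query: str,
                            original_language: str) -> str:
    a_o = call_llm(query)                                   # answer in L_o
    a_t = translate(call_llm(translated_query), original_language)
    p_int = (
        "You are given two candidate answers to the same question. Merge them "
        "into a single answer, keeping facts that appear in both and resolving "
        f"conflicts in favor of the more specific one.\nAnswer A: {a_t}\nAnswer B: {a_o}"
    )
    return call_llm(p_int)                                  # a_final (Eq. 4)
```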
145
+
146
+ # 4 Experiments
147
+
148
+ We chose English and Chinese for our experiments primarily due to their broad applicability and the availability of resources. Firstly, most LLMs, particularly open-source ones like the ChatGLM series, perform notably well in English and Chinese. This trend highlights the advanced development and optimization of LLMs for these languages, making them ideal for rigorous testing. Secondly, major LLM benchmarks and datasets predominantly focus on these two languages. For instance, besides English benchmarks or datasets, benchmarks such as HalluQA and AlignBench are primarily designed around English and Chinese, providing a robust framework for evaluating our methods. Lastly, the linguistic features and data availability in English and Chinese ensure comprehensive evaluation and validation of our approaches and suggest that our findings could be extrapolated to other languages. This potential for cross-linguistic application supports the broader relevance and utility of our study, making the choice of English and Chinese both strategic and impactful.
149
+
150
+ <table><tr><td>Dataset</td><td>Chinese</td><td>Common</td><td>English</td><td>Total</td><td>Lang.</td></tr><tr><td>TriviaQA</td><td>21</td><td>754</td><td>1040</td><td>1815</td><td>En.</td></tr><tr><td>CMMLU</td><td>1200</td><td>2162</td><td>2751</td><td>6113</td><td>Ch.</td></tr><tr><td>HalluEval</td><td>28</td><td>923</td><td>1033</td><td>1984</td><td>En.</td></tr><tr><td>TruthfulQA</td><td>9</td><td>322</td><td>212</td><td>543</td><td>En.</td></tr><tr><td>MKQA</td><td>71</td><td>315</td><td>1114</td><td>1500</td><td>En.</td></tr><tr><td>XQuAD</td><td>72</td><td>610</td><td>503</td><td>1185</td><td>En.</td></tr><tr><td>LC-QuAD</td><td>2</td><td>640</td><td>345</td><td>987</td><td>En.</td></tr><tr><td>KgCLUE</td><td>1218</td><td>610</td><td>172</td><td>2000</td><td>Ch.</td></tr><tr><td>MULTIGEN</td><td>1095</td><td>1121</td><td>1083</td><td>3299</td><td>En.</td></tr><tr><td>Total</td><td>3716</td><td>7457</td><td>8253</td><td>19426</td><td>/</td></tr></table>
151
+
152
+ Table 1: Dataset statistics of the low-resource knowledge detector. "Lang." is the original language for the dataset.
153
+
155
+
156
+ # 4.1 Experiment Setup
157
+
158
+ Training Datasets for Detectors. As we need to train the low-resource detector for each language, for each dataset in English (e.g., TriviaQA) or in Chinese (e.g., CMMLU, KgCLUE), we translate it into the other language (i.e., Chinese or English) through a translation API *.
159
+
160
+ Detailed Setting. To ensure the reproducibility of results, the temperature parameter for all LLMs is set to 0. For ChatGPT, GPT-4, and Qwen-turbo, we use the official API. For Yi-34b, we use the API from Replicate†. For ChatGLM3 and Llama3-Chinese, we deploy them locally for inference with a V100 (40G).
161
+
162
+ Test Datasets. We selected five datasets for our study, comprising four pre-existing datasets and one that we developed in-house. The following criteria guided our selection:
163
+
164
+ - The datasets should not predominantly consist of common-sense questions (i.e., questions that are independent of linguistic background), as this minimizes the potential for LLMs to demonstrate improvement through linguistic knowledge.
165
+
166
+ ![](images/6e1d734223dffbb817914c7f10881949d855f8a1abc492d3559384aab3505c42.jpg)
167
+ Figure 5: Statistics of the dataset in our experiments.
168
+
170
+
171
+ - The datasets should maintain a balance in difficulty; they should not be overly simplistic or excessively challenging. Datasets that are too easy can lead to inflated performance metrics for LLMs, thereby reducing the potential for meaningful improvement. Conversely, datasets that are too challenging can degrade performance across all linguistic contexts, thereby constraining the opportunity to enhance performance in the target language by leveraging knowledge of additional languages.
172
+
173
+ For all datasets in our study, we select QA-pair samples from them and do not use extra data to facilitate our evaluation. In total, we select five datasets for evaluating our method. These include four existing datasets: TriviaQA (Joshi et al., 2017), HaluEval (Li et al., 2023a), HalluQA (Cheng et al., 2023), and WebQA (Li et al., 2016). We show the statistics of the selected datasets in Figure 5, and the details are shown in Appendix A. In addition to the four datasets mentioned above, we have constructed a bilingual Chinese-English dataset tailored to the Chinese domain. Details of the construction process are provided in Appendix D.
174
+
175
+ Models. We carefully select six popular LLMs including proprietary and open-source LLMs that master both English and Chinese: ChatGPT (OpenAI, 2023a), GPT-4 (OpenAI, 2023b), ChatGLM3 (Zeng et al., 2022; Du et al., 2022), Yi-34b (AI et al., 2024), Qwen-turbo (Bai et al., 2023), and LLama3-Chinese (Ila, 2024).
176
+
177
+ # 4.2 Main Results
178
+
179
+ We evaluate the effectiveness of our proposed method on the five benchmark datasets and six popular LLMs mentioned above. Each dataset is translated into a Chinese and an English version for later assessment.
180
+
181
+ ![](images/bd5eac7a4284fd521bd3904a180da5b9595a569cde16de85b625858eb6bcf0b6.jpg)
182
+ Figure 6: The average performance gap on datasets before and after applying our method.
183
+
184
+ We first prompt the models with the queries in each dataset to obtain the generated answers. We then leverage GPT-4 as the judge model to compare each generated answer with the reference answer in the dataset to see if the model produces a correct output. We calculate the generation accuracy and present the results in Table 2. We mark a result in green where there is a significant improvement of more than $1\%$ and mark it in red if the accuracy decreases by more than $1\%$.
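A minimal sketch of this LLM-as-judge accuracy computation is shown below; the judge prompt and the `call_llm` helper are illustrative assumptions, not the paper's exact judging setup.

```python
# Sketch of GPT-4-as-judge evaluation: compare each generated answer against the
# reference answer and report the fraction judged correct.
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a GPT-4 chat-completion call

def judge_correct(question: str, generated: str, reference: str) -> bool:
    prompt = (
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Model answer: {generated}\n"
        "Does the model answer convey the same fact as the reference? Reply YES or NO."
    )
    return call_llm(prompt).strip().upper().startswith("YES")

def accuracy(samples) -> float:
    """`samples` is an iterable of (question, generated_answer, reference_answer)."""
    samples = list(samples)
    correct = sum(judge_correct(q, g, r) for q, g, r in samples)
    return correct / max(len(samples), 1)
```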
185
+
186
+ As can be seen from the table, our method can effectively improve the performance of the model in many scenarios. To be specific, the performance of the GPT-4 model on the HalluEval dataset in Chinese improves significantly from $47.99\%$ to $64.36\%$. This means there still exists a large cross-lingual knowledge gap in advanced models such as GPT-4, and our method successfully leverages the knowledge across languages to enhance the model's performance. It is important to note that the improvements do not rely on other models or online resources; they arise from our leverage of the model's inherent capacity.
187
+
188
+ It can also be observed from Table 2 that most improvements happen in the language that is different from that of the original dataset, which is also the part where the models suffer from a weaker performance. The comparison of the cross-lingual performance gap before and after applying our method is shown in Figure 6. The figure showcases that our method could significantly reduce the knowledge gap between languages in all LLMs we evaluate, thus improving the fairness of the application for users of different languages.
189
+
190
+ # 4.3 Ablation Study
191
+
192
+ As our generation pipeline consists of several parts, we conduct an ablation study to validate their effectiveness and cost.
193
+
194
+ The Impact of the Low-resource Detector.
195
+
196
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">Lang.</td><td colspan="2">ChatGLM3</td><td colspan="2">ChatGPT</td><td colspan="2">GPT-4</td><td colspan="2">Yi-34b</td><td colspan="2">Qwen-turbo</td><td colspan="2">Llama3-Ch.</td></tr><tr><td>Orig.i.</td><td>Impro.</td><td>Orig.i.</td><td>Impro.</td><td>Orig.i.</td><td>Impro.</td><td>Orig.i.</td><td>Impro.</td><td>Orig.i.</td><td>Impro.</td><td>Orig.i.</td><td>Impro.</td></tr><tr><td rowspan="2">HalluEval</td><td>(en)</td><td>18.03%</td><td>18.03%</td><td>57.98%</td><td>57.84%</td><td>67.13%</td><td>67.13%</td><td>42.86%</td><td>42.72%</td><td>29.31%</td><td>29.31%</td><td>40.67%</td><td>40.67%</td></tr><tr><td>(ch)</td><td>11.23%</td><td>17.34%</td><td>32.07%</td><td>51.40%</td><td>47.99%</td><td>64.36%</td><td>25.10%</td><td>39.67%</td><td>19.35%</td><td>26.09%</td><td>25.35%</td><td>37.19%</td></tr><tr><td rowspan="2">HalluQA</td><td>(en)</td><td>20.00%</td><td>25.95%</td><td>34.27%</td><td>30.90%</td><td>51.89%</td><td>54.05%</td><td>38.38%</td><td>47.03%</td><td>25.97%</td><td>37.57%</td><td>22.83%</td><td>19.57%</td></tr><tr><td>(ch)</td><td>22.16%</td><td>22.16%</td><td>21.91%</td><td>24.16%</td><td>49.73%</td><td>51.35%</td><td>45.95%</td><td>44.86%</td><td>43.65%</td><td>43.09%</td><td>15.22%</td><td>16.30%</td></tr><tr><td rowspan="2">Chinese Domain</td><td>(en)</td><td>9.52%</td><td>20.78%</td><td>41.85%</td><td>42.73%</td><td>56.71%</td><td>58.44%</td><td>33.33%</td><td>55.84%</td><td>27.19%</td><td>46.05%</td><td>30.73%</td><td>24.24%</td></tr><tr><td>(ch)</td><td>32.47%</td><td>32.47%</td><td>41.85%</td><td>41.85%</td><td>59.31%</td><td>59.74%</td><td>63.64%</td><td>63.20%</td><td>62.28%</td><td>61.84%</td><td>18.61%</td><td>18.61%</td></tr><tr><td rowspan="2">triviaQA</td><td>(en)</td><td>36.32%</td><td>36.32%</td><td>90.53%</td><td>90.37%</td><td>94.09%</td><td>94.09%</td><td>79.33%</td><td>79.17%</td><td>59.59%</td><td>59.47%</td><td>77.27%</td><td>77.16%</td></tr><tr><td>(ch)</td><td>21.33%</td><td>31.95%</td><td>54.60%</td><td>82.67%</td><td>82.77%</td><td>91.90%</td><td>59.43%</td><td>75.56%</td><td>41.53%</td><td>52.99%</td><td>43.92%</td><td>65.17%</td></tr><tr><td rowspan="2">WebQA</td><td>(en)</td><td>28.51%</td><td>38.15%</td><td>59.08%</td><td>58.88%</td><td>67.70%</td><td>69.41%</td><td>57.07%</td><td>68.71%</td><td>49.48%</td><td>61.08%</td><td>50.00%</td><td>48.09%</td></tr><tr><td>(ch)</td><td>48.69%</td><td>48.49%</td><td>57.35%</td><td>57.86%</td><td>72.52%</td><td>72.42%</td><td>76.93%</td><td>76.13%</td><td>71.12%</td><td>71.33%</td><td>37.02%</td><td>38.43%</td></tr></table>
197
+
198
+ Table 2: Performance of six LLMs before (Orig.) and after (Impro.) applying our proposed method.
199
+
200
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">Lang.</td><td colspan="2">Yi-34b</td><td colspan="2">Qwen-turbo</td><td colspan="2">Llama3-Ch.</td></tr><tr><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td></tr><tr><td rowspan="2">HalluEval</td><td>(en)</td><td>42.86%</td><td>41.75%</td><td>29.31%</td><td>29.59%</td><td>40.67%</td><td>40.67%</td></tr><tr><td>(ch)</td><td>25.10%</td><td>39.81%</td><td>19.35%</td><td>26.51%</td><td>25.35%</td><td>37.33%</td></tr><tr><td rowspan="2">HalluQA</td><td>(en)</td><td>38.38%</td><td>47.03%</td><td>25.97%</td><td>37.57%</td><td>22.83%</td><td>18.48%</td></tr><tr><td>(ch)</td><td>45.95%</td><td>45.95%</td><td>43.65%</td><td>39.78%</td><td>15.22%</td><td>20.65%</td></tr><tr><td>Chinese</td><td>(en)</td><td>33.33%</td><td>57.58%</td><td>27.19%</td><td>48.25%</td><td>30.74%</td><td>24.24%</td></tr><tr><td>Domain</td><td>(ch)</td><td>63.64%</td><td>57.14%</td><td>62.28%</td><td>62.28%</td><td>18.61%</td><td>22.51%</td></tr></table>
201
+
202
+ low-resource detector serves as a filter that separates language-specific queries from the majority of queries involving only commonsense knowledge, thereby improving efficiency and reducing the cost of the pipeline. As shown in Figure 7, when the ratio of low-resource queries in the dataset is 0.05, the detector reduces the average inference time per sample from more than 9 seconds to less than 6.5 seconds. As the ratio of low-resource queries grows, the detector passes more samples into the translation pipeline and the average inference time increases accordingly.
203
+
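+ To make the routing concrete, the following minimal sketch (illustrative, not our exact implementation) shows how the detector gates the pipeline; the detector, language-selection, translation, answering, and integration steps are passed in as hypothetical callables.
+ 
+ ```python
+ from typing import Callable
+ 
+ def answer_query(
+     query: str,
+     query_lang: str,
+     is_low_resource: Callable[[str], bool],          # trained BERT detector
+     select_language: Callable[[str], str],           # language-selection prompt
+     translate: Callable[[str, str, str], str],       # (text, src, tgt) -> text
+     answer: Callable[[str, str], str],               # (query, lang) -> answer
+     integrate: Callable[[str, str, str, str], str],  # merge answers, keep query_lang
+ ) -> str:
+     """Route a query through the cross-lingual pipeline only when needed."""
+     # Most queries are commonsense: answer directly and skip the expensive path.
+     if not is_low_resource(query):
+         return answer(query, query_lang)
+ 
+     # Ask which language is best suited to answer this query.
+     target_lang = select_language(query)
+     if target_lang == query_lang:
+         return answer(query, query_lang)
+ 
+     # Answer in the knowledge-rich language, then merge with (or replace) the
+     # answer in the user's language so the final response stays in query_lang.
+     foreign = answer(translate(query, query_lang, target_lang), target_lang)
+     native = answer(query, query_lang)
+     return integrate(query, native, foreign, query_lang)
+ ```
+ 
+ Only the `is_low_resource` check runs for every sample, which is why the average inference time in Figure 7 drops when low-resource queries are rare.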
204
+ Another intriguing finding is that the low-resource detector also improves model performance. As shown in Table 3, the pipeline's performance becomes unstable when we remove the low-resource detector, and the overall performance drops, as observed in Figure 7. This indicates that the detector and the LLM itself are complementary. The full results without the low-resource detector can be found in Table 7.
205
+
206
+ The Impact of the Language Selection Module. The language selection module chooses the most suitable language in which to answer each question, and this choice is
207
+
208
+ ![](images/45f1084b45f2ab5d95a4cb104ae6f33ce89430dd75a0c64de16a201d69f75e6d.jpg)
209
+ Figure 7: The relationship between time efficiency and error rate. The error rate is the summed percentage of all performance decreases across the five datasets (values marked in red in Table 2, Table 3 ( $w/o$ Detection), and Table 4 ( $w/o$ Selection)).
210
+
211
+ Table 3: Selected LLMs' performance on the setting without a low-resource detector.
212
+
213
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">Lang.</td><td colspan="2">Yi-34b</td><td colspan="2">Qwen-turbo</td><td colspan="2">Llama3-Ch.</td></tr><tr><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td></tr><tr><td rowspan="2">HalluEval</td><td>(en)</td><td>42.86%</td><td>42.72%</td><td>29.31%</td><td>29.17%</td><td>40.67%</td><td>40.25%</td></tr><tr><td>(ch)</td><td>25.10%</td><td>42.58%</td><td>19.35%</td><td>28.61%</td><td>25.34%</td><td>39.97%</td></tr><tr><td rowspan="2">HalluQA</td><td>(en)</td><td>38.38%</td><td>46.49%</td><td>25.97%</td><td>38.67%</td><td>22.83%</td><td>21.74%</td></tr><tr><td>(ch)</td><td>45.95%</td><td>44.86%</td><td>43.65%</td><td>41.44%</td><td>15.21%</td><td>15.76%</td></tr><tr><td>Chinese</td><td>(en)</td><td>33.33%</td><td>61.04%</td><td>27.19%</td><td>58.33%</td><td>30.74%</td><td>19.48%</td></tr><tr><td>Domain</td><td>(ch)</td><td>63.64%</td><td>60.17%</td><td>62.28%</td><td>58.33%</td><td>18.61%</td><td>18.61%</td></tr></table>
214
+
215
+ Table 4: Selected LLMs' performance on the setting without language selection.
216
+
217
+ model-specific. The module is also flexible in the multilingual setting, since the number of candidate target languages can exceed two, as we verify in our tests. Nevertheless, we validate its effectiveness in the bilingual setting by comparing it against the strategy of simply switching to the opposite language whenever a query is detected as low-resource, and report the results in Table 4. The trade-off between its cost and error can also be found in Figure 7.
218
+
219
+ As we can see from Figure 7, the language selection module only adds a small inference cost
220
+
221
+ <table><tr><td rowspan="2">Type</td><td rowspan="2">Lang.</td><td colspan="2">ChatGPT</td><td colspan="2">GPT-4</td><td colspan="2">ChatGLM3</td><td colspan="2">Yi-34b</td><td colspan="2">Qwen-Turbo</td><td colspan="2">Llama3-Ch.</td></tr><tr><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td></tr><tr><td rowspan="2">Integrate</td><td>(ch)</td><td>4.98</td><td>5.26</td><td>6.90</td><td>6.85</td><td>4.15</td><td>4.09</td><td>5.82</td><td>5.91</td><td>5.88</td><td>5.87</td><td>3.71</td><td>3.58</td></tr><tr><td>(en)</td><td>5.92</td><td>6.02</td><td>7.32</td><td>7.54</td><td>4.02</td><td>4.07</td><td>5.86</td><td>6.13</td><td>5.59</td><td>5.78</td><td>4.60</td><td>4.60</td></tr><tr><td rowspan="2">Replace</td><td>(ch)</td><td>4.98</td><td>5.47</td><td>6.90</td><td>6.98</td><td>4.15</td><td>4.16</td><td>5.82</td><td>6.23</td><td>5.88</td><td>6.00</td><td>3.71</td><td>3.93</td></tr><tr><td>(en)</td><td>5.92</td><td>5.97</td><td>7.32</td><td>7.12</td><td>4.02</td><td>4.26</td><td>5.86</td><td>6.25</td><td>5.59</td><td>5.88</td><td>4.60</td><td>4.54</td></tr></table>
222
+
223
+ while significantly improving model performance. This is because some queries are low-resource in both languages; for such queries, simply switching to the opposite language may make the situation worse, whereas the language selection module can pick a third language that answers the question better. The full results without the language selection module can be found in Table 8.
224
+
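+ As a concrete illustration, a minimal sketch of the selection step is shown below; the `chat` callable stands in for any of the evaluated LLMs, the prompt wording paraphrases Figure 13, and the candidate list is only an example.
+ 
+ ```python
+ from typing import Callable
+ 
+ # Paraphrased from the Figure 13 template: the model analyzes which language
+ # context the question requires and states its final choice at the end.
+ SELECTION_PROMPT = (
+     "You need to categorize a question whose background is not common in its "
+     "source-language environment. First analyze which language context the "
+     "question requires, then end your reply with the single language best "
+     "suited to answer it.\n\nQuestion: {query}"
+ )
+ 
+ def select_language(
+     query: str,
+     chat: Callable[[str], str],
+     candidates: tuple = ("English", "Chinese", "Japanese", "French"),
+ ) -> str:
+     """Return the candidate language the model names last in its reply."""
+     reply = chat(SELECTION_PROMPT.format(query=query)).lower()
+     best, best_pos = "English", -1
+     for lang in candidates:
+         pos = reply.rfind(lang.lower())
+         if pos > best_pos:
+             best, best_pos = lang, pos
+     return best
+ ```
+ 
+ Because the model can name any candidate language, queries that are low-resource in both the query language and its opposite can still be routed to a third language, which is exactly the failure mode of the fixed-switch baseline discussed above.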
225
+ The Comparison between Answer Replacement and Integration. We further investigated the effectiveness of the answer replacement and integration strategies. Given that QA setups with a golden answer may not always accommodate answer integration effectively (for example, when the answers in two different languages factually conflict), we opted for a subset of AlignBench (Liu et al., 2023a) as our evaluation dataset. AlignBench provides a comprehensive, multi-dimensional benchmark designed to assess the alignment of LLMs in Chinese, featuring a variety of open-ended questions. To create a bilingual dataset, we translated the Chinese questions into English. For each response evaluation, we employed an LLM-as-a-judge approach, utilizing the prompt template from AlignBench; the LLM judge assigned an overall score from 1 to 10 to each LLM response. As indicated in Table 5, both replacement and integration significantly enhanced the LLMs' performance in most cases. Direct replacement led to more substantial improvements but also introduced a higher rate of errors, as evidenced by the performance dips in GPT-4 and Llama3-Ch. Interestingly, the integration method showed a more pronounced performance improvement in English responses, suggesting that LLMs may possess stronger capabilities for answer optimization in English than in Chinese (Yang et al., 2024a).
226
+
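+ The sketch below illustrates the two strategies and the 1-10 judging step; it assumes an OpenAI-compatible client, and both prompt strings are paraphrases (of the integration rules in Figures 11-12 and of a generic judging instruction) rather than the exact templates used in our experiments.
+ 
+ ```python
+ import re
+ from openai import OpenAI
+ 
+ client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
+ 
+ INTEGRATION_PROMPT = (
+     "I will give you a question and two answers, one in Chinese and one in English. "
+     "Merge them into a single answer written in {out_lang}. If the two answers "
+     "factually conflict, follow the {trusted_lang} answer. The merged answer "
+     "should be better than either answer alone.\n\n"
+     "Question: {q}\nChinese answer: {ch}\nEnglish answer: {en}"
+ )
+ 
+ JUDGE_PROMPT = (
+     "Rate the following answer to the question on a 1-10 scale for correctness "
+     "and helpfulness. Reply with the number only.\n\nQuestion: {q}\nAnswer: {a}"
+ )
+ 
+ def ask(prompt: str, model: str = "gpt-4") -> str:
+     resp = client.chat.completions.create(
+         model=model, messages=[{"role": "user", "content": prompt}]
+     )
+     return resp.choices[0].message.content.strip()
+ 
+ def parse_score(text: str) -> float:
+     match = re.search(r"\d+(\.\d+)?", text)
+     return float(match.group()) if match else 0.0
+ 
+ def integrate_and_score(q: str, ch_ans: str, en_ans: str,
+                         out_lang: str = "English", trusted_lang: str = "English") -> float:
+     """Integration strategy: merge both answers, then score with the LLM judge."""
+     merged = ask(INTEGRATION_PROMPT.format(out_lang=out_lang, trusted_lang=trusted_lang,
+                                            q=q, ch=ch_ans, en=en_ans))
+     return parse_score(ask(JUDGE_PROMPT.format(q=q, a=merged)))
+ 
+ def replace_and_score(q: str, selected_answer: str) -> float:
+     """Replacement strategy: the answer from the selected language is used as-is."""
+     return parse_score(ask(JUDGE_PROMPT.format(q=q, a=selected_answer)))
+ ```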
227
+ The Impact of Different Detection Models. As we build a separate low-resource detector for each language, the choice of tokenizer and classification model affects the training of the detector,
228
+
229
+ Table 5: Model performance on AlignBench (Liu et al., 2023a) in the setting of answer replacement and integration.
230
+
231
+ <table><tr><td>Model</td><td>Acc.</td><td>Recall</td><td>Precision</td><td>F1.</td></tr><tr><td>bert-base-chinese (ch)</td><td>86.64</td><td>86.64</td><td>86.68</td><td>86.66</td></tr><tr><td>bert-uncased (en)</td><td>94.98</td><td>94.98</td><td>94.88</td><td>94.91</td></tr><tr><td>Multilingual Bert (ch)</td><td>86.47</td><td>86.47</td><td>86.58</td><td>86.51</td></tr><tr><td>Multilingual Bert (en)</td><td>94.73</td><td>94.73</td><td>94.64</td><td>94.67</td></tr></table>
232
+
233
+ Table 6: The impact of model selection on detector training.
234
+
235
+ thereby influencing the overall performance. We adopt language-specific BERT models and a multilingual BERT model to train our low-resource query detectors and report the results in Table 6. As the table shows, using a language-specific model and tokenizer slightly improves the results over using a multilingual model.
236
+
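+ The detector itself is a standard binary sequence classifier. Below is a minimal fine-tuning sketch using Hugging Face Transformers; the file name `low_resource_queries.csv`, its `text`/`label` columns (1 = language-specific, 0 = commonsense), and the hyperparameters are illustrative assumptions. Swapping `bert-base-chinese` for `bert-base-multilingual-cased` reproduces the comparison in Table 6.
+ 
+ ```python
+ import pandas as pd
+ from datasets import Dataset
+ from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
+                           Trainer, TrainingArguments)
+ 
+ MODEL_NAME = "bert-base-chinese"  # or "bert-base-multilingual-cased"
+ 
+ # Hypothetical training file: one query per row with a binary label.
+ df = pd.read_csv("low_resource_queries.csv")  # columns: text, label
+ data = Dataset.from_pandas(df).train_test_split(test_size=0.1)
+ 
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
+ 
+ def tokenize(batch):
+     return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
+ 
+ data = data.map(tokenize, batched=True)
+ 
+ model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
+ 
+ args = TrainingArguments(
+     output_dir="low_resource_detector",
+     num_train_epochs=3,             # illustrative hyperparameters
+     per_device_train_batch_size=32,
+     learning_rate=2e-5,
+ )
+ 
+ Trainer(model=model, args=args,
+         train_dataset=data["train"],
+         eval_dataset=data["test"]).train()
+ ```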
237
+ # 5 Discussion on Other Approaches
238
+
239
+ As the confidence of generated content is related to its entropy during the generation process, a natural idea is to compute the entropy in different languages and compare them to decide which language is best suited to answer the question. This approach is widely used for measuring LLMs' uncertainty and detecting hallucinations (Manakul et al., 2023). However, our trials demonstrate that this approach is infeasible here: it achieves merely random-guess-level performance when selecting the right language for the given queries.
240
+
241
+ To explore how to leverage entropy-related statistics for selecting the target language, we train a model $f$ that takes these statistics as input and outputs the language selection $Y$ . The statistics we use for a language $l$ include the entropy of the query $E_{Q_l}$ , the entropy of the response $E_{R_l}$ , the perplexity of the query $P_{Q_l}$ , and the perplexity of the response $P_{R_l}$ . We adopt an MLP as the classification model $f:(E_{Q_l},E_{R_l},P_{Q_l},P_{R_l})\to Y$ and train it on the low-resource query dataset we construct. We also trained SVM and random-forest classifiers on Llama2-7b's outputs. The accuracy is no more than
242
+
243
+ $60\%$ , which is merely random-guess-level performance given the entropy-related statistics as input. We attribute this to the hallucination issue of LLMs: the model may remain overconfident even when its answer is wrong (Groot and Valdenegro-Toro, 2024), which indicates that current LLMs are not yet well calibrated (Zhang et al., 2024).
244
+
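+ For reference, a minimal sketch of this (unsuccessful) baseline is given below. It assumes the per-language entropy and perplexity features have already been extracted from the model's token log-probabilities into a hypothetical `features.csv`; the column names and the train/test split are illustrative.
+ 
+ ```python
+ import pandas as pd
+ from sklearn.model_selection import train_test_split
+ from sklearn.neural_network import MLPClassifier
+ from sklearn.pipeline import make_pipeline
+ from sklearn.preprocessing import StandardScaler
+ 
+ # Hypothetical file: E_Q, E_R, P_Q, P_R per candidate language, plus the
+ # labeled best answering language as the target.
+ df = pd.read_csv("features.csv")
+ feature_cols = ["E_Q_en", "E_R_en", "P_Q_en", "P_R_en",
+                 "E_Q_ch", "E_R_ch", "P_Q_ch", "P_R_ch"]
+ X, y = df[feature_cols].to_numpy(), df["best_language"].to_numpy()
+ 
+ X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
+ 
+ clf = make_pipeline(StandardScaler(),
+                     MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000))
+ clf.fit(X_tr, y_tr)
+ # In our trials this accuracy stays at roughly chance level (no more than 60%).
+ print("accuracy:", clf.score(X_te, y_te))
+ ```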
245
+ # 6 Conclusion
246
+
247
+ This paper presents a method to improve the multilingual capabilities of LLMs by aggregating knowledge across languages; it consists of a low-resource query detector, a language selection process, and answer replacement & integration. Our experiments show significant performance gains, especially in reducing disparities across languages, and each module in our method contributes to the improvement. Overall, this study underscores the potential of LLMs to unify multilingual capabilities and provides insights for future research.
248
+
249
+ # Limitations
250
+
251
+ Our method requires training a separate low-resource query detector for each language. This is inconvenient, as the developer for each language must construct a low-resource training set, which involves collecting language-specific data. The training set also needs to be updated over time as new language-specific content emerges.
252
+
253
+ # Ethics Statement
254
+
255
+ This study adheres to ethical standards in AI research and development. We acknowledge that while our methods aim to enhance the multilingual capabilities of LLMs, they must be implemented with careful consideration of potential biases. Efforts were made to ensure that the knowledge aggregation does not disproportionately favor any particular language or cultural perspective. We also emphasize transparency in our methodologies and findings to enable scrutiny and replication by the research community. The research was conducted without utilizing any personally identifiable information, thereby safeguarding privacy and upholding data protection standards. We commit to ongoing evaluation of our methods in diverse linguistic settings to address and mitigate any emergent biases or disparities. This research seeks not only to advance technology but also to promote inclusivity
256
+
257
+ and fairness in AI applications across different linguistic and cultural groups. In this paper, we utilized AI tools to assist with writing and coding, ensuring that they did not directly generate the paper's content and that their use adheres to academic standards. Additionally, we ensured that all datasets and benchmarks used in the study comply with their intended purposes and standards.
258
+
259
+ # References
260
+
261
+ 2024. Llama3-chinese. https://github.com/LlamaFamily/Llama-Chinese.
262
+ Hasan Abu-Rasheed, Christian Weber, and Madjid Fathi. 2024. Knowledge graphs as context sources for llm-based explanations of learning recommendations.
263
+ 01.AI: Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, Kaidong Yu, Peng Liu, Qiang Liu, Shawn Yue, Senbin Yang, Shiming Yang, Tao Yu, Wen Xie, Wenhao Huang, Xiaohui Hu, Xiaoyi Ren, Xinyao Niu, Pengcheng Nie, Yuchi Xu, Yudong Liu, Yue Wang, Yuxuan Cai, Zhenyu Gu, Zhiyuan Liu, and Zonghong Dai. 2024. Yi: Open foundation models by 01.ai.
264
+ Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2019. On the cross-lingual transferability of monolingual representations. CoRR, abs/1910.11856.
265
+ Akari Asai, Sneha Kudugunta, Xinyan Velocity Yu, Terra Blevins, Hila Gonen, Machel Reid, Yulia Tsvetkov, Sebastian Ruder, and Hannaneh Hajishirzi. 2023. Buffet: Benchmarking large language models for few-shot cross-lingual transfer. arXiv preprint arXiv:2305.14857.
266
+ Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609.
267
+ Dongping Chen, Yue Huang, Siyuan Wu, Jingyu Tang, Liuyi Chen, Yilin Bai, Zhigang He, Chenlong Wang, Huichi Zhou, Yiqiang Li, et al. 2024a. Gui-world: A dataset for gui-oriented multimodal llm-based agents. arXiv preprint arXiv:2406.10819.
268
+ Du Chen, Yi Huang, Xiaopu Li, Yongqiang Li, Yongqiang Liu, Haihui Pan, Leichao Xu, Dacheng Zhang, Zhipeng Zhang, and Kun Han. 2024b. Orion-14b: Open-source multilingual large language models.
269
+ 
270
+ 
271
+ Qinyuan Cheng, Tianxiang Sun, Wenwei Zhang, Siyin Wang, Xiangyang Liu, Mozhi Zhang, Junliang He, Mianqiu Huang, Zhangyue Yin, Kai Chen, and Xipeng Qiu. 2023. Evaluating hallucinations in Chinese large language models.
272
+ Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. Glm: General language model pretraining with autoregressive blank infilling.
273
+ Julen Etxaniz, Gorka Azkune, Aitor Soroa, Oier Lopez de Lacalle, and Mikel Artetxe. 2023. Do multilingual language models think better in english?
274
+ Shangbin Feng, Weijia Shi, Yike Wang, Wenxuan Ding, Vidhisha Balachandran, and Yulia Tsvetkov. 2024. Don't hallucinate, abstain: Identifying llm knowledge gaps via multi-llm collaboration.
275
+ Chujie Gao, Qihui Zhang, Dongping Chen, Yue Huang, Siyuan Wu, Zhengyan Fu, Yao Wan, Xiangliang Zhang, and Lichao Sun. 2024. The best of both worlds: Toward an honest and helpful large language model. arXiv preprint arXiv:2406.00380.
276
+ Tobias Groot and Matias Valdenegro-Toro. 2024. Overconfidence is key: Verbalized uncertainty evaluation in large language and vision-language models.
277
+ Xinyan Guan, Yanjiang Liu, Hongyu Lin, Yaojie Lu, Ben He, Xianpei Han, and Le Sun. 2024. Mitigating large language model hallucinations via autonomous knowledge graph-based retrofitting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 18126-18134.
278
+ Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V. Chawla, Olaf Wiest, and Xiangliang Zhang. 2024. Large language model based multi-agents: A survey of progress and challenges.
279
+ Taicheng Guo, Kehan Guo, Bozhao Nan, Zhenwen Liang, Zhichun Guo, Nitesh Chawla, Olaf Wiest, and Xiangliang Zhang. 2023. What can large language models do in chemistry? a comprehensive benchmark on eight tasks. In Advances in Neural Information Processing Systems, volume 36, pages 59662-59688. Curran Associates, Inc.
280
+ Haoyang Huang, Tianyi Tang, Dongdong Zhang, Xin Zhao, Ting Song, Yan Xia, and Furu Wei. 2023a. Not all languages are created equal in LLMs: Improving multilingual capability by cross-lingual-thought prompting. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12365–12394, Singapore. Association for Computational Linguistics.
281
+ Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, Yao Wan, Neil Zhenqiang Gong, et al. 2023b. Metatool benchmark for large language models: Deciding whether to use tools and which to use. arXiv preprint arXiv:2310.03128.
282
+ 
283
+ 
284
+ Yue Huang, Kai Shu, Philip S. Yu, and Lichao Sun. 2024a. From creation to clarification: Chatgpt's journey through the fake news quagmire. In Companion Proceedings of the ACM Web Conference 2024, WWW '24, page 513–516, New York, NY, USA. Association for Computing Machinery.
285
+ Yue Huang and Lichao Sun. 2024. Fakegpt: Fake news generation, explanation and detection of large language models.
286
+ Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, et al. 2024b. Position: Trustllm: Trustworthiness in large language models. In International Conference on Machine Learning, pages 20166-20270. PMLR.
287
+ Yue Huang, Qihui Zhang, Lichao Sun, et al. 2023c. Trustgpt: A benchmark for trustworthy and responsible large language models. arXiv preprint arXiv:2306.11507.
288
+ Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension.
289
+ Viet Dac Lai, Nghia Trung Ngo, Amir Pouran Ben Veyseh, Hieu Man, Franck Dernoncourt, Trung Bui, and Thien Huu Nguyen. 2023. Chatgpt beyond english: Towards a comprehensive evaluation of large language models in multilingual learning.
290
+ Chong Li, Wen Yang, Jiajun Zhang, Jinliang Lu, Shaonan Wang, and Chengqing Zong. 2024a. X-instruction: Aligning language model in low-resource languages with self-curated cross-lingual instructions.
291
+ Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. 2024b. Cmmlu: Measuring massive multitask language understanding in chinese.
292
+ Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023a. Halueval: A large-scale hallucination evaluation benchmark for large language models.
293
+ Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. 2016. Dataset and neural recurrent sequence labeling model for open-domain factoid question answering.
294
+ Yingji Li, Mengnan Du, Rui Song, Xin Wang, and Ying Wang. 2023b. A survey on fairness in large language models. arXiv preprint arXiv:2308.10149.
295
+
296
+ Yuan Li, Yue Huang, Yuli Lin, Siyuan Wu, Yao Wan, and Lichao Sun. 2024c. I think, therefore i am: Awareness in large language models. arXiv preprint arXiv:2401.17882.
297
+ Yuan Li, Yue Huang, Hongyi Wang, Xiangliang Zhang, James Zou, and Lichao Sun. 2024d. Quantifying ai psychology: A psychometrics benchmark for large language models. arXiv preprint arXiv:2406.17675.
298
+ Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Truthfulqa: Measuring how models mimic human falsehoods.
299
+ Chaoqun Liu, Wenxuan Zhang, Yiran Zhao, Anh Tuan Luu, and Lidong Bing. 2024. Is translation all you need? a study on solving multilingual tasks with large language models. arXiv preprint arXiv:2403.10258.
300
+ Xiao Liu, Xuanyu Lei, Shengyuan Wang, Yue Huang, Zhuoer Feng, Bosi Wen, Jiale Cheng, Pei Ke, Yifan Xu, Weng Lam Tam, et al. 2023a. Alignbench: Benchmarking chinese alignment of large language models. arXiv preprint arXiv:2311.18743.
301
+ Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. 2023b. Agentbench: Evaluating llms as agents.
302
+ Zhengliang Liu, Yue Huang, Xiaowei Yu, Lu Zhang, Zihao Wu, Chao Cao, Haixing Dai, Lin Zhao, Yiwei Li, Peng Shu, et al. 2023c. Deid-gpt: Zero-shot medical text de-identification by gpt-4. arXiv preprint arXiv:2303.11032.
303
+ Shayne Longpre, Yi Lu, and Joachim Daiber. 2021. Mkqa: A linguistically diverse benchmark for multilingual open domain question answering.
304
+ Potsawee Manakul, Adian Liusie, and Mark JF Gales. 2023. Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models. arXiv preprint arXiv:2303.08896.
305
+ Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev, Alham Fikri Aji, Khalid Almubarak, Samuel Albanie, Zaid Alyafeai, Albert Webson, Edward Raff, and Colin Raffel. 2023. Crosslingual generalization through multitask finetuning.
306
+ Thuat Nguyen, Chien Van Nguyen, Viet Dao Lai, Hieu Man, Nghia Trung Ngo, Franck Dernoncourt, Ryan A. Rossi, and Thien Huu Nguyen. 2023. Culturax: A cleaned, enormous, and multilingual dataset for large language models in 167 languages.
307
+ OpenAI. 2023a. Chatgpt. https://openai.com/product/chatgpt.
308
+
309
+ OpenAI. 2023b. Gpt-4. https://openai.com/gpt-4.
310
+ Nooshin Pourkamali and Shler Ebrahim Sharifi. 2024. Machine translation with large language models: Prompt engineering for persian, english, and russian directions. arXiv preprint arXiv:2401.08429.
311
+ Libo Qin, Qiguang Chen, Fuxuan Wei, Shijue Huang, and Wanxiang Che. 2023. Cross-lingual prompting: Improving zero-shot chain-of-thought reasoning across languages. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2695–2709, Singapore. Association for Computational Linguistics.
312
+ Libo Qin, Qiguang Chen, Yuhang Zhou, Zhi Chen, Yinghui Li, Lizi Liao, Min Li, Wanxiang Che, and Philip S. Yu. 2024. Multilingual large language model: A survey of resources, taxonomy and frontiers.
313
+ Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo Wang, Chen Lin, Yeyun Gong, Lionel M. Ni, Heung-Yeung Shum, and Jian Guo. 2024a. Think-on-graph: Deep and responsible reasoning of large language model on knowledge graph.
314
+ Kai Sun, Yifan Ethan Xu, Hanwen Zha, Yue Liu, and Xin Luna Dong. 2024b. Head-to-tail: How knowledgeable are large language models (llms)? a.k.a. will llms replace knowledge graphs?
315
+ InternLM Team. 2023. Internlm: A multilingual language model with progressively enhanced capabilities.
316
+ SM Tonmoy, SM Zaman, Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, and Amitava Das. 2024. A comprehensive survey of hallucination mitigation techniques in large language models. arXiv preprint arXiv:2401.01313.
317
+ Priyansh Trivedi, Gaurav Maheshwari, Mohnish Dubey, and Jens Lehmann. 2017. Lc-quad: A corpus for complex question answering over knowledge graphs. In International Semantic Web Conference, pages 210–218. Springer.
318
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023a. Chain-of-thought prompting elicits reasoning in large language models.
319
+ Xiangpeng Wei, Haoran Wei, Huan Lin, Tianhao Li, Pei Zhang, Xingzhang Ren, Mei Li, Yu Wan, Zhiwei Cao, Binbin Xie, Tianxiang Hu, Shangjie Li, Binyuan Hui, Bowen Yu, Dayiheng Liu, Baosong Yang, Fei Huang, and Jun Xie. 2023b. Polylm: An open source polyglot large language model.
320
+ Siyuan Wu, Yue Huang, Chujie Gao, Dongping Chen, Qihui Zhang, Yao Wan, Tianyi Zhou, Xiangliang Zhang, Jianfeng Gao, Chaowei Xiao, et al. 2024. Unigen: A unified framework for textual dataset generation using large language models. arXiv preprint arXiv:2406.18966.
321
+
322
+ Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, and Zhenzhong Lan. 2020. CLUE: A Chinese language understanding evaluation benchmark. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4762-4772, Barcelona, Spain (Online). International Committee on Computational Linguistics.
323
+
324
+ Rongwu Xu, Zehan Qi, Cunxiang Wang, Hongru Wang, Yue Zhang, and Wei Xu. 2024a. Knowledge conflicts for llms: A survey.
325
+
326
+ Yuemei Xu, Ling Hu, Jiayi Zhao, Zihan Qiu, Yuqi Ye, and Hanwen Gu. 2024b. A survey on multilingual large language models: Corpora, alignment, and bias.
327
+
328
+ Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, et al. 2023. Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305.
329
+
330
+ Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, and Xinyun Chen. 2024a. Large language models as optimizers.
331
+
332
+ Linyao Yang, Hongyang Chen, Zhao Li, Xiao Ding, and Xindong Wu. 2024b. Give us the facts: Enhancing large language models with knowledge graphs for fact-aware language modeling.
333
+
334
+ Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models.
335
+
336
+ Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, and Chao Zhang. 2023. Large language model as attributed training data generator: A tale of diversity and bias.
337
+
338
+ Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2022. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414.
339
+
340
+ Kai Zhang, Jun Yu, Zhiling Yan, Yixin Liu, Eashan Adhikarla, Sunyang Fu, Xun Chen, Chen Chen, Yuyin Zhou, Xiang Li, et al. 2023a. Biomedgpt: A unified and generalist biomedical generative pre-trained transformer for vision, language, and multimodal tasks. arXiv preprint arXiv:2305.17100.
341
+
342
+ Mozhi Zhang, Mianqiu Huang, Rundong Shi, Linsen Guo, Chong Peng, Peng Yan, Yaqian Zhou, and Xipeng Qiu. 2024. Calibrating the confidence of large language models by eliciting fidelity.
343
+
344
+ Shaolei Zhang, Qingkai Fang, Zhuocheng Zhang, Zhengrui Ma, Yan Zhou, Langlin Huang, Mengyu Bu, Shangtong Gui, Yunji Chen, Xilin Chen, and Yang Feng. 2023b. Bayling: Bridging cross-lingual alignment and instruction following through interactive translation for large language models.
345
+
346
+ Wenxuan Zhang, Sharifah Mahani Aljunied, Chang Gao, Yew Ken Chia, and Lidong Bing. 2023c. M3exam: A multilingual, multimodal, multilevel benchmark for examining large language models.
347
+
348
+ # A Dataset Details
349
+
350
+ - TriviaQA (Joshi et al., 2017) is a reading comprehension dataset that features more than 650,000 question-answer-evidence triples. It consists of question-answer pairs created by trivia enthusiasts, along with independently collected evidence documents (averaging six per question) that provide distant supervision for answering the queries.
351
+ - HaluEval (Li et al., 2023a) is a benchmark designed to assess how well LLMs recognize hallucinations, i.e., unverifiable or incorrect content in their outputs. It includes a collection of generated texts and human-annotated samples that help evaluate the models' performance in detecting such errors.
352
+ - HalluQA (Cheng et al., 2023) is a dataset consisting of 450 carefully crafted adversarial questions that cover various domains, incorporating elements of Chinese historical culture, customs, and social phenomena. It aims to evaluate LLMs on their propensity to produce two types of errors: imitative falsehoods and factual inaccuracies.
353
+ - WebQA (Li et al., 2016) is a large-scale, human-annotated real-world QA dataset, developed to address the scarcity of extensive real-world QA datasets for neural QA systems.
354
+
355
+ # B Experiment Results
356
+
357
+ We show the full experiment results in Table 7, Table 8, and Figure 8.
358
+
359
+ # C Details of Constructed Dataset
360
+
361
+ For the generated dataset, inspired by previous studies (Huang et al., 2023b; Yu et al., 2023), we employed attribute-guided prompting to instruct LLMs to generate relevant questions on specific topics, as illustrated in Table 9. We chose GPT-4 as our generation model because of its exceptional ability to follow instructions. The prompt template is shown in Figure 9. For the generated items, we
362
+
363
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">Lang.</td><td colspan="2">ChatGLM3</td><td colspan="2">ChatGPT</td><td colspan="2">GPT-4</td><td colspan="2">Yi-34b</td><td colspan="2">Qwen-turbo</td><td colspan="2">Llama3-Ch.</td></tr><tr><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td></tr><tr><td rowspan="2">HalluEval</td><td>(en)</td><td>18.03%</td><td>18.03%</td><td>57.98%</td><td>57.84%</td><td>67.13%</td><td>66.99%</td><td>42.86%</td><td>41.75%</td><td>29.31%</td><td>29.59%</td><td>40.67%</td><td>40.67%</td></tr><tr><td>(ch)</td><td>11.23%</td><td>17.34%</td><td>32.07%</td><td>52.38%</td><td>47.99%</td><td>65.05%</td><td>25.10%</td><td>39.81%</td><td>19.35%</td><td>26.51%</td><td>25.35%</td><td>37.33%</td></tr><tr><td rowspan="2">HalluQA</td><td>(en)</td><td>20.00%</td><td>25.41%</td><td>34.27%</td><td>30.90%</td><td>51.89%</td><td>53.51%</td><td>38.38%</td><td>47.03%</td><td>25.97%</td><td>37.57%</td><td>22.83%</td><td>18.48%</td></tr><tr><td>(ch)</td><td>22.16%</td><td>22.70%</td><td>21.91%</td><td>25.28%</td><td>49.73%</td><td>51.89%</td><td>45.95%</td><td>45.95%</td><td>43.65%</td><td>39.78%</td><td>15.22%</td><td>20.65%</td></tr><tr><td>Chinese</td><td>(en)</td><td>9.52%</td><td>21.21%</td><td>41.85%</td><td>42.73%</td><td>56.71%</td><td>57.58%</td><td>33.33%</td><td>57.58%</td><td>27.19%</td><td>48.25%</td><td>30.74%</td><td>24.24%</td></tr><tr><td>Domain</td><td>(ch)</td><td>32.47%</td><td>25.54%</td><td>41.85%</td><td>42.29%</td><td>59.31%</td><td>58.44%</td><td>63.64%</td><td>57.14%</td><td>62.28%</td><td>62.28%</td><td>18.61%</td><td>22.51%</td></tr><tr><td rowspan="2">triviaQA</td><td>(en)</td><td>36.32%</td><td>36.38%</td><td>90.53%</td><td>90.31%</td><td>94.09%</td><td>93.93%</td><td>79.33%</td><td>78.90%</td><td>59.59%</td><td>59.47%</td><td>77.27%</td><td>77.05%</td></tr><tr><td>(ch)</td><td>21.33%</td><td>32.22%</td><td>54.60%</td><td>83.33%</td><td>82.77%</td><td>92.29%</td><td>59.43%</td><td>76.27%</td><td>41.53%</td><td>53.55%</td><td>43.92%</td><td>66.32%</td></tr><tr><td rowspan="2">WebQA</td><td>(en)</td><td>28.51%</td><td>38.96%</td><td>59.08%</td><td>58.98%</td><td>67.70%</td><td>69.61%</td><td>57.07%</td><td>69.71%</td><td>49.48%</td><td>62.11%</td><td>50.00%</td><td>47.08%</td></tr><tr><td>(ch)</td><td>48.69%</td><td>42.07%</td><td>57.35%</td><td>59.29%</td><td>72.52%</td><td>72.32%</td><td>76.93%</td><td>74.12%</td><td>71.12%</td><td>70.70%</td><td>37.02%</td><td>40.54%</td></tr></table>
364
+
365
+ Table 7: Six LLMs' performance on the setting without a low-resource detector.
366
+
367
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">Lang.</td><td colspan="2">ChatGLM3</td><td colspan="2">ChatGPT</td><td colspan="2">GPT-4</td><td colspan="2">Yi-34b</td><td colspan="2">Qwen-turbo</td><td colspan="2">Llama3-Ch.</td></tr><tr><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td><td>Orig.</td><td>Impro.</td></tr><tr><td rowspan="2">HalluEval</td><td>(en)</td><td>18.03%</td><td>18.31%</td><td>57.98%</td><td>57.70%</td><td>67.13%</td><td>66.57%</td><td>42.86%</td><td>42.72%</td><td>29.31%</td><td>29.17%</td><td>40.67%</td><td>40.25%</td></tr><tr><td>(ch)</td><td>11.23%</td><td>18.03%</td><td>32.07%</td><td>56.02%</td><td>47.99%</td><td>66.16%</td><td>25.10%</td><td>42.58%</td><td>19.35%</td><td>28.61%</td><td>25.34%</td><td>39.97%</td></tr><tr><td rowspan="2">HalluQA</td><td>(en)</td><td>20.00%</td><td>25.95%</td><td>34.27%</td><td>32.02%</td><td>51.89%</td><td>53.51%</td><td>38.38%</td><td>46.49%</td><td>25.97%</td><td>38.67%</td><td>22.83%</td><td>21.74%</td></tr><tr><td>(ch)</td><td>22.16%</td><td>23.78%</td><td>21.91%</td><td>23.60%</td><td>49.73%</td><td>51.35%</td><td>45.95%</td><td>44.86%</td><td>43.65%</td><td>41.44%</td><td>15.21%</td><td>15.76%</td></tr><tr><td>Chinese</td><td>(en)</td><td>9.52%</td><td>32.03%</td><td>41.85%</td><td>41.85%</td><td>56.71%</td><td>59.31%</td><td>33.33%</td><td>61.04%</td><td>27.19%</td><td>58.33%</td><td>30.74%</td><td>19.48%</td></tr><tr><td>Domain</td><td>(ch)</td><td>32.47%</td><td>30.74%</td><td>41.85%</td><td>41.41%</td><td>59.31%</td><td>58.44%</td><td>63.64%</td><td>60.17%</td><td>62.28%</td><td>58.33%</td><td>18.61%</td><td>18.61%</td></tr><tr><td rowspan="2">triviaQA</td><td>(en)</td><td>36.32%</td><td>35.78%</td><td>90.53%</td><td>89.09%</td><td>94.09%</td><td>93.54%</td><td>79.33%</td><td>78.73%</td><td>59.59%</td><td>58.80%</td><td>77.27%</td><td>76.12%</td></tr><tr><td>(ch)</td><td>21.33%</td><td>35.94%</td><td>54.60%</td><td>89.15%</td><td>82.77%</td><td>93.22%</td><td>59.43%</td><td>78.18%</td><td>41.53%</td><td>58.41%</td><td>43.92%</td><td>74.92%</td></tr><tr><td rowspan="2">WebQA</td><td>(en)</td><td>28.51%</td><td>44.38%</td><td>59.08%</td><td>59.90%</td><td>67.70%</td><td>70.81%</td><td>57.07%</td><td>73.72%</td><td>49.48%</td><td>67.70%</td><td>50.00%</td><td>46.48%</td></tr><tr><td>(ch)</td><td>48.69%</td><td>46.99%</td><td>57.35%</td><td>58.88%</td><td>72.52%</td><td>71.61%</td><td>76.93%</td><td>74.22%</td><td>71.12%</td><td>69.25%</td><td>37.02%</td><td>41.15%</td></tr></table>
368
+
369
+ Table 8: Six LLMs' performance on the setting without language selection.
370
+
371
+ ![](images/be5fe758988fbd0a06bee09a72b7d8d1d028323ff820d50a4fa3a85af29df222.jpg)
372
+
373
+ ![](images/b8d9500c30a3e8648374680a310f7569e7e5e2ebcd1247f28822c2ecf7485356.jpg)
374
+
375
+ ![](images/728e79f1eecb99928a5e4cb387f0eac93ef4b1f1f2a2edd4d6a04a08adb0ae1d.jpg)
376
+
377
+ Figure 8: Performance percentage of LLMs across different datasets. 'All correct' indicates that the LLMs answered correctly in both the Chinese and English datasets. 'All wrong' signifies that the LLMs answered incorrectly in both datasets. 'Half' denotes that the LLMs answered correctly in only one of the datasets.
378
+ ![](images/d83e20292f1422b85e900239f5729bbba8ddc58f253f7a7223841e2c69d291cb.jpg)
379
+ 
380
+
381
+ ![](images/46b3bb49124927ce4e4c0970c3feb2266d65b684b18e0000703cd8be2251db34.jpg)
382
+
383
+ ![](images/9283013f87a735b3276be066948a32fc7786ad6b79e813dc9debd0b1a4e99337.jpg)
384
+
385
+ manually evaluated the correctness of their labels to ensure data quality.
386
+
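+ A minimal sketch of this generation step is shown below; it assumes an OpenAI-compatible client, the instruction text is abridged from the Figure 9 template, and the topic list corresponds to Table 9. The raw reply is parsed as JSON and then checked manually as described above.
+ 
+ ```python
+ import json
+ from openai import OpenAI
+ 
+ client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
+ 
+ # Abridged from Figure 9: ten questions per category, returned as JSON that
+ # maps each question to its category, written in Chinese.
+ GENERATION_TEMPLATE = (
+     "I will provide you with a topic. Generate ten questions for each of three "
+     "categories: English knowledge, Chinese knowledge, and knowledge with no "
+     "specific language. Return the data in JSON format mapping each question to "
+     "its category, and write the questions in Chinese.\n"
+     "The topic I provide is: {topic}"
+ )
+ 
+ def generate_items(topic: str, model: str = "gpt-4") -> dict:
+     reply = client.chat.completions.create(
+         model=model,
+         messages=[{"role": "user", "content": GENERATION_TEMPLATE.format(topic=topic)}],
+     ).choices[0].message.content
+     return json.loads(reply)  # {"question": "category", ...}; labels verified manually
+ 
+ if __name__ == "__main__":
+     for topic in ["History", "Geography", "Artificial Intelligence"]:  # see Table 9
+         print(topic, len(generate_items(topic)))
+ ```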
387
+ # D Collection of Chinese Domain Dataset
388
+
389
+ Our Chinese domain dataset consists of 227 items. It covers knowledge and information specific to Chinese contexts, including geography, history, culture, and more. We sourced the content from a broad range of Chinese social media platforms and search engines. After initial curation, we filtered out items that cannot be accurately translated into English or whose meaning would shift upon translation, such as phrases from ancient Chinese.
390
+
391
+ # E Answer Evaluation
392
+
393
+ We adopt the LLM-as-a-Judge approach for answer evaluation in all experiments to reduce the bias introduced by keyword matching. We use GPT-4 as the judge because of its exceptional capability and its wide use in previous studies (Liu et al., 2023a; Gao et al., 2024). For the five QA datasets, we use the prompt template shown in Figure 10.
394
+
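+ A minimal sketch of this evaluation loop is shown below; it assumes an OpenAI-compatible client, and the judging instruction is abridged from the Figure 10 template.
+ 
+ ```python
+ from openai import OpenAI
+ 
+ client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
+ 
+ # Abridged from Figure 10: the judge sees the question, the gold answer, and
+ # the answer under evaluation, and must reply with 'correct' or 'wrong'.
+ JUDGE_TEMPLATE = (
+     "Assess whether the evaluated answer expresses the same meaning as, or "
+     "contains, the correct answer. Ignore case. Return only one word: "
+     "'correct' or 'wrong'.\n\nQuestion: {question}\nCorrect answer: {gold}\n"
+     "Answer to evaluate: {response}"
+ )
+ 
+ def judge(question: str, gold: str, response: str, model: str = "gpt-4") -> bool:
+     prompt = JUDGE_TEMPLATE.format(question=question, gold=gold, response=response)
+     reply = client.chat.completions.create(
+         model=model, messages=[{"role": "user", "content": prompt}]
+     ).choices[0].message.content
+     return reply.strip().lower().startswith("correct")
+ 
+ def accuracy(samples: list) -> float:
+     """samples: iterable of dicts with 'question', 'gold', and 'response' keys."""
+     verdicts = [judge(s["question"], s["gold"], s["response"]) for s in samples]
+     return sum(verdicts) / max(len(verdicts), 1)
+ ```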
395
+ # F Prompt Template
396
+
397
+ We show the prompt templates used in our study in Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, and Figure 14.
398
+
399
+ # G Screenshot of Human Evaluation
400
+
401
+ We show the screenshots of human evaluation in Figure 15 and Figure 16.
402
+
403
+ # Prompt Template
404
+
405
+ Next, I will provide you with a topic, and you will assist me in generating data based on this topic. I need you to generate three categories: questions with an English background, questions with a Chinese background, and questions with no specific language.
406
+
407
+ I will provide you with some examples:
408
+
409
+ Question: Piaget believes that communication has two functions, one is the egocentric function, and the other is?
410
+
411
+ Category: English knowledge
412
+
413
+ Question: With one byte, how many different codes can be generated at most?
414
+
415
+ Category: Knowledge with no specific language
416
+
417
+ Question: What are some famous dishes from Guangdong?
418
+
419
+ Category: Chinese knowledge
420
+
421
+ For each type of question, you need to generate ten, a total of thirty.
422
+
423
+ You need to return the data in JSON format, as follows:
424
+
425
+ ```txt
426
+ { "Question": "Category", "Question": "Category", "Question": "Category", "Question": "Category", ... }
427
+ ```
428
+
429
+ Please generate the corresponding data in Chinese.
430
+
431
+ The topic I provide is: [TOPIC]
432
+
433
+ Figure 9: Prompt template for the generated dataset.
434
+
435
+ # Prompt Template
436
+
437
+ As a helpful assistant, your task is now to help me assess the correctness of the provided answers. I will present a question along with its correct answer. Subsequently, I will also provide you with the answer you need to evaluate. If the answer to be evaluated correctly expresses the same meaning as the correct answer or contains the correct answer, then it is right. Ignore case errors. Although there are some errors in certain explanations within the answer, as long as the core answer is correct, the response is considered correct. Return me only one word: 'correct' or 'wrong'.
438
+
439
+ Here is the question and its correct answer:
440
+
441
+ Question: [QUESTION]
442
+
443
+ Answer: [ANSWER]
444
+
445
+ Here is the answer you should evaluate: [RES]
446
+
447
+ Figure 10: Prompt template for LLM-as-a-Judge.
448
+
449
+ # Prompt Template
450
+
451
+ You are a very helpful assistant. I will provide you with a question and the answers in both Chinese and English. You need to integrate the Chinese and English answers to provide the final answer. During the integration process, you need to follow these rules:
452
+
453
+ 1. You should primarily refer to the Chinese answer, appropriately integrating parts of the English answer.
454
+ 2. If there is a factual conflict between the English and Chinese answers, you must refer to the Chinese answer.
455
+ 3. The integrated answer should be of higher quality than the individual answers and better address the corresponding question.
456
+ 4. The integrated answer must be all in English
457
+ Question: [[Q]]
458
+ Chinese answer: [[CH_RES]]
459
+ English answer: [[EN_RES]]
460
+
461
+ Figure 11: Prompt template for integration (For the situation when the selected language is English).
462
+
463
+ # Prompt Template
464
+
465
+ 你是一个非常有帮助的助手。我将给你提供一个问题,以及该问题的中英文的答案。你需要融合中英文答案,给出最终的答案。在融合答案的过程中,你需要遵循下面的规则:
466
+
467
+ 1.你需要着重参考英文的答案,适当融合部分中文的答案。
468
+ 2. 如果英文的答案与中文的答案发生事实性冲突,你必须参考英文的答案。
469
+ 3. 融合后的答案应该比融合前的答案具有更高的质量,更好地回答对应的问题。
470
+ 4. 融合后的答案必须全都是中文。
471
+
472
+ 问题:[[Q]]
473
+
474
+ 中文答案:[[CH_RES]]
475
+
476
+ 英文答案:[[EN_RES]]
477
+
478
+ Figure 12: Prompt template for integration (For the situation when the selected language is Chinese).
479
+
480
+ # Prompt Template
481
+
482
+ As a helpful assistant, you need to categorize an English question, considering that the background of this question is not common in an English environment. Therefore, you need to choose the most suitable language for this question. You need to analyze the required language context for the question first, and then tell me at the end which language you think is most suitable to answer the question. The question is as follows:
483
+
484
+ Figure 13: Prompt template for language selection (For the query in English).
485
+
486
+ # Prompt Template
487
+
488
+ 作为乐于助人的助理,您需要将一个中文问题进行分类,考虑到该问题背景在中文环境中并不常见,因此您需要返回最适合该问题的语言。你需要首先对问题所需要的语言环境进行分析,然后在最后告诉我你返回的最适合回答该问题的语言。问题如下:
489
+
490
+ Figure 14: Prompt template for language selection (For the query in Chinese).
491
+
492
+ ![](images/6305c621e15690a7f35162f479e381d17da7ff173a1c73ef847133504d01d4bc.jpg)
493
+ Figure 15: Screenshot of human annotation (1).
494
+
495
+ ![](images/9dcc9dff0a4961f121073d0510c91108a40de78e734e2cb0c6beb0a33b8f6026.jpg)
496
+ Figure 16: Screenshot of human annotation (2).
497
+
498
+ Table 9: Topics used for data generation.
499
+
500
+ <table><tr><td colspan="4">Topic Word</td></tr><tr><td>History</td><td>Literature</td><td>Science</td><td>Art</td></tr><tr><td>Social Sciences</td><td>Technology</td><td>Philosophy</td><td>Geography</td></tr><tr><td>Culture</td><td>Health</td><td>Artificial Intelligence</td><td>Machine Learning</td></tr><tr><td>Big Data</td><td>Blockchain</td><td>Internet of Things</td><td>Environmental Protection</td></tr><tr><td>Sustainable Development</td><td>Energy</td><td>Finance</td><td>Education</td></tr><tr><td>Human Genetics</td><td>Artificial Life</td><td>Space Exploration</td><td>Food Science</td></tr><tr><td>Sports</td><td>Psychology</td><td>Political Science</td><td>Economics</td></tr><tr><td>Sociology</td><td>Law</td><td></td><td></td></tr></table>
2024/1+1_2_ Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators_/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:228411569641352315d71fa363fdfb7a46122767a5e501deaa24ae33ce0598ba
3
+ size 995634
2024/1+1_2_ Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators_/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution/bafb35f6-8fe7-40f9-8743-f4d30539c3ef_content_list.json ADDED
@@ -0,0 +1,1948 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 134,
8
+ 89,
9
+ 867,
10
+ 130
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Zhengmian Hu $^{1,2*}$ , Tong Zheng $^{1*}$ , Heng Huang $^{1}$",
17
+ "bbox": [
18
+ 292,
19
+ 145,
20
+ 707,
21
+ 165
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "$^{1}$ Department of Computer Science, University of Maryland, College Park, MD 20742 $^{2}$ Adobe Research",
28
+ "bbox": [
29
+ 152,
30
+ 178,
31
+ 845,
32
+ 212
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "huzhengmian@gmail.com, zhengtong12356@gmail.com, heng@umd.edu,",
39
+ "bbox": [
40
+ 268,
41
+ 215,
42
+ 727,
43
+ 230
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "Abstract",
50
+ "text_level": 1,
51
+ "bbox": [
52
+ 260,
53
+ 260,
54
+ 342,
55
+ 275
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "Authorship attribution aims to identify the origin or author of a document. Traditional approaches have heavily relied on manual features and fail to capture long-range correlations, limiting their effectiveness. Recent advancements leverage text embeddings from pretrained language models, which require significant fine-tuning on labeled data, posing challenges in data dependency and limited interpretability. Large Language Models (LLMs), with their deep reasoning capabilities and ability to maintain long-range textual associations, offer a promising alternative. This study explores the potential of pre-trained LLMs in one-shot authorship attribution, specifically utilizing Bayesian approaches and probability outputs of LLMs. Our methodology calculates the probability that a text entails previous writings of an author, reflecting a more nuanced understanding of authorship. By utilizing only pre-trained models such as Llama-3-70B, our results on the IMDb and blog datasets show an impressive $85\\%$ accuracy in one-shot authorship classification across ten authors. Our findings set new baselines for one-shot authorship analysis using LLMs and expand the application scope of these models in forensic linguistics. This work also includes extensive ablation studies to validate our approach.",
62
+ "bbox": [
63
+ 142,
64
+ 288,
65
+ 460,
66
+ 700
67
+ ],
68
+ "page_idx": 0
69
+ },
70
+ {
71
+ "type": "text",
72
+ "text": "1 Introduction",
73
+ "text_level": 1,
74
+ "bbox": [
75
+ 114,
76
+ 712,
77
+ 260,
78
+ 727
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "text",
84
+ "text": "Authorship attribution, the process of identifying the origin or author of a document, has been a longstanding challenge in forensic linguistics. It has numerous applications, including detecting plagiarism (Alzahrani et al., 2011) and attribution of historical text (Silva et al., 2023). As the digital age progresses, the need for reliable methods to determine authorship has become increasingly important, especially in the context of combating misinformation spread through social media and con",
85
+ "bbox": [
86
+ 112,
87
+ 737,
88
+ 490,
89
+ 898
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "ducting forensic analysis. The ability to attribute authorship can also lead to challenges around privacy and anonymity (Juola et al., 2008).",
96
+ "bbox": [
97
+ 507,
98
+ 261,
99
+ 884,
100
+ 309
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "text",
106
+ "text": "The field traces its roots back to the early 19th century (Mechti and Almansour, 2021), with early studies focusing on stylistic features and human expert analysis (Mosteller and Wallace, 1963). Traditional methods often relied on stylometry, which quantifies writing styles (Holmes, 1994), and rule-based computational linguistic methods (Stamatatos, 2009) to deduce authorship. Later, statistical algorithms incorporating extensive text preprocessing and feature engineering (Bozkurt et al., 2007; Seroussi et al., 2014) were introduced to improve accuracy. However, these methods often struggled with capturing long-range dependencies in text and require careful setup of specific thresholds for various indicators, which can be challenging to select effectively. They also involve designing complex, high-quality features, which can be costly and time-consuming.",
107
+ "bbox": [
108
+ 507,
109
+ 309,
110
+ 885,
111
+ 598
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "text",
117
+ "text": "The advent of deep learning has transformed the landscape of authorship attribution by turning the problem into a multi-class classification challenge, allowing for the capture of more features and addressing more complex scenarios effectively (Ruder et al., 2016; Ge et al., 2016; Shrestha et al., 2017; Zhang et al., 2018). However, these neural network (NN) models often lack interpretability and struggle with generalization in cases of limited samples.",
118
+ "bbox": [
119
+ 507,
120
+ 599,
121
+ 885,
122
+ 760
123
+ ],
124
+ "page_idx": 0
125
+ },
126
+ {
127
+ "type": "text",
128
+ "text": "Despite advancements, the field still faces significant challenges. Obtaining large, balanced datasets that represent multiple authors fairly is difficult, and as the number of authors increases, the accuracy of machine learning models tends to decrease.",
129
+ "bbox": [
130
+ 507,
131
+ 760,
132
+ 884,
133
+ 841
134
+ ],
135
+ "page_idx": 0
136
+ },
137
+ {
138
+ "type": "text",
139
+ "text": "On the other hand, language models, central to modern NLP applications, define the probability of distributions of words or sequences of words and have traditionally been used to predict and generate plausible language. Yet, for a long time, these",
140
+ "bbox": [
141
+ 507,
142
+ 841,
143
+ 884,
144
+ 921
145
+ ],
146
+ "page_idx": 0
147
+ },
148
+ {
149
+ "type": "page_footnote",
150
+ "text": "*These authors contributed equally to this work.",
151
+ "bbox": [
152
+ 136,
153
+ 906,
154
+ 430,
155
+ 921
156
+ ],
157
+ "page_idx": 0
158
+ },
159
+ {
160
+ "type": "page_number",
161
+ "text": "13216",
162
+ "bbox": [
163
+ 475,
164
+ 927,
165
+ 527,
166
+ 941
167
+ ],
168
+ "page_idx": 0
169
+ },
170
+ {
171
+ "type": "footer",
172
+ "text": "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 13216-13227 November 12-16, 2024 ©2024 Association for Computational Linguistics",
173
+ "bbox": [
174
+ 152,
175
+ 945,
176
+ 845,
177
+ 972
178
+ ],
179
+ "page_idx": 0
180
+ },
181
+ {
182
+ "type": "text",
183
+ "text": "models, including high-bias models like bag-of-words and n-gram models, struggled to fit the true probability distributions of natural language. Deep learning's rapid development has enabled orders of magnitude scaling up of computing and data, facilitating the use of more complex models such as Random Forests (Breiman, 2001), character-level CNNs (Zafar et al., 2020), Recurrent Neural Networks (Bagnall, 2015), and Transformer (Vaswani et al., 2017).",
184
+ "bbox": [
185
+ 112,
186
+ 84,
187
+ 492,
188
+ 243
189
+ ],
190
+ "page_idx": 1
191
+ },
192
+ {
193
+ "type": "text",
194
+ "text": "The recent rapid evolution of Large Language Models (LLMs) has dramatically improved the ability to fit natural language distributions. Trained on massive corpora exceeding 1 trillion tokens, these models have become highly capable of handling a wide range of linguistic tasks, including understanding, generation, and meaningful dialogue (Liang et al., 2022; Bubeck et al., 2023; Zhang et al., 2023a, 2024). They can also explain complex concepts and capture subtle nuances of language. They have been extensively applied in various applications such as chatbots, writing assistants, information retrieval, and translation services. More impressively, LLMs have expanded their utility to novel tasks without additional training, simply through the use of prompts and in-context learning (Brown et al., 2020). This unique ability motivates researchers to adapt LLMs to an even broader range of tasks and topics including reasoning (Wei et al., 2022), theory of mind (Kosinski, 2023) and medical scenario (Singhal et al., 2023).",
195
+ "bbox": [
196
+ 115,
197
+ 244,
198
+ 490,
199
+ 583
200
+ ],
201
+ "page_idx": 1
202
+ },
203
+ {
204
+ "type": "text",
205
+ "text": "Interestingly, language models have also been explored for authorship attribution (Agun and Yilmazel, 2017; Le and Mikolov, 2014; McCallum, 1999). Recently, research has utilized LLMs for question answering (QA) tasks within the application of authorship verification and authorship attribution (Huang et al., 2024), though these have primarily been tested in small-scale settings. Other approaches have attempted to leverage model embeddings and fine-tuning for authorship attribution, such as using GAN-BERT (Silva et al., 2023) and BERTAA (Fabien et al., 2020). However, these techniques often face challenges with scalability and need retraining when updating candidate authors. Moreover, they require relatively large dataset and multiple epochs of fine-tuning to converge. Given the challenges with current approaches, a natural question arises: How can we harness LLMs for more effective authorship attribution?",
206
+ "bbox": [
207
+ 112,
208
+ 583,
209
+ 490,
210
+ 903
211
+ ],
212
+ "page_idx": 1
213
+ },
214
+ {
215
+ "type": "text",
216
+ "text": "Two aspects of evidence provide insights to an-",
217
+ "bbox": [
218
+ 131,
219
+ 904,
220
+ 489,
221
+ 921
222
+ ],
223
+ "page_idx": 1
224
+ },
225
+ {
226
+ "type": "text",
227
+ "text": "swer the above questions. First, recent studies on LLMs have shown that these models possess hallucination problems (Ji et al., 2023). More interestingly, the outputs of LLMs given prompts may disagree with their internal thinking (Liu et al., 2023). Therefore, it is advisable not to rely solely on direct sampling result from LLMs. Second, the training objective of LLMs is to maximize the likelihood of the next token given all previous tokens. This indicates that probability may be a potential indicator for attributing texts to authors.",
228
+ "bbox": [
229
+ 507,
230
+ 84,
231
+ 885,
232
+ 260
233
+ ],
234
+ "page_idx": 1
235
+ },
236
+ {
237
+ "type": "text",
238
+ "text": "Language models are essentially probabilistic models, but we find the probabilistic nature of LLMs and their potential for authorship identification remains underexploited. Our study seeks to bridge this gap. Specifically, we explore the capability of LLMs to perform one-shot authorship attribution among multiple candidates.",
239
+ "bbox": [
240
+ 507,
241
+ 261,
242
+ 885,
243
+ 373
244
+ ],
245
+ "page_idx": 1
246
+ },
247
+ {
248
+ "type": "text",
249
+ "text": "We propose a novel approach based on a Bayesian framework that utilizes the probability outputs from LLMs. By deriving text-level log probabilities from token-level log probabilities, we establish a reliable measure of likelihood that a query text was written by a specific author given example texts from each candidate author. We also design suitable prompts to enhance the accuracy of these log probabilities. By calculating the posterior probability of authorship, we can infer the most likely author of a document (Figure 1). Due to the pivotal role of log probability in our algorithm, we coined our approach the \"Logprob method.\"",
250
+ "bbox": [
251
+ 507,
252
+ 375,
253
+ 885,
254
+ 583
255
+ ],
256
+ "page_idx": 1
257
+ },
258
+ {
259
+ "type": "text",
260
+ "text": "Our new method has three main advantages:",
261
+ "bbox": [
262
+ 526,
263
+ 583,
264
+ 858,
265
+ 599
266
+ ],
267
+ "page_idx": 1
268
+ },
269
+ {
270
+ "type": "list",
271
+ "sub_type": "text",
272
+ "list_items": [
273
+ "- No Need for Fine-Tuning: Our approach aligns the classification task with the pretraining objective, both focusing on computing entailment probability. This avoids any objective mismatch introduced by fine-tuning. Moreover, our method leverages the inherent capabilities of pretrained LLMs and avoids knowledge forgetting that often occurs during fine-tuning.",
274
+ "- Speed and Efficiency: This approach requires only a single forward pass through the model for each author, making it significantly faster and more cost-effective compared to normal question-answering method of language models which involves sampling a sequence of tokens as answer, with one forward pass for each token generated.",
275
+ "- No Need for Manual Feature Engineering: The pre-training on diverse data enables LLMs to au"
276
+ ],
277
+ "bbox": [
278
+ 512,
279
+ 609,
280
+ 885,
281
+ 921
282
+ ],
283
+ "page_idx": 1
284
+ },
285
+ {
286
+ "type": "page_number",
287
+ "text": "13217",
288
+ "bbox": [
289
+ 477,
290
+ 927,
291
+ 524,
292
+ 940
293
+ ],
294
+ "page_idx": 1
295
+ },
296
+ {
297
+ "type": "image",
298
+ "img_path": "images/cea13a2e406ea9dec89c42125cef1fb685c8f8f55b747706e39744a9526b7b66.jpg",
299
+ "image_caption": [
300
+ "Figure 1: Illustration of bayesian authorship attribution using LLM."
301
+ ],
302
+ "image_footnote": [],
303
+ "bbox": [
304
+ 119,
305
+ 80,
306
+ 880,
307
+ 272
308
+ ],
309
+ "page_idx": 2
310
+ },
311
+ {
312
+ "type": "text",
313
+ "text": "tomatically capture and utilize subtle nuances in language, thus eliminating the need for manually designing complex features, which can be costly and time-consuming.",
314
+ "bbox": [
315
+ 129,
316
+ 319,
317
+ 487,
318
+ 382
319
+ ],
320
+ "page_idx": 2
321
+ },
322
+ {
323
+ "type": "text",
324
+ "text": "By applying this technique, we have achieved state-of-the-art results in one-shot learning on the IMDb and blog datasets, demonstrating an impressive $85\\%$ accuracy across ten authors. This advancement establishes a new baseline for one-shot authorship analysis and illustrates the robust potential of LLMs in forensic linguistics.",
325
+ "bbox": [
326
+ 112,
327
+ 400,
328
+ 489,
329
+ 512
330
+ ],
331
+ "page_idx": 2
332
+ },
333
+ {
334
+ "type": "text",
335
+ "text": "2 Method",
336
+ "text_level": 1,
337
+ "bbox": [
338
+ 112,
339
+ 527,
340
+ 218,
341
+ 541
342
+ ],
343
+ "page_idx": 2
344
+ },
345
+ {
346
+ "type": "text",
347
+ "text": "Our approach to authorship attribution is based on a Bayesian framework. Given a document whose authorship is unknown, our objective is to identify the most probable author from a set using the capabilities of Large Language Models (LLMs).",
348
+ "bbox": [
349
+ 112,
350
+ 555,
351
+ 487,
352
+ 634
353
+ ],
354
+ "page_idx": 2
355
+ },
356
+ {
357
+ "type": "text",
358
+ "text": "We consider a scenario where we have a set of authors $\\mathcal{A} = \\{a_1, \\ldots, a_n\\}$ and a set of all possible texts $\\mathcal{E}$ . Given an authorship attribution problem, where each author $a_i$ has written a set of texts $t_{i,1}, t_{i,2}, \\ldots, t_{i,m_i} \\in \\mathcal{E}$ , we denote the collection of known texts of an author $a_i$ as $t(a_i) = (t_{i,1}, t_{i,2}, \\ldots, t_{i,m_i})$ . For an unknown text $u \\in \\mathcal{E}$ , we aim to determine the most likely author from the set $\\mathcal{A}$ .",
359
+ "bbox": [
360
+ 112,
361
+ 637,
362
+ 487,
363
+ 778
364
+ ],
365
+ "page_idx": 2
366
+ },
367
+ {
368
+ "type": "text",
369
+ "text": "To estimate the author of text $u$ , we use a Bayesian framework where the probability that $u$ was written by author $a_{i}$ is given by:",
370
+ "bbox": [
371
+ 112,
372
+ 783,
373
+ 487,
374
+ 831
375
+ ],
376
+ "page_idx": 2
377
+ },
378
+ {
379
+ "type": "equation",
380
+ "text": "\n$$\nP \\left(a _ {i} \\mid u\\right) = \\frac {P \\left(u \\mid a _ {i}\\right) P \\left(a _ {i}\\right)}{P (u)}. \\tag {1}\n$$\n",
381
+ "text_format": "latex",
382
+ "bbox": [
383
+ 200,
384
+ 844,
385
+ 487,
386
+ 879
387
+ ],
388
+ "page_idx": 2
389
+ },
390
+ {
391
+ "type": "text",
392
+ "text": "Here, $P(a_{i})$ is the prior probability of each author, assumed to be equal unless stated otherwise,",
393
+ "bbox": [
394
+ 112,
395
+ 889,
396
+ 489,
397
+ 920
398
+ ],
399
+ "page_idx": 2
400
+ },
401
+ {
402
+ "type": "text",
403
+ "text": "making the problem focus primarily on estimating $P(u|a_i)$ .",
404
+ "bbox": [
405
+ 507,
406
+ 319,
407
+ 880,
408
+ 350
409
+ ],
410
+ "page_idx": 2
411
+ },
412
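
A minimal sketch of how Equation (1) is applied once the per-author likelihoods are available: with a uniform prior over authors, the posterior reduces to a softmax over the per-author log-likelihoods. The numeric values below are illustrative placeholders (two of them echo Figure 2), not results produced by the paper's pipeline.

```python
import math

# Illustrative per-author log-likelihoods log P(u | a_i); two values echo Figure 2,
# the third is a made-up placeholder.
log_likelihoods = {"author_1": -958.41, "author_2": -964.51, "author_3": -971.23}

# With a uniform prior P(a_i) = 1/n, Eq. (1) makes the posterior proportional to the
# likelihood, so normalization is a softmax over the log-likelihoods.
max_lp = max(log_likelihoods.values())  # subtract the max for numerical stability
unnormalized = {a: math.exp(lp - max_lp) for a, lp in log_likelihoods.items()}
total = sum(unnormalized.values())
posterior = {a: p / total for a, p in unnormalized.items()}

predicted_author = max(posterior, key=posterior.get)
print(posterior, predicted_author)
```
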
+ {
413
+ "type": "text",
414
+ "text": "Assuming that each author $a_i$ has a unique writing style represented by a probability distribution $P(\\cdot | a_i)$ , texts written by $a_i$ are samples from this distribution. To estimate $P(u | a_i)$ , we consider the independence assumption: texts by the same author are independently and identically distributed (i.i.d.). Thus, the unknown text $u$ is also presumed to be drawn from $P(\\cdot | a_i)$ for some author $a_i$ and is independent of other texts from that author.",
415
+ "bbox": [
416
+ 507,
417
+ 351,
418
+ 882,
419
+ 495
420
+ ],
421
+ "page_idx": 2
422
+ },
423
+ {
424
+ "type": "text",
425
+ "text": "Notice that although texts are independent under the i.i.d. assumption when conditioned on a particular author, there exists a correlation between the unknown text $u$ and the set of known texts $t(a)$ in the absence of knowledge about the author. This correlation can be exploited to deduce the most likely author of $u$ using the known texts.",
426
+ "bbox": [
427
+ 507,
428
+ 497,
429
+ 882,
430
+ 609
431
+ ],
432
+ "page_idx": 2
433
+ },
434
+ {
435
+ "type": "text",
436
+ "text": "Specifically, we have",
437
+ "bbox": [
438
+ 526,
439
+ 609,
440
+ 687,
441
+ 625
442
+ ],
443
+ "page_idx": 2
444
+ },
445
+ {
446
+ "type": "equation",
447
+ "text": "\n$$\n\\begin{array}{l} P (u | \\boldsymbol {t} (a _ {i})) = \\sum_ {a _ {j} \\in \\mathcal {A}} P (u, a _ {j} | \\boldsymbol {t} (a _ {i})) \\\\ = \\sum_ {a _ {j} \\in \\mathcal {A}} P (u | a _ {j}, \\boldsymbol {t} (a _ {i})) P (a _ {j} | \\boldsymbol {t} (a _ {i})) \\\\ = \\sum_ {a _ {j} \\in \\mathcal {A}} P (u | a _ {j}) P \\left(a _ {j} \\mid \\boldsymbol {t} \\left(a _ {i}\\right)\\right), \\tag {2} \\\\ \\end{array}\n$$\n",
448
+ "text_format": "latex",
449
+ "bbox": [
450
+ 524,
451
+ 637,
452
+ 880,
453
+ 763
454
+ ],
455
+ "page_idx": 2
456
+ },
457
+ {
458
+ "type": "text",
459
+ "text": "where the last equality uses the i.i.d. assumption, meaning that when conditioned on a specific author $a_{j}$ , $u$ is independent of other texts.",
460
+ "bbox": [
461
+ 507,
462
+ 765,
463
+ 882,
464
+ 812
465
+ ],
466
+ "page_idx": 2
467
+ },
468
+ {
469
+ "type": "text",
470
+ "text": "We then introduce the \"sufficient training set\" assumption, where:",
471
+ "bbox": [
472
+ 507,
473
+ 813,
474
+ 880,
475
+ 844
476
+ ],
477
+ "page_idx": 2
478
+ },
479
+ {
480
+ "type": "equation",
481
+ "text": "\n$$\nP \\left(a _ {j} \\mid \\boldsymbol {t} \\left(a _ {i}\\right)\\right) = \\left\\{ \\begin{array}{l l} 1 & a _ {i} = a _ {j} \\\\ 0 & a _ {i} \\neq a _ {j}. \\end{array} \\right. \\tag {3}\n$$\n",
482
+ "text_format": "latex",
483
+ "bbox": [
484
+ 583,
485
+ 857,
486
+ 882,
487
+ 898
488
+ ],
489
+ "page_idx": 2
490
+ },
491
+ {
492
+ "type": "text",
493
+ "text": "This implies that the training set is sufficiently",
494
+ "bbox": [
495
+ 526,
496
+ 904,
497
+ 880,
498
+ 921
499
+ ],
500
+ "page_idx": 2
501
+ },
502
+ {
503
+ "type": "page_number",
504
+ "text": "13218",
505
+ "bbox": [
506
+ 477,
507
+ 927,
508
+ 524,
509
+ 940
510
+ ],
511
+ "page_idx": 2
512
+ },
513
+ {
514
+ "type": "text",
515
+ "text": "Author 1:",
516
+ "text_level": 1,
517
+ "bbox": [
518
+ 280,
519
+ 67,
520
+ 357,
521
+ 80
522
+ ],
523
+ "page_idx": 3
524
+ },
525
+ {
526
+ "type": "text",
527
+ "text": "Author 2:",
528
+ "text_level": 1,
529
+ "bbox": [
530
+ 640,
531
+ 67,
532
+ 717,
533
+ 80
534
+ ],
535
+ "page_idx": 3
536
+ },
537
+ {
538
+ "type": "text",
539
+ "text": "Tina Fey is a successful professional who has missed out on the baby wagon. All her friends have families and she has promotions. Desperate for a child she tries a sperm bank but it fails when she is told that she is infertile. In desperation she takes on a surrogate who turns her life upside down. Clearly Tina Fey is the smartest one in the room and she walks through this film seemingly on autopilot and above to everyone around her. What is she doing here? She is somewhere beyond this film and it shows. Its cute and amusing but Fey's demeanor promises something on a different plane then the rest of the movie. I think the best way to explain it, or over explain it would be Cary Grant in a Three Stooges movie. I think Fey can do great things if she wants or can find material that matches her abilities. A good little film.",
540
+ "bbox": [
541
+ 142,
542
+ 87,
543
+ 492,
544
+ 181
545
+ ],
546
+ "page_idx": 3
547
+ },
548
+ {
549
+ "type": "text",
550
+ "text": "Here is the text from the same author:",
551
+ "bbox": [
552
+ 142,
553
+ 187,
554
+ 307,
555
+ 197
556
+ ],
557
+ "page_idx": 3
558
+ },
559
+ {
560
+ "type": "text",
561
+ "text": "Barbet Schroeder's portrait of French attorney Jacques Vergès. You've seen him defending people like Klaus Barbie, Carlos the Jackal, Pol Pot as well as other dictators and terrorists. This is a complex story of a complex man and it essentially tells the tale of the man from World War 2 until today. ( And even at 140 minutes the film leaves a great deal out). Here is man of his time, who met and defended with many of the famous and infamous people of the last fifty years. He seems to be a man who generally believes in the right of the oppressed to stand up to their oppressors and to have some one to stand up for them. However this is not just the story of a man who fights for the oppressed but it is also the story of a man entangled in things that will cause many to question just how slick a guy is Vergès. Many of the terrorists and dictators he defends are in fact his friends , and he is not doing it for the love of cause but also for the love of the finer things . I liked the film a great deal . To be certain I was lost as to bits of the history and who some people were , but at the same time the film isn't about the history , so much as Verges moving through it . This is the story of the man , his causes and to some degree his women . What exactly are we to make of Verges ? I don't know , but I sure do think that he and his life make for a compelling tale . I loved that my idea of what Verges is changed . I loved that I was completely confused at the end as to what I thought , confused in a way that only a film that forces you to think can do . In the end I don't know what I think ...",
562
+ "bbox": [
563
+ 142,
564
+ 212,
565
+ 492,
566
+ 380
567
+ ],
568
+ "page_idx": 3
569
+ },
570
+ {
571
+ "type": "text",
572
+ "text": "In the run-up to the 1972 elections, Washington Post reporter Bob Woodward covers what seems to be a minor break-in at the Democratic Party National headquarters. He is surprised to find top lawyers already on the defence case, and the discovery of names and addresses of Republican fund organisers on the accused further arouses his suspicions. The editor of the Post is prepared to run with the story and assigns Woodward and Carl Bernstein to it. They find the trail leading higher and higher in the Republican Party, and eventually into the White House itself . . . whatever peoples opinions on the Watergate ' scandal', whether they believe it was a big cover up, or the media got a lot wrong, no one can deny just how powerful and interesting this film really is. Pakula directs this very slickly and brings the tension on the two main protagonists very slowly throughout the duration of the movie. Redford and Hoffman work really well together and are given great support from the rest of the cast. the narration works amazingly well and there is good use of mise en scene and connotations. for example there are a few scenes with the t. v screen in the foreground showing Nixon winning his presidential seat again, with ...",
573
+ "bbox": [
574
+ 502,
575
+ 87,
576
+ 853,
577
+ 223
578
+ ],
579
+ "page_idx": 3
580
+ },
581
+ {
582
+ "type": "text",
583
+ "text": "Here is the text from the same author:",
584
+ "bbox": [
585
+ 504,
586
+ 229,
587
+ 667,
588
+ 237
589
+ ],
590
+ "page_idx": 3
591
+ },
592
+ {
593
+ "type": "text",
594
+ "text": "Barbet Schroeder's portrait of French attorney Jacques Vergès. You've seen him defending people like Klaus Barbie, Carlos the Jackal, Pol Pot as well as other dictators and terrorists. This is a complex story of a complex man and it essentially tells the tale of the man from World War 2 until today. ( And even at 140 minutes the film leaves a great deal out). Here is man of his time, who met and defended with many of the famous and infamous people of the last fifty years. He seems to be a man who generally believes in the right of the oppressed to stand up to their oppressors and to have some one to stand up for them. However this is not just the story of a man who fights for the oppressed but it is also the story of a man entangled in things that will cause many to question just how slick a guy is Vergès. Many of the terrorists and dictators he defends are in fact his friends , and he is not doing it for the love of cause but also for the love of the finer things . I liked the film a great deal . To be certain I was lost as to bits of the history and who some people were , but at the same time the film isn't about the history , so much as Verges moving through it . This is the story of the man , his causes and to some degree his women . What exactly are we to make of Vergès ? I don't know , but I sure do think that he and his life make for a compelling tale . I loved that my idea of what Vergès is changed . I loved that I was completely confused at the end as to what I thought , confused in a way that only a film that forces you to think can do . In the end I don't know what I think ...",
595
+ "bbox": [
596
+ 502,
597
+ 254,
598
+ 853,
599
+ 420
600
+ ],
601
+ "page_idx": 3
602
+ },
603
+ {
604
+ "type": "text",
605
+ "text": "Logprob: -958.41",
606
+ "bbox": [
607
+ 200,
608
+ 429,
609
+ 347,
610
+ 445
611
+ ],
612
+ "page_idx": 3
613
+ },
614
+ {
615
+ "type": "text",
616
+ "text": "Most likely author:",
617
+ "bbox": [
618
+ 151,
619
+ 450,
620
+ 327,
621
+ 466
622
+ ],
623
+ "page_idx": 3
624
+ },
625
+ {
626
+ "type": "text",
627
+ "text": "-964.51",
628
+ "bbox": [
629
+ 648,
630
+ 430,
631
+ 707,
632
+ 443
633
+ ],
634
+ "page_idx": 3
635
+ },
636
+ {
637
+ "type": "text",
638
+ "text": "X",
639
+ "bbox": [
640
+ 670,
641
+ 453,
642
+ 685,
643
+ 464
644
+ ],
645
+ "page_idx": 3
646
+ },
647
+ {
648
+ "type": "text",
649
+ "text": "Figure 2: Example of prompt construction and authorship attribution based on log probabilities. The logprob is computed on the orange part, which represents the text from unknown author.",
650
+ "bbox": [
651
+ 112,
652
+ 480,
653
+ 880,
654
+ 508
655
+ ],
656
+ "page_idx": 3
657
+ },
658
+ {
659
+ "type": "text",
660
+ "text": "comprehensive to unambiguously differentiate authors, leading to:",
661
+ "bbox": [
662
+ 112,
663
+ 521,
664
+ 489,
665
+ 555
666
+ ],
667
+ "page_idx": 3
668
+ },
669
+ {
670
+ "type": "equation",
671
+ "text": "\n$$\nP (u | \\boldsymbol {t} (a _ {i})) = P (u | a _ {j}), \\tag {4}\n$$\n",
672
+ "text_format": "latex",
673
+ "bbox": [
674
+ 211,
675
+ 571,
676
+ 487,
677
+ 590
678
+ ],
679
+ "page_idx": 3
680
+ },
681
+ {
682
+ "type": "text",
683
+ "text": "where $a_{j}$ is the assumed true author of text $u$",
684
+ "bbox": [
685
+ 112,
686
+ 598,
687
+ 452,
688
+ 613
689
+ ],
690
+ "page_idx": 3
691
+ },
692
+ {
693
+ "type": "text",
694
+ "text": "We use Large Language Models (LLMs) to estimate $P(u|t(a_i))$ , which represents the probability that a new text $u$ was written by the author of a given set of texts $t(a_i)$ .",
695
+ "bbox": [
696
+ 112,
697
+ 614,
698
+ 487,
699
+ 678
700
+ ],
701
+ "page_idx": 3
702
+ },
703
+ {
704
+ "type": "text",
705
+ "text": "The probability nature of language models means that they typically calculate the probability of a token or a sequence of tokens given prior context. For a vocabulary set $\\Sigma$ , the input to a language model might be a sequence of tokens $x_{1},\\ldots ,x_{m}\\in \\Sigma$ , and the model's output would be the probability distribution $P_{\\mathrm{LLM}}(\\cdot |x_1,\\dots,x_m)$ typically stored in logarithmic scale for numerical stability.",
706
+ "bbox": [
707
+ 112,
708
+ 678,
709
+ 487,
710
+ 822
711
+ ],
712
+ "page_idx": 3
713
+ },
714
+ {
715
+ "type": "text",
716
+ "text": "When using an autoregressive language model, we can measure not only the probability of the next token but also the probability of a subsequent sequence of tokens. For instance, if we have a prompt consisting of tokens $x_{1},\\ldots ,x_{m}\\in \\Sigma$ , and we want to measure the probability of a sequence",
717
+ "bbox": [
718
+ 112,
719
+ 825,
720
+ 487,
721
+ 921
722
+ ],
723
+ "page_idx": 3
724
+ },
725
+ {
726
+ "type": "equation",
727
+ "text": "\n$$\n\\begin{array}{l} y _ {1}, \\dots , y _ {s} \\in \\Sigma , \\text {w e c a l c u l a t e :} \\\\ P _ {\\mathrm {L L M}} (y _ {1}, \\dots , y _ {s} | x _ {1}, \\dots , x _ {m}) \\\\ = \\prod_ {i = 1} ^ {s} P _ {\\mathrm {L L M}} \\left(y _ {i} \\mid x _ {1}, \\dots , x _ {m}, y _ {1}, \\dots , y _ {i - 1}\\right). \\tag {5} \\\\ \\end{array}\n$$\n",
728
+ "text_format": "latex",
729
+ "bbox": [
730
+ 509,
731
+ 523,
732
+ 880,
733
+ 625
734
+ ],
735
+ "page_idx": 3
736
+ },
737
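
A minimal sketch of Equation (5) using the Hugging Face transformers API, with GPT-2 as a small stand-in for the much larger models used in the paper; the helper name sequence_logprob and the choice of GPT-2 are assumptions for illustration, not part of the original pipeline.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is only a small stand-in so that the sketch runs anywhere.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sequence_logprob(prompt: str, continuation: str) -> float:
    """Sum of token-level log-probabilities of `continuation` given `prompt` (Eq. 5)."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, cont_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits          # (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    for i in range(prompt_ids.shape[1], input_ids.shape[1]):
        # the token at position i is predicted by the logits at position i - 1
        total += log_probs[0, i - 1, input_ids[0, i]].item()
    return total

print(sequence_logprob("Here is the text from the same author: ", "A good little film."))
```
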
+ {
738
+ "type": "text",
739
+ "text": "To estimate $P(u|t(a_i))$ for authorship attribution, we define:",
740
+ "bbox": [
741
+ 507,
742
+ 637,
743
+ 884,
744
+ 669
745
+ ],
746
+ "page_idx": 3
747
+ },
748
+ {
749
+ "type": "equation",
750
+ "text": "\n$$\n\\begin{array}{l} P (u | \\mathbf {t} (a _ {i})) \\tag {6} \\\\ = P _ {\\text {L L M}} (u | \\text {p r o m p t} _ {-} \\text {c o n s t r u c t i o n} (t (a _ {i}))). \\\\ \\end{array}\n$$\n",
751
+ "text_format": "latex",
752
+ "bbox": [
753
+ 527,
754
+ 695,
755
+ 880,
756
+ 732
757
+ ],
758
+ "page_idx": 3
759
+ },
760
+ {
761
+ "type": "text",
762
+ "text": "The prompt construction can vary, providing flexibility in how we use the model to estimate probabilities. Our method involves constructing a prompt steering the LLM uses to predict the likelihood that the unknown text was written by the same author (Figure 2).",
763
+ "bbox": [
764
+ 507,
765
+ 744,
766
+ 882,
767
+ 840
768
+ ],
769
+ "page_idx": 3
770
+ },
771
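
A sketch of how Equation (6) and the argmax over authors might be wired together, reusing the hypothetical sequence_logprob helper from the previous snippet; prompt_construction mirrors the wording shown in Figure 2 and is only one of several variants later compared in Table 2.

```python
def prompt_construction(example_text: str) -> str:
    # Mirrors Figure 2: the known text followed by the connective sentence.
    return example_text + "\nHere is the text from the same author:\n"

def attribute(query_text: str, known_texts: dict) -> str:
    """known_texts maps each candidate author to their example text (one-shot)."""
    scores = {
        author: sequence_logprob(prompt_construction(example), query_text)
        for author, example in known_texts.items()
    }
    # With a uniform prior, the MAP author is simply the argmax of log P(u | t(a_i)).
    return max(scores, key=scores.get)
```
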
+ {
772
+ "type": "text",
773
+ "text": "In summary, our approach is straightforward and simple. By leveraging the capabilities of Large Language Models, we calculate the likelihood that an unknown text originates from a known author based on existing samples of their writing. This",
774
+ "bbox": [
775
+ 507,
776
+ 841,
777
+ 882,
778
+ 921
779
+ ],
780
+ "page_idx": 3
781
+ },
782
+ {
783
+ "type": "page_number",
784
+ "text": "13219",
785
+ "bbox": [
786
+ 477,
787
+ 927,
788
+ 524,
789
+ 940
790
+ ],
791
+ "page_idx": 3
792
+ },
793
+ {
794
+ "type": "text",
795
+ "text": "probability assessment allows us to identify the most likely author from a set without the need for fine-tuning or feature engineering.",
796
+ "bbox": [
797
+ 112,
798
+ 84,
799
+ 487,
800
+ 134
801
+ ],
802
+ "page_idx": 4
803
+ },
804
+ {
805
+ "type": "text",
806
+ "text": "3 Experimental Setups",
807
+ "text_level": 1,
808
+ "bbox": [
809
+ 114,
810
+ 145,
811
+ 329,
812
+ 162
813
+ ],
814
+ "page_idx": 4
815
+ },
816
+ {
817
+ "type": "text",
818
+ "text": "3.1 Models & Baselines",
819
+ "text_level": 1,
820
+ "bbox": [
821
+ 114,
822
+ 171,
823
+ 315,
824
+ 186
825
+ ],
826
+ "page_idx": 4
827
+ },
828
+ {
829
+ "type": "text",
830
+ "text": "Models We selected two widely-used LLM families: 1) LLaMA family, which includes LLaMA2 (Touvron et al., 2023), LLaMA-3, CodeLLaMA (Roziere et al., 2023), available in various parameter sizes and configurations, with some models specifically fine-tuned for dialogue use cases; 2) the GPT family (Brown et al., 2020), featuring GPT-3.5-Turbo and GPT-4-Turbo (Achiam et al., 2023), where we specifically used versions gpt4-turbo-2024-04-09 and gpt-3.5-turbo-0125. The LLaMA family models were deployed using the vLLM framework (Kwon et al., 2023) if used for Logprob method and are deployed on Azure if used for question-answering. Apart from Table 1, all ablation studies of Logprob method uses LLaMA3-70B model.",
831
+ "bbox": [
832
+ 112,
833
+ 192,
834
+ 489,
835
+ 449
836
+ ],
837
+ "page_idx": 4
838
+ },
839
+ {
840
+ "type": "text",
841
+ "text": "Baselines We chose two types of baselines for comparison. 1) embedding-based methods such as BertAA (Fabien et al., 2020) and GAN-BERT (Silva et al., 2023), which require training or finetuning, 2) LLM-based methods such as those described in (Huang et al., 2024), which utilize LLMs for authorship attribution tasks through a question-answering (QA) approach.",
842
+ "bbox": [
843
+ 112,
844
+ 458,
845
+ 489,
846
+ 589
847
+ ],
848
+ "page_idx": 4
849
+ },
850
+ {
851
+ "type": "text",
852
+ "text": "3.2 Evaluations",
853
+ "text_level": 1,
854
+ "bbox": [
855
+ 114,
856
+ 600,
857
+ 253,
858
+ 614
859
+ ],
860
+ "page_idx": 4
861
+ },
862
+ {
863
+ "type": "text",
864
+ "text": "Datasets We evaluated our method on two widely used author attribution datasets: 1) IMDB62 dataset, a truncated version of IMDB dataset (Seroussi et al., 2014) and 2) Blog Dataset (Schler et al., 2006). IMDB62 dataset comprises 62k movie reviews from 62 authors, with each author contributing 1000 samples. Additionally, it also provides some extra information such as the rating score. The Blog dataset, contains 681k blog comments, each with an assigned authorID. Besides the raw text and authorID, each entry includes extra information such as gender and age. Both datasets are accessible via HuggingFace.",
865
+ "bbox": [
866
+ 112,
867
+ 621,
868
+ 489,
869
+ 831
870
+ ],
871
+ "page_idx": 4
872
+ },
873
+ {
874
+ "type": "text",
875
+ "text": "Benchmark Construction Unlike fixed author sets used in many previous studies, we constructed a random author set for each test to minimize variance. By default, unless specified otherwise, each experiment in our experiments involved a 10-author",
876
+ "bbox": [
877
+ 112,
878
+ 841,
879
+ 489,
880
+ 921
881
+ ],
882
+ "page_idx": 4
883
+ },
884
+ {
885
+ "type": "text",
886
+ "text": "one-shot setting, and we conducted 100 tests for each experiment to reduce variance. Each test involved the following steps: 1) Ten candidate authors were randomly selected. 2) For each author, one (or n for n-shot) article was randomly selected as the training set. 3) One author was randomly selected from the ten candidates as the test author. 4) One article not in the training set was randomly selected from the test author's articles as the test set (with size of 1). 5) We run the authorship attribution algorithm to classify the test article into 10 categories.",
887
+ "bbox": [
888
+ 507,
889
+ 84,
890
+ 884,
891
+ 275
892
+ ],
893
+ "page_idx": 4
894
+ },
895
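
A minimal sketch of one iteration of the evaluation pipeline described above, assuming a corpus dictionary mapping each author to a list of articles and the hypothetical attribute() function from the earlier snippet; all names and defaults here are illustrative.

```python
import random

def run_one_test(corpus: dict, n_candidates: int = 10, n_shot: int = 1) -> bool:
    candidates = random.sample(list(corpus.keys()), n_candidates)              # step 1
    train = {a: random.sample(corpus[a], n_shot) for a in candidates}          # step 2
    test_author = random.choice(candidates)                                    # step 3
    held_out = [t for t in corpus[test_author] if t not in train[test_author]]
    query = random.choice(held_out)                                            # step 4
    predicted = attribute(query, {a: " ".join(train[a]) for a in candidates})  # step 5
    return predicted == test_author

# accuracy over 100 tests, as in the paper's default setting:
# accuracy = sum(run_one_test(corpus) for _ in range(100)) / 100
```
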
+ {
896
+ "type": "text",
897
+ "text": "Our evaluation pipeline can avoid potential biases from fixed author sets and better measure the efficacy of LLMs in authorship attribution tasks. We also share our pipeline for fair evaluations of future related works.",
898
+ "bbox": [
899
+ 507,
900
+ 278,
901
+ 884,
902
+ 357
903
+ ],
904
+ "page_idx": 4
905
+ },
906
+ {
907
+ "type": "text",
908
+ "text": "Notably, aforementioned pipeline is suitable for non-training based methods like ours and QA approaches. However, for training-based methods such as embedding approaches, each train-test split is followed by a retraining, demanding significant computational resources. Therefore, in this work, we directly cited scores from the original papers.",
909
+ "bbox": [
910
+ 507,
911
+ 359,
912
+ 882,
913
+ 470
914
+ ],
915
+ "page_idx": 4
916
+ },
917
+ {
918
+ "type": "text",
919
+ "text": "Evaluation Metrics We adopt three metrics: top-1, top-2 and top-5 accuracies. Specifically, top k accuracy is computed as follows:",
920
+ "bbox": [
921
+ 507,
922
+ 482,
923
+ 882,
924
+ 530
925
+ ],
926
+ "page_idx": 4
927
+ },
928
+ {
929
+ "type": "equation",
930
+ "text": "\n$$\n\\mathrm {T o p} k \\mathrm {A c c u r a c y} = \\frac {\\mathrm {N u m} _ {\\mathrm {c o r r e c t}} ^ {\\mathrm {k}}}{\\mathrm {N u m} _ {\\mathrm {a l l}}}, \\qquad (7)\n$$\n",
931
+ "text_format": "latex",
932
+ "bbox": [
933
+ 573,
934
+ 543,
935
+ 882,
936
+ 579
937
+ ],
938
+ "page_idx": 4
939
+ },
940
+ {
941
+ "type": "text",
942
+ "text": "where $\\mathrm{Num}_{\\mathrm{correct}}^{\\mathrm{k}}$ represents the number of tests where the actual author is among the top k predictions, and $\\mathrm{Num}_{\\mathrm{all}}$ represents the total number of tests.",
943
+ "bbox": [
944
+ 507,
945
+ 590,
946
+ 882,
947
+ 655
948
+ ],
949
+ "page_idx": 4
950
+ },
951
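
A small sketch of Equation (7): given, for each test, the true author and the candidates ranked by posterior probability, top-k accuracy is the fraction of tests whose true author appears among the first k predictions. The toy data below is purely illustrative.

```python
def top_k_accuracy(results: list, k: int) -> float:
    """results: list of (true_author, ranked_candidates) pairs."""
    hits = sum(1 for true_author, ranking in results if true_author in ranking[:k])
    return hits / len(results)

# toy example: three tests with ranked predictions
results = [("a1", ["a1", "a3", "a2"]), ("a2", ["a3", "a2", "a1"]), ("a3", ["a1", "a2", "a3"])]
print(top_k_accuracy(results, 1), top_k_accuracy(results, 2))  # 0.333..., 0.666...
```
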
+ {
952
+ "type": "text",
953
+ "text": "4 Experiments",
954
+ "text_level": 1,
955
+ "bbox": [
956
+ 507,
957
+ 669,
958
+ 655,
959
+ 686
960
+ ],
961
+ "page_idx": 4
962
+ },
963
+ {
964
+ "type": "text",
965
+ "text": "Firstly, we evaluate different methods for author attribution in Section 4.1, noting that our Log-prob method significantly outperformed QA-based methods in accuracy and stability across datasets. Then, we study the impact of increasing candidate numbers on performance in Section 4.2, where our method maintained high accuracy despite a larger pool of candidates. Next, in Section 4.3, we analyze prompt sensitivity, concluding that while prompt use is crucial, variations in prompt design did not significantly affect the performance. Further, in Section 4.4, we explore bias in author attribution and in Section 4.5, we measure performance variations across different subgroups. Finally, in",
966
+ "bbox": [
967
+ 507,
968
+ 696,
969
+ 884,
970
+ 921
971
+ ],
972
+ "page_idx": 4
973
+ },
974
+ {
975
+ "type": "page_number",
976
+ "text": "13220",
977
+ "bbox": [
978
+ 477,
979
+ 927,
980
+ 524,
981
+ 940
982
+ ],
983
+ "page_idx": 4
984
+ },
985
+ {
986
+ "type": "text",
987
+ "text": "Section 4.6, we compared the efficiency of different author attribution methods.",
988
+ "bbox": [
989
+ 112,
990
+ 84,
991
+ 487,
992
+ 116
993
+ ],
994
+ "page_idx": 5
995
+ },
996
+ {
997
+ "type": "text",
998
+ "text": "4.1 Author Attribution Performance",
999
+ "text_level": 1,
1000
+ "bbox": [
1001
+ 112,
1002
+ 127,
1003
+ 416,
1004
+ 142
1005
+ ],
1006
+ "page_idx": 5
1007
+ },
1008
+ {
1009
+ "type": "text",
1010
+ "text": "Table 1 shows the main results for different methods on the IMDB62 and Blog datasets concerning authorship attribution capabilities. We make the following observations:",
1011
+ "bbox": [
1012
+ 112,
1013
+ 148,
1014
+ 489,
1015
+ 212
1016
+ ],
1017
+ "page_idx": 5
1018
+ },
1019
+ {
1020
+ "type": "list",
1021
+ "sub_type": "text",
1022
+ "list_items": [
1023
+ "- LLMs with QA-based methods cannot perform author attribution tasks effectively. For example, GPT-4-Turbo can only achieve a top-1 accuracy of $34\\%$ on the IMDB62 dataset and $62\\%$ on the Blog dataset. Notably, there are two interesting phenomena: 1) GPT-4-Turbo and GPT-3.5-Turbo exhibit inconsistent higher accuracy across different datasets, highlighting inherent instability in the prompt-based approach. 2) Older LLMs with smaller context window lengths are unable to perform author attribution due to the prompt exceeding the context window. These phenomena indicate that QA methods are not a good option for enabling LLMs to conduct author attribution tasks effectively.",
1024
+ "- Our Logprob method helps LLMs perform author attribution tasks more effectively. With LLaMA-3-70B, we achieved top-1 accuracy of $85\\%$ , and both top-2 and top-5 accuracies were even higher. This suggests that LLMs equipped with our method can effectively narrow down large candidate sets. Additionally, two another things worth noting are that 1) LLMs with the Logprob method exhibit more stable performance across both tasks, something QA methods struggle with, and 2) LLMs with Logprob can conduct authorship attribution tasks with lower requirements for context window length. For instance, LLaMA-2-70B-Chat with the Logprob method can handle authorship attribution, whereas the same model with a QA approach fails when the collective text of 10 authors exceeds the context window length. These findings highlight the superiority of our Logprob method.",
1025
+ "- Training-free method can achieve comparable or even superior performance to training-based methods. The Blog dataset showed higher top-1 accuracy with LLaMA + Logprob compared to GAN-BERT and BertAA. While the IMDB62 dataset exhibited lower performance relative to embedding-based methods, it is important to note that Logprob achieves this as a"
1026
+ ],
1027
+ "bbox": [
1028
+ 117,
1029
+ 223,
1030
+ 490,
1031
+ 921
1032
+ ],
1033
+ "page_idx": 5
1034
+ },
1035
+ {
1036
+ "type": "image",
1037
+ "img_path": "images/7f1d4f31521dc161682d3007e6861d4c48926114d91ade37bb8ce73684a809e3.jpg",
1038
+ "image_caption": [
1039
+ "Figure 3: Accuracy vs. number of candidates."
1040
+ ],
1041
+ "image_footnote": [],
1042
+ "bbox": [
1043
+ 539,
1044
+ 80,
1045
+ 857,
1046
+ 214
1047
+ ],
1048
+ "page_idx": 5
1049
+ },
1050
+ {
1051
+ "type": "text",
1052
+ "text": "one-shot method, whereas embedding-based approaches require much more data for training to converge. This demonstrates that Logprob can more effectively capture the nuances necessary for authorship attribution.",
1053
+ "bbox": [
1054
+ 524,
1055
+ 265,
1056
+ 884,
1057
+ 344
1058
+ ],
1059
+ "page_idx": 5
1060
+ },
1061
+ {
1062
+ "type": "text",
1063
+ "text": "4.2 Performance vs. Number of Candidates",
1064
+ "text_level": 1,
1065
+ "bbox": [
1066
+ 507,
1067
+ 357,
1068
+ 867,
1069
+ 372
1070
+ ],
1071
+ "page_idx": 5
1072
+ },
1073
+ {
1074
+ "type": "text",
1075
+ "text": "One of the challenges in authorship attribution is the difficulty in correctly identifying the author as the number of candidates increases, which generally leads to decreased accuracy. Figure 3 shows the author attribution performance across different candidate counts on the IMDB62 dataset. We made the following observations:",
1076
+ "bbox": [
1077
+ 507,
1078
+ 376,
1079
+ 882,
1080
+ 491
1081
+ ],
1082
+ "page_idx": 5
1083
+ },
1084
+ {
1085
+ "type": "list",
1086
+ "sub_type": "text",
1087
+ "list_items": [
1088
+ "- First, performance indeed decreases as the number of candidates increases.",
1089
+ "- Second, across all settings, all metrics maintain relatively high scores. For example, in the setting with 50 candidates, our method achieved $76\\%$ top-1 accuracy, $84\\%$ top-2 accuracy, and $87\\%$ top-5 accuracy.",
1090
+ "- Third, top-2 and top-5 accuracies are more stable compared to top-1 accuracy. The model may not always place the correct author at the top, but it often includes the correct author within the top few predictions. This attribute is also crucial as it allows the narrowing down of a large pool of candidates to a smaller subset of likely candidates."
1091
+ ],
1092
+ "bbox": [
1093
+ 512,
1094
+ 502,
1095
+ 882,
1096
+ 763
1097
+ ],
1098
+ "page_idx": 5
1099
+ },
1100
+ {
1101
+ "type": "text",
1102
+ "text": "4.3 Analysis of Prompt Sensitivity",
1103
+ "text_level": 1,
1104
+ "bbox": [
1105
+ 507,
1106
+ 776,
1107
+ 793,
1108
+ 791
1109
+ ],
1110
+ "page_idx": 5
1111
+ },
1112
+ {
1113
+ "type": "text",
1114
+ "text": "Our method relies on suitable prompt as in Figure 2. Here, we discuss the sensitivity of our accuracy to different prompt constructions in Table 2. We made the following observations:",
1115
+ "bbox": [
1116
+ 507,
1117
+ 797,
1118
+ 882,
1119
+ 860
1120
+ ],
1121
+ "page_idx": 5
1122
+ },
1123
+ {
1124
+ "type": "text",
1125
+ "text": "- Using prompts is essential for enhancing the accuracy of our method (#1 vs. #2). This phenomenon is aligned with previous studies",
1126
+ "bbox": [
1127
+ 512,
1128
+ 873,
1129
+ 882,
1130
+ 921
1131
+ ],
1132
+ "page_idx": 5
1133
+ },
1134
+ {
1135
+ "type": "page_number",
1136
+ "text": "13221",
1137
+ "bbox": [
1138
+ 477,
1139
+ 927,
1140
+ 522,
1141
+ 940
1142
+ ],
1143
+ "page_idx": 5
1144
+ },
1145
+ {
1146
+ "type": "table",
1147
+ "img_path": "images/20deddee9b57e4ab2897575984f54847db63f632da5ae8d50a67ff6f1fe5935c.jpg",
1148
+ "table_caption": [],
1149
+ "table_footnote": [
1150
+ "Prompt 1: Here is the text from the same author:",
1151
+ "Prompt 2: Analyze the writing styles of the input texts, disregarding the differences in topic and content.",
1152
+ "Here is the text from the same author:",
1153
+ "Prompt 3: Focus on grammatical styles indicative of authorship. Here is the text from the same author:",
1154
+ "Prompt 4: Analyze the writing styles of the input texts, disregarding the differences in topic and content.",
1155
+ "Reasoning based on linguistic features such as phrasal verbs, modal verbs, punctuation, rare words, affixes, quantities, humor, sarcasm, typographical errors, and misspellings. Here is the text from the same author:"
1156
+ ],
1157
+ "table_body": "<table><tr><td rowspan=\"2\">Method</td><td rowspan=\"2\">Model</td><td colspan=\"5\">IMDB62 Dataset</td><td colspan=\"5\">BLOG Dataset</td></tr><tr><td>#Candidate</td><td>n-Shot</td><td>Top 1 Acc.</td><td>Top 2 Acc.</td><td>Top 5 Acc.</td><td>#Candidate</td><td>n-Shot</td><td>Top 1 Acc.</td><td>Top 2 Acc.</td><td>Top 5 Acc.</td></tr><tr><td rowspan=\"12\">LogProb</td><td>LLaMA-2-7B</td><td>10</td><td>1</td><td>80.0 ± 4.0</td><td>88.0 ± 3.3</td><td>97.0 ± 1.7</td><td>10</td><td>1</td><td>79.0 ± 4.1</td><td>84.0 ± 3.7</td><td>98.0 ± 1.4</td></tr><tr><td>LLaMA-2-7B-Chat</td><td>10</td><td>1</td><td>68.0 ± 4.7</td><td>80.0 ± 4.0</td><td>88.0 ± 3.3</td><td>10</td><td>1</td><td>69.0 ± 4.6</td><td>78.0 ± 4.1</td><td>89.0 ± 3.1</td></tr><tr><td>LLaMA-2-13B</td><td>10</td><td>1</td><td>84.0 ± 3.7</td><td>88.0 ± 3.3</td><td>100.0 ± 0.0</td><td>10</td><td>1</td><td>81.0 ± 3.9</td><td>86.0 ± 3.5</td><td>94.0 ± 2.4</td></tr><tr><td>LLaMA-2-70B</td><td>10</td><td>1</td><td>88.0 ± 3.3</td><td>94.0 ± 2.4</td><td>99.0 ± 1.0</td><td>10</td><td>1</td><td>88.0 ± 3.3</td><td>90.0 ± 3.0</td><td>95.0 ± 2.2</td></tr><tr><td>LLaMA-2-70B-Chat</td><td>10</td><td>1</td><td>79.0 ± 4.1</td><td>85.0 ± 3.6</td><td>95.0 ± 2.2</td><td>10</td><td>1</td><td>83.0 ± 3.8</td><td>85.0 ± 3.6</td><td>97.0 ± 1.7</td></tr><tr><td>Code-LLaMA-7B</td><td>10</td><td>1</td><td>71.0 ± 4.5</td><td>84.0 ± 3.7</td><td>96.0 ± 2.0</td><td>10</td><td>1</td><td>78.0 ± 4.1</td><td>84.0 ± 3.7</td><td>94.0 ± 2.4</td></tr><tr><td>Code-LLaMA-13B</td><td>10</td><td>1</td><td>70.0 ± 4.6</td><td>84.0 ± 3.7</td><td>98.0 ± 1.4</td><td>10</td><td>1</td><td>77.0 ± 4.2</td><td>85.0 ± 3.6</td><td>92.0 ± 2.7</td></tr><tr><td>Code-LLaMA-34B</td><td>10</td><td>1</td><td>75.0 ± 4.3</td><td>84.0 ± 3.7</td><td>98.0 ± 1.4</td><td>10</td><td>1</td><td>78.0 ± 4.1</td><td>83.0 ± 3.8</td><td>94.0 ± 2.4</td></tr><tr><td>LLaMA-3-8B</td><td>10</td><td>1</td><td>82.0 ± 3.8</td><td>89.0 ± 3.1</td><td>98.0 ± 1.4</td><td>10</td><td>1</td><td>84.0 ± 3.7</td><td>89.0 ± 3.1</td><td>95.0 ± 2.2</td></tr><tr><td>LLaMA-3-8B-Instruct</td><td>10</td><td>1</td><td>69.0 ± 4.6</td><td>77.0 ± 4.2</td><td>90.0 ± 3.0</td><td>10</td><td>1</td><td>68.0 ± 4.7</td><td>77.0 ± 4.2</td><td>90.0 ± 3.0</td></tr><tr><td>LLaMA-3-70B</td><td>10</td><td>1</td><td>85.0 ± 3.6</td><td>93.0 ± 2.6</td><td>98.0 ± 1.4</td><td>10</td><td>1</td><td>82.0 ± 3.8</td><td>88.0 ± 3.3</td><td>95.0 ± 2.2</td></tr><tr><td>LLaMA-3-70B-Instruct</td><td>10</td><td>1</td><td>79.0 ± 4.1</td><td>89.0 ± 3.1</td><td>99.0 ± 1.0</td><td>10</td><td>1</td><td>79.0 ± 4.1</td><td>87.0 ± 3.4</td><td>96.0 ± 2.0</td></tr><tr><td rowspan=\"4\">QA</td><td>LLaMA-2-70B-Chat</td><td>10</td><td>1</td><td>Failed</td><td>-</td><td>-</td><td>10</td><td>1</td><td>Failed</td><td>-</td><td>-</td></tr><tr><td>LLaMA-3-70B-Instruct</td><td>10</td><td>1</td><td>31.0 ± 4.6</td><td>-</td><td>-</td><td>10</td><td>1</td><td>22.0 ± 4.1</td><td>-</td><td>-</td></tr><tr><td>GPT-3.5-Turbo</td><td>10</td><td>1</td><td>69.0 ± 4.6</td><td>-</td><td>-</td><td>10</td><td>1</td><td>47.0 ± 5.0</td><td>-</td><td>-</td></tr><tr><td>GPT-4-Turbo</td><td>10</td><td>1</td><td>34.0 ± 4.7</td><td>-</td><td>-</td><td>10</td><td>1</td><td>62.0 ± 4.9</td><td>-</td><td>-</td></tr><tr><td rowspan=\"2\">Other 
Baseline</td><td>GAN-BERT</td><td>20</td><td>80</td><td>96.0</td><td>-</td><td>-</td><td>20</td><td>80</td><td>40.0</td><td>-</td><td>-</td></tr><tr><td>BertAA</td><td>62</td><td>80</td><td>93.0</td><td>-</td><td>-</td><td>10</td><td>80</td><td>65.0</td><td>-</td><td>-</td></tr></table>",
1158
+ "bbox": [
1159
+ 114,
1160
+ 80,
1161
+ 884,
1162
+ 296
1163
+ ],
1164
+ "page_idx": 6
1165
+ },
1166
+ {
1167
+ "type": "table",
1168
+ "img_path": "images/824e332d00910f1a1a4ff0ba0c121ff0e1a2bf5366dd10059c86c23bfc30b325.jpg",
1169
+ "table_caption": [
1170
+ "Table 1: Author attribution results on IMDB62 and Blog dataset. Prompt construction for QA method is in consistent with Huang et al. (2024)."
1171
+ ],
1172
+ "table_footnote": [],
1173
+ "table_body": "<table><tr><td>#</td><td>Prompting</td><td>Top 1 Accuracy</td><td>Top 2 Accuracy</td><td>Top 5 Accuracy</td></tr><tr><td>1</td><td>&lt;Example Text&gt; + &lt;Query Text&gt;</td><td>70.0 ± 4.6</td><td>81.0 ± 3.9</td><td>92.0 ± 2.7</td></tr><tr><td>2</td><td>&lt;Example Text&gt; + &lt;Prompt 1&gt; + &lt;Query Text&gt;</td><td>85.0 ± 3.6</td><td>92.0 ± 2.7</td><td>99.0 ± 1.0</td></tr><tr><td>3</td><td>&lt;Example Text&gt; + &lt;Prompt 2&gt; + &lt;Query Text&gt;</td><td>83.0 ± 3.8</td><td>87.0 ± 3.4</td><td>100.0 ± 0.0</td></tr><tr><td>4</td><td>&lt;Example Text&gt; + &lt;Prompt 3&gt; + &lt;Query Text&gt;</td><td>86.0 ± 3.5</td><td>90.0 ± 3.0</td><td>100.0 ± 0.0</td></tr><tr><td>5</td><td>&lt;Example Text&gt; + &lt;Prompt 4&gt; + &lt;Query Text&gt;</td><td>87.0 ± 3.4</td><td>90.0 ± 3.0</td><td>99.0 ± 1.0</td></tr></table>",
1174
+ "bbox": [
1175
+ 154,
1176
+ 439,
1177
+ 842,
1178
+ 527
1179
+ ],
1180
+ "page_idx": 6
1181
+ },
1182
+ {
1183
+ "type": "table",
1184
+ "img_path": "images/0b67d640e460c417d403f3a0fbbcdfd2602cd37cc7db9abe4795fb0795f0b61d.jpg",
1185
+ "table_caption": [
1186
+ "Table 2: Author attribution performance vs. different prompting choices on IMDB62 dataset."
1187
+ ],
1188
+ "table_footnote": [],
1189
+ "table_body": "<table><tr><td>Gender</td><td>Top 1 Acc.</td><td>Top 2 Acc.</td><td>Top 5 Acc.</td></tr><tr><td>Both</td><td>84.0 ± 1.6</td><td>90.8 ± 1.3</td><td>95.8 ± 1.0</td></tr><tr><td>Male</td><td>81.4 ± 2.5</td><td>88.6 ± 2.1</td><td>95.4 ± 1.4</td></tr><tr><td>Female</td><td>86.3 ± 2.1</td><td>92.8 ± 1.6</td><td>96.2 ± 1.2</td></tr></table>",
1190
+ "bbox": [
1191
+ 163,
1192
+ 574,
1193
+ 438,
1194
+ 646
1195
+ ],
1196
+ "page_idx": 6
1197
+ },
1198
+ {
1199
+ "type": "text",
1200
+ "text": "(Wei et al., 2022) that have demonstrated that prompting is beneficial for unlocking the full potential of LLMs.",
1201
+ "bbox": [
1202
+ 129,
1203
+ 697,
1204
+ 485,
1205
+ 744
1206
+ ],
1207
+ "page_idx": 6
1208
+ },
1209
+ {
1210
+ "type": "text",
1211
+ "text": "- There is no statistically significant evidence to suggest that specific prompt designs impact performance significantly (#2 vs. #3 vs. #4 vs. #5). The results show very close performance metrics across different prompt constructions.",
1212
+ "bbox": [
1213
+ 117,
1214
+ 760,
1215
+ 489,
1216
+ 841
1217
+ ],
1218
+ "page_idx": 6
1219
+ },
1220
+ {
1221
+ "type": "text",
1222
+ "text": "Discussions Prompting sensitivity (Sclar et al., 2023) is a widely acknowledged property in the generation process of LLMs. This also has motivated a trend of research on prompting engineering",
1223
+ "bbox": [
1224
+ 112,
1225
+ 856,
1226
+ 489,
1227
+ 921
1228
+ ],
1229
+ "page_idx": 6
1230
+ },
1231
+ {
1232
+ "type": "table",
1233
+ "img_path": "images/96a2e3449bf07a404e4404cac2d7ccf7d3487a0db8accbc1e40dc689599ade61.jpg",
1234
+ "table_caption": [
1235
+ "Table 3: Gender bias in author attribution performance."
1236
+ ],
1237
+ "table_footnote": [],
1238
+ "table_body": "<table><tr><td>Gender</td><td>Top 1 Acc.</td><td>Top 2 Acc.</td><td>Top 5 Acc.</td></tr><tr><td>Male</td><td>77.0 ± 4.2</td><td>82.0 ± 3.8</td><td>92.0 ± 2.7</td></tr><tr><td>Female</td><td>89.0 ± 3.1</td><td>91.0 ± 2.9</td><td>95.0 ± 2.2</td></tr></table>",
1239
+ "bbox": [
1240
+ 557,
1241
+ 574,
1242
+ 833,
1243
+ 627
1244
+ ],
1245
+ "page_idx": 6
1246
+ },
1247
+ {
1248
+ "type": "text",
1249
+ "text": "Table 4: Author attribution performance in each gender subgroup.",
1250
+ "bbox": [
1251
+ 507,
1252
+ 637,
1253
+ 882,
1254
+ 667
1255
+ ],
1256
+ "page_idx": 6
1257
+ },
1258
+ {
1259
+ "type": "text",
1260
+ "text": "(Zhang et al., 2023b; Guo et al., 2024) as different promptings can lead to completely different performance. However, our method appears to be relatively insensitive to the choice of prompt, which makes our method more robust, maintaining high performance and stability across various settings.",
1261
+ "bbox": [
1262
+ 507,
1263
+ 693,
1264
+ 882,
1265
+ 789
1266
+ ],
1267
+ "page_idx": 6
1268
+ },
1269
+ {
1270
+ "type": "text",
1271
+ "text": "4.4 Bias Analysis",
1272
+ "text_level": 1,
1273
+ "bbox": [
1274
+ 507,
1275
+ 802,
1276
+ 662,
1277
+ 818
1278
+ ],
1279
+ "page_idx": 6
1280
+ },
1281
+ {
1282
+ "type": "text",
1283
+ "text": "An algorithm trained on an entire dataset may exhibit different accuracy levels across different subgroups during testing (Chouldechova and G'Sell, 2017; Pastor et al., 2021). This section discusses such bias issues and measures how the algorithm's accuracy varies for different subgroups.",
1284
+ "bbox": [
1285
+ 507,
1286
+ 825,
1287
+ 882,
1288
+ 921
1289
+ ],
1290
+ "page_idx": 6
1291
+ },
1292
+ {
1293
+ "type": "page_number",
1294
+ "text": "13222",
1295
+ "bbox": [
1296
+ 477,
1297
+ 927,
1298
+ 524,
1299
+ 940
1300
+ ],
1301
+ "page_idx": 6
1302
+ },
1303
+ {
1304
+ "type": "table",
1305
+ "img_path": "images/db0a88cfb20fc4a1103aab05b63bc2c3261897ab20dad33014b6c41cc3bc0525.jpg",
1306
+ "table_caption": [],
1307
+ "table_footnote": [
1308
+ "(a) performance in each rating subgroup."
1309
+ ],
1310
+ "table_body": "<table><tr><td>Interval</td><td>Top 1 Acc.</td><td>Top 2 Acc.</td><td>Top 5 Acc.</td></tr><tr><td>[1 - 2]</td><td>82.0 ± 3.8</td><td>89.0 ± 3.1</td><td>96.0 ± 2.0</td></tr><tr><td>[3 - 4]</td><td>87.0 ± 3.4</td><td>94.0 ± 2.4</td><td>99.0 ± 1.0</td></tr><tr><td>[5 - 6]</td><td>90.0 ± 3.0</td><td>96.0 ± 2.0</td><td>100.0 ± 0.0</td></tr><tr><td>[7 - 8]</td><td>88.0 ± 3.3</td><td>92.0 ± 2.7</td><td>97.0 ± 1.7</td></tr><tr><td>[9 - 10]</td><td>89.0 ± 3.1</td><td>93.0 ± 2.6</td><td>96.0 ± 2.0</td></tr></table>",
1311
+ "bbox": [
1312
+ 218,
1313
+ 82,
1314
+ 468,
1315
+ 160
1316
+ ],
1317
+ "page_idx": 7
1318
+ },
1319
+ {
1320
+ "type": "table",
1321
+ "img_path": "images/47ef5ee77f9df591ebc982fa86537704da1cb358ab35bc48cc138b193a449ff2.jpg",
1322
+ "table_caption": [],
1323
+ "table_footnote": [
1324
+ "(b) performance in each age subgroup."
1325
+ ],
1326
+ "table_body": "<table><tr><td>Age</td><td>Top 1 Acc.</td><td>Top 2 Acc.</td><td>Top 5 Acc.</td></tr><tr><td>[13 - 17]</td><td>90.0 ± 3.0</td><td>94.0 ± 2.4</td><td>99.0 ± 1.0</td></tr><tr><td>[18 - 34]</td><td>84.0 ± 3.7</td><td>89.0 ± 3.1</td><td>95.0 ± 2.2</td></tr><tr><td>[35 - 44]</td><td>80.0 ± 4.0</td><td>87.0 ± 3.4</td><td>94.0 ± 2.4</td></tr><tr><td>[45 - 48]</td><td>81.0 ± 3.9</td><td>85.0 ± 3.6</td><td>95.0 ± 2.2</td></tr></table>",
1327
+ "bbox": [
1328
+ 524,
1329
+ 82,
1330
+ 776,
1331
+ 159
1332
+ ],
1333
+ "page_idx": 7
1334
+ },
1335
+ {
1336
+ "type": "table",
1337
+ "img_path": "images/33ef77b8c669cf1d1842cc468a563cef8648c1ae5ff088bda7becb6f93662036.jpg",
1338
+ "table_caption": [
1339
+ "Table 5: Author attribution performance in each rating subgroup and age subgroup."
1340
+ ],
1341
+ "table_footnote": [],
1342
+ "table_body": "<table><tr><td>#</td><td>Foundation Models</td><td>Deployment Resource</td><td>Method</td><td>Inference Time (s)</td><td>Accuracy</td></tr><tr><td>1</td><td>LLama-3-70B</td><td>8 × A6000 (VLLM)</td><td>Logprob</td><td>462.1</td><td>85.0 ± 3.6</td></tr><tr><td>2</td><td>GPT-4-Turbo</td><td>OpenAI</td><td>QA</td><td>663.1</td><td>34.0 ± 4.7</td></tr><tr><td>3</td><td>LLama-3-70B-Instruct</td><td>Azure</td><td>QA</td><td>2065.6</td><td>31.0 ± 4.6</td></tr></table>",
1343
+ "bbox": [
1344
+ 203,
1345
+ 223,
1346
+ 788,
1347
+ 288
1348
+ ],
1349
+ "page_idx": 7
1350
+ },
1351
+ {
1352
+ "type": "text",
1353
+ "text": "Table 6: Efficiency analysis between prompt-based method and logprob-based method on Blog dataset.",
1354
+ "bbox": [
1355
+ 147,
1356
+ 298,
1357
+ 845,
1358
+ 313
1359
+ ],
1360
+ "page_idx": 7
1361
+ },
1362
+ {
1363
+ "type": "text",
1364
+ "text": "Influence of Gender We conduct 500 tests which consists of 237 tests for blogs written by male authors and 263 tests for blogs written by female authors and show their accuracy of authorship attribution separately in Table 3. The results indicate that authorship attribution for blogs written by female authors exhibits higher accuracy. This suggests that female-authored blogs might contain more distinct personal styles, making it easier to infer the author.",
1365
+ "bbox": [
1366
+ 112,
1367
+ 338,
1368
+ 487,
1369
+ 498
1370
+ ],
1371
+ "page_idx": 7
1372
+ },
1373
+ {
1374
+ "type": "text",
1375
+ "text": "4.5 Subgroup Analysis",
1376
+ "text_level": 1,
1377
+ "bbox": [
1378
+ 112,
1379
+ 510,
1380
+ 309,
1381
+ 526
1382
+ ],
1383
+ "page_idx": 7
1384
+ },
1385
+ {
1386
+ "type": "text",
1387
+ "text": "When considering authorship attribution restricted to specific subgroups, the task can either become simpler or more difficult. Certain subgroups may express personal styles more distinctly, making authorship attribution easier, while others may be more homogeneous, making it more challenging. Here, we consider three subgroup factors: gender, age, and rating, to analyze the performance under each group.",
1388
+ "bbox": [
1389
+ 112,
1390
+ 532,
1391
+ 487,
1392
+ 677
1393
+ ],
1394
+ "page_idx": 7
1395
+ },
1396
+ {
1397
+ "type": "text",
1398
+ "text": "Subgroup by Gender As shown in Table 4, we evaluated the performance of authorship attribution within different gender subgroups in the Blog dataset. We observed that authorship attribution performed better within the female subgroup, consistent with findings in Section 4.4, suggesting female-authored blogs possess more distinctive personal styles.",
1399
+ "bbox": [
1400
+ 112,
1401
+ 686,
1402
+ 487,
1403
+ 814
1404
+ ],
1405
+ "page_idx": 7
1406
+ },
1407
+ {
1408
+ "type": "text",
1409
+ "text": "Subgroup by Rating Table 5 (a) shows the performance of authorship attribution across different rating ranges in the IMDb review dataset. Overall, we can see that rating does influence performance, with review in the $[5 - 6]$ rating range easier to attribute. Despite such difference, our method con",
1410
+ "bbox": [
1411
+ 112,
1412
+ 825,
1413
+ 489,
1414
+ 921
1415
+ ],
1416
+ "page_idx": 7
1417
+ },
1418
+ {
1419
+ "type": "text",
1420
+ "text": "sistently obtains good performance across all subgroups.",
1421
+ "bbox": [
1422
+ 507,
1423
+ 338,
1424
+ 882,
1425
+ 370
1426
+ ],
1427
+ "page_idx": 7
1428
+ },
1429
+ {
1430
+ "type": "text",
1431
+ "text": "Subgroup of Age Table 5 (b) shows the performance of authorship attribution across different age ranges of bloggers in the Blog dataset. We observed that age significantly influences performance. The youngest age group [13 - 17] exhibited the highest top-1 accuracy at $90\\%$ , while accuracy decreased with increasing author age. This suggests that younger authors tend to have more distinct opinions and identifiable writing styles. Despite performance differences, our method maintained relatively overall high performance, with the lowest accuracy still surpassing that of GPT-4-Turbo with QA method.",
1432
+ "bbox": [
1433
+ 507,
1434
+ 394,
1435
+ 884,
1436
+ 602
1437
+ ],
1438
+ "page_idx": 7
1439
+ },
1440
+ {
1441
+ "type": "text",
1442
+ "text": "4.6 Efficiency Analysis",
1443
+ "text_level": 1,
1444
+ "bbox": [
1445
+ 507,
1446
+ 630,
1447
+ 705,
1448
+ 645
1449
+ ],
1450
+ "page_idx": 7
1451
+ },
1452
+ {
1453
+ "type": "text",
1454
+ "text": "Table 6 shows the efficiency comparison of different methods on the imdb dataset. Our Logprob method operates with notably lower runtime compared to QA methods. This is primarily due to the Logprob method requiring only a single forward pass through the LLM for each author to estimate the log probabilities. In contrast, QA methods generally need multiple iterations of token generations to form a response, which increases computation time substantially. In the mean time, our method achieves an accuracy of up to $85\\%$ , surpassing QA method based on GPT-4-Turbo in both efficiency and accuracy.",
1455
+ "bbox": [
1456
+ 505,
1457
+ 659,
1458
+ 882,
1459
+ 868
1460
+ ],
1461
+ "page_idx": 7
1462
+ },
1463
+ {
1464
+ "type": "text",
1465
+ "text": "In summary, our method proves to be effective and efficient in performing authorship attribution across various datasets and setups.",
1466
+ "bbox": [
1467
+ 507,
1468
+ 873,
1469
+ 880,
1470
+ 921
1471
+ ],
1472
+ "page_idx": 7
1473
+ },
1474
+ {
1475
+ "type": "page_number",
1476
+ "text": "13223",
1477
+ "bbox": [
1478
+ 477,
1479
+ 927,
1480
+ 524,
1481
+ 940
1482
+ ],
1483
+ "page_idx": 7
1484
+ },
1485
+ {
1486
+ "type": "text",
1487
+ "text": "5 Conclusion",
1488
+ "text_level": 1,
1489
+ "bbox": [
1490
+ 114,
1491
+ 84,
1492
+ 247,
1493
+ 98
1494
+ ],
1495
+ "page_idx": 8
1496
+ },
1497
+ {
1498
+ "type": "text",
1499
+ "text": "In this paper, we study the problem of authorship attribution. We demonstrate the effectiveness of utilizing pre-trained Large Language Models (LLMs) for one-shot author attribution. Our Bayesian approach leverages the probabilistic nature of language models like Llama-3 to infer authorship. Our method does not require fine-tuning, therefore reduces computational overhead and data requirements. Our experiments validate that our method is more effective and efficient compared to existing techniques.",
1500
+ "bbox": [
1501
+ 112,
1502
+ 109,
1503
+ 490,
1504
+ 287
1505
+ ],
1506
+ "page_idx": 8
1507
+ },
1508
+ {
1509
+ "type": "text",
1510
+ "text": "6 Limitations",
1511
+ "text_level": 1,
1512
+ "bbox": [
1513
+ 112,
1514
+ 297,
1515
+ 250,
1516
+ 312
1517
+ ],
1518
+ "page_idx": 8
1519
+ },
1520
+ {
1521
+ "type": "text",
1522
+ "text": "The main limitations arise due to the dependence on LLMs.",
1523
+ "bbox": [
1524
+ 112,
1525
+ 322,
1526
+ 485,
1527
+ 351
1528
+ ],
1529
+ "page_idx": 8
1530
+ },
1531
+ {
1532
+ "type": "text",
1533
+ "text": "Our method relies heavily on the capabilities of LLMs, and the performance of our approach is highly affected by the size and training objectives of the LLMs. As shown in Table 1, models that are only pre-trained rather than fine-tuned for dialogue or code task performs better.",
1534
+ "bbox": [
1535
+ 112,
1536
+ 354,
1537
+ 485,
1538
+ 449
1539
+ ],
1540
+ "page_idx": 8
1541
+ },
1542
+ {
1543
+ "type": "text",
1544
+ "text": "While larger models generally perform better, they also entail higher costs, posing scalability and accessibility challenges for broader applications.",
1545
+ "bbox": [
1546
+ 112,
1547
+ 451,
1548
+ 487,
1549
+ 498
1550
+ ],
1551
+ "page_idx": 8
1552
+ },
1553
+ {
1554
+ "type": "text",
1555
+ "text": "Another limitation is due to training data of LLMs. If the training data lacks diversity or fails to include certain writing styles, the model may not fully capture the intricacies of an author's style, potentially leading to misclassifications. This limitation underscores the importance of using diverse and comprehensive training datasets.",
1556
+ "bbox": [
1557
+ 112,
1558
+ 499,
1559
+ 487,
1560
+ 611
1561
+ ],
1562
+ "page_idx": 8
1563
+ },
1564
+ {
1565
+ "type": "text",
1566
+ "text": "Furthermore, any biases present in the training data can also be absorbed by the model. These biases will influence the performance of our authorship attribution method.",
1567
+ "bbox": [
1568
+ 112,
1569
+ 611,
1570
+ 487,
1571
+ 674
1572
+ ],
1573
+ "page_idx": 8
1574
+ },
1575
+ {
1576
+ "type": "text",
1577
+ "text": "On the broader societal level, the potential for misuse of this technology is a significant concern. The challenge of regulating and overseeing the use of such powerful tools is still not fully addressed.",
1578
+ "bbox": [
1579
+ 112,
1580
+ 676,
1581
+ 487,
1582
+ 740
1583
+ ],
1584
+ "page_idx": 8
1585
+ },
1586
+ {
1587
+ "type": "text",
1588
+ "text": "Lastly, while our approach avoids the need for extensive retraining or fine-tuning, which is an advantage in many cases, this also means that our method might not adapt well to scenarios where lots of training data and computation is available, which justifies more complex and computationally intensive methods.",
1589
+ "bbox": [
1590
+ 112,
1591
+ 741,
1592
+ 487,
1593
+ 851
1594
+ ],
1595
+ "page_idx": 8
1596
+ },
1597
+ {
1598
+ "type": "text",
1599
+ "text": "Acknowledgments",
1600
+ "text_level": 1,
1601
+ "bbox": [
1602
+ 114,
1603
+ 864,
1604
+ 278,
1605
+ 879
1606
+ ],
1607
+ "page_idx": 8
1608
+ },
1609
+ {
1610
+ "type": "ref_text",
1611
+ "text": "ZH, TZ and HH were partially supported by NSF IIS 2347592, 2347604, 2348159, 2348169, DBI",
1612
+ "bbox": [
1613
+ 112,
1614
+ 889,
1615
+ 487,
1616
+ 920
1617
+ ],
1618
+ "page_idx": 8
1619
+ },
1620
+ {
1621
+ "type": "text",
1622
+ "text": "2405416, CCF 2348306, CNS 2347617.",
1623
+ "bbox": [
1624
+ 509,
1625
+ 84,
1626
+ 808,
1627
+ 98
1628
+ ],
1629
+ "page_idx": 8
1630
+ },
1631
+ {
1632
+ "type": "text",
1633
+ "text": "References",
1634
+ "text_level": 1,
1635
+ "bbox": [
1636
+ 510,
1637
+ 127,
1638
+ 608,
1639
+ 142
1640
+ ],
1641
+ "page_idx": 8
1642
+ },
1643
+ {
1644
+ "type": "list",
1645
+ "sub_type": "ref_text",
1646
+ "list_items": [
1647
+ "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.",
1648
+ "Hayri Volkan Agun and Ozgur Yilmazel. 2017. Document embedding approach for efficient authorship attribution. In 2017 2nd International Conference on Knowledge Engineering and Applications (ICKEA), pages 194-198. IEEE.",
1649
+ "Salha M Alzahrani, Naomie Salim, and Ajith Abraham. 2011. Understanding plagiarism linguistic patterns, textual features, and detection methods. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(2):133-149.",
1650
+ "Douglas Bagnall. 2015. Author identification using multi-headed recurrent neural networks. arXiv preprint arXiv:1506.04891.",
1651
+ "Ilker Nadi Bozkurt, Ozgur Baghoglu, and Erkan Uyar. 2007. Authorship attribution. In 2007 22nd international symposium on computer and information sciences, pages 1-5. IEEE.",
1652
+ "Leo Breiman. 2001. Random forests. Machine learning, 45:5-32.",
1653
+ "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.",
1654
+ "Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712.",
1655
+ "Alexandra Chouldechova and Max G'Sell. 2017. Fairer and more accurate, but for whom? 4th Workshop on Fairness, Accountability, and Transparency in Machine Learning.",
1656
+ "Mael Fabien, Esau Villatoro-Tello, Petr Moticek, and Shantipriya Parida. 2020. BertAA: BERT fine-tuning for authorship attribution. In Proceedings of the 17th International Conference on Natural Language Processing (ICON), pages 127-137, Indian Institute of Technology Patna, Patna, India. NLP Association of India (NLPAI).",
1657
+ "Zhenhao Ge, Yufang Sun, and Mark Smith. 2016. Authorship attribution using a neural network language model. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30."
1658
+ ],
1659
+ "bbox": [
1660
+ 510,
1661
+ 149,
1662
+ 884,
1663
+ 920
1664
+ ],
1665
+ "page_idx": 8
1666
+ },
1667
+ {
1668
+ "type": "page_number",
1669
+ "text": "13224",
1670
+ "bbox": [
1671
+ 477,
1672
+ 927,
1673
+ 524,
1674
+ 940
1675
+ ],
1676
+ "page_idx": 8
1677
+ },
1678
+ {
1679
+ "type": "list",
1680
+ "sub_type": "ref_text",
1681
+ "list_items": [
1682
+ "Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, and Yujiu Yang. 2024. Connecting large language models with evolutionary algorithms yields powerful prompt optimizers. In The Twelfth International Conference on Learning Representations.",
1683
+ "David I Holmes. 1994. Authorship attribution. Computers and the Humanities, 28:87-106.",
1684
+ "Baixiang Huang, Canyu Chen, and Kai Shu. 2024. Can large language models identify authorship? arXiv preprint arXiv:2403.08213.",
1685
+ "Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1-38.",
1686
+ "Patrick Juola et al. 2008. Authorship attribution. Foundations and Trends® in Information Retrieval, 1(3):233-334.",
1687
+ "Michal Kosinski. 2023. Theory of mind might have spontaneously emerged in large language models. arXiv preprint arXiv:2302.02083.",
1688
+ "Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.",
1689
+ "Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International conference on machine learning, pages 1188-1196. PMLR.",
1690
+ "Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110.",
1691
+ "Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, and Jacob Andreas. 2023. Cognitive dissonance: Why do language model outputs disagree with internal representations of truthfulness? In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4791-4797, Singapore. Association for Computational Linguistics.",
1692
+ "Andrew Kachites McCallum. 1999. Multi-label text classification with a mixture model trained by em. In AAAI'99 workshop on text learning.",
1693
+ "Seif Mechti and Fahad Almansour. 2021. An orderly survey on author attribution methods: From stylistic features to machine learning models. Int. J. Adv. Res. Eng. Technol, 12:528-538."
1694
+ ],
1695
+ "bbox": [
1696
+ 115,
1697
+ 85,
1698
+ 485,
1699
+ 919
1700
+ ],
1701
+ "page_idx": 9
1702
+ },
1703
+ {
1704
+ "type": "list",
1705
+ "sub_type": "ref_text",
1706
+ "list_items": [
1707
+ "Frederick Mosteller and David L Wallace. 1963. Inference in an authorship problem: A comparative study of discrimination methods applied to the authorship of the disputed federalist papers. Journal of the American Statistical Association, 58(302):275-309.",
1708
+ "Eliana Pastor, Luca de Alfaro, and Elena Baralis. 2021. Identifying biased subgroups in ranking and classification. Measures and Best Practices for Responsible AI at KDD 2021.",
1709
+ "Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. 2023. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950.",
1710
+ "Sebastian Ruder, Parsa Ghaffari, and John G Breslin. 2016. Character-level and multi-channel convolutional neural networks for large-scale authorship attribution. arXiv preprint arXiv:1609.06686.",
1711
+ "Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W Pennebaker. 2006. Effects of age and gender on blogging. In AAAI spring symposium: Computational approaches to analyzing weblogs, volume 6, pages 199-205.",
1712
+ "Melanie Sclar, Yejin Choi, Yulia Tsvetkov, and Alane Suhr. 2023. Quantifying language models' sensitivity to spurious features in prompt design or: How i learned to start worrying about prompt formatting. arXiv preprint arXiv:2310.11324.",
1713
+ "Yanir Seroussi, Ingrid Zukerman, and Fabian Bohnert. 2014. Authorship attribution with topic models. Computational Linguistics, 40(2):269-310.",
1714
+ "Prasha Shrestha, Sebastian Sierra, Fabio A González, Manuel Montes, Paolo Rosso, and Thamar Solorio. 2017. Convolutional neural networks for authorship attribution of short texts. In Proceedings of the 15th conference of the European chapter of the association for computational linguistics: Volume 2, short papers, pages 669-674.",
1715
+ "Kanishka Silva, Burcu Can, Frédéric Blain, Raheem Sarwar, Laura Ugolini, and Ruslan Mitkov. 2023. Authorship attribution of late 19th century novels using gan-bert. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), pages 310-320.",
1716
+ "Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2023. Large language models encode clinical knowledge. Nature, 620(7972):172-180.",
1717
+ "Efstathios Stamatos. 2009. A survey of modern authorship attribution methods. Journal of the American Society for information Science and Technology, 60(3):538-556."
1718
+ ],
1719
+ "bbox": [
1720
+ 510,
1721
+ 85,
1722
+ 880,
1723
+ 919
1724
+ ],
1725
+ "page_idx": 9
1726
+ },
1727
+ {
1728
+ "type": "page_number",
1729
+ "text": "13225",
1730
+ "bbox": [
1731
+ 477,
1732
+ 928,
1733
+ 524,
1734
+ 940
1735
+ ],
1736
+ "page_idx": 9
1737
+ },
1738
+ {
1739
+ "type": "list",
1740
+ "sub_type": "ref_text",
1741
+ "list_items": [
1742
+ "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.",
1743
+ "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.",
1744
+ "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837.",
1745
+ "Sarim Zafar, Muhammad Usman Sarwar, Saeed Salem, and Muhammad Zubair Malik. 2020. Language and obfuscation oblivious source code authorship attribution. IEEE Access, 8:197581-197596.",
1746
+ "Biao Zhang, Barry Haddow, and Alexandra Birch. 2023a. Prompting large language model for machine translation: A case study. In International Conference on Machine Learning, pages 41092-41110. PMLR.",
1747
+ "Richong Zhang, Zhiyuan Hu, Hongyu Guo, and Yongyi Mao. 2018. Syntax encoding with application in authorship attribution. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2742-2753, Brussels, Belgium. Association for Computational Linguistics.",
1748
+ "Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B Hashimoto. 2024. Benchmarking large language models for news summarization. Transactions of the Association for Computational Linguistics, 12:39-57.",
1749
+ "Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2023b. Automatic chain of thought prompting in large language models. In The Eleventh International Conference on Learning Representations."
1750
+ ],
1751
+ "bbox": [
1752
+ 115,
1753
+ 85,
1754
+ 487,
1755
+ 671
1756
+ ],
1757
+ "page_idx": 10
1758
+ },
1759
+ {
1760
+ "type": "text",
1761
+ "text": "A Ethical Considerations",
1762
+ "text_level": 1,
1763
+ "bbox": [
1764
+ 115,
1765
+ 678,
1766
+ 349,
1767
+ 694
1768
+ ],
1769
+ "page_idx": 10
1770
+ },
1771
+ {
1772
+ "type": "text",
1773
+ "text": "Our method using LLMs for authorship attribution brings several ethical considerations that must be addressed to ensure responsible and fair use of the technology.",
1774
+ "bbox": [
1775
+ 112,
1776
+ 703,
1777
+ 485,
1778
+ 768
1779
+ ],
1780
+ "page_idx": 10
1781
+ },
1782
+ {
1783
+ "type": "text",
1784
+ "text": "Privacy and Anonymity The capacity of LLMs to attribute authorship with high accuracy can lead to ethical challenges regarding privacy and anonymity. Individuals who wish to remain anonymous or protect their identity could be compromised if authorship attribution tools are misused. Therefore, it is crucial to establish strict guidelines and ethical standards on the use of such technologies to prevent breaches of privacy.",
1785
+ "bbox": [
1786
+ 112,
1787
+ 776,
1788
+ 487,
1789
+ 921
1790
+ ],
1791
+ "page_idx": 10
1792
+ },
1793
+ {
1794
+ "type": "text",
1795
+ "text": "Potential for Abuse Despite multiple beneficial applications, the misuse potential of authorship attribution tools is significant. Risks include the use of this technology to suppress free speech or to endanger personal safety by identifying individuals in contexts where anonymity is crucial for safety. Addressing these risks requires robust governance to prevent misuse and to ensure that the technology is used ethically and responsibly.",
1796
+ "bbox": [
1797
+ 507,
1798
+ 84,
1799
+ 882,
1800
+ 229
1801
+ ],
1802
+ "page_idx": 10
1803
+ },
1804
+ {
1805
+ "type": "text",
1806
+ "text": "Bias Issue The performance of authorship attribution methods can vary across different demographics, leading to potential biases. It is important to continually assess and correct these biases to ensure fairness in the application of this technology.",
1807
+ "bbox": [
1808
+ 507,
1809
+ 237,
1810
+ 882,
1811
+ 318
1812
+ ],
1813
+ "page_idx": 10
1814
+ },
1815
+ {
1816
+ "type": "text",
1817
+ "text": "Misclassification Issue Given the high stakes involved, especially in forensic contexts, the accuracy of authorship attribution is important. Misclassifications can have serious consequences, including wrongful accusations or legal implications. It is essential for authorship attribution methods to be reliable and for their limitations to be transparently communicated to users.",
1818
+ "bbox": [
1819
+ 507,
1820
+ 326,
1821
+ 882,
1822
+ 454
1823
+ ],
1824
+ "page_idx": 10
1825
+ },
1826
+ {
1827
+ "type": "text",
1828
+ "text": "B Broader Impact",
1829
+ "text_level": 1,
1830
+ "bbox": [
1831
+ 509,
1832
+ 467,
1833
+ 685,
1834
+ 483
1835
+ ],
1836
+ "page_idx": 10
1837
+ },
1838
+ {
1839
+ "type": "text",
1840
+ "text": "Our study of authorship attribution using LLMs contributes to advancements in various domains:",
1841
+ "bbox": [
1842
+ 507,
1843
+ 492,
1844
+ 880,
1845
+ 523
1846
+ ],
1847
+ "page_idx": 10
1848
+ },
1849
+ {
1850
+ "type": "text",
1851
+ "text": "Forensic Linguistics Our research contributes to the field of forensic linguistics by providing tools that can solve crimes involving anonymous or disputed texts. This can be particularly useful for law enforcement and legal professionals who need to gather evidence and make more informed decisions.",
1852
+ "bbox": [
1853
+ 507,
1854
+ 533,
1855
+ 882,
1856
+ 643
1857
+ ],
1858
+ "page_idx": 10
1859
+ },
1860
+ {
1861
+ "type": "text",
1862
+ "text": "Intellectual Property Protection Our method can serve as a powerful tool in identifying the authors of texts, which can help protect intellectual property rights and resolve disputes in copyright.",
1863
+ "bbox": [
1864
+ 507,
1865
+ 653,
1866
+ 882,
1867
+ 718
1868
+ ],
1869
+ "page_idx": 10
1870
+ },
1871
+ {
1872
+ "type": "text",
1873
+ "text": "Historical Text Attribution In literary and historical studies, determining the authorship of texts can provide insights into their origins and contexts, enhancing our understanding and interpretation.",
1874
+ "bbox": [
1875
+ 507,
1876
+ 727,
1877
+ 882,
1878
+ 791
1879
+ ],
1880
+ "page_idx": 10
1881
+ },
1882
+ {
1883
+ "type": "text",
1884
+ "text": "Enhanced Content Management Media and content companies can use this technology to manage content more effectively by accurately attributing authorship to various contributors.",
1885
+ "bbox": [
1886
+ 507,
1887
+ 800,
1888
+ 882,
1889
+ 863
1890
+ ],
1891
+ "page_idx": 10
1892
+ },
1893
+ {
1894
+ "type": "text",
1895
+ "text": "Educational Applications In educational settings, our method can help prevent plagiarism and promote academic integrity. It can also serve as",
1896
+ "bbox": [
1897
+ 507,
1898
+ 873,
1899
+ 882,
1900
+ 921
1901
+ ],
1902
+ "page_idx": 10
1903
+ },
1904
+ {
1905
+ "type": "page_number",
1906
+ "text": "13226",
1907
+ "bbox": [
1908
+ 477,
1909
+ 927,
1910
+ 524,
1911
+ 940
1912
+ ],
1913
+ "page_idx": 10
1914
+ },
1915
+ {
1916
+ "type": "text",
1917
+ "text": "a teaching tool to help students understand and appreciate stylistic differences between authors.",
1918
+ "bbox": [
1919
+ 115,
1920
+ 85,
1921
+ 485,
1922
+ 115
1923
+ ],
1924
+ "page_idx": 11
1925
+ },
1926
+ {
1927
+ "type": "text",
1928
+ "text": "While our method holds promise across multiple applications, it is crucial to deploy it with caution. Ensuring that the technology is used responsibly and ethically will be key to maximizing its benefits while minimizing potential harm.",
1929
+ "bbox": [
1930
+ 115,
1931
+ 118,
1932
+ 485,
1933
+ 195
1934
+ ],
1935
+ "page_idx": 11
1936
+ },
1937
+ {
1938
+ "type": "page_number",
1939
+ "text": "13227",
1940
+ "bbox": [
1941
+ 478,
1942
+ 928,
1943
+ 524,
1944
+ 940
1945
+ ],
1946
+ "page_idx": 11
1947
+ }
1948
+ ]
2024/A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution/bafb35f6-8fe7-40f9-8743-f4d30539c3ef_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution/bafb35f6-8fe7-40f9-8743-f4d30539c3ef_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bb0bbe5e43d5f4f30a0f080a0a0bcd3a2bce0bd45434df976c931a0d1c087a8f
3
+ size 296752
2024/A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution/full.md ADDED
@@ -0,0 +1,378 @@
1
+ # A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution
2
+
3
+ Zhengmian Hu $^{1,2*}$ , Tong Zheng $^{1*}$ , Heng Huang $^{1}$
4
+
5
+ $^{1}$ Department of Computer Science, University of Maryland, College Park, MD 20742 $^{2}$ Adobe Research
6
+
7
+ huzhengmian@gmail.com, zhengtong12356@gmail.com, heng@umd.edu,
8
+
9
+ # Abstract
10
+
11
+ Authorship attribution aims to identify the origin or author of a document. Traditional approaches have heavily relied on manual features and fail to capture long-range correlations, limiting their effectiveness. Recent advancements leverage text embeddings from pretrained language models, which require significant fine-tuning on labeled data, posing challenges in data dependency and limited interpretability. Large Language Models (LLMs), with their deep reasoning capabilities and ability to maintain long-range textual associations, offer a promising alternative. This study explores the potential of pre-trained LLMs in one-shot authorship attribution, specifically utilizing Bayesian approaches and probability outputs of LLMs. Our methodology calculates the probability that a text entails previous writings of an author, reflecting a more nuanced understanding of authorship. By utilizing only pre-trained models such as Llama-3-70B, our results on the IMDb and blog datasets show an impressive $85\%$ accuracy in one-shot authorship classification across ten authors. Our findings set new baselines for one-shot authorship analysis using LLMs and expand the application scope of these models in forensic linguistics. This work also includes extensive ablation studies to validate our approach.
12
+
13
+ # 1 Introduction
14
+
15
+ Authorship attribution, the process of identifying the origin or author of a document, has been a longstanding challenge in forensic linguistics. It has numerous applications, including detecting plagiarism (Alzahrani et al., 2011) and attribution of historical text (Silva et al., 2023). As the digital age progresses, the need for reliable methods to determine authorship has become increasingly important, especially in the context of combating misinformation spread through social media and
16
+
17
+ conducting forensic analysis. The ability to attribute authorship can also lead to challenges around privacy and anonymity (Juola et al., 2008).
18
+
19
+ The field traces its roots back to the early 19th century (Mechti and Almansour, 2021), with early studies focusing on stylistic features and human expert analysis (Mosteller and Wallace, 1963). Traditional methods often relied on stylometry, which quantifies writing styles (Holmes, 1994), and rule-based computational linguistic methods (Stamatatos, 2009) to deduce authorship. Later, statistical algorithms incorporating extensive text preprocessing and feature engineering (Bozkurt et al., 2007; Seroussi et al., 2014) were introduced to improve accuracy. However, these methods often struggled with capturing long-range dependencies in text and require careful setup of specific thresholds for various indicators, which can be challenging to select effectively. They also involve designing complex, high-quality features, which can be costly and time-consuming.
20
+
21
+ The advent of deep learning has transformed the landscape of authorship attribution by turning the problem into a multi-class classification challenge, allowing for the capture of more features and addressing more complex scenarios effectively (Ruder et al., 2016; Ge et al., 2016; Shrestha et al., 2017; Zhang et al., 2018). However, these neural network (NN) models often lack interpretability and struggle with generalization in cases of limited samples.
22
+
23
+ Despite advancements, the field still faces significant challenges. Obtaining large, balanced datasets that represent multiple authors fairly is difficult, and as the number of authors increases, the accuracy of machine learning models tends to decrease.
24
+
25
+ On the other hand, language models, central to modern NLP applications, define the probability of distributions of words or sequences of words and have traditionally been used to predict and generate plausible language. Yet, for a long time, these
26
+
27
+ models, including high-bias models like bag-of-words and n-gram models, struggled to fit the true probability distributions of natural language. Deep learning's rapid development has enabled orders of magnitude scaling up of computing and data, facilitating the use of more complex models such as Random Forests (Breiman, 2001), character-level CNNs (Zafar et al., 2020), Recurrent Neural Networks (Bagnall, 2015), and Transformer (Vaswani et al., 2017).
28
+
29
+ The recent rapid evolution of Large Language Models (LLMs) has dramatically improved the ability to fit natural language distributions. Trained on massive corpora exceeding 1 trillion tokens, these models have become highly capable of handling a wide range of linguistic tasks, including understanding, generation, and meaningful dialogue (Liang et al., 2022; Bubeck et al., 2023; Zhang et al., 2023a, 2024). They can also explain complex concepts and capture subtle nuances of language. They have been extensively applied in various applications such as chatbots, writing assistants, information retrieval, and translation services. More impressively, LLMs have expanded their utility to novel tasks without additional training, simply through the use of prompts and in-context learning (Brown et al., 2020). This unique ability motivates researchers to adapt LLMs to an even broader range of tasks and topics including reasoning (Wei et al., 2022), theory of mind (Kosinski, 2023) and medical scenario (Singhal et al., 2023).
30
+
31
+ Interestingly, language models have also been explored for authorship attribution (Agun and Yilmazel, 2017; Le and Mikolov, 2014; McCallum, 1999). Recently, research has utilized LLMs for question answering (QA) tasks within the application of authorship verification and authorship attribution (Huang et al., 2024), though these have primarily been tested in small-scale settings. Other approaches have attempted to leverage model embeddings and fine-tuning for authorship attribution, such as using GAN-BERT (Silva et al., 2023) and BertAA (Fabien et al., 2020). However, these techniques often face challenges with scalability and need retraining when updating candidate authors. Moreover, they require relatively large datasets and multiple epochs of fine-tuning to converge. Given the challenges with current approaches, a natural question arises: How can we harness LLMs for more effective authorship attribution?
32
+
33
+ Two aspects of evidence provide insights to
34
+
35
+ answer the above questions. First, recent studies on LLMs have shown that these models suffer from hallucination problems (Ji et al., 2023). More interestingly, the outputs of LLMs given prompts may disagree with their internal thinking (Liu et al., 2023). Therefore, it is advisable not to rely solely on direct sampling results from LLMs. Second, the training objective of LLMs is to maximize the likelihood of the next token given all previous tokens. This indicates that probability may be a potential indicator for attributing texts to authors.
36
+
37
+ Language models are essentially probabilistic models, but we find that the probabilistic nature of LLMs and its potential for authorship identification remain underexploited. Our study seeks to bridge this gap. Specifically, we explore the capability of LLMs to perform one-shot authorship attribution among multiple candidates.
38
+
39
+ We propose a novel approach based on a Bayesian framework that utilizes the probability outputs from LLMs. By deriving text-level log probabilities from token-level log probabilities, we establish a reliable measure of likelihood that a query text was written by a specific author given example texts from each candidate author. We also design suitable prompts to enhance the accuracy of these log probabilities. By calculating the posterior probability of authorship, we can infer the most likely author of a document (Figure 1). Due to the pivotal role of log probability in our algorithm, we coined our approach the "Logprob method."
40
+
41
+ Our new method has three main advantages:
42
+
43
+ - No Need for Fine-Tuning: Our approach aligns the classification task with the pretraining objective, both focusing on computing entailment probability. This avoids any objective mismatch introduced by fine-tuning. Moreover, our method leverages the inherent capabilities of pretrained LLMs and avoids knowledge forgetting that often occurs during fine-tuning.
44
+ - Speed and Efficiency: This approach requires only a single forward pass through the model for each author, making it significantly faster and more cost-effective than the usual question-answering use of language models, which involves sampling a sequence of tokens as the answer, with one forward pass for each generated token.
45
+ - No Need for Manual Feature Engineering: The pre-training on diverse data enables LLMs to
46
+
47
+ ![](images/cea13a2e406ea9dec89c42125cef1fb685c8f8f55b747706e39744a9526b7b66.jpg)
48
+ Figure 1: Illustration of Bayesian authorship attribution using an LLM.
49
+
50
+ automatically capture and utilize subtle nuances in language, thus eliminating the need for manually designing complex features, which can be costly and time-consuming.
51
+
52
+ By applying this technique, we have achieved state-of-the-art results in one-shot learning on the IMDb and blog datasets, demonstrating an impressive $85\%$ accuracy across ten authors. This advancement establishes a new baseline for one-shot authorship analysis and illustrates the robust potential of LLMs in forensic linguistics.
53
+
54
+ # 2 Method
55
+
56
+ Our approach to authorship attribution is based on a Bayesian framework. Given a document whose authorship is unknown, our objective is to identify the most probable author from a set using the capabilities of Large Language Models (LLMs).
57
+
58
+ We consider a scenario where we have a set of authors $\mathcal{A} = \{a_1, \ldots, a_n\}$ and a set of all possible texts $\mathcal{E}$ . Given an authorship attribution problem, where each author $a_i$ has written a set of texts $t_{i,1}, t_{i,2}, \ldots, t_{i,m_i} \in \mathcal{E}$ , we denote the collection of known texts of an author $a_i$ as $t(a_i) = (t_{i,1}, t_{i,2}, \ldots, t_{i,m_i})$ . For an unknown text $u \in \mathcal{E}$ , we aim to determine the most likely author from the set $\mathcal{A}$ .
59
+
60
+ To estimate the author of text $u$ , we use a Bayesian framework where the probability that $u$ was written by author $a_{i}$ is given by:
61
+
62
+ $$
63
+ P \left(a _ {i} \mid u\right) = \frac {P \left(u \mid a _ {i}\right) P \left(a _ {i}\right)}{P (u)}. \tag {1}
64
+ $$
65
+
66
+ Here, $P(a_{i})$ is the prior probability of each author, assumed to be equal unless stated otherwise,
67
+
68
+ making the problem focus primarily on estimating $P(u|a_i)$ .
69
+
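+ To make this concrete, here is a minimal sketch (not part of the original paper; the log-likelihood values are hypothetical placeholders) showing that, under a uniform prior, the posterior in Eq. (1) is simply a softmax over per-author log-likelihoods, so attribution reduces to an argmax:
+
+ ```python
+ import numpy as np
+
+ # Hypothetical per-author log-likelihoods log P(u | a_i) returned by an LLM.
+ log_likelihoods = np.array([-958.41, -964.51, -972.30])
+
+ # With a uniform prior P(a_i) = 1/n, the evidence P(u) cancels and the
+ # posterior in Eq. (1) becomes a softmax over the log-likelihoods.
+ shifted = log_likelihoods - log_likelihoods.max()   # numerical stability
+ posterior = np.exp(shifted) / np.exp(shifted).sum()
+
+ # The most likely author is the argmax, identical to the raw log-likelihood argmax.
+ most_likely_author = int(np.argmax(posterior))
+ ```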
70
+ Assuming that each author $a_i$ has a unique writing style represented by a probability distribution $P(\cdot | a_i)$ , texts written by $a_i$ are samples from this distribution. To estimate $P(u | a_i)$ , we consider the independence assumption: texts by the same author are independently and identically distributed (i.i.d.). Thus, the unknown text $u$ is also presumed to be drawn from $P(\cdot | a_i)$ for some author $a_i$ and is independent of other texts from that author.
71
+
72
+ Notice that although texts are independent under the i.i.d. assumption when conditioned on a particular author, there exists a correlation between the unknown text $u$ and the set of known texts $t(a)$ in the absence of knowledge about the author. This correlation can be exploited to deduce the most likely author of $u$ using the known texts.
73
+
74
+ Specifically, we have
75
+
76
+ $$
77
+ \begin{array}{l} P (u | \boldsymbol {t} (a _ {i})) = \sum_ {a _ {j} \in \mathcal {A}} P (u, a _ {j} | \boldsymbol {t} (a _ {i})) \\ = \sum_ {a _ {j} \in \mathcal {A}} P (u | a _ {j}, \boldsymbol {t} (a _ {i})) P (a _ {j} | \boldsymbol {t} (a _ {i})) \\ = \sum_ {a _ {j} \in \mathcal {A}} P (u | a _ {j}) P \left(a _ {j} \mid \boldsymbol {t} \left(a _ {i}\right)\right), \tag {2} \\ \end{array}
78
+ $$
79
+
80
+ where the last equality uses the i.i.d. assumption, meaning that when conditioned on a specific author $a_{j}$ , $u$ is independent of other texts.
81
+
82
+ We then introduce the "sufficient training set" assumption, where:
83
+
84
+ $$
85
+ P \left(a _ {j} \mid \boldsymbol {t} \left(a _ {i}\right)\right) = \left\{ \begin{array}{l l} 1 & a _ {i} = a _ {j} \\ 0 & a _ {i} \neq a _ {j}. \end{array} \right. \tag {3}
86
+ $$
87
+
88
+ This implies that the training set is sufficiently
89
+
90
+ [Figure 2 content: an example text for Author 1 and one for Author 2, each followed by the connector "Here is the text from the same author:" and the same query text; the resulting log probabilities (-958.41 vs. -964.51) are compared, and the candidate with the higher log probability is selected as the most likely author.]
113
+
114
+ Figure 2: Example of prompt construction and authorship attribution based on log probabilities. The logprob is computed on the query portion of the prompt, which represents the text from the unknown author.
115
+
116
+ comprehensive to unambiguously differentiate authors, leading to:
117
+
118
+ $$
119
+ P (u | \boldsymbol {t} (a _ {i})) = P (u | a _ {j}), \tag {4}
120
+ $$
121
+
122
+ where $a_{j}$ is the assumed true author of text $u$.
123
+
124
+ We use Large Language Models (LLMs) to estimate $P(u|t(a_i))$ , which represents the probability that a new text $u$ was written by the author of a given set of texts $t(a_i)$ .
125
+
126
+ The probabilistic nature of language models means that they typically calculate the probability of a token or a sequence of tokens given prior context. For a vocabulary set $\Sigma$, the input to a language model might be a sequence of tokens $x_{1},\ldots ,x_{m}\in \Sigma$, and the model's output would be the probability distribution $P_{\mathrm{LLM}}(\cdot |x_1,\dots,x_m)$, typically stored in logarithmic scale for numerical stability.
127
+
128
+ When using an autoregressive language model, we can measure not only the probability of the next token but also the probability of a subsequent sequence of tokens. For instance, if we have a prompt consisting of tokens $x_{1},\ldots ,x_{m}\in \Sigma$ and we want to measure the probability of a sequence $y_{1},\ldots ,y_{s}\in \Sigma$, we calculate:
129
+
130
+ $$
131
+ P_{\mathrm{LLM}}(y_{1}, \dots, y_{s} \mid x_{1}, \dots, x_{m}) = \prod_{i=1}^{s} P_{\mathrm{LLM}}\left(y_{i} \mid x_{1}, \dots, x_{m}, y_{1}, \dots, y_{i-1}\right). \tag{5}
132
+ $$
133
+
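+ As an illustration of Eq. (5), the sketch below sums the token-level log-probabilities of a continuation under a HuggingFace causal language model; the model name is only an example, and this is a simplified sketch rather than the authors' released code:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ MODEL_NAME = "meta-llama/Meta-Llama-3-8B"  # illustrative choice
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
+ model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
+ model.eval()
+
+ def sequence_logprob(prompt: str, continuation: str) -> float:
+     """Return log P_LLM(continuation | prompt) as in Eq. (5)."""
+     prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
+     cont_ids = tokenizer(continuation, add_special_tokens=False, return_tensors="pt").input_ids
+     input_ids = torch.cat([prompt_ids, cont_ids], dim=1)
+     with torch.no_grad():
+         logits = model(input_ids).logits                    # (1, T, vocab)
+     log_probs = torch.log_softmax(logits[:, :-1], dim=-1)   # predictions for positions 2..T
+     targets = input_ids[:, 1:]
+     token_logprobs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
+     # Keep only the positions that predict the continuation tokens, then sum.
+     return token_logprobs[0, -cont_ids.shape[1]:].sum().item()
+ ```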
134
+ To estimate $P(u|t(a_i))$ for authorship attribution, we define:
135
+
136
+ $$
137
+ P(u \mid \boldsymbol{t}(a_{i})) = P_{\mathrm{LLM}}(u \mid \mathrm{prompt\_construction}(\boldsymbol{t}(a_{i}))). \tag{6}
138
+ $$
139
+
140
+ The prompt construction can vary, providing flexibility in how we use the model to estimate probabilities. Our method involves constructing a prompt that steers the LLM to predict the likelihood that the unknown text was written by the same author (Figure 2).
141
+
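+ A minimal sketch of the resulting attribution loop is given below; the connector string follows Prompt 1 in Table 2, and `score_fn` is assumed to behave like the `sequence_logprob` helper sketched above:
+
+ ```python
+ from typing import Callable, Sequence
+
+ def attribute_author(
+     example_texts: Sequence[str],           # one known text per candidate author (one-shot)
+     query_text: str,                        # the text of unknown authorship
+     score_fn: Callable[[str, str], float],  # (prompt, continuation) -> log-probability
+     connector: str = "Here is the text from the same author:\n",
+ ) -> int:
+     """Return the index of the candidate maximizing log P(query | prompt_construction(t(a_i)))."""
+     scores = []
+     for example in example_texts:
+         prompt = example + "\n\n" + connector  # prompt construction as in Figure 2
+         scores.append(score_fn(prompt, query_text))
+     # With a uniform prior, the posterior argmax equals the log-probability argmax.
+     return max(range(len(scores)), key=scores.__getitem__)
+ ```
+
+ In a 10-candidate one-shot test, `example_texts` would hold the ten training articles, and the returned index identifies the predicted author.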
142
+ In summary, our approach is straightforward and simple. By leveraging the capabilities of Large Language Models, we calculate the likelihood that an unknown text originates from a known author based on existing samples of their writing. This
143
+
144
+ probability assessment allows us to identify the most likely author from a set without the need for fine-tuning or feature engineering.
145
+
146
+ # 3 Experimental Setups
147
+
148
+ # 3.1 Models & Baselines
149
+
150
+ Models We selected two widely-used LLM families: 1) the LLaMA family, which includes LLaMA-2 (Touvron et al., 2023), LLaMA-3, and CodeLLaMA (Roziere et al., 2023), available in various parameter sizes and configurations, with some models specifically fine-tuned for dialogue use cases; 2) the GPT family (Brown et al., 2020), featuring GPT-3.5-Turbo and GPT-4-Turbo (Achiam et al., 2023), where we specifically used versions gpt4-turbo-2024-04-09 and gpt-3.5-turbo-0125. The LLaMA family models were deployed using the vLLM framework (Kwon et al., 2023) when used for the Logprob method and on Azure when used for question-answering. Apart from Table 1, all ablation studies of the Logprob method use the LLaMA-3-70B model.
151
+
152
+ Baselines We chose two types of baselines for comparison: 1) embedding-based methods such as BertAA (Fabien et al., 2020) and GAN-BERT (Silva et al., 2023), which require training or fine-tuning, and 2) LLM-based methods such as those described in Huang et al. (2024), which utilize LLMs for authorship attribution tasks through a question-answering (QA) approach.
153
+
154
+ # 3.2 Evaluations
155
+
156
+ Datasets We evaluated our method on two widely used author attribution datasets: 1) the IMDB62 dataset, a truncated version of the IMDB dataset (Seroussi et al., 2014), and 2) the Blog dataset (Schler et al., 2006). The IMDB62 dataset comprises 62k movie reviews from 62 authors, with each author contributing 1000 samples; it also provides extra information such as the rating score. The Blog dataset contains 681k blog comments, each with an assigned authorID. Besides the raw text and authorID, each entry includes extra information such as gender and age. Both datasets are accessible via HuggingFace.
157
+
158
+ Benchmark Construction Unlike the fixed author sets used in many previous studies, we constructed a random author set for each test to minimize variance. By default, unless specified otherwise, each experiment involved a 10-author
159
+
160
+ one-shot setting, and we conducted 100 tests for each experiment to reduce variance. Each test involved the following steps: 1) Ten candidate authors were randomly selected. 2) For each author, one (or n for n-shot) article was randomly selected as the training set. 3) One author was randomly selected from the ten candidates as the test author. 4) One article not in the training set was randomly selected from the test author's articles as the test set (with size of 1). 5) We run the authorship attribution algorithm to classify the test article into 10 categories.
161
+
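+ The five-step test construction above can be sketched as follows; the `author_to_texts` mapping and the function name are illustrative assumptions, not the released evaluation pipeline:
+
+ ```python
+ import random
+
+ def build_one_test(author_to_texts, n_candidates=10, n_shot=1, rng=None):
+     """Build one random test following steps 1)-4); step 5) runs the attribution method on it."""
+     rng = rng or random.Random()
+     candidates = rng.sample(list(author_to_texts), n_candidates)               # step 1
+     train = {a: rng.sample(author_to_texts[a], n_shot) for a in candidates}    # step 2
+     test_author = rng.choice(candidates)                                       # step 3
+     held_out = [t for t in author_to_texts[test_author] if t not in train[test_author]]
+     test_text = rng.choice(held_out)                                           # step 4
+     return candidates, train, test_author, test_text
+ ```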
162
+ Our evaluation pipeline avoids potential biases from fixed author sets and better measures the efficacy of LLMs in authorship attribution tasks. We also share our pipeline to enable fair evaluation of future related work.
163
+
164
+ Notably, the aforementioned pipeline is suitable for non-training-based methods like ours and for QA approaches. However, for training-based methods such as embedding approaches, each train-test split must be followed by retraining, demanding significant computational resources. Therefore, in this work, we directly cite scores from the original papers.
165
+
166
+ Evaluation Metrics We adopt three metrics: top-1, top-2 and top-5 accuracies. Specifically, top k accuracy is computed as follows:
167
+
168
+ $$
169
+ \mathrm{Top}~k~\mathrm{Accuracy} = \frac{\mathrm{Num}_{\mathrm{correct}}^{k}}{\mathrm{Num}_{\mathrm{all}}}, \tag{7}
170
+ $$
171
+
172
+ where $\mathrm{Num}_{\mathrm{correct}}^{\mathrm{k}}$ represents the number of tests where the actual author is among the top k predictions, and $\mathrm{Num}_{\mathrm{all}}$ represents the total number of tests.
173
+
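+ This metric can be computed directly from the ranked predictions of each test, as in this small sketch (names are illustrative):
+
+ ```python
+ def top_k_accuracy(ranked_predictions, true_authors, k):
+     """Fraction of tests whose true author appears among the top-k predictions (Eq. 7)."""
+     correct = sum(truth in ranks[:k] for ranks, truth in zip(ranked_predictions, true_authors))
+     return correct / len(true_authors)
+
+ # Example: two tests with candidates ranked by descending log-probability.
+ # top_k_accuracy([[3, 1, 0], [2, 0, 4]], [1, 4], k=2) -> 0.5
+ ```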
174
+ # 4 Experiments
175
+
176
+ Firstly, we evaluate different methods for author attribution in Section 4.1, noting that our Logprob method significantly outperforms QA-based methods in accuracy and stability across datasets. Then, we study the impact of increasing candidate numbers on performance in Section 4.2, where our method maintains high accuracy despite a larger pool of candidates. Next, in Section 4.3, we analyze prompt sensitivity, concluding that while prompt use is crucial, variations in prompt design do not significantly affect performance. Further, in Section 4.4, we explore bias in author attribution, and in Section 4.5, we measure performance variations across different subgroups. Finally, in
177
+
178
+ Section 4.6, we compare the efficiency of different author attribution methods.
179
+
180
+ # 4.1 Author Attribution Performance
181
+
182
+ Table 1 shows the main results for different methods on the IMDB62 and Blog datasets concerning authorship attribution capabilities. We make the following observations:
183
+
184
+ - LLMs with QA-based methods cannot perform author attribution tasks effectively. For example, GPT-4-Turbo can only achieve a top-1 accuracy of $34\%$ on the IMDB62 dataset and $62\%$ on the Blog dataset. Notably, there are two interesting phenomena: 1) which of GPT-4-Turbo and GPT-3.5-Turbo achieves higher accuracy is inconsistent across datasets, highlighting inherent instability in the prompt-based approach. 2) Older LLMs with smaller context windows are unable to perform author attribution because the prompt exceeds the context window. These phenomena indicate that QA methods are not a good option for enabling LLMs to conduct author attribution tasks effectively.
185
+ - Our Logprob method helps LLMs perform author attribution tasks more effectively. With LLaMA-3-70B, we achieved a top-1 accuracy of $85\%$, and both top-2 and top-5 accuracies were even higher. This suggests that LLMs equipped with our method can effectively narrow down large candidate sets. Additionally, two other points worth noting are that 1) LLMs with the Logprob method exhibit more stable performance across both tasks, something QA methods struggle with, and 2) LLMs with Logprob can conduct authorship attribution tasks with lower requirements for context window length. For instance, LLaMA-2-70B-Chat with the Logprob method can handle authorship attribution, whereas the same model with a QA approach fails when the collective text of 10 authors exceeds the context window length. These findings highlight the superiority of our Logprob method.
186
+ - A training-free method can achieve performance comparable or even superior to training-based methods. On the Blog dataset, LLaMA + Logprob showed higher top-1 accuracy than GAN-BERT and BertAA. While performance on the IMDB62 dataset was lower relative to embedding-based methods, it is important to note that Logprob achieves this as a
187
+
188
+ ![](images/7f1d4f31521dc161682d3007e6861d4c48926114d91ade37bb8ce73684a809e3.jpg)
189
+ Figure 3: Accuracy vs. number of candidates.
190
+
191
+ one-shot method, whereas embedding-based approaches require much more data for training to converge. This demonstrates that Logprob can more effectively capture the nuances necessary for authorship attribution.
192
+
193
+ # 4.2 Performance vs. Number of Candidates
194
+
195
+ One of the challenges in authorship attribution is the difficulty in correctly identifying the author as the number of candidates increases, which generally leads to decreased accuracy. Figure 3 shows the author attribution performance across different candidate counts on the IMDB62 dataset. We made the following observations:
196
+
197
+ - First, performance indeed decreases as the number of candidates increases.
198
+ - Second, across all settings, all metrics maintain relatively high scores. For example, in the setting with 50 candidates, our method achieved $76\%$ top-1 accuracy, $84\%$ top-2 accuracy, and $87\%$ top-5 accuracy.
199
+ - Third, top-2 and top-5 accuracies are more stable compared to top-1 accuracy. The model may not always place the correct author at the top, but it often includes the correct author within the top few predictions. This attribute is also crucial as it allows the narrowing down of a large pool of candidates to a smaller subset of likely candidates.
200
+
201
+ # 4.3 Analysis of Prompt Sensitivity
202
+
203
+ Our method relies on a suitable prompt, as shown in Figure 2. Here, we discuss the sensitivity of our accuracy to different prompt constructions in Table 2. We made the following observations:
204
+
205
+ - Using prompts is essential for enhancing the accuracy of our method (#1 vs. #2). This phenomenon is aligned with previous studies
206
+
207
+ <table><tr><td rowspan="2">Method</td><td rowspan="2">Model</td><td colspan="5">IMDB62 Dataset</td><td colspan="5">BLOG Dataset</td></tr><tr><td>#Candidate</td><td>n-Shot</td><td>Top 1 Acc.</td><td>Top 2 Acc.</td><td>Top 5 Acc.</td><td>#Candidate</td><td>n-Shot</td><td>Top 1 Acc.</td><td>Top 2 Acc.</td><td>Top 5 Acc.</td></tr><tr><td rowspan="12">LogProb</td><td>LLaMA-2-7B</td><td>10</td><td>1</td><td>80.0 ± 4.0</td><td>88.0 ± 3.3</td><td>97.0 ± 1.7</td><td>10</td><td>1</td><td>79.0 ± 4.1</td><td>84.0 ± 3.7</td><td>98.0 ± 1.4</td></tr><tr><td>LLaMA-2-7B-Chat</td><td>10</td><td>1</td><td>68.0 ± 4.7</td><td>80.0 ± 4.0</td><td>88.0 ± 3.3</td><td>10</td><td>1</td><td>69.0 ± 4.6</td><td>78.0 ± 4.1</td><td>89.0 ± 3.1</td></tr><tr><td>LLaMA-2-13B</td><td>10</td><td>1</td><td>84.0 ± 3.7</td><td>88.0 ± 3.3</td><td>100.0 ± 0.0</td><td>10</td><td>1</td><td>81.0 ± 3.9</td><td>86.0 ± 3.5</td><td>94.0 ± 2.4</td></tr><tr><td>LLaMA-2-70B</td><td>10</td><td>1</td><td>88.0 ± 3.3</td><td>94.0 ± 2.4</td><td>99.0 ± 1.0</td><td>10</td><td>1</td><td>88.0 ± 3.3</td><td>90.0 ± 3.0</td><td>95.0 ± 2.2</td></tr><tr><td>LLaMA-2-70B-Chat</td><td>10</td><td>1</td><td>79.0 ± 4.1</td><td>85.0 ± 3.6</td><td>95.0 ± 2.2</td><td>10</td><td>1</td><td>83.0 ± 3.8</td><td>85.0 ± 3.6</td><td>97.0 ± 1.7</td></tr><tr><td>Code-LLaMA-7B</td><td>10</td><td>1</td><td>71.0 ± 4.5</td><td>84.0 ± 3.7</td><td>96.0 ± 2.0</td><td>10</td><td>1</td><td>78.0 ± 4.1</td><td>84.0 ± 3.7</td><td>94.0 ± 2.4</td></tr><tr><td>Code-LLaMA-13B</td><td>10</td><td>1</td><td>70.0 ± 4.6</td><td>84.0 ± 3.7</td><td>98.0 ± 1.4</td><td>10</td><td>1</td><td>77.0 ± 4.2</td><td>85.0 ± 3.6</td><td>92.0 ± 2.7</td></tr><tr><td>Code-LLaMA-34B</td><td>10</td><td>1</td><td>75.0 ± 4.3</td><td>84.0 ± 3.7</td><td>98.0 ± 1.4</td><td>10</td><td>1</td><td>78.0 ± 4.1</td><td>83.0 ± 3.8</td><td>94.0 ± 2.4</td></tr><tr><td>LLaMA-3-8B</td><td>10</td><td>1</td><td>82.0 ± 3.8</td><td>89.0 ± 3.1</td><td>98.0 ± 1.4</td><td>10</td><td>1</td><td>84.0 ± 3.7</td><td>89.0 ± 3.1</td><td>95.0 ± 2.2</td></tr><tr><td>LLaMA-3-8B-Instruct</td><td>10</td><td>1</td><td>69.0 ± 4.6</td><td>77.0 ± 4.2</td><td>90.0 ± 3.0</td><td>10</td><td>1</td><td>68.0 ± 4.7</td><td>77.0 ± 4.2</td><td>90.0 ± 3.0</td></tr><tr><td>LLaMA-3-70B</td><td>10</td><td>1</td><td>85.0 ± 3.6</td><td>93.0 ± 2.6</td><td>98.0 ± 1.4</td><td>10</td><td>1</td><td>82.0 ± 3.8</td><td>88.0 ± 3.3</td><td>95.0 ± 2.2</td></tr><tr><td>LLaMA-3-70B-Instruct</td><td>10</td><td>1</td><td>79.0 ± 4.1</td><td>89.0 ± 3.1</td><td>99.0 ± 1.0</td><td>10</td><td>1</td><td>79.0 ± 4.1</td><td>87.0 ± 3.4</td><td>96.0 ± 2.0</td></tr><tr><td rowspan="4">QA</td><td>LLaMA-2-70B-Chat</td><td>10</td><td>1</td><td>Failed</td><td>-</td><td>-</td><td>10</td><td>1</td><td>Failed</td><td>-</td><td>-</td></tr><tr><td>LLaMA-3-70B-Instruct</td><td>10</td><td>1</td><td>31.0 ± 4.6</td><td>-</td><td>-</td><td>10</td><td>1</td><td>22.0 ± 4.1</td><td>-</td><td>-</td></tr><tr><td>GPT-3.5-Turbo</td><td>10</td><td>1</td><td>69.0 ± 4.6</td><td>-</td><td>-</td><td>10</td><td>1</td><td>47.0 ± 5.0</td><td>-</td><td>-</td></tr><tr><td>GPT-4-Turbo</td><td>10</td><td>1</td><td>34.0 ± 4.7</td><td>-</td><td>-</td><td>10</td><td>1</td><td>62.0 ± 4.9</td><td>-</td><td>-</td></tr><tr><td rowspan="2">Other Baseline</td><td>GAN-BERT</td><td>20</td><td>80</td><td>96.0</td><td>-</td><td>-</td><td>20</td><td>80</td><td>40.0</td><td>-</td><td>-</td></tr><tr><td>BertAA</td><td>62</td><td>80</td><td>93.0</td><td>-</td><td>-</td><td>10</td><td>80</td><td>65.0</td><td>-</td><td>-</td></tr></table>
208
+
209
+ Prompt 1: Here is the text from the same author:
210
+ Prompt 2: Analyze the writing styles of the input texts, disregarding the differences in topic and content.
211
+ Here is the text from the same author:
212
+ Prompt 3: Focus on grammatical styles indicative of authorship. Here is the text from the same author:
213
+ Prompt 4: Analyze the writing styles of the input texts, disregarding the differences in topic and content.
214
+ Reasoning based on linguistic features such as phrasal verbs, modal verbs, punctuation, rare words, affixes, quantities, humor, sarcasm, typographical errors, and misspellings. Here is the text from the same author:
215
+
216
+ Table 1: Author attribution results on the IMDB62 and Blog datasets. Prompt construction for the QA method is consistent with Huang et al. (2024).
217
+
218
+ <table><tr><td>#</td><td>Prompting</td><td>Top 1 Accuracy</td><td>Top 2 Accuracy</td><td>Top 5 Accuracy</td></tr><tr><td>1</td><td>&lt;Example Text&gt; + &lt;Query Text&gt;</td><td>70.0 ± 4.6</td><td>81.0 ± 3.9</td><td>92.0 ± 2.7</td></tr><tr><td>2</td><td>&lt;Example Text&gt; + &lt;Prompt 1&gt; + &lt;Query Text&gt;</td><td>85.0 ± 3.6</td><td>92.0 ± 2.7</td><td>99.0 ± 1.0</td></tr><tr><td>3</td><td>&lt;Example Text&gt; + &lt;Prompt 2&gt; + &lt;Query Text&gt;</td><td>83.0 ± 3.8</td><td>87.0 ± 3.4</td><td>100.0 ± 0.0</td></tr><tr><td>4</td><td>&lt;Example Text&gt; + &lt;Prompt 3&gt; + &lt;Query Text&gt;</td><td>86.0 ± 3.5</td><td>90.0 ± 3.0</td><td>100.0 ± 0.0</td></tr><tr><td>5</td><td>&lt;Example Text&gt; + &lt;Prompt 4&gt; + &lt;Query Text&gt;</td><td>87.0 ± 3.4</td><td>90.0 ± 3.0</td><td>99.0 ± 1.0</td></tr></table>
219
+
220
+ Table 2: Author attribution performance vs. different prompting choices on IMDB62 dataset.
221
+
222
+ <table><tr><td>Gender</td><td>Top 1 Acc.</td><td>Top 2 Acc.</td><td>Top 5 Acc.</td></tr><tr><td>Both</td><td>84.0 ± 1.6</td><td>90.8 ± 1.3</td><td>95.8 ± 1.0</td></tr><tr><td>Male</td><td>81.4 ± 2.5</td><td>88.6 ± 2.1</td><td>95.4 ± 1.4</td></tr><tr><td>Female</td><td>86.3 ± 2.1</td><td>92.8 ± 1.6</td><td>96.2 ± 1.2</td></tr></table>
223
+
224
+ (Wei et al., 2022) that have demonstrated that prompting is beneficial for unlocking the full potential of LLMs.
225
+
226
+ - There is no statistically significant evidence to suggest that specific prompt designs impact performance significantly (#2 vs. #3 vs. #4 vs. #5). The results show very close performance metrics across different prompt constructions.
227
+
228
+ Discussions Prompt sensitivity (Sclar et al., 2023) is a widely acknowledged property of the generation process of LLMs. This has also motivated a trend of research on prompt engineering
229
+
230
+ Table 3: Gender bias in author attribution performance.
231
+
232
+ <table><tr><td>Gender</td><td>Top 1 Acc.</td><td>Top 2 Acc.</td><td>Top 5 Acc.</td></tr><tr><td>Male</td><td>77.0 ± 4.2</td><td>82.0 ± 3.8</td><td>92.0 ± 2.7</td></tr><tr><td>Female</td><td>89.0 ± 3.1</td><td>91.0 ± 2.9</td><td>95.0 ± 2.2</td></tr></table>
233
+
234
+ Table 4: Author attribution performance in each gender subgroup.
235
+
236
+ (Zhang et al., 2023b; Guo et al., 2024), as different prompts can lead to completely different performance. However, our method appears to be relatively insensitive to the choice of prompt, which makes it more robust, maintaining high performance and stability across various settings.
237
+
238
+ # 4.4 Bias Analysis
239
+
240
+ An algorithm trained on an entire dataset may exhibit different accuracy levels across different subgroups during testing (Chouldechova and G'Sell, 2017; Pastor et al., 2021). This section discusses such bias issues and measures how the algorithm's accuracy varies for different subgroups.
241
+
242
+ <table><tr><td>Interval</td><td>Top 1 Acc.</td><td>Top 2 Acc.</td><td>Top 5 Acc.</td></tr><tr><td>[1 - 2]</td><td>82.0 ± 3.8</td><td>89.0 ± 3.1</td><td>96.0 ± 2.0</td></tr><tr><td>[3 - 4]</td><td>87.0 ± 3.4</td><td>94.0 ± 2.4</td><td>99.0 ± 1.0</td></tr><tr><td>[5 - 6]</td><td>90.0 ± 3.0</td><td>96.0 ± 2.0</td><td>100.0 ± 0.0</td></tr><tr><td>[7 - 8]</td><td>88.0 ± 3.3</td><td>92.0 ± 2.7</td><td>97.0 ± 1.7</td></tr><tr><td>[9 - 10]</td><td>89.0 ± 3.1</td><td>93.0 ± 2.6</td><td>96.0 ± 2.0</td></tr></table>
243
+
244
+ (a) performance in each rating subgroup.
245
+
246
+ <table><tr><td>Age</td><td>Top 1 Acc.</td><td>Top 2 Acc.</td><td>Top 5 Acc.</td></tr><tr><td>[13 - 17]</td><td>90.0 ± 3.0</td><td>94.0 ± 2.4</td><td>99.0 ± 1.0</td></tr><tr><td>[18 - 34]</td><td>84.0 ± 3.7</td><td>89.0 ± 3.1</td><td>95.0 ± 2.2</td></tr><tr><td>[35 - 44]</td><td>80.0 ± 4.0</td><td>87.0 ± 3.4</td><td>94.0 ± 2.4</td></tr><tr><td>[45 - 48]</td><td>81.0 ± 3.9</td><td>85.0 ± 3.6</td><td>95.0 ± 2.2</td></tr></table>
247
+
248
+ (b) performance in each age subgroup.
249
+
250
+ Table 5: Author attribution performance in each rating subgroup and age subgroup.
251
+
252
+ <table><tr><td>#</td><td>Foundation Models</td><td>Deployment Resource</td><td>Method</td><td>Inference Time (s)</td><td>Accuracy</td></tr><tr><td>1</td><td>LLama-3-70B</td><td>8 × A6000 (VLLM)</td><td>Logprob</td><td>462.1</td><td>85.0 ± 3.6</td></tr><tr><td>2</td><td>GPT-4-Turbo</td><td>OpenAI</td><td>QA</td><td>663.1</td><td>34.0 ± 4.7</td></tr><tr><td>3</td><td>LLama-3-70B-Instruct</td><td>Azure</td><td>QA</td><td>2065.6</td><td>31.0 ± 4.6</td></tr></table>
253
+
254
+ Table 6: Efficiency analysis between prompt-based method and logprob-based method on Blog dataset.
255
+
256
+ Influence of Gender We conducted 500 tests, consisting of 237 tests on blogs written by male authors and 263 tests on blogs written by female authors, and report their authorship attribution accuracy separately in Table 3. The results indicate that authorship attribution for blogs written by female authors exhibits higher accuracy. This suggests that female-authored blogs might contain more distinct personal styles, making it easier to infer the author.
257
+
258
+ # 4.5 Subgroup Analysis
259
+
260
+ When considering authorship attribution restricted to specific subgroups, the task can either become simpler or more difficult. Certain subgroups may express personal styles more distinctly, making authorship attribution easier, while others may be more homogeneous, making it more challenging. Here, we consider three subgroup factors: gender, age, and rating, to analyze the performance under each group.
261
+
262
+ Subgroup by Gender As shown in Table 4, we evaluated the performance of authorship attribution within different gender subgroups in the Blog dataset. We observed that authorship attribution performed better within the female subgroup, consistent with findings in Section 4.4, suggesting female-authored blogs possess more distinctive personal styles.
263
+
264
+ Subgroup by Rating Table 5 (a) shows the performance of authorship attribution across different rating ranges in the IMDb review dataset. Overall, we can see that rating does influence performance, with reviews in the $[5 - 6]$ rating range being easier to attribute. Despite such differences, our method
265
+
266
+ sistently obtains good performance across all subgroups.
267
+
268
+ Subgroup by Age Table 5 (b) shows the performance of authorship attribution across different age ranges of bloggers in the Blog dataset. We observed that age significantly influences performance. The youngest age group [13 - 17] exhibited the highest top-1 accuracy at $90\%$ , while accuracy decreased with increasing author age. This suggests that younger authors tend to have more distinct opinions and identifiable writing styles. Despite these performance differences, our method maintained relatively high overall performance, with the lowest accuracy still surpassing that of GPT-4-Turbo with the QA method.
269
+
270
+ # 4.6 Efficiency Analysis
271
+
272
+ Table 6 shows the efficiency comparison of different methods on the IMDb dataset. Our Logprob method operates with a notably lower runtime compared to QA methods. This is primarily because the Logprob method requires only a single forward pass through the LLM for each author to estimate the log probabilities. In contrast, QA methods generally need multiple token-generation steps to form a response, which increases computation time substantially. At the same time, our method achieves an accuracy of up to $85\%$ , surpassing the QA method based on GPT-4-Turbo in both efficiency and accuracy.
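+ To make the cost gap concrete: with the logprob approach, each candidate author costs a single forward pass that scores the query text, whereas a QA prompt must decode an answer token by token. The sketch below (Hugging Face transformers; the model name, prompt concatenation and token-boundary handling are illustrative simplifications, not the paper's exact setup) ranks candidate authors by the log probability assigned to the query text conditioned on each author's sample.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "meta-llama/Meta-Llama-3-8B"  # illustrative; the paper uses larger Llama-3 variants
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ tok = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(device).eval()
+
+ @torch.no_grad()
+ def query_logprob(author_sample: str, query_text: str) -> float:
+     """Sum of token log-probabilities of query_text conditioned on the author's sample (one forward pass)."""
+     prefix_len = tok(author_sample, return_tensors="pt").input_ids.shape[1]
+     ids = tok(author_sample + query_text, return_tensors="pt").input_ids.to(device)
+     logprobs = torch.log_softmax(model(ids).logits.float()[0, :-1], dim=-1)  # position t predicts token t+1
+     targets = ids[0, 1:]
+     start = prefix_len - 1  # score only the query tokens (boundary handled approximately)
+     return logprobs[start:].gather(1, targets[start:, None]).sum().item()
+
+ def attribute(query_text, candidate_samples):
+     """candidate_samples: dict author -> sample text; returns authors ranked by log probability."""
+     scores = {a: query_logprob(s, query_text) for a, s in candidate_samples.items()}
+     return sorted(scores, key=scores.get, reverse=True)
+ ```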
273
+
274
+ In summary, our method proves to be effective and efficient in performing authorship attribution across various datasets and setups.
275
+
276
+ # 5 Conclusion
277
+
278
+ In this paper, we study the problem of authorship attribution. We demonstrate the effectiveness of utilizing pre-trained Large Language Models (LLMs) for one-shot author attribution. Our Bayesian approach leverages the probabilistic nature of language models like Llama-3 to infer authorship. Our method does not require fine-tuning and therefore reduces computational overhead and data requirements. Our experiments validate that our method is more effective and efficient compared to existing techniques.
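+ For reference, the decision rule implied by this description can be written as follows, assuming a uniform prior over the closed candidate set $\mathcal{A}$ (one natural choice, not a detail stated here) and approximating the likelihood of the query document $d$ by the LLM probability of $d$ conditioned on a sample text $s_a$ of candidate author $a$:
+
+ $$\hat{a} = \arg\max_{a \in \mathcal{A}} P(a \mid d) = \arg\max_{a \in \mathcal{A}} P(d \mid a)\,P(a) \approx \arg\max_{a \in \mathcal{A}} \sum_{t=1}^{|d|} \log P_{\mathrm{LLM}}\left(d_t \mid d_{<t},\, s_a\right).$$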
279
+
280
+ # 6 Limitations
281
+
282
+ The main limitations arise from our dependence on LLMs.
283
+
284
+ Our method relies heavily on the capabilities of LLMs, and the performance of our approach is highly affected by the size and training objectives of the LLMs. As shown in Table 1, models that are only pre-trained, rather than fine-tuned for dialogue or code tasks, perform better.
285
+
286
+ While larger models generally perform better, they also entail higher costs, posing scalability and accessibility challenges for broader applications.
287
+
288
+ Another limitation stems from the training data of LLMs. If the training data lacks diversity or fails to include certain writing styles, the model may not fully capture the intricacies of an author's style, potentially leading to misclassifications. This limitation underscores the importance of using diverse and comprehensive training datasets.
289
+
290
+ Furthermore, any biases present in the training data can also be absorbed by the model. These biases will influence the performance of our authorship attribution method.
291
+
292
+ On the broader societal level, the potential for misuse of this technology is a significant concern. The challenge of regulating and overseeing the use of such powerful tools is still not fully addressed.
293
+
294
+ Lastly, while our approach avoids the need for extensive retraining or fine-tuning, which is an advantage in many cases, it also means that our method might not adapt as well to scenarios where abundant training data and computation are available, which would justify more complex and computationally intensive methods.
295
+
296
+ # Acknowledgments
297
+
298
+ ZH, TZ and HH were partially supported by NSF IIS 2347592, 2347604, 2348159, 2348169, DBI 2405416, CCF 2348306, CNS 2347617.
299
+
300
+
301
+
302
+ # References
303
+
304
+ Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
305
+ Hayri Volkan Agun and Ozgur Yilmazel. 2017. Document embedding approach for efficient authorship attribution. In 2017 2nd International Conference on Knowledge Engineering and Applications (ICKEA), pages 194-198. IEEE.
306
+ Salha M Alzahrani, Naomie Salim, and Ajith Abraham. 2011. Understanding plagiarism linguistic patterns, textual features, and detection methods. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(2):133-149.
307
+ Douglas Bagnall. 2015. Author identification using multi-headed recurrent neural networks. arXiv preprint arXiv:1506.04891.
308
+ Ilker Nadi Bozkurt, Ozgur Baghoglu, and Erkan Uyar. 2007. Authorship attribution. In 2007 22nd international symposium on computer and information sciences, pages 1-5. IEEE.
309
+ Leo Breiman. 2001. Random forests. Machine learning, 45:5-32.
310
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
311
+ Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712.
312
+ Alexandra Chouldechova and Max G'Sell. 2017. Fairer and more accurate, but for whom? 4th Workshop on Fairness, Accountability, and Transparency in Machine Learning.
313
+ Mael Fabien, Esau Villatoro-Tello, Petr Moticek, and Shantipriya Parida. 2020. BertAA: BERT fine-tuning for authorship attribution. In Proceedings of the 17th International Conference on Natural Language Processing (ICON), pages 127-137, Indian Institute of Technology Patna, Patna, India. NLP Association of India (NLPAI).
314
+ Zhenhao Ge, Yufang Sun, and Mark Smith. 2016. Authorship attribution using a neural network language model. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30.
315
+
316
+ Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, and Yujiu Yang. 2024. Connecting large language models with evolutionary algorithms yields powerful prompt optimizers. In The Twelfth International Conference on Learning Representations.
317
+ David I Holmes. 1994. Authorship attribution. Computers and the Humanities, 28:87-106.
318
+ Baixiang Huang, Canyu Chen, and Kai Shu. 2024. Can large language models identify authorship? arXiv preprint arXiv:2403.08213.
319
+ Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1-38.
320
+ Patrick Juola et al. 2008. Authorship attribution. Foundations and Trends® in Information Retrieval, 1(3):233-334.
321
+ Michal Kosinski. 2023. Theory of mind might have spontaneously emerged in large language models. arXiv preprint arXiv:2302.02083.
322
+ Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.
323
+ Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International conference on machine learning, pages 1188-1196. PMLR.
324
+ Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110.
325
+ Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, and Jacob Andreas. 2023. Cognitive dissonance: Why do language model outputs disagree with internal representations of truthfulness? In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4791-4797, Singapore. Association for Computational Linguistics.
326
+ Andrew Kachites McCallum. 1999. Multi-label text classification with a mixture model trained by em. In AAAI'99 workshop on text learning.
327
+ Seif Mechti and Fahad Almansour. 2021. An orderly survey on author attribution methods: From stylistic features to machine learning models. Int. J. Adv. Res. Eng. Technol, 12:528-538.
328
+
329
+ Frederick Mosteller and David L Wallace. 1963. Inference in an authorship problem: A comparative study of discrimination methods applied to the authorship of the disputed federalist papers. Journal of the American Statistical Association, 58(302):275-309.
330
+ Eliana Pastor, Luca de Alfaro, and Elena Baralis. 2021. Identifying biased subgroups in ranking and classification. Measures and Best Practices for Responsible AI at KDD 2021.
331
+ Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. 2023. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950.
332
+ Sebastian Ruder, Parsa Ghaffari, and John G Breslin. 2016. Character-level and multi-channel convolutional neural networks for large-scale authorship attribution. arXiv preprint arXiv:1609.06686.
333
+ Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W Pennebaker. 2006. Effects of age and gender on blogging. In AAAI spring symposium: Computational approaches to analyzing weblogs, volume 6, pages 199-205.
334
+ Melanie Sclar, Yejin Choi, Yulia Tsvetkov, and Alane Suhr. 2023. Quantifying language models' sensitivity to spurious features in prompt design or: How i learned to start worrying about prompt formatting. arXiv preprint arXiv:2310.11324.
335
+ Yanir Seroussi, Ingrid Zukerman, and Fabian Bohnert. 2014. Authorship attribution with topic models. Computational Linguistics, 40(2):269-310.
336
+ Prasha Shrestha, Sebastian Sierra, Fabio A González, Manuel Montes, Paolo Rosso, and Thamar Solorio. 2017. Convolutional neural networks for authorship attribution of short texts. In Proceedings of the 15th conference of the European chapter of the association for computational linguistics: Volume 2, short papers, pages 669-674.
337
+ Kanishka Silva, Burcu Can, Frédéric Blain, Raheem Sarwar, Laura Ugolini, and Ruslan Mitkov. 2023. Authorship attribution of late 19th century novels using gan-bert. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), pages 310-320.
338
+ Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2023. Large language models encode clinical knowledge. Nature, 620(7972):172-180.
339
+ Efstathios Stamatatos. 2009. A survey of modern authorship attribution methods. Journal of the American Society for Information Science and Technology, 60(3):538-556.
340
+
341
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
342
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.
343
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837.
344
+ Sarim Zafar, Muhammad Usman Sarwar, Saeed Salem, and Muhammad Zubair Malik. 2020. Language and obfuscation oblivious source code authorship attribution. IEEE Access, 8:197581-197596.
345
+ Biao Zhang, Barry Haddow, and Alexandra Birch. 2023a. Prompting large language model for machine translation: A case study. In International Conference on Machine Learning, pages 41092-41110. PMLR.
346
+ Richong Zhang, Zhiyuan Hu, Hongyu Guo, and Yongyi Mao. 2018. Syntax encoding with application in authorship attribution. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2742-2753, Brussels, Belgium. Association for Computational Linguistics.
347
+ Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B Hashimoto. 2024. Benchmarking large language models for news summarization. Transactions of the Association for Computational Linguistics, 12:39-57.
348
+ Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2023b. Automatic chain of thought prompting in large language models. In The Eleventh International Conference on Learning Representations.
349
+
350
+ # A Ethical Considerations
351
+
352
+ Our method using LLMs for authorship attribution brings several ethical considerations that must be addressed to ensure responsible and fair use of the technology.
353
+
354
+ Privacy and Anonymity The capacity of LLMs to attribute authorship with high accuracy can lead to ethical challenges regarding privacy and anonymity. Individuals who wish to remain anonymous or protect their identity could be compromised if authorship attribution tools are misused. Therefore, it is crucial to establish strict guidelines and ethical standards on the use of such technologies to prevent breaches of privacy.
355
+
356
+ Potential for Abuse Despite multiple beneficial applications, the misuse potential of authorship attribution tools is significant. Risks include the use of this technology to suppress free speech or to endanger personal safety by identifying individuals in contexts where anonymity is crucial for safety. Addressing these risks requires robust governance to prevent misuse and to ensure that the technology is used ethically and responsibly.
357
+
358
+ Bias Issue The performance of authorship attribution methods can vary across different demographics, leading to potential biases. It is important to continually assess and correct these biases to ensure fairness in the application of this technology.
359
+
360
+ Misclassification Issue Given the high stakes involved, especially in forensic contexts, the accuracy of authorship attribution is important. Misclassifications can have serious consequences, including wrongful accusations or legal implications. It is essential for authorship attribution methods to be reliable and for their limitations to be transparently communicated to users.
361
+
362
+ # B Broader Impact
363
+
364
+ Our study of authorship attribution using LLMs contributes to advancements in various domains:
365
+
366
+ Forensic Linguistics Our research contributes to the field of forensic linguistics by providing tools that can solve crimes involving anonymous or disputed texts. This can be particularly useful for law enforcement and legal professionals who need to gather evidence and make more informed decisions.
367
+
368
+ Intellectual Property Protection Our method can serve as a powerful tool in identifying the authors of texts, which can help protect intellectual property rights and resolve disputes in copyright.
369
+
370
+ Historical Text Attribution In literary and historical studies, determining the authorship of texts can provide insights into their origins and contexts, enhancing our understanding and interpretation.
371
+
372
+ Enhanced Content Management Media and content companies can use this technology to manage content more effectively by accurately attributing authorship to various contributors.
373
+
374
+ Educational Applications In educational settings, our method can help prevent plagiarism and promote academic integrity. It can also serve as a teaching tool to help students understand and appreciate stylistic differences between authors.
375
+
376
+
377
+
378
+ While our method holds promise across multiple applications, it is crucial to deploy it with caution. Ensuring that the technology is used responsibly and ethically will be key to maximizing its benefits while minimizing potential harm.
2024/A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:14899f9b6322b0b013574b379b7a684ab9680482a37f825300949a6b568964fe
3
+ size 352806
2024/A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Closer Look at Multidimensional Online Political Incivility/8e194152-6624-404c-bf7d-b3df6189c956_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Closer Look at Multidimensional Online Political Incivility/8e194152-6624-404c-bf7d-b3df6189c956_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Closer Look at Multidimensional Online Political Incivility/8e194152-6624-404c-bf7d-b3df6189c956_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2a00b7c436a455502fcd35a5e42c5c06c98e596e552d464e15d8680d73fedd67
3
+ size 896511
2024/A Closer Look at Multidimensional Online Political Incivility/full.md ADDED
@@ -0,0 +1,387 @@
 
 
 
 
1
+ # A Closer Look at Multidimensional Online Political Incivility
2
+
3
+ Sagi Pendzel
4
+
5
+ CS Dep., Univ. of Haifa
6
+
7
+ spendzel@campus.haifa.ac.il
8
+
9
+ Nir Lotan
10
+
11
+ IS Dep., Univ. of Haifa
12
+
13
+ nlotan02@campus.haifa.ac.il
14
+
15
+ Alon Zoizner
16
+
17
+ Dep. of Communication, Univ. of Haifa
18
+
19
+ zoizner@com.haifa.ac.il
20
+
21
+ Einat Minkov
22
+
23
+ IS Dep., Univ. of Haifa
24
+
25
+ einatm@is.haifa.ac.il
26
+
27
+ # Abstract
28
+
29
+ Toxic online political discourse has become prevalent, where scholars debate about its impact on democratic processes. This work presents a large-scale study of political incivility on Twitter. In line with theories of political communication, we differentiate between harsh impolite style and intolerant substance. We present a dataset of 13K political tweets in the U.S. context, which we collected and labeled by this multidimensional distinction using crowd sourcing. The evaluation of state-of-the-art classifiers illustrates the challenges involved in political incivility detection, which often requires high-level semantic and social understanding. Nevertheless, performing incivility detection at scale, we are able to characterise its distribution across individual users and geopolitical regions. Our findings align with and extend existing theories of political communication. In particular, we find that roughly $80\%$ of the uncivil tweets are authored by $20\%$ of the users, where users who are politically engaged are more inclined to use uncivil language. We further find that political incivility exhibits network homophily, and that incivility is more prominent in highly competitive geopolitical regions. Our results apply to both uncivil style and substance. Warning: The paper contains examples that readers might find offensive.
30
+
31
+ # 1 Introduction
32
+
33
+ An active discourse between political groups and citizens is considered a fundamental condition for a healthy democracy (Gutmann and Thompson, 2009). The recent rise of social media has been argued, however, to intensify disrespectful and hostile online political discourse (Coe et al., 2014; Frimer et al., 2023). The term political incivility is commonly used in the political communication literature that studies the prominence, determinants and consequences of using harsh style and discriminatory discourse in online platforms. According to researchers, negative consequences of this trend are that it fosters polarization between rival political groups, and may disengage some citizens from being politically involved (Muddiman et al., 2020; Skytte, 2021; Van't Riet and Van Stekelenburg, 2022). Conversely, others argue that incivility is not inherently negative, considering certain flavors of incivility as a legitimate rhetoric in heated political discussions (Rossini, 2022). Aiming to study the consequences and contextual factors that underlie this general phenomenon, several previous research works have attempted to empirically detect, quantify and characterise political incivility on discussion groups and social media platforms (ElSherief et al., 2018; Davidson et al., 2020; Theocharis et al., 2020; Bianchi et al., 2022; Frimer et al., 2023). In this research, we take a closer look at the challenges involved in the automatic detection of political incivility online, considering it as a multidimensional concept. We then present the results of a large-scale study, where we examine incivility as detected within a very large sample of political tweets posted on the social media platform of Twitter. In particular, we gauge and quantify user-level and geopolitical factors that correlate with political incivility online.
34
+
35
+
36
+
37
+ We operationalize political incivility as a two-dimensional concept in accordance with recent theories of political communication (Muddiman, 2017; Rossini, 2022). The first dimension is personal-level incivility (impoliteness). This flavor of incivility pertains to a harsh tone that violates interpersonal norms, including foul language, name-calling, vulgarity, and aspersion towards other discussion partners or their ideas (e.g., "are you really so stupid that you would defund this program?"). The second dimension of public-level incivility (intolerance) rather pertains to exclusionary speech,
38
+
39
+ IMPOLITE: "All hell has broken loose under the leadership of the senile old man. I don't believe a damn word from this dumb son of a bitches"; "That's what they are protesting, you rank imbecile. People like you need a damn good kicking."
40
+ INTOLERANT: "Hillary and the dems ARE enemies, foreign AND domestic"; "If you agree with democrats in congress, you are an anti-American commie"
41
+ NEUTRAL: "How long do Republicans believe you can keep pushing this line? You never intended to secure the border"; "There are 400,000,000 guns in the United States, you're going to have to stop the criminals not the guns"
42
+
43
+ Table 1: Example tweets per class. These examples were presented to the annotators as part of their training.
44
+
45
+ silencing or denying the rights of a social or political group (e.g., “Democrats are openly trying to see to the destruction of America”). That is, interpersonal incivility refers to tone, whereas intolerance is defined in terms of substance. Table 1 includes example tweets of each category. As illustrated, the impolite examples are characterized by a harsh tone, vulgar language and profanity, which may be directed at the user participating in the specific Twitter discussion (second example). However, the impolite tweets do not call for silencing an entire community or group, or denying their rights. In contrast, the example tweets of the intolerant category explicitly accuse an entire political group (in this case, Democrats) of being an enemy of the country. Regarding the neutral category, while the first example in the table criticizes Republicans, it does not call for limiting their rights or accuse their entire group of treason—thus, the tweet is not considered intolerant. Table 5 includes additional labeled examples, including a tweet that is both intolerant, as it denounces the elected U.S. president along with his voters, and impolite, given its use of vulgar language. While we follow this distinction in the paper, we acknowledge that both types of incivility may be offensive. A more detailed discussion concerning the terminology of these concepts is included in Section 2.
46
+
47
+ There are several motivations for identifying political incivility at this multidimensional resolution. In general, scholars of political communication have shown that the exposure to either impolite style or intolerant content online leads to increased polarization and intergroup tensions (Muddiman et al., 2020; Skytte, 2021). Yet, recent studies argue that heated political talk should not be dismissed due to interpersonal incivility, whereas expressions of intolerance on digital platforms have a more detrimental effect on democratic processes (Papacharissi, 2004; Rossini, 2022). It is therefore desired to distinguish between the different dimensions of political incivility in studying this phenomenon. In this work, we further show that interpersonal incivility and intolerance differ in their language characteristics. While impolite speech often contains unequivocally negative lexical expressions, the interpretation of intolerance is generally a more challenging task, in that it requires contextual, political, and social understanding.
48
+
49
+
50
+
51
+ A main contribution of our work is the construction of a large dataset of 13K political tweets. We carefully retrieved and sampled these tweets using diverse strategies, aiming to capture both incivility types, while avoiding lexical and topical biases (Wiegand et al., 2019). The dataset was labeled for multidimensional incivility via crowd sourcing, with the annotation process supervised by a domain expert. Using our dataset, we adapt and evaluate a variety of state-of-the-art language models on the task of multi-label incivility detection. Our results indicate that political incivility detection is a challenging task, where we obtain best F1 scores of 0.70 and 0.59 on impoliteness and intolerance detection, respectively.
52
+
53
+ In the second part of this work, we report the results of a large-scale study, in which we performed multidimensional incivility detection and examined the prevalence of incivility among the political posts by more than 200K users. We find that both types of political incivility are prevalent on social media, identifying $17.6\%$ of the political tweets as impolite, $13.3\%$ as intolerant and $2.5\%$ as both, with an overall political incivility rate of $28.4\%$ . A user-level analysis shows that a minority of the users, who are politically engaged (as measured by the proportion of their tweets that concern political topics), are more inclined to use uncivil language, generating the majority of the uncivil tweets. Our analysis further establishes that social patterns of political incivility involve network homophily. Considering the large scope of our study, we were also able to assess differences in the prevalence of incivility across geopolitical regions, specifically, states. We find that state-level incivility on social media is significantly correlated with partisan competition per state, observing higher incivility levels in 'battleground states', where the two camps are on par. We interpret our findings in light of existing theories of political communication, and discuss the challenges and potential of political incivility detection for future research.
54
+
55
+ # 2 Related work
56
+
57
+ As noted in a recent survey, the concepts of uncivil, offensive, and toxic speech often overlap, where incivility is most frequently used by social scientists (Pachinger et al., 2023). In the political communication literature, some researchers frame incivility in terms of impolite speech (Theocharis et al., 2016; Seely, 2018), whereas others define it as either impoliteness, intolerance or hate speech (Davidson et al., 2020; Theocharis et al., 2020). Accordingly, most relevant empirical studies address incivility detection as a binary classification problem, differentiating between neutral and uncivil discourse (Davidson et al., 2020; Theocharis et al., 2020; Rheault et al., 2019). Following recent theories of political communication (Rossini, 2022), we consider political incivility as a multidimensional concept, defining uncivil language as either impolite or intolerant, or both. In a closely related work, Bianchi et al. (2022) introduced a dataset of tweets annotated with fine-grained labels, distinguishing between our high-level categories of rude or offensive tone (profanities, insults, outrage, or character assassination) and intolerant expressions (discrimination, hostility). Overall, they report F1 performance of roughly 0.7 on all categories. While offering valuable insights into multidimensional incivility detection, their dataset is focused on the topic of immigration, which receives limited attention in online political discourse (Barbera et al., 2019; Wojcieszak et al., 2022). Crucially, we refrained from sampling tweets based on topical keywords, while targeting political tweets by U.S. residents. Consequently, our dataset captures incivility mainly in the U.S. partisan context, which is prevalent on Twitter, across various topics (only $1.8\%$ of the sampled tweets mention immigration). Aiming at lexical as well as topical diversity, we also minimized the use of pre-trained tools as means for sampling texts that were likely to be toxic. Possibly for these reasons, we observe substantially lower performance on intolerance detection in comparison to Bianchi et al. (F1 of $\sim 0.6$ vs. $\sim 0.7$ ).<sup>2</sup> Our analysis indicates that in the absence of clear lexical cues, contextual social understanding is required in order to improve on the task of intolerance detection. In this respect, our work relates to a recent line of works that concern the detection of implicit hate speech, where the underlying toxic intention is encoded using indirect semantics rather than by foul language (ElSherief et al., 2021; Hartvigsen et al., 2022). Finally, this work makes the contribution of applying multidimensional political incivility detection at large scale, studying its prevalence while considering various contextual factors, including user-level characteristics and geopolitical conditions.
58
+
59
+
60
+
61
+ # 3 MUPID: a Multidimensional Political Incivility Dataset
62
+
63
+ # 3.1 Data sampling strategy
64
+
65
+ Even though political incivility is not rare, it is desired to focus the costly annotation effort on a high yield sample. We exploit multiple network-based and other cues to obtain a diverse and representative sample of the target classes, while avoiding topical and lexical biases (Wiegand et al., 2019).
66
+
67
+ As a first step, we collected tweets posted by users who follow multiple disputable political accounts, assuming that such users are more inclined to use uncivil language in political contexts (Gervais, 2014). Concretely, we referred to lists of accounts that are known to distribute fake news (Grinberg et al., 2019), news accounts that are considered politically biased to a large extent (Wojcieszak et al., 2023), and the accounts of members of the U.S. Congress who are considered as ideologically extreme (Lewis et al., 2019). We selected the top accounts per category, balanced over conservative and liberal orientation, based on bias scores specified by those sources. We then identified users who followed two or more biased accounts, maintaining a balance between users of conservative and liberal orientation, and retrieved the (200) latest tweets posted by them as of December 2021. This yielded 885K tweets authored by 15.8K users.
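+ A minimal sketch of this user-selection criterion (follower lists per seed account are assumed to be available, e.g., from the Twitter API; names are illustrative):
+
+ ```python
+ from collections import Counter
+
+ def select_candidate_users(followers_of_seed, min_seeds=2):
+     """followers_of_seed: dict mapping each politically biased seed account to its follower ids.
+     Returns the users who follow at least `min_seeds` of the seed accounts."""
+     counts = Counter(u for followers in followers_of_seed.values() for u in set(followers))
+     return {u for u, c in counts.items() if c >= min_seeds}
+ ```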
68
+
69
+ Identifying political tweets. We trained a dedicated classifier to identify tweets that discuss political topics, exploiting existing resources for this purpose. Specifically, we sampled 12.5K tweets concerning topics that are discussed frequently by either Republicans (e.g., the U.S. federal budget), Democrats (e.g., marriage equality), or both (e.g., the presidential campaign) (Barbera et al., 2015).
70
+
71
+ An additional 3.5K political posts were extracted from the social media accounts of U.S. politicians.<sup>5</sup> As counter examples, we considered random tweets by U.S. users,<sup>6</sup> constructing a balanced dataset of 32K examples overall. We finetuned a 'bert-base-uncased' model on this dataset using its public implementation and standard training practices, minimizing the cross-entropy loss function. In applying the finetuned classifier, we set a high threshold (0.96) over its confidence scores, aiming to achieve high precision. Overall, 82K (9.3%) of our sampled tweets were predicted to be political. A manual examination of 300 random tweets by a graduate student of political communication indicated a classification precision of 0.91.
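+ A minimal sketch of this high-precision filtering step (the checkpoint path and label name below are hypothetical placeholders; only the 0.96 threshold comes from the text):
+
+ ```python
+ from transformers import pipeline
+
+ clf = pipeline("text-classification", model="path/to/finetuned-political-bert")  # hypothetical checkpoint
+
+ def keep_political(texts, threshold=0.96, positive_label="POLITICAL"):
+     """Keep only tweets predicted as political with high confidence, trading recall for precision."""
+     preds = clf(list(texts), truncation=True)
+     return [t for t, p in zip(texts, preds) if p["label"] == positive_label and p["score"] >= threshold]
+ ```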
72
+
73
+ Sampling tweets for annotation. In order to focus the annotation effort on tweets that demonstrate incivility, we applied several additional sampling heuristics. Following insights by which hateful user accounts tend to be new and more active than average (Ribeiro et al., 2018), we sampled 2K tweets by accounts which were created up to two months prior to the tweet retrieval date, or posted more than one tweet daily on average since their creation date. Similar to previous works (Theocharis et al., 2020; Hede et al., 2021; Bianchi et al., 2022), we utilized the pretrained Jigsaw Perspective tool<sup>7</sup> to identify toxic tweets, sampling another 2K tweets that received high scores on the categories of 'abusive language and slurs', 'inflammatory comments' and 'attacks on the author'. Finally, we sampled 4K tweets uniformly at random. Throughout the annotation process, we tracked the yield of tweets of each class. Among the 8K selected tweets, 2.3K $(28.9\%)$ were labeled as impolite, and 0.8K $(9.8\%)$ as intolerant. Applying an active labeling paradigm (Tong and Koller, 2001), we trained a classifier of intolerance detection using the examples labeled thus far to identify additional tweets that were likely to be intolerant within our large sampled pool of political tweets. In several subsequent annotation and learning batches, we selected 5.2K additional tweets for manual annotation in this fashion. The ratio of impolite tweets remained similar to the original sample $(22.5\%)$, yet the ratio of intolerant tweets tripled $(29.5\%)$. Next, we describe the annotation procedure of the sampled examples. We note that in the resulting dataset, for each example, we maintain its sampling method, where we exclude all of the examples obtained via active sampling from the test set in order to avoid evaluation bias.
74
+
75
+
76
+
77
+ # 3.2 Annotation procedure
78
+
79
+ The task of assessing multidimensional political incivility involves fine semantics and critical thinking. Since labeling examples by experts is costly and limited in capacity, we turned to crowd sourcing, using the platform of Amazon Mechanical Turk. $^{8}$ In order to elicit high-quality labels, we required the workers to be highly qualified, $^{9}$ as well as residents of the U.S. who are presumably fluent in English and familiar with U.S. politics. Candidate workers were required to undergo dedicated training and quality testing. Table 1 includes examples of each class which were presented to the workers. These examples were accompanied by a code book containing explanations regarding the guidelines for annotating the tweets (Appendix A). In the qualification phase, the workers labeled six other tweets. Workers who labeled a majority of the tweets correctly were qualified to work on our task and received detailed feedback on their mistakes. During annotation, we included control questions (2 out of 15 tweets in each micro-task) which we expected the workers to do well on. We rejected the annotations by workers who failed to label the control tweets, and banned them from further working on our task. Finally, we paid the workers an hourly fee of 17.5 USD, which exceeds the U.S. minimum wage standards, as fair pay positively affects annotation quality (Ye et al., 2017).
80
+
81
+ Given each tweet, several independent workers were asked to assess whether it was impolite, intolerant, neither, or both. Each tweet was labeled by 3-5 annotators, where we discarded examples for which a label could not be determined based on majority voting.[10] While we take a prescriptive approach, we acknowledge that human judgement on this task may be subjective, being affected by one's cultural background, beliefs, and political stance (Rottger et al., 2022). An assessment of inter-annotator agreement gives an indication of
82
+
83
+ <table><tr><td>Dataset</td><td>Size</td><td>Uncivil</td><td>Impol./Intol./Both</td></tr><tr><td>MUPID</td><td>13.1K</td><td>42.3%</td><td>24.6 / 15.1 / 2.6%</td></tr><tr><td>Davidson et al.</td><td>5.0K</td><td>10.3%</td><td>-</td></tr><tr><td>Rheault et al.</td><td>10.0K</td><td>12.4%</td><td>-</td></tr><tr><td>Theocharis et al.</td><td>4.0K</td><td>26.0%</td><td>-</td></tr></table>
84
+
85
+ Table 2: Dataset statistics: MUPID vs. other datasets.
86
+
87
+ the semantic complexity and subjectivity of the target concepts. Comparing the labels assigned to every tweet by random worker pairs resulted in Fleiss' kappa scores of 0.63 and 0.54 on the categories of impoliteness and intolerance, indicating 'substantial' and 'moderate' agreement, respectively. This suggests that intolerance may be more subjective and subtle compared to impoliteness. We further compared the majority labels against the judgement of a scholar of political communication, assigned to 300 random labeled tweets. Fleiss' kappa scores in this case indicated 'substantial' agreement, measuring 0.57 and 0.61 on impoliteness and intolerance, respectively. For a subset of this sample, on which the workers tended to agree (majority of $70\%$ or more), the agreement scores between the crowd-sourced labels and the expert were substantially higher on the impoliteness compared to the intolerance category, measuring 0.79 vs. 0.69, respectively. Again, this suggests that the concept of political intolerance is more semantically subtle.
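+ As a sketch of how such statistics can be derived from the raw worker judgements (statsmodels assumed; this computes the standard Fleiss' kappa over the full rating table, a simplification of the random-pair comparison described above):
+
+ ```python
+ import numpy as np
+ from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa
+
+ def agreement_and_majority(ratings):
+     """ratings: (n_tweets, n_raters) array of 0/1 judgements for one dimension (e.g., impolite or not).
+     Returns Fleiss' kappa and per-tweet majority labels, with -1 marking ties to be discarded."""
+     ratings = np.asarray(ratings)
+     table, _ = aggregate_raters(ratings)  # per-tweet counts of each category
+     kappa = fleiss_kappa(table)
+     votes = ratings.mean(axis=1)
+     majority = np.where(votes > 0.5, 1, np.where(votes < 0.5, 0, -1))
+     return kappa, majority
+ ```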
88
+
89
+ # 3.3 Dataset statistics
90
+
91
+ The resulting dataset includes 13.1K labeled tweets. As detailed in Table 2, the dataset includes a substantial number of tweets labeled as impolite (3.6K), and intolerant (2.3K), where a large proportion of the examples in the dataset (42.3%) correspond to political incivility (with 2.6% of the examples labeled as both intolerant and impolite). As noted in the table, other available datasets of political incivility use binary annotations, and include a lower proportion of examples of incivility.
92
+
93
+ # 4 Multidimensional incivility detection
94
+
95
+ Next, we evaluate the extent to which neural models can detect political incivility as perceived by humans. We perform multi-label classification, detecting impoliteness and intolerance as orthogonal dimensions, as well as experiment with binary prediction of political incivility.
96
+
97
+ # 4.1 Experimental setup
98
+
99
+ We finetuned several popular transformer-based pre-trained language models, including BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and DeBERTa (He et al., 2021) using our dataset. We report our results using the base configurations of these models, as the larger architectures yielded minor performance gains. In addition, we experiment with task-specialized variants of BERT: HateBERT, a model that has been re-trained using a large-scale corpus of offensive, abusive, and hateful Reddit comments (Caselli et al., 2021); and HateXplain, a model that has been finetuned to classify hateful and offensive Twitter and Gab posts (Mathew et al., 2021). All models were applied using their public implementation. $^{11}$ In finetuning, we split our dataset into fixed stratified train (70%), validation (10%) and test (20%) sets, optimizing the parameters of each model on the validation examples. Considering the class imbalance, we found it beneficial to employ a class-weighted cross-entropy loss function (Henning et al., 2023).
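+ A minimal sketch of the class-weighting idea with the Hugging Face Trainer (not the authors' code; shown for a single categorical head, whereas the multi-label setup can use one such head per dimension or a weighted binary loss):
+
+ ```python
+ import torch
+ from torch import nn
+ from transformers import Trainer
+
+ class WeightedLossTrainer(Trainer):
+     """Trainer variant that applies class weights in the cross-entropy loss to counter class imbalance."""
+     def __init__(self, class_weights, **kwargs):
+         super().__init__(**kwargs)
+         self.class_weights = torch.tensor(class_weights, dtype=torch.float)
+
+     def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
+         labels = inputs.pop("labels")
+         outputs = model(**inputs)
+         logits = outputs.logits
+         loss_fct = nn.CrossEntropyLoss(weight=self.class_weights.to(logits.device))
+         loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
+         return (loss, outputs) if return_outputs else loss
+ ```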
100
+
101
+ # 4.2 Classification results
102
+
103
+ Table 3 reports our test results in terms of ROC AUC, precision, recall and F1 with respect to each class. The table also includes the results of binary classification, considering incivility as a unified concept. As shown, binary classification yields a best F1 performance of 0.75. In comparison, the best F1 results obtained for impoliteness and intolerance prediction are 0.70 and 0.59, respectively.
104
+
105
+ As a baseline, we report the performance of the pre-trained Jigsaw Perspective tool, scoring the test examples by their toxicity. The Perspective model has been trained to predict toxicity as "a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion". Following related works, we marked as toxic the examples that received a toxicity score of 0.5 or more from the model (Gehman et al., 2020). As detailed in Table 3, this method yields high precision (0.78) yet low recall (0.43) in identifying impolite speech. Possibly, the low recall indicates a domain adaptation issue. Toxicity is a poor predictor of intolerance, however, yielding very low precision and recall scores of 0.20 and 0.18 on this category, respectively. This indicates that the intolerant examples in our dataset are not typically conveyed using general toxic language.
106
+
107
+ <table><tr><td></td><td colspan="4">Inter-personal (impolite style)</td><td colspan="4">Public-level (intolerant substance)</td><td colspan="5">Any incivility (binary)</td></tr><tr><td>Classifier</td><td>AUC</td><td>P</td><td>R</td><td>F1</td><td>AUC</td><td>P</td><td>R</td><td>F1</td><td>AUC</td><td>P</td><td>R</td><td>F1</td><td>Mac.F1</td></tr><tr><td>Perspective</td><td>0.841</td><td>0.781</td><td>0.432</td><td>0.556</td><td>0.674</td><td>0.200</td><td>0.180</td><td>0.189</td><td>0.850</td><td>0.897</td><td>0.329</td><td>0.481</td><td>0.636</td></tr><tr><td>BERT</td><td>0.857</td><td>0.635</td><td>0.713</td><td>0.671</td><td>0.848</td><td>0.530</td><td>0.644</td><td>0.581</td><td>0.849</td><td>0.752</td><td>0.692</td><td>0.721</td><td>0.766</td></tr><tr><td>RoBERTa</td><td>0.874</td><td>0.642</td><td>0.744</td><td>0.689</td><td>0.859</td><td>0.501</td><td>0.728</td><td>0.593</td><td>0.864</td><td>0.765</td><td>0.707</td><td>0.735</td><td>0.777</td></tr><tr><td>DeBERTa</td><td>0.861</td><td>0.687</td><td>0.707</td><td>0.697</td><td>0.845</td><td>0.558</td><td>0.626</td><td>0.590</td><td>0.865</td><td>0.754</td><td>0.739</td><td>0.746</td><td>0.782</td></tr><tr><td>HateBert</td><td>0.865</td><td>0.701</td><td>0.661</td><td>0.680</td><td>0.835</td><td>0.515</td><td>0.639</td><td>0.571</td><td>0.857</td><td>0.755</td><td>0.719</td><td>0.737</td><td>0.777</td></tr><tr><td>HateXplain</td><td>0.820</td><td>0.567</td><td>0.688</td><td>0.622</td><td>0.756</td><td>0.374</td><td>0.537</td><td>0.441</td><td>0.811</td><td>0.773</td><td>0.532</td><td>0.630</td><td>0.713</td></tr><tr><td>GPT-3.5</td><td>0.827</td><td>0.421</td><td>0.913</td><td>0.576</td><td>0.765</td><td>0.379</td><td>0.519</td><td>0.438</td><td>0.838</td><td>0.652</td><td>0.835</td><td>0.732</td><td>0.742</td></tr><tr><td>GPT-4</td><td>-</td><td>0.666</td><td>0.659</td><td>0.663</td><td>-</td><td>0.562</td><td>0.416</td><td>0.478</td><td>-</td><td>0.807</td><td>0.638</td><td>0.712</td><td>0.769</td></tr></table>
108
+
109
+ Table 3: Multi-label and binary prediction results.
110
+
111
+ Considering that Generative Pre-trained Transformer (GPT) models have been applied to related tasks such as hate speech detection (Wullach et al., 2021a; Del Arco et al., 2023), we further attempted few-shot incivility prediction using GPT-3.5 and GPT-4. $^{12}$ In this case, for each target category, we prompted the model with a definition of the task and category, and with (3) labeled examples that were also presented to the human workers (see Appendix A). As shown in Table 3, this approach fell short of the finetuned models. (Unlike GPT-3.5, GPT-4 no longer provides token probability information in its API. For this reason, we do not report AUC figures for GPT-4.) It is possible that further improvements in the performance of these models can be achieved via prompt engineering, additional examples or finetuning (Gül et al., 2024); however, this is out of the scope of our work. Nevertheless, we observe similar trends using the GPT and the other models, showing a substantial gap in performance in favor of the impoliteness category. Concretely, we observe that GPT-4 yields F1 of 0.66 vs. 0.48 on the tasks of impoliteness and intolerance detection, respectively. The finetuned DeBERTa and RoBERTa achieve the best overall performance. Taking into account both performance and cost considerations, RoBERTa is our classifier of choice. This model yields F1 results of 0.69 and 0.59 on the impolite and intolerant classes, respectively.
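+ A sketch of the few-shot prompting setup (the actual prompts appear in Appendix A of the paper and are not reproduced here; the instruction wording below is illustrative and the in-context examples are taken from Table 1):
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
+
+ FEW_SHOT = [  # illustrative (text, label) pairs in the spirit of Table 1
+     ("People like you need a damn good kicking.", "yes"),
+     ("You never intended to secure the border", "no"),
+     ("That's what they are protesting, you rank imbecile.", "yes"),
+ ]
+
+ def classify_impolite(tweet: str, model: str = "gpt-4") -> str:
+     messages = [{"role": "system", "content":
+                  "Decide whether the following political tweet is impolite "
+                  "(harsh tone, insults, vulgarity, name-calling). Answer yes or no."}]
+     for text, label in FEW_SHOT:
+         messages += [{"role": "user", "content": text}, {"role": "assistant", "content": label}]
+     messages.append({"role": "user", "content": tweet})
+     resp = client.chat.completions.create(model=model, messages=messages, temperature=0)
+     return resp.choices[0].message.content.strip().lower()
+ ```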
112
+
113
+ Impoliteness vs. intolerance. We applied Shapley analysis (Lundberg and Lee, 2017)<sup>13</sup> to our training set to identify unigrams that are predictive of impoliteness or intolerance. Table 4 lists words that characterise each class. As expected, impolite style is characterised by derogatory words. Most of the listed words carry negative meaning in an unequivocal way, being offensive in any context, e.g., 'stupid'. In contrast, the intolerant tweets concern political affiliations, e.g., 'republicans', 'right', or
114
+
115
+ Impolite: fuck, help, stupid, damn, obnoxious, fed, joke, ass, goddamn, shit, coward, crap, unreal, love, neoliberal, king, mentality, anarchist, fuel, publishing, bad, wow, back, bastard, communists, forgive, idiot, dumb, change, worst, terrible, broke, asshole, humiliating
116
+
117
+ Intolerant: republican(s), democrat(s), leftists, GOP, democratic, catholics, speech, liberal, dem(s), socialist(s), conservatives, liberals, progressive(s), left, communist(s), party, right, racist, fascists, terrorists, nationalist(s), constituents, marxist, whites, radical, destroyed, americans
118
+
119
+ Table 4: Salient unigrams associated with impolite and intolerant speech in our dataset (Shapley analysis).
120
+
121
+ liberals'. Unlike slur words, the negative sentiment that such terms may carry is context dependent. Accordingly, we found that impolite tweets were less likely to be misclassified as neutral compared with intolerant tweets (26.7% vs. 44.0%). Thus, semantic and contextual understanding is needed to detect intolerance more precisely.
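+ A sketch of the Shapley-value analysis described above, using the shap library's support for transformers text pipelines (the checkpoint path is a hypothetical placeholder; aggregation over the full training set is omitted):
+
+ ```python
+ import shap
+ from transformers import pipeline
+
+ clf = pipeline("text-classification", model="path/to/finetuned-roberta-incivility", return_all_scores=True)  # hypothetical
+ explainer = shap.Explainer(clf)
+
+ # Token-level attributions for each output class; averaging absolute values over many
+ # tweets surfaces salient unigrams such as those listed in Table 4.
+ shap_values = explainer(["If you agree with democrats in congress, you are an anti-American commie"])
+ print(shap_values[0].data, shap_values[0].values.shape)
+ ```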
122
+
123
+ Error analysis. Table 5 includes examples of misclassified tweets, showing the labels assigned to them by the human workers versus the predicted labels. We indeed observe cases in which the model missed the presence of intolerance due to implied language (examples (c) and (d)), e.g., "you Republicans don't even know how to keep the electricity on!". Likewise, the model was sometimes misled by lexical cues, demonstrating the gap between lexical-level and semantic understanding (Zagoury et al., 2021); for instance, example (b) was misclassified as impolite, possibly because of the idiom 'sick of'. In some other cases, we found seemingly faulty predictions to be sensible, e.g., "impeach Biden and his administration! Or charge them with treason" was justifiably classified as intolerant. Again, this demonstrates the semantic and contextual challenges involved in identifying political intolerance.
124
+
125
+ Cross-dataset evaluation. We assess learning generalization using MUPID against other relevant
126
+
127
+ <table><tr><td>Tweet</td><td>Label</td><td>Prediction</td></tr><tr><td>(a) We need to impeach Biden and his administration! Or charge them with treason.</td><td>Neither</td><td>Intolerant</td></tr><tr><td>(b) Yes I have hope for your country. There are enough people who are sick of this.</td><td>Neither</td><td>Impolite</td></tr><tr><td>(c) Oh anyways the lefties are lying about everything relating to fixing the economy</td><td>Intolerant</td><td>Impolite</td></tr><tr><td>(d) How are you going to protect our Freedom? You Republicans don’t even know how to keep the electricity on!</td><td>Intolerant</td><td>Neither</td></tr><tr><td>(e) FXCK THAT! NEVER GONNA HAPPEN IN AMERICA! Civil War will happen before that happens here! @LINK</td><td>Impolite</td><td>Neither</td></tr><tr><td>(f) When will this nincompoop leave the White House. He got 81 million votes? God help us!! #IllegitimatePresident</td><td>Both</td><td>Intolerant</td></tr></table>
128
+
129
+
130
+ Table 5: Examples of tweets illustrating discrepancies between human-assigned labels and classifier predictions for impoliteness and intolerance.
131
+
132
+ ![](images/be6e3ab12c52fe85437afd4558f16ec71d2f2bba5355b41b7685d4cb606287d4.jpg)
133
+ Figure 1: Test F1 results on impoliteness and intolerance detection, varying the number of training examples.
134
+
135
+ datasets of political incivility (Table 2).<sup>14</sup> Concretely, we measured the extent to which performance declines in a cross-dataset setup compared to within-dataset training. We considered fixed random test sets $(20\%)$ , finetuning a RoBERTa classifier in all cases. On average, applying our model to the other datasets resulted in lower precision $(-25.3\%)$ and higher recall $(29\%)$ , reaching similar F1 results $(-3.3\%)$ . We attribute the increased recall to the diversity of MUPID, where precision may be reduced due to data shift or incompatibility of the annotations. Inversely, we finetuned a model using the other datasets (19K examples overall) and applied it to the MUPID test set. Compared to our results (Table 3), we observed lower precision $(-11.5\%)$ , recall $(-23.2\%)$ and F1 $(-18\%)$ . The reduction of recall reflects a failure to detect intolerant instances that are under-represented in the other datasets. See detailed results in Appendix B.
136
+
137
+ Impact of train set size. Figure 1 shows test F1 results while finetuning the RoBERTa classifier using increasing stratified subsets of the train set. As shown, impoliteness detection dominates intolerance detection using as few as 1,000 training examples, again showing the greater semantic complexity involved in detecting uncivil substance vs. tone. While the improvement rate subsides past $\sim 4\mathrm{K}$ labeled examples, the best results are obtained using the full dataset. We conjecture that similar to hate speech, further improvements may be achieved by extending the dataset, e.g., via methods of synthetic example generation (Wullach et al., 2021b; Hartvigsen et al., 2022).
138
+
139
+
140
+
141
+ # 5 From tweets to users: a large-scale evaluation
142
+
143
+ Automatic incivility detection may be used to identify and quantify political incivility at scale, addressing research questions of interest. Here, we introduce and examine the following questions: (i) Are certain users more inclined to post uncivil political content online? (ii) Do incivility levels vary by geopolitical region, specifically, across U.S. states? In both cases, we explore contextual factors that correlate with increased political incivility levels with respect to either impoliteness or intolerance.
144
+
145
+ To investigate these questions, we collected a corpus comprising the tweeting history of a large number of user accounts. Concretely, we randomly sampled users who authored tweets between July-Nov. 2022, whom we verified to be residents of the U.S. based on the location attribute of their profiles. For each user account, we retrieved the most recent (up to 200) tweets posted by them, discarding retweets and non-textual tweets, as well as tweets posted by overly active accounts suspected as bots.[15] This resulted in a corpus of 16.3M tweets authored by 373K users. Out of those, 2.6M tweets by 230K users were classified as political, henceforth, the corpus. Finally, $17.6\%$ of the political tweets were identified as impolite, $13.3\%$ as intolerant, and $2.5\%$ as both categories, accounting for an overall incivility ratio of $28.4\%$ . These proportions
146
+
147
+ <table><tr><td>Variable</td><td>% Impolite</td><td>% Intolerant</td></tr><tr><td colspan="3">User-level metrics (N=230K)</td></tr><tr><td># Followers</td><td>-0.109</td><td>-0.038</td></tr><tr><td># Followees</td><td>-0.017</td><td>0.058</td></tr><tr><td>Tweets per day</td><td>0.068</td><td>0.091</td></tr><tr><td>% political tweets</td><td>0.237</td><td>0.498</td></tr><tr><td colspan="3">Incivility among followees (N=1K, F=600k)</td></tr><tr><td>% Impolite</td><td>0.135</td><td>0.236</td></tr><tr><td>% Intolerant</td><td>0.128</td><td>0.371</td></tr></table>
148
+
149
+ Table 6: Spearman's correlations: the ratio of impolite/intolerant tweets vs. user-level metrics and the incivility ratios among the accounts followed. The table denotes the user sample size (N) and number of followees (F). All scores are significant $(p - value < 0.001)$ . Multivariate analysis gave similar results (Appendix C).
150
+
151
+ are similar to figures reported based on a manual examination of non-English political comments on Facebook: $20\%$ impolite and $10.8\%$ intolerant comments (Rossini, 2022). Considering this distribution, we note the importance of detecting incivility both in terms of style and substance for achieving a comprehensive coverage of online hostility.
152
+
153
+ # 5.1 Political incivility at the user level
154
+
155
+ Our results indicate that some users are indeed more inclined to post uncivil content than others. As few as $7.3\%$ of the users authored $50\%$ of the uncivil posts in the corpus, and $20.6\%$ of the users authored $80\%$ of the uncivil posts. On the other hand, $43.7\%$ of the users authored no uncivil post.
156
+
157
+ To explore the characteristics of incivility at user-level, we examined the associations between the share of impolite and intolerant tweets among one's political tweets and other user-level metrics of interest, including network connectivity (number of followers and followees), activity level (average number of tweets per day), and the ratio of political tweets among the tweets posted by them. Table 6 reports our findings in terms of Spearman's rank correlation scores. As shown, users who post intolerant and impolite political content are active, posting more tweets per day than other users. They also tend to have fewer followers; possibly, popular users refrain from controversial political language. Interestingly, a study of 'hateful' users similarly showed that they tweet more, follow other users more, but are less followed (Ribeiro et al., 2018). We find a strong positive correlation between incivility and the share of political tweets posted by the user (Spearman's correlation scores of 0.24 and 0.50 with respect to impoliteness and intolerance, respectively). That is, users who discuss political topics more often, an indicator of increased political engagement (Vaccari and Valeriani, 2018), are more likely to use either intolerant or impolite language. This result echoes the suggestion that incivility may become normalized for those who discuss politics online more often (Hmielowski et al., 2014). As we observe similar trends for both types of incivility, our study suggests that public-level incivility, i.e., intolerance, may have also become normalized online among those who practice political talk often. Importantly, since our classifiers mainly focus on hostility between partisan groups and ideological camps (Table 4), our analyses and findings apply to this context.
158
+
159
+
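+ Given a per-user table, the correlation analysis itself is straightforward to reproduce; a sketch (pandas/scipy assumed; column names are hypothetical) that also carries over directly to the state-level analysis in Section 5.2:
+
+ ```python
+ import pandas as pd
+ from scipy.stats import spearmanr
+
+ def incivility_correlations(df: pd.DataFrame,
+                             metrics=("n_followers", "n_followees", "tweets_per_day", "pct_political"),
+                             targets=("pct_impolite", "pct_intolerant")):
+     """Spearman's rank correlation between user-level metrics and per-user incivility ratios."""
+     out = {}
+     for target in targets:
+         res = {m: spearmanr(df[m], df[target]) for m in metrics}
+         out[target] = {m: (r.correlation, r.pvalue) for m, r in res.items()}
+     return pd.DataFrame(out)
+ ```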
160
+
161
+ In another analysis, we examine whether user-level incivility is correlated with incivility among the accounts that one follows. To address this question, we considered a random sample of 1K users, and obtained the tweets posted by their followees within a 2-month period prior to the user retrieval date. Overall, we processed 8M tweets posted by 0.6M unique followees, quantifying the share of uncivil political tweets by those accounts. As detailed in Table 6 and in Appendix C, strong and significant correlations were found with respect to both types of incivility between users and the accounts that they follow. Thus, we observe a substantial degree of network homophily among users and followees who use political incivility online (see also Mathew et al. (2019)). This result implies that network information may provide meaningful context for political incivility detection, especially in those cases where indirect language is used (Ribeiro et al., 2018; Ghosh et al., 2023).
162
+
163
+ # 5.2 Incivility across geopolitical regions
164
+
165
+ Using our large sample of users, we further quantify and compare political incivility across geopolitical regions, namely, U.S. states. For this purpose, we identified user accounts that specified state information (the full state name or its abbreviation) in the metadata location field. Overall, 186K users in the corpus met this condition. The largest numbers of users were affiliated with the states of New York (23K), California (16K) and Texas (14K). The states with the fewest users were North Dakota (265), Wyoming (315), South Dakota (426), and Alaska (579). The median number of tweets per state was 2.2K, providing a sufficient sample size for statistical analysis.
166
+
167
+ For each state, we computed the average user-level proportion of impolite or intolerant tweets.
168
+
169
+ ![](images/e95965904dd114ffe7d31ef963081c0841bf563565e197695ed70aeef668b57f.jpg)
170
+ Figure 2: Average detected user-level political intolerance ratio per state (ranging between $7 - 12\%$ ).
171
+
172
+ Figure 2 presents a heat map illustrating the average intolerance ratio across states. Here too, we observed aligned trends for both incivility types, obtaining similar results for impoliteness. As shown, some states demonstrate relatively low incivility rates (e.g., WA and NY) whereas others exhibit higher incivility rates (e.g., AZ and FL).
173
+
174
+ In light of these results, we conjectured that in 'battleground states', where the two camps are evenly matched, there would be more hostility and toxicity in the political debate. To test this hypothesis, we contrasted the detected state-level average ratios of impolite and intolerant tweets against the differences between the percentage of votes for the Democratic and the Republican parties per state.[16] The analysis confirmed our hypothesis, yielding significant Spearman's rank correlation scores of -0.43 and -0.40 (p-value $< 0.01$ ), respectively. In other words, this result suggests that political incivility tends to escalate in regions where electoral competition is intense, corresponding to a closer contest between the Democratic and Republican parties.
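+
+ The state-level test can be sketched in a few lines; the numbers below are placeholders rather than the actual per-state estimates or vote margins.
+
+ ```python
+ import pandas as pd
+ from scipy.stats import spearmanr
+
+ # Placeholder state-level data: mean user-level intolerance ratio and the
+ # absolute Democratic-Republican vote margin (in percentage points).
+ states = pd.DataFrame({
+     "state":            ["WA", "NY", "AZ", "FL", "TX", "GA"],
+     "intolerance_rate": [0.07, 0.08, 0.11, 0.12, 0.10, 0.11],
+     "vote_margin":      [19.2, 23.1, 0.3, 3.4, 5.6, 0.2],
+ })
+
+ # A negative coefficient means that closer races (smaller margins) coincide
+ # with higher detected incivility, i.e., the battleground-state effect.
+ rho, p = spearmanr(states["intolerance_rate"], states["vote_margin"])
+ print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
+ ```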
175
+
176
+ We note that, rather than specifying results per state, we wish to highlight the contextual factors that may affect incivility rates at the state level. Our findings corroborate and align with existing literature on political communication. In particular, researchers previously showed that candidates and the media use more negative rhetoric in battleground states (Goldstein and Freedman, 2002); that citizens of battleground states engage more in politics on social media (Settle et al., 2016); and that competitive districts feature higher levels of Twitter-based incivility (Vargo and Hopp, 2017). Our large-scale study is the first to provide conclusive empirical evidence of increased multidimensional
177
+
178
+ political incivility by social media users in battleground states.
179
+
180
+ # 6 Conclusion
181
+
182
+ We presented MUPID, a dataset of political incivility annotated via crowdsourcing, distinguishing between dimensions related to style (impoliteness) and substance (intolerance). As discussed in detail, we refrained from term matching and from using available toxicity detection tools so as to diminish topical and lexical bias. Our experiments using fine-tuned language models and few-shot learners reached best F1 performances of 0.70 and 0.59 in identifying impolite and intolerant language, respectively. Our results and analyses suggest that finer semantic and social understanding is required for more accurately decoding incivility as perceived in political contexts, particularly for intolerant expressions. A large-scale study demonstrates the utility of our models for studying various aspects of political incivility. We find that users who are politically engaged, in that they post political content more often, are more inclined to use uncivil language; indeed, as few as $20\%$ of the users authored $80\%$ of the uncivil tweets. We also track network homophily, showing that 'uncivil users' tend to follow other accounts with increased incivility. Analysing incivility at the aggregate level, we find that increased incivility is more prominent in battleground states.
183
+
184
+ Our dataset and models of multidimensional political incivility detection may support future research about the relationship between incivility and other contextual factors, e.g., user sociodemographics, since user traits such as age, gender, and education level may be elicited from the popular accounts that a user follows (Lotan and Minkov, 2023). A temporal analysis may highlight the impact of political events on incivility levels.
185
+
186
+ We believe that political incivility detection would benefit from the modeling of relevant social context, such as conversation history (Ghosh et al., 2023) and the political events that the text refers to (Pujari and Goldwasser, 2021). Incorporating information about the user alongside the text authored by them may also help decode the text's meaning (Pujari et al., 2024). Initial experiments, in which we conjoined user network embeddings with the text encoding, showed improved prediction performance; a rough sketch of this approach is given below. We hope that researchers will benefit from our dataset in exploring similar directions.
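+
+ The PyTorch module below concatenates a precomputed user network embedding with a sentence encoding before a small classification head. The dimensions, names, and head design are assumptions for illustration, not our exact implementation.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class TextPlusUserClassifier(nn.Module):
+     """Classify a tweet from its text encoding concatenated with a user embedding."""
+
+     def __init__(self, text_dim: int = 768, user_dim: int = 128, n_classes: int = 2):
+         super().__init__()
+         self.head = nn.Sequential(
+             nn.Linear(text_dim + user_dim, 256),
+             nn.ReLU(),
+             nn.Linear(256, n_classes),
+         )
+
+     def forward(self, text_encoding: torch.Tensor, user_embedding: torch.Tensor) -> torch.Tensor:
+         # text_encoding: (batch, text_dim), e.g. the [CLS] vector of a fine-tuned encoder.
+         # user_embedding: (batch, user_dim), e.g. a network embedding of the author.
+         joint = torch.cat([text_encoding, user_embedding], dim=-1)
+         return self.head(joint)
+
+ model = TextPlusUserClassifier()
+ logits = model(torch.randn(4, 768), torch.randn(4, 128))
+ print(logits.shape)  # torch.Size([4, 2])
+ ```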
187
+
188
+ # 7 Limitations
189
+
190
+ This study applies to political incivility in the U.S., focusing on the Twitter network. Our dataset and models may therefore be limited geographically, temporally, and with respect to platform. In fact, soon after we performed this research, the Twitter social network changed ownership and was rebranded as X, and changes in its user base and political incivility levels may have followed. In general, however, we believe that many of the patterns captured in our dataset and models are general and may transfer to other social media platforms and over time.
191
+
192
+ It is important to note that while we attend to contextual factors of political incivility at the user and geopolitical levels, we acknowledge the potential significance of other contextual factors, e.g., the conversation history, and whether the discussion is held among like-minded users (Rossini, 2022). Exploring these aspects requires diverse methodological approaches, which are beyond the scope of the current paper.
193
+
194
+ Another limitation that is inherent to Twitter data concerns replicability, as accounts may be deleted or suspended and posts may be removed from the social network platform over time. This limitation applies to all Twitter datasets, which require tweet recovery via rehydration (Bianchi et al., 2022). We release our dataset, as well as our code and classification models to the research community to promote future research on this topic,[17] and to allow comparison of our models with future models of political incivility detection.
195
+
196
+ # 8 Ethics statement
197
+
198
+ As the primary focus of this study is political incivility, crowd coders may have encountered texts characterized by an impolite style (e.g., foul language) or intolerant content (e.g., speech that discriminates against or excludes individuals based on their social and political characteristics). To mitigate potential harm to the crowd coders, we implemented several protective measures. First, we deliberately avoided providing coding examples that contained violent threats and extreme forms of incivility. Second, we ensured that all coding examples and tasks were derived from real-world political tweets, similar to those commonly encountered on social media platforms. Additionally, we allowed coders the flexibility to terminate their tasks at their
199
+
200
+ discretion. We further wish to clarify that we do not consider political incivility to be a personal trait or a characteristic of any population. Considering that toxic political discourse may have become normalized among those who frequently engage in social media discussions, our study aims to distinguish between two distinct dimensions of such discourse within the framework of partisan competition. Finally, we clarify that the normative debate on online freedom of speech and its possible restrictions is beyond the scope of our manuscript. Rather, our study aims to provide a foundation for researchers to explore the underlying factors shaping political incivility, allowing future studies to delve into its implications. This research was approved by our institutional review board.
201
+
202
+ # Acknowledgements
203
+
204
+ We thank the reviewers for their useful comments. This research was funded by the Data Science and Research Center at the University of Haifa and by the Israeli Science Foundation, Grant no. 2671/22.
205
+
206
+ # References
207
+
208
+ Pablo Barberá, Andreu Casas, Jonathan Nagler, Patrick J Egan, Richard Bonneau, John T Jost, and Joshua A Tucker. 2019. Who leads? Who follows? Measuring issue attention and agenda setting by legislators and the mass public using social media data. *American Political Science Review*, 113(4):883-901.
209
+ Pablo Barbera, John T Jost, Jonathan Nagler, Joshua A Tucker, and Richard Bonneau. 2015. Tweeting from left to right: Is online political communication more than an echo chamber? Psychological science, 26(10):1531-1542.
210
+ Sam Bestvater, Sono Shah, Gonzalo Rivero, and Aaron Smith. 2022. Politics on Twitter: One-third of tweets from U.S. adults are political.
211
+ Federico Bianchi, Stefanie Hills, Patricia Rossini, Dirk Hovy, Rebekah Tromble, and Nava Tintarev. 2022. "It's not just hate": A multi-dimensional perspective on detecting harmful speech online. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.
212
+ Tommaso Caselli, Valerio Basile, Jelena Mitrovic, and Michael Granitzer. 2021. HateBERT: Retraining BERT for abusive language detection in English. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 17-25.
213
+ Kevin Coe, Kate Kenski, and Stephen A Rains. 2014. Online and uncivil? Patterns and determinants of incivility in newspaper website comments. Journal of communication, 64(4):658-679.
214
+
215
+ Sam Davidson, Qiusi Sun, and Magdalena Wojcieszak. 2020. Developing a new classifier for automated identification of incivility in social media. In Proceedings of the Fourth Workshop on Online Abuse and Harms.
216
+ Flor Miriam Plaza Del Arco, Debora Nozza, and Dirk Hovy. 2023. Respectful or toxic? using zero-shot learning with language models to detect hate speech. In *The 7th Workshop on Online Abuse and Harms* (WOAH).
217
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT.
218
+ Mai ElSherief, Vivek Kulkarni, Dana Nguyen, William Yang Wang, and Elizabeth Belding. 2018. Hate lingo: A target-based linguistic analysis of hate speech in social media. Proceedings of the International AAAI Conference on Web and Social Media, 12(1).
219
+ Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
220
+ Jeremy A Frimer, Harinder Aujla, Matthew Feinberg, Linda J Skitka, Karl Aquino, Johannes C Eichstaedt, and Robb Willer. 2023. Incivility is rising among American politicians on Twitter. Social Psychological and Personality Science, 14(2):259-269.
221
+ Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In *Findings of the Association for Computational Linguistics: EMNLP* 2020.
222
+ Bryan T Gervais. 2014. Following the news? reception of uncivil partisan media and the use of incivility in political expression. Political Communication, 31(4):564-583.
223
+ Sreyan Ghosh, Manan Suri, Purva Chiniya, Utkarsh Tyagi, Sonal Kumar, and Dinesh Manocha. 2023. Cosyn: Detecting implicit hate speech in online conversations using a context synergized hyperbolic network. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing.
224
+ Ken Goldstein and Paul Freedman. 2002. Lessons learned: Campaign advertising in the 2000 elections. Political Communication, 19(1):5-28.
225
+ Nir Grinberg, Kenneth Joseph, Lisa Friedland, Briony Swire-Thompson, and David Lazer. 2019. Fake news on Twitter during the 2016 US presidential election. Science, 363(6425):374-378.
226
+
227
+ Ilker Gül, Rémi Lebret, and Karl Aberer. 2024. Stance detection on social media with fine-tuned large language models. CoRR, abs/2404.12171.
228
+ Amy Gutmann and Dennis F Thompson. 2009. Democracy and disagreement. Harvard University Press.
229
+ Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. In Proceedings of the Annual Meeting of the Association for Computational Linguistics ACL.
230
+ Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. Deberta: Decoding-enhanced bert with disentangled attention. In 9th International Conference on Learning Representations (ICLR).
231
+ Anushree Hede, Oshin Agarwal, Linda Lu, Diana C. Mutz, and Ani Nenkova. 2021. From toxicity in online comments to incivility in American news: Proceed with caution. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL).
232
+ Sophie Henning, William Beluch, Alexander Fraser, and Annemarie Friedrich. 2023. A survey of methods for addressing class imbalance in deep-learning based natural language processing. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL).
233
+ Jay D Hmielowski, Myiah J Hutchens, and Vincent J Cicchirillo. 2014. Living in an age of online incivility: Examining the conditional indirect effects of online discussion on political flaming. Information, Communication & Society, 17(10):1196-1211.
234
+ Jeffrey B Lewis, Keith Poole, Howard Rosenthal, Adam Boche, Aaron Rudkin, and Luke Sonnet. 2019. Voteview: Congressional roll-call votes database. See https://voteview.com/.
235
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
236
+ Nir Lotan and Einat Minkov. 2023. Social world knowledge: Modeling and applications. Plos one, 18(7).
237
+ Scott M Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems, volume 30.
238
+ Binny Mathew, Ritam Dutt, Pawan Goyal, and Animesh Mukherjee. 2019. Spread of hate speech in online social media. In Proceedings of the 10th ACM conference on web science, pages 173-182.
239
+ Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2021. Hatexplain: A benchmark dataset for
240
+
241
+ explainable hate speech detection. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI).
242
+ Ashley Muddiman. 2017. Personal and public levels of political incivility. International Journal of Communication, 11:21.
243
+ Ashley Muddiman, Jamie Pond-Cobb, and Jamie E. Matson. 2020. Negativity bias or backlash: Interaction with civil and uncivil online political news content. Communication Research, 47(6):815-837.
244
+ Pia Pachinger, Allan Hanbury, Julia Neidhardt, and Anna Planitzer. 2023. Toward disambiguating the definitions of abusive, offensive, toxic, and uncivil comments. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP).
245
+ Zizi Papacharissi. 2004. Democracy online: Civility, politeness, and the democratic potential of online political discussion groups. New Media & Society, 6(2):259-283.
246
+ Rajkumar Pujari and Dan Goldwasser. 2021. Understanding politics via contextualized discourse processing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
247
+ Rajkumar Pujari, Chengfei Wu, and Dan Goldwasser. 2024. "we demand justice!": Towards social context grounding of political texts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
248
+ Ludovic Rheault, Erica Rayment, and Andreea Musulan. 2019. Politicians in the line of fire: Incivility and the treatment of women on social media. Research & Politics, 6(1).
249
+ Manoel Ribeiro, Pedro Calais, Yuri Santos, Virgílio Almeida, and Wagner Meira Jr. 2018. Characterizing and detecting hateful users on Twitter. In Proceedings of the International AAAI Conference on Web and Social Media, volume 12.
250
+ Patricia Rossini. 2022. Beyond incivility: Understanding patterns of uncivil and intolerant discourse in online political talk. Communication Research, 49(3):399-425.
251
+ Paul Rottger, Bertie Vidgen, Dirk Hovy, and Janet Pierrehumbert. 2022. Two contrasting data annotation paradigms for subjective NLP tasks. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
252
+ Natalee Seely. 2018. Virtual vitriol: A comparative analysis of incivility within political news discussion forums. *Electronic News*, 12(1):42-61.
253
+ Jaime E Settle, Robert M Bond, Lorenzo Coviello, Christopher J Fariss, James H Fowler, and Jason J Jones. 2016. From posting to voting: The effects of political competition on online political engagement.
254
+
255
+ Political Science Research and Methods, 4(2):361-378.
256
+ Rasmus Skytte. 2021. Dimensions of elite partisan polarization: Disentangling the effects of incivility and issue polarization. *British Journal of Political Science*, 51(4):1457-1475.
257
+ Yannis Theocharis, Pablo Barbera, Zoltán Fazekas, Sebastian Popa, and Olivier Parnet. 2016. A bad workman blames his tweets: The consequences of citizens' uncivil Twitter use when interacting with party candidates: Incivility in interactions with candidates on Twitter. Journal of Communication, 66.
258
+ Yannis Theocharis, Pablo Barbera, Zoltán Fazekas, and Sebastian Adrian Popa. 2020. The dynamics of political incivility on Twitter. SAGE Open, 10(2).
259
+ Simon Tong and Daphne Koller. 2001. Support vector machine active learning with applications to text classification. Journal of machine learning research, 2(Nov):45-66.
260
+ Cristian Vaccari and Augusto Valeriani. 2018. Digital political talk and political participation: Comparing established and third wave democracies. Sage Open, 8(2).
261
+ Jonathan Van't Riet and Aart Van Stekelenburg. 2022. The effects of political incivility on political trust and political participation: A meta-analysis of experimental research. Human Communication Research, 48(2):203-229.
262
+ Chris J Vargo and Toby Hopp. 2017. Socioeconomic status, social capital, and partisan polarity as predictors of political incivility on twitter: A congressional district-level analysis. Social Science Computer Review, 35(1):10-32.
263
+ Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of abusive language: The problem of biased datasets. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
264
+ Magdalena Wojcieszak, Andreu Casas, Xudong Yu, Jonathan Nagler, and Joshua A Tucker. 2022. Most users do not follow political elites on twitter; those who do show overwhelming preferences for ideological congruity. Science advances, 8(39):eabn9418.
265
+ Magdalena Wojcieszak, Sjifra de Leeuw, Ericka Menchen-Trevino, Seungsu Lee, Ke M Huang-Isherwood, and Brian Weeks. 2023. No polarization from partisan news: Over-time evidence from trace data. The International Journal of Press/Politics, 28(3):601-626.
266
+ Tomer Wullach, Amir Adler, and Einat Minkov. 2021a. Fight fire with fire: Fine-tuning hate detectors using large samples of generated hate speech. In *Findings of the Association for Computational Linguistics: EMNLP*.
267
+
268
+ Tomer Wullach, Amir Adler, and Einat Minkov. 2021b. Towards hate speech detection at large via deep generative modeling. IEEE Internet Comput., 25(2):48-57.
269
+
270
+ Teng Ye, Sangseok You, and Lionel Robert Jr. 2017. When does more money work? Examining the role of perceived fairness in pay on the performance quality of crowdworkers. In Proceedings of the International AAAI Conference on Web and Social Media (ICWSM).
271
+
272
+ Avishai Zagoury, Einat Minkov, Idan Szpektor, and William W. Cohen. 2021. What's the best place for an AI conference, Vancouver or ______: Why completing comparative questions is difficult. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI.
273
+
274
+ # A Instructions and interfaces for the crowd workers and the GPT prompt
275
+
276
+ Figure 3 presents the code book presented to the crowd workers, and Figure 4 demonstrates the training phase that workers had to complete in order to qualify to work on our task. As shown in the screenshot, following the completion of the training phase, the candidate worker was presented with explanations about their labeling mistakes. To instruct the GPT model to label the test examples, we used the prompt shown in Figure 5.
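+
+ For reference, querying the model programmatically can look like the sketch below. It abridges the Figure 5 template, assumes the OpenAI Python client with an API key set in the environment, and simplifies the parsing of the returned JSON.
+
+ ```python
+ import json
+ from openai import OpenAI
+
+ client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
+
+ # Abridged version of the Figure 5 template; the full few-shot prompt is longer.
+ PROMPT_TEMPLATE = (
+     "You are an advanced Political Impoliteness Classification System...\n"
+     "Determine whether the provided tweet displays impoliteness.\n"
+     'Please provide your judgment in the following JSON format: {{"impoliteness": "Yes" or "No"}}\n\n'
+     "# Tweet: {tweet}\n\nYour JSON response:"
+ )
+
+ def classify_impoliteness(tweet: str) -> bool:
+     response = client.completions.create(
+         model="gpt-3.5-turbo-instruct",
+         prompt=PROMPT_TEMPLATE.format(tweet=tweet),
+         max_tokens=10,
+         temperature=0,
+     )
+     answer = json.loads(response.choices[0].text.strip())
+     return answer["impoliteness"] == "Yes"
+
+ print(classify_impoliteness("Example political tweet goes here."))
+ ```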
277
+
278
+ # B Cross-dataset evaluation results
279
+
280
+ Table 7 includes detailed intra- and cross-dataset evaluation results.
281
+
282
+ <table><tr><td></td><td>Train</td><td>Test</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td colspan="6">MUPID → Other datasets:</td></tr><tr><td>I</td><td>TH</td><td>TH</td><td>0.677</td><td>0.543</td><td>0.604</td></tr><tr><td>C</td><td>MUPID</td><td>TH</td><td>0.542</td><td>0.847</td><td>0.661</td></tr><tr><td></td><td></td><td>Δ</td><td>-19.9%</td><td>56.0%</td><td>9.4%</td></tr><tr><td>I</td><td>RH</td><td>RH</td><td>0.845</td><td>0.672</td><td>0.749</td></tr><tr><td>C</td><td>MUPID</td><td>RH</td><td>0.547</td><td>0.831</td><td>0.66</td></tr><tr><td></td><td></td><td>Δ</td><td>-35.3%</td><td>23.6%</td><td>-11.9%</td></tr><tr><td>I</td><td>DA</td><td>DA</td><td>0.871</td><td>0.725</td><td>0.791</td></tr><tr><td>C</td><td>MUPID</td><td>DA</td><td>0.692</td><td>0.779</td><td>0.733</td></tr><tr><td></td><td></td><td>Δ</td><td>-20.6%</td><td>7.4%</td><td>-7.3%</td></tr><tr><td></td><td></td><td>Average Δ:</td><td>-25.3%</td><td>29.0%</td><td>-3.3%</td></tr><tr><td colspan="6">Other datasets → MUPID:</td></tr><tr><td>I</td><td>MUPID</td><td>MUPID</td><td>0.765</td><td>0.707</td><td>0.735</td></tr><tr><td>C</td><td>All</td><td>MUPID</td><td>0.677</td><td>0.543</td><td>0.603</td></tr><tr><td></td><td></td><td>Δ</td><td>-11.5%</td><td>-23.2%</td><td>-18.0%</td></tr></table>
283
+
284
+ Table 7: Detailed cross-dataset evaluation results: intra-dataset (I) vs. cross-dataset (C) experiments. The table uses the acronyms TH (Theocharis et al., 2020), RH (Rheault et al., 2019), and DA (Davidson et al., 2020).
285
+
286
+ <table><tr><td>Variable</td><td>Odds ratio</td><td>Std.Error</td></tr><tr><td colspan="3">IMPOLITE</td></tr><tr><td># Followers</td><td>1.000000</td><td>1</td></tr><tr><td># Followees</td><td>0.999992</td><td>1.000001</td></tr><tr><td>Tweets per day</td><td>1.008036</td><td>1.000401</td></tr><tr><td>% Political tweets</td><td>1.589433</td><td>1.020808</td></tr><tr><td colspan="3">INTOLERANT</td></tr><tr><td># Followers</td><td>1</td><td>1</td></tr><tr><td># Followees</td><td>1.00001</td><td>1.000001</td></tr><tr><td>Tweets per day</td><td>1.008002</td><td>1.000356</td></tr><tr><td>% Political tweets</td><td>5.176365</td><td>1.018723</td></tr></table>
287
+
288
+ Table 8: Multivariate beta regression results of user-level characteristics as explanatory factors of the share of impolite and intolerant tweets out of their political tweets. The sample size is $230\mathrm{K}$ users, and all results are significant at p-value $< 0.001$ .
289
+
290
+ # C Multi-variate analyses of user-level incivility
291
+
292
+ This section includes multi-variate analysis results, showing similar trends to our results measured in terms of Spearman's correlation, reported in Table 6.
293
+
294
+ We modeled multivariate beta regressions to examine the associations between the share of impolite and intolerant tweets out of users' political tweets and other user-level characteristics, including their number of followers, number of followees (i.e., accounts followed by a given user), average tweets per day, and the share of political tweets out of the total tweets by a given user. The correlates with respect to the ratio of impolite and intolerant tweets are presented in Table 8. We use the odds ratio (OR) to interpret the results more intuitively. The results show, for example, a positive relationship between the share of impoliteness and tweets per day $(\mathrm{OR} = 1.008)$ : for a one-unit increase in a user's tweets per day, the odds of observing a higher share of impolite tweets increase by $0.80\%$ . Focusing on the share of political tweets as a predictor, the results show that a movement from its minimum value (0) to its maximum value (1) is associated with a $59\%$ increase in the odds of observing a higher share of impolite tweets $(\mathrm{OR} = 1.59)$ . We also observe that a greater share of political tweets is associated with a higher ratio of intolerant tweets, to an even greater extent $(\mathrm{OR} = 5.17)$ . Note that while there is a very small change in the impoliteness or intolerance ratio with the increase of a single follower or followee (OR is roughly 1), this effect is statistically significant.
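+
+ One way to approximate this kind of analysis in Python is sketched below, using the BetaModel class from statsmodels on simulated data. The predictor names mirror Table 8, but the data, model specification, and use of statsmodels are illustrative assumptions rather than our exact pipeline.
+
+ ```python
+ import numpy as np
+ import pandas as pd
+ import statsmodels.api as sm
+ from statsmodels.othermod.betareg import BetaModel
+
+ rng = np.random.default_rng(0)
+ n = 500
+
+ # Simulated user-level predictors (stand-ins for the real corpus statistics).
+ df = pd.DataFrame({
+     "followers":       rng.integers(10, 10_000, n),
+     "followees":       rng.integers(10, 5_000, n),
+     "tweets_per_day":  rng.gamma(2.0, 3.0, n),
+     "political_share": rng.uniform(0.01, 0.99, n),
+ })
+ # Simulated outcome: share of impolite tweets, kept strictly inside (0, 1).
+ df["impolite_share"] = np.clip(
+     0.05 + 0.10 * df["political_share"] + rng.normal(0, 0.02, n), 0.001, 0.999)
+
+ exog = sm.add_constant(df[["followers", "followees", "tweets_per_day", "political_share"]])
+ result = BetaModel(df["impolite_share"], exog).fit()
+
+ # With the default logit link, exponentiating the mean-model coefficients gives
+ # odds-ratio-style effects; the fit also estimates a separate precision parameter.
+ print(np.exp(result.params[: exog.shape[1]]))
+ ```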
295
+
296
+ We also examined whether posting uncivil tweets is correlated with exposure to incivility by
297
+
298
+ # Rules, Tips and Examples (Click to collapse)
299
+
300
+ # Rules and Tips
301
+
302
+ 1. What makes a tweet uncivil:
303
+ The tweet contains foul language or a harsh tone toward other people or their ideas and actions. It can also include harmful or discriminatory intent toward people or groups based on gender, race, ethnicity, political views, etc.
304
+ 2. Uncivil tweets can be categorized into three sub-dimensions: (A) impoliteness, (B) intolerance, or (C) both.
305
+
306
+ A. Impoliteness: the tweet contains insults, foul language, harsh tone, name-calling, vulgarity, an accusation of lying, or aspersion toward other people or their ideas and actions.
307
+ B. Intolerance: the tweet contains expressions that derogate or undermine particular groups due to social, political, sexual, ethnic, or cultural features. The tweet can contain threats of physical or emotional harm to others, or the silencing or denial of rights of people and groups (e.g., minorities, political groups, etc.).
308
+ C. Both: the tweet contains both of the above sub-dimensions.
309
+
310
+ 3. An intolerant tweet (sub-dimension B) does not necessarily have an impolite style (sub-dimension A), and vice-versa.
311
+ 4. Disagreements with another person or idea are not considered uncivil automatically. When these disagreements contain foul language or intolerant expressions, only then the tweet will be categorized as uncivil.
312
+ 5. Notice that a tweet may contain criticism (even harsh criticism) toward public figures. However, criticism does not automatically make the tweet intolerant. In order to be intolerant, the tweet should explicitly offend/derogate entire groups (e.g., minorities, liberals, Republicans, women, LGBT, etc.)
313
+
314
+ Figure 3: The code book presented to the crowd workers
315
+
316
+ one's followees network (i.e., the accounts that the user follows). We calculated users' potential exposure to incivility as the share of impolite and intolerant tweets in their followees' network, i.e., the number of uncivil tweets posted by their followees divided by the total number of political tweets of these followees. We modeled the same beta regressions as above, this time adding the share of impolite and intolerant tweets in one's network as predictors. In the case of impoliteness, the results indicate that the more users are potentially exposed to impoliteness in their network, the higher the share of their impolite tweets $(\mathrm{OR} = 1.03$ , a $3\%$ increase, p-value $< 0.001$ ). Similar findings are observed in the case of intolerance $(\mathrm{OR} = 1.06$ , a $6\%$ increase, p-value $< 0.001$ ). While we cannot establish causality due to the cross-sectional nature of the data, we encourage scholars to further investigate these initial findings, which suggest that uncivil users follow others who behave similarly.
317
+
318
+ You were right in 2 out of 6 questions.
319
+
320
+ Correct answers in green & incorrect ones in red color.
321
+
322
+ Please review the correct answers and their detailed explanations:
323
+
324
+ ![](images/f8e6f8ee14c0a4a9d1b83a6749e7ddf7f404596611a90183f9de35d006ce9444.jpg)
325
+
326
+ USER @User.Jun 27
327
+
328
+ The government has ruined millions of lives and livelihoods by disrupting commerce, destroying small businesses and hindering individuals from going about their daily lives.
329
+
330
+ That's what the truckers are protesting, you rank imbecile. People like you need a damn good kicking.
331
+
332
+ ![](images/0c2f3c32f5cd55226033ec79a92c299cd2ca3dc1507fcc795c7e7d4c11d58518.jpg)
333
+ Figure 4: Annotator training: each candidate worker was asked to label 6 example tweets. We provided feedback about the labeling accuracy, and explanations about the labeling mistakes. Workers who obtained an accuracy of 4 out of 6 or higher in their responses qualified to work on our task.
334
+
335
+ ![](images/49dd0f10dd050ac0f7a9dc5b64822e5b7ae9e3c40f0d10f8371b9e22d3d12777.jpg)
336
+
337
+ ![](images/7f268ec2f0637f6a4173b0b455d3780fba8a55896a6dd813b3fa0ee959c1c6b1.jpg)
338
+
339
+ ![](images/cd90f6ee84e4ec28dcc45a27cc98aa0c041f8079b0fef84c7b810bfd5fa49c50.jpg)
340
+
341
+ This is an uncivil tweet that belongs to the impoliteness category. Although the beginning of the tweet is critical of the government in a relatively civil manner, the tweet ends with offensive language, including direct insults.
342
+
343
+ ![](images/1550eae9de3f6953a1bb28157f8df516f58c683db0fc31d3b12a7d7a31f54f79.jpg)
344
+
345
+ USER @User.Jun 27
346
+
347
+ It would be nice to see the left fight for the basic human rights of everyone like they do for abortions.
348
+
349
+ ![](images/7add19792dd6b3714b9a8081a9c7b33bd6c10b146f2fcf5c7942725a9e47c8b9.jpg)
350
+
351
+ ![](images/ef4a1545659029308077f71474c2b2d690ec0b76e35c729676027bc52df98e6e.jpg)
352
+
353
+ ![](images/233e33a66269001cddccf57bbbba35e12981400a24666a8e152d4afded212682.jpg)
354
+
355
+ ![](images/a637b9fb99da02e3590a4b236779dea97eeab2a2dd9a90dfbb39f8717776d7ea.jpg)
356
+
357
+ This is a neutral tweet that is considered Civil. For a tweet to be labeled uncivil, it must be unambiguously offensive or contain derogative language. In this case, the tweet criticizes liberals in a relatively neutral way.
358
+
359
+ You are an advanced Political Impoliteness Classification System, specialized in detecting impolite language and behavior in political discourse, particularly within the context of United States politics and its specific terminology.
360
+
361
+ # Instruction:
362
+
363
+ Determine whether the provided tweet displays impoliteness.
364
+
365
+ Impoliteness is characterized by insults, foul language, harsh tone, name-calling, vulgarity, an accusation of lying, or aspersion toward other people or their ideas and actions.
366
+
367
+ Please provide your judgment in the following JSON format: {"impoliteness": "Yes" or "No"}
368
+
369
+ # Example Evaluations:
370
+
371
+ Tweet: "All hell has broken loose under the leadership of the senile old man. And now due to his weakness we will see him take us to WWII. Young people voted for this crap."
372
+
373
+ Your JSON response: {{"impoliteness": "Yes"}}
374
+
375
+ Tweet: "And what's it called when Hillary and the dems arranged illegal surveillance against the POTUS? spying on the Whitehouse servers? Hillary and the dems ARE enemies, foreign AND domestic."
376
+
377
+ Your JSON response: {{"impoliteness": "No"}}
378
+
379
+ Tweet: "@USER just passed a trillion dollar infrastructure bill for Biden with no wall funding. How long do Republicans believe you can keep pushing this line? You never intended to secure the border."
380
+
381
+ Your JSON response: {{"impoliteness": "No"}}
382
+
383
+ # Tweet: $\{x\}$
384
+
385
+ Your JSON response:
386
+
387
+ Figure 5: The prompt provided to the GPT-3.5-instruct model for impoliteness classification. A similar prompt was provided for intolerance classification. The format of the prompt follows common practice in instructing GPT-instruct and similar models to perform specific classification tasks.
2024/A Closer Look at Multidimensional Online Political Incivility/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b68619bad86127817271b82ea0200a04b3834227403d9fa06567b2ac0b0d48e2
3
+ size 291527
2024/A Closer Look at Multidimensional Online Political Incivility/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives/979594f1-eb73-4d54-8635-879feb871523_content_list.json ADDED
@@ -0,0 +1,1295 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 136,
8
+ 89,
9
+ 860,
10
+ 130
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Zihao Li, $^{1}$ Shaoxiong Ji, $^{*1}$ Timothee Mickus, $^{*1}$ Vincent Segonne, $^{2}$ and Jörg Tiedemann $^{1}$",
17
+ "bbox": [
18
+ 117,
19
+ 153,
20
+ 882,
21
+ 171
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "<sup>1</sup>University of Helsinki <sup>2</sup> Université Bretagne Sud",
28
+ "bbox": [
29
+ 287,
30
+ 172,
31
+ 712,
32
+ 187
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "firstname_lastname@{'helsinki.fi, 'univ-ubs.fr}",
39
+ "bbox": [
40
+ 220,
41
+ 187,
42
+ 779,
43
+ 204
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "Abstract",
50
+ "text_level": 1,
51
+ "bbox": [
52
+ 260,
53
+ 252,
54
+ 339,
55
+ 268
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "Pretrained language models (PLMs) display impressive performances and have captured the attention of the NLP community. Establishing best practices in pretraining has, therefore, become a major focus of NLP research, especially since insights gained from monolingual English models may not necessarily apply to more complex multilingual models. One significant caveat of the current state of the art is that different works are rarely comparable: they often discuss different parameter counts, training data, and evaluation methodology.",
62
+ "bbox": [
63
+ 141,
64
+ 282,
65
+ 460,
66
+ 454
67
+ ],
68
+ "page_idx": 0
69
+ },
70
+ {
71
+ "type": "text",
72
+ "text": "This paper proposes a comparison of multilingual pretraining objectives in a controlled methodological environment. We ensure that training data and model architectures are comparable, and discuss the downstream performances across 6 languages that we observe in probing and fine-tuning scenarios. We make two key observations: (1) the architecture dictates which pretraining objective is optimal; (2) multilingual translation is a very effective pretraining objective under the right conditions. We make our code, data, and model weights available at https://github.com/Helsinki-NLP/lm-vs-mt.",
73
+ "bbox": [
74
+ 141,
75
+ 460,
76
+ 460,
77
+ 659
78
+ ],
79
+ "page_idx": 0
80
+ },
81
+ {
82
+ "type": "text",
83
+ "text": "1 Introduction",
84
+ "text_level": 1,
85
+ "bbox": [
86
+ 114,
87
+ 674,
88
+ 260,
89
+ 690
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "The release of BERT (Devlin et al., 2019) has marked a paradigm shift in the NLP landscape and has ushered in a thorough investment of the NLP research community in developing large language models that can readily be adapted to novel situations. The design, training, and evaluation of these models has become a significant enterprise of its own.",
96
+ "bbox": [
97
+ 112,
98
+ 702,
99
+ 489,
100
+ 828
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "text",
106
+ "text": "In recent years, that sustained interest has shifted also to encompass multilingual models (e.g., Muen-nighoff et al., 2022; Alves et al., 2024). There is considerable variation as to how such models are",
107
+ "bbox": [
108
+ 112,
109
+ 831,
110
+ 489,
111
+ 895
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "text",
117
+ "text": "trained: For instance, some rely on datasets comprising multiple languages without explicit crosslingual supervision (e.g., Liu et al., 2020), and some use explicit supervision (Xue et al., 2021). One complication that arises from this blossoming field of study is that much of the work being carried out is not directly comparable beyond the raw performances on some well-established benchmark, a procedure which may well be flawed (Gorman and Bedrick, 2019). Avoiding apples-to-oranges comparison requires a methodical approach in strictly comparable circumstances, which is the stance we adopt in this paper.",
118
+ "bbox": [
119
+ 507,
120
+ 253,
121
+ 884,
122
+ 462
123
+ ],
124
+ "page_idx": 0
125
+ },
126
+ {
127
+ "type": "text",
128
+ "text": "In short, we focus on two variables—model architecture and pretraining objectives—and set out to train five models in strictly comparable conditions and compare their monolingual performances in three downstream applications: sentiment analysis, named entity recognition, and POS-tagging. The scope of our study spans from encoder-decoder machine translation models, to decoder-only causal language models and encoder-only BERT-like masked language models. We categorize them into double-stacks (encoder-decoder) and single-stacks (encoder-only or decoder-only) models. We intend to answer two research questions:",
129
+ "bbox": [
130
+ 507,
131
+ 463,
132
+ 884,
133
+ 686
134
+ ],
135
+ "page_idx": 0
136
+ },
137
+ {
138
+ "type": "list",
139
+ "sub_type": "text",
140
+ "list_items": [
141
+ "(i) Does the explicit cross-lingual training signal of translation objectives foster better downstream performances in monolingual tasks?",
142
+ "(ii) Is the optimal choice of architecture independent of the training objective?"
143
+ ],
144
+ "bbox": [
145
+ 519,
146
+ 700,
147
+ 884,
148
+ 793
149
+ ],
150
+ "page_idx": 0
151
+ },
152
+ {
153
+ "type": "text",
154
+ "text": "There are a prima facie reasons to favor either answers to both of these questions. For instance, the success of multilingual pretrained language models (LM) on cross-lingual tasks has been underscored repeatedly (Wu and Dredze, 2019, e.g.), yet explicit alignments such as linear mapping (Wang et al., 2019) and L2 alignment (Cao et al., 2020)",
155
+ "bbox": [
156
+ 507,
157
+ 806,
158
+ 884,
159
+ 917
160
+ ],
161
+ "page_idx": 0
162
+ },
163
+ {
164
+ "type": "page_footnote",
165
+ "text": "* Equal contribution and corresponding authors.",
166
+ "bbox": [
167
+ 136,
168
+ 904,
169
+ 426,
170
+ 917
171
+ ],
172
+ "page_idx": 0
173
+ },
174
+ {
175
+ "type": "page_number",
176
+ "text": "15882",
177
+ "bbox": [
178
+ 475,
179
+ 927,
180
+ 524,
181
+ 940
182
+ ],
183
+ "page_idx": 0
184
+ },
185
+ {
186
+ "type": "footer",
187
+ "text": "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15882-15894 November 12-16, 2024 ©2024 Association for Computational Linguistics",
188
+ "bbox": [
189
+ 152,
190
+ 945,
191
+ 843,
192
+ 972
193
+ ],
194
+ "page_idx": 0
195
+ },
196
+ {
197
+ "type": "text",
198
+ "text": "between source and target languages do not necessarily improve the quality of cross-lingual representations (Wu and Dredze, 2020).",
199
+ "bbox": [
200
+ 112,
201
+ 84,
202
+ 490,
203
+ 131
204
+ ],
205
+ "page_idx": 1
206
+ },
207
+ {
208
+ "type": "text",
209
+ "text": "Our experiments provide tentative evidence that insofar as a BART denoising autoencoder architecture is concerned, models pretrained with a translation objective consistently outperform those trained with a denoising objective. However, for singlestack transformers, we observe causal language models to perform well in probing and masked language models to generally outperform translation and causal objectives when fine-tuned on downstream tasks. This leads us to conjecture that the optimal pretraining objective depends on the architecture. Furthermore, the best downstream results we observe appear to stem from a machine-translation system, highlighting that MT encoder-decoder systems might constitute an understudied but potentially very impactful type of pretrained model.",
210
+ "bbox": [
211
+ 115,
212
+ 134,
213
+ 489,
214
+ 406
215
+ ],
216
+ "page_idx": 1
217
+ },
218
+ {
219
+ "type": "text",
220
+ "text": "2 Methods and Settings",
221
+ "text_level": 1,
222
+ "bbox": [
223
+ 112,
224
+ 423,
225
+ 334,
226
+ 439
227
+ ],
228
+ "page_idx": 1
229
+ },
230
+ {
231
+ "type": "text",
232
+ "text": "We start our inquiry by adopting a principled stance: We train strictly comparable models with MT and LM objectives before contrasting their performances on monolingual tasks.",
233
+ "bbox": [
234
+ 112,
235
+ 451,
236
+ 489,
237
+ 516
238
+ ],
239
+ "page_idx": 1
240
+ },
241
+ {
242
+ "type": "text",
243
+ "text": "Models and objectives. To allow a systematic evaluation, we train models with various neural network architectures and learning objectives. All models are based on the transformer architecture (Vaswani et al., 2017) and implemented in fairseq (Ott et al., 2019). We consider both double-stacks (encoder-decoder) and single-stacks (encoder-only or decoder-only) models.",
244
+ "bbox": [
245
+ 112,
246
+ 530,
247
+ 489,
248
+ 658
249
+ ],
250
+ "page_idx": 1
251
+ },
252
+ {
253
+ "type": "text",
254
+ "text": "The two double-stack models are variants of the BART architecture of (Lewis et al., 2020); they are trained either on a straightforward machine translation (MT) objective, using language tokens to distinguish the source, or on the original denoising auto-encoder objective of Lewis et al.. We refer to these two models as 2-LM and 2-MT respectively.",
255
+ "bbox": [
256
+ 112,
257
+ 659,
258
+ 489,
259
+ 772
260
+ ],
261
+ "page_idx": 1
262
+ },
263
+ {
264
+ "type": "text",
265
+ "text": "We also consider three single-stack models: (i) an encoder-only model trained on the masked language modeling objective (MLM) of Devlin et al. (2019); (ii) an autoregressive causal language model (CLM), similar to Radford et al. (2019); and (iii) an autoregressive model trained to generate a sentence, followed by its translation in the language specified by a given control token, known as a translation language model (TLM) as proposed",
266
+ "bbox": [
267
+ 112,
268
+ 774,
269
+ 489,
270
+ 919
271
+ ],
272
+ "page_idx": 1
273
+ },
274
+ {
275
+ "type": "text",
276
+ "text": "by Conneau and Lample (2019).<sup>1</sup> We provide an example datapoint for each pretraining objective in Table 3, Appendix A.",
277
+ "bbox": [
278
+ 507,
279
+ 83,
280
+ 884,
281
+ 133
282
+ ],
283
+ "page_idx": 1
284
+ },
285
+ {
286
+ "type": "text",
287
+ "text": "Pretraining conditions. Our core focus is on guaranteeing comparable conditions across the different pretraining objectives we consider. This entails that our datasets need to be doubly structured: both in documents for CLM pretraining; and as aligned bittexts for MT pretraining. Two datasets broadly match these criteria: the UNPC (Ziemski et al., 2016) and OpenSubtitles (OpSub; Tiedemann, 2012) corpora. The choice also narrows down the languages considered in this study: we take the set of languages present in both resources, namely the six languages in UNPC: Arabic (AR), Chinese (ZH), English (EN), French (FR), Russian (RU), and Spanish (ES).",
288
+ "bbox": [
289
+ 507,
290
+ 142,
291
+ 884,
292
+ 366
293
+ ],
294
+ "page_idx": 1
295
+ },
296
+ {
297
+ "type": "text",
298
+ "text": "To guarantee that models are trained on the same data, whenever a document is available in multiple languages, we greedily assign it to the least represented language pair thus far and discard all other possible language pairs where it could have contributed; we then discard documents which cannot be used as bittexts. This ensures that all documents are used exactly once for both document-level and bitext-level pretraining objectives. Dataset statistics are shown in Table 4, Appendix B.",
299
+ "bbox": [
300
+ 507,
301
+ 367,
302
+ 884,
303
+ 527
304
+ ],
305
+ "page_idx": 1
306
+ },
307
+ {
308
+ "type": "text",
309
+ "text": "To ensure a fair comparison, we control key variables, including tokenization (100k BPE pieces; Sennrich et al., 2016), number of transformer layers (12), hidden dimensions (512), attention heads (8), and feedforward layer dimensions (2048). We perform 600k steps of updates, using the largest batch size that fits into the GPU memory, deploy distributed training to make a global batch size of 4096, and apply the Adam optimizer (Kingma and Ba, 2017). Owing to the computational requirements, we only train one seed for each of the five types of models considered.",
310
+ "bbox": [
311
+ 507,
312
+ 529,
313
+ 882,
314
+ 721
315
+ ],
316
+ "page_idx": 1
317
+ },
318
+ {
319
+ "type": "text",
320
+ "text": "Downstream evaluation. The evaluations encompassed both sequence-level and token-level classification tasks using datasets tailored for sentiment analysis (SA), named entity recognition (NER), part-of-speech (POS) tagging, and natural language inference (NLI).",
321
+ "bbox": [
322
+ 507,
323
+ 730,
324
+ 882,
325
+ 826
326
+ ],
327
+ "page_idx": 1
328
+ },
329
+ {
330
+ "type": "text",
331
+ "text": "For SA, we utilized the Amazon review dataset (Hou et al., 2024) in English, Spanish,",
332
+ "bbox": [
333
+ 507,
334
+ 827,
335
+ 882,
336
+ 859
337
+ ],
338
+ "page_idx": 1
339
+ },
340
+ {
341
+ "type": "page_footnote",
342
+ "text": "<sup>1</sup>In this work, we only focus on the causal variant of TLM proposed by Conneau and Lample.",
343
+ "bbox": [
344
+ 507,
345
+ 868,
346
+ 880,
347
+ 892
348
+ ],
349
+ "page_idx": 1
350
+ },
351
+ {
352
+ "type": "page_footnote",
353
+ "text": "2Improvements in cross-entropy over the validation set were always marginal after this stage.",
354
+ "bbox": [
355
+ 507,
356
+ 892,
357
+ 880,
358
+ 917
359
+ ],
360
+ "page_idx": 1
361
+ },
362
+ {
363
+ "type": "page_number",
364
+ "text": "15883",
365
+ "bbox": [
366
+ 477,
367
+ 927,
368
+ 524,
369
+ 940
370
+ ],
371
+ "page_idx": 1
372
+ },
373
+ {
374
+ "type": "text",
375
+ "text": "French, and Chinese. RuReviews (Smetanin and Komarov, 2019) for Russian, and ar_resreviews (ElSahar and El-Beltagy, 2015) for Arabic. While the datasets for most languages were pre-split, ar_resreviews required manual division into training, validation, and testing sets, using an 8:1:1 ratio.",
376
+ "bbox": [
377
+ 112,
378
+ 84,
379
+ 487,
380
+ 197
381
+ ],
382
+ "page_idx": 2
383
+ },
384
+ {
385
+ "type": "text",
386
+ "text": "For NER, we model the problem as an entity span extraction using a BIO scheme. In practice, we classify tokens into three basic categories: Beginning of an entity (B), Inside an entity (I), or Outside any entity (O). We use the MultiCoNER v2 dataset (Fetahu et al., 2023) for English, Spanish, French, and Chinese, MultiCoNER v1 (Malmasi et al., 2022) for Russian and the AQMAR Wikipedia NER corpus (Mohit et al., 2012a) for Arabic. Simplifying the NER task to these fundamental categories allows us to focus more on assessing the basic entity recognition capabilities of the models without the additional complexity of differentiating numerous entity types, which can vary significantly between languages and datasets.",
387
+ "bbox": [
388
+ 115,
389
+ 199,
390
+ 489,
391
+ 441
392
+ ],
393
+ "page_idx": 2
394
+ },
395
+ {
396
+ "type": "text",
397
+ "text": "For POS tagging, we utilized the Universal Dependencies (UD) 2.0 datasets (Nivre et al., 2020), selecting specific corpora tailored to each language to ensure both linguistic diversity and relevance. We select multiple UD treebanks per language, such that each language dataset comprises approximately 160,000 tokens, which are then split into training, validation, and testing segments with an 8:1:1 ratio.",
398
+ "bbox": [
399
+ 112,
400
+ 443,
401
+ 489,
402
+ 586
403
+ ],
404
+ "page_idx": 2
405
+ },
406
+ {
407
+ "type": "text",
408
+ "text": "For NLI, we employed the XNLI dataset (Connéau et al., 2018) for the six languages. The XNLI dataset consists of sentence pairs translated from the MultiNLI dataset (Williams et al., 2018) into 15 languages, providing consistent annotations across languages. The task focuses on classifying the relationship between pairs of sentences into one of three categories: Entailment, Contradiction, or Neutral. Unlike the original cross-lingual design of XNLI, we conducted monolingual experiments for each language to evaluate the performance of our models individually in each linguistic context.",
409
+ "bbox": [
410
+ 112,
411
+ 590,
412
+ 489,
413
+ 783
414
+ ],
415
+ "page_idx": 2
416
+ },
417
+ {
418
+ "type": "text",
419
+ "text": "Supplementary details regarding data preprocessing for downstream experiments are available in Appendix B.",
420
+ "bbox": [
421
+ 112,
422
+ 785,
423
+ 489,
424
+ 833
425
+ ],
426
+ "page_idx": 2
427
+ },
428
+ {
429
+ "type": "text",
430
+ "text": "We evaluate the performances of the encoder output representations for the 2-MT and 2-LM models and of the last hidden representation before the vocabulary projection for the single-stack models.",
431
+ "bbox": [
432
+ 112,
433
+ 835,
434
+ 489,
435
+ 900
436
+ ],
437
+ "page_idx": 2
438
+ },
439
+ {
440
+ "type": "text",
441
+ "text": "The evaluation of the models involves two dis",
442
+ "bbox": [
443
+ 131,
444
+ 903,
445
+ 489,
446
+ 917
447
+ ],
448
+ "page_idx": 2
449
+ },
450
+ {
451
+ "type": "text",
452
+ "text": "tinct experimental approaches to test the performance: probing and fine-tuning. In the probing experiments, only the parameters of the classification heads are adjusted. This method primarily tests the raw capability of the pre-trained models' embeddings to adapt to specific tasks with minimal parameter changes, preserving the underlying pre-trained network structure. Conversely, in the fine-tuning experiments, all parameters of the models are adjusted. This approach allows the entire model to adapt to the specifics of the task, potentially leading to higher performance at the cost of significantly altering the pre-trained weights.",
453
+ "bbox": [
454
+ 505,
455
+ 84,
456
+ 884,
457
+ 294
458
+ ],
459
+ "page_idx": 2
460
+ },
461
+ {
462
+ "type": "text",
463
+ "text": "For both experimental approaches, each model is trained for 10 epochs to ensure sufficient learning without overfitting. We optimize parameters with AdamW (Loshchilov and Hutter, 2017), with a constant learning rate of 0.0001 across all tasks and models. This setup was chosen to standardize the training process, providing a fair basis for comparing the performance outcomes across different models and tasks. We reproduce probing and fine-tuning for 5 seeds to ensure stability.",
464
+ "bbox": [
465
+ 507,
466
+ 294,
467
+ 884,
468
+ 455
469
+ ],
470
+ "page_idx": 2
471
+ },
472
+ {
473
+ "type": "text",
474
+ "text": "3 Results",
475
+ "text_level": 1,
476
+ "bbox": [
477
+ 509,
478
+ 469,
479
+ 608,
480
+ 483
481
+ ],
482
+ "page_idx": 2
483
+ },
484
+ {
485
+ "type": "text",
486
+ "text": "Double-stack models. We first compare the performance of 2-LM and 2-MT across several key language processing tasks including SA, NER, POS tagging, and NLI. Results are shown in Tables 1a and 1b. The pretraining objectives play a significant role in shaping the models' effectiveness. Specifically, 2-MT, which is pretrained with a machine translation objective, consistently outperforms 2-LM, which utilizes a denoising objective. This pattern is consistent across all languages tested after fine-tuning as well as probing.",
487
+ "bbox": [
488
+ 507,
489
+ 497,
490
+ 884,
491
+ 674
492
+ ],
493
+ "page_idx": 2
494
+ },
495
+ {
496
+ "type": "text",
497
+ "text": "Single-stack models. Turning to the single-stack models (CLM, MLM, TLM), we find a somewhat more complex picture. In a probing context (cf. Table 2a), we find the CLM to be almost always the most effective, except for NLI in five languages and NER in Arabic, where it performs slightly less favorably compared to the MLM. As for fine-tuning (Table 2b), while the MLM generally ranks first on all POS, NER, and NLI datasets, the TLM is usually effective for SA. $^3$",
498
+ "bbox": [
499
+ 507,
500
+ 684,
501
+ 882,
502
+ 845
503
+ ],
504
+ "page_idx": 2
505
+ },
506
+ {
507
+ "type": "page_footnote",
508
+ "text": "However, remark that unlike with the BART-based models, SA results are not stable when we shift metrics from accuracy to F1 (see Tables 6 and 7 in Appendix C). The difference in F1 between the top two models is often $\\leq 0.01$ , making it difficult to ascertain that one model strictly dominates.",
509
+ "bbox": [
510
+ 507,
511
+ 857,
512
+ 885,
513
+ 917
514
+ ],
515
+ "page_idx": 2
516
+ },
517
+ {
518
+ "type": "page_number",
519
+ "text": "15884",
520
+ "bbox": [
521
+ 477,
522
+ 927,
523
+ 524,
524
+ 940
525
+ ],
526
+ "page_idx": 2
527
+ },
528
+ {
529
+ "type": "table",
530
+ "img_path": "images/12375a84b635770432e584f9edb78ffba7ad2b15d48ce35583d646c8f6509a24.jpg",
531
+ "table_caption": [],
532
+ "table_footnote": [],
533
+ "table_body": "<table><tr><td rowspan=\"2\" colspan=\"2\">Setup</td><td colspan=\"6\">Languages</td></tr><tr><td>EN</td><td>ES</td><td>FR</td><td>ZH</td><td>RU</td><td>AR</td></tr><tr><td rowspan=\"2\">SA</td><td>2-LM</td><td>42.86±0.86</td><td>42.80±0.69</td><td>43.00±0.60</td><td>40.41±1.02</td><td>65.83±0.70</td><td>70.88±1.62</td></tr><tr><td>2-MT</td><td>46.71±0.88</td><td>46.64±0.55</td><td>46.10±0.43</td><td>43.74±0.65</td><td>68.79±0.42</td><td>73.77±0.97</td></tr><tr><td rowspan=\"2\">NER</td><td>2-LM</td><td>82.69±0.09</td><td>84.74±0.07</td><td>82.80±0.06</td><td>78.88±0.25</td><td>77.93±0.15</td><td>85.28±0.22</td></tr><tr><td>2-MT</td><td>89.47±0.06</td><td>90.54±0.04</td><td>89.41±0.10</td><td>88.78±0.09</td><td>83.39±0.22</td><td>89.70±0.18</td></tr><tr><td rowspan=\"2\">POS</td><td>2-LM</td><td>78.85±0.29</td><td>78.12±0.25</td><td>81.57±0.32</td><td>66.09±0.25</td><td>77.93±0.12</td><td>47.68±0.10</td></tr><tr><td>2-MT</td><td>92.22±0.14</td><td>90.59±0.20</td><td>95.39±0.10</td><td>75.87±0.17</td><td>93.20±0.08</td><td>61.84±0.24</td></tr><tr><td rowspan=\"2\">NLI</td><td>2-LM</td><td>48.56±0.01</td><td>49.31±0.01</td><td>48.33±0.01</td><td>38.81±0.01</td><td>48.34±0.01</td><td>45.11±0.01</td></tr><tr><td>2-MT</td><td>60.50±0.01</td><td>59.56±0.01</td><td>59.00±0.01</td><td>59.01±0.01</td><td>59.83±0.01</td><td>59.58±0.01</td></tr></table>",
534
+ "bbox": [
535
+ 127,
536
+ 93,
537
+ 497,
538
+ 212
539
+ ],
540
+ "page_idx": 3
541
+ },
542
+ {
543
+ "type": "table",
544
+ "img_path": "images/9eafb7c501a6b8cccaf17352b5d6a6a891e4ebaa197e631debf7c4227fe83390.jpg",
545
+ "table_caption": [
546
+ "(a) Probing"
547
+ ],
548
+ "table_footnote": [],
549
+ "table_body": "<table><tr><td rowspan=\"2\" colspan=\"2\">Setup</td><td colspan=\"6\">Languages</td></tr><tr><td>EN</td><td>ES</td><td>FR</td><td>ZH</td><td>RU</td><td>AR</td></tr><tr><td rowspan=\"2\">SA</td><td>2-LM</td><td>52.26±0.55</td><td>52.89±0.69</td><td>52.99±0.59</td><td>48.64±0.36</td><td>73.89±0.43</td><td>79.74 ±1.36</td></tr><tr><td>2-MT</td><td>54.76±0.58</td><td>55.56±0.49</td><td>54.75±0.42</td><td>50.55±0.68</td><td>74.77±0.50</td><td>81.49 ±1.49</td></tr><tr><td rowspan=\"2\">NER</td><td>2-LM</td><td>91.13±0.12</td><td>91.82±0.21</td><td>91.58±0.10</td><td>92.30±0.10</td><td>85.34±0.39</td><td>89.05 ±0.13</td></tr><tr><td>2-MT</td><td>93.46±0.09</td><td>94.22±0.09</td><td>93.84±0.04</td><td>93.75±0.32</td><td>89.07±0.11</td><td>93.26 ±0.15</td></tr><tr><td rowspan=\"2\">POS</td><td>2-LM</td><td>92.42±0.28</td><td>90.41±0.16</td><td>95.21±0.13</td><td>82.30±0.48</td><td>95.36±0.20</td><td>69.57 ±0.24</td></tr><tr><td>2-MT</td><td>95.98±0.08</td><td>94.29±0.05</td><td>98.05±0.17</td><td>90.18±0.15</td><td>97.00±0.07</td><td>74.47 ±0.08</td></tr><tr><td rowspan=\"2\">NLI</td><td>2-LM</td><td>57.76±0.01</td><td>57.87±0.01</td><td>56.77±0.01</td><td>48.05±0.01</td><td>56.43±0.01</td><td>0.5377±0.01</td></tr><tr><td>2-MT</td><td>61.96±0.01</td><td>61.71±0.01</td><td>60.09±0.01</td><td>53.72±0.01</td><td>59.00±0.01</td><td>0.5693±0.01</td></tr></table>",
550
+ "bbox": [
551
+ 502,
552
+ 93,
553
+ 870,
554
+ 212
555
+ ],
556
+ "page_idx": 3
557
+ },
558
+ {
559
+ "type": "table",
560
+ "img_path": "images/feae7f1dd8b8e32bac80681164aaf063edadd56a0e1d27ab475d043c87e65fec.jpg",
561
+ "table_caption": [
562
+ "(b) Fine-tuning",
563
+ "Table 1: Accuracy $(\\times 100)$ of double-stack models (± s.d. over 5 runs)."
564
+ ],
565
+ "table_footnote": [],
566
+ "table_body": "<table><tr><td rowspan=\"2\" colspan=\"2\">Setup</td><td colspan=\"6\">Languages</td></tr><tr><td>EN</td><td>ES</td><td>FR</td><td>ZH</td><td>RU</td><td>AR</td></tr><tr><td rowspan=\"3\">SA</td><td>CLM</td><td>35.14±0.92</td><td>35.66±1.10</td><td>34.14±1.63</td><td>33.62±0.83</td><td>57.57±1.11</td><td>67.71±2.24</td></tr><tr><td>MLM</td><td>34.26±1.34</td><td>34.82±1.58</td><td>33.90±1.12</td><td>32.52±1.65</td><td>54.55±1.86</td><td>65.94±3.30</td></tr><tr><td>TLM</td><td>29.68±2.22</td><td>32.20±3.07</td><td>32.26±2.34</td><td>29.88±4.17</td><td>56.45±1.81</td><td>64.45±1.81</td></tr><tr><td rowspan=\"3\">NER</td><td>CLM</td><td>80.27±0.12</td><td>82.59±0.06</td><td>80.38±0.12</td><td>77.92±0.28</td><td>76.39±0.03</td><td>84.17±0.08</td></tr><tr><td>MLM</td><td>78.77±0.02</td><td>81.61±0.00</td><td>79.11±0.01</td><td>70.67±0.10</td><td>76.34±0.01</td><td>84.29±0.00</td></tr><tr><td>TLM</td><td>79.10±0.06</td><td>81.94±0.13</td><td>79.56±0.14</td><td>77.26±0.24</td><td>76.39±0.02</td><td>84.26±0.02</td></tr><tr><td rowspan=\"3\">POS</td><td>CLM</td><td>69.06±0.38</td><td>70.32±0.50</td><td>76.67±0.46</td><td>51.40±0.47</td><td>59.64±0.62</td><td>43.49±0.40</td></tr><tr><td>MLM</td><td>37.92±0.61</td><td>44.26±0.11</td><td>46.89±0.32</td><td>31.16±0.21</td><td>34.62±0.16</td><td>34.71±0.94</td></tr><tr><td>TLM</td><td>62.96±1.02</td><td>62.08±1.99</td><td>63.89±1.06</td><td>50.46±0.53</td><td>54.27±0.87</td><td>40.94±1.16</td></tr><tr><td rowspan=\"3\">NLI</td><td>CLM</td><td>42.32±0.02</td><td>42.99±0.01</td><td>43.43±0.02</td><td>40.55±0.02</td><td>40.06±0.02</td><td>41.99±0.01</td></tr><tr><td>MLM</td><td>45.64±0.02</td><td>44.49±0.01</td><td>43.11±0.02</td><td>42.80±0.01</td><td>43.16±0.01</td><td>43.55±0.01</td></tr><tr><td>TLM</td><td>38.36±0.02</td><td>41.95±0.02</td><td>41.89±0.01</td><td>38.93±0.04</td><td>41.20±0.02</td><td>39.50±0.02</td></tr></table>",
567
+ "bbox": [
568
+ 127,
569
+ 278,
570
+ 492,
571
+ 432
572
+ ],
573
+ "page_idx": 3
574
+ },
575
+ {
576
+ "type": "table",
577
+ "img_path": "images/c7de4a87fefcef962dc88e5f933db961ea9aae55ad6c829e1f630b8f43e4a928.jpg",
578
+ "table_caption": [
579
+ "(a) Probing"
580
+ ],
581
+ "table_footnote": [],
582
+ "table_body": "<table><tr><td rowspan=\"2\" colspan=\"2\">Setup</td><td colspan=\"7\">Languages</td></tr><tr><td>EN</td><td>ES</td><td>FR</td><td>ZH</td><td>RU</td><td>AR</td><td></td></tr><tr><td rowspan=\"3\">SA</td><td>CLM</td><td>55.23±0.72</td><td>47.81±15.55</td><td>54.84± 0.62</td><td>51.18±0.94</td><td>75.07±0.21</td><td>66.18±21.74</td><td></td></tr><tr><td>MLM</td><td>55.22±0.92</td><td>55.67± 1.77</td><td>54.08± 2.43</td><td>51.00±1.07</td><td>74.53±1.36</td><td>75.00± 3.48</td><td></td></tr><tr><td>TLM</td><td>55.14±0.92</td><td>55.84± 0.59</td><td>55.22± 0.98</td><td>51.46±0.53</td><td>75.31±0.57</td><td>72.75± 2.25</td><td></td></tr><tr><td rowspan=\"3\">NER</td><td>CLM</td><td>89.91±0.33</td><td>91.42± 0.15</td><td>90.65± 0.17</td><td>89.97±0.14</td><td>83.20±0.31</td><td>87.50± 2.22</td><td></td></tr><tr><td>MLM</td><td>93.31±0.57</td><td>93.93± 0.60</td><td>93.67± 0.30</td><td>92.99±0.99</td><td>87.49±0.78</td><td>85.78± 3.30</td><td></td></tr><tr><td>TLM</td><td>89.88±0.06</td><td>91.45± 0.25</td><td>90.49± 0.23</td><td>90.10±0.14</td><td>83.76±0.63</td><td>84.29± 0.00</td><td></td></tr><tr><td rowspan=\"3\">POS</td><td>CLM</td><td>91.72±0.14</td><td>90.51± 0.13</td><td>95.75± 0.10</td><td>78.61±0.31</td><td>85.50±0.15</td><td>57.43± 1.63</td><td></td></tr><tr><td>MLM</td><td>96.00±0.15</td><td>94.45± 0.13</td><td>97.94± 0.20</td><td>89.96±0.71</td><td>96.69±0.13</td><td>74.35± 0.53</td><td></td></tr><tr><td>TLM</td><td>91.68±0.19</td><td>90.38± 0.20</td><td>86.99±19.40</td><td>78.50±0.52</td><td>85.71±0.18</td><td>59.11± 0.50</td><td></td></tr><tr><td rowspan=\"3\">NLI</td><td>CLM</td><td>48.84±0.14</td><td>56.46± 0.03</td><td>55.45± 0.03</td><td>49.70±0.06</td><td>55.23±0.02</td><td>49.02± 0.07</td><td></td></tr><tr><td>MLM</td><td>59.41±0.01</td><td>57.54± 0.04</td><td>55.04± 0.06</td><td>47.96±0.03</td><td>57.80±0.01</td><td>53.60± 0.01</td><td></td></tr><tr><td>TLM</td><td>49.76±0.10</td><td>52.12± 0.11</td><td>54.20± 0.10</td><td>49.03±0.04</td><td>53.60±0.04</td><td>44.39± 0.10</td><td></td></tr></table>",
583
+ "bbox": [
584
+ 494,
585
+ 278,
586
+ 870,
587
+ 434
588
+ ],
589
+ "page_idx": 3
590
+ },
591
+ {
592
+ "type": "text",
593
+ "text": "(b) Fine-tuning",
594
+ "bbox": [
595
+ 633,
596
+ 437,
597
+ 727,
598
+ 450
599
+ ],
600
+ "page_idx": 3
601
+ },
602
+ {
603
+ "type": "text",
604
+ "text": "Table 2: Accuracy $\\left( {\\times {100}}\\right)$ of single-stack models (± s.d. over 5 runs).",
605
+ "bbox": [
606
+ 255,
607
+ 460,
608
+ 737,
609
+ 474
610
+ ],
611
+ "page_idx": 3
612
+ },
613
+ {
614
+ "type": "text",
615
+ "text": "Discussion. A first global observation that we can make for these results is that single-stack and double-stack models appear to behave differently. While the MT objective yields the highest performances for BART-type models, the downstream performances of the TLM do not really stand out compared to the CLM in probing and the MLM in fine-tuning scenarios. It is important to note that the performances stem at least in part from the architecture itself: 2-MT and 2-LM both consistently outperform all single-stack models in probing. However, it is crucial to acknowledge the limitations of our study, as we only conducted one pretraining round for all the objectives. Hence, this evidence should be interpreted as tentative at best.",
616
+ "bbox": [
617
+ 112,
618
+ 500,
619
+ 489,
620
+ 741
621
+ ],
622
+ "page_idx": 3
623
+ },
624
+ {
625
+ "type": "text",
626
+ "text": "Fine-tuning also tends to minimize the difference between single-stack and double-stack models—which suggests that the higher quality of double-stack representations could be an artifact of training limitations. Moreover, the relative ranks of the three single-stack models fluctuate much more than what we see for the double-stack models, owing to no little extent to the oftentimes momentous variation across seeds for single-stack models. We therefore conjecture that while a translation objective",
627
+ "bbox": [
628
+ 112,
629
+ 758,
630
+ 490,
631
+ 917
632
+ ],
633
+ "page_idx": 3
634
+ },
635
+ {
636
+ "type": "text",
637
+ "text": "can yield a clear training signal towards semantically informed representations, this comes with two caveats: first, the signal can only be leveraged with dedicated separate modeling of source and target (viz. double-stack models); second, this advantage is much less consequential when fine-tuning.",
638
+ "bbox": [
639
+ 507,
640
+ 500,
641
+ 884,
642
+ 596
643
+ ],
644
+ "page_idx": 3
645
+ },
646
+ {
647
+ "type": "text",
648
+ "text": "4 Related works",
649
+ "text_level": 1,
650
+ "bbox": [
651
+ 507,
652
+ 614,
653
+ 670,
654
+ 629
655
+ ],
656
+ "page_idx": 3
657
+ },
658
+ {
659
+ "type": "text",
660
+ "text": "Multilingual foundation models have flourished in recent years (a.o., Conneau and Lample, 2019; Liu et al., 2020; Xue et al., 2021; Kale et al., 2021; Fang et al., 2021; Chi et al., 2021; Alves et al., 2024; Ustun et al., 2024), and with them so have studies of their representations (Conneau et al., 2020; Siddhant et al., 2020; Choudhury and Deshpande, 2021; Fierro and Søgaard, 2022; Hammerl et al., 2023 a.o.). All of these works, however, fail to control for some of the most crucial factors, such as ensuring that all models are trained on comparable amounts of data.",
661
+ "bbox": [
662
+ 505,
663
+ 643,
664
+ 884,
665
+ 835
666
+ ],
667
+ "page_idx": 3
668
+ },
669
+ {
670
+ "type": "text",
671
+ "text": "This work is specifically related to Conneau and Lample (2019), which also compares MLM, CLM, and TLM but does not normalize the training data. Another point of comparison is Ji et al. (2024), which studies the impact of MT continued pretrain",
672
+ "bbox": [
673
+ 507,
674
+ 839,
675
+ 885,
676
+ 917
677
+ ],
678
+ "page_idx": 3
679
+ },
680
+ {
681
+ "type": "page_number",
682
+ "text": "15885",
683
+ "bbox": [
684
+ 477,
685
+ 927,
686
+ 524,
687
+ 940
688
+ ],
689
+ "page_idx": 3
690
+ },
691
+ {
692
+ "type": "text",
693
+ "text": "ing in BART on cross-lingual downstream tasks. Monolingual evaluation of multilingual systems has also been broached a.o. by Rust et al. (2021).",
694
+ "bbox": [
695
+ 112,
696
+ 84,
697
+ 487,
698
+ 131
699
+ ],
700
+ "page_idx": 4
701
+ },
702
+ {
703
+ "type": "text",
704
+ "text": "5 Conclusion",
705
+ "text_level": 1,
706
+ "bbox": [
707
+ 114,
708
+ 145,
709
+ 247,
710
+ 160
711
+ ],
712
+ "page_idx": 4
713
+ },
714
+ {
715
+ "type": "text",
716
+ "text": "This paper conducts an empirical study of how pretraining conditions of multilingual models impact downstream performances in probing and finetuning scenarios. Despite the inherent limitations that stem from our stringent data requirements, our experiments offer a novel perspective that highlights directions for future inquiry into how multilingual foundation models ought to be pretrained. We observe that double-stack BART-based models fare much better than single-stack models in probing scenarios, but the difference is overall less clear when it comes to fine-tuning. We also find some tentative evidence that translation objectives can be highly effective for model pretraining in precise circumstances: Namely, the most effective model on downstream tasks among those we experimented with is an MT-pretrained BART-like model, which outperforms both a more traditional denoising objective for BART as well as decoder-only CLM and encoder-only MLM models. This would suggest that translation can serve as a powerful pretraining objective, although it is currently under-explored. $^4$",
717
+ "bbox": [
718
+ 115,
719
+ 172,
720
+ 489,
721
+ 525
722
+ ],
723
+ "page_idx": 4
724
+ },
725
+ {
726
+ "type": "text",
727
+ "text": "Another crucial aspect of our study is that we present strictly comparable models, trained on comparable data, with comparable parameter counts and unified implementations. While this entails some limitations, especially with regard to the scale of models and data used, we nonetheless believe that a strict comparison can help discriminate between the various factors at play in other works. Here, we find clear evidence that CLM pretraining objectives, such as those used in GPT, outperform MLM-based models, such as BERT, in probing scenarios; we are also able to isolate and highlight how the optimal choice of pretraining objective is contingent on the architecture being employed.",
728
+ "bbox": [
729
+ 115,
730
+ 527,
731
+ 489,
732
+ 750
733
+ ],
734
+ "page_idx": 4
735
+ },
736
+ {
737
+ "type": "text",
738
+ "text": "For future work, we recommend exploring multitask learning during pretraining by combining objectives like translation, denoising, and language modeling; in such cases, models could harness the strengths of each task to become more robust and versatile. Additionally, investigating training",
739
+ "bbox": [
740
+ 112,
741
+ 751,
742
+ 489,
743
+ 848
744
+ ],
745
+ "page_idx": 4
746
+ },
747
+ {
748
+ "type": "text",
749
+ "text": "free evaluation methods can offer insights into a model's inherent capabilities without the variability introduced by fine-tuning.",
750
+ "bbox": [
751
+ 505,
752
+ 84,
753
+ 882,
754
+ 131
755
+ ],
756
+ "page_idx": 4
757
+ },
758
+ {
759
+ "type": "text",
760
+ "text": "Acknowledgments",
761
+ "text_level": 1,
762
+ "bbox": [
763
+ 509,
764
+ 145,
765
+ 672,
766
+ 162
767
+ ],
768
+ "page_idx": 4
769
+ },
770
+ {
771
+ "type": "text",
772
+ "text": "We thank Alessandro Raganato and our colleagues at the Helsinki-NLP group for useful discussions throughout this project, as well as the three anonymous reviewers for their comments.",
773
+ "bbox": [
774
+ 507,
775
+ 171,
776
+ 884,
777
+ 234
778
+ ],
779
+ "page_idx": 4
780
+ },
781
+ {
782
+ "type": "text",
783
+ "text": "This project has received funding from the European Union's Horizon Europe research and innovation programme under Grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546], and partially funded by the French National Research Agency [grant ANR-23-IAS1-0001]. The contents of this publication are the sole responsibility of its authors and do not necessarily reflect the opinion of the European Union.",
784
+ "bbox": [
785
+ 507,
786
+ 236,
787
+ 885,
788
+ 412
789
+ ],
790
+ "page_idx": 4
791
+ },
792
+ {
793
+ "type": "text",
794
+ "text": "The authors wish to thank CSC-IT Center for Science, Finland, for the generous computational resources on the Puhti supercomputer and LUMI supercomputer through the LUMI extreme scale access (MOOMIN and LumiNMT). Some of the experiments were performed using the Jean Zay and Adastra clusters from GENCI-IDRIS [grant 2022 A0131013801].",
795
+ "bbox": [
796
+ 507,
797
+ 413,
798
+ 882,
799
+ 541
800
+ ],
801
+ "page_idx": 4
802
+ },
803
+ {
804
+ "type": "text",
805
+ "text": "Limitations",
806
+ "text_level": 1,
807
+ "bbox": [
808
+ 509,
809
+ 554,
810
+ 615,
811
+ 569
812
+ ],
813
+ "page_idx": 4
814
+ },
815
+ {
816
+ "type": "text",
817
+ "text": "This study employs models that are not large in terms of parameters in the era of large language models. Such a constraint potentially hinders the generalizability of our results to much larger architectures that are capable of handling a broader array of linguistic nuances. Furthermore, our study focuses on a small selected group of languages and specific NLP tasks. This focus might limit the applicability of our findings to other linguistic contexts or more complex real-world applications where diverse language phenomena or different task demands play a crucial role.",
818
+ "bbox": [
819
+ 507,
820
+ 581,
821
+ 882,
822
+ 772
823
+ ],
824
+ "page_idx": 4
825
+ },
826
+ {
827
+ "type": "text",
828
+ "text": "Another limitation is our reliance on specific corpora. The datasets utilized, while valuable, represent a potential source of selection bias. They may not fully encompass the vast diversity of global language use, thus skewing the model training and evaluation. Such a bias could affect the robustness and effectiveness of the pretrained models when applied to languages that are not well-represented in the training data.",
829
+ "bbox": [
830
+ 507,
831
+ 774,
832
+ 884,
833
+ 917
834
+ ],
835
+ "page_idx": 4
836
+ },
837
+ {
838
+ "type": "page_footnote",
839
+ "text": "4There are reasonable objections against using MT models as pretrained multilingual foundation models—namely, unlike auto-regressive causal language models, their generation capabilities are strictly tied to translation, thereby requiring some degree of multilingualism from end-users.",
840
+ "bbox": [
841
+ 112,
842
+ 856,
843
+ 489,
844
+ 917
845
+ ],
846
+ "page_idx": 4
847
+ },
848
+ {
849
+ "type": "page_number",
850
+ "text": "15886",
851
+ "bbox": [
852
+ 477,
853
+ 927,
854
+ 524,
855
+ 940
856
+ ],
857
+ "page_idx": 4
858
+ },
859
+ {
860
+ "type": "text",
861
+ "text": "References",
862
+ "text_level": 1,
863
+ "bbox": [
864
+ 115,
865
+ 84,
866
+ 213,
867
+ 98
868
+ ],
869
+ "page_idx": 5
870
+ },
871
+ {
872
+ "type": "list",
873
+ "sub_type": "ref_text",
874
+ "list_items": [
875
+ "AllSet Learning. 2023. Chinese grammar wiki.",
876
+ "Duarte M. Alves, José Pombal, Nuno M. Guerreiro, Pedro H. Martins, João Alves, Amin Farajian, Ben Peters, Ricardo Rei, Patrick Fernandes, Sweta Agrawal, Pierre Colombo, José G. C. de Souza, and André F. T. Martins. 2024. Tower: An open multilingual large language model for translation-related tasks.",
877
+ "Steven Cao, Nikita Kitaev, and Dan Klein. 2020. Multilingual alignment of contextual word representations. In International Conference on Learning Representations.",
878
+ "Zewen Chi, Li Dong, Shuming Ma, Shaohan Huang, Saksham Singhal, Xian-Ling Mao, Heyan Huang, Xia Song, and Furu Wei. 2021. mT6: Multilingual pretrained text-to-text transformer with translation pairs. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1671-1683, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
879
+ "Monojit Choudhury and Amit Deshpande. 2021. How linguistically fair are multilingual pre-trained language models? Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12710-12718.",
880
+ "Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.",
881
+ "Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.",
882
+ "Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Emerging crosslingual structure in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6022-6034, Online. Association for Computational Linguistics.",
883
+ "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
884
+ "Hady ElSahar and Samhaa R El-Beltagy. 2015. Building large arabic multi-domain resources for sentiment analysis. In International conference on intelligent"
885
+ ],
886
+ "bbox": [
887
+ 115,
888
+ 105,
889
+ 487,
890
+ 917
891
+ ],
892
+ "page_idx": 5
893
+ },
894
+ {
895
+ "type": "list",
896
+ "sub_type": "ref_text",
897
+ "list_items": [
898
+ "text processing and computational linguistics, pages 23-34. Springer.",
899
+ "Yuwei Fang, Shuohang Wang, Zhe Gan, Siqi Sun, and Jingjing Liu. 2021. Filter: An enhanced fusion method for cross-lingual language understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12776-12784.",
900
+ "Besnik Fetahu, Zhiyu Chen, Sudipta Kar, Oleg Rokhlenko, and Shervin Malmasi. 2023. Multi-CoNER v2: a large multilingual dataset for fine-grained and noisy named entity recognition. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2027-2051, Singapore. Association for Computational Linguistics.",
901
+ "Constanza Fierro and Anders Søgaard. 2022. Factual consistency of multilingual pretrained language models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3046-3052, Dublin, Ireland. Association for Computational Linguistics.",
902
+ "Kyle Gorman and Steven Bedrick. 2019. We need to talk about standard splits. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2786-2791, Florence, Italy. Association for Computational Linguistics.",
903
+ "Bruno Guillaume, Marie-Catherine de Marneffe, and Guy Perrier. 2019. Conversion et améliorations de corpus du français annotés en Universal Dependencies [conversion and improvement of Universal Dependencies French corpora]. *Traitement Automatique des Langues*, 60(2):71-95.",
904
+ "Katharina Hämmerl, Alina Fastowski, Jindrich Libovický, and Alexander Fraser. 2023. Exploring anisotropy and outliers in multilingual language models for cross-lingual semantic sentence similarity. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7023-7037, Toronto, Canada. Association for Computational Linguistics.",
905
+ "Yupeng Hou, Jiacheng Li, Zhankui He, An Yan, Xiusi Chen, and Julian McAuley. 2024. Bridging language and items for retrieval and recommendation. arXiv preprint arXiv:2403.03952.",
906
+ "Shaoxiong Ji, Timothee Mickus, Vincent Segonne, and Jorg Tiedemann. 2024. Can machine translation bridge multilingual pretraining and cross-lingual transfer learning? In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 2809-2818, Torino, Italia. ELRA and ICCL.",
907
+ "Mihir Kale, Aditya Siddhant, Rami Al-Rfou, Linting Xue, Noah Constant, and Melvin Johnson. 2021. nmT5 - is parallel data still relevant for pre-training massively multilingual language models? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language"
908
+ ],
909
+ "bbox": [
910
+ 510,
911
+ 85,
912
+ 880,
913
+ 917
914
+ ],
915
+ "page_idx": 5
916
+ },
917
+ {
918
+ "type": "page_number",
919
+ "text": "15887",
920
+ "bbox": [
921
+ 477,
922
+ 927,
923
+ 524,
924
+ 940
925
+ ],
926
+ "page_idx": 5
927
+ },
928
+ {
929
+ "type": "list",
930
+ "sub_type": "ref_text",
931
+ "list_items": [
932
+ "Processing (Volume 2: Short Papers), pages 683-691, Online. Association for Computational Linguistics.",
933
+ "Diederik P. Kingma and Jimmy Ba. 2017. Adam: A method for stochastic optimization.",
934
+ "John Lee, Herman Leung, and Keying Li. 2017. Towards Universal Dependencies for learner Chinese. In Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017), pages 67-71, Gothenburg, Sweden. Association for Computational Linguistics.",
935
+ "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.",
936
+ "Yixuan Li, Gerdes Kim, Guillaume Bruno, and Dan Zeman. 2022. Ud chinese patentchar.",
937
+ "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.",
938
+ "Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.",
939
+ "Olga Lyashevskaya, Olga Rudina, Natalia Vlasova, and Anna Zhuravleva. 2018. Ud russian taiga.",
940
+ "Shervin Malmasi, Anjie Fang, Besnik Fetahu, Sudipta Kar, and Oleg Rokhlenko. 2022. MultiCoNER: A large-scale multilingual dataset for complex named entity recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3798-3809, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.",
941
+ "Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Täckström, Claudia Bedini, Núria Bertomeu Castelló, and Jungmee Lee. 2013. Universal Dependency annotation for multilingual parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 92-97, Sofia, Bulgaria. Association for Computational Linguistics.",
942
+ "Behrang Mohit, Nathan Schneider, Rishav Bhowmick, Kemal Oflazer, and Noah A. Smith. 2012a. Recall-oriented learning of named entities in Arabic Wikipedia. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 162-173, Avignon, France. Association for Computational Linguistics."
943
+ ],
944
+ "bbox": [
945
+ 115,
946
+ 85,
947
+ 487,
948
+ 917
949
+ ],
950
+ "page_idx": 6
951
+ },
952
+ {
953
+ "type": "list",
954
+ "sub_type": "ref_text",
955
+ "list_items": [
956
+ "Behrang Mohit, Nathan Schneider, Rishav Bhowmick, Kemal Oflazer, and Noah A Smith. 2012b. Recall-oriented learning of named entities in arabic wikipedia. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 162-173.",
957
+ "Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. 2022. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786.",
958
+ "Joakim Nivre, Željko Agić, Lars Ahrenberg, Lene Antonsen, Maria Jesus Aranzabe, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Eckhard Bick, Cristina Bosco, Gosse Bouma, Sam Bowman, Aljoscha Burchardt, Marie Candito, Gauthier Caron, Gülsen Cebiroğlu Eryiğit, Giuseppe G. A. Celano, Savas Cetin, Fabricio Chalub, Jinho Choi, Yongseok Cho, Silvie Cinkova, Çağr Üçoltekin, Miriam Connor, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Marhaba Eli, Ali Elkahky, Tomaz Erjavec, Richard Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Katarina Gajdošová, Daniel Galbraith, Marcos Garcia, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökirmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta Gonzáles Saavedra, Matias Grioni, Normunds Grūzītis, Bruno Guillàume, Nizar Habash, Jan Hajic, Jan Hajic jr., Linh Hā My, Kim Harris, Dag Haug, Barbora Hladka, Jaroslava Hlaváčová, Petter Hohle, Radu Ion, Elena Irimia, Anders Johannsen, Fredrik Jørgensen, Hüner Kasikara, Hiroshi Kanayama, Jenna Kanerva, Tolga Kayadelen, Václava Kettnerová, Jesse Kirchner, Natalia Kotsyba, Simon Krek, Sookyoung Kwak, Veronika Laippala, Lorenzo Lambertino, Tatiana Lando, Phoêng Lê Hóng, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, Nikola Ljubesic, Olga Loginova, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, Catalina Máränduc, David Mareček, Katrin Marheinecke, Héctor Martínez Alonso, André Martins, Jan Mašek, Yuji Matsumoto, Ryan McDonald, Gustavo Mendonça, Anna Missilä, Verginica Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Shunsuke Mori, Bohdan Moskalevskyi, Kadri Muischnek, Nina Mustafina, Kaili Müürisep, Pinkey Nainwani, Anna Nedoluzhko, Loêng Nguyen Thi, Huyen Nguyen Thi Minh, Vitaly Nikolaev, Rattima Nitisaroj, Hanna Nurmi, Stina Ojala, Petya Osnova, Lilja Øvrelid, Elena Pascual, Marco Passarotti, Cenel Augusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Emily Pitler, Barbara Plank, Martin Popel, Lauma Pretkalnina, Prokopis Prokopidis, Tiina Puolakainen, Sampo Pyysalo, Alexandre Rademaker"
959
+ ],
960
+ "bbox": [
961
+ 510,
962
+ 85,
963
+ 880,
964
+ 917
965
+ ],
966
+ "page_idx": 6
967
+ },
968
+ {
969
+ "type": "page_number",
970
+ "text": "15888",
971
+ "bbox": [
972
+ 477,
973
+ 928,
974
+ 524,
975
+ 940
976
+ ],
977
+ "page_idx": 6
978
+ },
979
+ {
980
+ "type": "list",
981
+ "sub_type": "ref_text",
982
+ "list_items": [
983
+ "Livy Real, Siva Reddy, Georg Rehm, Larissa Rinaldi, Laura Rituma, Rudolf Rosa, Davide Rovati, Shadi Saleh, Manuela Sanguinetti, Baiba Saulite, Yanin Sawanakunanon, Sebastian Schuster, Djame Seddah, Wolfgang Seeker, Mojgan Seraji, Lena Shakurova, Mo Shen, Atsuko Shimada, Muh Shohibussirri, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simko, Maria Simková, Kiril Simov, Aaron Smith, Antonio Stella, Jana Strnadová, Alane Suhr, Umut Sulubacak, Zsolt Szántó, Dima Taji, Takaaki Tanaka, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zdenka Urešová, Larraitz Uria, Hans Uszkoreit, Gertjan van Noord, Viktor Varga, Veronika Vincze, Jonathan North Washington, Zhuoran Yu, Zdeněk Žabokrtský, Daniel Zeman, and Hanzhi Zhu. 2017. Universal dependencies 2.0 - CoNLL 2017 shared task development and test data. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (UFAL), Faculty of Mathematics and Physics, Charles University.",
984
+ "Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Jan Hajic, Christopher D Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal dependencies v2: An evergrowing multilingual treebank collection. arXiv preprint arXiv:2004.10643.",
985
+ "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. *fairoseq: A fast, extensible toolkit for sequence modeling*. In *Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics* (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.",
986
+ "Peng Qi, Koichi Yasuoka, and Dan Zeman. 2019. Ud chinese gsdsimp.",
987
+ "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.",
988
+ "Phillip Rust, Jonas Pfeiffer, Ivan Vulic, Sebastian Ruder, and Iryna Gurevych. 2021. How good is your tokenizer? on the monolingual performance of multilingual language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3118-3135, Online. Association for Computational Linguistics.",
989
+ "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.",
990
+ "Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Ari, Jason Riesa, Ankur Bapna, Orhan Firat, and"
991
+ ],
992
+ "bbox": [
993
+ 115,
994
+ 85,
995
+ 489,
996
+ 917
997
+ ],
998
+ "page_idx": 7
999
+ },
1000
+ {
1001
+ "type": "list",
1002
+ "sub_type": "ref_text",
1003
+ "list_items": [
1004
+ "Karthik Raman. 2020. Evaluating the cross-lingual effectiveness of massively multilingual neural machine translation. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 8854-8861.",
1005
+ "Sergey Smetanin and Michail Komarov. 2019. Sentiment analysis of product reviews in russian using convolutional neural networks. In 2019 IEEE 21st Conference on Business Informatics (CBI), volume 01, pages 482-486.",
1006
+ "Jörg Tiedemann. 2012. Parallel data, tools and interfaces in opus. In Proceedings of LREC, volume 2012, pages 2214-2218.",
1007
+ "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.",
1008
+ "Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, and Ting Liu. 2019. Cross-lingual bert transformation for zero-shot dependency parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5721-5727.",
1009
+ "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.",
1010
+ "Tak-sum Wong, Kim Gerdes, Herman Leung, and John Lee. 2017. Quantitative comparative syntax on the Cantonese-Mandarin parallel dependency treebank. In Proceedings of the Fourth International Conference on Dependency Linguistics (Depling 2017), pages 266-275, Pisa, Italy. Linköping University Electronic Press.",
1011
+ "Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of bert. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844.",
1012
+ "Shijie Wu and Mark Dredze. 2020. Do explicit alignments robustly improve multilingual encoders? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4471-4482.",
1013
+ "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings"
1014
+ ],
1015
+ "bbox": [
1016
+ 510,
1017
+ 85,
1018
+ 882,
1019
+ 917
1020
+ ],
1021
+ "page_idx": 7
1022
+ },
1023
+ {
1024
+ "type": "page_number",
1025
+ "text": "15889",
1026
+ "bbox": [
1027
+ 477,
1028
+ 928,
1029
+ 524,
1030
+ 940
1031
+ ],
1032
+ "page_idx": 7
1033
+ },
1034
+ {
1035
+ "type": "list",
1036
+ "sub_type": "ref_text",
1037
+ "list_items": [
1038
+ "of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. Association for Computational Linguistics.",
1039
+ "Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. Language Resources and Evaluation, 51(3):581-612.",
1040
+ "Dan Zeman, Kirian Guiller, and Bruno Guillaume. 2023. Ud chinese beginner.",
1041
+ "Otakar Smrz Viktor Bielicky Iveta Kourilova Jakub Kracmar Zemanek. 2008. Dependency treebank: A word on the million words.",
1042
+ "Michal Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The United Nations parallel corpus v1.0. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3530-3534, Portož, Slovenia. European Language Resources Association (ELRA).",
1043
+ "Ahmet Üstün, Viraat Aryabumi, Zheng-Xin Yong, WeiYin Ko, Daniel D'souza, Gbemileke Onilude, Neel Bhandari, Shivalika Singh, Hui-Lee Ooi, Amr Kayid, Freddie Vargus, Phil Blunsom, Shayne Longpre, Niklas Muennighoff, Marzieh Fadaee, Julia Kreutzer, and Sara Hooker. 2024. Aya model: An instruction finetuned open-access multilingual language model. arXiv preprint arXiv:2402.07827."
1044
+ ],
1045
+ "bbox": [
1046
+ 115,
1047
+ 85,
1048
+ 489,
1049
+ 478
1050
+ ],
1051
+ "page_idx": 8
1052
+ },
1053
+ {
1054
+ "type": "page_number",
1055
+ "text": "15890",
1056
+ "bbox": [
1057
+ 477,
1058
+ 928,
1059
+ 526,
1060
+ 940
1061
+ ],
1062
+ "page_idx": 8
1063
+ },
1064
+ {
1065
+ "type": "text",
1066
+ "text": "A Overview of pretraining objectives",
1067
+ "text_level": 1,
1068
+ "bbox": [
1069
+ 112,
1070
+ 84,
1071
+ 450,
1072
+ 99
1073
+ ],
1074
+ "page_idx": 9
1075
+ },
1076
+ {
1077
+ "type": "text",
1078
+ "text": "Table 3 displays an example data point for all pretraining objectives we consider. In principle, the CLM is a document-level objective, i.e., the full document would be used as an input rather than the two sentences we show here.",
1079
+ "bbox": [
1080
+ 112,
1081
+ 109,
1082
+ 489,
1083
+ 187
1084
+ ],
1085
+ "page_idx": 9
1086
+ },
1087
+ {
1088
+ "type": "text",
1089
+ "text": "B Datasets statistics",
1090
+ "text_level": 1,
1091
+ "bbox": [
1092
+ 112,
1093
+ 200,
1094
+ 305,
1095
+ 215
1096
+ ],
1097
+ "page_idx": 9
1098
+ },
1099
+ {
1100
+ "type": "text",
1101
+ "text": "An overview of the volume of data available for pretraining is displayed in Table 4. The majority of the data were used for training.",
1102
+ "bbox": [
1103
+ 112,
1104
+ 227,
1105
+ 487,
1106
+ 273
1107
+ ],
1108
+ "page_idx": 9
1109
+ },
1110
+ {
1111
+ "type": "text",
1112
+ "text": "In Table 5, we present an overview of the datasets used for downstream evaluation.",
1113
+ "bbox": [
1114
+ 112,
1115
+ 275,
1116
+ 487,
1117
+ 306
1118
+ ],
1119
+ "page_idx": 9
1120
+ },
1121
+ {
1122
+ "type": "text",
1123
+ "text": "C Detailed results",
1124
+ "text_level": 1,
1125
+ "bbox": [
1126
+ 112,
1127
+ 319,
1128
+ 287,
1129
+ 332
1130
+ ],
1131
+ "page_idx": 9
1132
+ },
1133
+ {
1134
+ "type": "text",
1135
+ "text": "In Table 6 and Table 7, we present the macro-f1 score of models in the downstream evaluation.",
1136
+ "bbox": [
1137
+ 112,
1138
+ 344,
1139
+ 485,
1140
+ 375
1141
+ ],
1142
+ "page_idx": 9
1143
+ },
1144
+ {
1145
+ "type": "page_number",
1146
+ "text": "15891",
1147
+ "bbox": [
1148
+ 477,
1149
+ 927,
1150
+ 522,
1151
+ 940
1152
+ ],
1153
+ "page_idx": 9
1154
+ },
1155
+ {
1156
+ "type": "table",
1157
+ "img_path": "images/96307c651cfa4636f1e7d73aeb4b76654922eb8716bd68434010f8375cc55968.jpg",
1158
+ "table_caption": [],
1159
+ "table_footnote": [],
1160
+ "table_body": "<table><tr><td>Objective</td><td>Source input</td><td>Target output</td></tr><tr><td>2-LM</td><td>_D&#x27;_autres _mesures _de _ce _type _vont _été [MASK] [MASK], _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, _de _Sin tis _et _de [MASK] _du _voyage (_ « C am min anti »). &lt;/s&gt;</td><td>&lt;s&gt; _D&#x27;_autres _mesures _de _ce _type _vont _été _appliquées, _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, _de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »). &lt;/s&gt;</td></tr><tr><td>2-MT</td><td>&lt;fr&gt; _D&#x27;_autres _mesures _de _ce _type _vont _été _appliquées, _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, _de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »).</td><td>&lt;s&gt; _Other _similar _measures _are _going _to be _taken in _cooperation _with _other _Rom a, _Sin ti _and _Travel lers (_ &quot; C am min anti &quot;) _associations.</td></tr><tr><td>CLM</td><td>... Divers _accords _ad _hoc _ont _été _conclus à _cet _effet _par _le _Ministère _de _l&#x27;éducation _et _l&#x27;as _sociation _Op era _Nom ad i. _D&#x27;_autres _mesures _de _ce _type _vont _été _appliquées, _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, _de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »). ...</td><td>... _accords _ad _hoc _ont _été _conclus à _cet _effet _par _le _Ministère _de _l&#x27;éducation _et _l&#x27;as _sociation _Op era _Nom ad i. _D&#x27;_autres _mesures _de _ce _type _vont _été _appliquées, _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »). ...</td></tr><tr><td>TLM</td><td>_D&#x27;_autres _mesures _de _ce _type _vont _été _appliquées, _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, _de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »). &lt;fr2en&gt; Other _similar _measures _are _going _to be _taken_in _cooperation _with _other _Rom a, _Sin ti _and _Travel lers (_ &quot; C am min anti &quot;) _associations.</td><td>mesures _de _ce _type _vont _été _appliquées, _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »). &lt;fr2en&gt; Other _similar _measures _are _going _to be _taken_in _cooperation _with _other _Rom a, _Sin ti and _Travel lers (_ &quot; C am min anti &quot;) _associations. &lt;/s&gt;</td></tr><tr><td>MLM</td><td>&lt;s&gt; _D&#x27;_autres _mesures _de _ce _type _vont _été [MASK] [MASK], _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, _de _Sin tis _et _de [MASK] _du _voyage (_ « C am min anti »). &lt;/s&gt;</td><td>&lt;s&gt; _D&#x27;_autres _mesures _de _ce _type _vont _été _appliquées ,_en _coopération _avec _d&#x27;_autres _associations _de _Rom s, de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »). &lt;/s&gt;</td></tr></table>",
1161
+ "bbox": [
1162
+ 114,
1163
+ 160,
1164
+ 865,
1165
+ 533
1166
+ ],
1167
+ "page_idx": 10
1168
+ },
1169
+ {
1170
+ "type": "table",
1171
+ "img_path": "images/87a8cce7c68d8ba0b51e59cc2f05a9a5321f231ee1a6ff072c2a78a461868486.jpg",
1172
+ "table_caption": [
1173
+ "Table 3: Overview of the different objectives considered in this study. Top two rows: two-stacks (encoder-decoder) models; bottom three rows: single-stack (encoder-only or decoder-only) models."
1174
+ ],
1175
+ "table_footnote": [],
1176
+ "table_body": "<table><tr><td></td><td>Train</td><td>Validation</td><td>Test</td><td>Total</td></tr><tr><td>UNPC</td><td>114 376 177</td><td>76 303</td><td>40 712</td><td>114 493 192</td></tr><tr><td>OpSub</td><td>81 622 353</td><td>359 035</td><td>77 342</td><td>82 058 730</td></tr><tr><td>Total</td><td>195 998 530</td><td>435 338</td><td>118 054</td><td>196 551 922</td></tr></table>",
1177
+ "bbox": [
1178
+ 315,
1179
+ 741,
1180
+ 680,
1181
+ 810
1182
+ ],
1183
+ "page_idx": 10
1184
+ },
1185
+ {
1186
+ "type": "text",
1187
+ "text": "Table 4: Number of sentences in pretraining corpora.",
1188
+ "bbox": [
1189
+ 317,
1190
+ 820,
1191
+ 678,
1192
+ 835
1193
+ ],
1194
+ "page_idx": 10
1195
+ },
1196
+ {
1197
+ "type": "page_number",
1198
+ "text": "15892",
1199
+ "bbox": [
1200
+ 477,
1201
+ 927,
1202
+ 526,
1203
+ 940
1204
+ ],
1205
+ "page_idx": 10
1206
+ },
1207
+ {
1208
+ "type": "table",
1209
+ "img_path": "images/2d9876646b3bfa483e98decfdd3513aba61e910c44c369d1fb5a5dabfac7acc9.jpg",
1210
+ "table_caption": [],
1211
+ "table_footnote": [],
1212
+ "table_body": "<table><tr><td>Task</td><td>Language</td><td>Dataset</td><td>Class Count</td><td>Train</td><td>Validation</td><td>Test</td><td>Total</td></tr><tr><td rowspan=\"6\">SA</td><td>EN</td><td></td><td>5</td><td>200000</td><td>5000</td><td>5000</td><td>210000</td></tr><tr><td>ES</td><td>Amazon Review (Hou et al., 2024)</td><td>5</td><td>200000</td><td>5000</td><td>5000</td><td>210000</td></tr><tr><td>FR</td><td></td><td>5</td><td>200000</td><td>5000</td><td>5000</td><td>210000</td></tr><tr><td>ZH</td><td></td><td>5</td><td>200000</td><td>5000</td><td>5000</td><td>210000</td></tr><tr><td>RU</td><td>RuReviews (Smetanin and Komarov, 2019)</td><td>3</td><td>85601</td><td>2143</td><td>2137</td><td>89881</td></tr><tr><td>AR</td><td>ar_resreviews (ElSahar and El-Beltagy, 2015)</td><td>2</td><td>6680</td><td>835</td><td>835</td><td>8350</td></tr><tr><td rowspan=\"6\">NER</td><td>EN</td><td>MultiCoNER v2 (Fetahu et al., 2023)</td><td>3</td><td>253011</td><td>13323</td><td>3773671</td><td>4040005</td></tr><tr><td>ES</td><td>MultiCoNER v2</td><td>3</td><td>262814</td><td>13462</td><td>3925900</td><td>4202176</td></tr><tr><td>FR</td><td>MultiCoNER v2</td><td>3</td><td>247743</td><td>13062</td><td>3742924</td><td>4003729</td></tr><tr><td>ZH</td><td>MultiCoNER v2</td><td>3</td><td>245606</td><td>12816</td><td>489605</td><td>748027</td></tr><tr><td>RU</td><td>MultiCoNER v1 (Malmasi et al., 2022)</td><td>3</td><td>242384</td><td>12787</td><td>2061318</td><td>2316489</td></tr><tr><td>AR</td><td>AQMAR Wikipedia NER corpus (Mohit et al., 2012b)</td><td>3</td><td>57053</td><td>8615</td><td>8185</td><td>73853</td></tr><tr><td rowspan=\"11\">POS</td><td>EN</td><td>UD_English-GUM (Zeldes, 2017)</td><td>16</td><td>128391</td><td>16070</td><td>15554</td><td>160015</td></tr><tr><td>ES</td><td>UD_Spanish-GSD (McDonald et al., 2013)</td><td>16</td><td>127459</td><td>16916</td><td>15645</td><td>160020</td></tr><tr><td>FR</td><td>UD_French-GSD (Guillaume et al., 2019)</td><td>15</td><td>127638</td><td>16207</td><td>16167</td><td>160012</td></tr><tr><td></td><td>UD_Chinese-Beginner (Zeman et al., 2023; AllSet Learning, 2023)+</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>UD_Chinese-PUD (Nivre et al., 2017)+</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>ZH</td><td>UD_Chinese-HK (Wong et al., 2017)+</td><td>16</td><td>128935</td><td>15680</td><td>15758</td><td>160373</td></tr><tr><td></td><td>UD_Chinese-CFL (Lee et al., 2017)+</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>UD_Chinese-PatentChar (Li et al., 2022)+</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>UD_Chinese-GSDTmp (Qi et al., 2019)</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>RU</td><td>UD_Russian-Taiga (Lyashevskaya et al., 2018)</td><td>16</td><td>127647</td><td>16175</td><td>16184</td><td>160006</td></tr><tr><td>AR</td><td>UD_Arabic-PADT (Zemánek, 2008)</td><td>16</td><td>127552</td><td>16608</td><td>15848</td><td>160008</td></tr><tr><td rowspan=\"6\">NLI</td><td>EN</td><td></td><td>3</td><td>392702</td><td>2490</td><td>5010</td><td>400202</td></tr><tr><td>ES</td><td></td><td>3</td><td>392702</td><td>2490</td><td>5010</td><td>400202</td></tr><tr><td>FR</td><td>XNLI (Conneau et al., 
2018)</td><td>3</td><td>392702</td><td>2490</td><td>5010</td><td>400202</td></tr><tr><td>ZH</td><td></td><td>3</td><td>392702</td><td>2490</td><td>5010</td><td>400202</td></tr><tr><td>RU</td><td></td><td>3</td><td>392702</td><td>2490</td><td>5010</td><td>400202</td></tr><tr><td>AR</td><td></td><td>3</td><td>392702</td><td>2490</td><td>5010</td><td>400202</td></tr></table>",
1213
+ "bbox": [
1214
+ 114,
1215
+ 135,
1216
+ 882,
1217
+ 448
1218
+ ],
1219
+ "page_idx": 11
1220
+ },
1221
+ {
1222
+ "type": "table",
1223
+ "img_path": "images/9cda7bd9f0dce74dd19eb9b8c23d6d7648f5c65ddbcf344bf4f003145274bcf1.jpg",
1224
+ "table_caption": [
1225
+ "Table 5: Statistics of datasets used for downstream evaluation tasks."
1226
+ ],
1227
+ "table_footnote": [],
1228
+ "table_body": "<table><tr><td rowspan=\"2\">Task</td><td rowspan=\"2\">Model</td><td rowspan=\"2\">EN</td><td rowspan=\"2\">ES</td><td colspan=\"5\">Languages</td></tr><tr><td>FR</td><td>ZH</td><td>RU</td><td>AR</td><td></td></tr><tr><td rowspan=\"5\">SA</td><td>2-LM</td><td>0.4130±0.0118</td><td>0.4120±0.0160</td><td>0.4166±0.0076</td><td>0.3859±0.0156</td><td>0.6599±0.0101</td><td>0.6343±0.0232</td><td></td></tr><tr><td>2-MT</td><td>0.4588±0.0092</td><td>0.4554±0.0053</td><td>0.4448±0.0158</td><td>0.4260±0.0070</td><td>0.6935±0.0052</td><td>0.6864±0.0105</td><td></td></tr><tr><td>CLM</td><td>0.3183±0.0099</td><td>0.3351±0.0198</td><td>0.3066±0.0192</td><td>0.3104±0.0135</td><td>0.5693±0.0107</td><td>0.5886±0.0106</td><td></td></tr><tr><td>MLM</td><td>0.3236±0.0270</td><td>0.3188±0.0188</td><td>0.3153±0.0088</td><td>0.2936±0.0107</td><td>0.5434±0.0236</td><td>0.5804±0.0104</td><td></td></tr><tr><td>TLM</td><td>0.2593±0.0298</td><td>0.2768±0.0589</td><td>0.2528±0.0487</td><td>0.2344±0.0539</td><td>0.5537±0.0307</td><td>0.5487±0.0190</td><td></td></tr><tr><td rowspan=\"5\">NER</td><td>2-LM</td><td>0.5830±0.0057</td><td>0.5616±0.0070</td><td>0.5627±0.0039</td><td>0.5653±0.0164</td><td>0.4178±0.0100</td><td>0.4310±0.0179</td><td></td></tr><tr><td>2-MT</td><td>0.7778±0.0014</td><td>0.7660±0.0014</td><td>0.7716±0.0031</td><td>0.7871±0.0043</td><td>0.6551±0.0088</td><td>0.7311±0.0099</td><td></td></tr><tr><td>CLM</td><td>0.4516±0.0110</td><td>0.4213±0.0075</td><td>0.4306±0.0131</td><td>0.5086±0.0053</td><td>0.3004±0.0034</td><td>0.3223±0.0054</td><td></td></tr><tr><td>MLM</td><td>0.3003±0.0017</td><td>0.2997±0.0001</td><td>0.3021±0.0019</td><td>0.3341±0.0108</td><td>0.2891±0.0001</td><td>0.3094±0.0000</td><td></td></tr><tr><td>TLM</td><td>0.3485±0.0074</td><td>0.3471±0.0152</td><td>0.3499±0.0173</td><td>0.4876±0.0230</td><td>0.2941±0.0015</td><td>0.3094±0.0001</td><td></td></tr><tr><td rowspan=\"5\">POS</td><td>2-LM</td><td>0.7241±0.0040</td><td>0.6607±0.0042</td><td>0.6848±0.0074</td><td>0.5964��0.0072</td><td>0.7427±0.0030</td><td>0.4678±0.0016</td><td></td></tr><tr><td>2-MT</td><td>0.8520±0.0065</td><td>0.7685±0.0203</td><td>0.8300±0.0017</td><td>0.7002±0.0029</td><td>0.8587±0.0055</td><td>0.6575±0.0032</td><td></td></tr><tr><td>CLM</td><td>0.5621±0.0069</td><td>0.5422±0.0066</td><td>0.5568±0.0064</td><td>0.3761±0.0148</td><td>0.4975±0.0140</td><td>0.3040±0.0106</td><td></td></tr><tr><td>MLM</td><td>0.2157±0.0063</td><td>0.1499±0.0055</td><td>0.1722±0.0084</td><td>0.0717±0.0040</td><td>0.1275±0.0080</td><td>0.1511±0.0127</td><td></td></tr><tr><td>TLM</td><td>0.4741±0.0147</td><td>0.3759±0.0378</td><td>0.3744±0.0153</td><td>0.3314±0.0112</td><td>0.3798±0.0097</td><td>0.2299±0.0215</td><td></td></tr><tr><td 
rowspan=\"5\">NLI</td><td>2-LM</td><td>0.4825±0.0075</td><td>0.4901±0.0046</td><td>0.4779±0.0102</td><td>0.3805±0.0089</td><td>0.4804±0.0059</td><td>0.4445±0.0126</td><td></td></tr><tr><td>2-MT</td><td>0.6017±0.0105</td><td>0.5938±0.0119</td><td>0.5860±0.0087</td><td>0.5881±0.0031</td><td>0.5982±0.0025</td><td>0.5943±0.0053</td><td></td></tr><tr><td>CLM</td><td>0.3946±0.0479</td><td>0.4134±0.0227</td><td>0.4068±0.0373</td><td>0.3744±0.0400</td><td>0.3593±0.0519</td><td>0.3978±0.0314</td><td></td></tr><tr><td>MLM</td><td>0.4464±0.0328</td><td>0.4330±0.0145</td><td>0.4157±0.0347</td><td>0.4208±0.0110</td><td>0.4162±0.0251</td><td>0.4281±0.0126</td><td></td></tr><tr><td>TLM</td><td>0.3063±0.0361</td><td>0.3573±0.0327</td><td>0.3940±0.0240</td><td>0.3122±0.0876</td><td>0.3892±0.0390</td><td>0.3360±0.0477</td><td></td></tr></table>",
1229
+ "bbox": [
1230
+ 179,
1231
+ 589,
1232
+ 818,
1233
+ 834
1234
+ ],
1235
+ "page_idx": 11
1236
+ },
1237
+ {
1238
+ "type": "text",
1239
+ "text": "Table 6: Macro F1 score using probing technique.",
1240
+ "bbox": [
1241
+ 327,
1242
+ 846,
1243
+ 665,
1244
+ 860
1245
+ ],
1246
+ "page_idx": 11
1247
+ },
1248
+ {
1249
+ "type": "page_number",
1250
+ "text": "15893",
1251
+ "bbox": [
1252
+ 477,
1253
+ 928,
1254
+ 524,
1255
+ 940
1256
+ ],
1257
+ "page_idx": 11
1258
+ },
1259
+ {
1260
+ "type": "table",
1261
+ "img_path": "images/6eda3d8e479d43011f3046941eca31beaa87dc1e01c19a4fe24201af0c51fcf2.jpg",
1262
+ "table_caption": [],
1263
+ "table_footnote": [],
1264
+ "table_body": "<table><tr><td rowspan=\"2\">Task</td><td rowspan=\"2\">Model</td><td rowspan=\"2\">EN</td><td rowspan=\"2\">ES</td><td colspan=\"4\">Languages</td></tr><tr><td>FR</td><td>ZH</td><td>RU</td><td>AR</td></tr><tr><td rowspan=\"5\">SA</td><td>2-LM</td><td>0.5213±0.0068</td><td>0.5254±0.0083</td><td>0.5244±0.0135</td><td>0.4739±0.0096</td><td>0.7421±0.0059</td><td>0.7522±0.0151</td></tr><tr><td>2-MT</td><td>0.5407±0.0086</td><td>0.5510±0.0084</td><td>0.5398±0.0054</td><td>0.4956±0.0093</td><td>0.7522±0.0056</td><td>0.7767±0.0156</td></tr><tr><td>CLM</td><td>0.5443±0.0072</td><td>0.4446±0.2115</td><td>0.5421±0.0089</td><td>0.5015±0.0187</td><td>0.7553±0.0015</td><td>0.5283±0.2328</td></tr><tr><td>MLM</td><td>0.5441±0.0107</td><td>0.5466±0.0314</td><td>0.5348±0.0237</td><td>0.4972±0.0142</td><td>0.7509±0.0135</td><td>0.5695±0.1427</td></tr><tr><td>TLM</td><td>0.5358±0.0186</td><td>0.5501±0.0128</td><td>0.5474±0.0137</td><td>0.5069±0.0119</td><td>0.7586±0.0057</td><td>0.4599±0.0943</td></tr><tr><td rowspan=\"5\">NER</td><td>2-LM</td><td>0.8200±0.0042</td><td>0.8092±0.0053</td><td>0.8259±0.0035</td><td>0.8626±0.0022</td><td>0.7215±0.0122</td><td>0.7274±0.0093</td></tr><tr><td>2-MT</td><td>0.8670±0.0017</td><td>0.8651±0.0022</td><td>0.8727±0.0018</td><td>0.8897±0.0042</td><td>0.7934±0.0039</td><td>0.8685±0.0046</td></tr><tr><td>CLM</td><td>0.7950±0.0064</td><td>0.8053±0.0028</td><td>0.8099±0.0044</td><td>0.8129±0.0021</td><td>0.6622±0.0182</td><td>0.5994±0.1880</td></tr><tr><td>MLM</td><td>0.8635±0.0123</td><td>0.8580±0.0142</td><td>0.8706±0.0055</td><td>0.8739±0.0199</td><td>0.7629±0.0172</td><td>0.4113±0.2254</td></tr><tr><td>TLM</td><td>0.7908±0.0028</td><td>0.8024±0.0081</td><td>0.8067±0.0047</td><td>0.8120±0.0032</td><td>0.6758±0.0312</td><td>0.3094±0.0000</td></tr><tr><td rowspan=\"5\">POS</td><td>2-LM</td><td>0.8925±0.0039</td><td>0.7365±0.0025</td><td>0.8496±0.0034</td><td>0.8088±0.0059</td><td>0.8984±0.0055</td><td>0.7769±0.0102</td></tr><tr><td>2-MT</td><td>0.9314±0.0024</td><td>0.7826±0.0235</td><td>0.8866±0.0074</td><td>0.8842±0.0059</td><td>0.9285±0.0029</td><td>0.8660±0.0088</td></tr><tr><td>CLM</td><td>0.8752±0.0042</td><td>0.7854±0.0024</td><td>0.8573±0.0041</td><td>0.7906±0.0195</td><td>0.8264±0.0104</td><td>0.5932±0.0194</td></tr><tr><td>MLM</td><td>0.9177±0.0068</td><td>0.8079±0.0259</td><td>0.8851±0.0019</td><td>0.8313±0.0079</td><td>0.9226±0.0048</td><td>0.8602±0.0132</td></tr><tr><td>TLM</td><td>0.8782±0.0045</td><td>0.7830±0.0067</td><td>0.7421±0.2503</td><td>0.7876±0.0271</td><td>0.8247±0.0088</td><td>0.6201±0.0071</td></tr><tr><td rowspan=\"5\">NLI</td><td>2-LM</td><td>0.5771±0.0067</td><td>0.5760±0.0088</td><td>0.5658±0.0085</td><td>0.4766±0.0058</td><td>0.5629±0.0052</td><td>0.5350±0.0070</td></tr><tr><td>2-MT</td><td>0.6183±0.0054</td><td>0.6151±0.0082</td><td>0.5991±0.0073</td><td>0.5302±0.0086</td><td>0.5887±0.0041</td><td>0.5678±0.0032</td></tr><tr><td>CLM</td><td>0.4240±0.2315</td><td>0.5589±0.0355</td><td>0.5493±0.0404</td><td>0.4729±0.1123</td><td>0.5507±0.0265</td><td>0.4554±0.1199</td></tr><tr><td>MLM</td><td>0.5927±0.0189</td><td>0.5719±0.0487</td><td>0.5282±0.0964</td><td>0.4618±0.0453</td><td>0.5775±0.0069</td><td>0.5247±0.0221</td></tr><tr><td>TLM</td><td>0.4428±0.1751</td><td>0.4728±0.1731</td><td>0.5345±0.1076</td><td>0.4558±0.0722</td><td>0.5061±0.0771</td><td>0.3816±0.1562</td></tr></table>",
1265
+ "bbox": [
1266
+ 179,
1267
+ 363,
1268
+ 816,
1269
+ 609
1270
+ ],
1271
+ "page_idx": 12
1272
+ },
1273
+ {
1274
+ "type": "text",
1275
+ "text": "Table 7: Macro F1 score after model fine-tuning.",
1276
+ "bbox": [
1277
+ 332,
1278
+ 619,
1279
+ 662,
1280
+ 633
1281
+ ],
1282
+ "page_idx": 12
1283
+ },
1284
+ {
1285
+ "type": "page_number",
1286
+ "text": "15894",
1287
+ "bbox": [
1288
+ 477,
1289
+ 928,
1290
+ 524,
1291
+ 940
1292
+ ],
1293
+ "page_idx": 12
1294
+ }
1295
+ ]
2024/A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives/979594f1-eb73-4d54-8635-879feb871523_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives/979594f1-eb73-4d54-8635-879feb871523_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5055120c77332f42da3b82e5d257cb2476bcf501ffc8e903286f300760713f17
3
+ size 288658
2024/A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives/full.md ADDED
@@ -0,0 +1,236 @@
1
+ # A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives
2
+
3
+ Zihao Li$^{1}$, Shaoxiong Ji$^{*1}$, Timothee Mickus$^{*1}$, Vincent Segonne$^{2}$, and Jörg Tiedemann$^{1}$
4
+
5
+ <sup>1</sup>University of Helsinki <sup>2</sup> Université Bretagne Sud
6
+
7
+ firstname_lastname@{<sup>1</sup>helsinki.fi, <sup>2</sup>univ-ubs.fr}
8
+
9
+ # Abstract
10
+
11
+ Pretrained language models (PLMs) display impressive performances and have captured the attention of the NLP community. Establishing best practices in pretraining has, therefore, become a major focus of NLP research, especially since insights gained from monolingual English models may not necessarily apply to more complex multilingual models. One significant caveat of the current state of the art is that different works are rarely comparable: they often discuss different parameter counts, training data, and evaluation methodology.
12
+
13
+ This paper proposes a comparison of multilingual pretraining objectives in a controlled methodological environment. We ensure that training data and model architectures are comparable, and discuss the downstream performances across 6 languages that we observe in probing and fine-tuning scenarios. We make two key observations: (1) the architecture dictates which pretraining objective is optimal; (2) multilingual translation is a very effective pretraining objective under the right conditions. We make our code, data, and model weights available at https://github.com/Helsinki-NLP/lm-vs-mt.
14
+
15
+ # 1 Introduction
16
+
17
+ The release of BERT (Devlin et al., 2019) has marked a paradigm shift in the NLP landscape and has ushered in a thorough investment of the NLP research community in developing large language models that can readily be adapted to novel situations. The design, training, and evaluation of these models has become a significant enterprise of its own.
18
+
19
+ In recent years, that sustained interest has also shifted to encompass multilingual models (e.g., Muennighoff et al., 2022; Alves et al., 2024). There is considerable variation as to how such models are trained: For instance, some rely on datasets comprising multiple languages without explicit cross-lingual supervision (e.g., Liu et al., 2020), and some use explicit supervision (Xue et al., 2021). One complication that arises from this blossoming field of study is that much of the work being carried out is not directly comparable beyond the raw performances on some well-established benchmark, a procedure which may well be flawed (Gorman and Bedrick, 2019). Avoiding apples-to-oranges comparison requires a methodical approach in strictly comparable circumstances, which is the stance we adopt in this paper.
20
+
21
+
22
+
23
+ In short, we focus on two variables—model architecture and pretraining objectives—and set out to train five models in strictly comparable conditions and compare their monolingual performances in four downstream applications: sentiment analysis, named entity recognition, POS tagging, and natural language inference. The scope of our study spans from encoder-decoder machine translation models to decoder-only causal language models and encoder-only BERT-like masked language models. We categorize them into double-stack (encoder-decoder) and single-stack (encoder-only or decoder-only) models. We intend to answer two research questions:
24
+
25
+ (i) Does the explicit cross-lingual training signal of translation objectives foster better downstream performances in monolingual tasks?
26
+ (ii) Is the optimal choice of architecture independent of the training objective?
27
+
28
+ There are prima facie reasons to favor either answer to both of these questions. For instance, the success of multilingual pretrained language models (LM) on cross-lingual tasks has been underscored repeatedly (e.g., Wu and Dredze, 2019), yet explicit alignments such as linear mapping (Wang et al., 2019) and L2 alignment (Cao et al., 2020) between source and target languages do not necessarily improve the quality of cross-lingual representations (Wu and Dredze, 2020).
29
+
30
+
31
+
32
+ Our experiments provide tentative evidence that insofar as a BART denoising autoencoder architecture is concerned, models pretrained with a translation objective consistently outperform those trained with a denoising objective. However, for single-stack transformers, we observe causal language models to perform well in probing and masked language models to generally outperform translation and causal objectives when fine-tuned on downstream tasks. This leads us to conjecture that the optimal pretraining objective depends on the architecture. Furthermore, the best downstream results we observe appear to stem from a machine-translation system, highlighting that MT encoder-decoder systems might constitute an understudied but potentially very impactful type of pretrained model.
33
+
34
+ # 2 Methods and Settings
35
+
36
+ We start our inquiry by adopting a principled stance: We train strictly comparable models with MT and LM objectives before contrasting their performances on monolingual tasks.
37
+
38
+ Models and objectives. To allow a systematic evaluation, we train models with various neural network architectures and learning objectives. All models are based on the transformer architecture (Vaswani et al., 2017) and implemented in fairseq (Ott et al., 2019). We consider both double-stack (encoder-decoder) and single-stack (encoder-only or decoder-only) models.
39
+
40
+ The two double-stack models are variants of the BART architecture of Lewis et al. (2020); they are trained either on a straightforward machine translation (MT) objective, using language tokens to distinguish the source, or on the original denoising auto-encoder objective of Lewis et al. We refer to these two models as 2-LM and 2-MT respectively.
41
+
42
+ We also consider three single-stack models: (i) an encoder-only model trained on the masked language modeling objective (MLM) of Devlin et al. (2019); (ii) an autoregressive causal language model (CLM), similar to Radford et al. (2019); and (iii) an autoregressive model trained to generate a sentence, followed by its translation in the language specified by a given control token, known as a translation language model (TLM) as proposed by Conneau and Lample (2019).<sup>1</sup> We provide an example datapoint for each pretraining objective in Table 3, Appendix A.
43
+
44
+
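+ As a rough illustration of these objectives (and not the exact fairseq preprocessing), the sketch below shows how a single French-English sentence pair could be formatted for the three single-stack objectives; the control token mirrors the <fr2en> convention of Table 3, and the masking is deliberately simplified.
+
+ ```python
+ # Illustrative sketch only: formatting one sentence pair for the CLM, MLM, and TLM
+ # objectives. Token names and masking are simplified stand-ins, not the real pipeline.
+ import random
+
+ def make_clm_example(doc_sentences):
+     # CLM: plain autoregressive modeling over a document-level token stream.
+     return " ".join(doc_sentences)
+
+ def make_mlm_example(sentence, mask_prob=0.15, mask_token="[MASK]"):
+     # MLM: randomly mask a fraction of tokens; the target is the original sentence.
+     tokens = sentence.split()
+     masked = [mask_token if random.random() < mask_prob else t for t in tokens]
+     return " ".join(masked), sentence
+
+ def make_tlm_example(src, tgt, control_token="<fr2en>"):
+     # TLM: source sentence, a language control token, then its translation.
+     return f"{src} {control_token} {tgt}"
+
+ pair = ("D'autres mesures de ce type vont être appliquées.",
+         "Other similar measures are going to be taken.")
+ print(make_tlm_example(*pair))
+ ```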
45
+
46
+ Pretraining conditions. Our core focus is on guaranteeing comparable conditions across the different pretraining objectives we consider. This entails that our datasets need to be doubly structured: both in documents for CLM pretraining, and as aligned bitexts for MT pretraining. Two datasets broadly match these criteria: the UNPC (Ziemski et al., 2016) and OpenSubtitles (OpSub; Tiedemann, 2012) corpora. The choice also narrows down the languages considered in this study: we take the set of languages present in both resources, namely the six languages in UNPC: Arabic (AR), Chinese (ZH), English (EN), French (FR), Russian (RU), and Spanish (ES).
47
+
48
+ To guarantee that models are trained on the same data, whenever a document is available in multiple languages, we greedily assign it to the least represented language pair thus far and discard all other possible language pairs where it could have contributed; we then discard documents which cannot be used as bitexts. This ensures that all documents are used exactly once for both document-level and bitext-level pretraining objectives. Dataset statistics are shown in Table 4, Appendix B.
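+
+ A minimal sketch of this greedy balancing step is given below; the document representation (a mapping from language to text) and the variable names are our own, not the actual preprocessing code.
+
+ ```python
+ # A minimal sketch of the greedy assignment described above: every multi-parallel
+ # document is used for exactly one language pair, chosen to keep the pair counts
+ # balanced; documents that cannot form a bitext are discarded.
+ from itertools import combinations
+
+ def assign_documents(documents, languages):
+     pair_counts = {pair: 0 for pair in combinations(sorted(languages), 2)}
+     assignments = []
+     for doc_id, versions in documents.items():
+         available = [p for p in pair_counts if p[0] in versions and p[1] in versions]
+         if not available:  # cannot be used as a bitext: discard
+             continue
+         pair = min(available, key=lambda p: pair_counts[p])  # least represented so far
+         pair_counts[pair] += 1
+         assignments.append((doc_id, pair))
+     return assignments, pair_counts
+
+ docs = {"d1": {"en": "...", "fr": "...", "zh": "..."}, "d2": {"en": "...", "ru": "..."}}
+ print(assign_documents(docs, ["en", "fr", "zh", "ru", "es", "ar"]))
+ ```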
49
+
50
+ To ensure a fair comparison, we control key variables, including tokenization (100k BPE pieces; Sennrich et al., 2016), the number of transformer layers (12), hidden dimensions (512), attention heads (8), and feedforward layer dimensions (2048). We perform 600k update steps, using the largest batch size that fits into GPU memory, deploy distributed training to reach a global batch size of 4096, and apply the Adam optimizer (Kingma and Ba, 2017). Owing to the computational requirements, we only train one seed for each of the five types of models considered.
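+
+ For reference, the controlled configuration described in this paragraph can be summarised as follows; the field names are ours for readability and are not literal fairseq flags.
+
+ ```python
+ # The controlled pretraining configuration from this section, gathered in one place.
+ # Field names are ours for readability; they are not literal fairseq flags.
+ PRETRAINING_CONFIG = {
+     "bpe_vocab_size": 100_000,   # shared BPE vocabulary (Sennrich et al., 2016)
+     "transformer_layers": 12,
+     "hidden_dim": 512,
+     "attention_heads": 8,
+     "ffn_dim": 2048,
+     "update_steps": 600_000,
+     "global_batch_size": 4096,   # reached through distributed training
+     "optimizer": "adam",
+     "seeds_per_objective": 1,    # a single pretraining run per objective
+ }
+ ```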
51
+
52
+ Downstream evaluation. The evaluations encompassed both sequence-level and token-level classification tasks using datasets tailored for sentiment analysis (SA), named entity recognition (NER), part-of-speech (POS) tagging, and natural language inference (NLI).
53
+
54
+ For SA, we utilized the Amazon review dataset (Hou et al., 2024) in English, Spanish, French, and Chinese; RuReviews (Smetanin and Komarov, 2019) for Russian; and ar_resreviews (ElSahar and El-Beltagy, 2015) for Arabic. While the datasets for most languages were pre-split, ar_resreviews required manual division into training, validation, and testing sets, using an 8:1:1 ratio.
55
+
56
+
57
+
58
+ For NER, we model the problem as an entity span extraction using a BIO scheme. In practice, we classify tokens into three basic categories: Beginning of an entity (B), Inside an entity (I), or Outside any entity (O). We use the MultiCoNER v2 dataset (Fetahu et al., 2023) for English, Spanish, French, and Chinese, MultiCoNER v1 (Malmasi et al., 2022) for Russian and the AQMAR Wikipedia NER corpus (Mohit et al., 2012a) for Arabic. Simplifying the NER task to these fundamental categories allows us to focus more on assessing the basic entity recognition capabilities of the models without the additional complexity of differentiating numerous entity types, which can vary significantly between languages and datasets.
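+
+ The label simplification can be sketched as below, assuming the usual CoNLL-style "B-TYPE"/"I-TYPE" tag format of the source corpora.
+
+ ```python
+ # Sketch of the simplification described above: typed BIO tags (e.g. "B-PER",
+ # "I-Facility") are collapsed to plain B / I / O so that only entity spans remain.
+ def simplify_bio(tags):
+     return [tag.split("-")[0] if tag != "O" else "O" for tag in tags]
+
+ print(simplify_bio(["B-PER", "I-PER", "O", "B-LOC"]))  # ['B', 'I', 'O', 'B']
+ ```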
59
+
60
+ For POS tagging, we utilized the Universal Dependencies (UD) 2.0 datasets (Nivre et al., 2020), selecting specific corpora tailored to each language to ensure both linguistic diversity and relevance. We select multiple UD treebanks per language, such that each language dataset comprises approximately 160,000 tokens, which are then split into training, validation, and testing segments with an 8:1:1 ratio.
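+
+ The 8:1:1 division used for ar_resreviews and for the assembled UD data can be sketched as follows; the shuffling seed is an assumption on our part.
+
+ ```python
+ # A minimal sketch of a deterministic 8:1:1 train/validation/test split.
+ import random
+
+ def split_8_1_1(examples, seed=0):
+     examples = list(examples)
+     random.Random(seed).shuffle(examples)
+     n_train, n_valid = int(0.8 * len(examples)), int(0.1 * len(examples))
+     return (examples[:n_train],
+             examples[n_train:n_train + n_valid],
+             examples[n_train + n_valid:])
+
+ train, valid, test = split_8_1_1(range(8350))   # e.g. the 8,350 ar_resreviews examples
+ print(len(train), len(valid), len(test))        # 6680 835 835
+ ```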
61
+
62
+ For NLI, we employed the XNLI dataset (Conneau et al., 2018) for the six languages. The XNLI dataset consists of sentence pairs translated from the MultiNLI dataset (Williams et al., 2018) into 15 languages, providing consistent annotations across languages. The task focuses on classifying the relationship between pairs of sentences into one of three categories: Entailment, Contradiction, or Neutral. Unlike the original cross-lingual design of XNLI, we conducted monolingual experiments for each language to evaluate the performance of our models individually in each linguistic context.
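+
+ A small sketch of how one XNLI example could be presented to the models in this monolingual setting is shown below; the separator token is an assumption, as the exact input formatting is not detailed here.
+
+ ```python
+ # Sketch of a monolingual NLI example: one premise-hypothesis pair and a 3-way label.
+ # The separator token is an assumption, not the documented input format.
+ NLI_LABELS = ["entailment", "contradiction", "neutral"]
+
+ def format_nli_input(premise, hypothesis, sep_token="</s>"):
+     return f"{premise} {sep_token} {hypothesis}"
+
+ example = {"premise": "The committee adopted the resolution.",
+            "hypothesis": "The resolution was rejected.",
+            "label": "contradiction"}
+ print(format_nli_input(example["premise"], example["hypothesis"]))
+ ```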
63
+
64
+ Supplementary details regarding data preprocessing for downstream experiments are available in Appendix B.
65
+
66
+ We evaluate the performances of the encoder output representations for the 2-MT and 2-LM models and of the last hidden representation before the vocabulary projection for the single-stack models.
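+
+ A hedged sketch of how such representations can be turned into a single vector for the sequence-level heads is given below; mean pooling over tokens is an assumption on our part, as the exact pooling operation is not spelled out here.
+
+ ```python
+ # Hedged sketch: pooling token-level hidden states (encoder outputs for the
+ # double-stack models, last hidden states for the single-stack models) into one
+ # sentence vector for a classification head. Mean pooling is an assumption.
+ import torch
+
+ def sentence_representation(hidden_states, attention_mask):
+     # hidden_states: (batch, seq_len, dim); attention_mask: (batch, seq_len)
+     mask = attention_mask.unsqueeze(-1).float()
+     return (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
+
+ hidden = torch.randn(2, 7, 512)                      # 512-dim hidden states, as pretrained
+ mask = torch.ones(2, 7, dtype=torch.long)
+ print(sentence_representation(hidden, mask).shape)   # torch.Size([2, 512])
+ ```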
67
+
68
+ The evaluation of the models involves two distinct experimental approaches to test the performance: probing and fine-tuning. In the probing experiments, only the parameters of the classification heads are adjusted. This method primarily tests the raw capability of the pre-trained models' embeddings to adapt to specific tasks with minimal parameter changes, preserving the underlying pre-trained network structure. Conversely, in the fine-tuning experiments, all parameters of the models are adjusted. This approach allows the entire model to adapt to the specifics of the task, potentially leading to higher performance at the cost of significantly altering the pre-trained weights.
69
+
70
+
71
+
72
+ For both experimental approaches, each model is trained for 10 epochs to ensure sufficient learning without overfitting. We optimize parameters with AdamW (Loshchilov and Hutter, 2017), with a constant learning rate of 0.0001 across all tasks and models. This setup was chosen to standardize the training process, providing a fair basis for comparing the performance outcomes across different models and tasks. We reproduce probing and fine-tuning for 5 seeds to ensure stability.
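+
+ The distinction between the two regimes can be sketched as below; the backbone and head modules are placeholders rather than the actual fairseq models, but the freezing logic and the AdamW settings follow the description above.
+
+ ```python
+ # Minimal sketch of the two regimes: probing freezes the pretrained backbone and
+ # trains only the classification head; fine-tuning updates all parameters.
+ # `backbone` and `head` are placeholder modules, not the actual pretrained models.
+ import torch
+ from torch import nn
+
+ backbone = nn.TransformerEncoder(
+     nn.TransformerEncoderLayer(d_model=512, nhead=8, dim_feedforward=2048),
+     num_layers=12,
+ )
+ head = nn.Linear(512, 3)  # e.g. a 3-class NLI head
+
+ def build_optimizer(probing: bool):
+     if probing:
+         for p in backbone.parameters():
+             p.requires_grad = False           # probing: the backbone stays frozen
+         params = list(head.parameters())
+     else:
+         params = list(backbone.parameters()) + list(head.parameters())
+     return torch.optim.AdamW(params, lr=1e-4)  # constant learning rate, as above
+
+ optimizer = build_optimizer(probing=True)
+ ```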
73
+
74
+ # 3 Results
75
+
76
+ Double-stack models. We first compare the performance of 2-LM and 2-MT across several key language processing tasks including SA, NER, POS tagging, and NLI. Results are shown in Tables 1a and 1b. The pretraining objectives play a significant role in shaping the models' effectiveness. Specifically, 2-MT, which is pretrained with a machine translation objective, consistently outperforms 2-LM, which utilizes a denoising objective. This pattern is consistent across all languages tested after fine-tuning as well as probing.
77
+
78
+ Single-stack models. Turning to the single-stack models (CLM, MLM, TLM), we find a somewhat more complex picture. In a probing context (cf. Table 2a), we find the CLM to be almost always the most effective, except for NLI in five languages and NER in Arabic, where it performs slightly less favorably compared to the MLM. As for fine-tuning (Table 2b), while the MLM generally ranks first on all POS, NER, and NLI datasets, the TLM is usually effective for SA. $^3$
79
+
80
+ <table><tr><td rowspan="2" colspan="2">Setup</td><td colspan="6">Languages</td></tr><tr><td>EN</td><td>ES</td><td>FR</td><td>ZH</td><td>RU</td><td>AR</td></tr><tr><td rowspan="2">SA</td><td>2-LM</td><td>42.86±0.86</td><td>42.80±0.69</td><td>43.00±0.60</td><td>40.41±1.02</td><td>65.83±0.70</td><td>70.88±1.62</td></tr><tr><td>2-MT</td><td>46.71±0.88</td><td>46.64±0.55</td><td>46.10±0.43</td><td>43.74±0.65</td><td>68.79±0.42</td><td>73.77±0.97</td></tr><tr><td rowspan="2">NER</td><td>2-LM</td><td>82.69±0.09</td><td>84.74±0.07</td><td>82.80±0.06</td><td>78.88±0.25</td><td>77.93±0.15</td><td>85.28±0.22</td></tr><tr><td>2-MT</td><td>89.47±0.06</td><td>90.54±0.04</td><td>89.41±0.10</td><td>88.78±0.09</td><td>83.39±0.22</td><td>89.70±0.18</td></tr><tr><td rowspan="2">POS</td><td>2-LM</td><td>78.85±0.29</td><td>78.12±0.25</td><td>81.57±0.32</td><td>66.09±0.25</td><td>77.93±0.12</td><td>47.68±0.10</td></tr><tr><td>2-MT</td><td>92.22±0.14</td><td>90.59±0.20</td><td>95.39±0.10</td><td>75.87±0.17</td><td>93.20±0.08</td><td>61.84±0.24</td></tr><tr><td rowspan="2">NLI</td><td>2-LM</td><td>48.56±0.01</td><td>49.31±0.01</td><td>48.33±0.01</td><td>38.81±0.01</td><td>48.34±0.01</td><td>45.11±0.01</td></tr><tr><td>2-MT</td><td>60.50±0.01</td><td>59.56±0.01</td><td>59.00±0.01</td><td>59.01±0.01</td><td>59.83±0.01</td><td>59.58±0.01</td></tr></table>
81
+
82
+ (a) Probing
83
+
84
+ <table><tr><td rowspan="2" colspan="2">Setup</td><td colspan="6">Languages</td></tr><tr><td>EN</td><td>ES</td><td>FR</td><td>ZH</td><td>RU</td><td>AR</td></tr><tr><td rowspan="2">SA</td><td>2-LM</td><td>52.26±0.55</td><td>52.89±0.69</td><td>52.99±0.59</td><td>48.64±0.36</td><td>73.89±0.43</td><td>79.74 ±1.36</td></tr><tr><td>2-MT</td><td>54.76±0.58</td><td>55.56±0.49</td><td>54.75±0.42</td><td>50.55±0.68</td><td>74.77±0.50</td><td>81.49 ±1.49</td></tr><tr><td rowspan="2">NER</td><td>2-LM</td><td>91.13±0.12</td><td>91.82±0.21</td><td>91.58±0.10</td><td>92.30±0.10</td><td>85.34±0.39</td><td>89.05 ±0.13</td></tr><tr><td>2-MT</td><td>93.46±0.09</td><td>94.22±0.09</td><td>93.84±0.04</td><td>93.75±0.32</td><td>89.07±0.11</td><td>93.26 ±0.15</td></tr><tr><td rowspan="2">POS</td><td>2-LM</td><td>92.42±0.28</td><td>90.41±0.16</td><td>95.21±0.13</td><td>82.30±0.48</td><td>95.36±0.20</td><td>69.57 ±0.24</td></tr><tr><td>2-MT</td><td>95.98±0.08</td><td>94.29±0.05</td><td>98.05±0.17</td><td>90.18±0.15</td><td>97.00±0.07</td><td>74.47 ±0.08</td></tr><tr><td rowspan="2">NLI</td><td>2-LM</td><td>57.76±0.01</td><td>57.87±0.01</td><td>56.77±0.01</td><td>48.05±0.01</td><td>56.43±0.01</td><td>0.5377±0.01</td></tr><tr><td>2-MT</td><td>61.96±0.01</td><td>61.71±0.01</td><td>60.09±0.01</td><td>53.72±0.01</td><td>59.00±0.01</td><td>0.5693±0.01</td></tr></table>
85
+
86
+ (b) Fine-tuning
87
+ Table 1: Accuracy $(\times 100)$ of double-stack models (± s.d. over 5 runs).
88
+
89
+ <table><tr><td rowspan="2" colspan="2">Setup</td><td colspan="6">Languages</td></tr><tr><td>EN</td><td>ES</td><td>FR</td><td>ZH</td><td>RU</td><td>AR</td></tr><tr><td rowspan="3">SA</td><td>CLM</td><td>35.14±0.92</td><td>35.66±1.10</td><td>34.14±1.63</td><td>33.62±0.83</td><td>57.57±1.11</td><td>67.71±2.24</td></tr><tr><td>MLM</td><td>34.26±1.34</td><td>34.82±1.58</td><td>33.90±1.12</td><td>32.52±1.65</td><td>54.55±1.86</td><td>65.94±3.30</td></tr><tr><td>TLM</td><td>29.68±2.22</td><td>32.20±3.07</td><td>32.26±2.34</td><td>29.88±4.17</td><td>56.45±1.81</td><td>64.45±1.81</td></tr><tr><td rowspan="3">NER</td><td>CLM</td><td>80.27±0.12</td><td>82.59±0.06</td><td>80.38±0.12</td><td>77.92±0.28</td><td>76.39±0.03</td><td>84.17±0.08</td></tr><tr><td>MLM</td><td>78.77±0.02</td><td>81.61±0.00</td><td>79.11±0.01</td><td>70.67±0.10</td><td>76.34±0.01</td><td>84.29±0.00</td></tr><tr><td>TLM</td><td>79.10±0.06</td><td>81.94±0.13</td><td>79.56±0.14</td><td>77.26±0.24</td><td>76.39±0.02</td><td>84.26±0.02</td></tr><tr><td rowspan="3">POS</td><td>CLM</td><td>69.06±0.38</td><td>70.32±0.50</td><td>76.67±0.46</td><td>51.40±0.47</td><td>59.64±0.62</td><td>43.49±0.40</td></tr><tr><td>MLM</td><td>37.92±0.61</td><td>44.26±0.11</td><td>46.89±0.32</td><td>31.16±0.21</td><td>34.62±0.16</td><td>34.71±0.94</td></tr><tr><td>TLM</td><td>62.96±1.02</td><td>62.08±1.99</td><td>63.89±1.06</td><td>50.46±0.53</td><td>54.27±0.87</td><td>40.94±1.16</td></tr><tr><td rowspan="3">NLI</td><td>CLM</td><td>42.32±0.02</td><td>42.99±0.01</td><td>43.43±0.02</td><td>40.55±0.02</td><td>40.06±0.02</td><td>41.99±0.01</td></tr><tr><td>MLM</td><td>45.64±0.02</td><td>44.49±0.01</td><td>43.11±0.02</td><td>42.80±0.01</td><td>43.16±0.01</td><td>43.55±0.01</td></tr><tr><td>TLM</td><td>38.36±0.02</td><td>41.95±0.02</td><td>41.89±0.01</td><td>38.93±0.04</td><td>41.20±0.02</td><td>39.50±0.02</td></tr></table>
90
+
91
+ (a) Probing
92
+
93
+ <table><tr><td rowspan="2" colspan="2">Setup</td><td colspan="7">Languages</td></tr><tr><td>EN</td><td>ES</td><td>FR</td><td>ZH</td><td>RU</td><td>AR</td><td></td></tr><tr><td rowspan="3">SA</td><td>CLM</td><td>55.23±0.72</td><td>47.81±15.55</td><td>54.84± 0.62</td><td>51.18±0.94</td><td>75.07±0.21</td><td>66.18±21.74</td><td></td></tr><tr><td>MLM</td><td>55.22±0.92</td><td>55.67± 1.77</td><td>54.08± 2.43</td><td>51.00±1.07</td><td>74.53±1.36</td><td>75.00± 3.48</td><td></td></tr><tr><td>TLM</td><td>55.14±0.92</td><td>55.84± 0.59</td><td>55.22± 0.98</td><td>51.46±0.53</td><td>75.31±0.57</td><td>72.75± 2.25</td><td></td></tr><tr><td rowspan="3">NER</td><td>CLM</td><td>89.91±0.33</td><td>91.42± 0.15</td><td>90.65± 0.17</td><td>89.97±0.14</td><td>83.20±0.31</td><td>87.50± 2.22</td><td></td></tr><tr><td>MLM</td><td>93.31±0.57</td><td>93.93± 0.60</td><td>93.67± 0.30</td><td>92.99±0.99</td><td>87.49±0.78</td><td>85.78± 3.30</td><td></td></tr><tr><td>TLM</td><td>89.88±0.06</td><td>91.45± 0.25</td><td>90.49± 0.23</td><td>90.10±0.14</td><td>83.76±0.63</td><td>84.29± 0.00</td><td></td></tr><tr><td rowspan="3">POS</td><td>CLM</td><td>91.72±0.14</td><td>90.51± 0.13</td><td>95.75± 0.10</td><td>78.61±0.31</td><td>85.50±0.15</td><td>57.43± 1.63</td><td></td></tr><tr><td>MLM</td><td>96.00±0.15</td><td>94.45± 0.13</td><td>97.94± 0.20</td><td>89.96±0.71</td><td>96.69±0.13</td><td>74.35± 0.53</td><td></td></tr><tr><td>TLM</td><td>91.68±0.19</td><td>90.38± 0.20</td><td>86.99±19.40</td><td>78.50±0.52</td><td>85.71±0.18</td><td>59.11± 0.50</td><td></td></tr><tr><td rowspan="3">NLI</td><td>CLM</td><td>48.84±0.14</td><td>56.46± 0.03</td><td>55.45± 0.03</td><td>49.70±0.06</td><td>55.23±0.02</td><td>49.02± 0.07</td><td></td></tr><tr><td>MLM</td><td>59.41±0.01</td><td>57.54± 0.04</td><td>55.04± 0.06</td><td>47.96±0.03</td><td>57.80±0.01</td><td>53.60± 0.01</td><td></td></tr><tr><td>TLM</td><td>49.76±0.10</td><td>52.12± 0.11</td><td>54.20± 0.10</td><td>49.03±0.04</td><td>53.60±0.04</td><td>44.39± 0.10</td><td></td></tr></table>
94
+
95
+ (b) Fine-tuning
96
+
97
+ Table 2: Accuracy $(\times 100)$ of single-stack models (± s.d. over 5 runs).
98
+
99
+ Discussion. A first global observation that we can make for these results is that single-stack and double-stack models appear to behave differently. While the MT objective yields the highest performances for BART-type models, the downstream performances of the TLM do not really stand out compared to the CLM in probing and the MLM in fine-tuning scenarios. It is important to note that the performances stem at least in part from the architecture itself: 2-MT and 2-LM both consistently outperform all single-stack models in probing. However, it is crucial to acknowledge the limitations of our study, as we only conducted one pretraining round for all the objectives. Hence, this evidence should be interpreted as tentative at best.
100
+
101
+ Fine-tuning also tends to minimize the difference between single-stack and double-stack models—which suggests that the higher quality of double-stack representations could be an artifact of training limitations. Moreover, the relative ranks of the three single-stack models fluctuate much more than what we see for the double-stack models, owing in no small part to the often substantial variation across seeds for single-stack models. We therefore conjecture that while a translation objective can yield a clear training signal towards semantically informed representations, this comes with two caveats: first, the signal can only be leveraged with dedicated separate modeling of source and target (viz. double-stack models); second, this advantage is much less consequential when fine-tuning.
102
+
103
+
104
+
105
+ # 4 Related works
106
+
107
+ Multilingual foundation models have flourished in recent years (a.o., Conneau and Lample, 2019; Liu et al., 2020; Xue et al., 2021; Kale et al., 2021; Fang et al., 2021; Chi et al., 2021; Alves et al., 2024; Ustun et al., 2024), and with them so have studies of their representations (Conneau et al., 2020; Siddhant et al., 2020; Choudhury and Deshpande, 2021; Fierro and Søgaard, 2022; Hammerl et al., 2023 a.o.). All of these works, however, fail to control for some of the most crucial factors, such as ensuring that all models are trained on comparable amounts of data.
108
+
109
+ This work is specifically related to Conneau and Lample (2019), which also compares MLM, CLM, and TLM but does not normalize the training data. Another point of comparison is Ji et al. (2024), which studies the impact of MT continued pretraining in BART on cross-lingual downstream tasks. Monolingual evaluation of multilingual systems has also been broached a.o. by Rust et al. (2021).
110
+
111
+
112
+
113
+ # 5 Conclusion
114
+
115
+ This paper conducts an empirical study of how pretraining conditions of multilingual models impact downstream performances in probing and finetuning scenarios. Despite the inherent limitations that stem from our stringent data requirements, our experiments offer a novel perspective that highlights directions for future inquiry into how multilingual foundation models ought to be pretrained. We observe that double-stack BART-based models fare much better than single-stack models in probing scenarios, but the difference is overall less clear when it comes to fine-tuning. We also find some tentative evidence that translation objectives can be highly effective for model pretraining in precise circumstances: Namely, the most effective model on downstream tasks among those we experimented with is an MT-pretrained BART-like model, which outperforms both a more traditional denoising objective for BART as well as decoder-only CLM and encoder-only MLM models. This would suggest that translation can serve as a powerful pretraining objective, although it is currently under-explored. $^4$
116
+
117
+ Another crucial aspect of our study is that we present strictly comparable models, trained on comparable data, with comparable parameter counts and unified implementations. While this entails some limitations, especially with regard to the scale of models and data used, we nonetheless believe that a strict comparison can help discriminate between the various factors at play in other works. Here, we find clear evidence that CLM pretraining objectives, such as those used in GPT, outperform MLM-based models, such as BERT, in probing scenarios; we are also able to isolate and highlight how the optimal choice of pretraining objective is contingent on the architecture being employed.
118
+
119
+ For future work, we recommend exploring multitask learning during pretraining by combining objectives like translation, denoising, and language modeling; in such cases, models could harness the strengths of each task to become more robust and versatile. Additionally, investigating training-free evaluation methods can offer insights into a model's inherent capabilities without the variability introduced by fine-tuning.
120
+
121
+
122
+
123
+ # Acknowledgments
124
+
125
+ We thank Alessandro Raganato and our colleagues at the Helsinki-NLP group for useful discussions throughout this project, as well as the three anonymous reviewers for their comments.
126
+
127
+ This project has received funding from the European Union's Horizon Europe research and innovation programme under Grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546], and partially funded by the French National Research Agency [grant ANR-23-IAS1-0001]. The contents of this publication are the sole responsibility of its authors and do not necessarily reflect the opinion of the European Union.
128
+
129
+ The authors wish to thank CSC-IT Center for Science, Finland, for the generous computational resources on the Puhti supercomputer and LUMI supercomputer through the LUMI extreme scale access (MOOMIN and LumiNMT). Some of the experiments were performed using the Jean Zay and Adastra clusters from GENCI-IDRIS [grant 2022 A0131013801].
130
+
131
+ # Limitations
132
+
133
+ This study employs models that are small in parameter count by the standards of the current era of large language models. Such a constraint potentially hinders the generalizability of our results to much larger architectures that are capable of handling a broader array of linguistic nuances. Furthermore, our study focuses on a small selected group of languages and specific NLP tasks. This focus might limit the applicability of our findings to other linguistic contexts or more complex real-world applications where diverse language phenomena or different task demands play a crucial role.
134
+
135
+ Another limitation is our reliance on specific corpora. The datasets utilized, while valuable, represent a potential source of selection bias. They may not fully encompass the vast diversity of global language use, thus skewing the model training and evaluation. Such a bias could affect the robustness and effectiveness of the pretrained models when applied to languages that are not well-represented in the training data.
136
+
137
+ # References
138
+
139
+ AllSet Learning. 2023. Chinese grammar wiki.
140
+ Duarte M. Alves, José Pombal, Nuno M. Guerreiro, Pedro H. Martins, João Alves, Amin Farajian, Ben Peters, Ricardo Rei, Patrick Fernandes, Sweta Agrawal, Pierre Colombo, José G. C. de Souza, and André F. T. Martins. 2024. Tower: An open multilingual large language model for translation-related tasks.
141
+ Steven Cao, Nikita Kitaev, and Dan Klein. 2020. Multilingual alignment of contextual word representations. In International Conference on Learning Representations.
142
+ Zewen Chi, Li Dong, Shuming Ma, Shaohan Huang, Saksham Singhal, Xian-Ling Mao, Heyan Huang, Xia Song, and Furu Wei. 2021. mT6: Multilingual pretrained text-to-text transformer with translation pairs. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1671-1683, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
143
+ Monojit Choudhury and Amit Deshpande. 2021. How linguistically fair are multilingual pre-trained language models? Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12710-12718.
144
+ Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
145
+ Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.
146
+ Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Emerging crosslingual structure in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6022-6034, Online. Association for Computational Linguistics.
147
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
148
+ Hady ElSahar and Samhaa R El-Beltagy. 2015. Building large arabic multi-domain resources for sentiment analysis. In International conference on intelligent text processing and computational linguistics, pages 23-34. Springer.
149
+
150
+
151
+ Yuwei Fang, Shuohang Wang, Zhe Gan, Siqi Sun, and Jingjing Liu. 2021. Filter: An enhanced fusion method for cross-lingual language understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12776-12784.
152
+ Besnik Fetahu, Zhiyu Chen, Sudipta Kar, Oleg Rokhlenko, and Shervin Malmasi. 2023. Multi-CoNER v2: a large multilingual dataset for fine-grained and noisy named entity recognition. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2027-2051, Singapore. Association for Computational Linguistics.
153
+ Constanza Fierro and Anders Søgaard. 2022. Factual consistency of multilingual pretrained language models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3046-3052, Dublin, Ireland. Association for Computational Linguistics.
154
+ Kyle Gorman and Steven Bedrick. 2019. We need to talk about standard splits. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2786-2791, Florence, Italy. Association for Computational Linguistics.
155
+ Bruno Guillaume, Marie-Catherine de Marneffe, and Guy Perrier. 2019. Conversion et améliorations de corpus du français annotés en Universal Dependencies [conversion and improvement of Universal Dependencies French corpora]. *Traitement Automatique des Langues*, 60(2):71-95.
156
+ Katharina Hämmerl, Alina Fastowski, Jindrich Libovický, and Alexander Fraser. 2023. Exploring anisotropy and outliers in multilingual language models for cross-lingual semantic sentence similarity. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7023-7037, Toronto, Canada. Association for Computational Linguistics.
157
+ Yupeng Hou, Jiacheng Li, Zhankui He, An Yan, Xiusi Chen, and Julian McAuley. 2024. Bridging language and items for retrieval and recommendation. arXiv preprint arXiv:2403.03952.
158
+ Shaoxiong Ji, Timothee Mickus, Vincent Segonne, and Jorg Tiedemann. 2024. Can machine translation bridge multilingual pretraining and cross-lingual transfer learning? In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 2809-2818, Torino, Italia. ELRA and ICCL.
159
+ Mihir Kale, Aditya Siddhant, Rami Al-Rfou, Linting Xue, Noah Constant, and Melvin Johnson. 2021. nmT5 - is parallel data still relevant for pre-training massively multilingual language models? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 683-691, Online. Association for Computational Linguistics.
160
+
161
+
162
+ Diederik P. Kingma and Jimmy Ba. 2017. Adam: A method for stochastic optimization.
163
+ John Lee, Herman Leung, and Keying Li. 2017. Towards Universal Dependencies for learner Chinese. In Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017), pages 67-71, Gothenburg, Sweden. Association for Computational Linguistics.
164
+ Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
165
+ Yixuan Li, Gerdes Kim, Guillaume Bruno, and Dan Zeman. 2022. Ud chinese patentchar.
166
+ Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.
167
+ Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
168
+ Olga Lyashevskaya, Olga Rudina, Natalia Vlasova, and Anna Zhuravleva. 2018. Ud russian taiga.
169
+ Shervin Malmasi, Anjie Fang, Besnik Fetahu, Sudipta Kar, and Oleg Rokhlenko. 2022. MultiCoNER: A large-scale multilingual dataset for complex named entity recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3798-3809, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
170
+ Ryan McDonald, Joakim Nivre, Yvonne Quirmbach-Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar Täckström, Claudia Bedini, Núria Bertomeu Castelló, and Jungmee Lee. 2013. Universal Dependency annotation for multilingual parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 92-97, Sofia, Bulgaria. Association for Computational Linguistics.
171
+ Behrang Mohit, Nathan Schneider, Rishav Bhowmick, Kemal Oflazer, and Noah A. Smith. 2012a. Recall-oriented learning of named entities in Arabic Wikipedia. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 162-173, Avignon, France. Association for Computational Linguistics.
172
+
173
+ Behrang Mohit, Nathan Schneider, Rishav Bhowmick, Kemal Oflazer, and Noah A Smith. 2012b. Recall-oriented learning of named entities in arabic wikipedia. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 162-173.
174
+ Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. 2022. Crosslingual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786.
175
+ Joakim Nivre, Željko Agić, Lars Ahrenberg, Lene Antonsen, Maria Jesus Aranzabe, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Eckhard Bick, Cristina Bosco, Gosse Bouma, Sam Bowman, Aljoscha Burchardt, Marie Candito, Gauthier Caron, Gülsen Cebiroğlu Eryiğit, Giuseppe G. A. Celano, Savas Cetin, Fabricio Chalub, Jinho Choi, Yongseok Cho, Silvie Cinkova, Çağr Üçoltekin, Miriam Connor, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Marhaba Eli, Ali Elkahky, Tomaz Erjavec, Richard Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Katarina Gajdošová, Daniel Galbraith, Marcos Garcia, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökirmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta Gonzáles Saavedra, Matias Grioni, Normunds Grūzītis, Bruno Guillàume, Nizar Habash, Jan Hajic, Jan Hajic jr., Linh Hā My, Kim Harris, Dag Haug, Barbora Hladka, Jaroslava Hlaváčová, Petter Hohle, Radu Ion, Elena Irimia, Anders Johannsen, Fredrik Jørgensen, Hüner Kasikara, Hiroshi Kanayama, Jenna Kanerva, Tolga Kayadelen, Václava Kettnerová, Jesse Kirchner, Natalia Kotsyba, Simon Krek, Sookyoung Kwak, Veronika Laippala, Lorenzo Lambertino, Tatiana Lando, Phoêng Lê Hóng, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, Nikola Ljubesic, Olga Loginova, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, Catalina Máränduc, David Mareček, Katrin Marheinecke, Héctor Martínez Alonso, André Martins, Jan Mašek, Yuji Matsumoto, Ryan McDonald, Gustavo Mendonça, Anna Missilä, Verginica Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Shunsuke Mori, Bohdan Moskalevskyi, Kadri Muischnek, Nina Mustafina, Kaili Müürisep, Pinkey Nainwani, Anna Nedoluzhko, Loêng Nguyen Thi, Huyen Nguyen Thi Minh, Vitaly Nikolaev, Rattima Nitisaroj, Hanna Nurmi, Stina Ojala, Petya Osnova, Lilja Øvrelid, Elena Pascual, Marco Passarotti, Cenel Augusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Emily Pitler, Barbara Plank, Martin Popel, Lauma Pretkalnina, Prokopis Prokopidis, Tiina Puolakainen, Sampo Pyysalo, Alexandre Rademaker
176
+
177
+ Livy Real, Siva Reddy, Georg Rehm, Larissa Rinaldi, Laura Rituma, Rudolf Rosa, Davide Rovati, Shadi Saleh, Manuela Sanguinetti, Baiba Saulite, Yanin Sawanakunanon, Sebastian Schuster, Djame Seddah, Wolfgang Seeker, Mojgan Seraji, Lena Shakurova, Mo Shen, Atsuko Shimada, Muh Shohibussirri, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simko, Maria Simková, Kiril Simov, Aaron Smith, Antonio Stella, Jana Strnadová, Alane Suhr, Umut Sulubacak, Zsolt Szántó, Dima Taji, Takaaki Tanaka, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zdenka Urešová, Larraitz Uria, Hans Uszkoreit, Gertjan van Noord, Viktor Varga, Veronika Vincze, Jonathan North Washington, Zhuoran Yu, Zdeněk Žabokrtský, Daniel Zeman, and Hanzhi Zhu. 2017. Universal dependencies 2.0 - CoNLL 2017 shared task development and test data. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (UFAL), Faculty of Mathematics and Physics, Charles University.
178
+ Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Jan Hajic, Christopher D Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal dependencies v2: An evergrowing multilingual treebank collection. arXiv preprint arXiv:2004.10643.
179
+ Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
180
+ Peng Qi, Koichi Yasuoka, and Dan Zeman. 2019. Ud chinese gsdsimp.
181
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
182
+ Phillip Rust, Jonas Pfeiffer, Ivan Vulic, Sebastian Ruder, and Iryna Gurevych. 2021. How good is your tokenizer? on the monolingual performance of multilingual language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3118-3135, Online. Association for Computational Linguistics.
183
+ Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
184
+ Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Ari, Jason Riesa, Ankur Bapna, Orhan Firat, and Karthik Raman. 2020. Evaluating the cross-lingual effectiveness of massively multilingual neural machine translation. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 8854-8861.
185
+
186
+
187
+ Sergey Smetanin and Michail Komarov. 2019. Sentiment analysis of product reviews in russian using convolutional neural networks. In 2019 IEEE 21st Conference on Business Informatics (CBI), volume 01, pages 482-486.
188
+ Jörg Tiedemann. 2012. Parallel data, tools and interfaces in opus. In Proceedings of LREC, volume 2012, pages 2214-2218.
189
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
190
+ Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, and Ting Liu. 2019. Cross-lingual bert transformation for zero-shot dependency parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5721-5727.
191
+ Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
192
+ Tak-sum Wong, Kim Gerdes, Herman Leung, and John Lee. 2017. Quantitative comparative syntax on the Cantonese-Mandarin parallel dependency treebank. In Proceedings of the Fourth International Conference on Dependency Linguistics (Depling 2017), pages 266-275, Pisa, Italy. Linköping University Electronic Press.
193
+ Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of bert. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844.
194
+ Shijie Wu and Mark Dredze. 2020. Do explicit alignments robustly improve multilingual encoders? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4471-4482.
195
+ Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. Association for Computational Linguistics.
196
+
197
+
198
+ Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. Language Resources and Evaluation, 51(3):581-612.
199
+ Dan Zeman, Kirian Guiller, and Bruno Guillaume. 2023. Ud chinese beginner.
200
+ Otakar Smrz, Viktor Bielicky, Iveta Kourilova, Jakub Kracmar, and Petr Zemanek. 2008. Prague Arabic dependency treebank: A word on the million words.
201
+ Michal Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The United Nations parallel corpus v1.0. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3530-3534, Portož, Slovenia. European Language Resources Association (ELRA).
202
+ Ahmet Üstün, Viraat Aryabumi, Zheng-Xin Yong, WeiYin Ko, Daniel D'souza, Gbemileke Onilude, Neel Bhandari, Shivalika Singh, Hui-Lee Ooi, Amr Kayid, Freddie Vargus, Phil Blunsom, Shayne Longpre, Niklas Muennighoff, Marzieh Fadaee, Julia Kreutzer, and Sara Hooker. 2024. Aya model: An instruction finetuned open-access multilingual language model. arXiv preprint arXiv:2402.07827.
203
+
204
+ # A Overview of pretraining objectives
205
+
206
+ Table 3 displays an example data point for all pretraining objectives we consider. In principle, the CLM is a document-level objective, i.e., the full document would be used as an input rather than the two sentences we show here.
207
+
208
+ # B Datasets statistics
209
+
210
+ An overview of the volume of data available for pretraining is displayed in Table 4. The majority of the data were used for training.
211
+
212
+ In Table 5, we present an overview of the datasets used for downstream evaluation.
213
+
214
+ # C Detailed results
215
+
216
+ In Table 6 and Table 7, we present the macro-F1 scores of the models in the downstream evaluation.
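+
+ For reference, macro F1 is the unweighted mean of per-class F1 scores; a toy computation with scikit-learn is shown below.
+
+ ```python
+ # Macro F1 as reported in Tables 6 and 7: the unweighted mean of per-class F1 scores,
+ # computed here with scikit-learn on a toy 3-class example.
+ from sklearn.metrics import f1_score
+
+ y_true = [0, 0, 1, 1, 2, 2]
+ y_pred = [0, 1, 1, 1, 2, 0]
+ print(f1_score(y_true, y_pred, average="macro"))
+ ```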
217
+
218
+ <table><tr><td>Objective</td><td>Source input</td><td>Target output</td></tr><tr><td>2-LM</td><td>_D&#x27;_autres _mesures _de _ce _type _vont _été [MASK] [MASK], _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, _de _Sin tis _et _de [MASK] _du _voyage (_ « C am min anti »). &lt;/s&gt;</td><td>&lt;s&gt; _D&#x27;_autres _mesures _de _ce _type _vont _été _appliquées, _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, _de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »). &lt;/s&gt;</td></tr><tr><td>2-MT</td><td>&lt;fr&gt; _D&#x27;_autres _mesures _de _ce _type _vont _été _appliquées, _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, _de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »).</td><td>&lt;s&gt; _Other _similar _measures _are _going _to be _taken in _cooperation _with _other _Rom a, _Sin ti _and _Travel lers (_ &quot; C am min anti &quot;) _associations.</td></tr><tr><td>CLM</td><td>... Divers _accords _ad _hoc _ont _été _conclus à _cet _effet _par _le _Ministère _de _l&#x27;éducation _et _l&#x27;as _sociation _Op era _Nom ad i. _D&#x27;_autres _mesures _de _ce _type _vont _été _appliquées, _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, _de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »). ...</td><td>... _accords _ad _hoc _ont _été _conclus à _cet _effet _par _le _Ministère _de _l&#x27;éducation _et _l&#x27;as _sociation _Op era _Nom ad i. _D&#x27;_autres _mesures _de _ce _type _vont _été _appliquées, _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »). ...</td></tr><tr><td>TLM</td><td>_D&#x27;_autres _mesures _de _ce _type _vont _été _appliquées, _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, _de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »). &lt;fr2en&gt; Other _similar _measures _are _going _to be _taken_in _cooperation _with _other _Rom a, _Sin ti _and _Travel lers (_ &quot; C am min anti &quot;) _associations.</td><td>mesures _de _ce _type _vont _été _appliquées, _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »). &lt;fr2en&gt; Other _similar _measures _are _going _to be _taken_in _cooperation _with _other _Rom a, _Sin ti and _Travel lers (_ &quot; C am min anti &quot;) _associations. &lt;/s&gt;</td></tr><tr><td>MLM</td><td>&lt;s&gt; _D&#x27;_autres _mesures _de _ce _type _vont _été [MASK] [MASK], _en _coopération _avec _d&#x27;_autres _associations _de _Rom s, _de _Sin tis _et _de [MASK] _du _voyage (_ « C am min anti »). &lt;/s&gt;</td><td>&lt;s&gt; _D&#x27;_autres _mesures _de _ce _type _vont _été _appliquées ,_en _coopération _avec _d&#x27;_autres _associations _de _Rom s, de _Sin tis _et _de _gens _du _voyage (_ « C am min anti »). &lt;/s&gt;</td></tr></table>
219
+
220
+ Table 3: Overview of the different objectives considered in this study. Top two rows: double-stack (encoder-decoder) models; bottom three rows: single-stack (encoder-only or decoder-only) models.
221
+
222
+ <table><tr><td></td><td>Train</td><td>Validation</td><td>Test</td><td>Total</td></tr><tr><td>UNPC</td><td>114 376 177</td><td>76 303</td><td>40 712</td><td>114 493 192</td></tr><tr><td>OpSub</td><td>81 622 353</td><td>359 035</td><td>77 342</td><td>82 058 730</td></tr><tr><td>Total</td><td>195 998 530</td><td>435 338</td><td>118 054</td><td>196 551 922</td></tr></table>
223
+
224
+ Table 4: Number of sentences in pretraining corpora.
225
+
226
+ <table><tr><td>Task</td><td>Language</td><td>Dataset</td><td>Class Count</td><td>Train</td><td>Validation</td><td>Test</td><td>Total</td></tr><tr><td rowspan="6">SA</td><td>EN</td><td></td><td>5</td><td>200000</td><td>5000</td><td>5000</td><td>210000</td></tr><tr><td>ES</td><td>Amazon Review (Hou et al., 2024)</td><td>5</td><td>200000</td><td>5000</td><td>5000</td><td>210000</td></tr><tr><td>FR</td><td></td><td>5</td><td>200000</td><td>5000</td><td>5000</td><td>210000</td></tr><tr><td>ZH</td><td></td><td>5</td><td>200000</td><td>5000</td><td>5000</td><td>210000</td></tr><tr><td>RU</td><td>RuReviews (Smetanin and Komarov, 2019)</td><td>3</td><td>85601</td><td>2143</td><td>2137</td><td>89881</td></tr><tr><td>AR</td><td>ar_resreviews (ElSahar and El-Beltagy, 2015)</td><td>2</td><td>6680</td><td>835</td><td>835</td><td>8350</td></tr><tr><td rowspan="6">NER</td><td>EN</td><td>MultiCoNER v2 (Fetahu et al., 2023)</td><td>3</td><td>253011</td><td>13323</td><td>3773671</td><td>4040005</td></tr><tr><td>ES</td><td>MultiCoNER v2</td><td>3</td><td>262814</td><td>13462</td><td>3925900</td><td>4202176</td></tr><tr><td>FR</td><td>MultiCoNER v2</td><td>3</td><td>247743</td><td>13062</td><td>3742924</td><td>4003729</td></tr><tr><td>ZH</td><td>MultiCoNER v2</td><td>3</td><td>245606</td><td>12816</td><td>489605</td><td>748027</td></tr><tr><td>RU</td><td>MultiCoNER v1 (Malmasi et al., 2022)</td><td>3</td><td>242384</td><td>12787</td><td>2061318</td><td>2316489</td></tr><tr><td>AR</td><td>AQMAR Wikipedia NER corpus (Mohit et al., 2012b)</td><td>3</td><td>57053</td><td>8615</td><td>8185</td><td>73853</td></tr><tr><td rowspan="11">POS</td><td>EN</td><td>UD_English-GUM (Zeldes, 2017)</td><td>16</td><td>128391</td><td>16070</td><td>15554</td><td>160015</td></tr><tr><td>ES</td><td>UD_Spanish-GSD (McDonald et al., 2013)</td><td>16</td><td>127459</td><td>16916</td><td>15645</td><td>160020</td></tr><tr><td>FR</td><td>UD_French-GSD (Guillaume et al., 2019)</td><td>15</td><td>127638</td><td>16207</td><td>16167</td><td>160012</td></tr><tr><td></td><td>UD_Chinese-Beginner (Zeman et al., 2023; AllSet Learning, 2023)+</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>UD_Chinese-PUD (Nivre et al., 2017)+</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>ZH</td><td>UD_Chinese-HK (Wong et al., 2017)+</td><td>16</td><td>128935</td><td>15680</td><td>15758</td><td>160373</td></tr><tr><td></td><td>UD_Chinese-CFL (Lee et al., 2017)+</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>UD_Chinese-PatentChar (Li et al., 2022)+</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>UD_Chinese-GSDTmp (Qi et al., 2019)</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>RU</td><td>UD_Russian-Taiga (Lyashevskaya et al., 2018)</td><td>16</td><td>127647</td><td>16175</td><td>16184</td><td>160006</td></tr><tr><td>AR</td><td>UD_Arabic-PADT (Zemánek, 2008)</td><td>16</td><td>127552</td><td>16608</td><td>15848</td><td>160008</td></tr><tr><td rowspan="6">NLI</td><td>EN</td><td></td><td>3</td><td>392702</td><td>2490</td><td>5010</td><td>400202</td></tr><tr><td>ES</td><td></td><td>3</td><td>392702</td><td>2490</td><td>5010</td><td>400202</td></tr><tr><td>FR</td><td>XNLI (Conneau et al., 
2018)</td><td>3</td><td>392702</td><td>2490</td><td>5010</td><td>400202</td></tr><tr><td>ZH</td><td></td><td>3</td><td>392702</td><td>2490</td><td>5010</td><td>400202</td></tr><tr><td>RU</td><td></td><td>3</td><td>392702</td><td>2490</td><td>5010</td><td>400202</td></tr><tr><td>AR</td><td></td><td>3</td><td>392702</td><td>2490</td><td>5010</td><td>400202</td></tr></table>
227
+
228
+ Table 5: Statistics of datasets used for downstream evaluation tasks.
229
+
230
+ <table><tr><td rowspan="2">Task</td><td rowspan="2">Model</td><td rowspan="2">EN</td><td rowspan="2">ES</td><td colspan="5">Languages</td></tr><tr><td>FR</td><td>ZH</td><td>RU</td><td>AR</td><td></td></tr><tr><td rowspan="5">SA</td><td>2-LM</td><td>0.4130±0.0118</td><td>0.4120±0.0160</td><td>0.4166±0.0076</td><td>0.3859±0.0156</td><td>0.6599±0.0101</td><td>0.6343±0.0232</td><td></td></tr><tr><td>2-MT</td><td>0.4588±0.0092</td><td>0.4554±0.0053</td><td>0.4448±0.0158</td><td>0.4260±0.0070</td><td>0.6935±0.0052</td><td>0.6864±0.0105</td><td></td></tr><tr><td>CLM</td><td>0.3183±0.0099</td><td>0.3351±0.0198</td><td>0.3066±0.0192</td><td>0.3104±0.0135</td><td>0.5693±0.0107</td><td>0.5886±0.0106</td><td></td></tr><tr><td>MLM</td><td>0.3236±0.0270</td><td>0.3188±0.0188</td><td>0.3153±0.0088</td><td>0.2936±0.0107</td><td>0.5434±0.0236</td><td>0.5804±0.0104</td><td></td></tr><tr><td>TLM</td><td>0.2593±0.0298</td><td>0.2768±0.0589</td><td>0.2528±0.0487</td><td>0.2344±0.0539</td><td>0.5537±0.0307</td><td>0.5487±0.0190</td><td></td></tr><tr><td rowspan="5">NER</td><td>2-LM</td><td>0.5830±0.0057</td><td>0.5616±0.0070</td><td>0.5627±0.0039</td><td>0.5653±0.0164</td><td>0.4178±0.0100</td><td>0.4310±0.0179</td><td></td></tr><tr><td>2-MT</td><td>0.7778±0.0014</td><td>0.7660±0.0014</td><td>0.7716±0.0031</td><td>0.7871±0.0043</td><td>0.6551±0.0088</td><td>0.7311±0.0099</td><td></td></tr><tr><td>CLM</td><td>0.4516±0.0110</td><td>0.4213±0.0075</td><td>0.4306±0.0131</td><td>0.5086±0.0053</td><td>0.3004±0.0034</td><td>0.3223±0.0054</td><td></td></tr><tr><td>MLM</td><td>0.3003±0.0017</td><td>0.2997±0.0001</td><td>0.3021±0.0019</td><td>0.3341±0.0108</td><td>0.2891±0.0001</td><td>0.3094±0.0000</td><td></td></tr><tr><td>TLM</td><td>0.3485±0.0074</td><td>0.3471±0.0152</td><td>0.3499±0.0173</td><td>0.4876±0.0230</td><td>0.2941±0.0015</td><td>0.3094±0.0001</td><td></td></tr><tr><td rowspan="5">POS</td><td>2-LM</td><td>0.7241±0.0040</td><td>0.6607±0.0042</td><td>0.6848±0.0074</td><td>0.5964±0.0072</td><td>0.7427±0.0030</td><td>0.4678±0.0016</td><td></td></tr><tr><td>2-MT</td><td>0.8520±0.0065</td><td>0.7685±0.0203</td><td>0.8300±0.0017</td><td>0.7002±0.0029</td><td>0.8587±0.0055</td><td>0.6575±0.0032</td><td></td></tr><tr><td>CLM</td><td>0.5621±0.0069</td><td>0.5422±0.0066</td><td>0.5568±0.0064</td><td>0.3761±0.0148</td><td>0.4975±0.0140</td><td>0.3040±0.0106</td><td></td></tr><tr><td>MLM</td><td>0.2157±0.0063</td><td>0.1499±0.0055</td><td>0.1722±0.0084</td><td>0.0717±0.0040</td><td>0.1275±0.0080</td><td>0.1511±0.0127</td><td></td></tr><tr><td>TLM</td><td>0.4741±0.0147</td><td>0.3759±0.0378</td><td>0.3744±0.0153</td><td>0.3314±0.0112</td><td>0.3798±0.0097</td><td>0.2299±0.0215</td><td></td></tr><tr><td 
rowspan="5">NLI</td><td>2-LM</td><td>0.4825±0.0075</td><td>0.4901±0.0046</td><td>0.4779±0.0102</td><td>0.3805±0.0089</td><td>0.4804±0.0059</td><td>0.4445±0.0126</td><td></td></tr><tr><td>2-MT</td><td>0.6017±0.0105</td><td>0.5938±0.0119</td><td>0.5860±0.0087</td><td>0.5881±0.0031</td><td>0.5982±0.0025</td><td>0.5943±0.0053</td><td></td></tr><tr><td>CLM</td><td>0.3946±0.0479</td><td>0.4134±0.0227</td><td>0.4068±0.0373</td><td>0.3744±0.0400</td><td>0.3593±0.0519</td><td>0.3978±0.0314</td><td></td></tr><tr><td>MLM</td><td>0.4464±0.0328</td><td>0.4330±0.0145</td><td>0.4157±0.0347</td><td>0.4208±0.0110</td><td>0.4162±0.0251</td><td>0.4281±0.0126</td><td></td></tr><tr><td>TLM</td><td>0.3063±0.0361</td><td>0.3573±0.0327</td><td>0.3940±0.0240</td><td>0.3122±0.0876</td><td>0.3892±0.0390</td><td>0.3360±0.0477</td><td></td></tr></table>
231
+
232
+ Table 6: Macro F1 score using probing technique.
233
+
234
+ <table><tr><td rowspan="2">Task</td><td rowspan="2">Model</td><td rowspan="2">EN</td><td rowspan="2">ES</td><td colspan="4">Languages</td></tr><tr><td>FR</td><td>ZH</td><td>RU</td><td>AR</td></tr><tr><td rowspan="5">SA</td><td>2-LM</td><td>0.5213±0.0068</td><td>0.5254±0.0083</td><td>0.5244±0.0135</td><td>0.4739±0.0096</td><td>0.7421±0.0059</td><td>0.7522±0.0151</td></tr><tr><td>2-MT</td><td>0.5407±0.0086</td><td>0.5510±0.0084</td><td>0.5398±0.0054</td><td>0.4956±0.0093</td><td>0.7522±0.0056</td><td>0.7767±0.0156</td></tr><tr><td>CLM</td><td>0.5443±0.0072</td><td>0.4446±0.2115</td><td>0.5421±0.0089</td><td>0.5015±0.0187</td><td>0.7553±0.0015</td><td>0.5283±0.2328</td></tr><tr><td>MLM</td><td>0.5441±0.0107</td><td>0.5466±0.0314</td><td>0.5348±0.0237</td><td>0.4972±0.0142</td><td>0.7509±0.0135</td><td>0.5695±0.1427</td></tr><tr><td>TLM</td><td>0.5358±0.0186</td><td>0.5501±0.0128</td><td>0.5474±0.0137</td><td>0.5069±0.0119</td><td>0.7586±0.0057</td><td>0.4599±0.0943</td></tr><tr><td rowspan="5">NER</td><td>2-LM</td><td>0.8200±0.0042</td><td>0.8092±0.0053</td><td>0.8259±0.0035</td><td>0.8626±0.0022</td><td>0.7215±0.0122</td><td>0.7274±0.0093</td></tr><tr><td>2-MT</td><td>0.8670±0.0017</td><td>0.8651±0.0022</td><td>0.8727±0.0018</td><td>0.8897±0.0042</td><td>0.7934±0.0039</td><td>0.8685±0.0046</td></tr><tr><td>CLM</td><td>0.7950±0.0064</td><td>0.8053±0.0028</td><td>0.8099±0.0044</td><td>0.8129±0.0021</td><td>0.6622±0.0182</td><td>0.5994±0.1880</td></tr><tr><td>MLM</td><td>0.8635±0.0123</td><td>0.8580±0.0142</td><td>0.8706±0.0055</td><td>0.8739±0.0199</td><td>0.7629±0.0172</td><td>0.4113±0.2254</td></tr><tr><td>TLM</td><td>0.7908±0.0028</td><td>0.8024±0.0081</td><td>0.8067±0.0047</td><td>0.8120±0.0032</td><td>0.6758±0.0312</td><td>0.3094±0.0000</td></tr><tr><td rowspan="5">POS</td><td>2-LM</td><td>0.8925±0.0039</td><td>0.7365±0.0025</td><td>0.8496±0.0034</td><td>0.8088±0.0059</td><td>0.8984±0.0055</td><td>0.7769±0.0102</td></tr><tr><td>2-MT</td><td>0.9314±0.0024</td><td>0.7826±0.0235</td><td>0.8866±0.0074</td><td>0.8842±0.0059</td><td>0.9285±0.0029</td><td>0.8660±0.0088</td></tr><tr><td>CLM</td><td>0.8752±0.0042</td><td>0.7854±0.0024</td><td>0.8573±0.0041</td><td>0.7906±0.0195</td><td>0.8264±0.0104</td><td>0.5932±0.0194</td></tr><tr><td>MLM</td><td>0.9177±0.0068</td><td>0.8079±0.0259</td><td>0.8851±0.0019</td><td>0.8313±0.0079</td><td>0.9226±0.0048</td><td>0.8602±0.0132</td></tr><tr><td>TLM</td><td>0.8782±0.0045</td><td>0.7830±0.0067</td><td>0.7421±0.2503</td><td>0.7876±0.0271</td><td>0.8247±0.0088</td><td>0.6201±0.0071</td></tr><tr><td rowspan="5">NLI</td><td>2-LM</td><td>0.5771±0.0067</td><td>0.5760±0.0088</td><td>0.5658±0.0085</td><td>0.4766±0.0058</td><td>0.5629±0.0052</td><td>0.5350±0.0070</td></tr><tr><td>2-MT</td><td>0.6183±0.0054</td><td>0.6151±0.0082</td><td>0.5991±0.0073</td><td>0.5302±0.0086</td><td>0.5887±0.0041</td><td>0.5678±0.0032</td></tr><tr><td>CLM</td><td>0.4240±0.2315</td><td>0.5589±0.0355</td><td>0.5493±0.0404</td><td>0.4729±0.1123</td><td>0.5507±0.0265</td><td>0.4554±0.1199</td></tr><tr><td>MLM</td><td>0.5927±0.0189</td><td>0.5719±0.0487</td><td>0.5282±0.0964</td><td>0.4618±0.0453</td><td>0.5775±0.0069</td><td>0.5247±0.0221</td></tr><tr><td>TLM</td><td>0.4428±0.1751</td><td>0.4728±0.1731</td><td>0.5345±0.1076</td><td>0.4558±0.0722</td><td>0.5061±0.0771</td><td>0.3816±0.1562</td></tr></table>
235
+
236
+ Table 7: Macro F1 score after model fine-tuning.
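The ±-values in Tables 6 and 7 are standard deviations over repeated runs. As a rough illustration only (the dataset files above contain no code), a minimal sketch of how one such macro-F1 cell could be produced with scikit-learn is shown below; the per-seed arrays and the number of seeds are assumptions, not taken from the paper.

```python
# Illustrative sketch only: builds a "mean±std" macro F1 cell as reported in
# Tables 6 and 7, assuming one (gold, predicted) label pair per random seed.
# The variable names and the number of seeds are assumptions.
import numpy as np
from sklearn.metrics import f1_score

def macro_f1_cell(runs):
    """runs: list of (y_true, y_pred) arrays, one entry per seed."""
    scores = [f1_score(y_true, y_pred, average="macro") for y_true, y_pred in runs]
    return f"{np.mean(scores):.4f}±{np.std(scores):.4f}"

# Example with dummy data for three seeds of a 3-class task.
rng = np.random.default_rng(0)
runs = [(rng.integers(0, 3, 100), rng.integers(0, 3, 100)) for _ in range(3)]
print(macro_f1_cell(runs))
```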
2024/A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2a4d13b6fadfad1d8e7a317ca2125d5d8ed5d3991dc3cd9df732f4fbb7f787e0
3
+ size 811896
2024/A Comparison of Language Modeling and Translation as Multilingual Pretraining Objectives/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery/68186560-afa6-46c7-bd58-68f345d6f141_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery/68186560-afa6-46c7-bd58-68f345d6f141_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery/68186560-afa6-46c7-bd58-68f345d6f141_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8e53a2840b28392e7b4c92fc3394b4578af045a5474c9d699077aab6f26c7f59
3
+ size 2219808
2024/A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c08d2441557b198c140ac6391d00139f49d23ce9956da3e1af52d155adeaccbd
3
+ size 2409301
2024/A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition/471f11c4-935b-4778-a130-ca7d58097a15_content_list.json ADDED
@@ -0,0 +1,2165 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 263,
8
+ 89,
9
+ 732,
10
+ 130
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Caio Corro \nINSA Rennes, IRISA, Inria, CNRS, Université de Rennes \ncaio.corro@irisa.fr",
17
+ "bbox": [
18
+ 263,
19
+ 156,
20
+ 732,
21
+ 206
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "Abstract",
28
+ "text_level": 1,
29
+ "bbox": [
30
+ 260,
31
+ 260,
32
+ 339,
33
+ 275
34
+ ],
35
+ "page_idx": 0
36
+ },
37
+ {
38
+ "type": "text",
39
+ "text": "We introduce a novel tagging scheme for discontinuous named entity recognition based on an explicit description of the inner structure of discontinuous mentions. We rely on a weighted finite state automaton for both marginal and maximum a posteriori inference. As such, our method is sound in the sense that (1) well-formedness of predicted tag sequences is ensured via the automaton structure and (2) there is an unambiguous mapping between well-formed sequences of tags and (discontinuous) mentions. We evaluate our approach on three English datasets in the biomedical domain, and report comparable results to state-of-the-art while having a way simpler and faster model.",
40
+ "bbox": [
41
+ 141,
42
+ 288,
43
+ 460,
44
+ 502
45
+ ],
46
+ "page_idx": 0
47
+ },
48
+ {
49
+ "type": "text",
50
+ "text": "1 Introduction",
51
+ "text_level": 1,
52
+ "bbox": [
53
+ 114,
54
+ 516,
55
+ 260,
56
+ 532
57
+ ],
58
+ "page_idx": 0
59
+ },
60
+ {
61
+ "type": "text",
62
+ "text": "Named-entity recognition (NER) is a fundamental natural language processing (NLP) task that aims at identifying mentions of named entities in texts. These mentions may for example refer to persons, organizations, locations or even dates, among others (Grishman and Sundheim, 1996; Chinchor and Robinson, 1998). Over the years, this task has been extensively studied by the community, with contributions including decoding algorithms, neural network architectures, loss functions and methods for learning in different data availability situations, inter alia.",
63
+ "bbox": [
64
+ 112,
65
+ 542,
66
+ 489,
67
+ 734
68
+ ],
69
+ "page_idx": 0
70
+ },
71
+ {
72
+ "type": "text",
73
+ "text": "There exist several variants of the NER problem, among which the most studied are flat NER and nested NER. The most common method for the flat case is BIO tagging (Ramshaw and Marcus, 1995), where each word in a sentence is tagged depending on whether it is the beginning of a mention (B), inside a mention (I) or outside a mention (O). This tagging scheme can be augmented to disambiguate types, e.g. BLOC and BPER. An important benefit of BIO tagging is that prediction has a linear time",
74
+ "bbox": [
75
+ 112,
76
+ 736,
77
+ 489,
78
+ 897
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "text",
84
+ "text": "complexity in the input length $^2$ using the Viterbi algorithm (Forney, 1973), contrary to concurrent approaches like semi-Markov models that have a quadratic time-complexity (Janssen and Limnios, 1999; Ge, 2002; Sarawagi and Cohen, 2004).",
85
+ "bbox": [
86
+ 507,
87
+ 260,
88
+ 884,
89
+ 341
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "A less studied task is discontinuous NER, where mentions are allowed to span discontinuous sequences of words. This problem is especially important for biomedical NLP. For example, pharmacovigilance aims to detect adverse drug reactions after a product is distributed in the market via automatic analysis of medical reports or social media (Berlin et al., 2008; Coloma et al., 2013). Mentions of adverse drug reactions naturally occur in noncontiguous sequences, for example the sentence \"The pain I was experiencing around the hip joints was incredible\" contains the mention \"pain hip joints\" with a five word gap in the middle.",
96
+ "bbox": [
97
+ 507,
98
+ 342,
99
+ 884,
100
+ 565
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "text",
106
+ "text": "Several methods for discontinuous NER have been proposed in the literature, including transition models (Dai et al., 2020) and other structured prediction approaches (Wang et al., 2021; Fei et al., 2021; Li et al., 2022). Unfortunately, they are more costly than BIO tagging and require specialized neural network architectures. There have also been attempts to propose tagging schemes for discontinuous NER (Tang et al., 2013, 2018; Metke-Jimenez and Karimi, 2016; Muis and Lu, 2016), but they all exhibit structural ambiguity (see Section 5).",
107
+ "bbox": [
108
+ 507,
109
+ 568,
110
+ 885,
111
+ 743
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "text",
117
+ "text": "In this work, we propose a novel tagging scheme for discontinuous NER that exploits the inner structure of discontinuous mentions. Contrary to previous attempts, our approach is sound in the sense that: (1) there is no encoding ambiguity between sets of mentions and sequences of tags (i.e. there is a one-to-one mapping between the two representations); and (2) our prediction algorithm is con",
118
+ "bbox": [
119
+ 507,
120
+ 745,
121
+ 885,
122
+ 873
123
+ ],
124
+ "page_idx": 0
125
+ },
126
+ {
127
+ "type": "page_footnote",
128
+ "text": "It is quadratic in the number of tags, which depends on the number of possible mention types. However, types are not considered part of the input and are assumed to be fixed.",
129
+ "bbox": [
130
+ 507,
131
+ 883,
132
+ 882,
133
+ 921
134
+ ],
135
+ "page_idx": 0
136
+ },
137
+ {
138
+ "type": "page_footnote",
139
+ "text": "See (Ratinov and Roth, 2009) for other variants.",
140
+ "bbox": [
141
+ 136,
142
+ 906,
143
+ 436,
144
+ 920
145
+ ],
146
+ "page_idx": 0
147
+ },
148
+ {
149
+ "type": "page_number",
150
+ "text": "19506",
151
+ "bbox": [
152
+ 475,
153
+ 927,
154
+ 524,
155
+ 940
156
+ ],
157
+ "page_idx": 0
158
+ },
159
+ {
160
+ "type": "footer",
161
+ "text": "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 19506-19518 November 12-16, 2024 ©2024 Association for Computational Linguistics",
162
+ "bbox": [
163
+ 152,
164
+ 945,
165
+ 843,
166
+ 972
167
+ ],
168
+ "page_idx": 0
169
+ },
170
+ {
171
+ "type": "text",
172
+ "text": "strained to predict only well-formed sequences of tags (i.e. we can always reconstruct a set of mentions from a predicted tag sequence). To ensure well-formedness of predictions, we propose an algorithm based on inference in a weighted finite-state automaton. Using our approach, the time complexity of maximum a posteriori inference for prediction is linear in the length of the input. Moreover, our algorithm can be very efficiently implemented on GPU for batched inference (Argueta and Chiang, 2017; Rush, 2020).",
173
+ "bbox": [
174
+ 112,
175
+ 84,
176
+ 489,
177
+ 260
178
+ ],
179
+ "page_idx": 1
180
+ },
181
+ {
182
+ "type": "text",
183
+ "text": "Our contributions can be summarized as follows:",
184
+ "bbox": [
185
+ 131,
186
+ 261,
187
+ 489,
188
+ 275
189
+ ],
190
+ "page_idx": 1
191
+ },
192
+ {
193
+ "type": "list",
194
+ "sub_type": "text",
195
+ "list_items": [
196
+ "- We propose to decompose discontinuous mentions in a new two-layer representation;",
197
+ "- We propose a novel tagging scheme for this representation together with a linear-time tagging algorithm that ensures well-formedness of predictions;",
198
+ "- We explain how labels in the inner structures can be inferred during training when the information is not available in the data;",
199
+ "- We experiment on three English datasets and report competitive results while having a much faster model."
200
+ ],
201
+ "bbox": [
202
+ 134,
203
+ 288,
204
+ 487,
205
+ 511
206
+ ],
207
+ "page_idx": 1
208
+ },
209
+ {
210
+ "type": "text",
211
+ "text": "Our implementation is publicly available.3 Importantly, our decoding algorithm and all our loss functions can be used as a drop-in replacements in any BIO tagger. As such, any future research in the BIO tagging field may also be evaluated on discontinuous NER at no extra cost.",
212
+ "bbox": [
213
+ 112,
214
+ 523,
215
+ 489,
216
+ 619
217
+ ],
218
+ "page_idx": 1
219
+ },
220
+ {
221
+ "type": "text",
222
+ "text": "2 Reduction to Word Tagging",
223
+ "text_level": 1,
224
+ "bbox": [
225
+ 112,
226
+ 632,
227
+ 386,
228
+ 648
229
+ ],
230
+ "page_idx": 1
231
+ },
232
+ {
233
+ "type": "text",
234
+ "text": "In this section, we explain how we map discontinuous mentions into a two-layer representation that allows us to derive a new tagging scheme. Although this transformation is generic, for ease of exposition we illustrate it on the particular case of adverse drug reactions.",
235
+ "bbox": [
236
+ 112,
237
+ 658,
238
+ 487,
239
+ 753
240
+ ],
241
+ "page_idx": 1
242
+ },
243
+ {
244
+ "type": "text",
245
+ "text": "2.1 Inner Structure of Mentions",
246
+ "text_level": 1,
247
+ "bbox": [
248
+ 112,
249
+ 765,
250
+ 381,
251
+ 778
252
+ ],
253
+ "page_idx": 1
254
+ },
255
+ {
256
+ "type": "text",
257
+ "text": "Discontinuous mentions of adverse drug reactions (ADR) and disorders in biomedical NER mainly result from two linguistic phenomena. Firstly, mentions may be expressed as the combination of two non-contiguous syntactic constituents, due to linguistic word order rules. In the following example of an ADR, the discontinuity",
258
+ "bbox": [
259
+ 112,
260
+ 785,
261
+ 487,
262
+ 898
263
+ ],
264
+ "page_idx": 1
265
+ },
266
+ {
267
+ "type": "text",
268
+ "text": "is caused by the verb position constraint in English:",
269
+ "bbox": [
270
+ 507,
271
+ 84,
272
+ 880,
273
+ 99
274
+ ],
275
+ "page_idx": 1
276
+ },
277
+ {
278
+ "type": "equation",
279
+ "text": "\n$$\n\\begin{array}{l} \\text {A D R} \\\\ \\text {t o e s a r e p a i n f u l} \\end{array} \\tag {1}\n$$\n",
280
+ "text_format": "latex",
281
+ "bbox": [
282
+ 510,
283
+ 102,
284
+ 702,
285
+ 139
286
+ ],
287
+ "page_idx": 1
288
+ },
289
+ {
290
+ "type": "text",
291
+ "text": "Secondly, many languages allow alternative sentential structures for coordinations, including construction based on deletion operations. For example, consider the two following sentences:",
292
+ "bbox": [
293
+ 507,
294
+ 139,
295
+ 882,
296
+ 202
297
+ ],
298
+ "page_idx": 1
299
+ },
300
+ {
301
+ "type": "equation",
302
+ "text": "\n$$\n\\begin{array}{l} \\frac {\\text {A D R}}{\\text {p a i n i n a r m s}} \\text {a n d} \\frac {\\text {A D R}}{\\text {p a i n i n s h o u l d e r s}} \\tag {2} \\\\ \\begin{array}{c} \\text {A D R} \\\\ \\text {(3)} \\quad \\frac {\\text {p a i n i n a r m s a n}}{\\text {A D R}} \\end{array} \\\\ \\end{array}\n$$\n",
303
+ "text_format": "latex",
304
+ "bbox": [
305
+ 510,
306
+ 205,
307
+ 872,
308
+ 293
309
+ ],
310
+ "page_idx": 1
311
+ },
312
+ {
313
+ "type": "text",
314
+ "text": "The repeated element is eliminated in the second one, leading to the presence of a discontinuous mention, a phenomenon called coordination reduction (Lakoff and Peters, 1969). Although the underlying linguistic structures are different, we will treat both cases in the same way.",
315
+ "bbox": [
316
+ 507,
317
+ 294,
318
+ 880,
319
+ 390
320
+ ],
321
+ "page_idx": 1
322
+ },
323
+ {
324
+ "type": "text",
325
+ "text": "Change of representation. In practice, discontinuous mentions exhibit an inner structure. For example, a discontinuous ADR can be decomposed into a body part and an event. As such, we propose to transform discontinuous mentions into a two-layer representation:",
326
+ "bbox": [
327
+ 507,
328
+ 391,
329
+ 882,
330
+ 487
331
+ ],
332
+ "page_idx": 1
333
+ },
334
+ {
335
+ "type": "list",
336
+ "sub_type": "text",
337
+ "list_items": [
338
+ "- Upper layers identify sets of mentions;",
339
+ "- Lower layers identify typed components."
340
+ ],
341
+ "bbox": [
342
+ 529,
343
+ 500,
344
+ 847,
345
+ 543
346
+ ],
347
+ "page_idx": 1
348
+ },
349
+ {
350
+ "type": "text",
351
+ "text": "We restrict the number of types for components to be equal to two. The previous example is converted as follows:",
352
+ "bbox": [
353
+ 507,
354
+ 555,
355
+ 880,
356
+ 601
357
+ ],
358
+ "page_idx": 1
359
+ },
360
+ {
361
+ "type": "equation",
362
+ "text": "\n$$\n\\begin{array}{c} \\text {A D R (S)} \\\\ \\text {(4) \\quad E V E N T P A R T P A R T} \\\\ \\text {p a i n i n a r m s a n d s h o u l d e r s} \\end{array}\n$$\n",
363
+ "text_format": "latex",
364
+ "bbox": [
365
+ 510,
366
+ 602,
367
+ 796,
368
+ 650
369
+ ],
370
+ "page_idx": 1
371
+ },
372
+ {
373
+ "type": "text",
374
+ "text": "Note that the two mentions do not explicitly appear in this new representation. Nevertheless, the opposite transformation is trivial: to rebuild all discontinuous mention in a discontinuous set, we simply take the Cartesian product between the two sets of typed components, e.g.",
375
+ "bbox": [
376
+ 507,
377
+ 651,
378
+ 880,
379
+ 746
380
+ ],
381
+ "page_idx": 1
382
+ },
383
+ {
384
+ "type": "equation",
385
+ "text": "\n$$\n\\underbrace {\\{\\text {p a i n i n} \\}} _ {\\text {C o m p o n e n t s} \\text {t y p e d E V E N T}} \\times \\underbrace {\\left\\{ \\begin{array}{c} \\text {a r m s ,} \\\\ \\text {s h o u l d e r s} \\end{array} \\right\\}} _ {\\text {C o m p o n e n t s} \\text {t y p e d P A R T}} \\mapsto \\underbrace {\\left\\{ \\begin{array}{c} \\text {p a i n i n a r m s ,} \\\\ \\text {p a i n i n s h o u l d e r s} \\end{array} \\right\\}} _ {\\text {R e c o n s t r u c t e d} \\text {d i s c o n t i n u o u s m e n t i o n s}}.\n$$\n",
386
+ "text_format": "latex",
387
+ "bbox": [
388
+ 507,
389
+ 757,
390
+ 880,
391
+ 814
392
+ ],
393
+ "page_idx": 1
394
+ },
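The reconstruction described in the entry above (rebuilding discontinuous mentions as the Cartesian product of the two typed component sets) can be sketched in a few lines. This is an illustrative snippet, not part of the extracted content_list.json; the component strings simply reuse the paper's running example.

```python
# Illustrative sketch (not from the paper's code): rebuild mentions from the
# two-layer representation by taking the Cartesian product of the two typed
# component sets, as described in the surrounding text.
from itertools import product

def reconstruct_mentions(components_x, components_y):
    """Each argument is a list of component strings of one type."""
    return [f"{x} {y}" for x, y in product(components_x, components_y)]

# Running example: one EVENT component and two PART components.
print(reconstruct_mentions(["pain in"], ["arms", "shoulders"]))
# -> ['pain in arms', 'pain in shoulders']
```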
395
+ {
396
+ "type": "text",
397
+ "text": "Note that this can result in some of the mentions being continuous, as in Example (4).",
398
+ "bbox": [
399
+ 507,
400
+ 824,
401
+ 880,
402
+ 856
403
+ ],
404
+ "page_idx": 1
405
+ },
406
+ {
407
+ "type": "text",
408
+ "text": "One obvious issue is that component types are not annotated in datasets. We consider two solutions to tackle this challenge. First, we can use unsupervised and weakly-supervised learning",
409
+ "bbox": [
410
+ 507,
411
+ 857,
412
+ 880,
413
+ 921
414
+ ],
415
+ "page_idx": 1
416
+ },
417
+ {
418
+ "type": "page_footnote",
419
+ "text": "<sup>3</sup>https://github.com/FilippoC/disc-ner-tagging",
420
+ "bbox": [
421
+ 134,
422
+ 904,
423
+ 480,
424
+ 920
425
+ ],
426
+ "page_idx": 1
427
+ },
428
+ {
429
+ "type": "page_number",
430
+ "text": "19507",
431
+ "bbox": [
432
+ 477,
433
+ 927,
434
+ 524,
435
+ 940
436
+ ],
437
+ "page_idx": 1
438
+ },
439
+ {
440
+ "type": "text",
441
+ "text": "methods to infer component types during training, as explained in Section 4. Second, we can use component types to mark if they share the same type as the leftmost one, no matter whether they refer to a body part of an event. In this setting, Examples (1) and (3) are annotated as follows:",
442
+ "bbox": [
443
+ 112,
444
+ 84,
445
+ 489,
446
+ 180
447
+ ],
448
+ "page_idx": 2
449
+ },
450
+ {
451
+ "type": "text",
452
+ "text": "ADR(S) \n(5) FIRST OTHER toes are painful ADR(S) \n(6) FIRST OTHER pain in arms and",
453
+ "bbox": [
454
+ 114,
455
+ 181,
456
+ 314,
457
+ 282
458
+ ],
459
+ "page_idx": 2
460
+ },
461
+ {
462
+ "type": "text",
463
+ "text": "In other words, component types do not convey semantic information, only structural information.",
464
+ "bbox": [
465
+ 112,
466
+ 284,
467
+ 485,
468
+ 315
469
+ ],
470
+ "page_idx": 2
471
+ },
472
+ {
473
+ "type": "text",
474
+ "text": "Continuous mentions. There exist two forms of continuous mentions. First, there are continuous mentions that share one or more words with at least one other mention. In this case, we split the mention and we process it as described above. Second, there are continuous mentions that do not share any word with other mentions, see Example (2). In principle, we could also transform these mentions into the two-layer representation. However, not only do we lack information about component types but we do not even know where to split them! In Example (3), we know that \"pain in arms\" should be split into \"pain in\" and \"arms\" as the first two words are shared with another mention. But for the two continuous mentions in Example (2), we do not have such information. Therefore, in this case, we treat them as standard continuous ones.",
475
+ "bbox": [
476
+ 112,
477
+ 317,
478
+ 487,
479
+ 589
480
+ ],
481
+ "page_idx": 2
482
+ },
483
+ {
484
+ "type": "text",
485
+ "text": "Nested NER. Although Dai et al. (2020) suggested the use of nested NER models for discontinuous NER using a similar yet different representation, we argue that the two problems are different:",
486
+ "bbox": [
487
+ 112,
488
+ 590,
489
+ 489,
490
+ 656
491
+ ],
492
+ "page_idx": 2
493
+ },
494
+ {
495
+ "type": "list",
496
+ "sub_type": "text",
497
+ "list_items": [
498
+ "- The structures that we consider are not recursive, contrary to nested mentions, e.g. \"[The president of [the United States of [America]]\"];",
499
+ "- The components are highly constrained, e.g. a set of ADRs must contain at least one body part and one event;",
500
+ "- The span of a set of mentions is fixed by its components: it begins (resp. ends) at the same word as its leftmost (resp. rightmost) component."
501
+ ],
502
+ "bbox": [
503
+ 134,
504
+ 670,
505
+ 487,
506
+ 873
507
+ ],
508
+ "page_idx": 2
509
+ },
510
+ {
511
+ "type": "text",
512
+ "text": "Therefore, we instead propose a tagging scheme tailored to discontinuous NER.",
513
+ "bbox": [
514
+ 112,
515
+ 889,
516
+ 487,
517
+ 920
518
+ ],
519
+ "page_idx": 2
520
+ },
521
+ {
522
+ "type": "text",
523
+ "text": "Beyond the biomedical domain. Our approach can be applied to other domains, e.g. we can transform the following mentions into our representation by differentiating first and last names:",
524
+ "bbox": [
525
+ 507,
526
+ 84,
527
+ 884,
528
+ 148
529
+ ],
530
+ "page_idx": 2
531
+ },
532
+ {
533
+ "type": "text",
534
+ "text": "PER (7) Meg and Jack White PER",
535
+ "bbox": [
536
+ 509,
537
+ 149,
538
+ 724,
539
+ 203
540
+ ],
541
+ "page_idx": 2
542
+ },
543
+ {
544
+ "type": "text",
545
+ "text": "Unfortunately, these discontinuities have not been annotated in standard datasets.4",
546
+ "bbox": [
547
+ 507,
548
+ 204,
549
+ 880,
550
+ 235
551
+ ],
552
+ "page_idx": 2
553
+ },
554
+ {
555
+ "type": "text",
556
+ "text": "2.2 Tagging Scheme",
557
+ "text_level": 1,
558
+ "bbox": [
559
+ 507,
560
+ 247,
561
+ 684,
562
+ 263
563
+ ],
564
+ "page_idx": 2
565
+ },
566
+ {
567
+ "type": "text",
568
+ "text": "We now explain how we transform the two-layer structure into a sequence of tags. Without loss of generality, we assume that mentions are untyped in the original corpus, as practical datasets for discontinuous NER contain a single mention type.<sup>5</sup> Moreover, we define the component types as X and Y (e.g. body part and event in previous examples) to simplify notation and treat in a similar way semantic and structural component types.",
569
+ "bbox": [
570
+ 507,
571
+ 268,
572
+ 882,
573
+ 412
574
+ ],
575
+ "page_idx": 2
576
+ },
577
+ {
578
+ "type": "text",
579
+ "text": "Our approach requires 10 tags. First, the 3 tags CB, CI and O are used in a similar way to BIO tags. CB and CI are used to identify first and following words in a continuous mention, respectively. The tag O is used to mark words that are neither part of a continuous mention or in the span of a set of mentions. In Example (2), word \"and\" is tagged with O whereas in Example (3) it is not tagged with O. This is due to the fact that in the second example, after transformation into the two layers representation, the word \"and\" will appear inside a set of mentions, see Example (4).",
580
+ "bbox": [
581
+ 507,
582
+ 413,
583
+ 882,
584
+ 605
585
+ ],
586
+ "page_idx": 2
587
+ },
588
+ {
589
+ "type": "text",
590
+ "text": "Second, tags to identify set of mentions and their components are of the form $* - *$ where:",
591
+ "bbox": [
592
+ 507,
593
+ 606,
594
+ 880,
595
+ 637
596
+ ],
597
+ "page_idx": 2
598
+ },
599
+ {
600
+ "type": "list",
601
+ "sub_type": "text",
602
+ "list_items": [
603
+ "- the left-hand side is used to identify the span of the set of mentions, and can therefore take values DB (first word of the span) and DI (other words of the span);",
604
+ "- the right-hand side is used to identify typed components, and can take values Bx, Ix, By, Iy and O."
605
+ ],
606
+ "bbox": [
607
+ 531,
608
+ 649,
609
+ 882,
610
+ 771
611
+ ],
612
+ "page_idx": 2
613
+ },
614
+ {
615
+ "type": "text",
616
+ "text": "The 7 tags used for discontinuous mentions are DB-Bx, DB-By, DI-Bx, DI-By, DI-Ix, DI-Iy",
617
+ "bbox": [
618
+ 507,
619
+ 783,
620
+ 880,
621
+ 815
622
+ ],
623
+ "page_idx": 2
624
+ },
625
+ {
626
+ "type": "page_footnote",
627
+ "text": "${}^{4}$ Wang et al. (2023) automatically extracted coordination structures from syntactic structures. However, note that (1) the resulting dataset does not contains discontinuous mentions that we are interested in and (2) conjunction reduction cannot always be inferred from the syntactic structure (Lakoff and Peters, 1969; Lechner, 2000; Wilder, 2018).",
628
+ "bbox": [
629
+ 507,
630
+ 822,
631
+ 880,
632
+ 894
633
+ ],
634
+ "page_idx": 2
635
+ },
636
+ {
637
+ "type": "page_footnote",
638
+ "text": "It is trivial to augment the set of tags with types if necessary, as done for standard BIO tagging.",
639
+ "bbox": [
640
+ 507,
641
+ 894,
642
+ 880,
643
+ 921
644
+ ],
645
+ "page_idx": 2
646
+ },
647
+ {
648
+ "type": "page_number",
649
+ "text": "19508",
650
+ "bbox": [
651
+ 477,
652
+ 927,
653
+ 524,
654
+ 940
655
+ ],
656
+ "page_idx": 2
657
+ },
658
+ {
659
+ "type": "image",
660
+ "img_path": "images/c22fba41f813675d6d7ee36857c68e3cdb3f61b61a651971d1ba1e284a892da3.jpg",
661
+ "image_caption": [
662
+ "Figure 1: (Top) Sentence with its original annotation. It contains two continuous mentions (\"Chronic fatigue\" and \"stiff knees\") and three discontinuous mentions (\"swollen knees\", \"swollen left elbows\" and \"stiff left elbows\"). (Bottom) Sentence annotated with our two-layer representation and the associated tag sequence."
663
+ ],
664
+ "image_footnote": [],
665
+ "bbox": [
666
+ 117,
667
+ 84,
668
+ 873,
669
+ 281
670
+ ],
671
+ "page_idx": 3
672
+ },
673
+ {
674
+ "type": "text",
675
+ "text": "and DI-O. Note that the leftmost word in a set of mentions must also be the beginning of a component, so the following combinations are not part of the tagset: DB-Ix, DB-Iy and DB-O. Figure 1 shows an example of tag conversion.",
676
+ "bbox": [
677
+ 112,
678
+ 359,
679
+ 487,
680
+ 439
681
+ ],
682
+ "page_idx": 3
683
+ },
684
+ {
685
+ "type": "text",
686
+ "text": "Importantly, any sequence of tags is well-formed if and only if:",
687
+ "bbox": [
688
+ 112,
689
+ 439,
690
+ 485,
691
+ 470
692
+ ],
693
+ "page_idx": 3
694
+ },
695
+ {
696
+ "type": "text",
697
+ "text": "5. A set of mentions must not yield a single continuous mention after reconstruction, i.e. the following sequence of tags is forbidden:",
698
+ "bbox": [
699
+ 127,
700
+ 693,
701
+ 487,
702
+ 740
703
+ ],
704
+ "page_idx": 3
705
+ },
706
+ {
707
+ "type": "list",
708
+ "sub_type": "text",
709
+ "list_items": [
710
+ "1. All CI tags are preceded by either BI or CI, as standard in BIO tagging;",
711
+ "2. All DI-\\* tags must be preceded by either DB-\\* or DI-\\*:",
712
+ "3. All $*$ -Ix tags must be preceded by either $*$ -Bx or $*$ -Ix (and similarly for the Y type);",
713
+ "4. A set of mentions must contain at least one component typed X and one typed Y, that is it must contain at least one word tagged with $-\\mathrm{BX}$ and one with $* - \\mathrm{BY}$ ."
714
+ ],
715
+ "bbox": [
716
+ 127,
717
+ 483,
718
+ 487,
719
+ 678
720
+ ],
721
+ "page_idx": 3
722
+ },
723
+ {
724
+ "type": "text",
725
+ "text": "some pain in arms and O DB-Bx DI-IX DF-IY as it would introduce ambiguity in the encoding of continuous mentions;",
726
+ "bbox": [
727
+ 149,
728
+ 743,
729
+ 487,
730
+ 810
731
+ ],
732
+ "page_idx": 3
733
+ },
734
+ {
735
+ "type": "text",
736
+ "text": "6. A discontinuous mention cannot end with tag DI-O, as this would results in the span of a set of mentions that do not end with the same word as its rightmost component.<sup>6</sup>",
737
+ "bbox": [
738
+ 127,
739
+ 824,
740
+ 485,
741
+ 887
742
+ ],
743
+ "page_idx": 3
744
+ },
745
+ {
746
+ "type": "text",
747
+ "text": "3 Decoding Algorithm",
748
+ "text_level": 1,
749
+ "bbox": [
750
+ 507,
751
+ 359,
752
+ 719,
753
+ 375
754
+ ],
755
+ "page_idx": 3
756
+ },
757
+ {
758
+ "type": "text",
759
+ "text": "Without loss of generality, we assume all sentences have $n$ words. Let $T$ be the tagset, $X$ be the set of sentences and $Y$ the set of well-formed tag sequences. We represent a sequence of tags $\\mathbf{y} \\in Y$ as a binary vector with $n|T|$ entries, where each entry is associated with a tag and a word, i.e. $\\mathbf{y} \\in \\{0,1\\}^{n|T|}$ . If the value of an entry is 1 (resp. 0), the associated tag is assigned to the associated word (resp. not assigned). Note that $Y \\subset \\{0,1\\}^{n|T|}$ is a strict subset of all such vectors, as each word must be assigned exactly one tag and that the resulting tag sequence must satisfy the constraints described in Section 2.2.",
760
+ "bbox": [
761
+ 507,
762
+ 384,
763
+ 882,
764
+ 590
765
+ ],
766
+ "page_idx": 3
767
+ },
768
+ {
769
+ "type": "text",
770
+ "text": "Let $f_{\\theta}: X \\to \\mathbb{R}^{n|T|}$ be a neural network parameterized by $\\theta$ . We define the probability of a tag sequence $\\pmb{y} \\in Y$ given the input $\\pmb{x}$ as a Boltzmann-Gibbs distribution (or softmax over structures):",
771
+ "bbox": [
772
+ 507,
773
+ 592,
774
+ 882,
775
+ 657
776
+ ],
777
+ "page_idx": 3
778
+ },
779
+ {
780
+ "type": "equation",
781
+ "text": "\n$$\np _ {\\theta} (\\boldsymbol {y} | \\boldsymbol {x}) = \\exp \\left(\\langle \\boldsymbol {y}, f _ {\\theta} (\\boldsymbol {x}) \\rangle - A _ {Y} (f _ {\\theta} (\\boldsymbol {x}))\\right),\n$$\n",
782
+ "text_format": "latex",
783
+ "bbox": [
784
+ 524,
785
+ 668,
786
+ 862,
787
+ 686
788
+ ],
789
+ "page_idx": 3
790
+ },
791
+ {
792
+ "type": "text",
793
+ "text": "where $\\langle \\cdot ,\\cdot \\rangle$ denotes the dot product and $A_{Y}$ is the log-partition function ensuring that the distribution is correctly normalized:",
794
+ "bbox": [
795
+ 507,
796
+ 701,
797
+ 880,
798
+ 747
799
+ ],
800
+ "page_idx": 3
801
+ },
802
+ {
803
+ "type": "equation",
804
+ "text": "\n$$\nA _ {Y} (\\boldsymbol {w}) = \\log \\sum_ {\\boldsymbol {y} \\in Y} \\exp \\langle \\boldsymbol {y}, \\boldsymbol {w} \\rangle . \\tag {1}\n$$\n",
805
+ "text_format": "latex",
806
+ "bbox": [
807
+ 529,
808
+ 762,
809
+ 882,
810
+ 796
811
+ ],
812
+ "page_idx": 3
813
+ },
814
+ {
815
+ "type": "text",
816
+ "text": "Computing $A_{Y}(\\boldsymbol{w})$ is called marginal inference due to its link with marginal probabilities (Wainwright et al., 2008). Computing the most probable output is reduced to computing:",
817
+ "bbox": [
818
+ 507,
819
+ 810,
820
+ 882,
821
+ 875
822
+ ],
823
+ "page_idx": 3
824
+ },
825
+ {
826
+ "type": "equation",
827
+ "text": "\n$$\n\\widehat {\\boldsymbol {y}} _ {\\theta} (\\boldsymbol {x}) = \\underset {\\boldsymbol {y} \\in Y} {\\arg \\max } \\left\\langle \\boldsymbol {y}, f _ {\\theta} (\\boldsymbol {x}) \\right\\rangle , \\tag {2}\n$$\n",
828
+ "text_format": "latex",
829
+ "bbox": [
830
+ 539,
831
+ 890,
832
+ 882,
833
+ 917
834
+ ],
835
+ "page_idx": 3
836
+ },
837
+ {
838
+ "type": "page_footnote",
839
+ "text": "The analogous constraint on the first word is implicitly enforced by the absence of a DB-O tag in the tagging scheme.",
840
+ "bbox": [
841
+ 112,
842
+ 894,
843
+ 485,
844
+ 920
845
+ ],
846
+ "page_idx": 3
847
+ },
848
+ {
849
+ "type": "page_number",
850
+ "text": "19509",
851
+ "bbox": [
852
+ 477,
853
+ 927,
854
+ 524,
855
+ 940
856
+ ],
857
+ "page_idx": 3
858
+ },
859
+ {
860
+ "type": "text",
861
+ "text": "called maximum a posteriori (MAP) inference.",
862
+ "bbox": [
863
+ 112,
864
+ 84,
865
+ 460,
866
+ 99
867
+ ],
868
+ "page_idx": 4
869
+ },
870
+ {
871
+ "type": "text",
872
+ "text": "In practice, we need to compute the term in Equation (1) for training the model and the term in Equation (2) for prediction. The difficulty stems from the restriction (in the sum and in the arg max search space) to the set of well-formed outputs $Y$ . We follow a long tradition in NLP (Koskenniemi, 1990; Mohri et al., 1996; Karttuten et al., 1996; Kanthak and Ney, 2004; Tromble and Eisner, 2006; Rastogi et al., 2016; Lin et al., 2019; Papay et al., 2022, inter alia) and rely on a finite-state automaton to solve these inference problems.",
873
+ "bbox": [
874
+ 112,
875
+ 101,
876
+ 489,
877
+ 277
878
+ ],
879
+ "page_idx": 4
880
+ },
881
+ {
882
+ "type": "text",
883
+ "text": "3.1 Finite-State Automata",
884
+ "text_level": 1,
885
+ "bbox": [
886
+ 112,
887
+ 290,
888
+ 334,
889
+ 304
890
+ ],
891
+ "page_idx": 4
892
+ },
893
+ {
894
+ "type": "text",
895
+ "text": "Definitions. Weighted Finite State Automata (WFSA) are generalization of FSA (Eilenberg, 1974) that include weights on their transitions. Formally, a WFSA over $\\mathbb{R}$ is a 5-tuple $(\\Sigma, Q, E, i, F)$ where:",
896
+ "bbox": [
897
+ 112,
898
+ 311,
899
+ 489,
900
+ 390
901
+ ],
902
+ "page_idx": 4
903
+ },
904
+ {
905
+ "type": "list",
906
+ "sub_type": "text",
907
+ "list_items": [
908
+ "- $\\Sigma$ is a finite alphabet with $\\epsilon \\notin \\Sigma$ ;",
909
+ "- $Q$ is the set of states;",
910
+ "- $E \\subseteq Q \\times \\Sigma^{*} \\times \\mathbb{R} \\times Q$ is the set of weighted transitions, where $(q, \\sigma, w, r) \\in E$ is a transition from state $q$ to state $r$ emitting symbol(s) $\\sigma$ with weight $w$ ;",
911
+ "- $i \\in Q$ is an initial state and $F \\subseteq Q$ are final states."
912
+ ],
913
+ "bbox": [
914
+ 136,
915
+ 404,
916
+ 489,
917
+ 568
918
+ ],
919
+ "page_idx": 4
920
+ },
921
+ {
922
+ "type": "text",
923
+ "text": "Symbol $\\epsilon$ is used for transitions that emit nothing. A WFSA is $\\epsilon$ -free if there is no $\\epsilon$ -transition. A valid path is a path starting at $i$ and ending at any state in $F$ . A path emits a sequence of symbols, and has a weight equal to the sum of the transition weights it contains. The language of a WFSA is the set of emissions along all possible valid paths.",
924
+ "bbox": [
925
+ 112,
926
+ 582,
927
+ 487,
928
+ 695
929
+ ],
930
+ "page_idx": 4
931
+ },
932
+ {
933
+ "type": "text",
934
+ "text": "Algorithms. Given an acyclic WFSA, the path of maximum weight, Equation (2), and the log-sum-exp of all valid paths, Equation (1), can be computed using variants of the Viterbi algorithm (Forney, 1973) and the Forward algorithm (Baum, 1972), respectively. These algorithms are in fact identical, but defined over different semirings (Goodman, 1999): the tropical semiring for the Viterbi and the thermodynamic semiring (Marcolli and Thorngren, 2014) for the Forward. We refer to (Mohri, 2009, Section 3) for an in-depth introduction. The time complexity of both algorithms is $\\mathcal{O}(|E|)$ if a topological ordering of states is known.",
935
+ "bbox": [
936
+ 112,
937
+ 696,
938
+ 489,
939
+ 919
940
+ ],
941
+ "page_idx": 4
942
+ },
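The entry above notes that Viterbi and Forward are the same dynamic program instantiated over different semirings (tropical vs. log). Below is a minimal sketch of that idea on a plain tag lattice, assuming per-position emission scores and a tag-to-tag transition matrix; it is illustrative only and not the paper's implementation.

```python
# Minimal sketch: the same left-to-right recursion yields the Viterbi score
# with `max` (tropical semiring) and the log-partition with `logsumexp`
# (log semiring). Shapes (n, T) and (T, T) are assumptions for illustration.
import numpy as np
from scipy.special import logsumexp

def chain_dp(emissions, transitions, op):
    alpha = emissions[0]                       # scores of the first position
    for scores in emissions[1:]:
        # combine prefix scores with transition scores, then add emissions
        alpha = op(alpha[:, None] + transitions, axis=0) + scores
    return op(alpha, axis=0)

emissions = np.random.randn(5, 4)              # 5 words, 4 tags
transitions = np.random.randn(4, 4)
viterbi_score = chain_dp(emissions, transitions, np.max)
log_partition = chain_dp(emissions, transitions, logsumexp)
print(viterbi_score, log_partition)
```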
943
+ {
944
+ "type": "text",
945
+ "text": "Application to sequence tagging. We follow previous work and use the intersection of two WF-SAs to constraint tag sequences (Koskenniemi, 1990; Koskenniemi et al., 1992). The grammar automaton $\\mathcal{G} \\triangleq (T, Q, E, i, F)$ is a cyclic WFSA whose language is the set of all well-formed tag sequences (of any length). We assume $G$ is $\\epsilon$ -free and deterministic.<sup>7</sup> Without loss of generality, we fix all transition weights to 0. The sentence automaton $\\mathcal{S} \\triangleq (T, Q', E', i', F')$ is an acyclic FSA that represents all possible (not necessarily valid) analyses for a given sentence of $n$ words. States are $Q' \\triangleq \\{0, \\dots, n\\}$ and transitions are:",
946
+ "bbox": [
947
+ 507,
948
+ 84,
949
+ 884,
950
+ 294
951
+ ],
952
+ "page_idx": 4
953
+ },
954
+ {
955
+ "type": "equation",
956
+ "text": "\n$$\nE ^ {\\prime} \\triangleq \\left\\{\\left(i - 1, t, w _ {(i, t)}, i\\right) \\mid i \\in \\{1 \\dots n \\} \\wedge t \\in T \\right\\}\n$$\n",
957
+ "text_format": "latex",
958
+ "bbox": [
959
+ 514,
960
+ 306,
961
+ 875,
962
+ 326
963
+ ],
964
+ "page_idx": 4
965
+ },
966
+ {
967
+ "type": "text",
968
+ "text": "where $w_{(i,t)}$ is the weight associated with tagging word at position $i$ with tag $t$ . Initial and final states are $i' \\triangleq 0$ and $F' \\triangleq \\{n\\}$ . This WFSA contains $n|T|$ transitions, and each transition corresponds to tagging a given word with a given tag. By construction, it is always deterministic and $\\epsilon$ -free.",
969
+ "bbox": [
970
+ 507,
971
+ 338,
972
+ 882,
973
+ 434
974
+ ],
975
+ "page_idx": 4
976
+ },
977
+ {
978
+ "type": "text",
979
+ "text": "We denote $G \\cap S$ the intersection of $G$ and $S$ (Hopcroft et al., 2001, Section 4.2.1) composed of states $Q'' \\triangleq Q \\times Q'$ , transitions",
980
+ "bbox": [
981
+ 507,
982
+ 435,
983
+ 882,
984
+ 483
985
+ ],
986
+ "page_idx": 4
987
+ },
988
+ {
989
+ "type": "equation",
990
+ "text": "\n$$\nE ^ {\\prime \\prime} \\triangleq \\left\\{\\left((i - 1, p), t, w _ {(i, t)}, (i, q)\\right) \\middle | \\begin{array}{l} i \\in \\{1 \\dots n \\} \\wedge \\\\ (p, t, 0, q) \\in E \\end{array} \\right\\},\n$$\n",
991
+ "text_format": "latex",
992
+ "bbox": [
993
+ 509,
994
+ 495,
995
+ 882,
996
+ 531
997
+ ],
998
+ "page_idx": 4
999
+ },
1000
+ {
1001
+ "type": "text",
1002
+ "text": "initial state $i'' \\triangleq (i, i')$ and final states $F'' \\triangleq F \\times F'$ . Then, all valid paths in $G \\cap S$ are well-formed sequences of tags for the input sentence of length $n$ . We can then simply run the Viterbi or the Forward algorithm on $G \\cap S$ to compute Equations (1) and (2). Note that $|E''| \\propto n$ , therefore the time-complexity is linear in the number of words.",
1003
+ "bbox": [
1004
+ 507,
1005
+ 543,
1006
+ 884,
1007
+ 657
1008
+ ],
1009
+ "page_idx": 4
1010
+ },
1011
+ {
1012
+ "type": "text",
1013
+ "text": "We refer the reader to (Tapanainen, 1997) for an introduction to this sequence tagging approach.",
1014
+ "bbox": [
1015
+ 507,
1016
+ 658,
1017
+ 880,
1018
+ 690
1019
+ ],
1020
+ "page_idx": 4
1021
+ },
1022
+ {
1023
+ "type": "text",
1024
+ "text": "3.2 Grammar Automaton",
1025
+ "text_level": 1,
1026
+ "bbox": [
1027
+ 507,
1028
+ 703,
1029
+ 727,
1030
+ 715
1031
+ ],
1032
+ "page_idx": 4
1033
+ },
1034
+ {
1035
+ "type": "text",
1036
+ "text": "The grammar automaton used to constraint prediction to well-formed sequences of tags is shown in Figure 2. We present the automaton with $\\epsilon$ -transition for the sake of clarity, but they can be removed. We omit weights as they are null. States 1 and 2 recognize valid sequences of CB, CI and O tags. Moreover, the structure of the WFSA states recognizing discontinuous mentions is symmetric: the left-hand (resp. right-hand) side recognizes discontinuous mentions whose leftmost component is",
1037
+ "bbox": [
1038
+ 507,
1039
+ 724,
1040
+ 884,
1041
+ 885
1042
+ ],
1043
+ "page_idx": 4
1044
+ },
1045
+ {
1046
+ "type": "page_footnote",
1047
+ "text": "7 Procedures to determinize and remove $\\epsilon$ -transitions can be found in Hopcroft et al. (2001, Section 2.3.5 and 2.5.5).",
1048
+ "bbox": [
1049
+ 507,
1050
+ 894,
1051
+ 882,
1052
+ 920
1053
+ ],
1054
+ "page_idx": 4
1055
+ },
1056
+ {
1057
+ "type": "page_number",
1058
+ "text": "19510",
1059
+ "bbox": [
1060
+ 477,
1061
+ 927,
1062
+ 524,
1063
+ 940
1064
+ ],
1065
+ "page_idx": 4
1066
+ },
1067
+ {
1068
+ "type": "image",
1069
+ "img_path": "images/ce2f1abf2b0712734598a632bb7a93b42e857a5af954f1c25c84be09f31c94e5.jpg",
1070
+ "image_caption": [
1071
+ "Figure 2: The grammar automaton we propose for discontinuous named-entity recognition."
1072
+ ],
1073
+ "image_footnote": [],
1074
+ "bbox": [
1075
+ 174,
1076
+ 82,
1077
+ 823,
1078
+ 331
1079
+ ],
1080
+ "page_idx": 5
1081
+ },
1082
+ {
1083
+ "type": "text",
1084
+ "text": "typed X (resp. Y). Therefore we present only the left-hand side.",
1085
+ "bbox": [
1086
+ 112,
1087
+ 378,
1088
+ 487,
1089
+ 409
1090
+ ],
1091
+ "page_idx": 5
1092
+ },
1093
+ {
1094
+ "type": "text",
1095
+ "text": "Transition (1, DB-Bx, 7) starts the recognition of a set of mentions whose leftmost component is typed x. The self-loop in state 7 recognizes following words of the first component. Next we need to check that the inner structure of the set of mentions is well-formed. On the one hand, states 5 and 6 allows to recognize following x components and DI-O tags, until recognition of the first y component via transition (5, DI-BY, 4). On the other hand, transition (7, DB-Bx, 8) starts the recognition of an component typed y that directly follows the first component. Therefore, we need to check that there is \"something else\" in the set of mentions, otherwise the sequence of tags could lead to an ambiguity in the encoding of continuous mentions. We ensure this via transition $(8,\\epsilon ,3)$ , that requires the generation of another component before reaching the final state. Finally, states 3, 4 and 9 recognizes extra x and y in the set of mentions.",
1096
+ "bbox": [
1097
+ 112,
1098
+ 412,
1099
+ 489,
1100
+ 717
1101
+ ],
1102
+ "page_idx": 5
1103
+ },
1104
+ {
1105
+ "type": "text",
1106
+ "text": "As such, the language of our grammar automaton is the set of well-formed tag sequences as described in Section 2.2. To use our grammar automaton, we need to remove $\\epsilon$ -transitions. The resulting WFSA has 22 states. $^{8}$ In the case of structural component types, we can simply remove transition (1, DB-BY, 12) to constrain the leftmost mention to be labeled x.",
1107
+ "bbox": [
1108
+ 110,
1109
+ 721,
1110
+ 489,
1111
+ 847
1112
+ ],
1113
+ "page_idx": 5
1114
+ },
1115
+ {
1116
+ "type": "text",
1117
+ "text": "Practical implementation. The intersection of",
1118
+ "bbox": [
1119
+ 132,
1120
+ 851,
1121
+ 489,
1122
+ 866
1123
+ ],
1124
+ "page_idx": 5
1125
+ },
1126
+ {
1127
+ "type": "text",
1128
+ "text": "the grammar and the sentence automata does not result in a homogeneous Markov chain as transition weights correspond to tag weights for the next word, and are therefore different at each step. However, the resulting automaton has always a simple time-invariant structure. In term of implementation, this reduces to applying a mask at each step, and both Viterbi and forward algorithms can be implemented using basic differentiable tensor operations. For MAP inference, we compute the path of maximum weight and then rely on backpropagation to retrieve the sequence of tags (Mensch and Blondel, 2018, Section 2.1).",
1129
+ "bbox": [
1130
+ 507,
1131
+ 378,
1132
+ 885,
1133
+ 588
1134
+ ],
1135
+ "page_idx": 5
1136
+ },
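The "mask at each step" implementation sketched in the entry above can be illustrated with a masked Viterbi pass in which disallowed transitions are set to -inf. The toy BIO-style mask below is an assumption made for illustration; the paper instead intersects the tag scores with its 22-state grammar automaton.

```python
# Minimal sketch: Viterbi decoding where ill-formed transitions are masked out
# with -inf at every step. The toy BIO-style mask is an assumption; it is not
# the paper's grammar automaton, only the same masking idea on a smaller tagset.
import numpy as np

def masked_viterbi(emissions, allowed):
    mask = np.where(allowed, 0.0, -np.inf)     # (T, T) additive transition mask
    alpha = emissions[0]
    back = []
    for scores in emissions[1:]:
        cand = alpha[:, None] + mask           # previous tag -> next tag
        back.append(cand.argmax(axis=0))
        alpha = cand.max(axis=0) + scores
    tags = [int(alpha.argmax())]
    for ptr in reversed(back):
        tags.append(int(ptr[tags[-1]]))
    return tags[::-1]

# Toy example with tags O, B, I where I may only follow B or I.
allowed = np.array([[1, 1, 0],
                    [1, 1, 1],
                    [1, 1, 1]], dtype=bool)
scores = np.random.randn(6, 3)
scores[0, 2] = -np.inf                         # 'I' may not start a sequence
print(masked_viterbi(scores, allowed))
```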
1137
+ {
1138
+ "type": "text",
1139
+ "text": "4 Weakly-Supervised Learning",
1140
+ "text_level": 1,
1141
+ "bbox": [
1142
+ 507,
1143
+ 599,
1144
+ 798,
1145
+ 615
1146
+ ],
1147
+ "page_idx": 5
1148
+ },
1149
+ {
1150
+ "type": "text",
1151
+ "text": "The negative log-likelihood (NLL) loss,",
1152
+ "bbox": [
1153
+ 507,
1154
+ 624,
1155
+ 808,
1156
+ 640
1157
+ ],
1158
+ "page_idx": 5
1159
+ },
1160
+ {
1161
+ "type": "equation",
1162
+ "text": "\n$$\n\\ell (\\boldsymbol {w}; \\boldsymbol {y}) = - \\langle \\boldsymbol {y}, \\boldsymbol {w} \\rangle + A _ {Y} (\\boldsymbol {w}),\n$$\n",
1163
+ "text_format": "latex",
1164
+ "bbox": [
1165
+ 576,
1166
+ 652,
1167
+ 815,
1168
+ 670
1169
+ ],
1170
+ "page_idx": 5
1171
+ },
1172
+ {
1173
+ "type": "text",
1174
+ "text": "requires knowledge of the gold output $\\pmb{y}$ . Unfortunately, NER datasets only contain annotated mentions, but not their component types (e.g. we do not know which components are body parts and events). Therefore, we need to resort to weakly-supervised learning to infer this information.",
1175
+ "bbox": [
1176
+ 507,
1177
+ 681,
1178
+ 885,
1179
+ 778
1180
+ ],
1181
+ "page_idx": 5
1182
+ },
1183
+ {
1184
+ "type": "text",
1185
+ "text": "4.1 Learning with Partial Labels",
1186
+ "text_level": 1,
1187
+ "bbox": [
1188
+ 507,
1189
+ 788,
1190
+ 784,
1191
+ 803
1192
+ ],
1193
+ "page_idx": 5
1194
+ },
1195
+ {
1196
+ "type": "text",
1197
+ "text": "Learning with partial labels refers to the case where the gold output is unknown but there is access to a subset of labels that includes the gold one (Grandvalet and Bengio, 2004; Nguyen and Caruana, 2008; Cour et al., 2011). Let $\\widetilde{Y} \\subseteq Y$ be the set of tag sequences that recovers the gold discontinuous mentions. For the example in Figure 1, $\\widetilde{Y}$",
1198
+ "bbox": [
1199
+ 507,
1200
+ 808,
1201
+ 885,
1202
+ 921
1203
+ ],
1204
+ "page_idx": 5
1205
+ },
1206
+ {
1207
+ "type": "page_footnote",
1208
+ "text": "Although 22 states is small and allows very fast computation, it is already too large for drawing a comprehensive figure.",
1209
+ "bbox": [
1210
+ 112,
1211
+ 882,
1212
+ 489,
1213
+ 921
1214
+ ],
1215
+ "page_idx": 5
1216
+ },
1217
+ {
1218
+ "type": "page_number",
1219
+ "text": "19511",
1220
+ "bbox": [
1221
+ 477,
1222
+ 927,
1223
+ 524,
1224
+ 940
1225
+ ],
1226
+ "page_idx": 5
1227
+ },
1228
+ {
1229
+ "type": "text",
1230
+ "text": "contain two sequences, one where components of the set of mentions are labeled $\\mathrm{x / x / y / y}$ and the other $\\mathrm{Y} / \\mathrm{Y} / \\mathrm{X} / \\mathrm{X}$ . For a sentence containing $k$ sets of mentions, we have $|\\widetilde{Y}| = 2^{k}$ .",
1231
+ "bbox": [
1232
+ 112,
1233
+ 84,
1234
+ 487,
1235
+ 148
1236
+ ],
1237
+ "page_idx": 6
1238
+ },
1239
+ {
1240
+ "type": "text",
1241
+ "text": "Following Jin and Ghahramani (2002), we minimize the NLL after marginalizing over $\\hat{Y}$ :",
1242
+ "bbox": [
1243
+ 112,
1244
+ 149,
1245
+ 489,
1246
+ 181
1247
+ ],
1248
+ "page_idx": 6
1249
+ },
1250
+ {
1251
+ "type": "equation",
1252
+ "text": "\n$$\n\\begin{array}{l} \\widetilde {\\ell} (\\boldsymbol {w}; \\widetilde {Y}) = - \\log p _ {\\theta} (\\widetilde {Y} | \\boldsymbol {x}) = - \\log \\sum_ {\\boldsymbol {y} \\in \\widetilde {Y}} p _ {\\theta} (\\boldsymbol {y} | \\boldsymbol {x}) \\\\ = A _ {Y} \\left(f _ {\\theta} (\\boldsymbol {x})\\right) - \\underbrace {\\log \\sum_ {\\boldsymbol {y} \\in \\widetilde {Y}} \\exp \\langle \\boldsymbol {y} , f _ {\\theta} (\\boldsymbol {x}) \\rangle} _ {= A _ {\\widetilde {Y}} \\left(f _ {\\theta} (\\boldsymbol {x})\\right)}, \\tag {3} \\\\ \\end{array}\n$$\n",
1253
+ "text_format": "latex",
1254
+ "bbox": [
1255
+ 122,
1256
+ 192,
1257
+ 487,
1258
+ 292
1259
+ ],
1260
+ "page_idx": 6
1261
+ },
1262
+ {
1263
+ "type": "text",
1264
+ "text": "where $A_{\\widetilde{Y}}$ is the clamped log-partition, which can be efficiently computed via a dynamic programming algorithm. In speech processing, $A_{\\widetilde{Y}}$ is called the alignment model and the associated FSA the numerator graph (Povey et al., 2016; Hadian et al., 2018).",
1265
+ "bbox": [
1266
+ 112,
1267
+ 302,
1268
+ 487,
1269
+ 397
1270
+ ],
1271
+ "page_idx": 6
1272
+ },
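Loss (3) above is the difference between the full log-partition and the clamped log-partition. A minimal sketch, assuming per-position tag scores only (no transition weights) so that both terms reduce to plain logsumexp computations; the "compatible" labelings below stand in for the two X/Y vs. Y/X assignments and are assumptions for illustration.

```python
# Minimal sketch of the partial-label NLL: loss = A_Y(w) - A_Ytilde(w), where
# A_Ytilde is obtained by scoring the few tag sequences compatible with the
# gold mentions and A_Y by summing over all sequences. Per-position scores
# only (no transitions) are assumed to keep the sketch short.
import numpy as np
from scipy.special import logsumexp

def log_partition_all(emissions):
    # with independent positions, the sum over all sequences factorises
    return float(np.sum(logsumexp(emissions, axis=1)))

def score(emissions, tags):
    return float(sum(emissions[i, t] for i, t in enumerate(tags)))

emissions = np.random.randn(4, 3)              # 4 words, 3 tags
compatible = [[0, 1, 2, 2], [0, 2, 1, 1]]      # stand-ins for the X/Y and Y/X labelings
loss = log_partition_all(emissions) - logsumexp([score(emissions, y) for y in compatible])
print(loss)
```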
1273
+ {
1274
+ "type": "text",
1275
+ "text": "Relation with EM. We can interpret minimizing $\\widetilde{\\ell}$ as an Expectation-Maximization (EM) procedure (Neal and Hinton, 1998). Indeed, the variational formulation of the clamped log-partition is:",
1276
+ "bbox": [
1277
+ 112,
1278
+ 398,
1279
+ 487,
1280
+ 463
1281
+ ],
1282
+ "page_idx": 6
1283
+ },
1284
+ {
1285
+ "type": "equation",
1286
+ "text": "\n$$\nA_{\\widetilde{Y}}(\\boldsymbol {w}) = \\sup_{\\boldsymbol {\\mu}\\in \\operatorname {conv}\\widetilde{Y}}\\langle \\boldsymbol {\\mu},\\boldsymbol {w}\\rangle -\\Omega_{\\widetilde{Y}}(\\boldsymbol {\\mu}),\n$$\n",
1287
+ "text_format": "latex",
1288
+ "bbox": [
1289
+ 159,
1290
+ 475,
1291
+ 440,
1292
+ 505
1293
+ ],
1294
+ "page_idx": 6
1295
+ },
1296
+ {
1297
+ "type": "text",
1298
+ "text": "where $\\mathrm{conv}$ denotes the convex hull and $\\Omega_{\\widetilde{Y}}$ is a structured entropy term as described by Blondel et al. (2020, Section 7.1). Setting $\\boldsymbol{w} = f_{\\theta}(\\boldsymbol{x})$ , by Danskin's theorem (Danskin, 1966; Bertsekas, 1999), the gradient of the $A$ is:",
1299
+ "bbox": [
1300
+ 112,
1301
+ 514,
1302
+ 489,
1303
+ 595
1304
+ ],
1305
+ "page_idx": 6
1306
+ },
1307
+ {
1308
+ "type": "equation",
1309
+ "text": "\n$$\n\\widehat{\\boldsymbol{\\mu}}_{\\widetilde{Y}}(\\boldsymbol {w}) = \\nabla A_{\\widetilde{Y}}(\\boldsymbol {w}) = \\operatorname *{arg max}_{\\boldsymbol {\\mu}\\in \\operatorname {conv} \\widetilde{Y}}\\left\\langle \\boldsymbol {\\mu},\\boldsymbol {w}\\right\\rangle -\\Omega_{\\widetilde{Y}}(\\boldsymbol {\\mu}).\n$$\n",
1310
+ "text_format": "latex",
1311
+ "bbox": [
1312
+ 112,
1313
+ 607,
1314
+ 500,
1315
+ 638
1316
+ ],
1317
+ "page_idx": 6
1318
+ },
1319
+ {
1320
+ "type": "text",
1321
+ "text": "We rewrite the minimization of $\\widetilde{\\ell}$ as a two-step procedure:",
1322
+ "bbox": [
1323
+ 112,
1324
+ 650,
1325
+ 487,
1326
+ 683
1327
+ ],
1328
+ "page_idx": 6
1329
+ },
1330
+ {
1331
+ "type": "list",
1332
+ "sub_type": "text",
1333
+ "list_items": [
1334
+ "1. E step: compute $\\widehat{\\pmb{\\mu}}_{\\widetilde{Y}}(\\pmb {w})$",
1335
+ "2. M step: take one gradient step over the network parameters using the marginal distribution computed in E step, yielding the loss:"
1336
+ ],
1337
+ "bbox": [
1338
+ 127,
1339
+ 694,
1340
+ 489,
1341
+ 769
1342
+ ],
1343
+ "page_idx": 6
1344
+ },
1345
+ {
1346
+ "type": "equation",
1347
+ "text": "\n$$\n\\ell (\\boldsymbol {w}; \\widehat {\\boldsymbol {\\mu}} _ {\\widetilde {Y}} (\\boldsymbol {w})) = - \\langle \\boldsymbol {y}, \\widehat {\\boldsymbol {\\mu}} (\\boldsymbol {w}) \\rangle + A _ {Y} (\\boldsymbol {w}).\n$$\n",
1348
+ "text_format": "latex",
1349
+ "bbox": [
1350
+ 164,
1351
+ 781,
1352
+ 470,
1353
+ 800
1354
+ ],
1355
+ "page_idx": 6
1356
+ },
1357
+ {
1358
+ "type": "text",
1359
+ "text": "It is important to note that $\\widehat{\\pmb{\\mu}}_{\\widetilde{Y}}(\\pmb {w})$ is considered as a constant in the M step, i.e. the gradient is:",
1360
+ "bbox": [
1361
+ 112,
1362
+ 815,
1363
+ 487,
1364
+ 848
1365
+ ],
1366
+ "page_idx": 6
1367
+ },
1368
+ {
1369
+ "type": "equation",
1370
+ "text": "\n$$\n\\nabla \\ell (\\boldsymbol {w}; \\widehat {\\boldsymbol {\\mu}} _ {\\widetilde {Y}} (\\boldsymbol {w})) = - \\widehat {\\boldsymbol {\\mu}} (\\boldsymbol {w}) + \\nabla A _ {Y} (\\boldsymbol {w}) = \\nabla \\widetilde {\\ell} (\\boldsymbol {w}; \\widetilde {Y}),\n$$\n",
1371
+ "text_format": "latex",
1372
+ "bbox": [
1373
+ 112,
1374
+ 858,
1375
+ 500,
1376
+ 879
1377
+ ],
1378
+ "page_idx": 6
1379
+ },
1380
+ {
1381
+ "type": "text",
1382
+ "text": "meaning that this EM procedure is equivalent to minimizing the loss in Equation (3).",
1383
+ "bbox": [
1384
+ 112,
1385
+ 889,
1386
+ 487,
1387
+ 921
1388
+ ],
1389
+ "page_idx": 6
1390
+ },
1391
+ {
1392
+ "type": "text",
1393
+ "text": "This suggests a \"Hard EM\" alternative, where the $E$ step computes the unregularized maximum:",
1394
+ "bbox": [
1395
+ 507,
1396
+ 84,
1397
+ 880,
1398
+ 116
1399
+ ],
1400
+ "page_idx": 6
1401
+ },
1402
+ {
1403
+ "type": "equation",
1404
+ "text": "\n$$\n\\widehat{\\boldsymbol{y}}_{\\widetilde{Y}}(\\boldsymbol {w}) = \\operatorname *{arg max}_{\\boldsymbol {y}\\in \\operatorname {conv} \\widetilde{Y}}\\left\\langle \\boldsymbol {y},\\boldsymbol {w}\\right\\rangle ,\n$$\n",
1405
+ "text_format": "latex",
1406
+ "bbox": [
1407
+ 589,
1408
+ 122,
1409
+ 801,
1410
+ 149
1411
+ ],
1412
+ "page_idx": 6
1413
+ },
1414
+ {
1415
+ "type": "text",
1416
+ "text": "and then apply one step of gradient descent using the loss $\\ell(\\boldsymbol{w}; \\widehat{\\boldsymbol{y}}_{\\widetilde{Y}}(\\boldsymbol{w}))$ in the M step.",
1417
+ "bbox": [
1418
+ 507,
1419
+ 155,
1420
+ 880,
1421
+ 189
1422
+ ],
1423
+ "page_idx": 6
1424
+ },
1425
+ {
1426
+ "type": "text",
1427
+ "text": "4.2 Silver Annotation of Components",
1428
+ "text_level": 1,
1429
+ "bbox": [
1430
+ 507,
1431
+ 197,
1432
+ 818,
1433
+ 212
1434
+ ],
1435
+ "page_idx": 6
1436
+ },
1437
+ {
1438
+ "type": "text",
1439
+ "text": "In order to automatically annotate components, we collect names of body parts from the metathesaurus MRCONSO.RRF of the Unified Medical Language System (UMLS, version 2023ab). We select English entries corresponding to semantic types \"Body Location or Region\", \"Body Part, Organ, or Organ Component\" and \"Body Space or Junction\", via the annotation in the lexicon MRSTY.RRF, which corresponds to identifiers T029, T023 and T030, respectively. However, we remove all acronyms (indicated via the marker ABR) as they would introduce too many false positives in the annotation process (e.g. \"in\" and \"am\" are acronyms of body parts). This leads to 218 134 names of body parts.",
1440
+ "bbox": [
1441
+ 505,
1442
+ 218,
1443
+ 882,
1444
+ 443
1445
+ ],
1446
+ "page_idx": 6
1447
+ },
1448
+ {
1449
+ "type": "text",
1450
+ "text": "Then, we try to match words of components with these entries. If at least one word of a component match an entry, we consider it as a body part. Note that a single match fully disambiguate a set of mentions.",
1451
+ "bbox": [
1452
+ 507,
1453
+ 443,
1454
+ 882,
1455
+ 521
1456
+ ],
1457
+ "page_idx": 6
1458
+ },
1459
+ {
1460
+ "type": "text",
1461
+ "text": "5 Related Work",
1462
+ "text_level": 1,
1463
+ "bbox": [
1464
+ 507,
1465
+ 533,
1466
+ 665,
1467
+ 549
1468
+ ],
1469
+ "page_idx": 6
1470
+ },
1471
+ {
1472
+ "type": "text",
1473
+ "text": "Tagging methods. Tang et al. (2013) proposed the BIOHD tagging scheme for discontinuous NER. A major issue of their approach is its structural ambiguity: several tag sequences can encode the same discontinuous mention, and different discontinuous mentions have the same associated tag sequence, see (Muis and Lu, 2016, Section 3.1). A choice to resolve ambiguity has to be made when making a prediction, meaning that there are structures that cannot be predicted. Moreover, this approach does not constrain the output tag sequence to be well-formed, i.e. it may not be possible to reconstruct mentions from a predicted tag sequence. The tagging scheme used by Metke-Jimenez and Karimi (2016) and Dai et al. (2017) has the same limitation. Muis and Lu (2016) proposed a graph-based method that ensures that predictions are well-formed, but their approach still exhibits structural ambiguity.",
1474
+ "bbox": [
1475
+ 507,
1476
+ 558,
1477
+ 884,
1478
+ 864
1479
+ ],
1480
+ "page_idx": 6
1481
+ },
1482
+ {
1483
+ "type": "page_footnote",
1484
+ "text": "<sup>9</sup>https://www.ncbi.nlm.nih.gov/books/NBK9685/table/ch03.T.concept_names_and_sources_file_mr/",
1485
+ "bbox": [
1486
+ 509,
1487
+ 869,
1488
+ 870,
1489
+ 895
1490
+ ],
1491
+ "page_idx": 6
1492
+ },
1493
+ {
1494
+ "type": "page_footnote",
1495
+ "text": "10https://www.ncbi.nlm.nih.gov/books/NBK9685/ table/ch03.Tf/",
1496
+ "bbox": [
1497
+ 510,
1498
+ 895,
1499
+ 867,
1500
+ 919
1501
+ ],
1502
+ "page_idx": 6
1503
+ },
1504
+ {
1505
+ "type": "page_number",
1506
+ "text": "19512",
1507
+ "bbox": [
1508
+ 477,
1509
+ 927,
1510
+ 524,
1511
+ 940
1512
+ ],
1513
+ "page_idx": 6
1514
+ },
1515
+ {
1516
+ "type": "text",
1517
+ "text": "Other methods. Wang and Lu (2019) rely on a two-step model that first predicts continuous spans (i.e. components) and then uses a separate classifier that combines them together. Dai et al. (2020) proposed a novel transition-based model. These two approaches are based on sequential predictions that are trained using gold intermediate outputs, which can lead to error propagation once a single mistake is made at test time. To resolve this problem, Wang et al. (2021) proposed a method that jointly predicts spans and their combination based on the maximal clique problem. A downside of these approaches is that they are more computationally costly (and therefore slower) than tagging methods.",
1518
+ "bbox": [
1519
+ 112,
1520
+ 84,
1521
+ 490,
1522
+ 310
1523
+ ],
1524
+ "page_idx": 7
1525
+ },
1526
+ {
1527
+ "type": "text",
1528
+ "text": "6 Experiments",
1529
+ "text_level": 1,
1530
+ "bbox": [
1531
+ 112,
1532
+ 332,
1533
+ 260,
1534
+ 350
1535
+ ],
1536
+ "page_idx": 7
1537
+ },
1538
+ {
1539
+ "type": "text",
1540
+ "text": "We evaluate our approach on three standard English datasets for discontinuous named-entity recognition in the biomedical domain: CADEC (Karimi et al., 2015), SHARE2013 (Pradhan et al.) and SHARE2014 (Mowery et al.). We pre-process the data using the script of Dai et al. (2020). Note that our tagging scheme cannot predict all discontinuous mentions in the data, i.e. there are sentences that we cannot convert to our representation. Therefore, we remove these sentences from the training set. $^{11}$ Data statistics are given in Table 2.",
1541
+ "bbox": [
1542
+ 112,
1543
+ 367,
1544
+ 489,
1545
+ 544
1546
+ ],
1547
+ "page_idx": 7
1548
+ },
1549
+ {
1550
+ "type": "text",
1551
+ "text": "6.1 Discontinuity Analysis",
1552
+ "text_level": 1,
1553
+ "bbox": [
1554
+ 112,
1555
+ 567,
1556
+ 337,
1557
+ 583
1558
+ ],
1559
+ "page_idx": 7
1560
+ },
1561
+ {
1562
+ "type": "text",
1563
+ "text": "We conduct a qualitative analysis of the search space of our algorithm on the full CADEC dataset. There are 26 discontinuous NER structures incompatible with our approach. $^{12}$",
1564
+ "bbox": [
1565
+ 112,
1566
+ 595,
1567
+ 489,
1568
+ 659
1569
+ ],
1570
+ "page_idx": 7
1571
+ },
1572
+ {
1573
+ "type": "text",
1574
+ "text": "There are discontinuous mentions where there is a partially shared component. This is due to shared negation (1 case), shared adjective (5 cases) and shared prepositional phrase (PP, 1 case):",
1575
+ "bbox": [
1576
+ 112,
1577
+ 663,
1578
+ 489,
1579
+ 728
1580
+ ],
1581
+ "page_idx": 7
1582
+ },
1583
+ {
1584
+ "type": "image",
1585
+ "img_path": "images/0d11433b4ea1d09ce2fa3910bf12da085aafb74110b300d516183b7ccea73cfa.jpg",
1586
+ "image_caption": [],
1587
+ "image_footnote": [],
1588
+ "bbox": [
1589
+ 112,
1590
+ 728,
1591
+ 487,
1592
+ 857
1593
+ ],
1594
+ "page_idx": 7
1595
+ },
1596
+ {
1597
+ "type": "image",
1598
+ "img_path": "images/1099285157ab2e4716e01d5e0b543cd455c4fdbf15b3390f8cff72b32cdf420f.jpg",
1599
+ "image_caption": [],
1600
+ "image_footnote": [],
1601
+ "bbox": [
1602
+ 499,
1603
+ 84,
1604
+ 882,
1605
+ 147
1606
+ ],
1607
+ "page_idx": 7
1608
+ },
1609
+ {
1610
+ "type": "text",
1611
+ "text": "Although we cannot recognize these structures, we could extend our automaton to recognize the shared part as a continuous chunk (negation, adjective or PP), and the rest using our two layer representation.",
1612
+ "bbox": [
1613
+ 507,
1614
+ 152,
1615
+ 884,
1616
+ 231
1617
+ ],
1618
+ "page_idx": 7
1619
+ },
1620
+ {
1621
+ "type": "text",
1622
+ "text": "There are also discontinuous mentions that are composed of three components (16 cases), which we cannot recognize. This can happen because there is a coordination in both subject and PP positions as in the following example:",
1623
+ "bbox": [
1624
+ 507,
1625
+ 233,
1626
+ 882,
1627
+ 313
1628
+ ],
1629
+ "page_idx": 7
1630
+ },
1631
+ {
1632
+ "type": "image",
1633
+ "img_path": "images/058c67ac67148ced97f90b2aa7bdbc13407fd759df4a3e39e4a65ecf4d372d30.jpg",
1634
+ "image_caption": [],
1635
+ "image_footnote": [],
1636
+ "bbox": [
1637
+ 490,
1638
+ 315,
1639
+ 882,
1640
+ 419
1641
+ ],
1642
+ "page_idx": 7
1643
+ },
1644
+ {
1645
+ "type": "text",
1646
+ "text": "The mention \"muscle aches in elbows\" is composed of three components.",
1647
+ "bbox": [
1648
+ 507,
1649
+ 425,
1650
+ 882,
1651
+ 457
1652
+ ],
1653
+ "page_idx": 7
1654
+ },
1655
+ {
1656
+ "type": "text",
1657
+ "text": "Finally, the last three incompatibilities are due to a convoluted syntactic structure and annotation errors (2 cases). Interestingly, some annotation errors can be detected thanks to our new annotation schema. For example, in CADEC the sequence \"renal and respiratory failure\" as been incorrectly annotated as containing renal respiratory failure instead of renal failure. In SHARE2014, the sequence \"pleural / abdominal effusions\" as been incorrectly annotated as containing effusions instead of abdominal effusions. Note that in this paper we used the datasets as such and did not fix any error so that results are comparable with previous work.",
1658
+ "bbox": [
1659
+ 505,
1660
+ 458,
1661
+ 882,
1662
+ 699
1663
+ ],
1664
+ "page_idx": 7
1665
+ },
1666
+ {
1667
+ "type": "text",
1668
+ "text": "6.2 Results",
1669
+ "text_level": 1,
1670
+ "bbox": [
1671
+ 507,
1672
+ 712,
1673
+ 613,
1674
+ 726
1675
+ ],
1676
+ "page_idx": 7
1677
+ },
1678
+ {
1679
+ "type": "text",
1680
+ "text": "Our neural network is excessively simple: we use the DEBERTA-V3 pretrained self-attentive network (He et al., 2021a,b) followed by a single linear projection that maps context-sensitive embeddings to tag weights. All training details are given in Appendix A. For each loss function, we train six models with six different seeds and we select the best model using the development set.",
1681
+ "bbox": [
1682
+ 507,
1683
+ 734,
1684
+ 882,
1685
+ 862
1686
+ ],
1687
+ "page_idx": 7
1688
+ },
1689
+ {
1690
+ "type": "text",
1691
+ "text": "Results. We report the F-measure on all mentions and on discontinuous mentions only in Ta",
1692
+ "bbox": [
1693
+ 507,
1694
+ 864,
1695
+ 882,
1696
+ 895
1697
+ ],
1698
+ "page_idx": 7
1699
+ },
1700
+ {
1701
+ "type": "page_footnote",
1702
+ "text": "11 Obviously, we do not remove anything from the test set. 12 We do not count single mentions: we count full sets of mentions that cannot be recognized by our algorithm.",
1703
+ "bbox": [
1704
+ 112,
1705
+ 881,
1706
+ 485,
1707
+ 921
1708
+ ],
1709
+ "page_idx": 7
1710
+ },
1711
+ {
1712
+ "type": "page_footnote",
1713
+ "text": "This example has been slightly changed for formatting.",
1714
+ "bbox": [
1715
+ 526,
1716
+ 906,
1717
+ 873,
1718
+ 921
1719
+ ],
1720
+ "page_idx": 7
1721
+ },
1722
+ {
1723
+ "type": "page_number",
1724
+ "text": "19513",
1725
+ "bbox": [
1726
+ 477,
1727
+ 927,
1728
+ 524,
1729
+ 940
1730
+ ],
1731
+ "page_idx": 7
1732
+ },
1733
+ {
1734
+ "type": "table",
1735
+ "img_path": "images/ee33b13c192042911cd04471ec09aeabe046c852566fa34f1345d58ee8878739.jpg",
1736
+ "table_caption": [],
1737
+ "table_footnote": [],
1738
+ "table_body": "<table><tr><td rowspan=\"2\"></td><td colspan=\"2\">CADEC</td><td colspan=\"2\">SHARE2013</td><td colspan=\"2\">SHARE2014</td></tr><tr><td>F1</td><td>Disc. F1</td><td>F1</td><td>Disc. F1</td><td>F1</td><td>Disc. F1</td></tr><tr><td colspan=\"7\">Previous work</td></tr><tr><td>Tang et al. (2013)</td><td></td><td></td><td>75.0</td><td></td><td></td><td></td></tr><tr><td>Tang et al. (2018)</td><td>66.3</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Metke-Jimenez and Karimi (2016)</td><td>64.4</td><td></td><td>56.5</td><td></td><td>60.2</td><td></td></tr><tr><td>Metke-Jimenez and Karimi (2016)†</td><td>67.4</td><td>1.8</td><td>74.9</td><td>18.8</td><td>76.6</td><td>6.0</td></tr><tr><td>Muis and Lu (2016)†</td><td>58.0</td><td>23.9</td><td>70.3</td><td>50.0</td><td>74.7</td><td>41.1</td></tr><tr><td>Dai et al. (2020)</td><td>69.0</td><td>37.9</td><td>77.7</td><td>52.5</td><td>79.6</td><td>49.2</td></tr><tr><td>Wang et al. (2021)</td><td>71.5</td><td>44.4</td><td>81.2</td><td>55.9</td><td>81.3</td><td>54.1</td></tr><tr><td colspan=\"7\">This work</td></tr><tr><td>Soft EM</td><td>71.1</td><td>38.1</td><td>80.7</td><td>49.2</td><td>81.5</td><td>51.9</td></tr><tr><td>Hard EM</td><td>71.9</td><td>35.9</td><td>82.0</td><td>51.9</td><td>81.6</td><td>54.1</td></tr><tr><td>Weakly soft EM</td><td>71.8</td><td>37.6</td><td>82.0</td><td>52.0</td><td>81.4</td><td>46.2</td></tr><tr><td>Weakly hard EM</td><td>70.4</td><td>33.6</td><td>82.0</td><td>52.1</td><td>81.8</td><td>49.8</td></tr><tr><td>Structural labels</td><td>72.9</td><td>41.5</td><td>82.1</td><td>53.3</td><td>80.9</td><td>53.7</td></tr></table>",
1739
+ "bbox": [
1740
+ 186,
1741
+ 80,
1742
+ 811,
1743
+ 316
1744
+ ],
1745
+ "page_idx": 8
1746
+ },
1747
+ {
1748
+ "type": "table",
1749
+ "img_path": "images/00e3f630369dd11887cad7d837e044e86a3e8ebd681f0665e6ee03ae2db8f601.jpg",
1750
+ "table_caption": [
1751
+ "Table 1: Results on on three different datasets. Results marked with $\\dagger$ are reproductions by Wang et al. (2021)."
1752
+ ],
1753
+ "table_footnote": [],
1754
+ "table_body": "<table><tr><td>Split</td><td>CADEC</td><td>SHARE2013</td><td>SHARE2014</td></tr><tr><td>Train</td><td>5340 (306)</td><td>8508 (477)</td><td>17407 (777)</td></tr><tr><td>- filtered</td><td>5322 (288)</td><td>8432 (401)</td><td>17294 (667)</td></tr><tr><td>Dev.</td><td>1097 (59)</td><td>1250 (58)</td><td>1361 (59)</td></tr><tr><td>Test</td><td>1160 (74)</td><td>9009 (301)</td><td>15850 (411)</td></tr></table>",
1755
+ "bbox": [
1756
+ 121,
1757
+ 362,
1758
+ 480,
1759
+ 439
1760
+ ],
1761
+ "page_idx": 8
1762
+ },
1763
+ {
1764
+ "type": "table",
1765
+ "img_path": "images/df520f81bfe7b27bab2bd1610cb4fc5e69d0595381352c2a5de822a7bc53d0ab.jpg",
1766
+ "table_caption": [
1767
+ "Table 2: Number of sentences in each split. The number in parentheses corresponds to the number of sentences with at least one discontinuous mention."
1768
+ ],
1769
+ "table_footnote": [],
1770
+ "table_body": "<table><tr><td>Model</td><td>CADEC</td><td>S2013</td><td>S2014</td></tr><tr><td>Dai et al. (2020)</td><td>36</td><td>41</td><td>40</td></tr><tr><td>Wang et al. (2021)</td><td>193</td><td>200</td><td>198</td></tr><tr><td>This work</td><td>8286</td><td>10216</td><td>10206</td></tr></table>",
1771
+ "bbox": [
1772
+ 136,
1773
+ 510,
1774
+ 465,
1775
+ 576
1776
+ ],
1777
+ "page_idx": 8
1778
+ },
1779
+ {
1780
+ "type": "text",
1781
+ "text": "Table 3: Speed comparison in terms of sentence per seconds. Numbers for Dai et al. (2020) are BERT-based models, as reproduced by Wang et al. (2021).",
1782
+ "bbox": [
1783
+ 112,
1784
+ 586,
1785
+ 487,
1786
+ 630
1787
+ ],
1788
+ "page_idx": 8
1789
+ },
1790
+ {
1791
+ "type": "text",
1792
+ "text": "ble 1. The evaluation is conducted on the original representation so results are comparable with previous work. Our approach leads to similar results to previous work. We do not observe significant differences between different loss functions.",
1793
+ "bbox": [
1794
+ 112,
1795
+ 659,
1796
+ 489,
1797
+ 741
1798
+ ],
1799
+ "page_idx": 8
1800
+ },
1801
+ {
1802
+ "type": "text",
1803
+ "text": "Speed. All numbers are reported for computation on NVIDIA V100 GPUs. Training takes approximately 40, 60 and 80 minutes on CADEC, SHARE2013 and SHARE2014, respectively. Table 3 compares decoding with previous work of Dai et al. (2020) and Wang et al. (2021). The transition-based model of Dai et al. (2020) is particularly slow as their approach cannot fully exploit GPU parallelization. Our approach is $\\sim 40 - 50$ times faster than the method of Wang et al. (2021). This is due to two reasons: (1) they use a complex neural net",
1804
+ "bbox": [
1805
+ 112,
1806
+ 744,
1807
+ 490,
1808
+ 921
1809
+ ],
1810
+ "page_idx": 8
1811
+ },
1812
+ {
1813
+ "type": "text",
1814
+ "text": "work architecture on top of a BERT-like model and (2) for each input they must solve a NP-hard problem (maximum clique) to make the prediction.",
1815
+ "bbox": [
1816
+ 507,
1817
+ 365,
1818
+ 884,
1819
+ 413
1820
+ ],
1821
+ "page_idx": 8
1822
+ },
1823
+ {
1824
+ "type": "text",
1825
+ "text": "7 Conclusion",
1826
+ "text_level": 1,
1827
+ "bbox": [
1828
+ 509,
1829
+ 437,
1830
+ 640,
1831
+ 451
1832
+ ],
1833
+ "page_idx": 8
1834
+ },
1835
+ {
1836
+ "type": "text",
1837
+ "text": "In this work, we propose a novel tagging scheme for discontinuous NER based on a two-layer representation of discontinuous mentions. Our approach leads to result on par with state-of-the-art using a very simple neural network architecture. Importantly, decoding with our model is very fast compared to previous work.",
1838
+ "bbox": [
1839
+ 507,
1840
+ 470,
1841
+ 884,
1842
+ 583
1843
+ ],
1844
+ "page_idx": 8
1845
+ },
1846
+ {
1847
+ "type": "text",
1848
+ "text": "Our main objective with this work is to propose a simple plug-in method for discontinuous NER: any future work on models for BIO tagging can now also be trivially evaluated on discontinuous NER. Moreover, our approach is also fast to train, meaning that there is no significant cost overhead.",
1849
+ "bbox": [
1850
+ 507,
1851
+ 587,
1852
+ 882,
1853
+ 683
1854
+ ],
1855
+ "page_idx": 8
1856
+ },
1857
+ {
1858
+ "type": "text",
1859
+ "text": "Acknowledgments",
1860
+ "text_level": 1,
1861
+ "bbox": [
1862
+ 509,
1863
+ 707,
1864
+ 672,
1865
+ 722
1866
+ ],
1867
+ "page_idx": 8
1868
+ },
1869
+ {
1870
+ "type": "text",
1871
+ "text": "I thank Vlad Niculae and François Yvon for their comments and suggestions. I thank Lucas Ondel-Yang for the many discussions on finite stateautomata that inspired the decoding algorithm described in this paper. I thank Pierre Zweigenbaum for the help with Share datasets and the UMLS database.",
1872
+ "bbox": [
1873
+ 507,
1874
+ 740,
1875
+ 884,
1876
+ 852
1877
+ ],
1878
+ "page_idx": 8
1879
+ },
1880
+ {
1881
+ "type": "text",
1882
+ "text": "Work partially done while I was a researcher at LISN and ISIR. This work was granted access to the HPC/AI resources of IDRIS under the allocation 2024-AD011013727R1 made by GENCI.",
1883
+ "bbox": [
1884
+ 507,
1885
+ 857,
1886
+ 882,
1887
+ 921
1888
+ ],
1889
+ "page_idx": 8
1890
+ },
1891
+ {
1892
+ "type": "page_number",
1893
+ "text": "19514",
1894
+ "bbox": [
1895
+ 477,
1896
+ 927,
1897
+ 524,
1898
+ 940
1899
+ ],
1900
+ "page_idx": 8
1901
+ },
1902
+ {
1903
+ "type": "text",
1904
+ "text": "Limitations",
1905
+ "text_level": 1,
1906
+ "bbox": [
1907
+ 114,
1908
+ 84,
1909
+ 220,
1910
+ 99
1911
+ ],
1912
+ "page_idx": 9
1913
+ },
1914
+ {
1915
+ "type": "text",
1916
+ "text": "The approach proposed in this paper cannot cover all form of discontinuities observed in the three datasets. Indeed, some discontinuous mentions are composed of three parts or more. However, they are rare so our results are still competitive. Moreover, our contribution is focused on the general decoding approach that can be extended by future work.",
1917
+ "bbox": [
1918
+ 112,
1919
+ 110,
1920
+ 487,
1921
+ 221
1922
+ ],
1923
+ "page_idx": 9
1924
+ },
1925
+ {
1926
+ "type": "text",
1927
+ "text": "Discontinuous NER datasets are scarce, therefore we are only able to experiment on three datasets in the biomedical domain in English. We suspect this is due to a chicken or the egg dilemma: discontinuity are often not annotated as there are no easy plug-and-easy approach to predict them, and there is little NLP work in the domain as there are only a few datasets available for experiments.",
1928
+ "bbox": [
1929
+ 112,
1930
+ 223,
1931
+ 487,
1932
+ 351
1933
+ ],
1934
+ "page_idx": 9
1935
+ },
1936
+ {
1937
+ "type": "text",
1938
+ "text": "During the evaluation of our approach, we observed that many mentions are missing in the gold annotation. As such, all results reported on these datasets (including previous works) should be taken with a pinch of salt.",
1939
+ "bbox": [
1940
+ 112,
1941
+ 351,
1942
+ 487,
1943
+ 432
1944
+ ],
1945
+ "page_idx": 9
1946
+ },
1947
+ {
1948
+ "type": "text",
1949
+ "text": "References",
1950
+ "text_level": 1,
1951
+ "bbox": [
1952
+ 114,
1953
+ 460,
1954
+ 213,
1955
+ 475
1956
+ ],
1957
+ "page_idx": 9
1958
+ },
1959
+ {
1960
+ "type": "list",
1961
+ "sub_type": "ref_text",
1962
+ "list_items": [
1963
+ "Arturo Argueta and David Chiang. 2017. Decoding with finite-state transducers on GPUs. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1044-1052, Valencia, Spain. Association for Computational Linguistics.",
1964
+ "Leonard E Baum. 1972. An inequality and associated maximization technique in statistical estimation for probabilistic functions of markov processes. Inequalities, 3(1):1-8.",
1965
+ "Jesse A. Berlin, Susan C. Glasser, and Susan S. Ellenberg. 2008. Adverse event detection in drug development: Recommendations and obligations beyond phase 3. American Journal of Public Health, 98(8):1366-1371. PMID: 18556607.",
1966
+ "Dimitri P Bertsekas. 1999. Nonlinear programming. Athena Scientific Belmont.",
1967
+ "Mathieu Blondel, André F.T. Martins, and Vlad Niculae. 2020. Learning with Fenchel-Young losses. Journal of Machine Learning Research, 21(35):1-69.",
1968
+ "N. Chinchor and P. Robinson. 1998. Appendix E: MUC-7 named entity task definition (version 3.5). In Seventh Message Understanding Conference (MUC-7): Proceedings of a Conference Held in Fairfax, Virginia, April 29 - May 1, 1998.",
1969
+ "Preciosa M Coloma, Gianluca Trifiro, Vaishali Patadia, and Miriam Sturkenboom. 2013. Postmarketing safety surveillance: where does signal detection using"
1970
+ ],
1971
+ "bbox": [
1972
+ 115,
1973
+ 483,
1974
+ 489,
1975
+ 920
1976
+ ],
1977
+ "page_idx": 9
1978
+ },
1979
+ {
1980
+ "type": "list",
1981
+ "sub_type": "ref_text",
1982
+ "list_items": [
1983
+ "electronic healthcare records fit into the big picture? Drug safety, 36:183-197.",
1984
+ "Timothy Cour, Ben Sapp, and Ben Taskar. 2011. Learning from partial labels. Journal of Machine Learning Research, 12(42):1501-1536.",
1985
+ "Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2020. An effective transition-based model for discontinuous NER. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5860-5870, Online. Association for Computational Linguistics.",
1986
+ "Xiang Dai, Sarvnaz Karimi, and Cecile Paris. 2017. Medication and adverse event extraction from noisy text. In Proceedings of the Australasian Language Technology Association Workshop 2017, pages 79-87, Brisbane, Australia.",
1987
+ "John M. Danskin. 1966. The theory of max-min, with applications. SIAM Journal on Applied Mathematics, 14(4):641-664.",
1988
+ "Samuel Eilenberg. 1974. Automata, languages, and machines. Academic press.",
1989
+ "Hao Fei, Donghong Ji, Bobo Li, Yijiang Liu, Yafeng Ren, and Fei Li. 2021. Rethinking boundaries: End-to-end recognition of discontinuous mentions with pointer networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12785-12793.",
1990
+ "G.D. Forney. 1973. The Viterbi algorithm. Proceedings of the IEEE, 61(3):268-278.",
1991
+ "Xianping Ge. 2002. Segmental semi-markov models and applications to sequence analysis.",
1992
+ "Joshua Goodman. 1999. Semiring parsing. Computational Linguistics, 25(4):573-606.",
1993
+ "Yves Grandvalet and Yoshua Bengio. 2004. Learning from partial labels with minimum entropy. Centre interuniversitaire de recherche en analyse des organisations (CIRANO).",
1994
+ "Ralph Grishman and Beth Sundheim. 1996. Message Understanding Conference- 6: A brief history. In COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics.",
1995
+ "Hossein Hadian, Hossein Sameti, Daniel Povey, and Sanjeev Khudanpur. 2018. End-to-end speech recognition using lattice-free MMI. In *Interspeech*, pages 12-16.",
1996
+ "Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021a. Debertav3: Improving deberta using electra-style pretraining with gradient-disentangled embedding sharing. Preprint, arXiv:2111.09543.",
1997
+ "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021b. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations."
1998
+ ],
1999
+ "bbox": [
2000
+ 510,
2001
+ 85,
2002
+ 882,
2003
+ 920
2004
+ ],
2005
+ "page_idx": 9
2006
+ },
2007
+ {
2008
+ "type": "page_number",
2009
+ "text": "19515",
2010
+ "bbox": [
2011
+ 477,
2012
+ 928,
2013
+ 524,
2014
+ 940
2015
+ ],
2016
+ "page_idx": 9
2017
+ },
2018
+ {
2019
+ "type": "list",
2020
+ "sub_type": "ref_text",
2021
+ "list_items": [
2022
+ "John E Hopcroft, Rajeev Motwani, and Jeffrey D Ullman. 2001. Introduction to automata theory, languages, and computation. *Acm Sigact News*, 32(1):60-65.",
2023
+ "Jacques Janssen and Nikolaos Limnios. 1999. SemiMarkov models and applications. Kluwer Academic.",
2024
+ "Rong Jin and Zoubin Ghahramani. 2002. Learning with multiple labels. In Advances in Neural Information Processing Systems, volume 15. MIT Press.",
2025
+ "Stephan Kanthak and Hermann Ney. 2004. FSA: An efficient and flexible $\\mathrm{C + + }$ toolkit for finite state automata using on-demand computation. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 510- 517, Barcelona, Spain.",
2026
+ "Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedical informatics, 55:73-81.",
2027
+ "Lauri Karttuten, Jean-Pierre Chanod, Gregory Grefenstette, and Anne Schille. 1996. Regular expressions for language engineering. Natural Language Engineering, 2(4):305-328.",
2028
+ "Kimmo Koskenniemi. 1990. Finite-state parsing and disambiguation. In COLING 1990 Volume 2: Papers presented to the 13th International Conference on Computational Linguistics.",
2029
+ "Kimmo Koskenniemi, Pasi Tapanainen, and Atro Voutilainen. 1992. Compiling and using finite-state syntactic rules. In COLING 1992 Volume 1: The 14th International Conference on Computational Linguistics.",
2030
+ "George Lakoff and Stanley Peters. 1969. Phrasal conjunction and symmetric predicates. modern studies in english: Readings in transformational grammar.",
2031
+ "Winfried Lechner. 2000. Conjunction reduction in subordinate structures. In North East Linguistics Society, volume 30, page 5.",
2032
+ "Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, and Fei Li. 2022. Unified named entity recognition as word-word relation classification. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10):10965-10973.",
2033
+ "Chu-Cheng Lin, Hao Zhu, Matthew R. Gormley, and Jason Eisner. 2019. Neural finite-state transducers: Beyond rational relations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 272-283, Minneapolis, Minnesota. Association for Computational Linguistics.",
2034
+ "Matilde Marcolli and Ryan Thorngren. 2014. Thermodynamic semirings. Journal of Noncommutative Geometry, 8(2):337-392."
2035
+ ],
2036
+ "bbox": [
2037
+ 115,
2038
+ 85,
2039
+ 489,
2040
+ 919
2041
+ ],
2042
+ "page_idx": 10
2043
+ },
2044
+ {
2045
+ "type": "list",
2046
+ "sub_type": "ref_text",
2047
+ "list_items": [
2048
+ "Arthur Mensch and Mathieu Blondel. 2018. Differentiable dynamic programming for structured prediction and attention. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3462-3471. PMLR.",
2049
+ "Alejandro Metke-Jimenez and Sarvnaz Karimi. 2016. Concept identification and normalisation for adverse drug event discovery in medical forums. In BMDID@ ISWC.",
2050
+ "Mehryar Mohri. 2009. Weighted Automata Algorithms, pages 213-254. Springer Berlin Heidelberg, Berlin, Heidelberg.",
2051
+ "Mehryar Mohri, Fernando Pereira, and Michael Riley. 1996. Weighted automata in text and speech processing. In Proceedings of the ECAI 1996 Workshop.",
2052
+ "Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2021. On the stability of fine-tuning $\\{\\text{bert}\\}$ : Misconceptions, explanations, and strong baselines. In International Conference on Learning Representations.",
2053
+ "Danielle L Mowery, Sumithra Velupillai, Brett R South, Lee Christensen, David Martinez, Liadh Kelly, Lorraine Goeuriot, Noemie Elhadad, Sameer Pradhan, Guergana Savova, and Wendy W Chapman. Task 2: Share/clef ehealth evaluation lab 2014.",
2054
+ "Aldrian Obaja Muis and Wei Lu. 2016. Learning to recognize discontinuous entities. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 75-84, Austin, Texas. Association for Computational Linguistics.",
2055
+ "Radford M. Neal and Geoffrey E. Hinton. 1998. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in graphical models, pages 355-368. Springer.",
2056
+ "Nam Nguyen and Rich Caruana. 2008. Classification with partial labels. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 551-559.",
2057
+ "Sean Papay, Roman Klinger, and Sebastian Pado. 2022. Constraining linear-chain CRFs to regular languages. In International Conference on Learning Representations.",
2058
+ "Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vimal Manohar, Xingyu Na, Yiming Wang, and Sanjeev Khudanpur. 2016. Purely sequence-trained neural networks for ASR based on lattice-free MMI. In Interspeech, pages 2751-2755.",
2059
+ "Sameer Pradhan, Noemie Elhadad, Brett R South, David Martinez, Lee M Christensen, Amy Vogel, Hanna Suominen, Wendy W Chapman, and Guergana K Savova. Task 1: ShARE/CLEF eHealth evaluation lab 2013."
2060
+ ],
2061
+ "bbox": [
2062
+ 510,
2063
+ 85,
2064
+ 882,
2065
+ 917
2066
+ ],
2067
+ "page_idx": 10
2068
+ },
2069
+ {
2070
+ "type": "page_number",
2071
+ "text": "19516",
2072
+ "bbox": [
2073
+ 477,
2074
+ 928,
2075
+ 524,
2076
+ 940
2077
+ ],
2078
+ "page_idx": 10
2079
+ },
2080
+ {
2081
+ "type": "list",
2082
+ "sub_type": "ref_text",
2083
+ "list_items": [
2084
+ "Lance Ramshaw and Mitch Marcus. 1995. Text chunking using transformation-based learning. In Third Workshop on Very Large Corpora.",
2085
+ "Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neural context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 623-633, San Diego, California. Association for Computational Linguistics.",
2086
+ "Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 147-155, Boulder, Colorado. Association for Computational Linguistics.",
2087
+ "Alexander Rush. 2020. Torch-struct: Deep structured prediction library. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 335–342, Online. Association for Computational Linguistics.",
2088
+ "Sunita Sarawagi and William W Cohen. 2004. Semimarkov conditional random fields for information extraction. In Advances in Neural Information Processing Systems, volume 17. MIT Press.",
2089
+ "Buzhou Tang, Jianglu Hu, Xiaolong Wang, and Qingcai Chen. 2018. Recognizing continuous and discontinuous adverse drug reaction mentions from social media using LSTM-CRF. *Wireless Communications & Mobile Computing* (Online), 2018.",
2090
+ "Buzhou Tang, Yonghui Wu, Min Jiang, Joshua C Denny, and Hua Xu. 2013. Recognizing and encoding discorder concepts in clinical text using machine learning and vector space model. CLEF (Working Notes), 665.",
2091
+ "Pasi Tapanainen. 1997. Applying a Finite-State Intersection Grammar. In *Finite-State Language Processing*, The MIT Press.",
2092
+ "Roy Tromble and Jason Eisner. 2006. A fast finite-state relaxation method for enforcing global constraints on sequence decoding. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 423-430, New York City, USA. Association for Computational Linguistics.",
2093
+ "Martin J Wainwright, Michael I Jordan, et al. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305.",
2094
+ "Bailin Wang and Wei Lu. 2019. Combining spans into entities: A neural two-stage approach for recognizing discontinuous entities. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6216-6224, Hong Kong, China. Association for Computational Linguistics."
2095
+ ],
2096
+ "bbox": [
2097
+ 115,
2098
+ 85,
2099
+ 485,
2100
+ 920
2101
+ ],
2102
+ "page_idx": 11
2103
+ },
2104
+ {
2105
+ "type": "list",
2106
+ "sub_type": "ref_text",
2107
+ "list_items": [
2108
+ "Qing Wang, Haojie Jia, Wenfei Song, and Qi Li. 2023. CoRec: An easy approach for coordination recognition. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 15112-15120, Singapore. Association for Computational Linguistics.",
2109
+ "Yucheng Wang, Bowen Yu, Hongsong Zhu, Tingwen Liu, Nan Yu, and Limin Sun. 2021. Discontinuous named entity recognition as maximal clique discovery. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 764-774, Online. Association for Computational Linguistics.",
2110
+ "Christopher Wilder. 2018. Conjunction Reduction and Right-Node Raising. In *The Oxford Handbook of Ellipsis*. Oxford University Press."
2111
+ ],
2112
+ "bbox": [
2113
+ 510,
2114
+ 85,
2115
+ 880,
2116
+ 342
2117
+ ],
2118
+ "page_idx": 11
2119
+ },
2120
+ {
2121
+ "type": "page_number",
2122
+ "text": "19517",
2123
+ "bbox": [
2124
+ 477,
2125
+ 928,
2126
+ 524,
2127
+ 940
2128
+ ],
2129
+ "page_idx": 11
2130
+ },
2131
+ {
2132
+ "type": "text",
2133
+ "text": "A Training details",
2134
+ "text_level": 1,
2135
+ "bbox": [
2136
+ 115,
2137
+ 83,
2138
+ 290,
2139
+ 99
2140
+ ],
2141
+ "page_idx": 12
2142
+ },
2143
+ {
2144
+ "type": "text",
2145
+ "text": "The model is trained for 20 epochs using the cosine learning rate scheduler as implemented in the HuggingFace library. The maximum learning rate is fixed to $10^{-5}$ . The warmup ratio is $10\\%$ . We apply dropout with a probability of 0.5 to BERT's output. The gradient norm is clipped to 1. All parameters have a weight decay of 0.01. We use the Adam variant proposed by Mosbach et al. (2021).",
2146
+ "bbox": [
2147
+ 115,
2148
+ 109,
2149
+ 489,
2150
+ 237
2151
+ ],
2152
+ "page_idx": 12
2153
+ },
2154
+ {
2155
+ "type": "page_number",
2156
+ "text": "19518",
2157
+ "bbox": [
2158
+ 477,
2159
+ 927,
2160
+ 524,
2161
+ 940
2162
+ ],
2163
+ "page_idx": 12
2164
+ }
2165
+ ]
2024/A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition/471f11c4-935b-4778-a130-ca7d58097a15_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition/471f11c4-935b-4778-a130-ca7d58097a15_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:81f8df5ea3ca69b37c8e55adce349522d8fd4657300c58785e83189b399b2b73
3
+ size 416854
2024/A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition/full.md ADDED
@@ -0,0 +1,420 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition
2
+
3
+ Caio Corro
4
+ INSA Rennes, IRISA, Inria, CNRS, Université de Rennes
5
+ caio.corro@irisa.fr
6
+
7
+ # Abstract
8
+
9
+ We introduce a novel tagging scheme for discontinuous named entity recognition based on an explicit description of the inner structure of discontinuous mentions. We rely on a weighted finite state automaton for both marginal and maximum a posteriori inference. As such, our method is sound in the sense that (1) well-formedness of predicted tag sequences is ensured via the automaton structure and (2) there is an unambiguous mapping between well-formed sequences of tags and (discontinuous) mentions. We evaluate our approach on three English datasets in the biomedical domain, and report results comparable to the state of the art while having a much simpler and faster model.
10
+
11
+ # 1 Introduction
12
+
13
+ Named-entity recognition (NER) is a fundamental natural language processing (NLP) task that aims at identifying mentions of named entities in texts. These mentions may for example refer to persons, organizations, locations or even dates, among others (Grishman and Sundheim, 1996; Chinchor and Robinson, 1998). Over the years, this task has been extensively studied by the community, with contributions including decoding algorithms, neural network architectures, loss functions and methods for learning in different data availability situations, inter alia.
14
+
15
+ There exist several variants of the NER problem, among which the most studied are flat NER and nested NER. The most common method for the flat case is BIO tagging (Ramshaw and Marcus, 1995), where each word in a sentence is tagged depending on whether it is the beginning of a mention (B), inside a mention (I) or outside a mention (O). This tagging scheme can be augmented to disambiguate types, e.g. B-LOC and B-PER. An important benefit of BIO tagging is that prediction has a linear time
16
+
17
+ complexity in the input length $^2$ using the Viterbi algorithm (Forney, 1973), contrary to concurrent approaches like semi-Markov models that have a quadratic time-complexity (Janssen and Limnios, 1999; Ge, 2002; Sarawagi and Cohen, 2004).
18
+
19
+ A less studied task is discontinuous NER, where mentions are allowed to span discontinuous sequences of words. This problem is especially important for biomedical NLP. For example, pharmacovigilance aims to detect adverse drug reactions after a product is distributed in the market via automatic analysis of medical reports or social media (Berlin et al., 2008; Coloma et al., 2013). Mentions of adverse drug reactions naturally occur in noncontiguous sequences, for example the sentence "The pain I was experiencing around the hip joints was incredible" contains the mention "pain hip joints" with a five word gap in the middle.
20
+
21
+ Several methods for discontinuous NER have been proposed in the literature, including transition models (Dai et al., 2020) and other structured prediction approaches (Wang et al., 2021; Fei et al., 2021; Li et al., 2022). Unfortunately, they are more costly than BIO tagging and require specialized neural network architectures. There have also been attempts to propose tagging schemes for discontinuous NER (Tang et al., 2013, 2018; Metke-Jimenez and Karimi, 2016; Muis and Lu, 2016), but they all exhibit structural ambiguity (see Section 5).
22
+
23
+ In this work, we propose a novel tagging scheme for discontinuous NER that exploits the inner structure of discontinuous mentions. Contrary to previous attempts, our approach is sound in the sense that: (1) there is no encoding ambiguity between sets of mentions and sequences of tags (i.e. there is a one-to-one mapping between the two representations); and (2) our prediction algorithm is con
24
+
25
+ strained to predict only well-formed sequences of tags (i.e. we can always reconstruct a set of mentions from a predicted tag sequence). To ensure well-formedness of predictions, we propose an algorithm based on inference in a weighted finite-state automaton. Using our approach, the time complexity of maximum a posteriori inference for prediction is linear in the length of the input. Moreover, our algorithm can be very efficiently implemented on GPU for batched inference (Argueta and Chiang, 2017; Rush, 2020).
26
+
27
+ Our contributions can be summarized as follows:
28
+
29
+ - We propose to decompose discontinuous mentions in a new two-layer representation;
30
+ - We propose a novel tagging scheme for this representation together with a linear-time tagging algorithm that ensures well-formedness of predictions;
31
+ - We explain how labels in the inner structures can be inferred during training when the information is not available in the data;
32
+ - We experiment on three English datasets and report competitive results while having a much faster model.
33
+
34
+ Our implementation is publicly available.3 Importantly, our decoding algorithm and all our loss functions can be used as drop-in replacements in any BIO tagger. As such, any future research in the BIO tagging field may also be evaluated on discontinuous NER at no extra cost.
35
+
36
+ # 2 Reduction to Word Tagging
37
+
38
+ In this section, we explain how we map discontinuous mentions into a two-layer representation that allows us to derive a new tagging scheme. Although this transformation is generic, for ease of exposition we illustrate it on the particular case of adverse drug reactions.
39
+
40
+ # 2.1 Inner Structure of Mentions
41
+
42
+ Discontinuous mentions of adverse drug reactions (ADR) and disorders in biomedical NER mainly result from two linguistic phenomena. Firstly, mentions may be expressed as the combination of two non-contiguous syntactic constituents, due to linguistic word order rules. In the following example of an ADR, the discontinuity
43
+
44
+ is caused by the verb position constraint in English:
45
+
46
+ $$
47
+ \begin{array}{l} \text {ADR} \\ \text {toes are painful} \end{array} \tag {1}
48
+ $$
49
+
50
+ Secondly, many languages allow alternative sentential structures for coordinations, including construction based on deletion operations. For example, consider the two following sentences:
51
+
52
+ $$
53
+ \begin{array}{l} \frac {\text {ADR}}{\text {pain in arms}} \ \text {and} \ \frac {\text {ADR}}{\text {pain in shoulders}} \tag {2} \\ \begin{array}{c} \text {ADR} \\ \text {(3)} \quad \frac {\text {pain in arms and shoulders}}{\text {ADR}} \end{array} \end{array}
54
+ $$
55
+
56
+ The repeated element is eliminated in the second one, leading to the presence of a discontinuous mention, a phenomenon called coordination reduction (Lakoff and Peters, 1969). Although the underlying linguistic structures are different, we will treat both cases in the same way.
57
+
58
+ Change of representation. In practice, discontinuous mentions exhibit an inner structure. For example, a discontinuous ADR can be decomposed into a body part and an event. As such, we propose to transform discontinuous mentions into a two-layer representation:
59
+
60
+ - Upper layers identify sets of mentions;
61
+ - Lower layers identify typed components.
62
+
63
+ We restrict the number of types for components to be equal to two. The previous example is converted as follows:
64
+
65
+ $$
66
+ \begin{array}{c} \text {ADR(S)} \\ \text {(4)} \quad \text {EVENT} \;\; \text {PART} \;\; \text {PART} \\ \text {pain in arms and shoulders} \end{array}
67
+ $$
68
+
69
+ Note that the two mentions do not explicitly appear in this new representation. Nevertheless, the opposite transformation is trivial: to rebuild all discontinuous mentions in a discontinuous set, we simply take the Cartesian product between the two sets of typed components, e.g.
70
+
71
+ $$
72
+ \underbrace {\{\text {pain in} \}} _ {\substack{\text {Components} \\ \text {typed EVENT}}} \times \underbrace {\left\{ \begin{array}{c} \text {arms,} \\ \text {shoulders} \end{array} \right\}} _ {\substack{\text {Components} \\ \text {typed PART}}} \mapsto \underbrace {\left\{ \begin{array}{c} \text {pain in arms,} \\ \text {pain in shoulders} \end{array} \right\}} _ {\substack{\text {Reconstructed} \\ \text {discontinuous mentions}}}.
73
+ $$
74
+
75
+ Note that this can result in some of the mentions being continuous, as in Example (4).
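+
+ As a minimal illustration of this reconstruction step, the sketch below rebuilds mentions from a set of typed components with a Cartesian product; the function name, the span encoding and the EVENT/PART type strings are assumptions of this example, not the actual implementation.
+
+ ```python
+ # Illustrative sketch only: rebuild mentions from typed components via a
+ # Cartesian product. Spans are encoded as tuples of word indices.
+ from itertools import product
+
+ def rebuild_mentions(components):
+     # components: list of (type, span) pairs, e.g. ("EVENT", (0, 1))
+     events = [span for typ, span in components if typ == "EVENT"]
+     parts = [span for typ, span in components if typ == "PART"]
+     # every EVENT component is combined with every PART component
+     return [tuple(sorted(e + p)) for e, p in product(events, parts)]
+
+ # "pain in arms and shoulders": EVENT = "pain in" (words 0-1),
+ # PART = "arms" (word 2), PART = "shoulders" (word 4)
+ print(rebuild_mentions([("EVENT", (0, 1)), ("PART", (2,)), ("PART", (4,))]))
+ # [(0, 1, 2), (0, 1, 4)]  ->  "pain in arms" (continuous), "pain in shoulders" (discontinuous)
+ ```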
76
+
77
+ One obvious issue is that component types are not annotated in datasets. We consider two solutions to tackle this challenge. First, we can use unsupervised and weakly-supervised learning
78
+
79
+ methods to infer component types during training, as explained in Section 4. Second, we can use component types to mark whether they share the same type as the leftmost one, no matter whether they refer to a body part or an event. In this setting, Examples (1) and (3) are annotated as follows:
80
+
81
+ (5) ADR(S): toes are painful, with "toes" labeled FIRST and "painful" labeled OTHER
+ (6) ADR(S): pain in arms and shoulders, with "pain in" labeled FIRST and "arms" and "shoulders" labeled OTHER
84
+
85
+ In other words, component types do not convey semantic information, only structural information.
86
+
87
+ Continuous mentions. There exist two forms of continuous mentions. First, continuous mentions that share one or more words with at least one other mention. In this case, we split the mention and process it as described above. Second, there are continuous mentions that do not share any word with other mentions, see Example (2). In principle, we could also transform these mentions into the two-layer representation. However, not only do we lack information about component types, but we do not even know where to split them! In Example (3), we know that "pain in arms" should be split into "pain in" and "arms" as the first two words are shared with another mention. But for the two continuous mentions in Example (2), we do not have such information. Therefore, in this case, we treat them as standard continuous mentions.
88
+
89
+ Nested NER. Although Dai et al. (2020) suggested the use of nested NER models for discontinuous NER using a similar yet different representation, we argue that the two problems are different:
90
+
91
+ - The structures that we consider are not recursive, contrary to nested mentions, e.g. "[The president of [the United States of [America]]]";
92
+ - The components are highly constrained, e.g. a set of ADRs must contain at least one body part and one event;
93
+ - The span of a set of mentions is fixed by its components: it begins (resp. ends) at the same word as its leftmost (resp. rightmost) component.
94
+
95
+ Therefore, we instead propose a tagging scheme tailored to discontinuous NER.
96
+
97
+ Beyond the biomedical domain. Our approach can be applied to other domains, e.g. we can transform the following mentions into our representation by differentiating first and last names:
98
+
99
+ (7) Meg and Jack White, with two PER mentions: the discontinuous "Meg ... White" and the continuous "Jack White"
100
+
101
+ Unfortunately, these discontinuities have not been annotated in standard datasets.4
102
+
103
+ # 2.2 Tagging Scheme
104
+
105
+ We now explain how we transform the two-layer structure into a sequence of tags. Without loss of generality, we assume that mentions are untyped in the original corpus, as practical datasets for discontinuous NER contain a single mention type.<sup>5</sup> Moreover, we define the component types as X and Y (e.g. body part and event in previous examples) to simplify notation and treat in a similar way semantic and structural component types.
106
+
107
+ Our approach requires 10 tags. First, the 3 tags CB, CI and O are used in a similar way to BIO tags. CB and CI identify the first and following words of a continuous mention, respectively. The tag O marks words that are neither part of a continuous mention nor in the span of a set of mentions. In Example (2), the word "and" is tagged with O, whereas in Example (3) it is not. This is because in the second example, after transformation into the two-layer representation, the word "and" appears inside a set of mentions, see Example (4).
108
+
109
+ Second, tags that identify sets of mentions and their components are of the form $* - *$ where:
110
+
111
+ - the left-hand side is used to identify the span of the set of mentions, and can therefore take values DB (first word of the span) and DI (other words of the span);
112
+ - the right-hand side is used to identify typed components, and can take values Bx, Ix, By, Iy and O.
113
+
114
+ The 7 tags used for discontinuous mentions are DB-Bx, DB-By, DI-Bx, DI-By, DI-Ix, DI-Iy
115
+
116
+ ![](images/c22fba41f813675d6d7ee36857c68e3cdb3f61b61a651971d1ba1e284a892da3.jpg)
117
+ Figure 1: (Top) Sentence with its original annotation. It contains two continuous mentions ("Chronic fatigue" and "stiff knees") and three discontinuous mentions ("swollen knees", "swollen left elbows" and "stiff left elbows"). (Bottom) Sentence annotated with our two-layer representation and the associated tag sequence.
118
+
119
+ and DI-O. Note that the leftmost word in a set of mentions must also be the beginning of a component, so the following combinations are not part of the tagset: DB-Ix, DB-Iy and DB-O. Figure 1 shows an example of tag conversion.
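+
+ To make the conversion concrete, here is a small sketch that encodes a two-layer annotation into this tag sequence, assuming component spans and their x/y types are already known; it only illustrates the scheme and is not the converter used for the experiments.
+
+ ```python
+ # Illustrative encoder from the two-layer representation to the tag sequence.
+ def encode(n_words, continuous, sets_of_mentions):
+     # continuous: list of (start, end) spans (inclusive);
+     # sets_of_mentions: list of component lists, each component being
+     # (typ, start, end) with typ in {"x", "y"} and inclusive spans.
+     tags = ["O"] * n_words
+     for start, end in continuous:
+         tags[start] = "CB"
+         for i in range(start + 1, end + 1):
+             tags[i] = "CI"
+     for components in sets_of_mentions:
+         left = min(start for _, start, _ in components)
+         right = max(end for _, _, end in components)
+         for i in range(left, right + 1):
+             tags[i] = "DI-O"  # inside the span of the set, outside any component
+         for typ, start, end in components:
+             tags[start] = ("DB-B" if start == left else "DI-B") + typ
+             for i in range(start + 1, end + 1):
+                 tags[i] = "DI-I" + typ
+     return tags
+
+ # Example (4): "pain in arms and shoulders", with EVENT as type x and PART as type y
+ print(encode(5, [], [[("x", 0, 1), ("y", 2, 2), ("y", 4, 4)]]))
+ # ['DB-Bx', 'DI-Ix', 'DI-By', 'DI-O', 'DI-By']
+ ```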
120
+
121
+ Importantly, any sequence of tags is well-formed if and only if:
122
+
123
+ 1. All CI tags are preceded by either CB or CI, as standard in BIO tagging;
+ 2. All DI-\* tags must be preceded by either DB-\* or DI-\*;
+ 3. All $*$-Ix tags must be preceded by either $*$-Bx or $*$-Ix (and similarly for the Y type);
+ 4. A set of mentions must contain at least one component typed X and one typed Y, that is it must contain at least one word tagged with $*$-Bx and one with $*$-By;
+ 5. A set of mentions must not yield a single continuous mention after reconstruction, i.e. a tag sequence such as O DB-Bx DI-Ix DI-By for "some pain in arms and" is forbidden, as it would introduce ambiguity in the encoding of continuous mentions;
131
+
132
+ 6. A set of mentions cannot end with the tag DI-O, as this would result in the span of a set of mentions that does not end with the same word as its rightmost component.<sup>6</sup>
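+
+ As a complement to the constraints above, the sketch below checks constraints 1 to 4 and 6 on a candidate tag sequence (constraint 5 requires the reconstruction step and is omitted); it is only illustrative, assumes the tag spellings used in this section, and is not the automaton-based check described in Section 3.
+
+ ```python
+ # Minimal well-formedness check over a tag sequence (constraints 1-4 and 6).
+ def is_well_formed(tags):
+     prev = "O"
+     for tag in tags:
+         if tag == "CI" and prev not in ("CB", "CI"):
+             return False  # constraint 1
+         if tag.startswith("DI-") and not prev.startswith(("DB-", "DI-")):
+             return False  # constraint 2
+         if tag.endswith("-Ix") and not prev.endswith(("-Bx", "-Ix")):
+             return False  # constraint 3, type x
+         if tag.endswith("-Iy") and not prev.endswith(("-By", "-Iy")):
+             return False  # constraint 3, type y
+         prev = tag
+     run = []  # current maximal run of DB-*/DI-* tags, i.e. one set of mentions
+     for tag in list(tags) + ["O"]:
+         if tag.startswith(("DB-", "DI-")):
+             run.append(tag)
+         elif run:
+             if not any(t.endswith("-Bx") for t in run):
+                 return False  # constraint 4: at least one x component
+             if not any(t.endswith("-By") for t in run):
+                 return False  # constraint 4: at least one y component
+             if run[-1] == "DI-O":
+                 return False  # constraint 6
+             run = []
+     return True
+
+ print(is_well_formed(["O", "DB-Bx", "DI-Ix", "DI-O", "DI-By"]))  # True
+ print(is_well_formed(["O", "DB-Bx", "DI-Ix", "DI-O"]))           # False (no y component)
+ ```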
133
+
134
+ # 3 Decoding Algorithm
135
+
136
+ Without loss of generality, we assume all sentences have $n$ words. Let $T$ be the tagset, $X$ be the set of sentences and $Y$ the set of well-formed tag sequences. We represent a sequence of tags $\mathbf{y} \in Y$ as a binary vector with $n|T|$ entries, where each entry is associated with a tag and a word, i.e. $\mathbf{y} \in \{0,1\}^{n|T|}$ . If the value of an entry is 1 (resp. 0), the associated tag is assigned to the associated word (resp. not assigned). Note that $Y \subset \{0,1\}^{n|T|}$ is a strict subset of all such vectors, as each word must be assigned exactly one tag and that the resulting tag sequence must satisfy the constraints described in Section 2.2.
137
+
138
+ Let $f_{\theta}: X \to \mathbb{R}^{n|T|}$ be a neural network parameterized by $\theta$ . We define the probability of a tag sequence $\pmb{y} \in Y$ given the input $\pmb{x}$ as a Boltzmann-Gibbs distribution (or softmax over structures):
139
+
140
+ $$
141
+ p _ {\theta} (\boldsymbol {y} | \boldsymbol {x}) = \exp \left(\langle \boldsymbol {y}, f _ {\theta} (\boldsymbol {x}) \rangle - A _ {Y} (f _ {\theta} (\boldsymbol {x}))\right),
142
+ $$
143
+
144
+ where $\langle \cdot ,\cdot \rangle$ denotes the dot product and $A_{Y}$ is the log-partition function ensuring that the distribution is correctly normalized:
145
+
146
+ $$
147
+ A _ {Y} (\boldsymbol {w}) = \log \sum_ {\boldsymbol {y} \in Y} \exp \langle \boldsymbol {y}, \boldsymbol {w} \rangle . \tag {1}
148
+ $$
149
+
150
+ Computing $A_{Y}(\boldsymbol{w})$ is called marginal inference due to its link with marginal probabilities (Wainwright et al., 2008). Computing the most probable output is reduced to computing:
151
+
152
+ $$
153
+ \widehat {\boldsymbol {y}} _ {\theta} (\boldsymbol {x}) = \underset {\boldsymbol {y} \in Y} {\arg \max } \left\langle \boldsymbol {y}, f _ {\theta} (\boldsymbol {x}) \right\rangle , \tag {2}
154
+ $$
155
+
156
+ called maximum a posteriori (MAP) inference.
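+
+ For intuition only, the brute-force sketch below computes Equation (1) and Equation (2) by enumerating the whole search space on a toy input; it assumes per-word, per-tag scores and a predicate describing $Y$ (for instance a well-formedness check like the one sketched in Section 2.2), and is of course unusable beyond a few words.
+
+ ```python
+ # Brute-force illustration of the log-partition (Eq. 1) and MAP (Eq. 2).
+ import itertools
+ import math
+
+ def brute_force_inference(scores, well_formed):
+     # scores: one dict tag -> weight per word (the entries of f_theta(x));
+     # well_formed: predicate over tag sequences defining the set Y.
+     tags = list(scores[0].keys())
+     seqs = [s for s in itertools.product(tags, repeat=len(scores)) if well_formed(s)]
+     weights = [sum(scores[i][t] for i, t in enumerate(s)) for s in seqs]
+     m = max(weights)
+     log_partition = m + math.log(sum(math.exp(w - m) for w in weights))  # Equation (1)
+     best_sequence = seqs[weights.index(m)]                               # Equation (2)
+     return log_partition, best_sequence
+ ```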
157
+
158
+ In practice, we need to compute the term in Equation (1) for training the model and the term in Equation (2) for prediction. The difficulty stems from the restriction (in the sum and in the arg max search space) to the set of well-formed outputs $Y$ . We follow a long tradition in NLP (Koskenniemi, 1990; Mohri et al., 1996; Karttuten et al., 1996; Kanthak and Ney, 2004; Tromble and Eisner, 2006; Rastogi et al., 2016; Lin et al., 2019; Papay et al., 2022, inter alia) and rely on a finite-state automaton to solve these inference problems.
159
+
160
+ # 3.1 Finite-State Automata
161
+
162
+ Definitions. Weighted Finite State Automata (WFSA) generalize FSAs (Eilenberg, 1974) by adding weights to transitions. Formally, a WFSA over $\mathbb{R}$ is a 5-tuple $(\Sigma, Q, E, i, F)$ where:
163
+
164
+ - $\Sigma$ is a finite alphabet with $\epsilon \notin \Sigma$ ;
165
+ - $Q$ is the set of states;
166
+ - $E \subseteq Q \times \Sigma^{*} \times \mathbb{R} \times Q$ is the set of weighted transitions, where $(q, \sigma, w, r) \in E$ is a transition from state $q$ to state $r$ emitting symbol(s) $\sigma$ with weight $w$ ;
167
+ - $i \in Q$ is an initial state and $F \subseteq Q$ are final states.
168
+
169
+ Symbol $\epsilon$ is used for transitions that emit nothing. A WFSA is $\epsilon$ -free if there is no $\epsilon$ -transition. A valid path is a path starting at $i$ and ending at any state in $F$ . A path emits a sequence of symbols, and has a weight equal to the sum of the transition weights it contains. The language of a WFSA is the set of emissions along all possible valid paths.
170
+
171
+ Algorithms. Given an acyclic WFSA, the path of maximum weight, Equation (2), and the log-sum-exp of all valid paths, Equation (1), can be computed using variants of the Viterbi algorithm (Forney, 1973) and the Forward algorithm (Baum, 1972), respectively. These algorithms are in fact identical, but defined over different semirings (Goodman, 1999): the tropical semiring for the Viterbi and the thermodynamic semiring (Marcolli and Thorngren, 2014) for the Forward. We refer to (Mohri, 2009, Section 3) for an in-depth introduction. The time complexity of both algorithms is $\mathcal{O}(|E|)$ if a topological ordering of states is known.
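+
+ The sketch below makes the semiring view concrete: the same accumulation over an acyclic WFSA (states assumed to be numbered in topological order, with state 0 initial and without incoming transitions) computes either the best path weight or the log-sum-exp of all path weights, depending on the aggregation operator. The encoding of transitions as plain tuples is an assumption of this example.
+
+ ```python
+ # Forward (log-sum-exp semiring) and Viterbi (max/tropical semiring) share
+ # the same recursion; only the aggregation operator changes.
+ import math
+
+ def logsumexp(weights):
+     m = max(weights)
+     return m + math.log(sum(math.exp(w - m) for w in weights))
+
+ def path_aggregate(n_states, transitions, finals, viterbi=False):
+     # transitions: list of (source, destination, weight) tuples;
+     # states 0 .. n_states-1 are in topological order, state 0 is initial.
+     agg = max if viterbi else logsumexp
+     incoming = [[] for _ in range(n_states)]
+     for src, dst, w in transitions:
+         incoming[dst].append((src, w))
+     score = [-math.inf] * n_states
+     score[0] = 0.0
+     for q in range(1, n_states):
+         ws = [score[src] + w for src, w in incoming[q] if score[src] > -math.inf]
+         if ws:
+             score[q] = agg(ws)
+     return agg([score[f] for f in finals])
+
+ # Two paths from state 0 to state 2, with weights 1.0 + 0.5 and 2.0:
+ transitions = [(0, 1, 1.0), (1, 2, 0.5), (0, 2, 2.0)]
+ print(path_aggregate(3, transitions, finals=[2], viterbi=True))   # 2.0 (Viterbi)
+ print(path_aggregate(3, transitions, finals=[2], viterbi=False))  # ~2.47 (Forward)
+ ```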
172
+
173
+ Application to sequence tagging. We follow previous work and use the intersection of two WFSAs to constrain tag sequences (Koskenniemi, 1990; Koskenniemi et al., 1992). The grammar automaton $\mathcal{G} \triangleq (T, Q, E, i, F)$ is a cyclic WFSA whose language is the set of all well-formed tag sequences (of any length). We assume $G$ is $\epsilon$-free and deterministic.<sup>7</sup> Without loss of generality, we fix all transition weights to 0. The sentence automaton $\mathcal{S} \triangleq (T, Q', E', i', F')$ is an acyclic WFSA that represents all possible (not necessarily valid) analyses for a given sentence of $n$ words. States are $Q' \triangleq \{0, \dots, n\}$ and transitions are:
174
+
175
+ $$
176
+ E ^ {\prime} \triangleq \left\{\left(i - 1, t, w _ {(i, t)}, i\right) \mid i \in \{1 \dots n \} \wedge t \in T \right\}
177
+ $$
178
+
179
+ where $w_{(i,t)}$ is the weight associated with tagging the word at position $i$ with tag $t$ . Initial and final states are $i' \triangleq 0$ and $F' \triangleq \{n\}$ . This WFSA contains $n|T|$ transitions, and each transition corresponds to tagging a given word with a given tag. By construction, it is always deterministic and $\epsilon$ -free.
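+
+ As a small illustration (a sketch with invented tag names and weights, not the paper's data), the sentence automaton can be built directly from an $n \times |T|$ matrix of tag weights:
+
+ ```python
+ def sentence_automaton(tag_weights, tags):
+     """Build S for a sentence of n words: states 0..n and one transition per
+     (position, tag) pair, carrying the tagger's weight for that choice.
+     tag_weights[i][t] is the score of tagging word i (0-based) with tag t."""
+     n = len(tag_weights)
+     transitions = [
+         (i, tags[t], tag_weights[i][t], i + 1)   # (source, symbol, weight, target)
+         for i in range(n)
+         for t in range(len(tags))
+     ]
+     return {"states": list(range(n + 1)), "transitions": transitions,
+             "initial": 0, "finals": {n}}
+
+ # 2 words and 2 tags give n|T| = 4 transitions over states {0, 1, 2}, final state 2.
+ S = sentence_automaton([[0.2, 1.3], [0.9, -0.4]], tags=["B", "O"])
+ ```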
180
+
181
+ We denote by $G \cap S$ the intersection of $G$ and $S$ (Hopcroft et al., 2001, Section 4.2.1), composed of states $Q'' \triangleq Q \times Q'$ , transitions
182
+
183
+ $$
184
+ E ^ {\prime \prime} \triangleq \left\{\left((i - 1, p), t, w _ {(i, t)}, (i, q)\right) \middle | \begin{array}{l} i \in \{1 \dots n \} \wedge \\ (p, t, 0, q) \in E \end{array} \right\},
185
+ $$
186
+
187
+ initial state $i'' \triangleq (i, i')$ and final states $F'' \triangleq F \times F'$ . Then, all valid paths in $G \cap S$ are well-formed sequences of tags for the input sentence of length $n$ . We can then simply run the Viterbi or the Forward algorithm on $G \cap S$ to compute Equations (1) and (2). Note that $|E''| \propto n$ , therefore the time complexity is linear in the number of words.
188
+
189
+ We refer the reader to (Tapanainen, 1997) for an introduction to this sequence tagging approach.
190
+
191
+ # 3.2 Grammar Automaton
192
+
193
+ The grammar automaton used to constrain predictions to well-formed sequences of tags is shown in Figure 2. We present the automaton with $\epsilon$ -transitions for the sake of clarity, but they can be removed. We omit weights as they are all zero. States 1 and 2 recognize valid sequences of CB, CI and O tags. Moreover, the structure of the WFSA states recognizing discontinuous mentions is symmetric: the left-hand (resp. right-hand) side recognizes discontinuous mentions whose leftmost component is
194
+
195
+ ![](images/ce2f1abf2b0712734598a632bb7a93b42e857a5af954f1c25c84be09f31c94e5.jpg)
196
+ Figure 2: The grammar automaton we propose for discontinuous named-entity recognition.
197
+
198
+ typed X (resp. Y). Therefore we present only the left-hand side.
199
+
200
+ Transition (1, DB-Bx, 7) starts the recognition of a set of mentions whose leftmost component is typed x. The self-loop on state 7 recognizes the following words of the first component. Next, we need to check that the inner structure of the set of mentions is well-formed. On the one hand, states 5 and 6 allow the recognition of following x components and DI-O tags, until the first y component is recognized via transition (5, DI-BY, 4). On the other hand, transition (7, DB-Bx, 8) starts the recognition of a component typed y that directly follows the first component. Therefore, we need to check that there is "something else" in the set of mentions, otherwise the sequence of tags could lead to an ambiguity in the encoding of continuous mentions. We ensure this via transition $(8,\epsilon ,3)$ , which requires the generation of another component before reaching the final state. Finally, states 3, 4 and 9 recognize extra x and y components in the set of mentions.
201
+
202
+ As such, the language of our grammar automaton is the set of well-formed tag sequences as described in Section 2.2. To use our grammar automaton, we need to remove $\epsilon$ -transitions. The resulting WFSA has 22 states. $^{8}$ In the case of structural component types, we can simply remove transition (1, DB-BY, 12) to constrain the leftmost mention to be labeled x.
203
+
204
+ Practical implementation. The intersection of
205
+
206
+ the grammar and the sentence automata does not result in a homogeneous Markov chain, as transition weights correspond to tag weights for the next word and are therefore different at each step. However, the resulting automaton always has a simple time-invariant structure. In terms of implementation, this reduces to applying a mask at each step, and both the Viterbi and the Forward algorithms can be implemented using basic differentiable tensor operations. For MAP inference, we compute the maximum path weight and then rely on backpropagation to retrieve the corresponding sequence of tags (Mensch and Blondel, 2018, Section 2.1).
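+
+ The PyTorch sketch below illustrates this masked, time-invariant recursion under simplifying assumptions that are not the paper's (a dense 0/1 tensor `allowed` over (grammar state, tag, next state) triples, state 0 taken as the initial state, and every grammar state treated as final); MAP decoding then reads the predicted tags off the gradient of the maximum score, following Mensch and Blondel (2018).
+
+ ```python
+ import torch
+
+ def constrained_scores(tag_weights, allowed, semiring_sum):
+     """tag_weights: (n, T) emission scores; allowed: (S, T, S) 0/1 mask over
+     grammar transitions (state, tag, next state). Returns the semiring sum
+     (max or logsumexp) over all valid paths of the intersected automaton."""
+     n, T = tag_weights.shape
+     S = allowed.shape[0]
+     neg_inf = torch.tensor(-1e30)
+     alpha = torch.full((S,), -1e30)
+     alpha[0] = 0.0                               # state 0 is the initial state (assumption)
+     for i in range(n):
+         # score of moving from state s to state s' while emitting tag t for word i
+         step = alpha[:, None, None] + tag_weights[i][None, :, None]
+         step = torch.where(allowed.bool(), step, neg_inf)
+         alpha = semiring_sum(step.reshape(S * T, S), dim=0)
+     return semiring_sum(alpha, dim=0)            # every state treated as final (assumption)
+
+ forward_sum = torch.logsumexp                        # Forward algorithm
+ viterbi_sum = lambda x, dim: x.max(dim=dim).values   # Viterbi algorithm
+
+ # MAP inference via backpropagation: the gradient of the maximum score w.r.t.
+ # the emissions is one-hot over the selected (word, tag) pairs.
+ w = torch.randn(5, 3, requires_grad=True)        # toy sentence: 5 words, 3 tags
+ mask = torch.ones(2, 3, 2)                       # toy grammar: 2 states, all transitions allowed
+ best = constrained_scores(w, mask, viterbi_sum)
+ best.backward()
+ predicted_tags = w.grad.argmax(dim=-1)           # predicted tag index for each word
+ ```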
207
+
208
+ # 4 Weakly-Supervised Learning
209
+
210
+ The negative log-likelihood (NLL) loss,
211
+
212
+ $$
213
+ \ell (\boldsymbol {w}; \boldsymbol {y}) = - \langle \boldsymbol {y}, \boldsymbol {w} \rangle + A _ {Y} (\boldsymbol {w}),
214
+ $$
215
+
216
+ requires knowledge of the gold output $\pmb{y}$ . Unfortunately, NER datasets only contain annotated mentions, but not their component types (e.g. we do not know which components are body parts and which are events). Therefore, we need to resort to weakly-supervised learning to infer this information.
217
+
218
+ # 4.1 Learning with Partial Labels
219
+
220
+ Learning with partial labels refers to the case where the gold output is unknown but there is access to a subset of labels that includes the gold one (Grandvalet and Bengio, 2004; Nguyen and Caruana, 2008; Cour et al., 2011). Let $\widetilde{Y} \subseteq Y$ be the set of tag sequences that recovers the gold discontinuous mentions. For the example in Figure 1, $\widetilde{Y}$
221
+
222
+ contains two sequences: one where the components of the set of mentions are labeled $\mathrm{X} / \mathrm{X} / \mathrm{Y} / \mathrm{Y}$ and the other $\mathrm{Y} / \mathrm{Y} / \mathrm{X} / \mathrm{X}$ . For a sentence containing $k$ sets of mentions, we have $|\widetilde{Y}| = 2^{k}$ .
223
+
224
+ Following Jin and Ghahramani (2002), we minimize the NLL after marginalizing over $\widetilde{Y}$ :
225
+
226
+ $$
227
+ \begin{array}{l} \widetilde {\ell} (\boldsymbol {w}; \widetilde {Y}) = - \log p _ {\theta} (\widetilde {Y} | \boldsymbol {x}) = - \log \sum_ {\boldsymbol {y} \in \widetilde {Y}} p _ {\theta} (\boldsymbol {y} | \boldsymbol {x}) \\ = A _ {Y} \left(f _ {\theta} (\boldsymbol {x})\right) - \underbrace {\log \sum_ {\boldsymbol {y} \in \widetilde {Y}} \exp \langle \boldsymbol {y} , f _ {\theta} (\boldsymbol {x}) \rangle} _ {= A _ {\widetilde {Y}} \left(f _ {\theta} (\boldsymbol {x})\right)}, \tag {3} \\ \end{array}
228
+ $$
229
+
230
+ where $A_{\widetilde{Y}}$ is the clamped log-partition, which can be efficiently computed via a dynamic programming algorithm. In speech processing, $A_{\widetilde{Y}}$ is called the alignment model and the associated FSA the numerator graph (Povey et al., 2016; Hadian et al., 2018).
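+
+ To make Equation (3) concrete, the toy sketch below enumerates $Y$ and $\widetilde{Y}$ explicitly and computes both log-partitions by brute force; in practice both terms are obtained with the dynamic program of Section 3.1 run on the full and on the clamped automaton, and the sequences used here are placeholders.
+
+ ```python
+ import torch
+
+ def log_partition(sequences, w):
+     """logsumexp over the scores <y, w> of the enumerated tag sequences;
+     `sequences` is a list of tag-index lists and w is an (n, T) score matrix."""
+     scores = torch.stack([w[torch.arange(w.shape[0]), torch.tensor(seq)].sum()
+                           for seq in sequences])
+     return torch.logsumexp(scores, dim=0)
+
+ # Toy example: 3 words, 2 tags. Y stands in for the grammar's language and
+ # Y_tilde for the sequences compatible with the partial gold annotation.
+ w = torch.randn(3, 2, requires_grad=True)
+ Y = [[0, 0, 0], [0, 0, 1], [0, 1, 1], [1, 1, 1]]
+ Y_tilde = [[0, 0, 1], [0, 1, 1]]
+ loss = log_partition(Y, w) - log_partition(Y_tilde, w)   # Equation (3)
+ loss.backward()   # gradient: marginals over Y minus marginals over Y_tilde
+ ```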
231
+
232
+ Relation with EM. We can interpret minimizing $\widetilde{\ell}$ as an Expectation-Maximization (EM) procedure (Neal and Hinton, 1998). Indeed, the variational formulation of the clamped log-partition is:
233
+
234
+ $$
235
+ A_{\widetilde{Y}}(\boldsymbol {w}) = \sup_{\boldsymbol {\mu}\in \operatorname {conv}\widetilde{Y}}\langle \boldsymbol {\mu},\boldsymbol {w}\rangle -\Omega_{\widetilde{Y}}(\boldsymbol {\mu}),
236
+ $$
237
+
238
+ where $\mathrm{conv}$ denotes the convex hull and $\Omega_{\widetilde{Y}}$ is a structured entropy term as described by Blondel et al. (2020, Section 7.1). Setting $\boldsymbol{w} = f_{\theta}(\boldsymbol{x})$ , by Danskin's theorem (Danskin, 1966; Bertsekas, 1999), the gradient of $A_{\widetilde{Y}}$ is:
239
+
240
+ $$
241
+ \widehat{\boldsymbol{\mu}}_{\widetilde{Y}}(\boldsymbol {w}) = \nabla A_{\widetilde{Y}}(\boldsymbol {w}) = \operatorname *{arg max}_{\boldsymbol {\mu}\in \operatorname {conv} \widetilde{Y}}\left\langle \boldsymbol {\mu},\boldsymbol {w}\right\rangle -\Omega_{\widetilde{Y}}(\boldsymbol {\mu}).
242
+ $$
243
+
244
+ We rewrite the minimization of $\widetilde{\ell}$ as a two-step procedure:
245
+
246
+ 1. E step: compute $\widehat{\pmb{\mu}}_{\widetilde{Y}}(\pmb {w})$
247
+ 2. M step: take one gradient step over the network parameters using the marginal distribution computed in E step, yielding the loss:
248
+
249
+ $$
250
+ \ell (\boldsymbol {w}; \widehat {\boldsymbol {\mu}} _ {\widetilde {Y}} (\boldsymbol {w})) = - \langle \widehat {\boldsymbol {\mu}} _ {\widetilde {Y}} (\boldsymbol {w}), \boldsymbol {w} \rangle + A _ {Y} (\boldsymbol {w}).
251
+ $$
252
+
253
+ It is important to note that $\widehat{\pmb{\mu}}_{\widetilde{Y}}(\pmb {w})$ is considered as a constant in the M step, i.e. the gradient is:
254
+
255
+ $$
256
+ \nabla \ell (\boldsymbol {w}; \widehat {\boldsymbol {\mu}} _ {\widetilde {Y}} (\boldsymbol {w})) = - \widehat {\boldsymbol {\mu}} _ {\widetilde {Y}} (\boldsymbol {w}) + \nabla A _ {Y} (\boldsymbol {w}) = \nabla \widetilde {\ell} (\boldsymbol {w}; \widetilde {Y}),
257
+ $$
258
+
259
+ meaning that this EM procedure is equivalent to minimizing the loss in Equation (3).
260
+
261
+ This suggests a "Hard EM" alternative, where the $E$ step computes the unregularized maximum:
262
+
263
+ $$
264
+ \widehat{\boldsymbol{y}}_{\widetilde{Y}}(\boldsymbol {w}) = \operatorname *{arg max}_{\boldsymbol {y}\in \operatorname {conv} \widetilde{Y}}\left\langle \boldsymbol {y},\boldsymbol {w}\right\rangle ,
265
+ $$
266
+
267
+ and then apply one step of gradient descent using the loss $\ell(\boldsymbol{w}; \widehat{\boldsymbol{y}}_{\widetilde{Y}}(\boldsymbol{w}))$ in the M step.
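+
+ Under the same kind of toy enumeration (again a sketch, not the constrained Viterbi used in practice), the hard-EM variant reads as follows: the E step selects the best-scoring sequence of $\widetilde{Y}$ and treats it as a constant pseudo-gold label, and the M step applies the standard NLL with it.
+
+ ```python
+ import torch
+
+ def hard_em_loss(Y, Y_tilde, w):
+     """Hard-EM surrogate for the partial-label NLL on one sentence;
+     Y and Y_tilde are lists of tag-index sequences, w is an (n, T) score matrix."""
+     def seq_score(seq):
+         return w[torch.arange(w.shape[0]), torch.tensor(seq)].sum()
+     # E step: unregularized arg max over the clamped set, treated as a constant.
+     with torch.no_grad():
+         best = max(Y_tilde, key=lambda seq: seq_score(seq).item())
+     # M step: ordinary NLL with `best` playing the role of the gold sequence.
+     log_Z = torch.logsumexp(torch.stack([seq_score(s) for s in Y]), dim=0)
+     return -seq_score(best) + log_Z
+
+ w = torch.randn(3, 2, requires_grad=True)
+ Y = [[0, 0, 0], [0, 0, 1], [0, 1, 1], [1, 1, 1]]
+ Y_tilde = [[0, 0, 1], [0, 1, 1]]
+ hard_em_loss(Y, Y_tilde, w).backward()
+ ```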
268
+
269
+ # 4.2 Silver Annotation of Components
270
+
271
+ In order to automatically annotate components, we collect names of body parts from the metathesaurus MRCONSO.RRF of the Unified Medical Language System (UMLS, version 2023ab). We select English entries corresponding to semantic types "Body Location or Region", "Body Part, Organ, or Organ Component" and "Body Space or Junction", via the annotation in the lexicon MRSTY.RRF, which corresponds to identifiers T029, T023 and T030, respectively. However, we remove all acronyms (indicated via the marker ABR) as they would introduce too many false positives in the annotation process (e.g. "in" and "am" are acronyms of body parts). This leads to 218 134 names of body parts.
272
+
273
+ Then, we try to match words of components with these entries. If at least one word of a component matches an entry, we consider it a body part. Note that a single match fully disambiguates a set of mentions.
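+
+ Schematically, this silver annotation is plain word-level lexicon matching; in the sketch below the lexicon entries are invented stand-ins for the UMLS-derived list, and the labels X and Y simply denote "body part" versus "other component type".
+
+ ```python
+ def label_component(component_words, body_part_lexicon):
+     """Label a mention component as a body part ('X') if any of its words,
+     lowercased, matches a lexicon entry; otherwise label it 'Y'."""
+     return "X" if any(w.lower() in body_part_lexicon for w in component_words) else "Y"
+
+ # Hypothetical entries standing in for the UMLS-derived list of body-part names.
+ lexicon = {"leg", "muscle", "elbow", "shoulder"}
+ print(label_component(["muscle", "aches"], lexicon))   # -> "X" (body part)
+ print(label_component(["severe", "pain"], lexicon))    # -> "Y"
+ ```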
274
+
275
+ # 5 Related Work
276
+
277
+ Tagging methods. Tang et al. (2013) proposed the BIOHD tagging scheme for discontinuous NER. A major issue of their approach is its structural ambiguity: several tag sequences can encode the same discontinuous mention, and different discontinuous mentions have the same associated tag sequence, see (Muis and Lu, 2016, Section 3.1). A choice to resolve ambiguity has to be made when making a prediction, meaning that there are structures that cannot be predicted. Moreover, this approach does not constrain the output tag sequence to be well-formed, i.e. it may not be possible to reconstruct mentions from a predicted tag sequence. The tagging scheme used by Metke-Jimenez and Karimi (2016) and Dai et al. (2017) has the same limitation. Muis and Lu (2016) proposed a graph-based method that ensures that predictions are well-formed, but their approach still exhibits structural ambiguity.
278
+
279
+ Other methods. Wang and Lu (2019) rely on a two-step model that first predicts continuous spans (i.e. components) and then uses a separate classifier that combines them together. Dai et al. (2020) proposed a novel transition-based model. These two approaches are based on sequential predictions that are trained using gold intermediate outputs, which can lead to error propagation once a single mistake is made at test time. To resolve this problem, Wang et al. (2021) proposed a method that jointly predicts spans and their combination based on the maximal clique problem. A downside of these approaches is that they are more computationally costly (and therefore slower) than tagging methods.
280
+
281
+ # 6 Experiments
282
+
283
+ We evaluate our approach on three standard English datasets for discontinuous named-entity recognition in the biomedical domain: CADEC (Karimi et al., 2015), SHARE2013 (Pradhan et al.) and SHARE2014 (Mowery et al.). We pre-process the data using the script of Dai et al. (2020). Note that our tagging scheme cannot predict all discontinuous mentions in the data, i.e. there are sentences that we cannot convert to our representation. Therefore, we remove these sentences from the training set. $^{11}$ Data statistics are given in Table 2.
284
+
285
+ # 6.1 Discontinuity Analysis
286
+
287
+ We conduct a qualitative analysis of the search space of our algorithm on the full CADEC dataset. There are 26 discontinuous NER structures incompatible with our approach. $^{12}$
288
+
289
+ Some discontinuous mentions have a partially shared component. This is due to a shared negation (1 case), a shared adjective (5 cases) or a shared prepositional phrase (PP, 1 case):
290
+
291
+ ![](images/0d11433b4ea1d09ce2fa3910bf12da085aafb74110b300d516183b7ccea73cfa.jpg)
292
+
293
+ ![](images/1099285157ab2e4716e01d5e0b543cd455c4fdbf15b3390f8cff72b32cdf420f.jpg)
294
+
295
+ Although we cannot recognize these structures, we could extend our automaton to recognize the shared part as a continuous chunk (negation, adjective or PP), and the rest using our two-layer representation.
296
+
297
+ There are also discontinuous mentions that are composed of three components (16 cases), which we cannot recognize. This can happen because there is a coordination in both subject and PP positions as in the following example:
298
+
299
+ ![](images/058c67ac67148ced97f90b2aa7bdbc13407fd759df4a3e39e4a65ecf4d372d30.jpg)
300
+
301
+ The mention "muscle aches in elbows" is composed of three components.
302
+
303
+ Finally, the last three incompatibilities are due to a convoluted syntactic structure (1 case) and annotation errors (2 cases). Interestingly, some annotation errors can be detected thanks to our new annotation scheme. For example, in CADEC the sequence "renal and respiratory failure" has been incorrectly annotated as containing renal respiratory failure instead of renal failure. In SHARE2014, the sequence "pleural / abdominal effusions" has been incorrectly annotated as containing effusions instead of abdominal effusions. Note that in this paper we used the datasets as is and did not fix any errors so that results are comparable with previous work.
304
+
305
+ # 6.2 Results
306
+
307
+ Our neural network is very simple: we use the DEBERTA-V3 pretrained self-attentive network (He et al., 2021a,b) followed by a single linear projection that maps context-sensitive embeddings to tag weights. All training details are given in Appendix A. For each loss function, we train six models with six different seeds and select the best model using the development set.
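+
+ A minimal sketch of such a tagger with the HuggingFace transformers library (the microsoft/deberta-v3-base checkpoint, the tag-set size and the omission of subword-to-word pooling are assumptions for illustration, not the paper's exact setup):
+
+ ```python
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ class Tagger(torch.nn.Module):
+     def __init__(self, model_name="microsoft/deberta-v3-base", n_tags=10):
+         super().__init__()
+         self.encoder = AutoModel.from_pretrained(model_name)
+         self.dropout = torch.nn.Dropout(0.5)
+         self.proj = torch.nn.Linear(self.encoder.config.hidden_size, n_tags)
+
+     def forward(self, **inputs):
+         hidden = self.encoder(**inputs).last_hidden_state   # (batch, length, hidden)
+         return self.proj(self.dropout(hidden))              # (batch, length, n_tags) tag weights
+
+ tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
+ model = Tagger()
+ batch = tokenizer(["severe muscle aches in both legs"], return_tensors="pt")
+ tag_weights = model(**batch)
+ ```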
308
+
309
+ Results. We report the F-measure on all mentions and on discontinuous mentions only in Table 1.
310
+
311
+ <table><tr><td rowspan="2"></td><td colspan="2">CADEC</td><td colspan="2">SHARE2013</td><td colspan="2">SHARE2014</td></tr><tr><td>F1</td><td>Disc. F1</td><td>F1</td><td>Disc. F1</td><td>F1</td><td>Disc. F1</td></tr><tr><td colspan="7">Previous work</td></tr><tr><td>Tang et al. (2013)</td><td></td><td></td><td>75.0</td><td></td><td></td><td></td></tr><tr><td>Tang et al. (2018)</td><td>66.3</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Metke-Jimenez and Karimi (2016)</td><td>64.4</td><td></td><td>56.5</td><td></td><td>60.2</td><td></td></tr><tr><td>Metke-Jimenez and Karimi (2016)†</td><td>67.4</td><td>1.8</td><td>74.9</td><td>18.8</td><td>76.6</td><td>6.0</td></tr><tr><td>Muis and Lu (2016)†</td><td>58.0</td><td>23.9</td><td>70.3</td><td>50.0</td><td>74.7</td><td>41.1</td></tr><tr><td>Dai et al. (2020)</td><td>69.0</td><td>37.9</td><td>77.7</td><td>52.5</td><td>79.6</td><td>49.2</td></tr><tr><td>Wang et al. (2021)</td><td>71.5</td><td>44.4</td><td>81.2</td><td>55.9</td><td>81.3</td><td>54.1</td></tr><tr><td colspan="7">This work</td></tr><tr><td>Soft EM</td><td>71.1</td><td>38.1</td><td>80.7</td><td>49.2</td><td>81.5</td><td>51.9</td></tr><tr><td>Hard EM</td><td>71.9</td><td>35.9</td><td>82.0</td><td>51.9</td><td>81.6</td><td>54.1</td></tr><tr><td>Weakly soft EM</td><td>71.8</td><td>37.6</td><td>82.0</td><td>52.0</td><td>81.4</td><td>46.2</td></tr><tr><td>Weakly hard EM</td><td>70.4</td><td>33.6</td><td>82.0</td><td>52.1</td><td>81.8</td><td>49.8</td></tr><tr><td>Structural labels</td><td>72.9</td><td>41.5</td><td>82.1</td><td>53.3</td><td>80.9</td><td>53.7</td></tr></table>
312
+
313
+ Table 1: Results on three different datasets. Results marked with $\dagger$ are reproductions by Wang et al. (2021).
314
+
315
+ <table><tr><td>Split</td><td>CADEC</td><td>SHARE2013</td><td>SHARE2014</td></tr><tr><td>Train</td><td>5340 (306)</td><td>8508 (477)</td><td>17407 (777)</td></tr><tr><td>- filtered</td><td>5322 (288)</td><td>8432 (401)</td><td>17294 (667)</td></tr><tr><td>Dev.</td><td>1097 (59)</td><td>1250 (58)</td><td>1361 (59)</td></tr><tr><td>Test</td><td>1160 (74)</td><td>9009 (301)</td><td>15850 (411)</td></tr></table>
316
+
317
+ Table 2: Number of sentences in each split. The number in parentheses corresponds to the number of sentences with at least one discontinuous mention.
318
+
319
+ <table><tr><td>Model</td><td>CADEC</td><td>S2013</td><td>S2014</td></tr><tr><td>Dai et al. (2020)</td><td>36</td><td>41</td><td>40</td></tr><tr><td>Wang et al. (2021)</td><td>193</td><td>200</td><td>198</td></tr><tr><td>This work</td><td>8286</td><td>10216</td><td>10206</td></tr></table>
320
+
321
+ Table 3: Speed comparison in terms of sentences per second. Numbers for Dai et al. (2020) correspond to BERT-based models, as reproduced by Wang et al. (2021).
322
+
323
+ The evaluation is conducted on the original representation, so results are comparable with previous work. Our approach leads to results similar to those of previous work. We do not observe significant differences between the different loss functions.
324
+
325
+ Speed. All numbers are reported for computation on NVIDIA V100 GPUs. Training takes approximately 40, 60 and 80 minutes on CADEC, SHARE2013 and SHARE2014, respectively. Table 3 compares decoding speed with the previous work of Dai et al. (2020) and Wang et al. (2021). The transition-based model of Dai et al. (2020) is particularly slow as their approach cannot fully exploit GPU parallelization. Our approach is $\sim 40 - 50$ times faster than the method of Wang et al. (2021). This is due to two reasons: (1) they use a complex neural
326
+
327
+ network architecture on top of a BERT-like model and (2) for each input they must solve an NP-hard problem (maximum clique) to make the prediction.
328
+
329
+ # 7 Conclusion
330
+
331
+ In this work, we propose a novel tagging scheme for discontinuous NER based on a two-layer representation of discontinuous mentions. Our approach leads to results on par with the state of the art using a very simple neural network architecture. Importantly, decoding with our model is very fast compared to previous work.
332
+
333
+ Our main objective with this work is to propose a simple plug-in method for discontinuous NER: any future work on models for BIO tagging can now also be trivially evaluated on discontinuous NER. Moreover, our approach is also fast to train, meaning that there is no significant cost overhead.
334
+
335
+ # Acknowledgments
336
+
337
+ I thank Vlad Niculae and François Yvon for their comments and suggestions. I thank Lucas Ondel-Yang for the many discussions on finite-state automata that inspired the decoding algorithm described in this paper. I thank Pierre Zweigenbaum for the help with the SHARE datasets and the UMLS database.
338
+
339
+ Work partially done while I was a researcher at LISN and ISIR. This work was granted access to the HPC/AI resources of IDRIS under the allocation 2024-AD011013727R1 made by GENCI.
340
+
341
+ # Limitations
342
+
343
+ The approach proposed in this paper cannot cover all forms of discontinuity observed in the three datasets. Indeed, some discontinuous mentions are composed of three or more parts. However, they are rare, so our results are still competitive. Moreover, our contribution is focused on the general decoding approach, which can be extended by future work.
344
+
345
+ Discontinuous NER datasets are scarce; therefore, we are only able to experiment on three English datasets in the biomedical domain. We suspect this is due to a chicken-and-egg dilemma: discontinuities are often not annotated because there is no easy plug-and-play approach to predict them, and there is little NLP work on the topic because only a few datasets are available for experiments.
346
+
347
+ During the evaluation of our approach, we observed that many mentions are missing in the gold annotation. As such, all results reported on these datasets (including previous works) should be taken with a pinch of salt.
348
+
349
+ # References
350
+
351
+ Arturo Argueta and David Chiang. 2017. Decoding with finite-state transducers on GPUs. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1044-1052, Valencia, Spain. Association for Computational Linguistics.
352
+ Leonard E Baum. 1972. An inequality and associated maximization technique in statistical estimation for probabilistic functions of markov processes. Inequalities, 3(1):1-8.
353
+ Jesse A. Berlin, Susan C. Glasser, and Susan S. Ellenberg. 2008. Adverse event detection in drug development: Recommendations and obligations beyond phase 3. American Journal of Public Health, 98(8):1366-1371. PMID: 18556607.
354
+ Dimitri P Bertsekas. 1999. Nonlinear programming. Athena Scientific Belmont.
355
+ Mathieu Blondel, André F.T. Martins, and Vlad Niculae. 2020. Learning with Fenchel-Young losses. Journal of Machine Learning Research, 21(35):1-69.
356
+ N. Chinchor and P. Robinson. 1998. Appendix E: MUC-7 named entity task definition (version 3.5). In Seventh Message Understanding Conference (MUC-7): Proceedings of a Conference Held in Fairfax, Virginia, April 29 - May 1, 1998.
357
+ Preciosa M Coloma, Gianluca Trifiro, Vaishali Patadia, and Miriam Sturkenboom. 2013. Postmarketing safety surveillance: where does signal detection using
358
+
359
+ electronic healthcare records fit into the big picture? Drug safety, 36:183-197.
360
+ Timothy Cour, Ben Sapp, and Ben Taskar. 2011. Learning from partial labels. Journal of Machine Learning Research, 12(42):1501-1536.
361
+ Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2020. An effective transition-based model for discontinuous NER. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5860-5870, Online. Association for Computational Linguistics.
362
+ Xiang Dai, Sarvnaz Karimi, and Cecile Paris. 2017. Medication and adverse event extraction from noisy text. In Proceedings of the Australasian Language Technology Association Workshop 2017, pages 79-87, Brisbane, Australia.
363
+ John M. Danskin. 1966. The theory of max-min, with applications. SIAM Journal on Applied Mathematics, 14(4):641-664.
364
+ Samuel Eilenberg. 1974. Automata, languages, and machines. Academic press.
365
+ Hao Fei, Donghong Ji, Bobo Li, Yijiang Liu, Yafeng Ren, and Fei Li. 2021. Rethinking boundaries: End-to-end recognition of discontinuous mentions with pointer networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12785-12793.
366
+ G.D. Forney. 1973. The Viterbi algorithm. Proceedings of the IEEE, 61(3):268-278.
367
+ Xianping Ge. 2002. Segmental semi-markov models and applications to sequence analysis.
368
+ Joshua Goodman. 1999. Semiring parsing. Computational Linguistics, 25(4):573-606.
369
+ Yves Grandvalet and Yoshua Bengio. 2004. Learning from partial labels with minimum entropy. Centre interuniversitaire de recherche en analyse des organisations (CIRANO).
370
+ Ralph Grishman and Beth Sundheim. 1996. Message Understanding Conference- 6: A brief history. In COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics.
371
+ Hossein Hadian, Hossein Sameti, Daniel Povey, and Sanjeev Khudanpur. 2018. End-to-end speech recognition using lattice-free MMI. In *Interspeech*, pages 12-16.
372
+ Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021a. Debertav3: Improving deberta using electra-style pretraining with gradient-disentangled embedding sharing. Preprint, arXiv:2111.09543.
373
+ Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021b. Deberta: Decoding-enhanced bert with disentangled attention. In International Conference on Learning Representations.
374
+
375
+ John E Hopcroft, Rajeev Motwani, and Jeffrey D Ullman. 2001. Introduction to automata theory, languages, and computation. *Acm Sigact News*, 32(1):60-65.
376
+ Jacques Janssen and Nikolaos Limnios. 1999. SemiMarkov models and applications. Kluwer Academic.
377
+ Rong Jin and Zoubin Ghahramani. 2002. Learning with multiple labels. In Advances in Neural Information Processing Systems, volume 15. MIT Press.
378
+ Stephan Kanthak and Hermann Ney. 2004. FSA: An efficient and flexible $\mathrm{C + + }$ toolkit for finite state automata using on-demand computation. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 510- 517, Barcelona, Spain.
379
+ Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedical informatics, 55:73-81.
380
+ Lauri Karttunen, Jean-Pierre Chanod, Gregory Grefenstette, and Anne Schiller. 1996. Regular expressions for language engineering. Natural Language Engineering, 2(4):305-328.
381
+ Kimmo Koskenniemi. 1990. Finite-state parsing and disambiguation. In COLING 1990 Volume 2: Papers presented to the 13th International Conference on Computational Linguistics.
382
+ Kimmo Koskenniemi, Pasi Tapanainen, and Atro Voutilainen. 1992. Compiling and using finite-state syntactic rules. In COLING 1992 Volume 1: The 14th International Conference on Computational Linguistics.
383
+ George Lakoff and Stanley Peters. 1969. Phrasal conjunction and symmetric predicates. modern studies in english: Readings in transformational grammar.
384
+ Winfried Lechner. 2000. Conjunction reduction in subordinate structures. In North East Linguistics Society, volume 30, page 5.
385
+ Jingye Li, Hao Fei, Jiang Liu, Shengqiong Wu, Meishan Zhang, Chong Teng, Donghong Ji, and Fei Li. 2022. Unified named entity recognition as word-word relation classification. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10):10965-10973.
386
+ Chu-Cheng Lin, Hao Zhu, Matthew R. Gormley, and Jason Eisner. 2019. Neural finite-state transducers: Beyond rational relations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 272-283, Minneapolis, Minnesota. Association for Computational Linguistics.
387
+ Matilde Marcolli and Ryan Thorngren. 2014. Thermodynamic semirings. Journal of Noncommutative Geometry, 8(2):337-392.
388
+
389
+ Arthur Mensch and Mathieu Blondel. 2018. Differentiable dynamic programming for structured prediction and attention. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3462-3471. PMLR.
390
+ Alejandro Metke-Jimenez and Sarvnaz Karimi. 2016. Concept identification and normalisation for adverse drug event discovery in medical forums. In BMDID@ ISWC.
391
+ Mehryar Mohri. 2009. Weighted Automata Algorithms, pages 213-254. Springer Berlin Heidelberg, Berlin, Heidelberg.
392
+ Mehryar Mohri, Fernando Pereira, and Michael Riley. 1996. Weighted automata in text and speech processing. In Proceedings of the ECAI 1996 Workshop.
393
+ Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2021. On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines. In International Conference on Learning Representations.
394
+ Danielle L Mowery, Sumithra Velupillai, Brett R South, Lee Christensen, David Martinez, Liadh Kelly, Lorraine Goeuriot, Noemie Elhadad, Sameer Pradhan, Guergana Savova, and Wendy W Chapman. Task 2: ShARe/CLEF eHealth Evaluation Lab 2014.
395
+ Aldrian Obaja Muis and Wei Lu. 2016. Learning to recognize discontinuous entities. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 75-84, Austin, Texas. Association for Computational Linguistics.
396
+ Radford M. Neal and Geoffrey E. Hinton. 1998. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in graphical models, pages 355-368. Springer.
397
+ Nam Nguyen and Rich Caruana. 2008. Classification with partial labels. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 551-559.
398
+ Sean Papay, Roman Klinger, and Sebastian Pado. 2022. Constraining linear-chain CRFs to regular languages. In International Conference on Learning Representations.
399
+ Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vimal Manohar, Xingyu Na, Yiming Wang, and Sanjeev Khudanpur. 2016. Purely sequence-trained neural networks for ASR based on lattice-free MMI. In Interspeech, pages 2751-2755.
400
+ Sameer Pradhan, Noemie Elhadad, Brett R South, David Martinez, Lee M Christensen, Amy Vogel, Hanna Suominen, Wendy W Chapman, and Guergana K Savova. Task 1: ShARe/CLEF eHealth Evaluation Lab 2013.
401
+
402
+ Lance Ramshaw and Mitch Marcus. 1995. Text chunking using transformation-based learning. In Third Workshop on Very Large Corpora.
403
+ Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neural context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 623-633, San Diego, California. Association for Computational Linguistics.
404
+ Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 147-155, Boulder, Colorado. Association for Computational Linguistics.
405
+ Alexander Rush. 2020. Torch-struct: Deep structured prediction library. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 335–342, Online. Association for Computational Linguistics.
406
+ Sunita Sarawagi and William W Cohen. 2004. Semimarkov conditional random fields for information extraction. In Advances in Neural Information Processing Systems, volume 17. MIT Press.
407
+ Buzhou Tang, Jianglu Hu, Xiaolong Wang, and Qingcai Chen. 2018. Recognizing continuous and discontinuous adverse drug reaction mentions from social media using LSTM-CRF. *Wireless Communications & Mobile Computing* (Online), 2018.
408
+ Buzhou Tang, Yonghui Wu, Min Jiang, Joshua C Denny, and Hua Xu. 2013. Recognizing and encoding disorder concepts in clinical text using machine learning and vector space model. CLEF (Working Notes), 665.
409
+ Pasi Tapanainen. 1997. Applying a Finite-State Intersection Grammar. In *Finite-State Language Processing*, The MIT Press.
410
+ Roy Tromble and Jason Eisner. 2006. A fast finite-state relaxation method for enforcing global constraints on sequence decoding. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 423-430, New York City, USA. Association for Computational Linguistics.
411
+ Martin J Wainwright, Michael I Jordan, et al. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305.
412
+ Bailin Wang and Wei Lu. 2019. Combining spans into entities: A neural two-stage approach for recognizing discontinuous entities. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6216-6224, Hong Kong, China. Association for Computational Linguistics.
413
+
414
+ Qing Wang, Haojie Jia, Wenfei Song, and Qi Li. 2023. CoRec: An easy approach for coordination recognition. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 15112-15120, Singapore. Association for Computational Linguistics.
415
+ Yucheng Wang, Bowen Yu, Hongsong Zhu, Tingwen Liu, Nan Yu, and Limin Sun. 2021. Discontinuous named entity recognition as maximal clique discovery. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 764-774, Online. Association for Computational Linguistics.
416
+ Christopher Wilder. 2018. Conjunction Reduction and Right-Node Raising. In *The Oxford Handbook of Ellipsis*. Oxford University Press.
417
+
418
+ # A Training details
419
+
420
+ The model is trained for 20 epochs using the cosine learning rate scheduler as implemented in the HuggingFace library. The maximum learning rate is fixed to $10^{-5}$ . The warmup ratio is $10\%$ . We apply dropout with a probability of 0.5 to BERT's output. The gradient norm is clipped to 1. All parameters have a weight decay of 0.01. We use the Adam variant proposed by Mosbach et al. (2021).
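+
+ A rough sketch of this optimization setup with the transformers scheduler; the Adam variant of Mosbach et al. (2021) is approximated here by plain AdamW, and the number of training steps is a placeholder.
+
+ ```python
+ import torch
+ from transformers import get_cosine_schedule_with_warmup
+
+ def make_optimizer(model, num_training_steps, lr=1e-5, weight_decay=0.01, warmup_ratio=0.1):
+     optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
+     scheduler = get_cosine_schedule_with_warmup(
+         optimizer,
+         num_warmup_steps=int(warmup_ratio * num_training_steps),
+         num_training_steps=num_training_steps,
+     )
+     return optimizer, scheduler
+
+ # Inside the training loop, after loss.backward():
+ #     torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
+ #     optimizer.step(); scheduler.step(); optimizer.zero_grad()
+ ```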
2024/A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:974688139413d1dd9d57757363c98d74926996843a5fae1bdfd7be870b1ad913
3
+ size 354609
2024/A Fast and Sound Tagging Method for Discontinuous Named-Entity Recognition/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Generic Method for Fine-grained Category Discovery in Natural Language Texts/4d06db13-9ceb-41b6-abd3-fe6498c25825_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Generic Method for Fine-grained Category Discovery in Natural Language Texts/4d06db13-9ceb-41b6-abd3-fe6498c25825_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Generic Method for Fine-grained Category Discovery in Natural Language Texts/4d06db13-9ceb-41b6-abd3-fe6498c25825_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c106c6db90bd9b60f57d0cbefbf6578b4bb5afce4d209d5ca92434ec96c037a2
3
+ size 1807888
2024/A Generic Method for Fine-grained Category Discovery in Natural Language Texts/full.md ADDED
@@ -0,0 +1,698 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # A Generic Method for Fine-grained Category Discovery in Natural Language Texts
2
+
3
+ Chang Tian\* Matthew B. Blaschko\* Wenpeng Yin\* Mingzhe Xing\* Yinliang Yue\* Marie-Francine Moens
4
+
5
+ $^{\S}$ KU Leuven†Penn State University $\triangle$ Zhongguancun Laboratory chang.tian@kuleuven.be xingmz@zgclab.edu.cn
6
+
7
+ # Abstract
8
+
9
+ Fine-grained category discovery using only coarse-grained supervision is a cost-effective yet challenging task. Previous training methods focus on aligning query samples with positive samples and distancing them from negatives. They often neglect intra-category and intercategory semantic similarities of fine-grained categories when navigating sample distributions in the embedding space. Furthermore, some evaluation techniques that rely on precollected test samples are inadequate for real-time applications. To address these shortcomings, we introduce a method that successfully detects fine-grained clusters of semantically similar texts guided by a novel objective function. The method uses semantic similarities in a logarithmic space to guide sample distributions in the Euclidean space and to form distinct clusters that represent fine-grained categories. We also propose a centroid inference mechanism to support real-time applications. The efficacy of the method is both theoretically justified and empirically confirmed on three benchmark tasks. The proposed objective function is integrated in multiple contrastive learning based neural models. Its results surpass existing state-of-the-art approaches in terms of Accuracy, Adjusted Rand Index and Normalized Mutual Information of the detected fine-grained categories. Code and data are publicly available at https://github.com/changtianluckyforever/F-grained-STAR.
10
+
11
+ # 1 Introduction
12
+
13
+ Fine-grained analysis has drawn much attention in many artificial intelligence fields, e.g., Computer Vision (Chen et al., 2018; Li et al., 2021; Wang et al., 2024a; Park and Ryu, 2024) and Natural Language Processing (Ma et al., 2023; Tian et al., 2024; An et al., 2024), because it can provide more detailed features than coarse-grained data. For instance, as illustrated in Figure 1, solely
14
+
15
+ ![](images/fa3214dac388fabee35370dfdb3f6ca0688e827d859ddbc82926424acfd73300.jpg)
+ Figure 1: A fine-grained intent detection example. Left: This panel illustrates the label hierarchy, transitioning from coarse-grained to fine-grained granularity. Right: This example demonstrates intent detection in a conversation about car choices, showing how coarse-grained analysis alone can lead to incorrect recommendations by a life assistant due to a lack of fine-grained analysis.
26
+ based on coarse-grained analysis, the chatbot might incorrectly recommend a roadster, which is unsuitable for field adventures. Detecting the fine-grained intent would allow the chatbot to recommend an off-road vehicle that aligns with the user's requirements. However, annotating fine-grained categories can be labor-intensive, as it demands precise expert knowledge specific to each domain and involved dataset. Addressing this challenge, An et al. (2022) recently introduced Fine-grained Category Discovery under Coarse-grained Supervision (FCDC) for language classification tasks (details in Section 3). Solving FCDC tasks can significantly benefit numerous practical applications, for example, fine-grained classification of enterprise documents (Chen et al., 2023; Vellmer et al., 2023), fine-grained dialogue intent detection tasks (Tian et al., 2022; Lichouri et al., 2024), product labeling on online shopping websites based on text descriptions (Ghani and Fano, 2002; Parekh et al., 2021), and so on. FCDC aims to reduce annotation costs by leveraging the relative ease of obtaining coarse-grained annotations, without
27
+
28
36
+
37
+ requiring fine-grained supervisory information. This approach has sparked significant research interest in the automatic discovery of fine-grained language categories (Ma et al., 2023; An et al., 2023a; Vaze et al., 2024; Lian et al., 2024).
38
+
39
+ Existing methods for addressing FCDC are typically grouped into three categories (An et al., 2024): language models, self-training methods, and contrastive learning methods. Language models (Devlin et al., 2019a; Touvron et al., 2023), including their fine-tuned versions with coarse labels, generally perform poorly on this task due to a lack of fine-grained supervision. Self-training methods (Caron et al., 2018; Zhang et al., 2021) and their variants often employ clustering assignments as fine-grained pseudo-labels, filtering out some noisy pseudo-labels, and training with these labels. Dominant contrastive learning methods (Chen et al., 2020; Mekala et al., 2021; An et al., 2022, 2023a) typically identify positive and negative samples for a given query by measuring their semantic distances. The contrastive loss ensures that the query sample moves closer to positive samples and further away from negative samples. So these methods form clusters of samples in the embedding space, with each cluster representing a discovered fine-grained category, without requiring fine-grained category supervision.
40
+
41
+ However, past methods did not utilize comprehensive semantic similarities (CSS) in the logarithmic space to guide sample distributions in the Euclidean space. We define CSS as the fine-grained semantic similarities measured by bidirectional Kullback-Leibler (KL) divergence in the logarithmic space between the query sample and each available positive or negative sample. Although An et al. (2024) recently explored similarities measured by rank order between the query sample and positive samples, they ignore similarities with negative samples.
42
+
43
+ We propose a method (STAR) for detecting fine-grained clusters of semantically similar texts through a novel objective function, with the core component considering CSS. This component guides sample distributions in the Euclidean space based on the magnitude of CSS in the logarithmic space. Large semantic differences (low similarity) in the logarithmic space between the query sample and an available sample push the query sample further away in the Euclidean space, while small semantic differences bring the query sample closer to the available sample. Thus, samples form
44
+
45
+ distinguishable fine-grained clusters in the Euclidean space, with each cluster representing a discovered category.
46
+
47
+ Additionally, clustering inference used by previous works (An et al., 2022, 2023a, 2024) cannot support real-time scenarios, so we propose a variant inference mechanism utilizing approximated fine-grained cluster centroids, delivering competitive results for the tasks considered.
48
+
49
+ Our main contributions in this work can be summarized as follows:
50
+
51
+ - Method: STAR enhances existing contrastive learning methods by leveraging comprehensive semantic similarities in a logarithmic space to guide sample distributions in the Euclidean space, thereby making fine-grained categories more distinguishable.
52
+ - Theory: We interpret STAR from the perspectives of clustering and generalized Expectation Maximization (EM). Also, we conduct loss and gradient analyses to explain the effectiveness of using CSS for category discovery.
53
+ - Experiments: Experiments on three text classification tasks (intent detection (Larson et al., 2019), scientific abstract classification (Kowsari et al., 2017), and chatbot query (Liu et al., 2021)) demonstrate new state-of-the-art (SOTA) performance compared to 22 baselines, validating the theoretical method.
54
+
55
+ # 2 Related Work
56
+
57
+ # 2.1 Fine-grained Category Discovery
58
+
59
+ Fine-grained data analysis is crucial in Natural Language Processing (Guo et al., 2021; Ma et al., 2023; Tian et al., 2024) and Computer Vision (Pan et al., 2023; Wang et al., 2024b). However, effectively discovering fine-grained categories from coarse-grained ones remains challenging (Mekala et al., 2021). Traditional category discovery methods often assume that known and discovered categories are at the same granularity level (An et al., 2023b; Vaze et al., 2024).
60
+
61
+ To discover fine-grained categories under the supervision of coarse-grained categories, An et al. (2022) introduces the FCDC task. Self-training approaches, such as Deep Cluster (Caron et al., 2018; An et al., 2023a), use clustering algorithms to detect the fine-grained categories, assign pseudolabels to the clusters and their samples, and then
62
+
63
+ train a classification model with these pseudolabels. Its variant, Deep Aligned Clustering (Zhang et al., 2021), devises a strategy to filter out inconsistent pseudo-labels during clustering. Contrastive learning has become prevalent in FCDC tasks; Bukchin et al. (2021) and An et al. (2022) develop angular contrastive learning tailored for fine-grained classification. An et al. (2022) proposes a weighted self-contrastive framework to enhance the model's discriminative capacity for coarse-grained samples. Ma et al. (2023) and An et al. (2023a) use noisy fine-grained centroids and retrieve neighbors as positive pairs, respectively, applying constraints to filter noise. An et al. (2024) advances this approach with neighbors that are manually weighted as positive pairs. However, previous efforts have not leveraged comprehensive semantic similarities to guide sample distributions and thereby to enhance fine-grained category discovery.
64
+
65
+ # 2.2 Neighborhood Contrastive Learning
66
+
67
+ Contrastive learning enhances representation learning by bringing the query sample closer to positive samples and distancing it from negative samples (Chen et al., 2020). Prior research has focused on constructing high-quality positive pairs. He et al. (2020) utilizes two different transformations of the same input as query and positive sample, respectively. Li et al. (2020) introduces the use of prototypes, derived through clustering, as positive instances. Additionally, An et al. (2022) employs shallow-layer features from BERT as positive samples and introduces a weighted contrastive loss. This approach primarily differentiates data at a coarse-grained level, and the manually set weights limit its broader applicability.
68
+
69
+ To circumvent complex data augmentation, neighborhood contrastive learning (NCL) is developed, treating the nearest neighbors of queries as positive samples (Dwibedi et al., 2021). Zhong et al. (2021) extends this by utilizing k-nearest neighbors to identify hard negative samples, while Zhang et al. (2022) selects a positive key from the k-nearest neighbors for contrastive representation learning. However, these approaches often deal with noisy nearest neighbors that include false-positive samples. An et al. (2023a) addresses this by proposing three constraints to filter out uncertain neighbors, yet it overlooks semantic similarities between the query sample and each available sample. An et al. (2024) represents semantic similarities
70
+
71
+ using rank order among positive samples but neglects similarities among negative samples. In contrast, STAR uses comprehensive semantic similarities to guide sample distributions in the Euclidean space, offering richer features and a superior approach to pure contrastive learning.
72
+
73
+ # 3 Problem Formulation
74
+
75
+ Given a set of coarse-grained categories $Y_{coarse} = \{C_1, C_2, \ldots, C_M\}$ and a coarsely labeled training set $D_{train} = \{(x_i, c_i) \mid c_i \in Y_{coarse}\}_{i=1}^N$ , where $N$ denotes the number of training samples, the task of FCDC involves developing a feature encoder $F_\theta$ . This encoder maps samples into a feature space, further segmenting them into distinct fine-grained categories $Y_{fine} = \{F_1, F_2, \ldots, F_K\}$ , without any fine-grained supervisory information. Here, $Y_{fine}$ represents sub-classes of $Y_{coarse}$ . Model effectiveness is evaluated on a testing set $D_{test} = \{(x_i, y_i) \mid y_i \in Y_{fine}\}_{i=1}^L$ , with $L$ as the number of test samples, utilizing features extracted by $F_\theta$ . For evaluation consistency and fairness, only the number of fine-grained categories $K$ is used, aligning with methodologies established in previous research (Ma et al., 2023; An et al., 2022, 2023a).
76
+
77
+ # 4 Method
78
+
79
+ STAR leverages comprehensive semantic similarities and integrates seamlessly with contrastive learning baselines by modifying the objective function. We have developed variants for three baselines: PseudoPrototypicalNet (PPNet) (Boney and Ilin, 2017; Ji et al., 2020), DNA (An et al., 2023a), and DOWN (An et al., 2024). This section focuses on STAR-DOWN because DOWN outperforms other baselines, with additional method variants detailed in Appendix A.3.
80
+
81
+ DOWN involves three steps: pre-training with coarse-grained labels (Section 4.1), retrieving and weighting nearest neighbors (Section 4.2), and training with a contrastive loss. STAR-DOWN follows the same first two steps but replaces the third with a novel objective function (Section 4.3). Like DOWN, STAR-DOWN iterates the last two steps until the unsupervised metric, the silhouette score of the clustering into fine-grained clusters, does not improve for five consecutive epochs. The detailed algorithm is provided in Appendix A.1.7.
82
+
83
+ ![](images/c865da4835eef271c9f8adb8047f3a37eb7dbc08ae41d982b19f92e2d3eb4ced.jpg)
84
+ Figure 2: STAR-DOWN integrates the baseline DOWN with the STAR method (shown in the red dashed box). In the visual representation, colors differentiate samples, squares represent features extracted by the Encoder, and circles denote features extracted by the Momentum Encoder. Unidirectional arrows indicate proximity, while bidirectional arrows signify distance between samples.
85
+
86
+ # 4.1 Multi-task Pre-training
87
+
88
+ As illustrated in Figure 2, the baseline DOWN (An et al., 2024) utilizes the BERT Encoder $F_{\theta}$ to extract normalized feature embeddings $q_{i} = F_{\theta}(x_{i})$ for input $x_{i}$ , where $\theta$ represents the Encoder parameters. To ensure effective initialization for fine-grained training, DOWN pre-trains the Encoder on the coarsely labeled train set $D_{train}$ with labels $Y_{coarse}$ . DOWN utilizes the sum of a cross-entropy loss $L_{\mathrm{ce}}$ and a masked language modeling loss $L_{\mathrm{mlm}}$ for multi-task pre-training of the Encoder (detailed in Appendix A.1.1).
89
+
90
+ # 4.2 Neighbors Retrieval and Weighting
91
+
92
+ The Momentum Encoder is a slowly evolving version of the Encoder, commonly employed in self-supervised learning (He et al., 2020; An et al., 2023a). DOWN integrates the Momentum Encoder to generate more consistent, stable, and better representations over time (An et al., 2024).
93
+
94
+ In Figure 2, the Momentum Encoder $F_{\theta_k}$ with parameters $\theta_{k}$ extracts and stores gradient-free normalized neighbor features $h_i = F_{\theta_k}(x_i)$ in a dynamic data queue $Q$ . To ensure consistency between the outputs of $F_{\theta_k}$ and $F_{\theta}$ , $F_{\theta_k}$ 's parameters are updated via a moving-average method (He et al., 2020): $\theta_{k} \gets m\theta_{k} + (1 - m)\theta$ , where $m$ is the momentum coefficient. For each query feature $q_{i}$ , in order to facilitate semantic similarity capture and fine-grained clustering, its top-k nearest neighbors $N_{i}$ are determined from $Q$ using cosine similarity (Sim): $N_{i} = \{h_{j} \mid h_{j} \in \operatorname{argtop}_{k}(\operatorname{Sim}(q_{i}, h_{l}))\}$ ,
95
+
96
+ where $\mathrm{Sim}(q_i, h_l) = \frac{q_i^{\mathrm{T}} h_l}{\| q_i \| \cdot \| h_l \|}$ is the cosine similarity function.
97
+
98
+ To counteract potential false positives in $N_{i}$ , DOWN utilizes a soft weighting mechanism based on neighbor rank to balance information utility against noise, with weights $\omega_{j}$ of neighbor $h_{j}$ calculated as: $\omega_{j} = \phi \cdot \alpha^{-\frac{l_{ij}}{k}}$ , where $\phi$ is a normalizing constant for weights, $\alpha$ serves as the exponential base, $k$ is the retrieved neighbor count, and $l_{ij}$ denotes the rank of $h_{j}$ as a neighbor to $q_{i}$ .
99
+
100
+ To align with the model's evolving accuracy in neighbor retrieval during training, DOWN periodically decreases $\alpha$ every five epochs; the values for $\alpha$ in $\omega_{j}$ are $\alpha_{\mathrm{set}} = \{150, 10, 5, 2\}$ . The $\omega_{j}$ of each positive sample $h_{j}$ is used in Eqs. 3 and 4.
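+
+ A rough sketch of this retrieval-and-weighting step and of the momentum update (queue contents, embedding sizes and the normalization chosen for $\phi$ are illustrative assumptions, not the authors' code):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def retrieve_and_weight(q, queue, k=5, alpha=150.0):
+     """Return the indices of the top-k nearest queue entries for a query
+     embedding q and their rank-based weights omega_j = phi * alpha**(-rank/k)."""
+     q, queue = F.normalize(q, dim=-1), F.normalize(queue, dim=-1)
+     sims = queue @ q                                   # cosine similarity to every queue entry
+     topk = sims.topk(k).indices
+     ranks = torch.arange(1, k + 1, dtype=torch.float)  # rank 1 = nearest neighbor
+     weights = alpha ** (-ranks / k)
+     weights = weights / weights.sum()                  # phi chosen so the weights sum to 1 (assumption)
+     return topk, weights
+
+ @torch.no_grad()
+ def momentum_update(encoder, momentum_encoder, m=0.999):
+     """theta_k <- m * theta_k + (1 - m) * theta."""
+     for p, pk in zip(encoder.parameters(), momentum_encoder.parameters()):
+         pk.mul_(m).add_(p, alpha=1 - m)
+
+ queue = torch.randn(1024, 768)        # stored neighbor features h_l (toy sizes)
+ query = torch.randn(768)              # query feature q_i
+ indices, omegas = retrieve_and_weight(query, queue)
+ ```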
101
+
102
+ # 4.3 Training
103
+
104
+ # 4.3.1 Objective Function
105
+
106
+ Given a training batch $N_{train} \in D_{train}$ , where $Y_{c}$ is the set of coarse-grained labels of $N_{train}$ , DOWN trains the model using the loss:
107
+
108
+ $$
109
+ L _ {\text {t r a i n}} = L _ {\mathrm {c e}} + L _ {\mathrm {D O W N}}, \tag {1}
110
+ $$
111
+
112
+ $$
113
+ L _ {\text {D O W N}} = \frac {1}{\left| N _ {\text {t r a i n}} \right|} \sum_ {q _ {i} \in N _ {\text {t r a i n}}} L _ {1} ^ {i}. \tag {2}
114
+ $$
115
+
116
+ As shown in Eq. 3, DOWN uses a conventional contrastive objective function in the Euclidean space, while STAR-DOWN introduces a novel objective function in Eq. 4, leveraging CSS in the logarithmic space to guide sample distributions in
117
+
118
+ the Euclidean space; the temperature $\tau$ is a fixed constant in Eq. 3 and Eq. 4:
119
+
120
+ $$
121
+ L _ {1} ^ {i} = - \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \frac {\exp \left(q _ {i} ^ {\mathrm {T}} h _ {j} / \tau\right)}{\sum_ {h _ {k} \in Q} \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right)}. \tag {3}
122
+ $$
123
+
124
+ $$
125
+ \begin{array}{l} L _ {2} ^ {i} = - \gamma \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \frac {\exp (- d _ {K L} (q _ {i} , h _ {j}) / \tau)}{\sum_ {h _ {k} \in Q} \exp (- d _ {K L} (q _ {i} , h _ {k}) / \tau)} \\ - \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \frac {\exp \left(q _ {i} ^ {\mathrm {T}} h _ {j} / \tau\right)}{\sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i} , h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right)}. \tag {4} \\ \end{array}
126
+ $$
127
+
128
+ During training, STAR-DOWN optimizes the following objective function:
129
+
130
+ $$
131
+ L _ {\text {t r a i n}} = L _ {\mathrm {c e}} + L _ {\mathrm {S T A R}}, \tag {5}
132
+ $$
133
+
134
+ $$
135
+ L _ {\text {S T A R}} = \frac {1}{\left| N _ {\text {t r a i n}} \right|} \sum_ {q _ {i} \in N _ {\text {t r a i n}}} L _ {2} ^ {i}. \tag {6}
136
+ $$
137
+
138
+ As shown in Eq. 4, the term $d_{KL}(q_i,h_k)$ in $L_2^i$ represents the bidirectional KL divergence in a logarithmic space between the query sample embedding $q_{i}$ and the data queue sample embedding $h_k$ (detailed in Appendix A.1.2). $B$ is a trainable scalar representing the exponential base.
139
+
140
+ The first term in $L_2^i$ minimizes the KL divergence between query samples and positive samples (the retrieved top-k nearest neighbors $N_i$ in Section 4.2) while increasing it for negative samples (the samples in data queue $Q$ apart from the positive samples) in the logarithmic space, with $\gamma$ as a balancing hyperparameter. The second term in $L_2^i$ uses CSS in the logarithmic space, denoted by $B^{d_{KL}(q_i,h_k)}$ , to guide query sample distribution in the Euclidean space. $q_i^{\mathrm{T}}h_k$ quantifies the cosine similarity between normalized $q_i$ and $h_k$ , equivalent to the negative Euclidean distance (detailed in Appendix A.1.4). The value of the trainable scalar $B$ is updated during loss backpropagation, so $B^{d_{KL}(q_i,h_k)}$ is fully trainable and can integrate with contrastive learning methods, making the STAR method generic.
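+
+ The sketch below spells out $L_2^i$ for a single query under simplifying assumptions (the bidirectional KL divergence is computed on softmax-normalized embeddings, whereas the paper's exact construction is in its Appendix A.1.2; the neighbor indices, weights and temperature are placeholders):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def bidirectional_kl(q, h):
+     """Symmetric KL divergence between the softmax of q and of each row of h."""
+     log_p, log_r = F.log_softmax(q, dim=-1), F.log_softmax(h, dim=-1)
+     p, r = log_p.exp(), log_r.exp()
+     return (p * (log_p - log_r)).sum(-1) + (r * (log_r - log_p)).sum(-1)
+
+ def star_term(q, queue, pos_idx, omega, B, tau=0.07, gamma=1.0):
+     """Sketch of L_2^i: a KL-space contrastive term plus a Euclidean-space term
+     whose denominator re-weights every sample by B ** d_KL(q, h_k)."""
+     d = bidirectional_kl(q, queue)                  # (|Q|,) semantic differences
+     sim = queue @ q / tau                           # (|Q|,) scaled similarities (embeddings assumed normalized)
+     kl_logits = -d / tau
+     kl_term = -(omega * (kl_logits[pos_idx] - torch.logsumexp(kl_logits, dim=0))).sum()
+     log_denom = torch.logsumexp(d * torch.log(B) + sim, dim=0)   # log sum_k B^{d_k} exp(sim_k)
+     euc_term = -(omega * (sim[pos_idx] - log_denom)).sum()
+     return gamma * kl_term + euc_term
+
+ queue = F.normalize(torch.randn(1024, 768), dim=-1)   # momentum-encoder features h_k
+ q = F.normalize(torch.randn(768), dim=-1)             # query feature q_i
+ B = torch.nn.Parameter(torch.tensor(2.0))             # trainable exponential base
+ pos_idx = torch.tensor([3, 17, 42])                   # toy neighbor indices
+ omega = torch.tensor([0.5, 0.3, 0.2])                 # toy neighbor weights
+ loss = star_term(q, queue, pos_idx, omega, B)
+ ```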
141
+
142
+ # 4.3.2 Loss Analysis
143
+
144
+ The loss $L_2^i$ consists of two terms. The first is a contrastive loss that optimizes sample distribution in logarithmic space, ensuring that similar samples have a small KL divergence $d_{KL}(q_i, h_k)$ , while dissimilar samples exhibit a large $d_{KL}(q_i, h_k)$ . These semantic similarities are then used as weights in the second contrastive loss term $q_i^{\mathrm{T}} h_k$ , optimizing the sample distribution in Euclidean space.
145
+
146
+ Because KL divergences grow logarithmically, their scale increases slowly, making it difficult to differentiate semantic differences. In contrast, exponentiation scales rapidly. To address this, we apply exponentiation to amplify semantic distinctions, using a trainable scalar base $B$ and an exponent $d_{KL}(q_i,h_k)$ from the logarithmic space. This results in weights of $B^{d_{KL}(q_i,h_k)}$ for $q_i^{\mathrm{T}}h_k$.
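As a purely hypothetical numeric illustration of this amplification: two queue samples whose bidirectional KL divergences from the query differ by only 2 receive Euclidean-space weights that differ by a factor of $B^2$ (a factor of 100 when $B = 10$):

```python
# Hypothetical divergence values, illustrating how B^{d_KL} spreads out small KL differences.
B = 10.0
d_kl_similar, d_kl_dissimilar = 1.0, 3.0
w_similar, w_dissimilar = B ** d_kl_similar, B ** d_kl_dissimilar
print(w_similar, w_dissimilar, w_dissimilar / w_similar)   # 10.0 1000.0 100.0
```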
147
+
148
+ Since STAR-DOWN discovers fine-grained categories in the Euclidean space, we analyze the second term $L_{2-2}^{i}$ of the loss $L_{2}^{i}$ , which optimizes sample distributions in the Euclidean space:
149
+
150
+ $$
151
+ \begin{array}{l} L _ {2 - 2} ^ {i} = - \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \frac {\exp \left(q _ {i} ^ {\mathrm {T}} h _ {j} / \tau\right)}{\sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i} , h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right)} \\ = \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \left(\log \sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i}, h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right) \right. \\ - \left(q _ {i} ^ {\mathrm {T}} h _ {j} / \tau\right)). \tag {7} \\ \end{array}
152
+ $$
153
+
154
+ In the loss $L_{2-2}^{i}$ , $B^{d_{KL}(q_i, h_k)}$ uses CSS in the logarithmic space to guide sample distributions in the Euclidean space. A large $d_{KL}(q_i, h_k)$ (low semantic similarity) causes $q_i$ to distance itself from $h_k$ in the Euclidean space, reducing $q_i^{\mathrm{T}} h_k$ , while a small $d_{KL}(q_i, h_k)$ allows $q_i$ to remain relatively close to $h_k$ compared to negative samples. This results in the formation of compact fine-grained clusters, with each cluster representing a discovered category.
155
+
156
+ Unlike traditional contrastive loss, which multiplies $\exp \left(\frac{q_i^{\mathrm{T}}h_k}{\tau}\right)$ by 1, our STAR method incorporates logarithmic space semantic differences, $B^{d_{KL}(q_i,h_k)}$ , as weights for each sample pair. This is expressed as $B^{d_{KL}(q_i,h_k)}\cdot \exp \left(\frac{q_i^{\mathrm{T}}h_k}{\tau}\right)$ . As a result, distant samples are pushed further apart in Euclidean space, while closer samples remain near, facilitating the formation of more distinct boundaries. We also analyze the STAR method from the perspectives of gradient, clustering, and generalized EM. Detailed analyses are provided in Appendix A.2.
157
+
158
+ # 4.4 Inference
159
+
160
+ Previous methods (An et al., 2023a, 2024) cluster the sample embeddings that $F_{\theta}$ extracts from $D_{test}$, which is unsuitable for real-time tasks such as intent detection that require an immediate response and cannot wait for enough test samples to accumulate for clustering. We introduce
161
+
162
+ <table><tr><td>Dataset</td><td>|C|</td><td>|F|</td><td># Train</td><td># Test</td></tr><tr><td>CLINC</td><td>10</td><td>150</td><td>18000</td><td>1000</td></tr><tr><td>WOS</td><td>7</td><td>33</td><td>8362</td><td>2420</td></tr><tr><td>HWU64</td><td>18</td><td>64</td><td>8954</td><td>1031</td></tr></table>
163
+
164
+ Table 1: Statistics of datasets (An et al., 2023a). #: number of samples. $\left| \mathcal{C}\right|$ : number of coarse-grained categories. $\left| \mathcal{F}\right|$ : number of fine-grained categories.
165
+
166
+ an alternative, centroid inference, suitable for both real-time and other contexts. Using $F_{\theta}$ , we derive sample embeddings from $D_{train}$ and assign fine-grained pseudo-labels through clustering. For each fine-grained cluster, only the embeddings of samples from the predominant coarse-grained category (the category with the most samples in this fine-grained cluster) are averaged to form centroid representations. These approximated centroids are used to determine the fine-grained category of each test sample based on cosine similarity. A visual explanation is in Appendix A.1.5.
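A minimal NumPy sketch of this centroid inference is shown below; it assumes that training embeddings, their coarse-grained labels, fine-grained pseudo-labels from clustering, and normalized test embeddings are already available, and all names are illustrative rather than the actual implementation.

```python
import numpy as np

def centroid_inference(train_emb, coarse_labels, pseudo_labels, test_emb):
    """Average only the predominant-coarse-category samples of each fine-grained
    cluster into a centroid, then assign test samples by cosine similarity."""
    centroids, cluster_ids = [], []
    for c in np.unique(pseudo_labels):
        members = pseudo_labels == c
        majority_coarse = np.bincount(coarse_labels[members]).argmax()
        kept = members & (coarse_labels == majority_coarse)
        centroid = train_emb[kept].mean(axis=0)
        centroids.append(centroid / np.linalg.norm(centroid))
        cluster_ids.append(c)
    centroids = np.stack(centroids)
    sims = test_emb @ centroids.T            # cosine similarity (embeddings are normalized)
    return np.asarray(cluster_ids)[sims.argmax(axis=1)]

# Toy usage with random embeddings.
rng = np.random.default_rng(0)
train_emb = rng.normal(size=(100, 16)); train_emb /= np.linalg.norm(train_emb, axis=1, keepdims=True)
test_emb = rng.normal(size=(10, 16));  test_emb /= np.linalg.norm(test_emb, axis=1, keepdims=True)
coarse = rng.integers(0, 3, size=100)
pseudo = rng.integers(0, 8, size=100)
print(centroid_inference(train_emb, coarse, pseudo, test_emb))
```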
167
+
168
+ # 5 Experiments
169
+
170
+ # 5.1 Experimental Settings
171
+
172
+ # 5.1.1 Datasets
173
+
174
+ We conduct experiments on three benchmark datasets: CLINC (Larson et al., 2019), WOS (Kowsari et al., 2017), and HWU64 (Liu et al., 2021). CLINC is an intent detection dataset spanning multiple domains. WOS is used for paper abstract classification, and HWU64 is designed for assistant query classification. Dataset statistics are provided in Table 1.
175
+
176
+ # 5.1.2 Baselines for Comparison
177
+
178
+ We compare our methods against the following baselines. Language models: BERT (Devlin et al., 2019b), BERT with coarse-grained fine-tuning, Llama2 (Touvron et al., 2023), Llama2 with coarse-grained fine-tuning, and GPT4 (Achiam et al., 2023). Self-training baselines: DeepCluster (DC) (Caron et al., 2018), DeepAlignedCluster (DAC) (Zhang et al., 2021), and PseudoPrototypicalNet (PPNet) (Boney and Ilin, 2017; Ji et al., 2020). Contrastive learning baselines: SimCSE (Gao et al., 2021), Ancor (Bukchin et al., 2021), Delete (Wu et al., 2020), Nearest-Neighbor Contrastive Learning (NNCL) (Dwibedi et al., 2021), Contrastive Learning with Nearest Neighbors (CLNN) (Zhang et al., 2022), Soft Neighbor Contrastive Learning (SNCL) (Chongjian et al., 2022), Weighted Self-Contrastive Learning
179
+
180
+ (WSCL) (An et al., 2022), Denoised Neighborhood Aggregation (DNA), and Dynamic Order Weighted Network (DOWN) (An et al., 2023a, 2024). We also explore variants incorporating the cross-entropy loss (+CE).
181
+
182
+ # 5.1.3 Evaluation Metrics
183
+
184
+ To evaluate the quality of the discovered fine-grained clusters, we use the Adjusted Rand Index (ARI) (Hubert and Arabie, 1985) and Normalized Mutual Information (NMI) (Lancichinetti et al., 2009). For assessing classification performance, we use clustering Accuracy (ACC) (Kuhn, 2010; An et al., 2023a). Detailed descriptions of these metrics are provided in Appendix A.5.
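For reference, the three metrics can be computed with scikit-learn and SciPy roughly as follows; clustering accuracy uses the Hungarian algorithm to find the best one-to-one mapping between predicted clusters and ground-truth categories. The helper name is ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """Hungarian-matched accuracy between cluster assignments and labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = max(y_true.max(), y_pred.max()) + 1
    contingency = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        contingency[p, t] += 1
    rows, cols = linear_sum_assignment(contingency, maximize=True)
    return contingency[rows, cols].sum() / len(y_true)

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]            # same partition, permuted cluster ids
print(clustering_accuracy(y_true, y_pred))            # 1.0
print(adjusted_rand_score(y_true, y_pred))             # 1.0
print(normalized_mutual_info_score(y_true, y_pred))    # 1.0
```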
185
+
186
+ # 5.1.4 Implementation Details
187
+
188
+ To ensure fair comparisons with baselines, we use the BERT-base-uncased model as the backbone for all STAR method variants. We adhere to the hyperparameters used by the integrated baselines to demonstrate the effectiveness of our STAR method. The learning rate for both pre-training and training is $5e^{-5}$ , using the AdamW optimizer with a 0.01 weight decay and 1.0 gradient clipping. The batch size for pre-training, training, and testing is 64. The temperature $\tau$ is set to 0.07. The exponential base $B$ in loss is set to 10. The number of neighbors $k$ is set to $\{120, 120, 250\}$ for the CLINC, HWU64, and WOS datasets, respectively. Epochs for pretraining and training are set to 100 and 20, respectively. The $\gamma$ values are $\{0.03, 0.05, 0.1\}$ for the CLINC, HWU64, and WOS datasets. The momentum coefficient $m$ is set to 0.99. Further details are provided in Appendix A.4.
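The hyperparameters above can be collected into a single configuration sketch (the values are those listed in this subsection; the dictionary layout and key names are only illustrative):

```python
# Illustrative summary of Section 5.1.4; key names are ours.
STAR_CONFIG = {
    "backbone": "bert-base-uncased",
    "optimizer": {"name": "AdamW", "lr": 5e-5, "weight_decay": 0.01, "grad_clip": 1.0},
    "batch_size": 64,
    "temperature_tau": 0.07,
    "exponential_base_B": 10,                     # Section 5.5 also reports a trainable B
    "momentum_m": 0.99,
    "epochs": {"pretrain": 100, "train": 20},
    "per_dataset": {
        "CLINC": {"k_neighbors": 120, "gamma": 0.03},
        "HWU64": {"k_neighbors": 120, "gamma": 0.05},
        "WOS":   {"k_neighbors": 250, "gamma": 0.10},
    },
}
```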
189
+
190
+ # 5.1.5 Research Questions
191
+
192
+ The following research questions (RQs) are investigated: 1. What is the impact of STAR method on FCDC tasks? 2. What are the effects of the proposed real-time centroid inference compared to traditional clustering inference? 3. How does each component of the STAR method affect performance? 4. How can we effectively and efficiently set the base for the exponential function in the STAR method?
193
+
194
+ # 5.2 Result Analysis (RQ1)
195
+
196
+ As shown in Table 2, STAR method variants outperform SOTA methods across all datasets and metrics, validating the effectiveness of the STAR method in FCDC tasks. Language models like BERT, Llama2 and GPT4 (Devlin et al., 2019b; Touvron et al.,
197
+
198
+ <table><tr><td rowspan="2">Methods</td><td colspan="3">HWU64</td><td colspan="3">CLINC</td><td colspan="3">WOS</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td><td>ACC</td><td>ARI</td><td>NMI</td><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td>BERT (Devlin et al., 2019b)</td><td>33.52</td><td>17.04</td><td>56.90</td><td>34.37</td><td>17.61</td><td>64.75</td><td>31.97</td><td>18.36</td><td>45.15</td></tr><tr><td>BERT + CE</td><td>37.89</td><td>33.68</td><td>74.63</td><td>43.85</td><td>32.37</td><td>78.58</td><td>38.29</td><td>36.94</td><td>64.72</td></tr><tr><td>Llama2 (Touvron et al., 2023)</td><td>19.27±1.21</td><td>5.21±0.46</td><td>44.34±0.85</td><td>20.77±2.61</td><td>5.83±1.52</td><td>49.7±3.68</td><td>9.85±1.14</td><td>1.26±0.75</td><td>18.27±2.28</td></tr><tr><td>Llama2 + CE</td><td>32.40±5.46</td><td>17.32±5.95</td><td>57.53±5.78</td><td>45.69±6.85</td><td>29.38±6.55</td><td>72.66±7.13</td><td>18.51±1.50</td><td>7.8±1.18</td><td>29.66±3.23</td></tr><tr><td>GPT4 (Achiam et al., 2023)</td><td>10.77±1.86</td><td>0.14±0.05</td><td>35.17±3.68</td><td>9.56±2.12</td><td>0.11±0.06</td><td>46.69±3.24</td><td>7.56±1.51</td><td>0.15±0.04</td><td>27.78±2.98</td></tr><tr><td>DC (Caron et al., 2018)</td><td>18.05</td><td>43.34</td><td>29.74</td><td>26.40</td><td>12.51</td><td>61.26</td><td>29.17</td><td>13.98</td><td>53.27</td></tr><tr><td>DAC (Zhang et al., 2021)</td><td>29.14</td><td>12.89</td><td>52.99</td><td>29.16</td><td>14.15</td><td>62.78</td><td>28.47</td><td>15.94</td><td>43.52</td></tr><tr><td>DC + CE</td><td>41.73</td><td>27.81</td><td>66.81</td><td>30.28</td><td>13.56</td><td>62.38</td><td>38.76</td><td>35.21</td><td>60.30</td></tr><tr><td>DAC + CE</td><td>42.19</td><td>28.15</td><td>66.50</td><td>42.09</td><td>28.09</td><td>72.78</td><td>39.42</td><td>33.67</td><td>61.60</td></tr><tr><td>PPNet (Ji et al., 2020)</td><td>58.36±2.51</td><td>47.63±1.96</td><td>79.75±1.02</td><td>70.15±1.86</td><td>59.31±0.96</td><td>85.08±0.81</td><td>62.59±1.41</td><td>50.81±1.21</td><td>72.19±0.68</td></tr><tr><td>STAR-PPNet (ours)</td><td>63.19±2.38</td><td>52.21±1.33</td><td>81.66±1.21</td><td>73.21±1.97</td><td>61.87±0.79</td><td>86.16±0.47</td><td>66.15±1.33</td><td>53.61±1.24</td><td>73.82±0.74</td></tr><tr><td>Delete (Wu et al., 2020)</td><td>21.30</td><td>6.52</td><td>44.13</td><td>47.11</td><td>31.28</td><td>73.39</td><td>24.50</td><td>11.68</td><td>35.47</td></tr><tr><td>SimCSE (Gao et al., 2021)</td><td>24.48</td><td>8.42</td><td>46.94</td><td>40.22</td><td>23.57</td><td>69.02</td><td>25.87</td><td>13.03</td><td>38.53</td></tr><tr><td>Ancor + CE</td><td>32.90</td><td>30.71</td><td>74.73</td><td>44.44</td><td>31.50</td><td>74.67</td><td>39.34</td><td>26.14</td><td>54.35</td></tr><tr><td>NNCL (Dwibedi et al., 2021)</td><td>32.98</td><td>30.02</td><td>73.24</td><td>17.42</td><td>13.93</td><td>67.56</td><td>29.64</td><td>28.51</td><td>61.37</td></tr><tr><td>SimCSE + CE</td><td>34.04</td><td>31.81</td><td>74.86</td><td>52.53</td><td>37.03</td><td>77.39</td><td>41.28</td><td>34.47</td><td>61.62</td></tr><tr><td>Delete + CE</td><td>35.13</td><td>31.84</td><td>74.88</td><td>47.87</td><td>33.79</td><td>76.25</td><td>41.53</td><td>33.78</td><td>61.01</td></tr><tr><td>CLNN (Zhang et al., 2022)</td><td>37.21</td><td>34.66</td><td>75.27</td><td>19.96</td><td>14.76</td><td>68.30</td><td>29.48</td><td>28.42</td><td>60.99</td></tr><tr><td>Ancor (Bukchin et al., 
2021)</td><td>37.34</td><td>34.75</td><td>74.99</td><td>45.60</td><td>33.11</td><td>75.23</td><td>41.20</td><td>37.00</td><td>65.42</td></tr><tr><td>SNCL (Chongjian et al., 2022)</td><td>42.32</td><td>38.17</td><td>76.39</td><td>55.01</td><td>45.64</td><td>82.93</td><td>36.27</td><td>33.62</td><td>62.35</td></tr><tr><td>WSCL (An et al., 2022)</td><td>59.52</td><td>49.34</td><td>79.31</td><td>74.02</td><td>62.98</td><td>88.37</td><td>65.27</td><td>51.78</td><td>72.46</td></tr><tr><td>DNA (An et al., 2023a)</td><td>70.81</td><td>59.66</td><td>83.31</td><td>87.66</td><td>81.82</td><td>94.69</td><td>74.57</td><td>63.30</td><td>76.86</td></tr><tr><td>STAR-DNA (ours)</td><td>75.79±0.93</td><td>65.27±1.12</td><td>85.34±0.36</td><td>89.25±0.17</td><td>83.47±0.27</td><td>95.11±0.05</td><td>77.19±0.81</td><td>64.97±0.75</td><td>77.91±0.76</td></tr><tr><td>DOWN (An et al., 2024)</td><td>78.92</td><td>68.17</td><td>86.22</td><td>91.79</td><td>86.70</td><td>96.05</td><td>80.00</td><td>67.09</td><td>78.87</td></tr><tr><td>STAR-DOWN (ours)</td><td>80.31±0.26</td><td>70.22±0.59</td><td>87.28±0.31</td><td>92.45±0.38</td><td>87.05±0.17</td><td>96.20±0.07</td><td>81.98±0.67</td><td>69.27±0.60</td><td>79.99±0.40</td></tr></table>
199
+
200
+ Table 2: The average performance $(\%)$ in terms of Accuracy (ACC), Adjusted Rand Index (ARI), and Normalized Mutual Information (NMI) on three datasets for the FCDC language task. To ensure fair comparisons with previous works (An et al., 2022, 2023a, 2024) and demonstrate the effectiveness of STAR, we use the same clustering inference mechanism and also average the results over three runs with identical common hyperparameters. Some baseline results are cited from the aforementioned previous works, where standard deviations are not originally provided.
201
+
202
+ <table><tr><td rowspan="2">Methods</td><td colspan="3">HWU64</td><td colspan="3">CLINC</td><td colspan="3">WOS</td></tr><tr><td>ACC</td><td>ARI</td><td>NMI</td><td>ACC</td><td>ARI</td><td>NMI</td><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td>STAR-DOWN (clustering)</td><td>80.31±0.26</td><td>70.22±0.59</td><td>87.28±0.31</td><td>92.45±0.38</td><td>87.05±0.17</td><td>96.20±0.07</td><td>81.98±0.67</td><td>69.27±0.60</td><td>79.99±0.40</td></tr><tr><td>STAR-DOWN (centroid)</td><td>79.44±0.51</td><td>69.13±0.75</td><td>86.97±0.40</td><td>92.60±0.45</td><td>87.16±0.53</td><td>96.21±0.09</td><td>81.89±0.53</td><td>69.05±0.39</td><td>79.78±0.32</td></tr></table>
203
+
204
+ Table 3: Comparison of clustering and centroid inference mechanisms. "Clustering" clusters test set sample embeddings to determine each sample's fine-grained category, while "Centroid" infers the category by comparing each test sample's cosine similarity to fine-grained centroids.
205
+
206
+ 2023; Achiam et al., 2023) (GPT4 prompt in Appendix A.6) perform poorly on the FCDC task due to the lack of fine-grained supervisory information. Self-training methods like DC, DAC, and PPNet (Caron et al., 2018; Zhang et al., 2021; Ji et al., 2020) also struggle because they rely on noisy fine-grained pseudo-labels and overlook comprehensive semantic similarities (CSS). Contrastive learning methods such as SNCL (Chongjian et al., 2022) and WSCL (An et al., 2022) perform better by leveraging positive pairs. DNA (An et al., 2023a) and DOWN (An et al., 2024) further enhance feature quality by filtering false positives and weighting them by rank. However, these methods still do not use CSS for sample distributions. Integrating the STAR method with existing baselines enhances performance across all datasets, consistently improving sample distributions in the Euclidean space.
207
+
208
+ The superior performance of STAR is attributed to three factors: First, bidirectional KL divergence
209
+
210
+ measures CSS, pushing negative samples further away and relatively bringing positive samples closer based on CSS magnitude, making fine-grained clusters easier to distinguish. Second, the base $B$ of the exponential in Eq. 4 is a trainable scalar, balancing CSS magnitude and semantic structure. Third, STAR variants iteratively bootstrap model performance in neighborhood retrieval and representation learning through a generalized EM process (detailed in Appendix A.2.3).
211
+
212
+ # 5.3 Inference Mechanism Comparison (RQ2)
213
+
214
+ Previous methods (Chongjian et al., 2022; An et al., 2023a, 2024) perform a nearest neighbor search over the examples of the found fine-grained clusters for fine-grained category prediction (we refer to this technique as cluster inference). We speed up this process, making it better suited to real-time tasks, by developing a centroid inference mechanism (see Section 4.4). Results in Table 3 demonstrate
215
+
216
+ <table><tr><td>Methods</td><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td>ours</td><td>80.31±0.26</td><td>70.22±0.59</td><td>87.28±0.31</td></tr><tr><td>w/o CE</td><td>78.61±0.44</td><td>67.32±0.86</td><td>85.62±0.36</td></tr><tr><td>w/o KL loss</td><td>78.97±0.32</td><td>68.03±0.36</td><td>85.81±0.16</td></tr><tr><td>w/o KL weight</td><td>79.26±0.42</td><td>68.86±0.37</td><td>86.21±0.07</td></tr><tr><td>w/o KL weight and loss</td><td>78.96±0.15</td><td>68.21±0.22</td><td>86.32±0.10</td></tr></table>
217
+
218
+ that centroid inference is competitive with cluster inference. When the results of the former are lower, this is due to two factors: clustering inference leverages inter-relations among test set samples for richer features, while centroid inference depends on centroids derived from noisy samples with fine-grained pseudo-labels. Despite these issues, centroid inference remains a viable option for real-time applications, balancing immediate analytical needs with slight performance trade-offs.
219
+
220
+ # 5.4 Ablation Study (RQ1 & RQ3)
221
+
222
+ We examine the impact of various components of the STAR method in STAR-DOWN, as detailed in Table 4. Our results yield the following insights. (1) Excluding coarse-grained supervision information during training (w/o CE) reduces model performance, as this information is crucial for effective representation learning. (2) Omitting the first loss term (w/o KL loss) from Eq. 4 diminishes performance. The KL loss term aligns the KL divergence between data samples and the query with their semantic similarities. Without it, $B^{d_{KL}(q_i,h_k)}$ fails to guide the query sample distribution based on semantic similarities in Eq. 4. (3) Removing the KL weight $B^{d_{KL}(q_i,h_k)}$ from Eq. 4 (w/o KL weight) reduces effectiveness. The loss no longer utilizes fine-grained semantic similarities measured by $B^{d_{KL}(q_i,h_k)}$ in the logarithmic space to direct the query sample distribution in comparison to all samples. (4) Eliminating both the KL loss term and the KL weight in Eq. 4 leads to a performance decline. This omission prevents the optimization of the query sample towards positive samples in the logarithmic space and fails to leverage fine-grained semantic similarities in the logarithmic space to influence the distribution of query samples relative to all samples in the Euclidean space.
223
+
224
+ # 5.5 Exponential Base Impact (RQ4)
225
+
226
+ In the STAR method's loss equation (Eq. 4), $B^{d_{KL}(q_i, h_k)}$ modulates the distribution of $q_i$ and $h_k$ in the Euclidean space based on their semantic
227
+
228
+ Table 4: Results $(\%)$ of the ablation study for STAR-DOWN on the HWU64 Dataset.
229
+
230
+ <table><tr><td>Base value</td><td>ACC</td><td>ARI</td><td>NMI</td></tr><tr><td>trainable B (ours)</td><td>80.31±0.26</td><td>70.22±0.59</td><td>87.28±0.31</td></tr><tr><td>e</td><td>79.96±0.12</td><td>68.89±0.55</td><td>86.66±0.10</td></tr><tr><td>10</td><td>80.22±0.27</td><td>69.61±0.65</td><td>87.08±0.30</td></tr><tr><td>16</td><td>80.73±0.32</td><td>70.14±0.58</td><td>87.25±0.36</td></tr><tr><td>66</td><td>80.57±0.38</td><td>70.20±0.52</td><td>87.07±0.15</td></tr></table>
231
+
232
+ Table 5: Averaged results $(\%)$ and their standard deviations over three runs of multiple STAR-DOWN methods with five different base values on the HWU64 dataset. To set base value conveniently, we set $B$ as a trainable scalar.
233
+
234
+ similarity in the logarithmic space, as quantified by the bidirectional KL divergence. The base $B$ is used to enhance semantic differences, improving the discriminability of fine-grained categories. We experimented with multiple constant values and a trainable configuration for $B$, with the resulting STAR-DOWN results presented in Table 5. The STAR-DOWN variants with various base values consistently outperform the DOWN method (Table 2), demonstrating the effectiveness and robustness of the STAR method regardless of the base value $B$. Notably, base values that are either too low (e.g., $e$) or too high (e.g., 66) disrupt the semantic representation by inadequately or excessively emphasizing semantic similarities in the logarithmic space. To set the base value conveniently, we set $B$ as a trainable scalar, achieving favorable outcomes as indicated in Table 5.
235
+
236
+ # 5.6 Inference of Category Semantics
237
+
238
+ Prior works (An et al., 2023a, 2024) only discovered fine-grained categories and assigned them numeric indices without elucidating the categories' semantics, thus constraining their broader application. We propose utilizing the commonsense reasoning capabilities of large language models (LLMs) to infer the semantics of these categories. Specifically, we employ a trained encoder, $F_{\theta}$, to extract embeddings from all train set samples and cluster these embeddings to assign fine-grained pseudo-labels to each train set sample. For each fine-grained category indicated by a specific pseudo-label, we aggregate all predicted samples from the training set and use an LLM to deduce the category semantics. Details on the LLM prompt are provided in Appendix A.7.
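A minimal sketch of this procedure is given below, assuming a generic `generate(prompt)` wrapper around whichever LLM is used; the prompt wording here is illustrative and not the one from Appendix A.7.

```python
from collections import defaultdict

def infer_category_semantics(texts, pseudo_labels, generate, samples_per_cluster=20):
    """For each fine-grained pseudo-label, ask an LLM to name the shared semantics."""
    clusters = defaultdict(list)
    for text, label in zip(texts, pseudo_labels):
        clusters[label].append(text)

    names = {}
    for label, members in sorted(clusters.items()):
        examples = "\n".join(f"- {t}" for t in members[:samples_per_cluster])
        prompt = (
            "The following utterances belong to one fine-grained category.\n"
            f"{examples}\n"
            "In a few words, what is this category about?"
        )
        names[label] = generate(prompt)        # generate() is an assumed LLM wrapper
    return names

# Toy usage with a stub LLM.
texts = ["wake me at 7 am", "set an alarm for noon", "what's the weather in Paris"]
pseudo = [0, 0, 1]
print(infer_category_semantics(texts, pseudo, generate=lambda p: "stub category name"))
```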
239
+
240
+ # 5.7 Error Analysis
241
+
242
+ As shown in Table 6, and considering that this is an unsupervised experiment, the majority of fine-grained
243
+
244
+ <table><tr><td>Category</td><td>ACC</td><td>Category</td><td>ACC</td><td>Category</td><td>ACC</td><td>Category</td><td>ACC</td></tr><tr><td>alarm_query</td><td>63.16</td><td>datetime_convert</td><td>100.0</td><td>general_explain</td><td>53.85</td><td>iot_hue_lightoff</td><td>89.47</td></tr><tr><td>alarm_remove</td><td>72.73</td><td>datetime_query</td><td>57.89</td><td>general_joke</td><td>91.67</td><td>iot_hue_lighton</td><td>66.67</td></tr><tr><td>alarm_set</td><td>84.21</td><td>email_addcontact</td><td>100.0</td><td>general_negate</td><td>100.0</td><td>iot_hue_lightup</td><td>35.72</td></tr><tr><td>audio_volume_down</td><td>87.5</td><td>email_query</td><td>73.68</td><td>general_praise</td><td>100.0</td><td>iot_wemo_off</td><td>88.89</td></tr><tr><td>audio_volume_mute</td><td>80.0</td><td>email_querycontact</td><td>79.95</td><td>general_quirky</td><td>42.11</td><td>iot_wemo_on</td><td>85.72</td></tr><tr><td>audio_volume_up</td><td>76.92</td><td>email_sendemail</td><td>63.16</td><td>general_repeat</td><td>73.68</td><td>lists_createoradd</td><td>94.74</td></tr><tr><td>calendar_query</td><td>63.16</td><td>general_affirm</td><td>100.0</td><td>iot_cleaning</td><td>100.0</td><td>lists_query</td><td>84.21</td></tr><tr><td>calendar_remove</td><td>84.21</td><td>general_commandstop</td><td>100.0</td><td>iot_coffee</td><td>100.0</td><td>lists_remove</td><td>94.74</td></tr><tr><td>calendar_set</td><td>84.21</td><td>general_confirm</td><td>89.47</td><td>iot_hue_lightchange</td><td>73.68</td><td>music_likeness</td><td>88.89</td></tr><tr><td>cooking_recipe</td><td>89.47</td><td>general_dontcare</td><td>100.0</td><td>iot_hue_lightdim</td><td>58.33</td><td>music_query</td><td>63.16</td></tr><tr><td>music_settings</td><td>100.0</td><td>qa_maths</td><td>92.86</td><td>transport_taxi</td><td>100.0</td><td>news_query</td><td>78.95</td></tr><tr><td>qa_stock</td><td>100.0</td><td>transport_ticket</td><td>89.47</td><td>recommendation_events</td><td>78.95</td><td>transport_traffic</td><td>94.74</td></tr><tr><td>play_audiobook</td><td>89.47</td><td>recommendation_locations</td><td>100.0</td><td>play_game</td><td>89.47</td><td>play_music</td><td>89.47</td></tr><tr><td>qa_currency</td><td>100.0</td><td>takeaway_order</td><td>78.95</td><td>qa_definition</td><td>89.47</td><td>qa_factoid</td><td>52.63</td></tr><tr><td>recommendation_movies</td><td>100.0</td><td>weather_query</td><td>89.47</td><td>transport_query</td><td>78.95</td><td>social_post</td><td>73.68</td></tr></table>
245
+
246
+ Table 6: The error analysis analyzing the discovered fine-grained categories from STAR-DOWN method on the HWU64 dataset. The numerical values represent the classification accuracy (ACC) for each fine-grained category.
247
+
248
+ category samples, such as play_audiobook and qa_currency, are classified with reasonable accuracy, demonstrating the qualitative effectiveness of our unsupervised method, STAR-DOWN. However, certain fine-grained categories, such as datetime_query, exhibit lower classification performance compared to others. A possible reason is that queries often contain descriptive text, which can distract from correctly classifying the text into the intended query category. For example, the query "tell me what time it is in Dallas, Texas" falls under the datetime_query category, but its descriptive nature may lead to misclassification into location-related categories.
249
+
250
+ Additionally, some fine-grained categories have very nuanced semantic differences, making them particularly challenging for fine-grained discovery tasks. Examples include general_quirky, iot_hue_lightdim, iot_hue_lightup, qa_factoid and so on. For instance, the iot_hue_lightup category refers to increasing light brightness, which must be carefully distinguished from simply turning the light on.
251
+
252
+ # 5.8 Visualization
253
+
254
+ We visualize the sample embeddings of STAR-DOWN in Figure 3. The results demonstrate that our method forms distinguishable clusters for fine-grained categories, proving STAR's effectiveness in separating dissimilar samples and clustering similar ones. Additionally, we visualize the generalized EM perspective of STAR-DOWN in Appendix A.1.6.
255
+
256
+ ![](images/68591bfeb3af32937c206f875495088d9065dd18a5b98352451c589c388ec9be.jpg)
257
+ Figure 3: The t-SNE visualization of sample embeddings from STAR-DOWN method on the HWU64 dataset, with different colors representing different coarse-grained categories. The distinct clusters represent the discovered fine-grained categories.
258
+
259
+ # 6 Conclusion
260
+
261
+ We propose the STAR method for fine-grained category discovery in natural language texts, which utilizes comprehensive semantic similarities in the logarithmic space to guide the distribution of textual samples, including conversational intents, scientific paper abstracts, and assistant queries, in the Euclidean space. STAR pushes query samples further away from negative samples and brings them closer to positive samples based on the magnitude of the comprehensive semantic similarities. This process forms compact clusters, each representing a discovered category. We theoretically analyze the effectiveness of the STAR method. Additionally, we introduce a centroid inference mechanism that addresses previous gaps in real-time evaluations. Experiments on three natural language benchmarks demonstrate that STAR achieves new state-of-the-art performance in fine-grained category discovery tasks for text classification.
262
+
263
+ # Limitations
264
+
265
+ Although the proposed STAR method, integrated with existing contrastive learning methods, achieves superior performance in fine-grained category discovery tasks, its variants require additional memory to store a data queue for neighbor retrieval and feature learning.
266
+
267
+ # Ethical Consideration
268
+
269
+ Our study introduces a novel method that leverages comprehensive semantic similarities to improve the distinction of fine-grained clusters in fine-grained category discovery tasks. This contribution has no direct negative social impacts.
270
+
271
+ # Acknowledgements
272
+
273
+ We would like to thank all our families and friends for their support throughout this work. This research was partially supported by the China Scholarship Council. All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors. Marie-Francine Moens is supported by the ERC Advanced Grant CALCULUS (788506).
274
+
275
+ # References
276
+
277
+ Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
278
+ Wenbin An, Feng Tian, Ping Chen, Siliang Tang, Qinghua Zheng, and Qianying Wang. 2022. Fine-grained category discovery under coarse-grained supervision with hierarchical weighted self-contrastive learning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1314–1323.
279
+ Wenbin An, Feng Tian, Wenkai Shi, Yan Chen, Qinghua Zheng, Qianying Wang, and Ping Chen. 2023a. Dna: Denoised neighborhood aggregation for fine-grained category discovery. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12292-12302.
280
+ Wenbin An, Feng Tian, Wenkai Shi, Haonan Lin, Yaqiang Wu, Mingxiang Cai, Luyan Wang, Hua Wen, Lei Yao, and Ping Chen. 2024. Down: Dynamic order weighted network for fine-grained category discovery. Knowledge-Based Systems, 293:111666.
281
+ Wenbin An, Feng Tian, Qinghua Zheng, Wei Ding, QianYing Wang, and Ping Chen. 2023b. Generalized category discovery with decoupled prototypical
282
+
283
+ network. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 12527-12535.
284
+ Rinu Boney and Alexander Ilin. 2017. Semi-supervised and active few-shot learning with prototypical networks. arXiv preprint arXiv:1711.10856.
285
+ Guy Bukchin, Eli Schwartz, Kate Saenko, Ori Shahar, Rogerio Feris, Raja Giryes, and Leonid Karlinsky. 2021. Fine-grained angular contrastive learning with coarse labels. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8730-8740.
286
+ Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. 2018. Deep clustering for unsupervised learning of visual features. In Proceedings of the European conference on computer vision (ECCV), pages 132-149.
287
+ Tianshui Chen, Liang Lin, Riquan Chen, Yang Wu, and Xiaonan Luo. 2018. Knowledge-embedded representation learning for fine-grained image recognition. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 627-634.
288
+ Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597-1607. PMLR.
289
+ Ze Chen, Wanting Ji, Linlin Ding, and Baoyan Song. 2023. Fine-grained document-level financial event argument extraction approach. Engineering Applications of Artificial Intelligence, 121:105943.
290
+ GE Chongjian, Jiangliu Wang, Zhan Tong, Shoufa Chen, Yibing Song, and Ping Luo. 2022. Soft neighbors are positive supporters in contrastive visual representation learning. In The Eleventh International Conference on Learning Representations.
291
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019a. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
292
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019b. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
293
+ Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, and Andrew Zisserman. 2021. With
294
+
295
+ a little help from my friends: Nearest-neighbor contrastive learning of visual representations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9588-9597.
296
+ Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894-6910.
297
+ Rayid Ghani and Andrew Fano. 2002. Building recommender systems using a knowledge base of product semantics. In Proceedings of the Workshop on Recommendation and Personalization in ECommerce at the 2nd International Conference on Adaptive Hypermedia and Adaptive Web based Systems, pages 27-29. CiteSeer.
298
+ Xiaoting Guo, Wei Yu, and Xiaodong Wang. 2021. An overview on fine-grained text sentiment analysis: Survey and challenges. In Journal of Physics: Conference Series, volume 1757, page 012038. IOP Publishing.
299
+ Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9729-9738.
300
+ Lawrence Hubert and Phipps Arabie. 1985. Comparing partitions. Journal of classification, 2:193-218.
301
+ Zilong Ji, Xiaolong Zou, Tiejun Huang, and Si Wu. 2020. Unsupervised few-shot feature learning via self-supervised training. Frontiers in computational neuroscience, 14:83.
302
+ Kamran Kowsari, Donald E Brown, Mojtaba Heidarysafa, Kiana Jafari Meimandi, Matthew S Gerber, and Laura E Barnes. 2017. HDLTex: Hierarchical deep learning for text classification. In 2017 16th IEEE international conference on machine learning and applications (ICMLA), pages 364-371. IEEE.
303
+ Harold W Kuhn. 2010. The hungarian method for the assignment problem. 50 Years of Integer Programming 1958-2008: From the Early Years to the State-of-the-Art, pages 29-47.
304
+ Andrea Lancichinetti, Santo Fortunato, and János Kertész. 2009. Detecting the overlapping and hierarchical community structure in complex networks. New journal of physics, 11(3):033015.
305
+ Stefan Larson, Anish Mahendran, Joseph J Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K Kummerfeld, Kevin Leach, Michael A Laurenzano, Lingjia Tang, et al. 2019. An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1311-1316.
306
+
307
+ Dan Li, Shuai Wang, Jie Zou, Chang Tian, Elisha Nieuwburg, Fengyuan Sun, and Evangelos Kanoulas. 2021. Paint4poem: A dataset for artistic visualization of classical chinese poems. arXiv preprint arXiv:2109.11682.
308
+ Junnan Li, Pan Zhou, Caiming Xiong, and Steven Hoi. 2020. Prototypical contrastive learning of unsupervised representations. In International Conference on Learning Representations.
309
+ Ruixue Lian, William A Sethares, and Junjie Hu. 2024. Learning label hierarchy with supervised contrastive learning. arXiv preprint arXiv:2402.00232.
310
+ Mohamed Lichouri, Khaled Lounnas, and Mohamed Zakaria Amziane. 2024. dzfinnlp at arafinnlp: Improving intent detection in financial conversational agents. arXiv preprint arXiv:2407.13565.
311
+ Xingkun Liu, Arash Eshghi, Pawel Swietojanski, and Verena Rieser. 2021. Benchmarking natural language understanding services for building conversational agents. In Increasing Naturalness and Flexibility in Spoken Dialogue Interaction: 10th International Workshop on Spoken Dialogue Systems, pages 165-183. Springer.
312
+ Ruotian Ma, Zhang Lin, Xuanting Chen, Xin Zhou, Junzhe Wang, Tao Gui, Qi Zhang, Xiang Gao, and Yun Wen Chen. 2023. Coarse-to-fine few-shot learning for named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2023, pages 4115-4129.
313
+ Dheeraj Mekala, Varun Gangal, and Jingbo Shang. 2021. Coarse2fine: Fine-grained text classification on coarsely-grained annotated data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 583-594.
314
+ Zhengxin Pan, Fangyu Wu, and Bailing Zhang. 2023. Fine-grained image-text matching by cross-modal hard aligning network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 19275-19284.
315
+ Viral Parekh, Karimulla Shaik, Soma Biswas, and Muthusamy Chelliah. 2021. Fine-grained visual attribute extraction from fashion wear. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3973-3977.
316
+ Wongi Park and Jongbin Ryu. 2024. Fine-grained self-supervised learning with jigsaw puzzles for medical image classification. Computers in Biology and Medicine, page 108460.
317
+ Chang Tian, Wenpeng Yin, Dan Li, and Marie-Francine Moens. 2024. Fighting against the repetitive training and sample dependency problem in few-shot named entity recognition. IEEE Access.
318
+ Chang Tian, Wenpeng Yin, and Marie-Francine Moens. 2022. Anti-overestimation dialogue policy learning for task-completion dialogue system. arXiv preprint arXiv:2207.11762.
319
+
320
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
321
+
322
+ Sagar Vaze, Andrea Vedaldi, and Andrew Zisserman. 2024. No representation rules them all in category discovery. Advances in Neural Information Processing Systems, 36.
323
+
324
+ Jan Vellmer, Peter Mandl, Tobias Bellmann, Maximilian Balluff, Manuel Weber, Alexander Döschl, and Max-Emanuel Keller. 2023. A machine learning approach to enterprise matchmaking using multilabel text classification based on semi-structured website content. In International Conference on Information Integration and Web Intelligence, pages 493–509. Springer.
325
+
326
+ Shijie Wang, Jianlong Chang, Zhihui Wang, Haojie Li, Wanli Ouyang, and Qi Tian. 2024a. Content-aware rectified activation for zero-shot fine-grained image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence.
327
+
328
+ Shijie Wang, Zhihui Wang, Haojie Li, Jianlong Chang, Wanli Ouyang, and Qi Tian. 2024b. Accurate fine-grained object recognition with structure-driven relation graph networks. International Journal of Computer Vision, 132(1):137-160.
329
+
330
+ Zhuofeng Wu, Sinong Wang, Jiatao Gu, Madian Khabsa, Fei Sun, and Hao Ma. 2020. Clear: Contrastive learning for sentence representation. arXiv preprint arXiv:2012.15466.
331
+
332
+ Hanlei Zhang, Hua Xu, Ting-En Lin, and Rui Lyu. 2021. Discovering new intents with deep aligned clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14365–14373.
333
+
334
+ Yuwei Zhang, Haode Zhang, Li-Ming Zhan, Xiao-Ming Wu, and Albert Lam. 2022. New intent discovery with pre-training and contrastive learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 256-269.
335
+
336
+ Zhun Zhong, Enrico Fini, Subhankar Roy, Zhiming Luo, Elisa Ricci, and Nicu Sebe. 2021. Neighborhood contrastive learning for novel class discovery. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10867-10875.
337
+
338
+ # A Appendix
339
+
340
+ # A.1 STAR-DOWN
341
+
342
+ # A.1.1 DOWN Pre-train and Neighbors Retrieval
343
+
344
+ DOWN (An et al., 2024) pre-trains the Encoder to incorporate coarse-grained and general knowledge. The pre-training loss $L_{\mathrm{pre}}$ is defined as the sum of
345
+
346
+ the cross-entropy loss $L_{\mathrm{ce}}$ and the masked language modeling loss $L_{\mathrm{mlm}}$ :
347
+
348
+ $$
349
+ L_{\text{pre}} = L_{\mathrm{ce}} + L_{\mathrm{mlm}}, \tag{8}
350
+ $$
351
+
352
+ given that $D_{train}$ is the train set and $Y_{coarse}$ represents the coarse-grained labels for the training set, let $N_{train} \in D_{train}$ be a training batch with coarse labels $Y_{c}$ .
353
+
354
+ The cross-entropy loss for a single training batch is calculated as the average loss over all samples in the batch. Here, $\hat{y}_i$ denotes the predicted probability distribution for the $i$ -th sample in the batch, and $y_i$ represents the ground truth probability distribution for the $i$ -th sample in the batch. The cross-entropy loss $L_{\mathrm{ce}}$ for the training batch $N_{train}$ is given by:
355
+
356
+ $$
357
+ L_{\mathrm{ce}} = -\frac{1}{\left|N_{\text{train}}\right|} \sum_{i=1}^{\left|N_{\text{train}}\right|} \sum_{c=1}^{C} y_{i,c} \log\left(\hat{y}_{i,c}\right), \tag{9}
358
+ $$
359
+
360
+ in this context: $C$ is the number of categories, $y_{i,c}$ is a binary indicator (0 or 1) indicating whether category label $c$ is the correct classification for sample $i$ , $\hat{y}_{i,c}$ is the predicted probability for category $c$ for sample $i$ .
361
+
362
+ The masked language modeling (mlm) loss $L_{\mathrm{mlm}}$ for the training batch $N_{train}$ is expressed as the average negative log-likelihood of the true token given the masked context for each token in each sample in the batch:
363
+
364
+ $$
365
+ L_{\mathrm{mlm}} = -\frac{1}{|N_{\text{train}}|} \sum_{i=1}^{|N_{\text{train}}|} \frac{1}{T} \sum_{j=1}^{T} \log p\left(\hat{x}_{i,j} \mid x_{i,j}^{\text{masked}}\right). \tag{10}
366
+ $$
367
+
368
+ In this equation: $N_{train}$ is a training batch. $T$ is the length of each training sequence. $\hat{x}_{i,j}$ is the predicted token. $x_{i,j}^{\text{masked}}$ is the masked token. $p(\hat{x}_{i,j} \mid x_{i,j}^{\text{masked}})$ is the predicted probability of the true token $\hat{x}_{i,j}$ given the masked input $x_{i,j}^{\text{masked}}$ .
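A rough PyTorch sketch of this pre-training objective is shown below. It assumes the coarse-grained classification logits and the MLM logits have already been produced by the encoder and its heads, and it averages the MLM loss over masked positions only (a common convention that differs slightly from the per-token average written in Eq. 10); all names are ours.

```python
import torch
import torch.nn.functional as F

def pretrain_loss(class_logits, coarse_labels, mlm_logits, mlm_labels):
    """Eq. 8: L_pre = L_ce + L_mlm.

    class_logits: (N, C)    coarse-grained classification logits
    coarse_labels:(N,)      coarse-grained labels
    mlm_logits:   (N, T, V) token-level logits over the vocabulary
    mlm_labels:   (N, T)    original ids at masked positions, -100 elsewhere
    """
    l_ce = F.cross_entropy(class_logits, coarse_labels)                 # Eq. 9
    l_mlm = F.cross_entropy(                                            # Eq. 10 (masked positions only)
        mlm_logits.reshape(-1, mlm_logits.size(-1)),
        mlm_labels.reshape(-1),
        ignore_index=-100,
    )
    return l_ce + l_mlm

# Toy usage with random logits.
N, C, T, V = 4, 7, 12, 30522
class_logits = torch.randn(N, C)
coarse_labels = torch.randint(0, C, (N,))
mlm_logits = torch.randn(N, T, V)
mlm_labels = torch.full((N, T), -100)
mlm_labels[:, :2] = torch.randint(0, V, (N, 2))   # pretend the first two tokens were masked
print(pretrain_loss(class_logits, coarse_labels, mlm_logits, mlm_labels))
```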
369
+
370
+ # A.1.2 KL Divergence
371
+
372
+ In this section, we calculate the bidirectional KL divergence between the query sample embedding $q_{i}$ and positive or negative sample embedding $h_k$ with the function $d_{KL}(q_i,h_k)$ .
373
+
374
+ We assume that sample embeddings follow Gaussian distributions. Specifically, we use projection networks $f_{\mu}$ and $f_{\Sigma}$ to produce Gaussian distribution parameters:
375
+
376
+ $$
377
+ \mu_ {i} = f _ {\mu} (q _ {i}), \quad \Sigma_ {i} = \operatorname {E L U} \left(f _ {\Sigma} (q _ {i})\right) + (1 + \epsilon),
378
+ $$
379
+
380
+ $$
381
+ \mu_ {k} = f _ {\mu} \left(h _ {k}\right), \quad \Sigma_ {k} = \operatorname {E L U} \left(f _ {\Sigma} \left(h _ {k}\right)\right) + (1 + \epsilon),
382
+ $$
383
+
384
+ where $\mu_i, \mu_k \in \mathbb{R}^l$ and $\Sigma_i, \Sigma_k \in \mathbb{R}^{l \times l}$ represent the mean and diagonal covariance of the Gaussian embeddings, respectively. The covariances have nonzero elements only along the diagonal. The functions $f_\mu$ and $f_\Sigma$ are implemented as ReLU followed by single-layer networks. ELU (exponential linear unit) ensures numerical stability, with $\epsilon \approx e^{-14}$ . Here, $l$ , the Gaussian embedding dimension, is 128.
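A small sketch of these projection heads, following the ELU + $(1 + \epsilon)$ construction above and assuming a 768-dimensional BERT-base encoder output; the class name and the exact layer composition are our reading of the description, not the released code.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianProjection(nn.Module):
    """Maps an encoder embedding to the (mu, diagonal Sigma) of a Gaussian."""

    def __init__(self, hidden_dim=768, gauss_dim=128, eps=math.exp(-14)):
        super().__init__()
        # "ReLU followed by a single-layer network" is our reading of f_mu and f_Sigma.
        self.f_mu = nn.Sequential(nn.ReLU(), nn.Linear(hidden_dim, gauss_dim))
        self.f_sigma = nn.Sequential(nn.ReLU(), nn.Linear(hidden_dim, gauss_dim))
        self.eps = eps

    def forward(self, h):
        mu = self.f_mu(h)
        # ELU(x) + (1 + eps) keeps every diagonal variance strictly positive.
        sigma = F.elu(self.f_sigma(h)) + (1.0 + self.eps)
        return mu, sigma            # sigma holds the diagonal of the covariance

proj = GaussianProjection()
mu, sigma = proj(torch.randn(4, 768))
print(mu.shape, sigma.shape, bool((sigma > 0).all()))
```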
385
+
386
+ Given the Gaussian distribution parameters $\mu_{i},\mu_{k}\in \mathbb{R}^{l}$ and $\Sigma_{i},\Sigma_{k}\in \mathbb{R}^{l\times l}$ , we define the corresponding Gaussian distributions $\mathcal{N}_i = \mathcal{N}(\mu_i,\Sigma_i)$ and $\mathcal{N}_k = \mathcal{N}(\mu_k,\Sigma_k)$ for the query sample embedding $q_{i}$ and the positive or negative sample embedding $h_k$ .
387
+
388
+ The bidirectional KL divergence between the query sample embedding $q_{i}$ and the positive or negative sample embedding $h_{k}$ is calculated using the function $d_{KL}(q_i, h_k)$ , which measures fine-grained semantic similarities:
389
+
390
+ $$
391
+ d _ {K L} \left(q _ {i}, h _ {k}\right) = \frac {1}{2} \left(D _ {\mathrm {K L}} \left[ \mathcal {N} _ {i} \| \mathcal {N} _ {k} \right] + D _ {\mathrm {K L}} \left[ \mathcal {N} _ {k} \| \mathcal {N} _ {i} \right]\right), \tag {11}
392
+ $$
393
+
394
+ where
395
+
396
+ $$
397
+ \begin{array}{l} D _ {\mathrm {K L}} \left[ \mathcal {N} _ {i} \| \mathcal {N} _ {k} \right] = D _ {\mathrm {K L}} \left[ \mathcal {N} \left(\mu_ {i}, \Sigma_ {i}\right) \| \mathcal {N} \left(\mu_ {k}, \Sigma_ {k}\right) \right] \\ = \frac {1}{2} \left(\operatorname {T r} \left(\Sigma_ {k} ^ {- 1} \Sigma_ {i}\right) + \left(\mu_ {k} - \mu_ {i}\right) ^ {T} \Sigma_ {k} ^ {- 1} \left(\mu_ {k} - \mu_ {i}\right) \right. \\ - l + \log \frac {\left| \Sigma_ {k} \right|}{\left| \Sigma_ {i} \right|}). \tag {12} \\ \end{array}
398
+ $$
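Because the covariances are diagonal, Eq. 12 reduces to elementwise operations. The following sketch computes the bidirectional divergence of Eq. 11 for diagonal Gaussians; variable names are ours, and `var` holds the diagonal entries of $\Sigma$.

```python
import torch

def kl_diag_gaussian(mu_p, var_p, mu_q, var_q):
    """KL[N(mu_p, diag(var_p)) || N(mu_q, diag(var_q))], i.e. Eq. 12 with diagonal covariances."""
    trace_term = (var_p / var_q).sum(dim=-1)
    maha_term = ((mu_q - mu_p) ** 2 / var_q).sum(dim=-1)
    log_det_term = torch.log(var_q).sum(dim=-1) - torch.log(var_p).sum(dim=-1)
    l = mu_p.size(-1)
    return 0.5 * (trace_term + maha_term - l + log_det_term)

def bidirectional_kl(mu_i, var_i, mu_k, var_k):
    """Symmetrized divergence d_KL(q_i, h_k) of Eq. 11."""
    return 0.5 * (kl_diag_gaussian(mu_i, var_i, mu_k, var_k)
                  + kl_diag_gaussian(mu_k, var_k, mu_i, var_i))

mu_i, var_i = torch.zeros(128), torch.ones(128)
mu_k, var_k = 0.1 * torch.ones(128), 2.0 * torch.ones(128)
print(bidirectional_kl(mu_i, var_i, mu_k, var_k))   # positive scalar
print(bidirectional_kl(mu_i, var_i, mu_i, var_i))   # 0 for identical distributions
```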
399
+
400
+ # A.1.3 Form Similarity
401
+
402
+ $$
403
+ F = G \frac {m _ {1} m _ {2}}{r ^ {2}}. \tag {13}
404
+ $$
405
+
406
+ In astronomy, gravitational force is crucial for determining the orbits of celestial bodies. According to the formula, the mass of the bodies significantly influences their orbital paths.
407
+
408
+ Inspired by astronomy, we use fine-grained comprehensive semantic similarities between the query sample and each available positive or negative sample to guide sample distributions in the embedding space, where the similarities are measured by bidirectional KL divergence, as shown in Eq. 14:
409
+
410
+ ![](images/a94ad34828b115c9638b68bc42f6cf64e15ca22932d132dce40db9e06a8494f4.jpg)
411
+ Figure 4: Cosine rule. $a$ , $b$ , and $c$ denote the lengths of the triangle's sides, and $\gamma$ represents the angle. The cosine rule is used in the Appendix A.1.4.
412
+
413
+ $$
414
+ \begin{array}{l} B ^ {d _ {K L} (q _ {i}, h _ {k})} = \left(e ^ {\log (B)}\right) ^ {d _ {K L} (q _ {i}, h _ {k})} \\ = e ^ {\log (B) \cdot d _ {K L} \left(q _ {i}, h _ {k}\right)} \\ = e ^ {\log (B) \cdot (\log M + \log m)} \tag {14} \\ = e ^ {\log (B) \cdot \log (M m)} \\ = e ^ {B ^ {\prime} \cdot \log (M m)}. \\ \end{array}
415
+ $$
416
+
417
+ Since bidirectional KL divergence is asymmetrical, it consists of two components: $\log M$ and $\log m$ .
418
+
419
+ In the STAR method, the exponentiated bidirectional KL divergence $B^{d_{KL}(q_i,h_k)}$ is a key component of the loss function (Eq. 4) that guides sample distributions. As shown in Eq. 14, the divergence decomposes into two components whose sum, $\log (Mm)$, is analogous to the mass product $m_1 m_2$ in the gravitational force formula.
420
+
421
+ # A.1.4 Cosine Similarity Conversion
422
+
423
+ In trigonometry, the cosine rule relates the lengths of the sides of a triangle to the cosine of one of its angles. In Figure 4, for a triangle with sides $a$, $b$, and $c$, with $\gamma$ being the angle opposite side $c$, the law of cosines is expressed as:
424
+
425
+ $$
426
+ c ^ {2} = a ^ {2} + b ^ {2} - 2 a b \cos \gamma \tag {15}
427
+ $$
428
+
429
+ In this context, we use $\cos \gamma$ to represent the cosine similarity between $q_{i}$ and $h_k$ , and $c$ to denote the Euclidean distance between the query sample $q_{i}$ and the positive or negative sample $h_k$ . Given that the sample embeddings are normalized to have a length of 1, we have the following relationship:
430
+
431
+ $$
432
+ c ^ {2} = 1 + 1 - 2 \cos \gamma ,
433
+ $$
434
+
435
+ thus,
436
+
437
+ $$
438
+ c ^ {2} \stackrel {c} {=} - \cos \gamma .
439
+ $$
440
+
441
+ Here, $q_{i}^{\mathrm{T}}h_{k}$ quantifies the cosine similarity between $q_{i}$ and $h_{k}$ , so
442
+
443
+ ![](images/8c160edb697986ef4a32b4660e14562f5e29c358ede9a590720e526bedebf73e.jpg)
444
+ Figure 5: The centroid inference mechanism. Triangles represent samples with the same fine-grained pseudo label; different colors denote various coarse-grained labels. Only samples with predominant coarse-grained labels, represented by blue triangles, are used to approximate fine-grained centroids; all others are excluded.
445
+
446
+ $$
447
+ c ^ {2} \stackrel {c} {=} - q _ {i} ^ {\mathrm {T}} h _ {k}.
448
+ $$
449
+
450
+ The cosine similarity and the negative Euclidean distance thus carry the same mathematical meaning: a smaller Euclidean distance between two samples corresponds to a larger cosine similarity.
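A quick numeric check of this relationship for unit-norm embeddings (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.normal(size=16); q /= np.linalg.norm(q)
h = rng.normal(size=16); h /= np.linalg.norm(h)

cos_sim = q @ h
sq_dist = np.sum((q - h) ** 2)
# For unit vectors, ||q - h||^2 = 2 - 2*cos(gamma): the squared Euclidean distance
# equals -cos(gamma) up to additive and multiplicative constants.
print(np.isclose(sq_dist, 2 - 2 * cos_sim))   # True
```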
451
+
452
+ # A.1.5 Centroid Inference
453
+
454
+ As shown in Figure 5, we introduce centroid inference, an alternative inference suitable for real-time and other contexts. Using $F_{\theta}$ , we derive sample embeddings from $D_{train}$ and assign fine-grained pseudo-labels through clustering. For each fine-grained cluster, only the embeddings of samples from the predominant coarse-grained category are averaged to approximate centroid representations. These approximated centroids are then used to determine the fine-grained category of each test sample based on cosine similarity.
455
+
456
+ # A.1.6 Visualization
457
+
458
+ To verify the generalized EM perspective of STAR-DOWN, we visualize the true neighbor rate and model performance curves using three metrics during the training process, as shown in Figures 6a and 6b. The results indicate that STAR-DOWN progressively retrieves more accurate neighbors and improves model performance across the three metrics throughout the training. This improvement
459
+
460
+ is due to the positive feedback loop where more accurate neighbor retrieval enhances feature learning, and enhanced feature learning, in turn, leads to more accurate neighbor retrieval. Thus, STAR-DOWN effectively estimates true neighbors in the E-step and obtains better representations in the M-step, with two steps alternately performed to gradually enhance each other.
461
+
462
+ # A.1.7 Algorithm Procedure
463
+
464
+ The STAR-DOWN procedure is outlined in Algorithm 1. In Step 3, STAR-DOWN introduces a novel contrastive loss, as specified in Eq. 5. To ensure fair validation of its effectiveness, STAR-DOWN adheres to Steps 1 and 2 of the DOWN procedure.
465
+
466
+ # A.2 STAR-DOWN Analyses
467
+
468
+ The STAR method upgrades the original contrastive loss $L_{1}^{i}$ in $L_{\mathrm{train}}$ to the new loss $L_{2}^{i}$ as shown in Eq. 16. The first term in $L_{2}^{i}$ optimizes the method in the logarithmic space, increasing the KL divergence magnitude in accordance with semantic differences. The second term, $L_{2 - 2}^{i}$, optimizes sample distributions in the Euclidean space. Since fine-grained category discovery occurs in the Euclidean space, our analyses focus on $L_{2 - 2}^{i}$, which optimizes the distributions of query samples within the Euclidean space:
469
+
470
+ $$
471
+ \begin{array}{l} L _ {2} ^ {i} = - \gamma \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \frac {\exp (- d _ {K L} (q _ {i} , h _ {j}) / \tau)}{\sum_ {h _ {k} \in Q} \exp (- d _ {K L} (q _ {i} , h _ {k}) / \tau)} \\ - \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \frac {\exp \left(q _ {i} ^ {\mathrm {T}} h _ {j} / \tau\right)}{\sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i} , h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right)}, \tag {16} \\ \end{array}
472
+ $$
473
+
474
+ $$
475
+ \begin{array}{l} L _ {2 - 2} ^ {i} = - \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \frac {\exp \left(q _ {i} ^ {\mathrm {T}} h _ {j} / \tau\right)}{\sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i} , h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right)} \\ = \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \left(\log \sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i}, h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right) \right. \\ - \left(q _ {i} ^ {\mathrm {T}} h _ {j} / \tau\right)). \tag {17} \\ \end{array}
476
+ $$
477
+
478
+ # A.2.1 Gradient Analysis
479
+
480
+ From the gradient perspective, the gradient optimizes the method to project samples into the Euclidean space, where samples form into fine-grained clusters and each cluster represents a discovered fine-grained category. Each query sample $q_{i}$ has multiple positive samples $h_j \in N_i$ . To simplify the understanding of the STAR method's
481
+
482
+ ![](images/79607af8960ca44235fd95076ed4839b6e25dfd7935ed662ba52c137f34a307b.jpg)
483
+ (a) True Neighbor Rate (TNR) during training.
484
+
485
+ ![](images/47a289f6e828cf875ac632f50e56f6813e7c59891cb08403c570b78727fccf9b.jpg)
486
+ (b) Model performance during training.
487
+ Figure 6: The validation of generalized EM perspective on the HWU64 dataset.
488
+
489
+ gradient, we focus on the loss component related to a specific $h_j$ from the loss $L_{2-2}^i$ . This gives us:
490
+
491
+ $$
492
+ \begin{array}{l} L _ {2 - 2} ^ {i j} = - \omega_ {j} \cdot \log \frac {\exp \left(q _ {i} ^ {\mathrm {T}} h _ {j} / \tau\right)}{\sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i} , h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right)} \\ = \omega_ {j} \cdot \left(\log \sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i}, h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right) \right. \\ - \left(q _ {i} ^ {\mathrm {T}} h _ {j} / \tau\right). \tag {18} \\ \end{array}
493
+ $$
494
+
495
+ For the query sample $q_{i}$ , we investigate the gradient related to the negative sample $h_n \in Q$ . With the loss $L_{2-2}^{ij}$ , the gradient becomes:
496
+
497
+ $$
498
+ \frac {\partial L _ {2 - 2} ^ {i j}}{\partial \left[ q _ {i} ^ {\mathrm {T}} h _ {n} \right]} = \frac {\omega_ {j}}{\tau} \cdot \frac {B ^ {d _ {K L} \left(q _ {i} , h _ {n}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {n} / \tau\right)}{\sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i} , h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right)}. \tag {19}
499
+ $$
500
+
501
+ The larger the semantic difference captured by $B^{d_{KL}(q_i, h_n)}$, the larger the gradient with respect to $q_i^{\mathrm{T}} h_n$, and the more $q_i^{\mathrm{T}} h_n$ will decrease, pushing $q_i$ away from $h_n$.
502
+
503
+ For the gradient relevant to the positive sample $h_j \in N_i$ , with the upgraded loss $L_{2-2}^{ij}$ :
504
+
505
+ $$
506
+ \frac {\partial L _ {2 - 2} ^ {i j}}{\partial \left[ q _ {i} ^ {\mathrm {T}} h _ {j} \right]} = - \frac {\omega_ {j}}{\tau} \cdot \left(1 - \frac {B ^ {d _ {K L} \left(q _ {i} , h _ {j}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {j} / \tau\right)}{\sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i} , h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right)}\right). \tag {20}
507
+ $$
508
+
509
+ A smaller semantic difference in $B^{d_{KL}(q_i, h_j)}$ results in a larger gradient magnitude with respect to $q_i^{\mathrm{T}} h_j$ , thereby increasing $q_i^{\mathrm{T}} h_j$ and bringing $q_i$ closer to $h_j$ .
510
+
511
+ Overall, the gradient optimizes sample distributions in the Euclidean space by leveraging comprehensive semantic similarities in the logarithmic space. Large semantic differences (low semantic similarities) between the query sample and an available sample push the query sample further away in the Euclidean space, while small semantic differences (high semantic similarities) bring the query
512
+
513
+ sample closer. Consequently, samples form distinguishable fine-grained clusters in the Euclidean space, with each cluster representing a discovered category.
514
+
515
+ # A.2.2 Clustering Perspective Interpretation
516
+
517
+ From the clustering perspective, the loss $L_{2 - 2}^{i}$ can be written as:
518
+
519
+ $$
520
+ \begin{array}{l} L _ {2 - 2} ^ {i} = - \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \frac {\exp \left(q _ {i} ^ {\mathrm {T}} h _ {j} / \tau\right)}{\sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i} , h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right)} \\ = \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i}, h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right) \\ - \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \left(q _ {i} ^ {\mathrm {T}} h _ {j} / \tau\right) \\ = \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i}, h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right) \\ - \frac {1}{\tau} q _ {i} ^ {\mathrm {T}} \left(\sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot h _ {j}\right) \\ = \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i}, h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right) \\ - q _ {i} ^ {\mathrm {T}} c _ {i} / \tau \\ = \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i}, h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right) \\ + \frac {1}{2 \tau} \left\{\left(q _ {i} - c _ {i}\right) ^ {2} - \left\| q _ {i} \right\| ^ {2} - \left\| c _ {i} \right\| ^ {2} \right\} \\ \stackrel {c} {=} \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \sum_ {h _ {k} \in Q} B ^ {d _ {K L} (q _ {i}, h _ {k})} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right) \\ + \frac {1}{2 \tau} \left(q _ {i} - c _ {i}\right) ^ {2}, \tag {21} \\ \end{array}
521
+ $$
522
+
523
+ where $c_{i} = \sum_{h_{j}\in N_{i}}\omega_{j}\cdot h_{j}$ is the weighted average of the embeddings of query $q_{i}$'s neighbors, and $\stackrel{c}{=}$ indicates equality up to a multiplicative and/or additive constant. $\| q_i\| ^2 = 1$ because of normalization, and $\| c_{i}\|^{2}$ is a constant since the neighbor embeddings $h_j$ come from the dynamic queue without gradients.
524
+
525
+ In Eq. 21, the loss term $L_{2-2}^{i}$ indicates that, from a clustering perspective, query samples cluster around the centroids of their weighted neighbors. The query samples are distributed in the Euclidean space according to the comprehensive semantic similarities in the logarithmic space between the query sample and each available positive or negative sample, as measured by $B^{d_{KL}(q_i,h_k)}$ , which effectively pushes dissimilar samples apart. Samples thus form distinguishable clusters, each representing a discovered fine-grained category.
528
+
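+ For reference, the rewriting step in Eq. 21 that turns the inner product $q_i^{\mathrm{T}} c_i$ into a squared distance uses the standard identity
+
+ $$
+ - q _ {i} ^ {\mathrm {T}} c _ {i} = \frac {1}{2} \left(\left\| q _ {i} - c _ {i} \right\| ^ {2} - \left\| q _ {i} \right\| ^ {2} - \left\| c _ {i} \right\| ^ {2}\right),
+ $$
+
+ so that, after dividing by $\tau$ , the terms $\| q_i\| ^2 = 1$ and $\| c_i\| ^2$ are constants and only $\frac{1}{2\tau}\left\| q_i - c_i\right\| ^2$ survives up to an additive constant.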
529
+ # A.2.3 Generalized EM Perspective Interpretation
530
+
531
+ If we treat the centers of the weighted neighbors $C = \{c_i\}_{i=1}^N$ as hidden variables, we can interpret our model from a generalized Expectation Maximization (EM) perspective, following (An et al., 2023a).
532
+
533
+ At the E-step, we estimate the hidden variables by retrieving neighbors and weighting the neighbors' embeddings.
534
+
535
+ $$
536
+ \left\{c _ {i} \mid \theta , q _ {i}, Q \right\} _ {i = 1} ^ {N} = \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot h _ {j}, \tag {22}
537
+ $$
538
+
539
+ where $\theta$ represents the parameters of the Encoder, $Q$ is the data queue, and $N_{i}$ is the positive sample set of the query sample $q_{i}$ .
540
+
541
+ At the M-step, we optimize the Encoder parameters $\theta$ :
542
+
543
+ $$
544
+ \begin{array}{l} \arg \min _ {\theta} \sum_ {q _ {i} \in N _ {t r a i n}} \left(\frac {1}{2 \tau} (q _ {i} - c _ {i}) ^ {2} \right. \\ + \sum_ {h _ {j} \in N _ {i}} \omega_ {j} \cdot \log \sum_ {h _ {k} \in Q} B ^ {d _ {K L} \left(q _ {i}, h _ {k}\right)} \cdot \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right)), \tag {23} \\ \end{array}
545
+ $$
546
+
547
+ where $N_{train}$ is the training batch, and $h_k$ is a positive or negative sample from the data queue $Q$ .
548
+
549
+ Accurate neighbors enhance representation learning, which in turn facilitates the retrieval of more accurate neighbors. This iterative process enables STAR-DOWN to progressively improve performance in both representation learning and neighborhood retrieval. Detailed empirical results are presented in Appendix A.1.6.
550
+
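+ As a concrete illustration of the E-step in Eq. 22, the sketch below retrieves the $K$ nearest neighbors of one query from the gradient-free queue, weights them with the rank-based scheme $\omega_j = \phi \cdot \alpha^{-l_{ij}/k}$ from Algorithm 1, and forms the weighted centroid $c_i$ ; the hyper-parameter values, the zero-based ranks, and the normalization of the weights are assumptions made for illustration.
+
+ ```python
+ import torch
+
+ def e_step_centroid(q, queue, k=120, phi=1.0, alpha=2.0):
+     """Estimate the hidden variable c_i of Eq. 22 for a single query (illustrative sketch)."""
+     sims = queue @ q                                   # cosine similarities (embeddings are normalized)
+     topk = torch.topk(sims, k)                         # indices of the K nearest neighbors N_i
+     ranks = torch.arange(k, dtype=q.dtype)             # neighbor rank l_ij (zero-based, an assumption)
+     omega = phi * alpha ** (-ranks / k)                # rank-based neighbor weights
+     omega = omega / omega.sum()                        # normalize so the weights sum to one (assumption)
+     c_i = (omega.unsqueeze(1) * queue[topk.indices]).sum(dim=0)
+     return c_i, topk.indices, omega
+ ```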
551
+ # A.3 STAR Method Variants
552
+
553
+ # A.3.1 STAR-DNA
554
+
555
+ DNA (An et al., 2023a) is a promising contrastive-learning-based method for the fine-grained category discovery task. It proceeds in three steps.
556
+
557
+ Step 1: pre-training:
558
+
559
+ $$
560
+ L _ {\text {p r e}} = L _ {\text {c e}}. \tag {24}
561
+ $$
562
+
563
+ DNA pre-trains the Encoder $F_{\theta}$ with Eq. 24 to learn coarse-grained information.
564
+
565
+ Step 2: neighbors retrieval and refinement:
566
+
567
+ DNA retrieves positive samples from the data queue $Q$ for each query sample $q_{i}$ and applies three principles to eliminate potential false-positive neighbors: Label Constraint, Reciprocal Constraint, and Rank Statistic Constraint. The resulting positive set is $S_{i}$ .
568
+
569
+ Step 3: training:
570
+
571
+ DNA trains the model parameters with the following loss:
572
+
573
+ $$
574
+ L = - \frac {1}{| D |} \sum_ {q _ {i} \in D} \frac {1}{| S _ {i} |} \sum_ {h _ {j} \in S _ {i}} \log \frac {\exp \left(q _ {i} ^ {\mathrm {T}} h _ {j} / \tau\right)}{\sum_ {h _ {k} \in Q} \exp \left(q _ {i} ^ {\mathrm {T}} h _ {k} / \tau\right)}. \tag {25}
575
+ $$
576
+
577
+ $D$ denotes the training batch, $S_{i}$ represents the positive set for $q_{i}$ , and $\tau$ is the temperature parameter.
578
+
579
+ DNA iteratively performs steps 2 and 3 to enhance model performance. However, DNA does not utilize fine-grained semantic similarities to guide the distributions of query samples. To ensure a fair comparison, STAR-DNA upgrades the training loss in step 3 while following the same initial two steps as DNA.
580
+
581
+ STAR-DNA introduces a new loss function in step 3 to discover fine-grained semantic similarities:
582
+
583
+ $$
584
+ \begin{array}{l} L = - \gamma \frac {1}{| D |} \sum_ {q _ {i} \in D} \frac {1}{| S _ {i} |} \sum_ {h _ {j} \in S _ {i}} \log \frac {\exp \left(\frac {- d _ {K L} (q _ {i} , h _ {j})}{\tau}\right)}{\sum_ {h _ {k} \in Q} \exp \left(\frac {- d _ {K L} (q _ {i} , h _ {k})}{\tau}\right)} \\ - \frac {1}{| D |} \sum_ {q _ {i} \in D} \frac {1}{| S _ {i} |} \sum_ {h _ {j} \in S _ {i}} \log \frac {\exp \left(\frac {q _ {i} ^ {\mathrm {T}} h _ {j}}{\tau}\right)}{\sum_ {h _ {k} \in Q} B ^ {d _ {K L} (q _ {i} , h _ {k})} \cdot \exp \left(\frac {q _ {i} ^ {\mathrm {T}} h _ {k}}{\tau}\right)}. \tag {26} \\ \end{array}
585
+ $$
586
+
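+ A minimal single-query sketch of the upgraded STAR-DNA loss in Eq. 26, assuming one positive sample, normalized Euclidean-space embeddings, and pre-computed logarithmic-space distributions; the helper `kl_div_rows` and all constants are illustrative choices, not the authors' implementation.
+
+ ```python
+ import torch
+
+ def kl_div_rows(p, q, eps=1e-8):
+     """Row-wise KL(p || q) between probability distributions (simplifying assumption)."""
+     return (p * ((p + eps).log() - (q + eps).log())).sum(dim=-1)
+
+ def star_dna_loss(q_emb, pos_emb, queue_emb, q_prob, pos_prob, queue_prob,
+                   B=0.5, tau=0.07, gamma=1.0):
+     # Logarithmic-space term: contrast the negative KL distances over the queue
+     d_pos = kl_div_rows(q_prob, pos_prob)
+     d_all = kl_div_rows(q_prob.unsqueeze(0), queue_prob)                 # (M,)
+     log_term = -torch.log(torch.exp(-d_pos / tau) / torch.exp(-d_all / tau).sum())
+     # Euclidean-space term with the KL-modulated denominator
+     sim_pos = torch.exp(q_emb @ pos_emb / tau)
+     denom = ((B ** d_all) * torch.exp(queue_emb @ q_emb / tau)).sum()
+     euc_term = -torch.log(sim_pos / denom)
+     return gamma * log_term + euc_term
+ ```
+
+ Averaging this quantity over the positives in $S_i$ and the queries in $D$ recovers the batch-level form of Eq. 26.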
587
+ # A.3.2 STAR-PPNet
588
+
589
+ The Prototypical Network with pseudo-labels (Boney and Ilin, 2017; Ji et al., 2020) (PPNet) is a widely used approach for fine-grained unsupervised classification tasks. PPNet typically involves two steps: in step 1, it employs a clustering algorithm to assign pseudo fine-grained labels to each query sample $q_{i}$ in the training set $D_{train}$ . In step 2, it trains with these pseudo-labels using the loss in Eq. 27, clustering query samples that share the same fine-grained pseudo-label and thereby discovering fine-grained categories:
590
+
591
+ $$
592
+ L = - \frac {1}{\left| N _ {\text {t r a i n}} \right|} \sum_ {i = 1} ^ {\left| N _ {\text {t r a i n}} \right|} L _ {i}, \tag {27}
593
+ $$
594
+
595
+ $$
596
+ L _ {i} = \log \frac {\exp (- d (q _ {i} , \mathbf {p} _ {c}))}{\sum_ {c ^ {\prime} = 1} ^ {C} \exp (- d (q _ {i} , \mathbf {p} _ {c ^ {\prime}}))}. \tag {28}
597
+ $$
598
+
599
+ $N_{train}$ is the training batch, $C$ is the number of fine-grained pseudo-categories, $q_{i}$ is the query sample embedding from the training set $D_{train}$ , and $d(\cdot ,\cdot)$ is typically the Euclidean distance. $\mathbf{p}_c$ is the prototype embedding of category $c$ , computed as the mean of the embeddings of the support-set examples of that category; likewise, $\mathbf{p}_{c^{\prime}}$ is the prototype embedding of category $c^{\prime}$ , where $c^{\prime}$ ranges over the $C$ fine-grained categories:
600
+
601
+ $$
602
+ \mathbf {p} _ {c} = \frac {1}{N _ {c}} \sum_ {i = 1} ^ {N _ {c}} q _ {i} ^ {c}, \tag {29}
603
+ $$
604
+
605
+ where $N_{c}$ is the number of samples in category $c$ and $q_{i}^{c}$ is the embedding of a query sample carrying the pseudo fine-grained label of category $c$ .
606
+
607
+ STAR-PPNet incorporates the fine-grained semantic similarities between the query sample and the cluster centroids into the loss:
608
+
609
+ $$
610
+ L = - \frac {1}{\left| N _ {\text {t r a i n}} \right|} \sum_ {i = 1} ^ {\left| N _ {\text {t r a i n}} \right|} L ^ {i}, \tag {30}
611
+ $$
612
+
613
+ $$
614
+ \begin{array}{l} L ^ {i} = \log \frac {\exp (- d (q _ {i} , \mathbf {p} _ {c}))}{\sum_ {c ^ {\prime} = 1} ^ {C} B ^ {d _ {K L} \left(q _ {i} , \mathbf {p} _ {c ^ {\prime}}\right)} \cdot \exp (- d (q _ {i} , \mathbf {p} _ {c ^ {\prime}}))} \\ + \gamma \log \frac {\exp \left(- d _ {K L} \left(q _ {i} , \mathbf {p} _ {c}\right)\right)}{\sum_ {c ^ {\prime} = 1} ^ {C} \exp \left(- d _ {K L} \left(q _ {i} , \mathbf {p} _ {c ^ {\prime}}\right)\right)}. \tag {31} \\ \end{array}
615
+ $$
616
+
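+ Analogously, a single-sample sketch of the STAR-PPNet objective in Eqs. 29-31, where prototypes are the class means of the pseudo-labeled embeddings computed beforehand; the squared Euclidean distance, the probability projections used for the KL term, and the constants are illustrative assumptions.
+
+ ```python
+ import torch
+
+ def star_ppnet_loss(q_emb, q_prob, protos_emb, protos_prob, c, B=0.5, gamma=1.0, eps=1e-8):
+     """Per-sample contribution to Eq. 30/31; `c` is the pseudo fine-grained label of the query."""
+     d_euc = ((q_emb.unsqueeze(0) - protos_emb) ** 2).sum(dim=-1)                     # d(q_i, p_c') for all c'
+     d_kl = (q_prob * ((q_prob + eps).log() - (protos_prob + eps).log())).sum(dim=-1)
+     euc_term = torch.log(torch.exp(-d_euc[c]) / ((B ** d_kl) * torch.exp(-d_euc)).sum())
+     kl_term = torch.log(torch.exp(-d_kl[c]) / torch.exp(-d_kl).sum())
+     return -(euc_term + gamma * kl_term)   # Eq. 30 negates and averages this over the batch
+ ```
+
+ Here `protos_emb` and `protos_prob` stand for the prototype means of Eq. 29 in the Euclidean and logarithmic spaces, respectively.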
617
+ # A.4 Implementation Details
618
+
619
+ For comparison, we use the same BERT model, bert-base-uncased, for feature extraction as in the original baseline papers. We employ GPT4 (version gpt-4-0125-preview) and Llama2 with 7B parameters. We fine-tune Llama2 with the LoRA technique, where the LoRA rank is 8, and the LoRA $\alpha$ is 32. The sample feature dimension in the embedding space is 768, and for calculating KL divergence, it is 128. The number of neighbors $k$ is set to \{120, 120, 250\} for the CLINC, HWU64, and WOS datasets, respectively. We use random seeds $\{0, 1, 2\}$ . The dimension for Rank Statistic Constraint in the DNA baseline is set to 5. The PyTorch version is 1.11.0.
620
+
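+ For reference, the Llama2 fine-tuning setup above can be expressed with the `peft` library roughly as follows; only the rank and $\alpha$ come from the text, while the base-model identifier, target modules, and dropout are assumptions.
+
+ ```python
+ from peft import LoraConfig, get_peft_model
+ from transformers import AutoModelForCausalLM
+
+ base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed checkpoint name
+ lora_cfg = LoraConfig(
+     r=8,                                   # LoRA rank reported above
+     lora_alpha=32,                         # LoRA alpha reported above
+     target_modules=["q_proj", "v_proj"],   # assumed; not specified in the paper
+     lora_dropout=0.05,                     # assumed
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(base, lora_cfg)
+ model.print_trainable_parameters()
+ ```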
621
+ # A.5 Evaluation Metrics
622
+
623
+ The formula of ARI (Hubert and Arabie, 1985) is:
624
+
625
+ $$
626
+ A R I = \frac {R I - E (R I)}{\max (R I) - E (R I)}, \tag {32}
627
+ $$
628
+
629
+ where $RI$ is the Rand index and $E(RI)$ is its expectation. Given a test set with $n$ samples, a sample pair is any two distinct samples chosen from the test set, and:
+
+ - $a$ : the number of pairs assigned to the same cluster by both the predicted labels and the ground-truth labels;
+ - $b$ : the number of pairs assigned to different clusters by both the predicted labels and the ground-truth labels;
+ - $c$ : the number of pairs assigned to the same cluster by the predicted labels but to different clusters by the ground-truth labels;
+ - $d$ : the number of pairs assigned to different clusters by the predicted labels but to the same cluster by the ground-truth labels.
630
+
631
+ $$
632
+ R I = \frac {a + b}{a + b + c + d}. \tag {33}
633
+ $$
634
+
635
+ The formula of Normalized Mutual Information (NMI) (Lancichinetti et al., 2009) is:
636
+
637
+ $$
638
+ N M I = \frac {2 \cdot I (\hat {y} ; y)}{H (\hat {y}) + H (y)}, \tag {34}
639
+ $$
640
+
641
+ where $\hat{y}$ is the prediction from clustering and $y$ is the ground truth. $I(\hat{y};y)$ is the mutual information between $\hat{y}$ and $y$ , $H(\hat{y})$ and $H(y)$ represent the entropy of $\hat{y}$ and $y$ , respectively.
642
+
643
+ ACC (Kuhn, 2010; An et al., 2023a) is the metric used to evaluate clustering accuracy:
644
+
645
+ $$
646
+ A C C = \frac {\sum_ {i = 1} ^ {N} \mathbb {I} \left\{\mathcal {P} \left(\hat {y} _ {i}\right) = y _ {i} \right\}}{N}, \tag {35}
647
+ $$
648
+
649
+ where $\mathbb{I}\{\cdot\}$ is the indicator function, which returns 1 if the condition inside the braces is true and 0 otherwise; here it checks whether the permuted predicted label $\mathcal{P}(\hat{y}_i)$ matches the true label $y_i$ . $\hat{y}_i$ is the prediction from clustering, $y_i$ is the ground-truth label, $N$ is the number of samples, and $\mathcal{P}(\cdot)$ is the permutation mapping obtained from the Hungarian algorithm (Kuhn, 2010).
650
+
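+ All three metrics can be computed with standard libraries; the sketch below (ours, for illustration) uses scikit-learn for ARI and NMI and SciPy's Hungarian algorithm for ACC.
+
+ ```python
+ import numpy as np
+ from scipy.optimize import linear_sum_assignment
+ from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score
+
+ def clustering_accuracy(y_true, y_pred):
+     """ACC (Eq. 35): best one-to-one mapping between predicted clusters and true labels."""
+     y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
+     n = max(y_true.max(), y_pred.max()) + 1
+     cost = np.zeros((n, n), dtype=np.int64)
+     for t, p in zip(y_true, y_pred):
+         cost[p, t] += 1
+     rows, cols = linear_sum_assignment(cost.max() - cost)   # Hungarian algorithm
+     return cost[rows, cols].sum() / len(y_true)
+
+ y_true, y_pred = [0, 0, 1, 1, 2, 2], [1, 1, 0, 0, 2, 2]
+ print(adjusted_rand_score(y_true, y_pred),
+       normalized_mutual_info_score(y_true, y_pred),
+       clustering_accuracy(y_true, y_pred))                  # all 1.0 for this toy example
+ ```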
651
+ # A.6 Baseline GPT4 Prompt
652
+
653
+ The following prompt is designed for GPT4 using the HWU64 dataset. To adapt it for other datasets, simply adjust the number of coarse-grained classes and the index of fine-grained classes accordingly.
654
+
655
+ Our dataset comprises two tab-separated value (TSV) files:
656
+
657
+ - coarse_label_GPT4.tsv: This file contains text content and corresponding coarse-grained class labels for each line. The training set encompasses a total of 18 distinct coarse-grained classes.
660
+
661
+ - test_GPT4.tsv: This file consists of multiple lines, each containing a single text entry.
662
+
663
+ The objective is to assign fine-grained class labels to the text content in the test_GPT4.tsv file. Each fine-grained class will be represented by a unique integer identifier ranging from 0 to 63. Each fine-grained class is a sub-class of the existing coarse-grained classes defined in coarse_label_GPT4.tsv.
664
+
665
+ Please complete the following tasks according to your understanding:
666
+
667
+ 1. Prediction: Assign a fine-grained class label (integer from 0 to 63) to each text content in the test_GPT4.tsv file.
668
+ 2. Output: Print the predicted labels in order, corresponding to the order of text entries in the test file. Collect all the predicted labels in a list.
669
+ 3. Explanation: Provide a semantic explanation (meaning description) for each predicted fine-grained class label.
670
+
671
+ # A.7 Category Semantics Inference Prompt
672
+
673
+ In order to infer the discovered fine-grained category semantics, we leverage the commonsense reasoning capabilities of GPT4.
674
+
675
+ The following prompt is employed:
676
+
677
+ Can you please check what i last said. Could you please check and confirm my last command. I like you check question please. Did you get what just said correctly? I like you check it please. Did you get my last instruction correctly? Please can you check and confirm what just said. Would you check the details please. .... Lower the lights in the house. Please can you confirm the command. Could you please check and confirm the detail. Would you check and confirm question please. Please confirm details. Will you please check and confirm it. Please would you check instruction. Would you please check question. Could you please check the detail. Could you please check and confirm details. Check the last thing said please. What you just said was not so clear to me. I would like you check and confirm it please. Did you get what said correctly? I like you to check and confirm what i said please.
678
+
679
+ Hello. The provided sentence samples originate from the domain of assistant query classification. Predominantly, these samples fit into a specific category, though a minority are noisy and do not align with the primary category. Could you please identify and name this category, and clarify its definition?
680
+
681
+ The ground truth semantics of this discovered fine-grained category is: general-confirmed in the assistant query domain.
682
+
683
+ The outputs from GPT4 are as follows:
684
+
685
+ The primary category for the majority of the sentence samples you've provided seems to be "confirmation requests." This category comprises sentences where the speaker asks the assistant to confirm if a command or a query was understood or executed correctly. The requests often seek verification or validation of an action that was presumably communicated earlier, either explicitly or implicitly.
686
+
687
+ Definition of Confirmation Requests: Confirmation requests are communications aimed at verifying whether an earlier instruction, command, or statement was received, understood, and/or processed correctly. This category is common in user assistant interactions where clarity and confirmation of understanding are critical to ensure accurate task execution.
688
+
689
+ Clearly, this method offers a practical and heuristic approach to deducing category semantics.
690
+
691
+ Algorithm 1: STAR-DOWN
+ Initialize pre-training epochs $P$ , training epochs $E$ , Encoder $F_{\theta}$ , Momentum Encoder $F_{\theta_k}$ , and train set $D_{train}$
+ Step 1: pre-training
+ for epoch $= 0,\dots,P - 1$ do
+     Initialize pre-training batch size
+     for $N_{pre}\in D_{train}$ do
+         Train the Encoder $F_{\theta}$ on the pre-training batch $N_{pre}$ with loss $L_{\mathrm{pre}}$ (Eq. 8)
+         Update Encoder parameters $\theta$
+     end
+ end
+ for epoch $= 0,\ldots ,E - 1$ do
+     Initialize the number of nearest neighbors $K$
+     Step 2: neighbors retrieval & weighting
+     Construct the gradient-free data queue: $Q = F_{\theta_k}(D_{train})$
+     For each query sample $x_{i}\in D_{train}$ , compute the normalized feature $q_{i} = F_{\theta}(x_{i})$
+     For each $x_{i}\in D_{train}$ , retrieve the $K$ nearest neighbors to form the positive sample set $N_{i}$ : $N_{i} = \{h_{j}\mid h_{j}\in \operatorname {argtop}_{k}(\operatorname {CosineSimilarity}(q_{i},h_{l})), h_l\in Q\}$
+     Weight each neighbor $h_j\in N_i$ : $\omega_{j} = \phi \cdot \alpha^{-\frac{l_{ij}}{k}}$ (described in Section 4.2)
+     Step 3: training
+     Initialize training batch size
+     for $N_{train}\in D_{train}$ do
+         Train the Encoder $F_{\theta}$ on the training batch $N_{train}$ with the associated neighbor sets $N_{i}$ and queue $Q$ using loss $L_{train}$ (Eq. 5)
+         Update Encoder and Momentum Encoder parameters $\theta$ and $\theta_{k}$
+     end
+ end
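+ To show how the three steps fit together, the following compact, self-contained PyTorch skeleton mirrors the structure of Algorithm 1 with toy components; the linear encoders, the synthetic data, the simplified (unmodulated) training loss, and the momentum value are placeholders, and the coarse-grained pre-training of Step 1 (Eq. 8) is omitted.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from torch import nn
+
+ dim, n_samples, k, tau = 32, 256, 8, 0.07
+ encoder = nn.Linear(dim, dim)                    # stands in for F_theta (BERT in the paper)
+ momentum_encoder = nn.Linear(dim, dim)           # stands in for F_theta_k
+ momentum_encoder.load_state_dict(encoder.state_dict())
+ for p in momentum_encoder.parameters():
+     p.requires_grad_(False)
+ optim = torch.optim.Adam(encoder.parameters(), lr=1e-3)
+ data = torch.randn(n_samples, dim)               # placeholder for D_train
+
+ for epoch in range(3):
+     # Step 2: build the gradient-free queue, retrieve and weight neighbors
+     with torch.no_grad():
+         queue = F.normalize(momentum_encoder(data), dim=-1)          # Q = F_theta_k(D_train)
+         q_all = F.normalize(encoder(data), dim=-1)
+         neigh_idx = (q_all @ queue.T).topk(k, dim=-1).indices        # N_i for every query
+     omega = 2.0 ** (-torch.arange(k, dtype=torch.float) / k)         # omega_j with phi=1, alpha=2 (assumed)
+     omega = omega / omega.sum()
+
+     # Step 3: training with a simplified neighbor-contrastive loss (stand-in for Eq. 5)
+     for i in range(0, n_samples, 64):
+         q = F.normalize(encoder(data[i:i + 64]), dim=-1)
+         log_prob = F.log_softmax(q @ queue.T / tau, dim=-1)
+         loss = -(omega * log_prob.gather(1, neigh_idx[i:i + 64])).sum(dim=-1).mean()
+         optim.zero_grad()
+         loss.backward()
+         optim.step()
+
+     # Momentum update of F_theta_k (momentum value assumed)
+     with torch.no_grad():
+         for p_k, p_q in zip(momentum_encoder.parameters(), encoder.parameters()):
+             p_k.mul_(0.99).add_(p_q, alpha=0.01)
+ ```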
2024/A Generic Method for Fine-grained Category Discovery in Natural Language Texts/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:332ab49df01e25de9b0e878dfc001329097b359baa9957e5e66a9b2e2473e9ed
3
+ size 854602
2024/A Generic Method for Fine-grained Category Discovery in Natural Language Texts/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models/63a13339-dbf4-428a-9241-6178656f6af8_content_list.json ADDED
@@ -0,0 +1,1781 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 208,
8
+ 85,
9
+ 789,
10
+ 124
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Zhihao Wang $^{1*}$ Shiyu Liu $^{3*}$ Jianheng Huang $^{1}$ Zheng Wang $^{2}$ Yixuan Liao $^{2}$ Xiaoxin Chen $^{2}$ Junfeng Yao $^{1}$ Jinsong Su $^{1,3,4\\dagger}$",
17
+ "bbox": [
18
+ 216,
19
+ 135,
20
+ 776,
21
+ 170
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "$^{1}$ School of Informatics, Xiamen University, China $^{2}$ vivo AI Lab, China $^{3}$ Institute of Artificial Intelligence, Xiamen University, China $^{4}$ Shanghai Artificial Intelligence Laboratory, China {zhwang, liushiyu213}@stu.xmu.edu.cn jssu@xmu.edu.cn",
28
+ "bbox": [
29
+ 203,
30
+ 171,
31
+ 793,
32
+ 237
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "Abstract",
39
+ "text_level": 1,
40
+ "bbox": [
41
+ 260,
42
+ 260,
43
+ 339,
44
+ 275
45
+ ],
46
+ "page_idx": 0
47
+ },
48
+ {
49
+ "type": "text",
50
+ "text": "Due to the continuous emergence of new data, version updates have become an indispensable requirement for Large Language Models (LLMs). The training paradigms for version updates of LLMs include pre-training from scratch (PTFS) and continual pre-training (CPT). Preliminary experiments demonstrate that PTFS achieves better pre-training performance, while CPT has lower training cost. Moreover, their performance and training cost gaps widen progressively with version updates. To investigate the underlying reasons for this phenomenon, we analyze the effect of learning rate adjustments during the two stages of CPT: preparing an initialization checkpoint and continual pre-training based on this checkpoint. We find that a large learning rate in the first stage and a complete learning rate decay process in the second stage are crucial for version updates of LLMs. Hence, we propose a learning rate path switching training paradigm. Our paradigm comprises one main path, where we pre-train a LLM with the maximal learning rate, and multiple branching paths, each of which corresponds to an update of the LLM with newly-added training data. Extensive experiments demonstrate the effectiveness and generalization of our paradigm. Particularly, when training four versions of LLMs, our paradigm reduces the total training cost to $58\\%$ compared to PTFS, while maintaining comparable pretraining performance.",
51
+ "bbox": [
52
+ 144,
53
+ 288,
54
+ 460,
55
+ 744
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "1 Introduction",
62
+ "text_level": 1,
63
+ "bbox": [
64
+ 114,
65
+ 756,
66
+ 258,
67
+ 772
68
+ ],
69
+ "page_idx": 0
70
+ },
71
+ {
72
+ "type": "text",
73
+ "text": "In recent years, there has been significant progress in the research of Large Language Models (LLMs). By performing large-scale training on massive datasets, LLMs have demonstrated remarkable capabilities, contributing to various fields (Wu et al.,",
74
+ "bbox": [
75
+ 112,
76
+ 783,
77
+ 489,
78
+ 863
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "text",
84
+ "text": "2023; Cui et al., 2024; Wang et al., 2024; Guo et al., 2024). However, the training cost of LLMs is significantly higher than that of traditional NLP models. Particularly, in practical applications, LLMs have to face the need for version updates due to the continuous emergence of new data, which exacerbates the training cost of LLMs. Therefore, reducing training cost while maintaining optimal pre-training performance across different versions has become one of the pivotal challenges for LLMs.",
85
+ "bbox": [
86
+ 507,
87
+ 260,
88
+ 884,
89
+ 420
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "Generally, training paradigms applicable for version updates of LLMs can be categorized into two types: 1) Pre-Training From Scratch (PTFS): retraining new versions of LLMs on both old and new data. The well-known LLMs including LLaMA (Touvron et al., 2023a,b), GLM (Zeng et al., 2023), and Baichuan (Yang et al., 2023) are updated via this paradigm. 2) Continual Pre-Training (CPT): further pre-training new versions of LLMs on only new data based on the checkpoints from old versions. This paradigm is often utilized in resource constrained scenarios, such as limited computational resources or unavailability of old data.",
96
+ "bbox": [
97
+ 507,
98
+ 423,
99
+ 884,
100
+ 645
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "text",
106
+ "text": "In this paper, we firstly conduct preliminary experiments to compare the above two paradigms in version updates of LLMs. Compared with PTFS, CPT uses previous checkpoints for initialization, resulting in lower total training cost. However, CPT suffers from inferior pre-training performance, which becomes increasingly serious as version updates progress. To study the reasons for this phenomenon, we break down the CPT process into two stages: the first stage involves preparing an initialization checkpoint, and the second stage performing continual pre-training based on this checkpoint. Then, we conduct two groups of experiments to analyze the effect of learning rate adjustments during these two stages, leading to two conclusions: 1) the larger the learning rate in the first stage, the better the performance of updated LLMs in the second",
107
+ "bbox": [
108
+ 507,
109
+ 648,
110
+ 884,
111
+ 921
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "page_footnote",
117
+ "text": "Work was done when Zhihao Wang, Shiyu Liu and Jianheng Huang were interning at vivo AI Lab. \n* Equal contribution. \n† Corresponding author.",
118
+ "bbox": [
119
+ 112,
120
+ 870,
121
+ 487,
122
+ 920
123
+ ],
124
+ "page_idx": 0
125
+ },
126
+ {
127
+ "type": "page_number",
128
+ "text": "13581",
129
+ "bbox": [
130
+ 475,
131
+ 927,
132
+ 522,
133
+ 940
134
+ ],
135
+ "page_idx": 0
136
+ },
137
+ {
138
+ "type": "footer",
139
+ "text": "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 13581-13594 November 12-16, 2024 ©2024 Association for Computational Linguistics",
140
+ "bbox": [
141
+ 152,
142
+ 945,
143
+ 843,
144
+ 972
145
+ ],
146
+ "page_idx": 0
147
+ },
148
+ {
149
+ "type": "image",
150
+ "img_path": "images/6dbc6fa1f25022b07bce0e4b21f90ef8a48f46c930c5250eb10da64277fa8977.jpg",
151
+ "image_caption": [
152
+ "Figure 1: The learning rate curves of cosine learning rate schedule under PTFS, CPT<sup>1</sup> and our paradigm, all of which are used to update four versions of LLMs. Here, different color curves represent different version updates of LLMs."
153
+ ],
154
+ "image_footnote": [],
155
+ "bbox": [
156
+ 131,
157
+ 84,
158
+ 480,
159
+ 317
160
+ ],
161
+ "page_idx": 1
162
+ },
163
+ {
164
+ "type": "text",
165
+ "text": "stage; 2) for the second stage, a complete learning rate decay process is beneficial to ensure the optimal performance of updated LLMs.",
166
+ "bbox": [
167
+ 112,
168
+ 434,
169
+ 489,
170
+ 481
171
+ ],
172
+ "page_idx": 1
173
+ },
174
+ {
175
+ "type": "text",
176
+ "text": "Based on the above analyses, we propose a learning rate path switching training paradigm for version updates of LLMs. To better illustrate our paradigm, we take the most commonly used cosine learning rate schedule (Smith and Topin, 2019) as an example, and plot the learning rate curves of PTFS, CPT and our paradigm in Figure 1. Please note that our paradigm is also applicable to other schedules, such as Knee (Iyer et al., 2023), and multi-step (Bi et al., 2024) learning rate schedules.",
177
+ "bbox": [
178
+ 112,
179
+ 483,
180
+ 489,
181
+ 645
182
+ ],
183
+ "page_idx": 1
184
+ },
185
+ {
186
+ "type": "text",
187
+ "text": "In short, the learning rate curve of our paradigm comprises one main path and multiple branching paths, each of which corresponds to a version update of LLM. As shown by the main path in Figure 1, we pre-train a LLM with the maximal learning rate, providing superior initialization checkpoints for subsequent continual pre-training. When we want to update the LLM with newly-added training data, we perform continual pre-training on the LLM with a dynamically-adjusted learning rate. Referring back to Figure 1, after a few steps of training with the maximal learning rate, the learning rate fast decays to its minimum, which",
188
+ "bbox": [
189
+ 112,
190
+ 646,
191
+ 489,
192
+ 854
193
+ ],
194
+ "page_idx": 1
195
+ },
196
+ {
197
+ "type": "image",
198
+ "img_path": "images/41156f2d39c9544ab527eea6bb5d68691114c330ef09d7b0aa50220dc1ffde5d.jpg",
199
+ "image_caption": [
200
+ "Figure 2: The comparison of different training paradigms. \"APPL\" $(\\downarrow)$ denotes the average perplexity of LLMs across different versions, \"Relative Cost\" $(\\downarrow)$ is the ratio of the total training steps across different versions of each paradigm to the total training steps of PTFS. The lower left corner achieves the best trade-off."
201
+ ],
202
+ "image_footnote": [],
203
+ "bbox": [
204
+ 515,
205
+ 87,
206
+ 877,
207
+ 300
208
+ ],
209
+ "page_idx": 1
210
+ },
211
+ {
212
+ "type": "text",
213
+ "text": "effectively ensures the training performance of the updated LLM. Meanwhile, on the main path, we continue to pre-train the original checkpoint with the maximal learning rate, facilitating subsequent LLM updates.",
214
+ "bbox": [
215
+ 507,
216
+ 451,
217
+ 882,
218
+ 533
219
+ ],
220
+ "page_idx": 1
221
+ },
222
+ {
223
+ "type": "text",
224
+ "text": "Our paradigm better balances model performance and training cost compared to the other two paradigms, as detailed in Figure 2. To summarize, our main contributions are as follows:",
225
+ "bbox": [
226
+ 507,
227
+ 546,
228
+ 882,
229
+ 609
230
+ ],
231
+ "page_idx": 1
232
+ },
233
+ {
234
+ "type": "list",
235
+ "sub_type": "text",
236
+ "list_items": [
237
+ "- We conduct preliminary experiments to compare PTFS and CPT for version updates of LLMs. Furthermore, our in-depth analyses show that initially using a large learning rate and subsequent learning rate decay are crucial for improving the performance of updated LLMs.",
238
+ "- We propose a learning rate path switching paradigm for version updates of LLMs. To the best of our knowledge, our work is the first attempt to explore how to balance model performance and training cost for version updates of LLMs.",
239
+ "- Experimental results and in-depth analyses strongly demonstrate the effectiveness and generalization of our paradigm. Particularly, when training four versions of LLMs, our paradigm achieves comparable pre-training performance to PTFS with only $58\\%$ of the total training cost."
240
+ ],
241
+ "bbox": [
242
+ 507,
243
+ 626,
244
+ 882,
245
+ 921
246
+ ],
247
+ "page_idx": 1
248
+ },
249
+ {
250
+ "type": "page_footnote",
251
+ "text": "<sup>1</sup>In fact, multiple CPT variants can be used for version updates of LLMs. We compare these variants in Appendix B and retain only the best-performing variant in the subsequent experiments.",
252
+ "bbox": [
253
+ 112,
254
+ 870,
255
+ 487,
256
+ 921
257
+ ],
258
+ "page_idx": 1
259
+ },
260
+ {
261
+ "type": "page_number",
262
+ "text": "13582",
263
+ "bbox": [
264
+ 477,
265
+ 927,
266
+ 524,
267
+ 940
268
+ ],
269
+ "page_idx": 1
270
+ },
271
+ {
272
+ "type": "image",
273
+ "img_path": "images/5c10d9cee3fce3f7593fb556db3e53c1dc92b1b168a85a531ac29a754c011263.jpg",
274
+ "image_caption": [
275
+ "Figure 3: The learning rate curves of cosine (Smith and Topin, 2019), Knee (Iyer et al., 2023), and multi-step (Bi et al., 2024) learning rate schedules."
276
+ ],
277
+ "image_footnote": [],
278
+ "bbox": [
279
+ 142,
280
+ 84,
281
+ 463,
282
+ 300
283
+ ],
284
+ "page_idx": 2
285
+ },
286
+ {
287
+ "type": "text",
288
+ "text": "2 Preliminary Study",
289
+ "text_level": 1,
290
+ "bbox": [
291
+ 112,
292
+ 385,
293
+ 309,
294
+ 401
295
+ ],
296
+ "page_idx": 2
297
+ },
298
+ {
299
+ "type": "text",
300
+ "text": "In this section, we first compare the performance of PTFS and CPT in version updates of LLMs, and then analyze the underlying reasons for their performance gap.",
301
+ "bbox": [
302
+ 112,
303
+ 411,
304
+ 489,
305
+ 476
306
+ ],
307
+ "page_idx": 2
308
+ },
309
+ {
310
+ "type": "text",
311
+ "text": "2.1 Setup",
312
+ "text_level": 1,
313
+ "bbox": [
314
+ 112,
315
+ 488,
316
+ 206,
317
+ 504
318
+ ],
319
+ "page_idx": 2
320
+ },
321
+ {
322
+ "type": "text",
323
+ "text": "Model In this study, we use LLaMA-1.2B (Touvron et al., 2023a,b) as our base LLM and train for four versions. When employing PTFS, the total training steps for these four versions are 10K, 20K, 30K, and 40K, respectively. For CPT, each LLM update only requires 10K training steps. We train all LLMs with a batch size of 1.05M tokens.",
324
+ "bbox": [
325
+ 112,
326
+ 508,
327
+ 489,
328
+ 621
329
+ ],
330
+ "page_idx": 2
331
+ },
332
+ {
333
+ "type": "text",
334
+ "text": "Learning Rate Schedule We conduct experiments with three learning rate schedules: cosine (Smith and Topin, 2019), Knee (Iyer et al., 2023), and multi-step (Bi et al., 2024) learning rate schedules. The specific learning rate curves of these schedules are plotted in Figure 3. Notably, cosine learning rate schedule is the most commonly used one for training LLMs (Zhao et al., 2023), and both Knee and multi-step learning rate schedules can achieve comparable or even superior performance than cosine learning rate schedule. For all learning rate schedules, we implement a linear warm-up phase of 2K steps (approximately 2.1B tokens). Besides, we set the maximum and minimum learning rates for these schedules to 3e-4 and",
335
+ "bbox": [
336
+ 112,
337
+ 632,
338
+ 489,
339
+ 872
340
+ ],
341
+ "page_idx": 2
342
+ },
343
+ {
344
+ "type": "table",
345
+ "img_path": "images/183158c89e5c249438bcf4a99f6385ddfd9656e476198171edb7736ebb7ab572.jpg",
346
+ "table_caption": [],
347
+ "table_footnote": [],
348
+ "table_body": "<table><tr><td rowspan=\"2\">LRS</td><td rowspan=\"2\">TP</td><td rowspan=\"2\">Cost</td><td colspan=\"3\">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan=\"3\">Cos</td><td>PTFS</td><td>1.00×</td><td>20.84</td><td>19.28</td><td>18.36</td></tr><tr><td>CPT</td><td>0.40×</td><td>21.11</td><td>19.70</td><td>18.87</td></tr><tr><td>Δ</td><td>-</td><td>-0.27</td><td>-0.42</td><td>-0.51</td></tr><tr><td rowspan=\"3\">Knee</td><td>PTFS</td><td>1.00×</td><td>20.22</td><td>18.80</td><td>17.98</td></tr><tr><td>CPT</td><td>0.40×</td><td>20.56</td><td>19.27</td><td>18.52</td></tr><tr><td>Δ</td><td>-</td><td>-0.34</td><td>-0.47</td><td>-0.54</td></tr><tr><td rowspan=\"3\">Multi</td><td>PTFS</td><td>1.00×</td><td>20.28</td><td>18.88</td><td>18.06</td></tr><tr><td>CPT</td><td>0.40×</td><td>20.62</td><td>19.37</td><td>18.65</td></tr><tr><td>Δ</td><td>-</td><td>-0.34</td><td>-0.49</td><td>-0.59</td></tr></table>",
349
+ "bbox": [
350
+ 514,
351
+ 82,
352
+ 890,
353
+ 293
354
+ ],
355
+ "page_idx": 2
356
+ },
357
+ {
358
+ "type": "text",
359
+ "text": "Table 1: The comparison between PTFS and CPT for training four versions of LLMs. \"LRS\" and \"TP\" indicate learning rate schedule and training paradigm, respectively. \"V*\" means the *-th version of LLM. Notably, regardless of PTFS or CPT, the learning rate curve and pre-training performance of the first version remain identical. Thus, we do not report the performance of the first version in all experiments.",
360
+ "bbox": [
361
+ 507,
362
+ 303,
363
+ 884,
364
+ 418
365
+ ],
366
+ "page_idx": 2
367
+ },
368
+ {
369
+ "type": "text",
370
+ "text": "3e-5, respectively.",
371
+ "bbox": [
372
+ 509,
373
+ 456,
374
+ 648,
375
+ 472
376
+ ],
377
+ "page_idx": 2
378
+ },
379
+ {
380
+ "type": "text",
381
+ "text": "Dataset Similar to LLaMA (Touvron et al., 2023a,b), our training corpus comprises a mixture of data from publicly available sources, including code, paper, Wikipedia, books, mathematics, CommonCrawl and C4, webpage, translation and others. In total, our training data contains 764M English and Chinese samples. Due to the limitation of GPU resource, we do not experiment with the entire dataset. To simulate the scenario of version updates, we perform non-replacement sampling on the training data to obtain 10.5B tokens as the newly-added data for each update. Hence, when using PTFS, we train four versions of LLMs from scratch with 10.5B, 21B, 31.5B, and 42B tokens, respectively. By contrast, using CPT to update the LLMs only involves the newly-added 10.5B tokens each time.",
382
+ "bbox": [
383
+ 507,
384
+ 502,
385
+ 884,
386
+ 776
387
+ ],
388
+ "page_idx": 2
389
+ },
390
+ {
391
+ "type": "text",
392
+ "text": "Evaluation Following previous studies (Qin et al., 2022; Gupta et al., 2023; Bi et al., 2024), we mainly use perplexity (PPL) to evaluate the pretraining performance of LLMs. Meanwhile, we also focus on the training cost of each paradigm, defined as the total training steps required for different versions.",
393
+ "bbox": [
394
+ 507,
395
+ 808,
396
+ 884,
397
+ 919
398
+ ],
399
+ "page_idx": 2
400
+ },
401
+ {
402
+ "type": "page_footnote",
403
+ "text": "2We also evaluate constant and inverse square root learning rate schedules, both of which yield inferior performance compared to the three selected schedules.",
404
+ "bbox": [
405
+ 112,
406
+ 882,
407
+ 489,
408
+ 921
409
+ ],
410
+ "page_idx": 2
411
+ },
412
+ {
413
+ "type": "page_number",
414
+ "text": "13583",
415
+ "bbox": [
416
+ 477,
417
+ 927,
418
+ 524,
419
+ 940
420
+ ],
421
+ "page_idx": 2
422
+ },
423
+ {
424
+ "type": "image",
425
+ "img_path": "images/d8b2ee2c06421cf75a924f9169f54c8daf549c83de336e6bc11b7b6156c8517c.jpg",
426
+ "image_caption": [
427
+ "Figure 4: The effect of learning rate adjustment in the first stage. In the first stage, we vary the cosine cycle length as $10\\mathrm{K}$ , $20\\mathrm{K}$ , $30\\mathrm{K}$ , $40\\mathrm{K}$ and $+\\infty$ steps, respectively, where the checkpoints at the $10\\mathrm{K}$ -th steps are selected as the initialization ones for the subsequent $10\\mathrm{K}$ -steps continual pre-training. “ $(\\cdot,\\cdot)$ ” indicates the PPLs of the initialization checkpoint and corresponding updated LLM."
428
+ ],
429
+ "image_footnote": [],
430
+ "bbox": [
431
+ 134,
432
+ 84,
433
+ 463,
434
+ 259
435
+ ],
436
+ "page_idx": 3
437
+ },
438
+ {
439
+ "type": "text",
440
+ "text": "2.2 Comparison Between PTFS and CPT",
441
+ "text_level": 1,
442
+ "bbox": [
443
+ 112,
444
+ 417,
445
+ 452,
446
+ 432
447
+ ],
448
+ "page_idx": 3
449
+ },
450
+ {
451
+ "type": "text",
452
+ "text": "Experimental results are shown in Table 1. It is evident that CPT has lower training cost, whereas PTFS achieves superior performance. More importantly, as the version updates progress, the performance gap between PTFS and CPT progressively widens.",
453
+ "bbox": [
454
+ 112,
455
+ 439,
456
+ 487,
457
+ 535
458
+ ],
459
+ "page_idx": 3
460
+ },
461
+ {
462
+ "type": "text",
463
+ "text": "To understand the underlying cause of this phenomenon, we focus on the learning rate, the key distinction between PTFS and CPT during version updates of LLMs. Using the cosine learning rate schedule, we conduct two groups of experiments to examine its impact on updated LLM performance across the two stages of CPT: 1) preparing an initialization checkpoint, and 2) continual pre-training based on this checkpoint.",
464
+ "bbox": [
465
+ 112,
466
+ 538,
467
+ 489,
468
+ 683
469
+ ],
470
+ "page_idx": 3
471
+ },
472
+ {
473
+ "type": "text",
474
+ "text": "Effect of Learning Rate Adjustment During the First Stage As depicted in Figure 4, in the first group of experiments, we vary the cosine cycle length across 10K, 20K, 30K, 40K, and $+\\infty$ steps, respectively. The checkpoints at the 10K-th steps are selected as initialization checkpoints for the second stage. Then, we continually pre-train LLMs for 10K steps, where the learning rate gradually decays from its maximum to minimum. Referring back to Figure 4, we observe that with the increase in the cosine cycle length during the first stage, the performance of the initialization checkpoint drops, whereas its corresponding updated LLM performs better. Therefore, we conclude that a",
475
+ "bbox": [
476
+ 112,
477
+ 696,
478
+ 489,
479
+ 921
480
+ ],
481
+ "page_idx": 3
482
+ },
483
+ {
484
+ "type": "image",
485
+ "img_path": "images/39c823ad70c04a6a4221bfa61fdd4f1b26c10696d898b045d7232290d2126692.jpg",
486
+ "image_caption": [
487
+ "Figure 5: The effect of learning rate adjustment in the second stage. In the first stage, we directly use the maximal learning rate after warm-up. During the second stage, we try cosine cycle length with 10K, 20K, 30K, 40K and $+\\infty$ steps, respectively, where the PPLs of LLMs at the 20K-th steps are compared."
488
+ ],
489
+ "image_footnote": [],
490
+ "bbox": [
491
+ 510,
492
+ 84,
493
+ 882,
494
+ 261
495
+ ],
496
+ "page_idx": 3
497
+ },
498
+ {
499
+ "type": "text",
500
+ "text": "large learning rate in the first stage benefits continual pre-training in the second stage.",
501
+ "bbox": [
502
+ 507,
503
+ 400,
504
+ 882,
505
+ 432
506
+ ],
507
+ "page_idx": 3
508
+ },
509
+ {
510
+ "type": "text",
511
+ "text": "Effect of Learning Rate Adjustment During the Second Stage Based on the above conclusion, we directly set the cosine cycle length as $+\\infty$ steps in the first stage, as illustrated in Figure 5. Then, during continual pre-training, we experiment with the cosine learning rate schedule using different cosine cycle lengths: 10K, 20K, 30K, 40K, $+\\infty$ steps, and report the performance of updated LLMs at the 20K-th steps. As shown in Figure 5, it is evident that a complete learning rate decay process enables the updated LLMs to achieve the best performance. This finding is consistent with the results from the first group of experiments mentioned above. In other words, when the learning rate undergoes complete decay during the first stage, the performance of the initialization checkpoint is also optimal.",
512
+ "bbox": [
513
+ 507,
514
+ 463,
515
+ 882,
516
+ 736
517
+ ],
518
+ "page_idx": 3
519
+ },
520
+ {
521
+ "type": "text",
522
+ "text": "Based on the findings of the above two groups of experiments, we conclude that CPT is difficult to achieve good performance across different versions of LLMs. Specifically, according to the findings from the second group of experiments, if the current LLM is expected to achieve optimal performance, its learning rate in the second stage should undergo a complete decay process. However, such decay results in a lower learning rate in the first stage of the subsequent update, further degrading the performance of the updated LLM.",
523
+ "bbox": [
524
+ 507,
525
+ 744,
526
+ 884,
527
+ 921
528
+ ],
529
+ "page_idx": 3
530
+ },
531
+ {
532
+ "type": "page_number",
533
+ "text": "13584",
534
+ "bbox": [
535
+ 477,
536
+ 927,
537
+ 524,
538
+ 940
539
+ ],
540
+ "page_idx": 3
541
+ },
542
+ {
543
+ "type": "table",
544
+ "img_path": "images/fb1fb869df02ccc41cb601984bae98ee96cff67b1354df02bfac162c461f1e94.jpg",
545
+ "table_caption": [],
546
+ "table_footnote": [],
547
+ "table_body": "<table><tr><td rowspan=\"2\">LRS</td><td rowspan=\"2\">α</td><td rowspan=\"2\">Cost</td><td colspan=\"3\">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan=\"4\">Cos</td><td>0.2</td><td>0.49×</td><td>20.34</td><td>19.13</td><td>18.44</td></tr><tr><td>0.4</td><td>0.53×</td><td>20.16</td><td>18.91</td><td>18.21</td></tr><tr><td>0.6</td><td>0.58×</td><td>20.13</td><td>18.81</td><td>18.09</td></tr><tr><td>0.8</td><td>0.62×</td><td>20.15</td><td>18.77</td><td>18.02</td></tr><tr><td rowspan=\"4\">Knee</td><td>0.2</td><td>0.49×</td><td>20.33</td><td>19.12</td><td>18.42</td></tr><tr><td>0.4</td><td>0.53×</td><td>20.16</td><td>18.91</td><td>18.20</td></tr><tr><td>0.6</td><td>0.58×</td><td>20.12</td><td>18.81</td><td>18.08</td></tr><tr><td>0.8</td><td>0.62×</td><td>20.15</td><td>18.77</td><td>18.01</td></tr><tr><td rowspan=\"4\">Multi</td><td>0.2</td><td>0.49×</td><td>20.33</td><td>19.08</td><td>18.37</td></tr><tr><td>0.4</td><td>0.53×</td><td>20.29</td><td>18.91</td><td>18.16</td></tr><tr><td>0.6</td><td>0.58×</td><td>20.40</td><td>18.88</td><td>18.09</td></tr><tr><td>0.8</td><td>0.62×</td><td>20.63</td><td>18.91</td><td>18.06</td></tr></table>",
548
+ "bbox": [
549
+ 121,
550
+ 80,
551
+ 480,
552
+ 344
553
+ ],
554
+ "page_idx": 4
555
+ },
556
+ {
557
+ "type": "text",
558
+ "text": "3 Our Paradigm",
559
+ "text_level": 1,
560
+ "bbox": [
561
+ 114,
562
+ 423,
563
+ 275,
564
+ 441
565
+ ],
566
+ "page_idx": 4
567
+ },
568
+ {
569
+ "type": "text",
570
+ "text": "Based on the conclusions from Section 2, we propose a learning rate path switching paradigm for version updates of LLMs in this section. The training cost of our paradigm is lower than that of PTFS, and it achieves significantly better performance than CPT, with performance even comparable to that of PTFS.",
571
+ "bbox": [
572
+ 112,
573
+ 451,
574
+ 489,
575
+ 563
576
+ ],
577
+ "page_idx": 4
578
+ },
579
+ {
580
+ "type": "text",
581
+ "text": "3.1 Paradigm Overview",
582
+ "text_level": 1,
583
+ "bbox": [
584
+ 112,
585
+ 577,
586
+ 317,
587
+ 593
588
+ ],
589
+ "page_idx": 4
590
+ },
591
+ {
592
+ "type": "text",
593
+ "text": "Let us revisit Figure 1, which shows the learning rate curves of our paradigm applied to the cosine learning rate schedule. Please note that our paradigm is also applicable to other schedules, such as Knee and multi-step and so on. The learning rate curve of our paradigm comprises one main path and multiple branching paths, each of which corresponds to one version update. On the main path, we pre-train the LLM from scratch with the maximal learning rate, providing initialization checkpoints for subsequent version updates. When we want to obtain an updated LLM, we directly use the current checkpoint of the main path as the initialization one, and then perform continual pre-training. During this process, the learning rate undergoes a complete fast-decaying process, effectively ensuring the performance of the updated LLM. Meanwhile, on the main path, we still use newly-added data to pre-train the existing checkpoint with the maximal learning rate, so as to facilitate subsequent updates.",
594
+ "bbox": [
595
+ 112,
596
+ 599,
597
+ 489,
598
+ 921
599
+ ],
600
+ "page_idx": 4
601
+ },
602
+ {
603
+ "type": "table",
604
+ "img_path": "images/eac57e881a997c10f24edd4997637aa453e2ffde6865d3588b9fcf3a9a4ba107.jpg",
605
+ "table_caption": [
606
+ "Table 2: The effect of hyper-parameter $\\alpha$ on the pretraining performance and training cost of our paradigm. Experiments are conducted on LLaMA-1.2B."
607
+ ],
608
+ "table_footnote": [],
609
+ "table_body": "<table><tr><td rowspan=\"2\">LRS</td><td rowspan=\"2\">TP</td><td rowspan=\"2\">Cost</td><td colspan=\"3\">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan=\"3\">Cos</td><td>PTFS</td><td>1.00×</td><td>20.84</td><td>19.28</td><td>18.36</td></tr><tr><td>CPT</td><td>0.40×</td><td>21.11</td><td>19.70</td><td>18.87</td></tr><tr><td>Ours</td><td>0.58×</td><td>20.13</td><td>18.81</td><td>18.09</td></tr><tr><td rowspan=\"3\">Knee</td><td>PTFS</td><td>1.00×</td><td>20.22</td><td>18.80</td><td>17.98</td></tr><tr><td>CPT</td><td>0.40×</td><td>20.56</td><td>19.27</td><td>18.52</td></tr><tr><td>Ours</td><td>0.58×</td><td>20.12</td><td>18.81</td><td>18.08</td></tr><tr><td rowspan=\"3\">Multi</td><td>PTFS</td><td>1.00×</td><td>20.28</td><td>18.88</td><td>18.06</td></tr><tr><td>CPT</td><td>0.40×</td><td>20.62</td><td>19.37</td><td>18.65</td></tr><tr><td>Ours</td><td>0.58×</td><td>20.40</td><td>18.88</td><td>18.09</td></tr></table>",
610
+ "bbox": [
611
+ 514,
612
+ 80,
613
+ 887,
614
+ 292
615
+ ],
616
+ "page_idx": 4
617
+ },
618
+ {
619
+ "type": "text",
620
+ "text": "Table 3: The comparison of different paradigms for training four versions of LLaMA-1.2B.",
621
+ "bbox": [
622
+ 507,
623
+ 300,
624
+ 882,
625
+ 331
626
+ ],
627
+ "page_idx": 4
628
+ },
629
+ {
630
+ "type": "text",
631
+ "text": "Obviously, our paradigm has lower training cost than PTFS, as it conducts continual pre-training based on the initialization checkpoints from the main path. Unlike CPT, these checkpoints are obtained through training from scratch with the maximum learning rate, which enables the updated LLMs to achieve better performance, as analyzed in Section 2. The following experiments fully confirm the superiority of our paradigm in balancing model performance and training cost.",
632
+ "bbox": [
633
+ 507,
634
+ 376,
635
+ 882,
636
+ 539
637
+ ],
638
+ "page_idx": 4
639
+ },
640
+ {
641
+ "type": "text",
642
+ "text": "3.2 Time Complexity Analysis",
643
+ "text_level": 1,
644
+ "bbox": [
645
+ 507,
646
+ 585,
647
+ 763,
648
+ 601
649
+ ],
650
+ "page_idx": 4
651
+ },
652
+ {
653
+ "type": "text",
654
+ "text": "To further compare different training paradigms in terms of training cost, we define their time complexity functions as the total training steps of version updates.",
655
+ "bbox": [
656
+ 507,
657
+ 626,
658
+ 882,
659
+ 690
660
+ ],
661
+ "page_idx": 4
662
+ },
663
+ {
664
+ "type": "text",
665
+ "text": "Before providing our definitions, we first introduce two symbols to facilitate the subsequent descriptions: 1) $N_v$ : the number of version updates of LLMs; 2) $T$ : the amount of data added for each update, assuming it remains consistent. When updating the $i-th$ version of LLMs, PTFS requires updating $iT(1 \\leq i \\leq N_v)$ steps each time, CPT needs to train for $T$ steps, and our paradigm requires training $T + \\alpha T$ steps, where $\\alpha (0 \\leq \\alpha \\leq 1)$ controls the proportion of fast-decaying steps to the total steps in each update.",
666
+ "bbox": [
667
+ 507,
668
+ 701,
669
+ 884,
670
+ 878
671
+ ],
672
+ "page_idx": 4
673
+ },
674
+ {
675
+ "type": "text",
676
+ "text": "Formally, the time complexity functions of PTFS, CPT and our paradigm can be described",
677
+ "bbox": [
678
+ 507,
679
+ 889,
680
+ 882,
681
+ 921
682
+ ],
683
+ "page_idx": 4
684
+ },
685
+ {
686
+ "type": "page_number",
687
+ "text": "13585",
688
+ "bbox": [
689
+ 477,
690
+ 927,
691
+ 524,
692
+ 940
693
+ ],
694
+ "page_idx": 4
695
+ },
696
+ {
697
+ "type": "table",
698
+ "img_path": "images/6e1e10e5f2f08bc224ecb063890f135443d8cdd7cb5202c596dd6096b6857ac1.jpg",
699
+ "table_caption": [],
700
+ "table_footnote": [],
701
+ "table_body": "<table><tr><td>Ver.</td><td>TP</td><td>C3</td><td>GSM8K</td><td>MMLU</td><td>CSL</td><td>C-EVAL</td><td>BBH</td><td>CMMLU</td><td>GAOKAO</td><td>AGIEval</td><td>AVG</td></tr><tr><td rowspan=\"3\">V2</td><td>PTFS</td><td>38.00</td><td>4.63</td><td>24.00</td><td>38.25</td><td>30.09</td><td>17.43</td><td>25.37</td><td>18.10</td><td>14.59</td><td>23.38</td></tr><tr><td>CPT</td><td>37.00</td><td>4.09</td><td>23.52</td><td>35.11</td><td>27.42</td><td>18.55</td><td>25.63</td><td>18.86</td><td>13.40</td><td>22.62</td></tr><tr><td>Ours</td><td>38.60</td><td>5.08</td><td>22.94</td><td>39.08</td><td>28.38</td><td>20.79</td><td>24.88</td><td>18.48</td><td>14.73</td><td>23.66</td></tr><tr><td rowspan=\"3\">V3</td><td>PTFS</td><td>40.30</td><td>3.34</td><td>24.33</td><td>39.17</td><td>25.85</td><td>17.11</td><td>25.30</td><td>22.03</td><td>14.34</td><td>23.53</td></tr><tr><td>CPT</td><td>38.30</td><td>4.70</td><td>23.32</td><td>36.40</td><td>28.38</td><td>21.11</td><td>24.76</td><td>17.85</td><td>13.47</td><td>23.14</td></tr><tr><td>Ours</td><td>42.10</td><td>4.63</td><td>23.22</td><td>34.91</td><td>29.35</td><td>19.70</td><td>24.73</td><td>19.24</td><td>14.90</td><td>23.64</td></tr><tr><td rowspan=\"3\">V4</td><td>PTFS</td><td>35.70</td><td>4.25</td><td>24.93</td><td>38.75</td><td>27.04</td><td>16.73</td><td>24.97</td><td>21.01</td><td>14.10</td><td>23.05</td></tr><tr><td>CPT</td><td>43.90</td><td>4.55</td><td>22.20</td><td>38.69</td><td>27.19</td><td>21.62</td><td>24.43</td><td>18.23</td><td>13.50</td><td>23.81</td></tr><tr><td>Ours</td><td>41.90</td><td>5.53</td><td>24.09</td><td>40.24</td><td>27.71</td><td>21.84</td><td>24.78</td><td>17.24</td><td>14.40</td><td>24.19</td></tr></table>",
702
+ "bbox": [
703
+ 119,
704
+ 80,
705
+ 895,
706
+ 266
707
+ ],
708
+ "page_idx": 5
709
+ },
710
+ {
711
+ "type": "text",
712
+ "text": "Table 4: The performance of LLMs across different versions on downstream tasks. \"Ver.\" indicates the version number of the LLMs. Additional experimental results for LLMs with larger model sizes or data sizes are listed in Appendix C.",
713
+ "bbox": [
714
+ 112,
715
+ 275,
716
+ 884,
717
+ 319
718
+ ],
719
+ "page_idx": 5
720
+ },
721
+ {
722
+ "type": "text",
723
+ "text": "as follows:",
724
+ "bbox": [
725
+ 112,
726
+ 344,
727
+ 200,
728
+ 357
729
+ ],
730
+ "page_idx": 5
731
+ },
732
+ {
733
+ "type": "equation",
734
+ "text": "\n$$\n\\mathbf {C} _ {\\mathrm {p t f s}} \\left(N _ {v}\\right) = \\sum_ {i = 1} ^ {N _ {v}} i T = 0. 5 T N _ {v} ^ {2} + 0. 5 T N _ {v},\n$$\n",
735
+ "text_format": "latex",
736
+ "bbox": [
737
+ 142,
738
+ 380,
739
+ 462,
740
+ 420
741
+ ],
742
+ "page_idx": 5
743
+ },
744
+ {
745
+ "type": "equation",
746
+ "text": "\n$$\n\\mathbf {C} _ {\\mathrm {c p t}} (N _ {v}) = \\sum_ {i = 1} ^ {N _ {v}} T = T N _ {v},\n$$\n",
747
+ "text_format": "latex",
748
+ "bbox": [
749
+ 146,
750
+ 426,
751
+ 349,
752
+ 467
753
+ ],
754
+ "page_idx": 5
755
+ },
756
+ {
757
+ "type": "equation",
758
+ "text": "\n$$\n\\begin{array}{l} \\mathbf {C} _ {\\text {o u r s}} (N _ {v}) = \\sum_ {i = 1} ^ {N _ {v} - 1} (T + \\alpha T) + T \\\\ = (1 + \\alpha) T N _ {v} - \\alpha T. \\\\ \\end{array}\n$$\n",
759
+ "text_format": "latex",
760
+ "bbox": [
761
+ 139,
762
+ 472,
763
+ 388,
764
+ 532
765
+ ],
766
+ "page_idx": 5
767
+ },
768
+ {
769
+ "type": "text",
770
+ "text": "Please note that, for the last version, the additional main path training for preparing the initialization checkpoint for the next update can be omitted, which counts as $\\alpha T$ steps. Thus, only $T$ steps are required.",
771
+ "bbox": [
772
+ 112,
773
+ 555,
774
+ 487,
775
+ 634
776
+ ],
777
+ "page_idx": 5
778
+ },
779
+ {
780
+ "type": "text",
781
+ "text": "Comparing the above functions, we observe that $\\mathbf{C}_{\\mathrm{ptfs}}(N_v)$ is a quadratic function in terms of $N_{v}$ , whereas both $\\mathbf{C}_{\\mathrm{cpt}}(N_v)$ and $\\mathbf{C}_{\\mathrm{ours}}(N_v)$ are linear functions. Moreover, the gaps between $\\mathbf{C}_{\\mathrm{ptfs}}(N_v)$ and the other two functions significantly widens as $N_{v}$ increases. For example, when $N_{v} = 4$ ,the values of these three time complexity functions are $10T$ $4T$ and $5.8T$ , respectively. When $N_{v} = 10$ the gaps widen as the values of these functions increase to $55T$ $10T$ and $15.4T$",
782
+ "bbox": [
783
+ 112,
784
+ 639,
785
+ 487,
786
+ 797
787
+ ],
788
+ "page_idx": 5
789
+ },
790
+ {
791
+ "type": "text",
792
+ "text": "4 Experiment",
793
+ "text_level": 1,
794
+ "bbox": [
795
+ 112,
796
+ 822,
797
+ 250,
798
+ 839
799
+ ],
800
+ "page_idx": 5
801
+ },
802
+ {
803
+ "type": "text",
804
+ "text": "In this section, we still use the settings of the preliminary study to conduct more experiments, comparing the performance and training cost of different training paradigms.",
805
+ "bbox": [
806
+ 112,
807
+ 856,
808
+ 489,
809
+ 921
810
+ ],
811
+ "page_idx": 5
812
+ },
813
+ {
814
+ "type": "text",
815
+ "text": "4.1 Effect of Hyper-Parameter $\\alpha$",
816
+ "text_level": 1,
817
+ "bbox": [
818
+ 507,
819
+ 344,
820
+ 781,
821
+ 360
822
+ ],
823
+ "page_idx": 5
824
+ },
825
+ {
826
+ "type": "text",
827
+ "text": "As described in Section 3, $\\alpha$ is one of the most important hyper-parameters in our paradigm, as it controls the proportion of fast-decaying steps to the total steps in each update. The fast-decaying steps influence model performance and training cost of our paradigm. To select an optimal $\\alpha$ value, we experiment with different $\\alpha$ values, ranging from 0.2 to 0.8 with an interval of 0.2, and then observe the changes in pre-training performance and training cost.",
828
+ "bbox": [
829
+ 507,
830
+ 365,
831
+ 882,
832
+ 525
833
+ ],
834
+ "page_idx": 5
835
+ },
836
+ {
837
+ "type": "text",
838
+ "text": "Experimental results are listed in Table 2, showing that the overall performance of LLMs across different versions is optimal at $\\alpha = 0.6$ and $\\alpha = 0.8$ . However, when $\\alpha = 0.6$ , our paradigm achieves lower training cost. Thus, we adopt $\\alpha = 0.6$ in subsequent experiments.",
839
+ "bbox": [
840
+ 507,
841
+ 526,
842
+ 882,
843
+ 621
844
+ ],
845
+ "page_idx": 5
846
+ },
847
+ {
848
+ "type": "text",
849
+ "text": "4.2 Main Experiments",
850
+ "text_level": 1,
851
+ "bbox": [
852
+ 507,
853
+ 633,
854
+ 702,
855
+ 649
856
+ ],
857
+ "page_idx": 5
858
+ },
859
+ {
860
+ "type": "text",
861
+ "text": "Then, we compare different paradigms in terms of training cost, pre-training performance and downstream performance. To comprehensively examine our paradigm, we conduct a series of experiments with the three aforementioned learning rate schedules.",
862
+ "bbox": [
863
+ 507,
864
+ 653,
865
+ 882,
866
+ 749
867
+ ],
868
+ "page_idx": 5
869
+ },
870
+ {
871
+ "type": "text",
872
+ "text": "Pre-Training Performance From Table 3, we observe that, compared to PTFS, our paradigm reduces the total training cost to $58\\%$ while maintaining comparable pre-training performance. Particularly, when using the cosine learning rate schedule, our paradigm even slightly outperforms PTFS. On the other hand, as expected, the training cost of our paradigm is still higher than that of CPT, however, it always achieves better performance than CPT, regardless of the schedule used.",
873
+ "bbox": [
874
+ 507,
875
+ 760,
876
+ 884,
877
+ 920
878
+ ],
879
+ "page_idx": 5
880
+ },
881
+ {
882
+ "type": "page_number",
883
+ "text": "13586",
884
+ "bbox": [
885
+ 477,
886
+ 927,
887
+ 524,
888
+ 940
889
+ ],
890
+ "page_idx": 5
891
+ },
892
+ {
893
+ "type": "table",
894
+ "img_path": "images/650139e262310782cece8d0bfbc383ee19abda8e8eb530caa27e44158074ccbd.jpg",
895
+ "table_caption": [],
896
+ "table_footnote": [],
897
+ "table_body": "<table><tr><td rowspan=\"2\">LRS</td><td rowspan=\"2\">TP</td><td rowspan=\"2\">Cost</td><td colspan=\"3\">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan=\"3\">Cos</td><td>PTFS</td><td>1.00×</td><td>20.94</td><td>19.35</td><td>18.41</td></tr><tr><td>CPT</td><td>0.40×</td><td>21.23</td><td>19.78</td><td>18.92</td></tr><tr><td>Ours</td><td>0.58×</td><td>20.23</td><td>18.87</td><td>18.11</td></tr><tr><td rowspan=\"3\">Knee</td><td>PTFS</td><td>1.00×</td><td>20.30</td><td>18.84</td><td>17.98</td></tr><tr><td>CPT</td><td>0.40×</td><td>20.67</td><td>19.34</td><td>18.56</td></tr><tr><td>Ours</td><td>0.58×</td><td>20.20</td><td>18.85</td><td>18.09</td></tr><tr><td rowspan=\"3\">Multi</td><td>PTFS</td><td>1.00×</td><td>20.37</td><td>18.92</td><td>18.06</td></tr><tr><td>CPT</td><td>0.40×</td><td>20.74</td><td>19.44</td><td>18.68</td></tr><tr><td>Ours</td><td>0.58×</td><td>20.49</td><td>18.92</td><td>18.09</td></tr></table>",
898
+ "bbox": [
899
+ 119,
900
+ 80,
901
+ 490,
902
+ 293
903
+ ],
904
+ "page_idx": 6
905
+ },
906
+ {
907
+ "type": "text",
908
+ "text": "Overall, our paradigm achieves a better balance between pre-training performance and total training cost during version updates of LLMs.3",
909
+ "bbox": [
910
+ 112,
911
+ 373,
912
+ 489,
913
+ 422
914
+ ],
915
+ "page_idx": 6
916
+ },
917
+ {
918
+ "type": "text",
919
+ "text": "Performance on Downstream Tasks Furthermore, we investigate the performance of different training paradigms across nine downstream tasks, including $C^3$ (Sun et al., 2020), GSM8K (Cobbe et al., 2021), MMLU (Hendrycks et al., 2021), CSL (Li et al., 2022), C-EVAL (Huang et al., 2023), BBH (Suzgun et al., 2023), CMMLU (Li et al., 2024), GAOKAO (Zhang et al., 2023) and AGIEval (Zhong et al., 2024). To this end, we first construct a general supervised fine-tuning (SFT) dataset with 1.8B tokens and then we perform SFT on each of the four versions of the updated LLMs.",
920
+ "bbox": [
921
+ 112,
922
+ 436,
923
+ 489,
924
+ 627
925
+ ],
926
+ "page_idx": 6
927
+ },
928
+ {
929
+ "type": "text",
930
+ "text": "From the results listed in Table 4, we clearly find that our paradigm can still obtain better average performance than PTFS and CPT, which further proves the effectiveness of our paradigm.",
931
+ "bbox": [
932
+ 112,
933
+ 630,
934
+ 487,
935
+ 695
936
+ ],
937
+ "page_idx": 6
938
+ },
939
+ {
940
+ "type": "text",
941
+ "text": "4.3 Generalization of Our Paradigm",
942
+ "text_level": 1,
943
+ "bbox": [
944
+ 112,
945
+ 711,
946
+ 416,
947
+ 727
948
+ ],
949
+ "page_idx": 6
950
+ },
951
+ {
952
+ "type": "text",
953
+ "text": "Subsequently, we explore the generalization of our paradigm in the following aspects, including model architecture, model size, data size, and maximum learning rate, all of which are crucial for the practical applications of LLMs. In all of these experiments, we maintain the use of the cosine learning rate schedule.",
954
+ "bbox": [
955
+ 112,
956
+ 734,
957
+ 489,
958
+ 847
959
+ ],
960
+ "page_idx": 6
961
+ },
962
+ {
963
+ "type": "table",
964
+ "img_path": "images/f6473236d8fba53013f2326ebd0ad8713179d13b215276980439090b24c4d99d.jpg",
965
+ "table_caption": [
966
+ "Table 5: The generalization of our paradigm in terms of model architecture. Based on Qwen-1.2B, we conduct experiments with the same setting as LLaMA-1.2B."
967
+ ],
968
+ "table_footnote": [],
969
+ "table_body": "<table><tr><td rowspan=\"2\">Size</td><td rowspan=\"2\">TP</td><td colspan=\"3\">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan=\"3\">203M</td><td>PTFS</td><td>30.97</td><td>29.50</td><td>28.65</td></tr><tr><td>CPT</td><td>31.31</td><td>29.90</td><td>29.07</td></tr><tr><td>Ours</td><td>30.25</td><td>28.94</td><td>28.19</td></tr><tr><td rowspan=\"3\">406M</td><td>PTFS</td><td>26.58</td><td>25.06</td><td>24.19</td></tr><tr><td>CPT</td><td>26.89</td><td>25.49</td><td>24.67</td></tr><tr><td>Ours</td><td>25.85</td><td>24.52</td><td>23.79</td></tr><tr><td rowspan=\"3\">608M</td><td>PTFS</td><td>23.12</td><td>21.75</td><td>20.93</td></tr><tr><td>CPT</td><td>23.50</td><td>22.26</td><td>21.52</td></tr><tr><td>Ours</td><td>22.59</td><td>21.43</td><td>20.77</td></tr><tr><td rowspan=\"3\">1.2B</td><td>PTFS</td><td>20.84</td><td>19.28</td><td>18.36</td></tr><tr><td>CPT</td><td>21.22</td><td>19.79</td><td>18.97</td></tr><tr><td>Ours</td><td>20.13</td><td>18.81</td><td>18.09</td></tr><tr><td rowspan=\"3\">2.1B</td><td>PTFS</td><td>18.33</td><td>16.88</td><td>16.04</td></tr><tr><td>CPT</td><td>18.76</td><td>17.47</td><td>16.72</td></tr><tr><td>Ours</td><td>17.82</td><td>16.63</td><td>15.97</td></tr><tr><td rowspan=\"3\">3.1B</td><td>PTFS</td><td>17.22</td><td>15.87</td><td>15.07</td></tr><tr><td>CPT</td><td>17.67</td><td>16.48</td><td>15.77</td></tr><tr><td>Ours</td><td>16.84</td><td>15.72</td><td>15.09</td></tr></table>",
970
+ "bbox": [
971
+ 539,
972
+ 80,
973
+ 857,
974
+ 461
975
+ ],
976
+ "page_idx": 6
977
+ },
978
+ {
979
+ "type": "text",
980
+ "text": "Table 6: The generalization of our paradigm in terms of model size. The model sizes range from 203M to 3.1B.",
981
+ "bbox": [
982
+ 507,
983
+ 470,
984
+ 882,
985
+ 500
986
+ ],
987
+ "page_idx": 6
988
+ },
989
+ {
990
+ "type": "text",
991
+ "text": "Model Architecture To demonstrate the generalization of our paradigm on model architecture, we use Qwen-1.2B (Bai et al., 2023) to re-conduct experiments with the same setting as LLaMA-1.2B.",
992
+ "bbox": [
993
+ 507,
994
+ 526,
995
+ 884,
996
+ 589
997
+ ],
998
+ "page_idx": 6
999
+ },
1000
+ {
1001
+ "type": "text",
1002
+ "text": "Similar to the experimental results of LLaMA-1.2B presented in Table 3, the experimental results of Qwen-1.2B shown in Table 5 further demonstrate the superiority of our paradigm in balancing model performance and training cost. This validates the generalization of our paradigm in terms of model architecture.",
1003
+ "bbox": [
1004
+ 507,
1005
+ 590,
1006
+ 882,
1007
+ 702
1008
+ ],
1009
+ "page_idx": 6
1010
+ },
1011
+ {
1012
+ "type": "text",
1013
+ "text": "Model Size We then focus on the generalization of our paradigm on model size. To this end, we vary the number of model parameters to conduct experiments. In total, we consider the following six model sizes: 203M, 406M, 608M, 1.2B, 2.1B, 3.1B, of which detailed hyper-parameters are listed in Appendix A.",
1014
+ "bbox": [
1015
+ 507,
1016
+ 711,
1017
+ 882,
1018
+ 824
1019
+ ],
1020
+ "page_idx": 6
1021
+ },
1022
+ {
1023
+ "type": "text",
1024
+ "text": "From the results shown in Table 6, we observe that our paradigm achieves pre-training performance comparable to PTFS across different sizes of LLMs and outperforms CPT. This validates the generalization of our paradigm in terms of model size.",
1025
+ "bbox": [
1026
+ 507,
1027
+ 825,
1028
+ 882,
1029
+ 920
1030
+ ],
1031
+ "page_idx": 6
1032
+ },
1033
+ {
1034
+ "type": "page_footnote",
1035
+ "text": "3We also compare our paradigm with CPT based on equal training cost, with results detailed in Appendix D. Besides, we also compare PTFS, CPT and our paradigm in the scenario with varying data increments. The corresponding results are listed in Appendix E.",
1036
+ "bbox": [
1037
+ 112,
1038
+ 858,
1039
+ 489,
1040
+ 921
1041
+ ],
1042
+ "page_idx": 6
1043
+ },
1044
+ {
1045
+ "type": "page_number",
1046
+ "text": "13587",
1047
+ "bbox": [
1048
+ 477,
1049
+ 927,
1050
+ 524,
1051
+ 940
1052
+ ],
1053
+ "page_idx": 6
1054
+ },
1055
+ {
1056
+ "type": "table",
1057
+ "img_path": "images/5d8d800ff8d3fdd8dc210fce557140b218247adbfaade6a6b660dee231d8ad1e.jpg",
1058
+ "table_caption": [],
1059
+ "table_footnote": [],
1060
+ "table_body": "<table><tr><td rowspan=\"2\">Data</td><td rowspan=\"2\">TP</td><td colspan=\"3\">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan=\"3\">21B</td><td>PTFS</td><td>24.66</td><td>22.31</td><td>20.84</td></tr><tr><td>CPT</td><td>25.10</td><td>22.84</td><td>21.56</td></tr><tr><td>Ours</td><td>23.59</td><td>21.41</td><td>20.27</td></tr><tr><td rowspan=\"3\">42B</td><td>PTFS</td><td>20.84</td><td>19.28</td><td>18.36</td></tr><tr><td>CPT</td><td>21.11</td><td>19.70</td><td>18.87</td></tr><tr><td>Ours</td><td>20.13</td><td>18.81</td><td>18.09</td></tr><tr><td rowspan=\"3\">168B</td><td>PTFS</td><td>16.70</td><td>15.97</td><td>15.54</td></tr><tr><td>CPT</td><td>16.90</td><td>16.25</td><td>15.86</td></tr><tr><td>Ours</td><td>16.47</td><td>15.86</td><td>15.51</td></tr></table>",
1061
+ "bbox": [
1062
+ 146,
1063
+ 80,
1064
+ 455,
1065
+ 294
1066
+ ],
1067
+ "page_idx": 7
1068
+ },
1069
+ {
1070
+ "type": "text",
1071
+ "text": "Data Size Next, we switch our attention to the generalization of our paradigm on data size. To do this, we conduct experiments using different sizes of training data: 21B, 42B, and 168B tokens. Correspondingly, the training steps are 5K, 10K and 40K for each LLM update, respectively.",
1072
+ "bbox": [
1073
+ 112,
1074
+ 374,
1075
+ 487,
1076
+ 470
1077
+ ],
1078
+ "page_idx": 7
1079
+ },
1080
+ {
1081
+ "type": "text",
1082
+ "text": "As shown in Table 7, our paradigm achieves optimal pre-training performance across different data sizes, which further demonstrates the generalization of our paradigm.",
1083
+ "bbox": [
1084
+ 112,
1085
+ 470,
1086
+ 489,
1087
+ 536
1088
+ ],
1089
+ "page_idx": 7
1090
+ },
1091
+ {
1092
+ "type": "text",
1093
+ "text": "Maximum Learning Rate Finally, we aim to verify the generalization of our paradigm in terms of the maximum learning rate. We conduct experiments by setting the maximum learning rates as 5e-5, 1e-4, 3e-4, 5e-4, 8e-4, respectively.",
1094
+ "bbox": [
1095
+ 112,
1096
+ 545,
1097
+ 489,
1098
+ 625
1099
+ ],
1100
+ "page_idx": 7
1101
+ },
1102
+ {
1103
+ "type": "text",
1104
+ "text": "As shown in Table 8, as the maximum learning rate increases, our paradigm consistently achieves better or comparable performance than PTFS, and significantly outperforms CPT. This strongly highlights the generalization of our paradigm in terms of the maximum learning rate.",
1105
+ "bbox": [
1106
+ 112,
1107
+ 626,
1108
+ 489,
1109
+ 722
1110
+ ],
1111
+ "page_idx": 7
1112
+ },
1113
+ {
1114
+ "type": "text",
1115
+ "text": "5 Related Work",
1116
+ "text_level": 1,
1117
+ "bbox": [
1118
+ 112,
1119
+ 734,
1120
+ 268,
1121
+ 750
1122
+ ],
1123
+ "page_idx": 7
1124
+ },
1125
+ {
1126
+ "type": "text",
1127
+ "text": "Continual Training As one of the most direct approaches for version updates of LLMs, continual training has attracted increasing attention, of which related studies can be broadly categorized into the following four types: 1) methods introducing additional parameters (Ke et al., 2022, 2023; Song et al., 2023; PENG et al., 2024), 2) prompt-based methods (Wang et al., 2022b,a; Razdaibiedina et al., 2023), 3) multi-stage training methods (Liu et al., 2021; Zhou et al., 2022, 2023; Liu et al., 2023;",
1128
+ "bbox": [
1129
+ 112,
1130
+ 760,
1131
+ 489,
1132
+ 921
1133
+ ],
1134
+ "page_idx": 7
1135
+ },
1136
+ {
1137
+ "type": "table",
1138
+ "img_path": "images/5560e9db2126d5990dbf098de5d366f6e752a2f64c070c013856e9264a5f8165.jpg",
1139
+ "table_caption": [
1140
+ "Table 7: The generalization of our paradigm in terms of data size. The total data sizes (for four versions) range from 21B to 168B."
1141
+ ],
1142
+ "table_footnote": [],
1143
+ "table_body": "<table><tr><td rowspan=\"2\">MLR</td><td rowspan=\"2\">TP</td><td colspan=\"3\">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan=\"3\">5e-5</td><td>PTFS</td><td>34.78</td><td>29.53</td><td>26.65</td></tr><tr><td>CPT</td><td>35.23</td><td>30.08</td><td>27.23</td></tr><tr><td>Ours</td><td>29.99</td><td>25.54</td><td>23.27</td></tr><tr><td rowspan=\"3\">1e-4</td><td>PTFS</td><td>26.34</td><td>23.28</td><td>21.57</td></tr><tr><td>CPT</td><td>26.64</td><td>23.70</td><td>22.04</td></tr><tr><td>Ours</td><td>23.89</td><td>21.32</td><td>19.97</td></tr><tr><td rowspan=\"3\">3e-4</td><td>PTFS</td><td>20.84</td><td>19.28</td><td>18.36</td></tr><tr><td>CPT</td><td>21.22</td><td>19.79</td><td>18.97</td></tr><tr><td>Ours</td><td>20.13</td><td>18.81</td><td>18.09</td></tr><tr><td rowspan=\"3\">5e-4</td><td>PTFS</td><td>19.89</td><td>18.62</td><td>17.85</td></tr><tr><td>CPT</td><td>20.17</td><td>19.05</td><td>18.38</td></tr><tr><td>Ours</td><td>19.53</td><td>18.45</td><td>17.85</td></tr><tr><td rowspan=\"3\">8e-4</td><td>PTFS</td><td>19.38</td><td>18.26</td><td>17.58</td></tr><tr><td>CPT</td><td>19.69</td><td>18.73</td><td>18.16</td></tr><tr><td>Ours</td><td>19.22</td><td>18.30</td><td>17.78</td></tr></table>",
1144
+ "bbox": [
1145
+ 539,
1146
+ 80,
1147
+ 855,
1148
+ 406
1149
+ ],
1150
+ "page_idx": 7
1151
+ },
1152
+ {
1153
+ "type": "text",
1154
+ "text": "Table 8: The generalization of our paradigm in terms of the maximum learning rate. The maximum learning rate ranges from 5e-5 to 8e-4. \"MLR\" indicates the maximum learning rate.",
1155
+ "bbox": [
1156
+ 507,
1157
+ 414,
1158
+ 882,
1159
+ 473
1160
+ ],
1161
+ "page_idx": 7
1162
+ },
1163
+ {
1164
+ "type": "text",
1165
+ "text": "Huang et al., 2024), and 4) scenario-specific methods (Peng et al., 2023; Gogoulou et al., 2024; Xie et al., 2024). Significantly different from the above studies, our paradigm comprises one main learning rate path, where we perform pre-training from scratch with the maximal learning rate, and multiple learning rate branching paths, where we perform continual pre-training with a complete learning rate decay process. Thus, our paradigm achieve a better trade-off between the performance and training cost than PTFS and CPT.",
1166
+ "bbox": [
1167
+ 507,
1168
+ 502,
1169
+ 884,
1170
+ 678
1171
+ ],
1172
+ "page_idx": 7
1173
+ },
1174
+ {
1175
+ "type": "text",
1176
+ "text": "Learning Rate The learning rate is one of the most crucial hyper-parameters for training LLMs. Existing learning rate schedules can be broadly divided into the following four policies (Wu et al., 2019; Wu and Liu, 2023; Jin et al., 2023): 1) Fixed learning rate policy, such as constant learning rate schedule; 2) Decaying learning rate policy, such as inverse square root learning rate schedule; 3) Cyclic learning rate policy, such as cosine learning rate schedule; 4) Composite learning rate policy, such as Knee and multi-step learning rate schedules. In addition, there are some recent studies exploring learning rate schedules for LLMs, including Warmup-Stable-Decay schedule (Hu et al., 2024)",
1177
+ "bbox": [
1178
+ 507,
1179
+ 696,
1180
+ 884,
1181
+ 921
1182
+ ],
1183
+ "page_idx": 7
1184
+ },
1185
+ {
1186
+ "type": "page_number",
1187
+ "text": "13588",
1188
+ "bbox": [
1189
+ 477,
1190
+ 927,
1191
+ 524,
1192
+ 940
1193
+ ],
1194
+ "page_idx": 7
1195
+ },
1196
+ {
1197
+ "type": "text",
1198
+ "text": "and constant learning rate with cooldown (Hägele et al., 2024). Particularly, our paradigm is a well-designed training paradigm for version updates of LLMs, which is applicable to cosine, Knee, and multi-step and other learning rate schedules.",
1199
+ "bbox": [
1200
+ 112,
1201
+ 84,
1202
+ 489,
1203
+ 164
1204
+ ],
1205
+ "page_idx": 8
1206
+ },
1207
+ {
1208
+ "type": "text",
1209
+ "text": "6 Conclusion and Future Work",
1210
+ "text_level": 1,
1211
+ "bbox": [
1212
+ 112,
1213
+ 177,
1214
+ 401,
1215
+ 192
1216
+ ],
1217
+ "page_idx": 8
1218
+ },
1219
+ {
1220
+ "type": "text",
1221
+ "text": "This paper focuses on how to effectively balance model performance and training cost for version updates of LLMs. We begin by comparing two training paradigms: PTFS and CPT, concluding that PTFS achieves better pre-training performance, while CPT has lower training cost. Through the analysis in the preliminary study, we find that 1) a large learning rate is beneficial for providing better initialization checkpoints for subsequent updates, and 2) a complete learning rate decay process enables the updated LLMs to achieve optimal performance. Based on the above two findings, we propose a learning rate path switching paradigm for version updates of LLMs, which comprises one main path and multiple branching paths. On the main path, we pre-train the LLMs with the maximal learning rate to provide superior initialization checkpoints for subsequent updates. When an update is required, our paradigm switches from the main path to a branching path, undergoing a complete learning rate decay process. Experimental results and further analyses strongly demonstrate the effectiveness and generalization of our paradigm.",
1222
+ "bbox": [
1223
+ 115,
1224
+ 202,
1225
+ 489,
1226
+ 571
1227
+ ],
1228
+ "page_idx": 8
1229
+ },
1230
+ {
1231
+ "type": "text",
1232
+ "text": "In the future, we will further expand the practical scope of our paradigm. Current research mainly focuses on the pre-training phase and does not include supervised fine-tuning, safety alignment, etc., which could be integrated into the fast-decaying stage of our paradigm. Additionally, we plan to explore the applicability of our paradigm in the context of multimodal large language models.",
1233
+ "bbox": [
1234
+ 112,
1235
+ 573,
1236
+ 489,
1237
+ 702
1238
+ ],
1239
+ "page_idx": 8
1240
+ },
1241
+ {
1242
+ "type": "text",
1243
+ "text": "Limitations",
1244
+ "text_level": 1,
1245
+ "bbox": [
1246
+ 112,
1247
+ 713,
1248
+ 218,
1249
+ 728
1250
+ ],
1251
+ "page_idx": 8
1252
+ },
1253
+ {
1254
+ "type": "text",
1255
+ "text": "Although the training cost of our paradigm is significantly lower than that of PTFS, it is still higher than that of CPT. Hence, we plan to design a precise method to determine the proportion of the fast-decaying steps to the total steps, which can further reduce the training cost of our paradigm.",
1256
+ "bbox": [
1257
+ 112,
1258
+ 739,
1259
+ 489,
1260
+ 835
1261
+ ],
1262
+ "page_idx": 8
1263
+ },
1264
+ {
1265
+ "type": "text",
1266
+ "text": "Acknowledgements",
1267
+ "text_level": 1,
1268
+ "bbox": [
1269
+ 112,
1270
+ 847,
1271
+ 285,
1272
+ 864
1273
+ ],
1274
+ "page_idx": 8
1275
+ },
1276
+ {
1277
+ "type": "text",
1278
+ "text": "The project was supported by National Key R&D Program of China (No. 2022ZD0160501), National Natural Science Foundation of China (No.",
1279
+ "bbox": [
1280
+ 112,
1281
+ 873,
1282
+ 489,
1283
+ 920
1284
+ ],
1285
+ "page_idx": 8
1286
+ },
1287
+ {
1288
+ "type": "text",
1289
+ "text": "62276219), and the Public Technology Service Platform Project of Xiamen (No. 3502Z20231043).",
1290
+ "bbox": [
1291
+ 509,
1292
+ 84,
1293
+ 884,
1294
+ 116
1295
+ ],
1296
+ "page_idx": 8
1297
+ },
1298
+ {
1299
+ "type": "text",
1300
+ "text": "References",
1301
+ "text_level": 1,
1302
+ "bbox": [
1303
+ 510,
1304
+ 143,
1305
+ 608,
1306
+ 156
1307
+ ],
1308
+ "page_idx": 8
1309
+ },
1310
+ {
1311
+ "type": "list",
1312
+ "sub_type": "ref_text",
1313
+ "list_items": [
1314
+ "Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, et al. 2023. Qwen technical report. arXiv.",
1315
+ "Xiao Bi, Deli Chen, Guanting Chen, Shanhuang Chen, Damai Dai, Chengqi Deng, et al. 2024. Deepseek llm: Scaling open-source language models with longtermism. arXiv.",
1316
+ "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, et al. 2021. Training verifiers to solve math word problems. arXiv.",
1317
+ "Can Cui, Yunsheng Ma, Xu Cao, Wenqian Ye, Yang Zhou, Kaizhao Liang, et al. 2024. A survey on multimodal large language models for autonomous driving. In WACVW Workshops.",
1318
+ "Evangelia Gogoulou, Timothee Lesort, Magnus Boman, and Joakim Nivre. 2024. Continual learning under language shift. In TSD.",
1319
+ "Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V. Chawla, et al. 2024. Large language model based multi-agents: A survey of progress and challenges. arXiv.",
1320
+ "Kshitij Gupta, Benjamin Thérien, Adam Ibrahim, Mats L. Richter, Quentin Anthony, Eugene Belilovsky, et al. 2023. Continual pre-training of large language models: How to (re)warm your model? In ICML Workshop.",
1321
+ "Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In ICLR.",
1322
+ "Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, et al. 2024. Minicpm: Unveiling the potential of small language models with scalable training strategies. In COLM.",
1323
+ "Jianheng Huang, Leyang Cui, Ante Wang, Chengyi Yang, Xinting Liao, Linfeng Song, Junfeng Yao, and Jinsong Su. 2024. Mitigating catastrophic forgetting in large language models with self-synthesized rehearsal. In ACL.",
1324
+ "Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, et al. 2023. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. In NeurIPS.",
1325
+ "Alexander Hagele, Elie Bakouch, Atli Kosson, Loubna Ben Allal, Leandro Von Werra, and Martin Jaggi. 2024. Scaling laws and compute-optimal training beyond fixed training durations. In ICML Workshop."
1326
+ ],
1327
+ "bbox": [
1328
+ 510,
1329
+ 165,
1330
+ 884,
1331
+ 920
1332
+ ],
1333
+ "page_idx": 8
1334
+ },
1335
+ {
1336
+ "type": "page_number",
1337
+ "text": "13589",
1338
+ "bbox": [
1339
+ 477,
1340
+ 927,
1341
+ 524,
1342
+ 940
1343
+ ],
1344
+ "page_idx": 8
1345
+ },
1346
+ {
1347
+ "type": "list",
1348
+ "sub_type": "ref_text",
1349
+ "list_items": [
1350
+ "Nikhil Iyer, V Thejas, Nipun Kwatra, Ramachandran Ramjee, and Muthian Sivathanu. 2023. Wide-minima density hypothesis and the explore-exploit learning rate schedule. JMLR.",
1351
+ "Hongpeng Jin, Wenqi Wei, Xuyu Wang, Wenbin Zhang, Hongpeng Wu, YanzhaoJin, Wenqi Wei, et al. 2023. Rethinking learning rate tuning in the era of large language models. In CogMI.",
1352
+ "Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, et al. 2020. Scaling laws for neural language models. arXiv.",
1353
+ "Zixuan Ke, Haowei Lin, Yijia Shao, Hu Xu, Lei Shu, and Bing Liu. 2022. Continual training of language models for few-shot learning. In EMNLP.",
1354
+ "Zixuan Ke, Yijia Shao, Haowei Lin, Tatsuya Konishi, Gyuhak Kim, and Bing Liu. 2023. Continual pretraining of language models. In ICLR.",
1355
+ "Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. 2024. Cmmlu: Measuring massive multitask language understanding in chinese. In Findings of ACL.",
1356
+ "Yudong Li, Yuqing Zhang, Zhe Zhao, Linlin Shen, Wei-jie Liu, Weiquan Mao, and Hui Zhang. 2022. Csl: A large-scale chinese scientific literature dataset. In COLING.",
1357
+ "Junpeng Liu, Kaiyu Huang, Hao Yu, Jiuyi Li, Jinsong Su, and Degen Huang. 2023. Continual learning for multilingual neural machine translation via dual importance-based model division. In EMNLP.",
1358
+ "Xin Liu, Baosong Yang, Dayiheng Liu, Haibo Zhang, Weihua Luo, Min Zhang, Haiying Zhang, and Jinsong Su. 2021. Bridging subword gaps in pretrain-finetune paradigm for natural language generation. In ACL.",
1359
+ "Ben Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, et al. 2020. Language models are few-shot learners. In NeurIPS.",
1360
+ "Bohao PENG, Zhuotao Tian, Shu Liu, Ming-Chang Yang, and Jiaya Jia. 2024. Scalable language model with generalized continual learning. In ICLR.",
1361
+ "Guangyue Peng, Tao Ge, Si-Qing Chen, Furu Wei, and Houfeng Wang. 2023. Semiparametric language models are scalable continual learners. arXiv.",
1362
+ "Yujia Qin, Jiajie Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2022. Elle: Efficient lifelong pre-training for emerging data. In Findings of ACL.",
1363
+ "Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, and Amjad Almahairi. 2023. Progressive prompts: Continual learning for language models. In ICLR."
1364
+ ],
1365
+ "bbox": [
1366
+ 115,
1367
+ 85,
1368
+ 485,
1369
+ 919
1370
+ ],
1371
+ "page_idx": 9
1372
+ },
1373
+ {
1374
+ "type": "list",
1375
+ "sub_type": "ref_text",
1376
+ "list_items": [
1377
+ "Leslie N Smith and Nicholay Topin. 2019. Superconvergence: Very fast training of neural networks using large learning rates. In Artificial Intelligence and Machine Learning for Multi-domain Operations Applications.",
1378
+ "Chenyang Song, Xu Han, Zheni Zeng, Kuai Li, Chen Chen, Zhiyuan Liu, et al. 2023. Conpet: Continual parameter-efficient tuning for large language models. arXiv.",
1379
+ "Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2020. Investigating prior knowledge for challenging Chinese machine reading comprehension. TACL.",
1380
+ "Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, et al. 2023. Challenging big-bench tasks and whether chain-of-thought can solve them. In Findings of ACL.",
1381
+ "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, et al. 2023a. Llama: Open and efficient foundation language models. arXiv.",
1382
+ "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv.",
1383
+ "Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, et al. 2024. A survey on large language model based autonomous agents. Frontiers Comput. Sci.",
1384
+ "Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, et al. 2022a. Dualprompt: Complementary prompting for rehearsal-free continual learning. In ECCV.",
1385
+ "Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, et al. 2022b. Learning to prompt for continual learning. In CVPR.",
1386
+ "Jiayang Wu, Wensheng Gan, Zefeng Chen, Shicheng Wan, and Philip S. Yu. 2023. Multimodal large language models: A survey. In IEEE BigData.",
1387
+ "Yanzhao Wu and Ling Liu. 2023. Selecting and composing learning rate policies for deep neural networks. ACM TIST.",
1388
+ "Yanzhao Wu, Ling Liu, Juhyun Bae, Ka-Ho Chow, Arun Iyengar, Calton Pu, et al. 2019. Demystifying learning rate policies for high accuracy training of deep neural networks. In IEEE BigData.",
1389
+ "Yong Xie, Karan Aggarwal, and Aitzaz Ahmad. 2024. Efficient continual pre-training for building domain specific large language models. In *Findings of ACL*.",
1390
+ "Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, et al. 2023. Baichuan 2: Open large-scale language models. arXiv."
1391
+ ],
1392
+ "bbox": [
1393
+ 510,
1394
+ 85,
1395
+ 880,
1396
+ 919
1397
+ ],
1398
+ "page_idx": 9
1399
+ },
1400
+ {
1401
+ "type": "page_number",
1402
+ "text": "13590",
1403
+ "bbox": [
1404
+ 477,
1405
+ 928,
1406
+ 524,
1407
+ 940
1408
+ ],
1409
+ "page_idx": 9
1410
+ },
1411
+ {
1412
+ "type": "list",
1413
+ "sub_type": "ref_text",
1414
+ "list_items": [
1415
+ "Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, et al. 2023. Glm-130b: An open bilingual pre-trained model. In ICLR.",
1416
+ "Xiaotian Zhang, Chunyang Li, Yi Zong, Zhengyu Ying, Liang He, and Xipeng Qiu. 2023. Evaluating the performance of large language models on gaokao benchmark. arXiv.",
1417
+ "Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, et al. 2023. A survey of large language models. arXiv.",
1418
+ "Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2024. Agieval: A human-centric benchmark for evaluating foundation models. In Findings of NAACL.",
1419
+ "Chulun Zhou, Yunlong Liang, Fandong Meng, Jie Zhou, Jinan Xu, Hongji Wang, Min Zhang, and Jinsong Su. 2023. A multi-task multi-stage transitional training framework for neural chat translation. TPAMI.",
1420
+ "Chulun Zhou, Fandong Meng, Jie Zhou, Min Zhang, Hongji Wang, and Jinsong Su. 2022. Confidence based bidirectional global context aware training framework for neural machine translation. In ACL."
1421
+ ],
1422
+ "bbox": [
1423
+ 115,
1424
+ 85,
1425
+ 487,
1426
+ 437
1427
+ ],
1428
+ "page_idx": 10
1429
+ },
1430
+ {
1431
+ "type": "page_number",
1432
+ "text": "13591",
1433
+ "bbox": [
1434
+ 477,
1435
+ 927,
1436
+ 522,
1437
+ 940
1438
+ ],
1439
+ "page_idx": 10
1440
+ },
1441
+ {
1442
+ "type": "table",
1443
+ "img_path": "images/3543006f12658401a18822b83e2244abfd80b2f847c5d1e8d0aa0d102e634801.jpg",
1444
+ "table_caption": [],
1445
+ "table_footnote": [],
1446
+ "table_body": "<table><tr><td>Size</td><td>MLR</td><td>Hidden</td><td>Head</td><td>Layer</td></tr><tr><td>203M</td><td>1e-3</td><td>512</td><td>8</td><td>24</td></tr><tr><td>406M</td><td>6e-4</td><td>1,024</td><td>16</td><td>12</td></tr><tr><td>608M</td><td>6e-4</td><td>1,024</td><td>16</td><td>24</td></tr><tr><td>1.2B</td><td>3e-4</td><td>1,536</td><td>16</td><td>24</td></tr><tr><td>2.1B</td><td>3e-4</td><td>1,536</td><td>16</td><td>48</td></tr><tr><td>3.1B</td><td>3e-4</td><td>8,192</td><td>32</td><td>40</td></tr></table>",
1447
+ "bbox": [
1448
+ 132,
1449
+ 80,
1450
+ 470,
1451
+ 211
1452
+ ],
1453
+ "page_idx": 11
1454
+ },
1455
+ {
1456
+ "type": "image",
1457
+ "img_path": "images/c6e98f0abb5e8cd027be577ef73c83be5fa2b43cd49a852b03bc13f6bff370fe.jpg",
1458
+ "image_caption": [
1459
+ "Figure 6: The learning rate curves of different adaptation method of CPT for version updates of LLMs. The learning rate curves are plotted based on cosine learning rate schedules."
1460
+ ],
1461
+ "image_footnote": [],
1462
+ "bbox": [
1463
+ 129,
1464
+ 271,
1465
+ 478,
1466
+ 502
1467
+ ],
1468
+ "page_idx": 11
1469
+ },
1470
+ {
1471
+ "type": "text",
1472
+ "text": "A Detailed Hyper-Parameters",
1473
+ "text_level": 1,
1474
+ "bbox": [
1475
+ 114,
1476
+ 607,
1477
+ 391,
1478
+ 624
1479
+ ],
1480
+ "page_idx": 11
1481
+ },
1482
+ {
1483
+ "type": "text",
1484
+ "text": "In this work, we compare PTFS, CPT and our paradigm based on LLMs with different sizes, whose hyper-parameters are listed in Table 9. Following Kaplan et al.; Mann et al., we set smaller maximum learning rates for larger LLMs. Besides, the minimum learning rate is configured to be $10\\%$ of the maximum learning rate.",
1485
+ "bbox": [
1486
+ 112,
1487
+ 639,
1488
+ 489,
1489
+ 751
1490
+ ],
1491
+ "page_idx": 11
1492
+ },
1493
+ {
1494
+ "type": "text",
1495
+ "text": "B CPT Variants",
1496
+ "text_level": 1,
1497
+ "bbox": [
1498
+ 114,
1499
+ 771,
1500
+ 272,
1501
+ 788
1502
+ ],
1503
+ "page_idx": 11
1504
+ },
1505
+ {
1506
+ "type": "text",
1507
+ "text": "In order to adapt traditional CPT for version updates of LLMs, we compare three variants of CPT in Figure 6:",
1508
+ "bbox": [
1509
+ 112,
1510
+ 803,
1511
+ 487,
1512
+ 851
1513
+ ],
1514
+ "page_idx": 11
1515
+ },
1516
+ {
1517
+ "type": "text",
1518
+ "text": "- RewarmMax: Warm up the learning rate periodically, and use the learning rate schedule of the old version to train the new version of LLMs (Gupta et al., 2023).",
1519
+ "bbox": [
1520
+ 114,
1521
+ 856,
1522
+ 489,
1523
+ 920
1524
+ ],
1525
+ "page_idx": 11
1526
+ },
1527
+ {
1528
+ "type": "table",
1529
+ "img_path": "images/5f6356de2a449045e65c2d8b5c6e4a8735e0100d22ae14b4cbf575e5dae8dca0.jpg",
1530
+ "table_caption": [
1531
+ "Table 9: The detailed hyper-parameters of LLMs with different model sizes."
1532
+ ],
1533
+ "table_footnote": [],
1534
+ "table_body": "<table><tr><td rowspan=\"2\">LRS</td><td rowspan=\"2\">Variant</td><td colspan=\"3\">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan=\"3\">Cos</td><td>RewarmMax</td><td>21.22</td><td>19.79</td><td>18.97</td></tr><tr><td>ResetMax</td><td>21.11</td><td>19.70</td><td>18.87</td></tr><tr><td>KeepMin</td><td>23.00</td><td>21.99</td><td>21.26</td></tr><tr><td rowspan=\"3\">Knee</td><td>RewarmMax</td><td>20.74</td><td>19.46</td><td>18.70</td></tr><tr><td>ResetMax</td><td>20.56</td><td>19.27</td><td>18.52</td></tr><tr><td>KeepMin</td><td>22.22</td><td>21.36</td><td>20.37</td></tr><tr><td rowspan=\"3\">Multi</td><td>RewarmMax</td><td>20.80</td><td>19.55</td><td>18.82</td></tr><tr><td>ResetMax</td><td>20.62</td><td>19.37</td><td>18.65</td></tr><tr><td>KeepMin</td><td>22.11</td><td>21.24</td><td>20.60</td></tr></table>",
1535
+ "bbox": [
1536
+ 512,
1537
+ 82,
1538
+ 880,
1539
+ 296
1540
+ ],
1541
+ "page_idx": 11
1542
+ },
1543
+ {
1544
+ "type": "text",
1545
+ "text": "Table 10: The comparison among RewarmMax, ResetMax and KeepMin for CPT.",
1546
+ "bbox": [
1547
+ 507,
1548
+ 305,
1549
+ 882,
1550
+ 335
1551
+ ],
1552
+ "page_idx": 11
1553
+ },
1554
+ {
1555
+ "type": "list",
1556
+ "sub_type": "text",
1557
+ "list_items": [
1558
+ "- ResetMax: Directly set the learning rate as the maximum periodically, and use the learning rate schedule of the old version to train the new version of LLMs (Gupta et al., 2023).",
1559
+ "- KeepMin: Keep the learning rate at the minimum by using a constant learning rate schedule to ensure the convergence of LLMs during training (Gogoulou et al., 2024)."
1560
+ ],
1561
+ "bbox": [
1562
+ 509,
1563
+ 362,
1564
+ 882,
1565
+ 495
1566
+ ],
1567
+ "page_idx": 11
1568
+ },
1569
+ {
1570
+ "type": "text",
1571
+ "text": "Experimental results are listed in Table 10. We observe that ResetMax achieves the best pre-training performance among these variants. Therefore, we use ResetMax for the other experiments.",
1572
+ "bbox": [
1573
+ 507,
1574
+ 499,
1575
+ 882,
1576
+ 564
1577
+ ],
1578
+ "page_idx": 11
1579
+ },
1580
+ {
1581
+ "type": "text",
1582
+ "text": "C Performance of Downstream Tasks",
1583
+ "text_level": 1,
1584
+ "bbox": [
1585
+ 507,
1586
+ 577,
1587
+ 850,
1588
+ 593
1589
+ ],
1590
+ "page_idx": 11
1591
+ },
1592
+ {
1593
+ "type": "text",
1594
+ "text": "In addition to the standard training scale (LLaMA-1.2B trained for 42B tokens), we also evaluate LLMs with a larger training dataset (LLaMA-1.2B trained for 168B tokens) and a larger model size (LLaMA-3.1B trained for 42B tokens). We report the performance of downstream tasks across different versions of LLMs, as shown in Table 11. Experimental results show that our paradigm achieves superior average performance compared with PTFS and CPT across different training scales for downstream tasks.",
1595
+ "bbox": [
1596
+ 507,
1597
+ 606,
1598
+ 884,
1599
+ 781
1600
+ ],
1601
+ "page_idx": 11
1602
+ },
1603
+ {
1604
+ "type": "text",
1605
+ "text": "D Comparison between CPT and Ours",
1606
+ "text_level": 1,
1607
+ "bbox": [
1608
+ 507,
1609
+ 797,
1610
+ 863,
1611
+ 814
1612
+ ],
1613
+ "page_idx": 11
1614
+ },
1615
+ {
1616
+ "type": "text",
1617
+ "text": "Existing experimental results show that while our paradigm outperforms CPT in terms of performance, it has higher training cost. To provide a more direct comparison between CPT and our paradigm, we conduct an experiment where the training cost (measured by training steps) are kept",
1618
+ "bbox": [
1619
+ 507,
1620
+ 825,
1621
+ 882,
1622
+ 921
1623
+ ],
1624
+ "page_idx": 11
1625
+ },
1626
+ {
1627
+ "type": "page_number",
1628
+ "text": "13592",
1629
+ "bbox": [
1630
+ 477,
1631
+ 927,
1632
+ 524,
1633
+ 940
1634
+ ],
1635
+ "page_idx": 11
1636
+ },
1637
+ {
1638
+ "type": "table",
1639
+ "img_path": "images/e21ac11db2529958f76bb2c2cdaf1ca3f880e38e5993f749835405a669eb1da8.jpg",
1640
+ "table_caption": [],
1641
+ "table_footnote": [],
1642
+ "table_body": "<table><tr><td>Scale</td><td>Ver.</td><td>TP</td><td>C3</td><td>GSM8K</td><td>MMLU</td><td>CSL</td><td>C-EVAL</td><td>BBH</td><td>CMMLU</td><td>GAOKAO</td><td>AGIEval</td><td>AVG</td></tr><tr><td rowspan=\"9\">1.2B 42B</td><td rowspan=\"3\">V2</td><td>PTFS</td><td>38.00</td><td>4.63</td><td>24.00</td><td>38.25</td><td>30.09</td><td>17.43</td><td>25.37</td><td>18.10</td><td>14.59</td><td>23.38</td></tr><tr><td>CPT</td><td>37.00</td><td>4.09</td><td>23.52</td><td>35.11</td><td>27.42</td><td>18.55</td><td>25.63</td><td>18.86</td><td>13.40</td><td>22.62</td></tr><tr><td>Ours</td><td>38.60</td><td>5.08</td><td>22.94</td><td>39.08</td><td>28.38</td><td>20.79</td><td>24.88</td><td>18.48</td><td>14.73</td><td>23.66</td></tr><tr><td rowspan=\"3\">V3</td><td>PTFS</td><td>40.30</td><td>3.34</td><td>24.33</td><td>39.17</td><td>25.85</td><td>17.11</td><td>25.30</td><td>22.03</td><td>14.34</td><td>23.53</td></tr><tr><td>CPT</td><td>38.30</td><td>4.70</td><td>23.32</td><td>36.40</td><td>28.38</td><td>21.11</td><td>24.76</td><td>17.85</td><td>13.47</td><td>23.14</td></tr><tr><td>Ours</td><td>42.10</td><td>4.63</td><td>23.22</td><td>34.91</td><td>29.35</td><td>19.70</td><td>24.73</td><td>19.24</td><td>14.90</td><td>23.64</td></tr><tr><td rowspan=\"3\">V4</td><td>PTFS</td><td>35.70</td><td>4.25</td><td>24.93</td><td>38.75</td><td>27.04</td><td>16.73</td><td>24.97</td><td>21.01</td><td>14.10</td><td>23.05</td></tr><tr><td>CPT</td><td>43.90</td><td>4.55</td><td>22.20</td><td>38.69</td><td>27.19</td><td>21.62</td><td>24.43</td><td>18.23</td><td>13.50</td><td>23.81</td></tr><tr><td>Ours</td><td>41.90</td><td>5.53</td><td>24.09</td><td>40.24</td><td>27.71</td><td>21.84</td><td>24.78</td><td>17.24</td><td>14.40</td><td>24.19</td></tr><tr><td rowspan=\"9\">1.2B 168B</td><td rowspan=\"3\">V2</td><td>PTFS</td><td>38.90</td><td>6.82</td><td>23.49</td><td>40.33</td><td>29.27</td><td>23.28</td><td>25.14</td><td>23.29</td><td>14.39</td><td>24.99</td></tr><tr><td>CPT</td><td>43.80</td><td>7.13</td><td>24.61</td><td>37.22</td><td>26.52</td><td>22.96</td><td>25.40</td><td>20.13</td><td>14.25</td><td>24.67</td></tr><tr><td>Ours</td><td>43.20</td><td>8.95</td><td>25.43</td><td>40.45</td><td>26.90</td><td>22.16</td><td>25.45</td><td>18.73</td><td>15.94</td><td>25.25</td></tr><tr><td rowspan=\"3\">V3</td><td>PTFS</td><td>47.40</td><td>8.49</td><td>25.04</td><td>42.42</td><td>27.42</td><td>26.88</td><td>25.06</td><td>18.23</td><td>16.59</td><td>26.39</td></tr><tr><td>CPT</td><td>40.30</td><td>8.42</td><td>24.30</td><td>41.61</td><td>26.30</td><td>24.07</td><td>24.59</td><td>20.00</td><td>18.00</td><td>25.29</td></tr><tr><td>Ours</td><td>47.70</td><td>9.33</td><td>25.35</td><td>44.39</td><td>25.85</td><td>23.05</td><td>24.85</td><td>17.60</td><td>15.63</td><td>25.97</td></tr><tr><td rowspan=\"3\">V4</td><td>PTFS</td><td>48.50</td><td>8.19</td><td>24.73</td><td>44.37</td><td>26.82</td><td>25.70</td><td>25.19</td><td>19.49</td><td>15.36</td><td>26.48</td></tr><tr><td>CPT</td><td>49.10</td><td>8.34</td><td>25.48</td><td>40.60</td><td>27.27</td><td>22.54</td><td>25.38</td><td>21.39</td><td>17.44</td><td>26.39</td></tr><tr><td>Ours</td><td>48.20</td><td>9.02</td><td>26.30</td><td>44.56</td><td>27.27</td><td>23.69</td><td>25.56</td><td>22.53</td><td>14.20</td><td>26.81</td></tr><tr><td rowspan=\"9\">3.1B 42B</td><td 
rowspan=\"3\">V2</td><td>PTFS</td><td>41.10</td><td>6.37</td><td>24.00</td><td>36.43</td><td>24.15</td><td>21.62</td><td>24.97</td><td>19.75</td><td>14.22</td><td>23.62</td></tr><tr><td>CPT</td><td>46.00</td><td>6.14</td><td>24.00</td><td>40.81</td><td>27.04</td><td>21.94</td><td>23.57</td><td>20.89</td><td>13.28</td><td>24.85</td></tr><tr><td>Ours</td><td>43.70</td><td>8.57</td><td>24.23</td><td>40.17</td><td>25.78</td><td>24.59</td><td>25.70</td><td>19.37</td><td>14.22</td><td>25.15</td></tr><tr><td rowspan=\"3\">V3</td><td>PTFS</td><td>44.30</td><td>8.34</td><td>23.83</td><td>40.99</td><td>27.12</td><td>21.71</td><td>24.73</td><td>21.65</td><td>15.48</td><td>25.35</td></tr><tr><td>CPT</td><td>43.90</td><td>8.11</td><td>25.23</td><td>41.24</td><td>26.00</td><td>25.00</td><td>25.44</td><td>20.00</td><td>13.40</td><td>25.37</td></tr><tr><td>Ours</td><td>47.90</td><td>9.48</td><td>24.02</td><td>40.74</td><td>25.71</td><td>25.73</td><td>25.09</td><td>19.62</td><td>14.54</td><td>25.87</td></tr><tr><td rowspan=\"3\">V4</td><td>PTFS</td><td>50.20</td><td>11.22</td><td>25.98</td><td>39.89</td><td>27.64</td><td>23.12</td><td>25.47</td><td>21.65</td><td>15.46</td><td>26.74</td></tr><tr><td>CPT</td><td>50.60</td><td>9.78</td><td>25.12</td><td>41.03</td><td>28.08</td><td>22.48</td><td>25.38</td><td>21.01</td><td>13.93</td><td>26.38</td></tr><tr><td>Ours</td><td>49.80</td><td>10.77</td><td>25.77</td><td>42.95</td><td>26.97</td><td>22.45</td><td>26.25</td><td>22.41</td><td>14.80</td><td>26.91</td></tr></table>",
1643
+ "bbox": [
1644
+ 119,
1645
+ 239,
1646
+ 892,
1647
+ 706
1648
+ ],
1649
+ "page_idx": 12
1650
+ },
1651
+ {
1652
+ "type": "text",
1653
+ "text": "Table 11: The performance of downstream tasks for LLMs across four versions. In addition to the standard training scale (LLaMA-1.2B trained for 42B tokens), we further evaluate LLMs trained on more data (LLaMA-1.2B trained for 168B tokens) and LLMs with a larger size (LLaMA-3.1B trained for 42B tokens).",
1654
+ "bbox": [
1655
+ 112,
1656
+ 714,
1657
+ 884,
1658
+ 758
1659
+ ],
1660
+ "page_idx": 12
1661
+ },
1662
+ {
1663
+ "type": "page_number",
1664
+ "text": "13593",
1665
+ "bbox": [
1666
+ 477,
1667
+ 928,
1668
+ 524,
1669
+ 940
1670
+ ],
1671
+ "page_idx": 12
1672
+ },
1673
+ {
1674
+ "type": "table",
1675
+ "img_path": "images/03704c34abf18b576d8061199f0f3d8a9e813e9cd709c38d958e518ff223a064.jpg",
1676
+ "table_caption": [],
1677
+ "table_footnote": [],
1678
+ "table_body": "<table><tr><td rowspan=\"2\">α</td><td rowspan=\"2\">TP</td><td colspan=\"3\">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan=\"2\">0.2</td><td>CPT</td><td>21.40</td><td>20.69</td><td>20.20</td></tr><tr><td>Ours</td><td>20.96</td><td>20.05</td><td>19.59</td></tr><tr><td rowspan=\"2\">0.4</td><td>CPT</td><td>21.06</td><td>20.43</td><td>20.18</td></tr><tr><td>Ours</td><td>20.81</td><td>19.88</td><td>19.42</td></tr><tr><td rowspan=\"2\">0.6</td><td>CPT</td><td>20.85</td><td>20.21</td><td>19.85</td></tr><tr><td>Ours</td><td>20.82</td><td>19.86</td><td>19.40</td></tr></table>",
1679
+ "bbox": [
1680
+ 157,
1681
+ 80,
1682
+ 445,
1683
+ 247
1684
+ ],
1685
+ "page_idx": 13
1686
+ },
1687
+ {
1688
+ "type": "table",
1689
+ "img_path": "images/b7d92a223fb6c4821dfab83941bbcf05e33112badeff2cfb2e30ae09bbd64f75.jpg",
1690
+ "table_caption": [
1691
+ "Table 12: The comparison between CPT and our paradigm with equal training cost. The $\\alpha$ ranges from 0.2 to 0.6."
1692
+ ],
1693
+ "table_footnote": [],
1694
+ "table_body": "<table><tr><td rowspan=\"2\">TP</td><td colspan=\"3\">PPL</td></tr><tr><td>10.5B</td><td>21.0B</td><td>31.5B</td></tr><tr><td>PTFS</td><td>20.84</td><td>19.28</td><td>18.36</td></tr><tr><td>CPT</td><td>21.11</td><td>19.50</td><td>18.47</td></tr><tr><td>Ours</td><td>20.13</td><td>18.81</td><td>18.09</td></tr></table>",
1695
+ "bbox": [
1696
+ 171,
1697
+ 313,
1698
+ 431,
1699
+ 419
1700
+ ],
1701
+ "page_idx": 13
1702
+ },
1703
+ {
1704
+ "type": "text",
1705
+ "text": "Table 13: The comparison of different paradigms for training two versions of LLaMA-1.2B. The data increment of the second version varies from 10.5B to 31.5B tokens.",
1706
+ "bbox": [
1707
+ 112,
1708
+ 428,
1709
+ 489,
1710
+ 485
1711
+ ],
1712
+ "page_idx": 13
1713
+ },
1714
+ {
1715
+ "type": "text",
1716
+ "text": "consistent. Concretely, we sample a dataset of approximately 5.25B tokens (5K steps) and use it to train four versions of LLMs. As mentioned in section 3.2, we analyze the time complexity of CPT and our paradigm, obtaining that the time complexity of ours is about $1 + \\alpha$ times that of CPT.",
1717
+ "bbox": [
1718
+ 112,
1719
+ 512,
1720
+ 489,
1721
+ 607
1722
+ ],
1723
+ "page_idx": 13
1724
+ },
1725
+ {
1726
+ "type": "text",
1727
+ "text": "We compare the results for different $\\alpha$ (proportion of fast-decaying steps) set as 0.2, 0.4 and 0.6, respectively. To ensure that the total training cost used for these two paradigms are consistent, our paradigm always includes an additional $10\\mathrm{K}$ steps for each version update, while CPT uses additional $12\\mathrm{K} (\\alpha = 0.2)$ , $14\\mathrm{K} (\\alpha = 0.4)$ and $16\\mathrm{K} (\\alpha = 0.6)$ steps for each version update, respectively. The experimental results in Table 12 demonstrate that our paradigm remains effective even when the total training cost are kept consistent with CPT.",
1728
+ "bbox": [
1729
+ 112,
1730
+ 609,
1731
+ 489,
1732
+ 785
1733
+ ],
1734
+ "page_idx": 13
1735
+ },
1736
+ {
1737
+ "type": "text",
1738
+ "text": "E Version Updates with Inconsistent Data Increments",
1739
+ "text_level": 1,
1740
+ "bbox": [
1741
+ 112,
1742
+ 797,
1743
+ 487,
1744
+ 829
1745
+ ],
1746
+ "page_idx": 13
1747
+ },
1748
+ {
1749
+ "type": "text",
1750
+ "text": "Existing experiments are based on the assumption of consistent data increments during version updates of LLMs. The effectiveness of our paradigm has not yet been validated in the scenario with varying data increments. Hence, we conduct a compar",
1751
+ "bbox": [
1752
+ 112,
1753
+ 841,
1754
+ 489,
1755
+ 921
1756
+ ],
1757
+ "page_idx": 13
1758
+ },
1759
+ {
1760
+ "type": "text",
1761
+ "text": "ative experiment involving PTFS, CPT, and ours, training two versions of LLMs. For all paradigms, the LLMs of the first version are trained with 10.5B tokens (10K steps), while the LLMs of second version are trained with 10.5B, 21B, and 31.5B tokens, respectively. As the experimental results shown in Table 13, our paradigm maintains a better pre-training performance than PTFS and CPT in the scenario with inconsistent data increments. This further demonstrates the generalization of our paradigm.",
1762
+ "bbox": [
1763
+ 507,
1764
+ 84,
1765
+ 884,
1766
+ 262
1767
+ ],
1768
+ "page_idx": 13
1769
+ },
1770
+ {
1771
+ "type": "page_number",
1772
+ "text": "13594",
1773
+ "bbox": [
1774
+ 477,
1775
+ 927,
1776
+ 524,
1777
+ 940
1778
+ ],
1779
+ "page_idx": 13
1780
+ }
1781
+ ]
2024/A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models/63a13339-dbf4-428a-9241-6178656f6af8_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models/63a13339-dbf4-428a-9241-6178656f6af8_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c3b9664273412df8b4d2039e0bc9fe317e4639ca09b95925bec10c91f9d7fe81
3
+ size 1042804
2024/A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models/full.md ADDED
@@ -0,0 +1,322 @@
 
 
 
 
1
+ # A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models
2
+
3
+ Zhihao Wang $^{1*}$ Shiyu Liu $^{3*}$ Jianheng Huang $^{1}$ Zheng Wang $^{2}$ Yixuan Liao $^{2}$ Xiaoxin Chen $^{2}$ Junfeng Yao $^{1}$ Jinsong Su $^{1,3,4\dagger}$
4
+
5
+ $^{1}$ School of Informatics, Xiamen University, China $^{2}$ vivo AI Lab, China $^{3}$ Institute of Artificial Intelligence, Xiamen University, China $^{4}$ Shanghai Artificial Intelligence Laboratory, China {zhwang, liushiyu213}@stu.xmu.edu.cn jssu@xmu.edu.cn
6
+
7
+ # Abstract
8
+
9
+ Due to the continuous emergence of new data, version updates have become an indispensable requirement for Large Language Models (LLMs). The training paradigms for version updates of LLMs include pre-training from scratch (PTFS) and continual pre-training (CPT). Preliminary experiments demonstrate that PTFS achieves better pre-training performance, while CPT has lower training cost. Moreover, their performance and training cost gaps widen progressively with version updates. To investigate the underlying reasons for this phenomenon, we analyze the effect of learning rate adjustments during the two stages of CPT: preparing an initialization checkpoint and continual pre-training based on this checkpoint. We find that a large learning rate in the first stage and a complete learning rate decay process in the second stage are crucial for version updates of LLMs. Hence, we propose a learning rate path switching training paradigm. Our paradigm comprises one main path, where we pre-train an LLM with the maximal learning rate, and multiple branching paths, each of which corresponds to an update of the LLM with newly-added training data. Extensive experiments demonstrate the effectiveness and generalization of our paradigm. Particularly, when training four versions of LLMs, our paradigm reduces the total training cost to $58\%$ compared to PTFS, while maintaining comparable pre-training performance.
10
+
11
+ # 1 Introduction
12
+
13
+ In recent years, there has been significant progress in the research of Large Language Models (LLMs). By performing large-scale training on massive datasets, LLMs have demonstrated remarkable capabilities, contributing to various fields (Wu et al.,
14
+
15
+ 2023; Cui et al., 2024; Wang et al., 2024; Guo et al., 2024). However, the training cost of LLMs is significantly higher than that of traditional NLP models. Particularly, in practical applications, LLMs face the need for version updates due to the continuous emergence of new data, which further increases their training cost. Therefore, reducing training cost while maintaining optimal pre-training performance across different versions has become one of the pivotal challenges for LLMs.
16
+
17
+ Generally, training paradigms applicable for version updates of LLMs can be categorized into two types: 1) Pre-Training From Scratch (PTFS): retraining new versions of LLMs on both old and new data. The well-known LLMs including LLaMA (Touvron et al., 2023a,b), GLM (Zeng et al., 2023), and Baichuan (Yang et al., 2023) are updated via this paradigm. 2) Continual Pre-Training (CPT): further pre-training new versions of LLMs on only new data based on the checkpoints from old versions. This paradigm is often utilized in resource constrained scenarios, such as limited computational resources or unavailability of old data.
18
+
19
+ In this paper, we firstly conduct preliminary experiments to compare the above two paradigms in version updates of LLMs. Compared with PTFS, CPT uses previous checkpoints for initialization, resulting in lower total training cost. However, CPT suffers from inferior pre-training performance, which becomes increasingly serious as version updates progress. To study the reasons for this phenomenon, we break down the CPT process into two stages: the first stage involves preparing an initialization checkpoint, and the second stage performing continual pre-training based on this checkpoint. Then, we conduct two groups of experiments to analyze the effect of learning rate adjustments during these two stages, leading to two conclusions: 1) the larger the learning rate in the first stage, the better the performance of updated LLMs in the second
20
+
21
+ ![](images/6dbc6fa1f25022b07bce0e4b21f90ef8a48f46c930c5250eb10da64277fa8977.jpg)
22
+ Figure 1: The learning rate curves of cosine learning rate schedule under PTFS, CPT<sup>1</sup> and our paradigm, all of which are used to update four versions of LLMs. Here, different color curves represent different version updates of LLMs.
23
+
24
+ stage; 2) for the second stage, a complete learning rate decay process is beneficial to ensure the optimal performance of updated LLMs.
25
+
26
+ Based on the above analyses, we propose a learning rate path switching training paradigm for version updates of LLMs. To better illustrate our paradigm, we take the most commonly used cosine learning rate schedule (Smith and Topin, 2019) as an example and plot the learning rate curves of PTFS, CPT, and our paradigm in Figure 1. Please note that our paradigm is also applicable to other schedules, such as the Knee (Iyer et al., 2023) and multi-step (Bi et al., 2024) learning rate schedules.
27
+
28
+ In short, the learning rate curve of our paradigm comprises one main path and multiple branching paths, each of which corresponds to a version update of the LLM. As shown by the main path in Figure 1, we pre-train an LLM with the maximal learning rate, providing superior initialization checkpoints for subsequent continual pre-training. When we want to update the LLM with newly-added training data, we perform continual pre-training on the LLM with a dynamically-adjusted learning rate. Referring back to Figure 1, after a few steps of training with the maximal learning rate, the learning rate rapidly decays to its minimum, which
29
+
30
+ ![](images/41156f2d39c9544ab527eea6bb5d68691114c330ef09d7b0aa50220dc1ffde5d.jpg)
31
+ Figure 2: The comparison of different training paradigms. "APPL" $(\downarrow)$ denotes the average perplexity of LLMs across different versions, "Relative Cost" $(\downarrow)$ is the ratio of the total training steps across different versions of each paradigm to the total training steps of PTFS. The lower left corner achieves the best trade-off.
32
+
33
+ effectively ensures the training performance of the updated LLM. Meanwhile, on the main path, we continue to pre-train the original checkpoint with the maximal learning rate, facilitating subsequent LLM updates.
34
+
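+ To make the path-switching behaviour concrete, the following minimal sketch (ours, not the authors' released code) contrasts the learning rate seen on the main path with that seen on one branching path. The linear 2K-step warm-up, the 3e-4/3e-5 learning rate bounds, and the 10K steps per version come from the experimental setup below; the cosine shape of the fast decay, the 6K-step branch length (α = 0.6 of a 10K-step update), and the omission of the few maximal-learning-rate steps at the start of a branch are assumptions made for brevity.
+
+ ```python
+ import math
+ 
+ MAX_LR, MIN_LR = 3e-4, 3e-5          # maximum / minimum learning rates used in the paper
+ WARMUP, MAIN_STEPS = 2_000, 10_000   # warm-up steps and main-path steps per version (assumed)
+ ALPHA = 0.6                          # proportion of fast-decaying steps (best value reported)
+ 
+ def main_path_lr(step: int) -> float:
+     """Main path: linear warm-up, then stay at the maximal learning rate."""
+     if step < WARMUP:
+         return MAX_LR * step / WARMUP
+     return MAX_LR
+ 
+ def branch_lr(step_in_branch: int, decay_steps: int = int(ALPHA * MAIN_STEPS)) -> float:
+     """Branching path: fast decay from the maximal to the minimal learning rate.
+     The cosine shape is an assumption; the paper only states that the decay is fast and complete."""
+     t = min(step_in_branch, decay_steps) / decay_steps
+     return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1.0 + math.cos(math.pi * t))
+ 
+ # The main path stays at 3e-4 after warm-up, while a branch reaches 3e-5 after ~6K steps.
+ print(main_path_lr(5_000), branch_lr(0), branch_lr(6_000))
+ ```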
35
+ Our paradigm better balances model performance and training cost compared to the other two paradigms, as detailed in Figure 2. To summarize, our main contributions are as follows:
36
+
37
+ - We conduct preliminary experiments to compare PTFS and CPT for version updates of LLMs. Furthermore, our in-depth analyses show that initially using a large learning rate and subsequent learning rate decay are crucial for improving the performance of updated LLMs.
38
+ - We propose a learning rate path switching paradigm for version updates of LLMs. To the best of our knowledge, our work is the first attempt to explore how to balance model performance and training cost for version updates of LLMs.
39
+ - Experimental results and in-depth analyses strongly demonstrate the effectiveness and generalization of our paradigm. Particularly, when training four versions of LLMs, our paradigm achieves comparable pre-training performance to PTFS with only $58\%$ of the total training cost.
40
+
41
+ ![](images/5c10d9cee3fce3f7593fb556db3e53c1dc92b1b168a85a531ac29a754c011263.jpg)
42
+ Figure 3: The learning rate curves of cosine (Smith and Topin, 2019), Knee (Iyer et al., 2023), and multi-step (Bi et al., 2024) learning rate schedules.
43
+
44
+ # 2 Preliminary Study
45
+
46
+ In this section, we first compare the performance of PTFS and CPT in version updates of LLMs, and then analyze the underlying reasons for their performance gap.
47
+
48
+ # 2.1 Setup
49
+
50
+ Model In this study, we use LLaMA-1.2B (Touvron et al., 2023a,b) as our base LLM and train for four versions. When employing PTFS, the total training steps for these four versions are 10K, 20K, 30K, and 40K, respectively. For CPT, each LLM update only requires 10K training steps. We train all LLMs with a batch size of 1.05M tokens.
51
+
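+ As a quick sanity check on the training budgets just described, the small calculation below (our own sketch) reproduces the relative costs used throughout the paper. The per-update cost of our paradigm, one main-path segment of T steps plus a fast-decaying branch of αT steps with α = 0.6, is reconstructed from the cost analysis reported later in the paper and should be read as an assumption here.
+
+ ```python
+ # Total training steps for four versions, with T = 10K new steps per version.
+ T, N_VERSIONS, ALPHA = 10_000, 4, 0.6     # ALPHA = proportion of fast-decaying steps (assumed)
+ 
+ ptfs = sum(v * T for v in range(1, N_VERSIONS + 1))       # retrain from scratch each time
+ cpt  = N_VERSIONS * T                                      # continue only on the new data
+ ours = T + (N_VERSIONS - 1) * (1 + ALPHA) * T              # main path + fast-decaying branches
+ 
+ for name, steps in [("PTFS", ptfs), ("CPT", cpt), ("Ours", ours)]:
+     print(f"{name}: {steps / 1000:.0f}K steps, relative cost {steps / ptfs:.2f}x")
+ # -> PTFS 100K (1.00x), CPT 40K (0.40x), Ours 58K (0.58x)
+ ```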
52
+ Learning Rate Schedule We conduct experiments with three learning rate schedules: cosine (Smith and Topin, 2019), Knee (Iyer et al., 2023), and multi-step (Bi et al., 2024). The specific learning rate curves of these schedules are plotted in Figure 3. Notably, the cosine learning rate schedule is the most commonly used one for training LLMs (Zhao et al., 2023), and both the Knee and multi-step schedules can achieve performance comparable or even superior to that of the cosine schedule. For all learning rate schedules, we implement a linear warm-up phase of 2K steps (approximately 2.1B tokens). Besides, we set the maximum and minimum learning rates for these schedules to 3e-4 and
53
+
54
+ <table><tr><td rowspan="2">LRS</td><td rowspan="2">TP</td><td rowspan="2">Cost</td><td colspan="3">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan="3">Cos</td><td>PTFS</td><td>1.00×</td><td>20.84</td><td>19.28</td><td>18.36</td></tr><tr><td>CPT</td><td>0.40×</td><td>21.11</td><td>19.70</td><td>18.87</td></tr><tr><td>Δ</td><td>-</td><td>-0.27</td><td>-0.42</td><td>-0.51</td></tr><tr><td rowspan="3">Knee</td><td>PTFS</td><td>1.00×</td><td>20.22</td><td>18.80</td><td>17.98</td></tr><tr><td>CPT</td><td>0.40×</td><td>20.56</td><td>19.27</td><td>18.52</td></tr><tr><td>Δ</td><td>-</td><td>-0.34</td><td>-0.47</td><td>-0.54</td></tr><tr><td rowspan="3">Multi</td><td>PTFS</td><td>1.00×</td><td>20.28</td><td>18.88</td><td>18.06</td></tr><tr><td>CPT</td><td>0.40×</td><td>20.62</td><td>19.37</td><td>18.65</td></tr><tr><td>Δ</td><td>-</td><td>-0.34</td><td>-0.49</td><td>-0.59</td></tr></table>
55
+
56
+ Table 1: The comparison between PTFS and CPT for training four versions of LLMs. "LRS" and "TP" indicate learning rate schedule and training paradigm, respectively. "V*" means the *-th version of LLM. Notably, regardless of PTFS or CPT, the learning rate curve and pre-training performance of the first version remain identical. Thus, we do not report the performance of the first version in all experiments.
57
+
60
+ Dataset Similar to LLaMA (Touvron et al., 2023a,b), our training corpus comprises a mixture of data from publicly available sources, including code, papers, Wikipedia, books, mathematics, CommonCrawl and C4, webpages, translation data, and others. In total, our training data contains 764M English and Chinese samples. Due to limited GPU resources, we do not experiment with the entire dataset. To simulate the scenario of version updates, we perform non-replacement sampling on the training data to obtain 10.5B tokens as the newly-added data for each update. Hence, when using PTFS, we train four versions of LLMs from scratch with 10.5B, 21B, 31.5B, and 42B tokens, respectively. By contrast, using CPT to update the LLMs only involves the newly-added 10.5B tokens each time.
61
+
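As a quick sanity check on these budgets, the short calculation below (an illustrative script, not the authors' code) reproduces the per-version token counts from the 1.05M-token batch size and 10K-step increments stated above.

```python
# Tokens added per version update: 10K steps x 1.05M tokens/step = 10.5B tokens.
BATCH_TOKENS = 1.05e6
STEPS_PER_UPDATE = 10_000
increment = BATCH_TOKENS * STEPS_PER_UPDATE           # 10.5e9 tokens

# PTFS retrains each version from scratch on all data seen so far,
# whereas CPT only consumes the newly added increment.
ptfs_tokens = [increment * v for v in range(1, 5)]    # 10.5B, 21B, 31.5B, 42B
cpt_tokens = [increment for _ in range(4)]            # 10.5B per update
print([f"{t/1e9:.1f}B" for t in ptfs_tokens], f"{cpt_tokens[0]/1e9:.1f}B per update")
```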
62
+ Evaluation Following previous studies (Qin et al., 2022; Gupta et al., 2023; Bi et al., 2024), we mainly use perplexity (PPL) to evaluate the pretraining performance of LLMs. Meanwhile, we also focus on the training cost of each paradigm, defined as the total training steps required for different versions.
63
+
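For reference, perplexity here is the exponential of the average token-level cross-entropy on held-out data. The snippet below is a generic sketch of that computation under the usual causal-LM convention; the model is assumed to return Hugging Face-style outputs with a `.logits` attribute, and the function and variable names are placeholders rather than the authors' evaluation code.

```python
import math
import torch

@torch.no_grad()
def perplexity(model, batches, device="cuda"):
    """Corpus-level PPL = exp(total NLL / total number of predicted tokens)."""
    total_nll, total_tokens = 0.0, 0
    for input_ids in batches:                      # each: LongTensor [batch, seq_len]
        input_ids = input_ids.to(device)
        # Causal LM convention: logits at position t predict the token at t+1.
        logits = model(input_ids).logits[:, :-1]
        targets = input_ids[:, 1:]
        nll = torch.nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), targets.reshape(-1), reduction="sum"
        )
        total_nll += nll.item()
        total_tokens += targets.numel()
    return math.exp(total_nll / total_tokens)
```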
64
+ ![](images/d8b2ee2c06421cf75a924f9169f54c8daf549c83de336e6bc11b7b6156c8517c.jpg)
65
+ Figure 4: The effect of learning rate adjustment in the first stage. In the first stage, we vary the cosine cycle length as 10K, 20K, 30K, 40K and $+\infty$ steps, respectively, where the checkpoints at the 10K-th step are selected as the initialization ones for the subsequent 10K-step continual pre-training. “$(\cdot,\cdot)$” indicates the PPLs of the initialization checkpoint and the corresponding updated LLM.
66
+
67
+ # 2.2 Comparison Between PTFS and CPT
68
+
69
+ Experimental results are shown in Table 1. It is evident that CPT has lower training cost, whereas PTFS achieves superior performance. More importantly, as the version updates progress, the performance gap between PTFS and CPT progressively widens.
70
+
71
+ To understand the underlying cause of this phenomenon, we focus on the learning rate, the key distinction between PTFS and CPT during version updates of LLMs. Using the cosine learning rate schedule, we conduct two groups of experiments to examine its impact on updated LLM performance across the two stages of CPT: 1) preparing an initialization checkpoint, and 2) continual pre-training based on this checkpoint.
72
+
73
+ Effect of Learning Rate Adjustment During the First Stage As depicted in Figure 4, in the first group of experiments, we vary the cosine cycle length across 10K, 20K, 30K, 40K, and $+\infty$ steps, respectively. The checkpoints at the 10K-th step are selected as initialization checkpoints for the second stage. Then, we continually pre-train LLMs for 10K steps, where the learning rate gradually decays from its maximum to minimum. Referring back to Figure 4, we observe that with the increase in the cosine cycle length during the first stage, the performance of the initialization checkpoint drops, whereas its corresponding updated LLM performs better. Therefore, we conclude that a large learning rate in the first stage benefits continual pre-training in the second stage.
74
+
75
+ ![](images/39c823ad70c04a6a4221bfa61fdd4f1b26c10696d898b045d7232290d2126692.jpg)
76
+ Figure 5: The effect of learning rate adjustment in the second stage. In the first stage, we directly use the maximal learning rate after warm-up. During the second stage, we try cosine cycle lengths of 10K, 20K, 30K, 40K and $+\infty$ steps, respectively, where the PPLs of LLMs at the 20K-th step are compared.
77
+
80
+ Effect of Learning Rate Adjustment During the Second Stage Based on the above conclusion, we directly set the cosine cycle length as $+\infty$ steps in the first stage, as illustrated in Figure 5. Then, during continual pre-training, we experiment with the cosine learning rate schedule using different cosine cycle lengths: 10K, 20K, 30K, 40K, $+\infty$ steps, and report the performance of updated LLMs at the 20K-th steps. As shown in Figure 5, it is evident that a complete learning rate decay process enables the updated LLMs to achieve the best performance. This finding is consistent with the results from the first group of experiments mentioned above. In other words, when the learning rate undergoes complete decay during the first stage, the performance of the initialization checkpoint is also optimal.
81
+
82
+ Based on the findings of the above two groups of experiments, we conclude that it is difficult for CPT to achieve good performance across different versions of LLMs. Specifically, according to the findings from the second group of experiments, if the current LLM is expected to achieve optimal performance, its learning rate in the second stage should undergo a complete decay process. However, such decay results in a lower learning rate in the first stage of the subsequent update, further degrading the performance of the updated LLM.
83
+
84
+ <table><tr><td rowspan="2">LRS</td><td rowspan="2">α</td><td rowspan="2">Cost</td><td colspan="3">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan="4">Cos</td><td>0.2</td><td>0.49×</td><td>20.34</td><td>19.13</td><td>18.44</td></tr><tr><td>0.4</td><td>0.53×</td><td>20.16</td><td>18.91</td><td>18.21</td></tr><tr><td>0.6</td><td>0.58×</td><td>20.13</td><td>18.81</td><td>18.09</td></tr><tr><td>0.8</td><td>0.62×</td><td>20.15</td><td>18.77</td><td>18.02</td></tr><tr><td rowspan="4">Knee</td><td>0.2</td><td>0.49×</td><td>20.33</td><td>19.12</td><td>18.42</td></tr><tr><td>0.4</td><td>0.53×</td><td>20.16</td><td>18.91</td><td>18.20</td></tr><tr><td>0.6</td><td>0.58×</td><td>20.12</td><td>18.81</td><td>18.08</td></tr><tr><td>0.8</td><td>0.62×</td><td>20.15</td><td>18.77</td><td>18.01</td></tr><tr><td rowspan="4">Multi</td><td>0.2</td><td>0.49×</td><td>20.33</td><td>19.08</td><td>18.37</td></tr><tr><td>0.4</td><td>0.53×</td><td>20.29</td><td>18.91</td><td>18.16</td></tr><tr><td>0.6</td><td>0.58×</td><td>20.40</td><td>18.88</td><td>18.09</td></tr><tr><td>0.8</td><td>0.62×</td><td>20.63</td><td>18.91</td><td>18.06</td></tr></table>
85
+ Table 2: The effect of hyper-parameter $\alpha$ on the pretraining performance and training cost of our paradigm. Experiments are conducted on LLaMA-1.2B.
+
86
+ # 3 Our Paradigm
87
+
88
+ Based on the conclusions from Section 2, we propose a learning rate path switching paradigm for version updates of LLMs in this section. The training cost of our paradigm is lower than that of PTFS, and it achieves significantly better performance than CPT, with performance even comparable to that of PTFS.
89
+
90
+ # 3.1 Paradigm Overview
91
+
92
+ Let us revisit Figure 1, which shows the learning rate curves of our paradigm applied to the cosine learning rate schedule. Please note that our paradigm is also applicable to other schedules, such as the Knee and multi-step schedules. The learning rate curve of our paradigm comprises one main path and multiple branching paths, each of which corresponds to one version update. On the main path, we pre-train the LLM from scratch with the maximal learning rate, providing initialization checkpoints for subsequent version updates. When we want to obtain an updated LLM, we directly use the current checkpoint of the main path as the initialization checkpoint, and then perform continual pre-training. During this process, the learning rate undergoes a complete fast-decaying process, effectively ensuring the performance of the updated LLM. Meanwhile, on the main path, we still use the newly-added data to pre-train the existing checkpoint with the maximal learning rate, so as to facilitate subsequent updates.
93
+
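The following Python sketch makes the path-switching schedule concrete for the cosine case. It is one possible realization consistent with the description above and with the cost accounting in Section 3.2: within each update of T steps, the first (1 - alpha) * T steps are shared with the main path at the maximum learning rate, after which the branch performs alpha * T fast-decaying steps to produce the released model, while the main path covers those remaining steps at the maximum learning rate so that an undecayed checkpoint is available for the next update. The exact data handling on the overlapping segment, as well as the function names and constants other than the learning-rate bounds, step count, and alpha, are our assumptions.

```python
import math

MAX_LR, MIN_LR = 3e-4, 3e-5
T = 10_000       # training steps per version update
ALPHA = 0.6      # proportion of fast-decaying steps per update

def branch_lr(step_in_update: int) -> float:
    """LR for the released (branch) model within one T-step update."""
    switch = int((1 - ALPHA) * T)
    if step_in_update < switch:
        return MAX_LR                                   # shared with the main path
    # Complete fast cosine decay over the remaining ALPHA * T steps.
    progress = (step_in_update - switch) / (ALPHA * T)
    return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

def main_path_lr(step_in_update: int) -> float:
    """The main path never decays, keeping its checkpoints update-friendly."""
    return MAX_LR

def steps_for_versions(num_versions: int) -> int:
    # Per-update cost: T main-path/shared steps plus ALPHA*T branch steps,
    # except for the last version, where the extra main-path segment is skipped.
    return (num_versions - 1) * int((1 + ALPHA) * T) + T

print(branch_lr(4_000), branch_lr(9_999))   # max LR early, near MIN_LR at the end
print(steps_for_versions(4))                # 58,000 steps vs. 100,000 for PTFS
```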
94
+ Table 2: The effect of hyper-parameter $\alpha$ on the pretraining performance and training cost of our paradigm. Experiments are conducted on LLaMA-1.2B.
95
+
96
+ <table><tr><td rowspan="2">LRS</td><td rowspan="2">TP</td><td rowspan="2">Cost</td><td colspan="3">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan="3">Cos</td><td>PTFS</td><td>1.00×</td><td>20.84</td><td>19.28</td><td>18.36</td></tr><tr><td>CPT</td><td>0.40×</td><td>21.11</td><td>19.70</td><td>18.87</td></tr><tr><td>Ours</td><td>0.58×</td><td>20.13</td><td>18.81</td><td>18.09</td></tr><tr><td rowspan="3">Knee</td><td>PTFS</td><td>1.00×</td><td>20.22</td><td>18.80</td><td>17.98</td></tr><tr><td>CPT</td><td>0.40×</td><td>20.56</td><td>19.27</td><td>18.52</td></tr><tr><td>Ours</td><td>0.58×</td><td>20.12</td><td>18.81</td><td>18.08</td></tr><tr><td rowspan="3">Multi</td><td>PTFS</td><td>1.00×</td><td>20.28</td><td>18.88</td><td>18.06</td></tr><tr><td>CPT</td><td>0.40×</td><td>20.62</td><td>19.37</td><td>18.65</td></tr><tr><td>Ours</td><td>0.58×</td><td>20.40</td><td>18.88</td><td>18.09</td></tr></table>
97
+
98
+ Table 3: The comparison of different paradigms for training four versions of LLaMA-1.2B.
99
+
100
+ Obviously, our paradigm has lower training cost than PTFS, as it conducts continual pre-training based on the initialization checkpoints from the main path. Unlike CPT, these checkpoints are obtained through training from scratch with the maximum learning rate, which enables the updated LLMs to achieve better performance, as analyzed in Section 2. The following experiments fully confirm the superiority of our paradigm in balancing model performance and training cost.
101
+
102
+ # 3.2 Time Complexity Analysis
103
+
104
+ To further compare different training paradigms in terms of training cost, we define their time complexity functions as the total training steps of version updates.
105
+
106
+ Before providing our definitions, we first introduce two symbols to facilitate the subsequent descriptions: 1) $N_v$: the number of version updates of LLMs; 2) $T$: the amount of data added for each update, which we assume remains constant. When updating the $i$-th version of LLMs, PTFS requires training for $iT$ $(1 \leq i \leq N_v)$ steps each time, CPT needs to train for $T$ steps, and our paradigm requires training for $T + \alpha T$ steps, where $\alpha$ $(0 \leq \alpha \leq 1)$ controls the proportion of fast-decaying steps to the total steps in each update.
107
+
108
+ Formally, the time complexity functions of PTFS, CPT and our paradigm can be described as follows:
109
+
110
+ <table><tr><td>Ver.</td><td>TP</td><td>C3</td><td>GSM8K</td><td>MMLU</td><td>CSL</td><td>C-EVAL</td><td>BBH</td><td>CMMLU</td><td>GAOKAO</td><td>AGIEval</td><td>AVG</td></tr><tr><td rowspan="3">V2</td><td>PTFS</td><td>38.00</td><td>4.63</td><td>24.00</td><td>38.25</td><td>30.09</td><td>17.43</td><td>25.37</td><td>18.10</td><td>14.59</td><td>23.38</td></tr><tr><td>CPT</td><td>37.00</td><td>4.09</td><td>23.52</td><td>35.11</td><td>27.42</td><td>18.55</td><td>25.63</td><td>18.86</td><td>13.40</td><td>22.62</td></tr><tr><td>Ours</td><td>38.60</td><td>5.08</td><td>22.94</td><td>39.08</td><td>28.38</td><td>20.79</td><td>24.88</td><td>18.48</td><td>14.73</td><td>23.66</td></tr><tr><td rowspan="3">V3</td><td>PTFS</td><td>40.30</td><td>3.34</td><td>24.33</td><td>39.17</td><td>25.85</td><td>17.11</td><td>25.30</td><td>22.03</td><td>14.34</td><td>23.53</td></tr><tr><td>CPT</td><td>38.30</td><td>4.70</td><td>23.32</td><td>36.40</td><td>28.38</td><td>21.11</td><td>24.76</td><td>17.85</td><td>13.47</td><td>23.14</td></tr><tr><td>Ours</td><td>42.10</td><td>4.63</td><td>23.22</td><td>34.91</td><td>29.35</td><td>19.70</td><td>24.73</td><td>19.24</td><td>14.90</td><td>23.64</td></tr><tr><td rowspan="3">V4</td><td>PTFS</td><td>35.70</td><td>4.25</td><td>24.93</td><td>38.75</td><td>27.04</td><td>16.73</td><td>24.97</td><td>21.01</td><td>14.10</td><td>23.05</td></tr><tr><td>CPT</td><td>43.90</td><td>4.55</td><td>22.20</td><td>38.69</td><td>27.19</td><td>21.62</td><td>24.43</td><td>18.23</td><td>13.50</td><td>23.81</td></tr><tr><td>Ours</td><td>41.90</td><td>5.53</td><td>24.09</td><td>40.24</td><td>27.71</td><td>21.84</td><td>24.78</td><td>17.24</td><td>14.40</td><td>24.19</td></tr></table>
111
+
112
+ Table 4: The performance of LLMs across different versions on downstream tasks. "Ver." indicates the version number of the LLMs. Additional experimental results for LLMs with larger model sizes or data sizes are listed in Appendix C.
113
+
116
+ $$
117
+ \mathbf{C}_{\mathrm{ptfs}}(N_v) = \sum_{i=1}^{N_v} iT = 0.5TN_v^2 + 0.5TN_v,
118
+ $$
119
+
120
+ $$
121
+ \mathbf{C}_{\mathrm{cpt}}(N_v) = \sum_{i=1}^{N_v} T = TN_v,
122
+ $$
123
+
124
+ $$
125
+ \mathbf{C}_{\mathrm{ours}}(N_v) = \sum_{i=1}^{N_v - 1} (T + \alpha T) + T = (1 + \alpha)TN_v - \alpha T.
126
+ $$
127
+
128
+ Please note that, for the last version, the additional $\alpha T$ steps of main-path training, which only serve to prepare the initialization checkpoint for the next update, can be omitted. Thus, only $T$ steps are required for the last version.
129
+
130
+ Comparing the above functions, we observe that $\mathbf{C}_{\mathrm{ptfs}}(N_v)$ is a quadratic function in terms of $N_{v}$, whereas both $\mathbf{C}_{\mathrm{cpt}}(N_v)$ and $\mathbf{C}_{\mathrm{ours}}(N_v)$ are linear functions. Moreover, the gap between $\mathbf{C}_{\mathrm{ptfs}}(N_v)$ and the other two functions widens significantly as $N_{v}$ increases. For example, when $N_{v} = 4$ (with $\alpha = 0.6$), the values of these three time complexity functions are $10T$, $4T$ and $5.8T$, respectively. When $N_{v} = 10$, the gap widens further as the values of these functions increase to $55T$, $10T$ and $15.4T$.
131
+
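A few lines of Python reproduce the numbers quoted above directly from the three cost functions. This is only a worked check; the default $\alpha = 0.6$ is taken from the later experiments, and $T$ is normalized to 1.

```python
def cost_ptfs(n_v, T=1.0):
    # Pre-Training From Scratch: each version retrains on all data seen so far.
    return 0.5 * T * n_v**2 + 0.5 * T * n_v

def cost_cpt(n_v, T=1.0):
    # Continual Pre-Training: each version only consumes the new increment.
    return T * n_v

def cost_ours(n_v, T=1.0, alpha=0.6):
    # Path switching: T + alpha*T per update, except T for the last version.
    return (1 + alpha) * T * n_v - alpha * T

for n_v in (4, 10):
    print(n_v, cost_ptfs(n_v), cost_cpt(n_v), cost_ours(n_v))
# n_v = 4  -> 10.0 T,  4.0 T,  5.8 T
# n_v = 10 -> 55.0 T, 10.0 T, 15.4 T
```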
132
+ # 4 Experiment
133
+
134
+ In this section, we still use the settings of the preliminary study to conduct more experiments, comparing the performance and training cost of different training paradigms.
135
+
136
+ # 4.1 Effect of Hyper-Parameter $\alpha$
137
+
138
+ As described in Section 3, $\alpha$ is one of the most important hyper-parameters in our paradigm, as it controls the proportion of fast-decaying steps to the total steps in each update. The number of fast-decaying steps influences both the model performance and the training cost of our paradigm. To select an optimal $\alpha$ value, we experiment with different $\alpha$ values, ranging from 0.2 to 0.8 with an interval of 0.2, and then observe the changes in pre-training performance and training cost.
139
+
140
+ Experimental results are listed in Table 2, showing that the overall performance of LLMs across different versions is optimal at $\alpha = 0.6$ and $\alpha = 0.8$ . However, when $\alpha = 0.6$ , our paradigm achieves lower training cost. Thus, we adopt $\alpha = 0.6$ in subsequent experiments.
141
+
142
+ # 4.2 Main Experiments
143
+
144
+ Then, we compare different paradigms in terms of training cost, pre-training performance and downstream performance. To comprehensively examine our paradigm, we conduct a series of experiments with the three aforementioned learning rate schedules.
145
+
146
+ Pre-Training Performance From Table 3, we observe that, compared to PTFS, our paradigm reduces the total training cost to $58\%$ while maintaining comparable pre-training performance. Particularly, when using the cosine learning rate schedule, our paradigm even slightly outperforms PTFS. On the other hand, as expected, the training cost of our paradigm is still higher than that of CPT; however, our paradigm always achieves better performance than CPT, regardless of the schedule used.
147
+
148
+ <table><tr><td rowspan="2">LRS</td><td rowspan="2">TP</td><td rowspan="2">Cost</td><td colspan="3">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan="3">Cos</td><td>PTFS</td><td>1.00×</td><td>20.94</td><td>19.35</td><td>18.41</td></tr><tr><td>CPT</td><td>0.40×</td><td>21.23</td><td>19.78</td><td>18.92</td></tr><tr><td>Ours</td><td>0.58×</td><td>20.23</td><td>18.87</td><td>18.11</td></tr><tr><td rowspan="3">Knee</td><td>PTFS</td><td>1.00×</td><td>20.30</td><td>18.84</td><td>17.98</td></tr><tr><td>CPT</td><td>0.40×</td><td>20.67</td><td>19.34</td><td>18.56</td></tr><tr><td>Ours</td><td>0.58×</td><td>20.20</td><td>18.85</td><td>18.09</td></tr><tr><td rowspan="3">Multi</td><td>PTFS</td><td>1.00×</td><td>20.37</td><td>18.92</td><td>18.06</td></tr><tr><td>CPT</td><td>0.40×</td><td>20.74</td><td>19.44</td><td>18.68</td></tr><tr><td>Ours</td><td>0.58×</td><td>20.49</td><td>18.92</td><td>18.09</td></tr></table>
149
+ Table 5: The generalization of our paradigm in terms of model architecture. Based on Qwen-1.2B, we conduct experiments with the same setting as LLaMA-1.2B.
+
150
+ Overall, our paradigm achieves a better balance between pre-training performance and total training cost during version updates of LLMs.
151
+
152
+ Performance on Downstream Tasks Furthermore, we investigate the performance of different training paradigms across nine downstream tasks, including $C^3$ (Sun et al., 2020), GSM8K (Cobbe et al., 2021), MMLU (Hendrycks et al., 2021), CSL (Li et al., 2022), C-EVAL (Huang et al., 2023), BBH (Suzgun et al., 2023), CMMLU (Li et al., 2024), GAOKAO (Zhang et al., 2023) and AGIEval (Zhong et al., 2024). To this end, we first construct a general supervised fine-tuning (SFT) dataset with 1.8B tokens and then we perform SFT on each of the four versions of the updated LLMs.
153
+
154
+ From the results listed in Table 4, we clearly find that our paradigm can still obtain better average performance than PTFS and CPT, which further proves the effectiveness of our paradigm.
155
+
156
+ # 4.3 Generalization of Our Paradigm
157
+
158
+ Subsequently, we explore the generalization of our paradigm in the following aspects, including model architecture, model size, data size, and maximum learning rate, all of which are crucial for the practical applications of LLMs. In all of these experiments, we maintain the use of the cosine learning rate schedule.
159
+
162
+ <table><tr><td rowspan="2">Size</td><td rowspan="2">TP</td><td colspan="3">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan="3">203M</td><td>PTFS</td><td>30.97</td><td>29.50</td><td>28.65</td></tr><tr><td>CPT</td><td>31.31</td><td>29.90</td><td>29.07</td></tr><tr><td>Ours</td><td>30.25</td><td>28.94</td><td>28.19</td></tr><tr><td rowspan="3">406M</td><td>PTFS</td><td>26.58</td><td>25.06</td><td>24.19</td></tr><tr><td>CPT</td><td>26.89</td><td>25.49</td><td>24.67</td></tr><tr><td>Ours</td><td>25.85</td><td>24.52</td><td>23.79</td></tr><tr><td rowspan="3">608M</td><td>PTFS</td><td>23.12</td><td>21.75</td><td>20.93</td></tr><tr><td>CPT</td><td>23.50</td><td>22.26</td><td>21.52</td></tr><tr><td>Ours</td><td>22.59</td><td>21.43</td><td>20.77</td></tr><tr><td rowspan="3">1.2B</td><td>PTFS</td><td>20.84</td><td>19.28</td><td>18.36</td></tr><tr><td>CPT</td><td>21.22</td><td>19.79</td><td>18.97</td></tr><tr><td>Ours</td><td>20.13</td><td>18.81</td><td>18.09</td></tr><tr><td rowspan="3">2.1B</td><td>PTFS</td><td>18.33</td><td>16.88</td><td>16.04</td></tr><tr><td>CPT</td><td>18.76</td><td>17.47</td><td>16.72</td></tr><tr><td>Ours</td><td>17.82</td><td>16.63</td><td>15.97</td></tr><tr><td rowspan="3">3.1B</td><td>PTFS</td><td>17.22</td><td>15.87</td><td>15.07</td></tr><tr><td>CPT</td><td>17.67</td><td>16.48</td><td>15.77</td></tr><tr><td>Ours</td><td>16.84</td><td>15.72</td><td>15.09</td></tr></table>
163
+
164
+ Table 6: The generalization of our paradigm in terms of model size. The model sizes range from 203M to 3.1B.
165
+
166
+ Model Architecture To demonstrate the generalization of our paradigm on model architecture, we use Qwen-1.2B (Bai et al., 2023) to re-conduct experiments with the same setting as LLaMA-1.2B.
167
+
168
+ Similar to the experimental results of LLaMA-1.2B presented in Table 3, the experimental results of Qwen-1.2B shown in Table 5 further demonstrate the superiority of our paradigm in balancing model performance and training cost. This validates the generalization of our paradigm in terms of model architecture.
169
+
170
+ Model Size We then focus on the generalization of our paradigm on model size. To this end, we vary the number of model parameters to conduct experiments. In total, we consider the following six model sizes: 203M, 406M, 608M, 1.2B, 2.1B, and 3.1B, whose detailed hyper-parameters are listed in Appendix A.
171
+
172
+ From the results shown in Table 6, we observe that our paradigm achieves pre-training performance comparable to PTFS across different sizes of LLMs and outperforms CPT. This validates the generalization of our paradigm in terms of model size.
173
+
174
+ <table><tr><td rowspan="2">Data</td><td rowspan="2">TP</td><td colspan="3">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan="3">21B</td><td>PTFS</td><td>24.66</td><td>22.31</td><td>20.84</td></tr><tr><td>CPT</td><td>25.10</td><td>22.84</td><td>21.56</td></tr><tr><td>Ours</td><td>23.59</td><td>21.41</td><td>20.27</td></tr><tr><td rowspan="3">42B</td><td>PTFS</td><td>20.84</td><td>19.28</td><td>18.36</td></tr><tr><td>CPT</td><td>21.11</td><td>19.70</td><td>18.87</td></tr><tr><td>Ours</td><td>20.13</td><td>18.81</td><td>18.09</td></tr><tr><td rowspan="3">168B</td><td>PTFS</td><td>16.70</td><td>15.97</td><td>15.54</td></tr><tr><td>CPT</td><td>16.90</td><td>16.25</td><td>15.86</td></tr><tr><td>Ours</td><td>16.47</td><td>15.86</td><td>15.51</td></tr></table>
175
+ Table 7: The generalization of our paradigm in terms of data size. The total data sizes (for four versions) range from 21B to 168B.
+
176
+ Data Size Next, we switch our attention to the generalization of our paradigm on data size. To do this, we conduct experiments using different sizes of training data: 21B, 42B, and 168B tokens. Correspondingly, the training steps are 5K, 10K and 40K for each LLM update, respectively.
177
+
178
+ As shown in Table 7, our paradigm achieves optimal pre-training performance across different data sizes, which further demonstrates the generalization of our paradigm.
179
+
180
+ Maximum Learning Rate Finally, we aim to verify the generalization of our paradigm in terms of the maximum learning rate. We conduct experiments by setting the maximum learning rates as 5e-5, 1e-4, 3e-4, 5e-4, 8e-4, respectively.
181
+
182
+ As shown in Table 8, as the maximum learning rate increases, our paradigm consistently achieves performance better than or comparable to that of PTFS, and significantly outperforms CPT. This strongly highlights the generalization of our paradigm in terms of the maximum learning rate.
183
+
184
+ # 5 Related Work
185
+
186
+ Continual Training As one of the most direct approaches for version updates of LLMs, continual training has attracted increasing attention, of which related studies can be broadly categorized into the following four types: 1) methods introducing additional parameters (Ke et al., 2022, 2023; Song et al., 2023; PENG et al., 2024), 2) prompt-based methods (Wang et al., 2022b,a; Razdaibiedina et al., 2023), 3) multi-stage training methods (Liu et al., 2021; Zhou et al., 2022, 2023; Liu et al., 2023;
187
+
190
+ <table><tr><td rowspan="2">MLR</td><td rowspan="2">TP</td><td colspan="3">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan="3">5e-5</td><td>PTFS</td><td>34.78</td><td>29.53</td><td>26.65</td></tr><tr><td>CPT</td><td>35.23</td><td>30.08</td><td>27.23</td></tr><tr><td>Ours</td><td>29.99</td><td>25.54</td><td>23.27</td></tr><tr><td rowspan="3">1e-4</td><td>PTFS</td><td>26.34</td><td>23.28</td><td>21.57</td></tr><tr><td>CPT</td><td>26.64</td><td>23.70</td><td>22.04</td></tr><tr><td>Ours</td><td>23.89</td><td>21.32</td><td>19.97</td></tr><tr><td rowspan="3">3e-4</td><td>PTFS</td><td>20.84</td><td>19.28</td><td>18.36</td></tr><tr><td>CPT</td><td>21.22</td><td>19.79</td><td>18.97</td></tr><tr><td>Ours</td><td>20.13</td><td>18.81</td><td>18.09</td></tr><tr><td rowspan="3">5e-4</td><td>PTFS</td><td>19.89</td><td>18.62</td><td>17.85</td></tr><tr><td>CPT</td><td>20.17</td><td>19.05</td><td>18.38</td></tr><tr><td>Ours</td><td>19.53</td><td>18.45</td><td>17.85</td></tr><tr><td rowspan="3">8e-4</td><td>PTFS</td><td>19.38</td><td>18.26</td><td>17.58</td></tr><tr><td>CPT</td><td>19.69</td><td>18.73</td><td>18.16</td></tr><tr><td>Ours</td><td>19.22</td><td>18.30</td><td>17.78</td></tr></table>
191
+
192
+ Table 8: The generalization of our paradigm in terms of the maximum learning rate. The maximum learning rate ranges from 5e-5 to 8e-4. "MLR" indicates the maximum learning rate.
193
+
194
+ Huang et al., 2024), and 4) scenario-specific methods (Peng et al., 2023; Gogoulou et al., 2024; Xie et al., 2024). Significantly different from the above studies, our paradigm comprises one main learning rate path, where we perform pre-training from scratch with the maximal learning rate, and multiple learning rate branching paths, where we perform continual pre-training with a complete learning rate decay process. Thus, our paradigm achieves a better trade-off between performance and training cost than PTFS and CPT.
195
+
196
+ Learning Rate The learning rate is one of the most crucial hyper-parameters for training LLMs. Existing learning rate schedules can be broadly divided into the following four policies (Wu et al., 2019; Wu and Liu, 2023; Jin et al., 2023): 1) Fixed learning rate policy, such as constant learning rate schedule; 2) Decaying learning rate policy, such as inverse square root learning rate schedule; 3) Cyclic learning rate policy, such as cosine learning rate schedule; 4) Composite learning rate policy, such as Knee and multi-step learning rate schedules. In addition, there are some recent studies exploring learning rate schedules for LLMs, including the Warmup-Stable-Decay schedule (Hu et al., 2024) and constant learning rate with cooldown (Hägele et al., 2024). In particular, our paradigm is a training paradigm designed specifically for version updates of LLMs, and it is applicable to the cosine, Knee, multi-step, and other learning rate schedules.
197
+
200
+ # 6 Conclusion and Future Work
201
+
202
+ This paper focuses on how to effectively balance model performance and training cost for version updates of LLMs. We begin by comparing two training paradigms: PTFS and CPT, concluding that PTFS achieves better pre-training performance, while CPT has lower training cost. Through the analysis in the preliminary study, we find that 1) a large learning rate is beneficial for providing better initialization checkpoints for subsequent updates, and 2) a complete learning rate decay process enables the updated LLMs to achieve optimal performance. Based on the above two findings, we propose a learning rate path switching paradigm for version updates of LLMs, which comprises one main path and multiple branching paths. On the main path, we pre-train the LLMs with the maximal learning rate to provide superior initialization checkpoints for subsequent updates. When an update is required, our paradigm switches from the main path to a branching path, undergoing a complete learning rate decay process. Experimental results and further analyses strongly demonstrate the effectiveness and generalization of our paradigm.
203
+
204
+ In the future, we will further expand the practical scope of our paradigm. Current research mainly focuses on the pre-training phase and does not include supervised fine-tuning, safety alignment, etc., which could be integrated into the fast-decaying stage of our paradigm. Additionally, we plan to explore the applicability of our paradigm in the context of multimodal large language models.
205
+
206
+ # Limitations
207
+
208
+ Although the training cost of our paradigm is significantly lower than that of PTFS, it is still higher than that of CPT. Hence, we plan to design a precise method to determine the proportion of the fast-decaying steps to the total steps, which can further reduce the training cost of our paradigm.
209
+
210
+ # Acknowledgements
211
+
212
+ The project was supported by National Key R&D Program of China (No. 2022ZD0160501), National Natural Science Foundation of China (No. 62276219), and the Public Technology Service Platform Project of Xiamen (No. 3502Z20231043).
213
+
216
+ # References
217
+
218
+ Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, et al. 2023. Qwen technical report. arXiv.
219
+ Xiao Bi, Deli Chen, Guanting Chen, Shanhuang Chen, Damai Dai, Chengqi Deng, et al. 2024. Deepseek llm: Scaling open-source language models with longtermism. arXiv.
220
+ Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, et al. 2021. Training verifiers to solve math word problems. arXiv.
221
+ Can Cui, Yunsheng Ma, Xu Cao, Wenqian Ye, Yang Zhou, Kaizhao Liang, et al. 2024. A survey on multimodal large language models for autonomous driving. In WACVW Workshops.
222
+ Evangelia Gogoulou, Timothee Lesort, Magnus Boman, and Joakim Nivre. 2024. Continual learning under language shift. In TSD.
223
+ Taicheng Guo, Xiuying Chen, Yaqi Wang, Ruidi Chang, Shichao Pei, Nitesh V. Chawla, et al. 2024. Large language model based multi-agents: A survey of progress and challenges. arXiv.
224
+ Kshitij Gupta, Benjamin Thérien, Adam Ibrahim, Mats L. Richter, Quentin Anthony, Eugene Belilovsky, et al. 2023. Continual pre-training of large language models: How to (re)warm your model? In ICML Workshop.
225
+ Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In ICLR.
226
+ Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, et al. 2024. Minicpm: Unveiling the potential of small language models with scalable training strategies. In COLM.
227
+ Jianheng Huang, Leyang Cui, Ante Wang, Chengyi Yang, Xinting Liao, Linfeng Song, Junfeng Yao, and Jinsong Su. 2024. Mitigating catastrophic forgetting in large language models with self-synthesized rehearsal. In ACL.
228
+ Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, et al. 2023. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. In NeurIPS.
229
+ Alexander Hägele, Elie Bakouch, Atli Kosson, Loubna Ben Allal, Leandro Von Werra, and Martin Jaggi. 2024. Scaling laws and compute-optimal training beyond fixed training durations. In ICML Workshop.
230
+
231
+ Nikhil Iyer, V Thejas, Nipun Kwatra, Ramachandran Ramjee, and Muthian Sivathanu. 2023. Wide-minima density hypothesis and the explore-exploit learning rate schedule. JMLR.
232
+ Hongpeng Jin, Wenqi Wei, Xuyu Wang, Wenbin Zhang, Yanzhao Wu, et al. 2023. Rethinking learning rate tuning in the era of large language models. In CogMI.
233
+ Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, et al. 2020. Scaling laws for neural language models. arXiv.
234
+ Zixuan Ke, Haowei Lin, Yijia Shao, Hu Xu, Lei Shu, and Bing Liu. 2022. Continual training of language models for few-shot learning. In EMNLP.
235
+ Zixuan Ke, Yijia Shao, Haowei Lin, Tatsuya Konishi, Gyuhak Kim, and Bing Liu. 2023. Continual pretraining of language models. In ICLR.
236
+ Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. 2024. Cmmlu: Measuring massive multitask language understanding in chinese. In Findings of ACL.
237
+ Yudong Li, Yuqing Zhang, Zhe Zhao, Linlin Shen, Wei-jie Liu, Weiquan Mao, and Hui Zhang. 2022. Csl: A large-scale chinese scientific literature dataset. In COLING.
238
+ Junpeng Liu, Kaiyu Huang, Hao Yu, Jiuyi Li, Jinsong Su, and Degen Huang. 2023. Continual learning for multilingual neural machine translation via dual importance-based model division. In EMNLP.
239
+ Xin Liu, Baosong Yang, Dayiheng Liu, Haibo Zhang, Weihua Luo, Min Zhang, Haiying Zhang, and Jinsong Su. 2021. Bridging subword gaps in pretrain-finetune paradigm for natural language generation. In ACL.
240
+ Ben Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, et al. 2020. Language models are few-shot learners. In NeurIPS.
241
+ Bohao PENG, Zhuotao Tian, Shu Liu, Ming-Chang Yang, and Jiaya Jia. 2024. Scalable language model with generalized continual learning. In ICLR.
242
+ Guangyue Peng, Tao Ge, Si-Qing Chen, Furu Wei, and Houfeng Wang. 2023. Semiparametric language models are scalable continual learners. arXiv.
243
+ Yujia Qin, Jiajie Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2022. Elle: Efficient lifelong pre-training for emerging data. In Findings of ACL.
244
+ Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, and Amjad Almahairi. 2023. Progressive prompts: Continual learning for language models. In ICLR.
245
+
246
+ Leslie N Smith and Nicholay Topin. 2019. Superconvergence: Very fast training of neural networks using large learning rates. In Artificial Intelligence and Machine Learning for Multi-domain Operations Applications.
247
+ Chenyang Song, Xu Han, Zheni Zeng, Kuai Li, Chen Chen, Zhiyuan Liu, et al. 2023. Conpet: Continual parameter-efficient tuning for large language models. arXiv.
248
+ Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2020. Investigating prior knowledge for challenging Chinese machine reading comprehension. TACL.
249
+ Mirac Suzgun, Nathan Scales, Nathanael Scharli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, et al. 2023. Challenging big-bench tasks and whether chain-of-thought can solve them. In Findings of ACL.
250
+ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, et al. 2023a. Llama: Open and efficient foundation language models. arXiv.
251
+ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv.
252
+ Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, et al. 2024. A survey on large language model based autonomous agents. Frontiers Comput. Sci.
253
+ Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, et al. 2022a. Dualprompt: Complementary prompting for rehearsal-free continual learning. In ECCV.
254
+ Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, et al. 2022b. Learning to prompt for continual learning. In CVPR.
255
+ Jiayang Wu, Wensheng Gan, Zefeng Chen, Shicheng Wan, and Philip S. Yu. 2023. Multimodal large language models: A survey. In IEEE BigData.
256
+ Yanzhao Wu and Ling Liu. 2023. Selecting and composing learning rate policies for deep neural networks. ACM TIST.
257
+ Yanzhao Wu, Ling Liu, Juhyun Bae, Ka-Ho Chow, Arun Iyengar, Calton Pu, et al. 2019. Demystifying learning rate policies for high accuracy training of deep neural networks. In IEEE BigData.
258
+ Yong Xie, Karan Aggarwal, and Aitzaz Ahmad. 2024. Efficient continual pre-training for building domain specific large language models. In Findings of ACL.
259
+ Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, et al. 2023. Baichuan 2: Open large-scale language models. arXiv.
260
+
261
+ Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, et al. 2023. Glm-130b: An open bilingual pre-trained model. In ICLR.
262
+ Xiaotian Zhang, Chunyang Li, Yi Zong, Zhengyu Ying, Liang He, and Xipeng Qiu. 2023. Evaluating the performance of large language models on gaokao benchmark. arXiv.
263
+ Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, et al. 2023. A survey of large language models. arXiv.
264
+ Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2024. Agieval: A human-centric benchmark for evaluating foundation models. In Findings of NAACL.
265
+ Chulun Zhou, Yunlong Liang, Fandong Meng, Jie Zhou, Jinan Xu, Hongji Wang, Min Zhang, and Jinsong Su. 2023. A multi-task multi-stage transitional training framework for neural chat translation. TPAMI.
266
+ Chulun Zhou, Fandong Meng, Jie Zhou, Min Zhang, Hongji Wang, and Jinsong Su. 2022. Confidence based bidirectional global context aware training framework for neural machine translation. In ACL.
267
+
268
+ <table><tr><td>Size</td><td>MLR</td><td>Hidden</td><td>Head</td><td>Layer</td></tr><tr><td>203M</td><td>1e-3</td><td>512</td><td>8</td><td>24</td></tr><tr><td>406M</td><td>6e-4</td><td>1,024</td><td>16</td><td>12</td></tr><tr><td>608M</td><td>6e-4</td><td>1,024</td><td>16</td><td>24</td></tr><tr><td>1.2B</td><td>3e-4</td><td>1,536</td><td>16</td><td>24</td></tr><tr><td>2.1B</td><td>3e-4</td><td>1,536</td><td>16</td><td>48</td></tr><tr><td>3.1B</td><td>3e-4</td><td>8,192</td><td>32</td><td>40</td></tr></table>
269
+ Table 9: The detailed hyper-parameters of LLMs with different model sizes.
+
270
+ ![](images/c6e98f0abb5e8cd027be577ef73c83be5fa2b43cd49a852b03bc13f6bff370fe.jpg)
271
+ Figure 6: The learning rate curves of different adaptation methods of CPT for version updates of LLMs. The learning rate curves are plotted based on the cosine learning rate schedule.
272
+
273
+ # A Detailed Hyper-Parameters
274
+
275
+ In this work, we compare PTFS, CPT and our paradigm based on LLMs with different sizes, whose hyper-parameters are listed in Table 9. Following Kaplan et al. (2020) and Mann et al. (2020), we set smaller maximum learning rates for larger LLMs. Besides, the minimum learning rate is configured to be $10\%$ of the maximum learning rate.
276
+
277
+ # B CPT Variants
278
+
279
+ In order to adapt traditional CPT for version updates of LLMs, we compare three variants of CPT in Figure 6 (a minimal code sketch of their schedules follows the list):
280
+
281
+ - RewarmMax: Warm up the learning rate periodically, and use the learning rate schedule of the old version to train the new version of LLMs (Gupta et al., 2023).
282
+
285
+ <table><tr><td rowspan="2">LRS</td><td rowspan="2">Variant</td><td colspan="3">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan="3">Cos</td><td>RewarmMax</td><td>21.22</td><td>19.79</td><td>18.97</td></tr><tr><td>ResetMax</td><td>21.11</td><td>19.70</td><td>18.87</td></tr><tr><td>KeepMin</td><td>23.00</td><td>21.99</td><td>21.26</td></tr><tr><td rowspan="3">Knee</td><td>RewarmMax</td><td>20.74</td><td>19.46</td><td>18.70</td></tr><tr><td>ResetMax</td><td>20.56</td><td>19.27</td><td>18.52</td></tr><tr><td>KeepMin</td><td>22.22</td><td>21.36</td><td>20.37</td></tr><tr><td rowspan="3">Multi</td><td>RewarmMax</td><td>20.80</td><td>19.55</td><td>18.82</td></tr><tr><td>ResetMax</td><td>20.62</td><td>19.37</td><td>18.65</td></tr><tr><td>KeepMin</td><td>22.11</td><td>21.24</td><td>20.60</td></tr></table>
286
+
287
+ Table 10: The comparison among RewarmMax, ResetMax and KeepMin for CPT.
288
+
289
+ - ResetMax: Directly set the learning rate as the maximum periodically, and use the learning rate schedule of the old version to train the new version of LLMs (Gupta et al., 2023).
290
+ - KeepMin: Keep the learning rate at the minimum by using a constant learning rate schedule to ensure the convergence of LLMs during training (Gogoulou et al., 2024).
291
+
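Below is a minimal sketch of the three variants, assuming a cosine schedule with 10K steps per version and the 2K-step warm-up and learning-rate bounds from Section 2.1; the warm-up of the very first version is omitted for brevity, and the function names and the exact way the "old schedule" is replayed are illustrative simplifications of Gupta et al. (2023) and Gogoulou et al. (2024) rather than their released code.

```python
import math

MAX_LR, MIN_LR, WARMUP, T = 3e-4, 3e-5, 2_000, 10_000

def cosine_v1(step: int) -> float:
    """The first version's 10K-step cosine schedule, reused by every update."""
    progress = min(step, T) / T
    return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1 + math.cos(math.pi * progress))

def rewarm_max(step: int) -> float:
    # RewarmMax: re-warm the LR at the start of each 10K-step update,
    # then follow the old version's cosine schedule again.
    s = step % T
    return MAX_LR * s / WARMUP if s < WARMUP else cosine_v1(s)

def reset_max(step: int) -> float:
    # ResetMax: jump straight back to the maximum LR at each update,
    # then follow the old version's cosine schedule.
    return cosine_v1(step % T)

def keep_min(step: int) -> float:
    # KeepMin: keep a constant minimum LR for every update after the first.
    return MIN_LR if step >= T else cosine_v1(step)

# First step of the second version under each variant.
print(rewarm_max(10_500), reset_max(10_000), keep_min(15_000))
```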
292
+ Experimental results are listed in Table 10. We observe that ResetMax achieves the best pre-training performance among these variants. Therefore, we use ResetMax for the other experiments.
293
+
294
+ # C Performance of Downstream Tasks
295
+
296
+ In addition to the standard training scale (LLaMA-1.2B trained for 42B tokens), we also evaluate LLMs with a larger training dataset (LLaMA-1.2B trained for 168B tokens) and a larger model size (LLaMA-3.1B trained for 42B tokens). We report the performance of downstream tasks across different versions of LLMs, as shown in Table 11. Experimental results show that our paradigm achieves superior average performance compared with PTFS and CPT across different training scales for downstream tasks.
297
+
298
+ # D Comparison between CPT and Ours
299
+
300
+ Existing experimental results show that while our paradigm outperforms CPT in terms of performance, it has higher training cost. To provide a more direct comparison between CPT and our paradigm, we conduct an experiment where the training cost (measured by training steps) is kept consistent.
301
+
302
+ <table><tr><td>Scale</td><td>Ver.</td><td>TP</td><td>C3</td><td>GSM8K</td><td>MMLU</td><td>CSL</td><td>C-EVAL</td><td>BBH</td><td>CMMLU</td><td>GAOKAO</td><td>AGIEval</td><td>AVG</td></tr><tr><td rowspan="9">1.2B 42B</td><td rowspan="3">V2</td><td>PTFS</td><td>38.00</td><td>4.63</td><td>24.00</td><td>38.25</td><td>30.09</td><td>17.43</td><td>25.37</td><td>18.10</td><td>14.59</td><td>23.38</td></tr><tr><td>CPT</td><td>37.00</td><td>4.09</td><td>23.52</td><td>35.11</td><td>27.42</td><td>18.55</td><td>25.63</td><td>18.86</td><td>13.40</td><td>22.62</td></tr><tr><td>Ours</td><td>38.60</td><td>5.08</td><td>22.94</td><td>39.08</td><td>28.38</td><td>20.79</td><td>24.88</td><td>18.48</td><td>14.73</td><td>23.66</td></tr><tr><td rowspan="3">V3</td><td>PTFS</td><td>40.30</td><td>3.34</td><td>24.33</td><td>39.17</td><td>25.85</td><td>17.11</td><td>25.30</td><td>22.03</td><td>14.34</td><td>23.53</td></tr><tr><td>CPT</td><td>38.30</td><td>4.70</td><td>23.32</td><td>36.40</td><td>28.38</td><td>21.11</td><td>24.76</td><td>17.85</td><td>13.47</td><td>23.14</td></tr><tr><td>Ours</td><td>42.10</td><td>4.63</td><td>23.22</td><td>34.91</td><td>29.35</td><td>19.70</td><td>24.73</td><td>19.24</td><td>14.90</td><td>23.64</td></tr><tr><td rowspan="3">V4</td><td>PTFS</td><td>35.70</td><td>4.25</td><td>24.93</td><td>38.75</td><td>27.04</td><td>16.73</td><td>24.97</td><td>21.01</td><td>14.10</td><td>23.05</td></tr><tr><td>CPT</td><td>43.90</td><td>4.55</td><td>22.20</td><td>38.69</td><td>27.19</td><td>21.62</td><td>24.43</td><td>18.23</td><td>13.50</td><td>23.81</td></tr><tr><td>Ours</td><td>41.90</td><td>5.53</td><td>24.09</td><td>40.24</td><td>27.71</td><td>21.84</td><td>24.78</td><td>17.24</td><td>14.40</td><td>24.19</td></tr><tr><td rowspan="9">1.2B 168B</td><td rowspan="3">V2</td><td>PTFS</td><td>38.90</td><td>6.82</td><td>23.49</td><td>40.33</td><td>29.27</td><td>23.28</td><td>25.14</td><td>23.29</td><td>14.39</td><td>24.99</td></tr><tr><td>CPT</td><td>43.80</td><td>7.13</td><td>24.61</td><td>37.22</td><td>26.52</td><td>22.96</td><td>25.40</td><td>20.13</td><td>14.25</td><td>24.67</td></tr><tr><td>Ours</td><td>43.20</td><td>8.95</td><td>25.43</td><td>40.45</td><td>26.90</td><td>22.16</td><td>25.45</td><td>18.73</td><td>15.94</td><td>25.25</td></tr><tr><td rowspan="3">V3</td><td>PTFS</td><td>47.40</td><td>8.49</td><td>25.04</td><td>42.42</td><td>27.42</td><td>26.88</td><td>25.06</td><td>18.23</td><td>16.59</td><td>26.39</td></tr><tr><td>CPT</td><td>40.30</td><td>8.42</td><td>24.30</td><td>41.61</td><td>26.30</td><td>24.07</td><td>24.59</td><td>20.00</td><td>18.00</td><td>25.29</td></tr><tr><td>Ours</td><td>47.70</td><td>9.33</td><td>25.35</td><td>44.39</td><td>25.85</td><td>23.05</td><td>24.85</td><td>17.60</td><td>15.63</td><td>25.97</td></tr><tr><td rowspan="3">V4</td><td>PTFS</td><td>48.50</td><td>8.19</td><td>24.73</td><td>44.37</td><td>26.82</td><td>25.70</td><td>25.19</td><td>19.49</td><td>15.36</td><td>26.48</td></tr><tr><td>CPT</td><td>49.10</td><td>8.34</td><td>25.48</td><td>40.60</td><td>27.27</td><td>22.54</td><td>25.38</td><td>21.39</td><td>17.44</td><td>26.39</td></tr><tr><td>Ours</td><td>48.20</td><td>9.02</td><td>26.30</td><td>44.56</td><td>27.27</td><td>23.69</td><td>25.56</td><td>22.53</td><td>14.20</td><td>26.81</td></tr><tr><td rowspan="9">3.1B 42B</td><td 
rowspan="3">V2</td><td>PTFS</td><td>41.10</td><td>6.37</td><td>24.00</td><td>36.43</td><td>24.15</td><td>21.62</td><td>24.97</td><td>19.75</td><td>14.22</td><td>23.62</td></tr><tr><td>CPT</td><td>46.00</td><td>6.14</td><td>24.00</td><td>40.81</td><td>27.04</td><td>21.94</td><td>23.57</td><td>20.89</td><td>13.28</td><td>24.85</td></tr><tr><td>Ours</td><td>43.70</td><td>8.57</td><td>24.23</td><td>40.17</td><td>25.78</td><td>24.59</td><td>25.70</td><td>19.37</td><td>14.22</td><td>25.15</td></tr><tr><td rowspan="3">V3</td><td>PTFS</td><td>44.30</td><td>8.34</td><td>23.83</td><td>40.99</td><td>27.12</td><td>21.71</td><td>24.73</td><td>21.65</td><td>15.48</td><td>25.35</td></tr><tr><td>CPT</td><td>43.90</td><td>8.11</td><td>25.23</td><td>41.24</td><td>26.00</td><td>25.00</td><td>25.44</td><td>20.00</td><td>13.40</td><td>25.37</td></tr><tr><td>Ours</td><td>47.90</td><td>9.48</td><td>24.02</td><td>40.74</td><td>25.71</td><td>25.73</td><td>25.09</td><td>19.62</td><td>14.54</td><td>25.87</td></tr><tr><td rowspan="3">V4</td><td>PTFS</td><td>50.20</td><td>11.22</td><td>25.98</td><td>39.89</td><td>27.64</td><td>23.12</td><td>25.47</td><td>21.65</td><td>15.46</td><td>26.74</td></tr><tr><td>CPT</td><td>50.60</td><td>9.78</td><td>25.12</td><td>41.03</td><td>28.08</td><td>22.48</td><td>25.38</td><td>21.01</td><td>13.93</td><td>26.38</td></tr><tr><td>Ours</td><td>49.80</td><td>10.77</td><td>25.77</td><td>42.95</td><td>26.97</td><td>22.45</td><td>26.25</td><td>22.41</td><td>14.80</td><td>26.91</td></tr></table>
303
+
304
+ Table 11: The performance of downstream tasks for LLMs across four versions. In addition to the standard training scale (LLaMA-1.2B trained for 42B tokens), we further evaluate LLMs trained on more data (LLaMA-1.2B trained for 168B tokens) and LLMs with a larger size (LLaMA-3.1B trained for 42B tokens).
305
+
306
+ <table><tr><td rowspan="2">α</td><td rowspan="2">TP</td><td colspan="3">PPL</td></tr><tr><td>V2</td><td>V3</td><td>V4</td></tr><tr><td rowspan="2">0.2</td><td>CPT</td><td>21.40</td><td>20.69</td><td>20.20</td></tr><tr><td>Ours</td><td>20.96</td><td>20.05</td><td>19.59</td></tr><tr><td rowspan="2">0.4</td><td>CPT</td><td>21.06</td><td>20.43</td><td>20.18</td></tr><tr><td>Ours</td><td>20.81</td><td>19.88</td><td>19.42</td></tr><tr><td rowspan="2">0.6</td><td>CPT</td><td>20.85</td><td>20.21</td><td>19.85</td></tr><tr><td>Ours</td><td>20.82</td><td>19.86</td><td>19.40</td></tr></table>
307
+
308
+ Table 12: The comparison between CPT and our paradigm with equal training cost. The $\alpha$ ranges from 0.2 to 0.6.
309
+
310
+ <table><tr><td rowspan="2">TP</td><td colspan="3">PPL</td></tr><tr><td>10.5B</td><td>21.0B</td><td>31.5B</td></tr><tr><td>PTFS</td><td>20.84</td><td>19.28</td><td>18.36</td></tr><tr><td>CPT</td><td>21.11</td><td>19.50</td><td>18.47</td></tr><tr><td>Ours</td><td>20.13</td><td>18.81</td><td>18.09</td></tr></table>
311
+
312
+ Table 13: The comparison of different paradigms for training two versions of LLaMA-1.2B. The data increment of the second version varies from 10.5B to 31.5B tokens.
313
+
314
+ Concretely, we sample a dataset of approximately 5.25B tokens (5K steps) and use it to train four versions of LLMs. As mentioned in Section 3.2, we analyze the time complexity of CPT and our paradigm, finding that the time complexity of ours is about $(1 + \alpha)$ times that of CPT.
315
+
316
+ We compare the results for different $\alpha$ (the proportion of fast-decaying steps) set as 0.2, 0.4 and 0.6, respectively. To ensure that the total training cost of the two paradigms is consistent, our paradigm always uses an additional 10K steps for each version update, while CPT uses an additional 12K ($\alpha = 0.2$), 14K ($\alpha = 0.4$) and 16K ($\alpha = 0.6$) steps for each version update, respectively. The experimental results in Table 12 demonstrate that our paradigm remains effective even when its total training cost is kept consistent with that of CPT.
317
+
318
+ # E Version Updates with Inconsistent Data Increments
319
+
320
+ Existing experiments are based on the assumption of consistent data increments during version updates of LLMs. The effectiveness of our paradigm has not yet been validated in the scenario with varying data increments. Hence, we conduct a comparative experiment involving PTFS, CPT, and our paradigm, training two versions of LLMs.
321
+
322
+ For all paradigms, the LLMs of the first version are trained with 10.5B tokens (10K steps), while the LLMs of the second version are trained with 10.5B, 21B, and 31.5B tokens, respectively. As shown by the experimental results in Table 13, our paradigm maintains better pre-training performance than PTFS and CPT in the scenario with inconsistent data increments. This further demonstrates the generalization of our paradigm.
2024/A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d9f6652365d53561874a3638517e41ee38e807053b74e525f514b254745c338d
3
+ size 995487
2024/A Learning Rate Path Switching Training Paradigm for Version Updates of Large Language Models/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2024/A Morphology-Based Investigation of Positional Encodings/b5b51b30-492b-4f55-95ec-1b80f07d62bc_content_list.json ADDED
@@ -0,0 +1,1613 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "A Morphology-Based Investigation of Positional Encodings",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 191,
8
+ 90,
9
+ 803,
10
+ 111
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Poulami Ghosh*, Shikhar Vashishth†, Raj Dabre‡, Pushpak Bhattacharyya*",
17
+ "bbox": [
18
+ 171,
19
+ 143,
20
+ 821,
21
+ 162
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "*IIT Bombay, India, †Google Research, India ‡NICT, Japan",
28
+ "bbox": [
29
+ 263,
30
+ 162,
31
+ 742,
32
+ 179
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "{poulami, pb}@cse.iitb.ac.in",
39
+ "bbox": [
40
+ 373,
41
+ 179,
42
+ 623,
43
+ 195
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "Abstract",
50
+ "text_level": 1,
51
+ "bbox": [
52
+ 260,
53
+ 261,
54
+ 339,
55
+ 275
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "Contemporary deep learning models effectively handle languages with diverse morphology despite not being directly integrated into them. Morphology and word order are closely linked, with the latter incorporated into transformer-based models through positional encodings. This prompts a fundamental inquiry: Is there a correlation between the morphological complexity of a language and the utilization of positional encoding in pre-trained language models? In pursuit of an answer, we present the first study addressing this question, encompassing 22 languages and 5 downstream tasks. Our findings reveal that the importance of positional encoding diminishes with increasing morphological complexity in languages. Our study motivates the need for a deeper understanding of positional encoding, augmenting them to better reflect the different languages under consideration.",
62
+ "bbox": [
63
+ 144,
64
+ 290,
65
+ 460,
66
+ 574
67
+ ],
68
+ "page_idx": 0
69
+ },
70
+ {
71
+ "type": "text",
72
+ "text": "1 Introduction",
73
+ "text_level": 1,
74
+ "bbox": [
75
+ 114,
76
+ 589,
77
+ 258,
78
+ 602
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "text",
84
+ "text": "Pre-trained language models (PLMs) (Devlin et al., 2018; Liu et al., 2019a; Radford et al., 2019; Raffel et al., 2020a; Brown et al., 2020) built upon transformers (Vaswani et al., 2017) have achieved ground-breaking results across a wide spectrum of language processing tasks such as natural language inference (Liu et al., 2019b), text classification (Raffel et al., 2020b), named entity recognition (Liu et al., 2019b), and part-of-speech tagging (Martin et al., 2020). However, only a few models take into account various linguistic aspects and theories in their design (Nzeyimana and Rubungo, 2022; Park et al., 2021). Morphology and word order of a language are closely related (Sapir, 1921; Comrie, 1989; Blake, 2001); the latter is incorporated into transformer-based models through positional encoding (PE) (Dufter et al., 2022). As language models are being developed for more languages which significantly differ in morphological typology, it",
85
+ "bbox": [
86
+ 115,
87
+ 615,
88
+ 489,
89
+ 921
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "could be beneficial to construct language models that are sensitive to these linguistic nuances. Moreover, the enormous computational cost incurred during their training is a major challenge in the development of PLMs. Acquiring a deeper understanding of how various components of a PLM function in different languages can provide valuable insights regarding their necessity across languages. This motivates us to investigate the relation between positional encoding and morphology, which is essential for wider usage of PLMs across different languages. Our contributions are:",
96
+ "bbox": [
97
+ 507,
98
+ 261,
99
+ 884,
100
+ 453
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "list",
106
+ "sub_type": "text",
107
+ "list_items": [
108
+ "1. Performing the first study about the varying importance of positional encoding across languages with different morphological complexity.",
109
+ "2. Showing that the impact of PE diminishes as the morphological complexity of a language increases.",
110
+ "3. Conducting exhaustive experiments covering 22 different languages across 9 language families and 5 diverse natural language processing tasks."
111
+ ],
112
+ "bbox": [
113
+ 507,
114
+ 455,
115
+ 882,
116
+ 583
117
+ ],
118
+ "page_idx": 0
119
+ },
120
+ {
121
+ "type": "text",
122
+ "text": "2 Related work",
123
+ "text_level": 1,
124
+ "bbox": [
125
+ 509,
126
+ 601,
127
+ 660,
128
+ 615
129
+ ],
130
+ "page_idx": 0
131
+ },
132
+ {
133
+ "type": "text",
134
+ "text": "Positional Encoding (PE): Various methods have been proposed to incorporate position information in transformer models. Absolute positions in a sequence, represented by fixed (Vaswani et al., 2017) or trainable encodings (Gehring et al., 2017; Devlin et al., 2018; Radford et al., 2019; Lan et al., 2019), are typically added to input embeddings. Relative positions are encoded by directly adding position biases into the attention matrix (Shaw et al., 2018; Yang et al., 2019b; Raffel et al., 2020a; Huang et al., 2020; He et al., 2020; Press et al., 2021). Su et al. (2021) introduce rotary positional embeddings, employing a rotation matrix to encode both absolute and relative position information. Our study focuses on BERT models, which use learnable absolute PEs (Wang et al., 2020; Huang et al., 2020). We draw insights from linguistics theories and question the design choices for BERT-style",
135
+ "bbox": [
136
+ 507,
137
+ 631,
138
+ 884,
139
+ 921
140
+ ],
141
+ "page_idx": 0
142
+ },
143
+ {
144
+ "type": "page_number",
145
+ "text": "21035",
146
+ "bbox": [
147
+ 473,
148
+ 927,
149
+ 524,
150
+ 940
151
+ ],
152
+ "page_idx": 0
153
+ },
154
+ {
155
+ "type": "footer",
156
+ "text": "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21035-21045 November 12-16, 2024 ©2024 Association for Computational Linguistics",
157
+ "bbox": [
158
+ 152,
159
+ 945,
160
+ 843,
161
+ 972
162
+ ],
163
+ "page_idx": 0
164
+ },
165
+ {
166
+ "type": "image",
167
+ "img_path": "images/c26bf832c8c046b5e23485c10ab518d0edaa01615dafd02b020466a5b57edeeb.jpg",
168
+ "image_caption": [
169
+ "Figure 1: The figure illustrates the effect of word order on semantics for two languages: English (left) and Sanskrit (right). English is a morphologically poor language with SVO word order whereas Sanskrit is a morphologically rich language with no dominant word order (NODOM). Distorting the word order completely alters the meaning for English. However, for Sanskrit the meaning remains intact."
170
+ ],
171
+ "image_footnote": [],
172
+ "bbox": [
173
+ 139,
174
+ 80,
175
+ 855,
176
+ 262
177
+ ],
178
+ "page_idx": 1
179
+ },
180
+ {
181
+ "type": "text",
182
+ "text": "models that were designed with English in mind.",
183
+ "bbox": [
184
+ 112,
185
+ 351,
186
+ 478,
187
+ 366
188
+ ],
189
+ "page_idx": 1
190
+ },
191
+ {
192
+ "type": "text",
193
+ "text": "Absence of Positional Information: Eliminating positional encoding results in a bag-of-words representation. Sinha et al. (2021) pre-train a RoBERTa model without positional embeddings and observe degraded performance on GLUE and PAWS tasks. Haviv et al. (2022); Le Scao et al. (2022) demonstrate that causal language models lacking explicit PE remain competitive with standard position-aware models. Additionally, Haviv et al. (2022) find that a pre-trained RoBERTa large model without PE exhibits higher perplexities than position-informed models. However, all these observations are limited to the English language.",
194
+ "bbox": [
195
+ 112,
196
+ 367,
197
+ 487,
198
+ 576
199
+ ],
200
+ "page_idx": 1
201
+ },
202
+ {
203
+ "type": "text",
204
+ "text": "Linguistic Information in BERT: Several works studied the linguistic knowledge encoded in PLMs such as BERT, focusing on different aspects of linguistics such as syntax (Goldberg, 2019; Jawahar et al., 2019), semantics (Ethayarajh, 2019) and morphology (Edmiston, 2020). Tenney et al. (2019); Puccetti et al. (2021) investigate the extent and organization of the linguistic information encoded in BERT. Gerz et al. (2018) investigate the connection between language modeling and linguistic typology across 50 different languages. However, they do not consider PLMs. Otmakhova et al. (2022) examine how various layers within a BERT model encode morphology and syntax.",
205
+ "bbox": [
206
+ 112,
207
+ 577,
208
+ 489,
209
+ 802
210
+ ],
211
+ "page_idx": 1
212
+ },
213
+ {
214
+ "type": "text",
215
+ "text": "3 The Relationship between Morphology and Word Order",
216
+ "text_level": 1,
217
+ "bbox": [
218
+ 112,
219
+ 814,
220
+ 482,
221
+ 845
222
+ ],
223
+ "page_idx": 1
224
+ },
225
+ {
226
+ "type": "text",
227
+ "text": "In this section, we investigate the relationship between morphology and word order as outlined in different linguistic theories.",
228
+ "bbox": [
229
+ 112,
230
+ 857,
231
+ 487,
232
+ 903
233
+ ],
234
+ "page_idx": 1
235
+ },
236
+ {
237
+ "type": "text",
238
+ "text": "Linguistic theories : Morphological case mark-",
239
+ "bbox": [
240
+ 112,
241
+ 904,
242
+ 487,
243
+ 920
244
+ ],
245
+ "page_idx": 1
246
+ },
247
+ {
248
+ "type": "text",
249
+ "text": "ings serve a similar function as word order (Sapir, 1921; Blake, 2001). Several theories suggest that the presence of morphological case is necessary for free word order in a language (Comrie, 1989; Haspelmath, 1999). Either morphological case or structural position facilitates the unambiguous determination of grammatical role of the constituents of a sentence. The existence of morphological case reduces the need for fixed structural position in determination of the grammatical function of a word or phrase, allowing for variable word order. However, if morphological case is absent, fixed placement of words (and phrases) is necessary, exhibiting a fixed or rigid word order. In this work, we align our empirical study in accordance with the above theories that hints at the existence of a correlation between morphology and word order. Specifically, morphologically rich languages which tend to exhibit higher word-order flexibility as compared to morphologically poor languages<sup>1</sup>.",
250
+ "bbox": [
251
+ 507,
252
+ 351,
253
+ 884,
254
+ 674
255
+ ],
256
+ "page_idx": 1
257
+ },
258
+ {
259
+ "type": "text",
260
+ "text": "3.1 Spectrum of Morphological Complexity",
261
+ "text_level": 1,
262
+ "bbox": [
263
+ 507,
264
+ 683,
265
+ 867,
266
+ 699
267
+ ],
268
+ "page_idx": 1
269
+ },
270
+ {
271
+ "type": "text",
272
+ "text": "Through the lens of morphological typology (Haspelmath and Sims, 2013), we can categorize and cluster languages by studying their inherent morphological structures. At one extreme, we find languages such as Chinese and Vietnamese, which fall into the category of analytic languages and are morphologically poor. In these languages, it is",
273
+ "bbox": [
274
+ 507,
275
+ 703,
276
+ 882,
277
+ 816
278
+ ],
279
+ "page_idx": 1
280
+ },
281
+ {
282
+ "type": "page_footnote",
283
+ "text": "<sup>1</sup>The theory regarding morphology and word order is a linguistically complex topic. Concurrent theories in the literature propose that there is no correlation between morphological complexity and word order. Müller (2002) demonstrates phenomena like scrambling and topic shift, where the change in word order does not necessarily require a high level of morphological complexity. However, this is beyond the scope of our study.",
284
+ "bbox": [
285
+ 507,
286
+ 824,
287
+ 882,
288
+ 920
289
+ ],
290
+ "page_idx": 1
291
+ },
292
+ {
293
+ "type": "page_number",
294
+ "text": "21036",
295
+ "bbox": [
296
+ 475,
297
+ 927,
298
+ 524,
299
+ 940
300
+ ],
301
+ "page_idx": 1
302
+ },
303
+ {
304
+ "type": "text",
305
+ "text": "essential for words to maintain fixed positions in order to accurately convey grammatical relationships, resulting in a strict and invariant word order. On the other extreme, we find synthetic languages such as Sanskrit and Finnish, known for their rich morphology, where it's possible to rearrange the word order within a sentence without changing its meaning, as illustrated in Figure 1. However, most languages fall between these two extremes. Synthetic languages can be categorized into two main types. Agglutinative languages like Hungarian and Turkish tend to stick together multiple morphemes while fusional languages fuse several morphemes to express various grammatical features.",
306
+ "bbox": [
307
+ 112,
308
+ 84,
309
+ 492,
310
+ 311
311
+ ],
312
+ "page_idx": 2
313
+ },
314
+ {
315
+ "type": "text",
316
+ "text": "4 Methodology",
317
+ "text_level": 1,
318
+ "bbox": [
319
+ 112,
320
+ 319,
321
+ 265,
322
+ 337
323
+ ],
324
+ "page_idx": 2
325
+ },
326
+ {
327
+ "type": "text",
328
+ "text": "In our work, we first quantify morphological complexity, and then systematically study the effect of removal of positional encodings during fine-tuning. Please refer to Section 3 for details on linguistic theories governing our study.",
329
+ "bbox": [
330
+ 112,
331
+ 344,
332
+ 489,
333
+ 426
334
+ ],
335
+ "page_idx": 2
336
+ },
337
+ {
338
+ "type": "text",
339
+ "text": "4.1 Quantifying Morphological Complexity",
340
+ "text_level": 1,
341
+ "bbox": [
342
+ 112,
343
+ 435,
344
+ 470,
345
+ 451
346
+ ],
347
+ "page_idx": 2
348
+ },
349
+ {
350
+ "type": "text",
351
+ "text": "Following Kettunen (2014); Jayanthi and Pratapa (2021); Cöltekin and Rama (2023), we employ type-token ratio (TTR) as an empirical proxy of morphological complexity of a language. We use the many-to-many multilingual Flores-200 benchmark (Costa-jussa et al., 2022) to ensure information consistency across languages. As Chinese is an unsegmented language, we use character level ELMo model from the pywordseg library (Chuang, 2019) to split Chinese text into words. The remaining languages are space-delimited. Please refer to Appendix B for more details.",
352
+ "bbox": [
353
+ 112,
354
+ 456,
355
+ 489,
356
+ 650
357
+ ],
358
+ "page_idx": 2
359
+ },
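As a rough illustration of the TTR proxy described above, the sketch below computes a type-token ratio for a space-delimited language; it is not the authors' released code, `flores_sentences` is a hypothetical list holding one language's FLORES-200 sentences, and the pywordseg path for Chinese is omitted.

```python
# Illustrative sketch (not the authors' released code): type-token ratio as a
# proxy for morphological complexity, for a space-delimited language.
import nltk

nltk.download("punkt", quiet=True)  # tokenizer model used by word_tokenize

def type_token_ratio(sentences):
    """TTR = number of distinct word types / total number of word tokens."""
    tokens = [tok for sent in sentences for tok in nltk.word_tokenize(sent)]
    return len(set(tokens)) / len(tokens)

# `flores_sentences` is a hypothetical list of one language's FLORES-200 sentences;
# a higher ratio means more distinct word forms, i.e. richer morphology.
# ttr = type_token_ratio(flores_sentences)
```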
360
+ {
361
+ "type": "text",
362
+ "text": "4.2 Morphology-based Investigation",
363
+ "text_level": 1,
364
+ "bbox": [
365
+ 112,
366
+ 659,
367
+ 415,
368
+ 674
369
+ ],
370
+ "page_idx": 2
371
+ },
372
+ {
373
+ "type": "text",
374
+ "text": "To evaluate the impact of positional embeddings, we set them to 0, effectively nullifying its effect during fine-tuning. Our objective was to include multiple languages to ensure the generalizability of our findings. However, training language models from scratch (without positional encoding) for a large number of languages requires significant computational power and financial resources. Therefore, our primary focus was investigating the impact of nullifying positional encoding during fine-tuning.",
375
+ "bbox": [
376
+ 112,
377
+ 680,
378
+ 489,
379
+ 840
380
+ ],
381
+ "page_idx": 2
382
+ },
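A minimal sketch of one way to nullify the learned absolute position embeddings of a HuggingFace BERT checkpoint before fine-tuning follows; the paper does not spell out its exact implementation, so the model name and task head here are assumptions for illustration only.

```python
# Minimal sketch (assumed implementation): zero out and freeze BERT's learned
# absolute position embeddings so fine-tuning proceeds without positional information.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=3)

with torch.no_grad():
    model.bert.embeddings.position_embeddings.weight.zero_()  # every position vector becomes 0
model.bert.embeddings.position_embeddings.weight.requires_grad = False  # keep them nullified while fine-tuning
```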
383
+ {
384
+ "type": "text",
385
+ "text": "We posit that for morphologically rich languages like Sanskrit, this would have minimal impact on downstream performance. For example, as depicted in Figure 1, the semantic meaning of a sentence in Sanskrit remains consistent even when the",
386
+ "bbox": [
387
+ 112,
388
+ 841,
389
+ 489,
390
+ 920
391
+ ],
392
+ "page_idx": 2
393
+ },
394
+ {
395
+ "type": "text",
396
+ "text": "order of tokens is shuffled. However, this does not hold for morphologically poor languages.",
397
+ "bbox": [
398
+ 507,
399
+ 84,
400
+ 882,
401
+ 117
402
+ ],
403
+ "page_idx": 2
404
+ },
405
+ {
406
+ "type": "text",
407
+ "text": "5 Experimental Setup",
408
+ "text_level": 1,
409
+ "bbox": [
410
+ 509,
411
+ 131,
412
+ 717,
413
+ 149
414
+ ],
415
+ "page_idx": 2
416
+ },
417
+ {
418
+ "type": "text",
419
+ "text": "To ensure the generalizability of our findings, we choose to perform a comprehensive study spanning different languages and tasks.",
420
+ "bbox": [
421
+ 507,
422
+ 162,
423
+ 882,
424
+ 210
425
+ ],
426
+ "page_idx": 2
427
+ },
428
+ {
429
+ "type": "text",
430
+ "text": "5.1 Tasks and Languages",
431
+ "text_level": 1,
432
+ "bbox": [
433
+ 507,
434
+ 225,
435
+ 724,
436
+ 240
437
+ ],
438
+ "page_idx": 2
439
+ },
440
+ {
441
+ "type": "text",
442
+ "text": "As our work deals with the interplay of morphology and syntax in PLMs, we consider two sets of tasks:",
443
+ "bbox": [
444
+ 507,
445
+ 248,
446
+ 884,
447
+ 294
448
+ ],
449
+ "page_idx": 2
450
+ },
451
+ {
452
+ "type": "list",
453
+ "sub_type": "text",
454
+ "list_items": [
455
+ "a. Syntactic tasks: Part-of-speech (POS) tagging, Named Entity Recognition (NER), Dependency Parsing",
456
+ "b. Semantic tasks: Natural Language Inference (NLI), Paraphrasing"
457
+ ],
458
+ "bbox": [
459
+ 507,
460
+ 297,
461
+ 882,
462
+ 376
463
+ ],
464
+ "page_idx": 2
465
+ },
466
+ {
467
+ "type": "text",
468
+ "text": "Factors considered in task and language selection include (1) availability of monolingual BERT-base model on HuggingFace Hub (Wolf et al., 2019), (2) availability of sufficient monolingual training data across different tasks, and (3) typological diversity. We aim to cover as many languages and language families as possible. Overall, we cover 22 languages distributed across 9 language families and one language isolate. We present an outline of the languages in Appendix A due to space constraints.",
469
+ "bbox": [
470
+ 507,
471
+ 378,
472
+ 882,
473
+ 538
474
+ ],
475
+ "page_idx": 2
476
+ },
477
+ {
478
+ "type": "text",
479
+ "text": "5.2 Datasets",
480
+ "text_level": 1,
481
+ "bbox": [
482
+ 507,
483
+ 552,
484
+ 623,
485
+ 567
486
+ ],
487
+ "page_idx": 2
488
+ },
489
+ {
490
+ "type": "text",
491
+ "text": "Our study includes tasks from the XTREME benchmark (Hu et al., 2020), covering natural language inference (XNLI) (Conneau et al., 2018), paraphrasing (PAWS-X) (Yang et al., 2019a), and structure prediction tasks such as POS tagging and NER. We use the data from the Universal Dependencies v2.12 (Zeman et al., 2023) for the task of dependency parsing. The treebanks used for different languages are listed in Table 4 in Appendix.",
492
+ "bbox": [
493
+ 507,
494
+ 576,
495
+ 884,
496
+ 722
497
+ ],
498
+ "page_idx": 2
499
+ },
500
+ {
501
+ "type": "text",
502
+ "text": "5.3 Model Selection",
503
+ "text_level": 1,
504
+ "bbox": [
505
+ 507,
506
+ 737,
507
+ 680,
508
+ 751
509
+ ],
510
+ "page_idx": 2
511
+ },
512
+ {
513
+ "type": "text",
514
+ "text": "In our research, we use monolingual pre-trained language models to prevent cross-lingual transfer from influencing our results. Given the availability of monolingual BERT models in various languages, we select BERT as the example PLM for our study. We consider BERT-base model for all languages to ensure that variations in model size and architecture do not influence the results. We consider fine-tuned BERT-base models with PE and without PE as the baseline and perturbed models, respectively.",
515
+ "bbox": [
516
+ 507,
517
+ 760,
518
+ 882,
519
+ 921
520
+ ],
521
+ "page_idx": 2
522
+ },
523
+ {
524
+ "type": "page_number",
525
+ "text": "21037",
526
+ "bbox": [
527
+ 475,
528
+ 927,
529
+ 524,
530
+ 940
531
+ ],
532
+ "page_idx": 2
533
+ },
534
+ {
535
+ "type": "image",
536
+ "img_path": "images/3286884868233c9e5d6aca598a13c3d710cdb94de44a41011968322fa6afc83d.jpg",
537
+ "image_caption": [
538
+ "Figure 2: Effect of Positional Encoding on NER task."
539
+ ],
540
+ "image_footnote": [],
541
+ "bbox": [
542
+ 117,
543
+ 80,
544
+ 487,
545
+ 236
546
+ ],
547
+ "page_idx": 3
548
+ },
549
+ {
550
+ "type": "image",
551
+ "img_path": "images/b9ece6e4122ab7ecddbf6386f12ec1f002d7b9763cb0caa7f2e6ab8da1056f60.jpg",
552
+ "image_caption": [
553
+ "Figure 3: Effect of Positional Encoding on POS task."
554
+ ],
555
+ "image_footnote": [],
556
+ "bbox": [
557
+ 117,
558
+ 274,
559
+ 485,
560
+ 425
561
+ ],
562
+ "page_idx": 3
563
+ },
564
+ {
565
+ "type": "text",
566
+ "text": "5.4 Evaluation Metrics",
567
+ "text_level": 1,
568
+ "bbox": [
569
+ 112,
570
+ 476,
571
+ 302,
572
+ 489
573
+ ],
574
+ "page_idx": 3
575
+ },
576
+ {
577
+ "type": "text",
578
+ "text": "The metric used for different tasks is outlined in Table 1. For a given task, let $m$ and $n$ denote the metric scores for the baseline and perturbed models, respectively. We use the relative decrease in performance, calculated as $(m - n) / m$ , as a quantitative measure of the importance of PE on the language. A higher value indicates a greater utilization of PE in effectively modeling the language.",
579
+ "bbox": [
580
+ 112,
581
+ 497,
582
+ 489,
583
+ 626
584
+ ],
585
+ "page_idx": 3
586
+ },
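The relative-decrease measure is a one-line computation; the scores below are hypothetical and only illustrate how the quantity is formed.

```python
def relative_decrease(m, n):
    """(m - n) / m, where m is the score with PE and n the score without PE."""
    return (m - n) / m

# Hypothetical example: 0.90 F1 with PE and 0.72 F1 without PE
# gives a relative decrease of 0.20, i.e. a 20% drop attributable to removing PE.
print(relative_decrease(0.90, 0.72))
```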
587
+ {
588
+ "type": "text",
589
+ "text": "5.5 Training and evaluation setup",
590
+ "text_level": 1,
591
+ "bbox": [
592
+ 112,
593
+ 637,
594
+ 394,
595
+ 653
596
+ ],
597
+ "page_idx": 3
598
+ },
599
+ {
600
+ "type": "text",
601
+ "text": "For text classification tasks, we follow the generic pipeline. For dependency parsing, we implemented a biaffine parser by applying a biaffine attention layer directly on the output of BERT as described in Glavaš and Vulić (2021). As suggested in the XTREME benchmark, we have performed hyperparameter tuning on English validation data. However, since our goal is not to achieve the best absolute performance, we avoided conducting extensive hyperparameter tuning. More details are present in the Appendix E. Results are reported across 3 random trials of each experiment.",
602
+ "bbox": [
603
+ 112,
604
+ 658,
605
+ 489,
606
+ 851
607
+ ],
608
+ "page_idx": 3
609
+ },
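A compact sketch of a biaffine arc scorer applied directly to BERT token outputs, in the spirit of the parser referenced above, is shown below; dimensions, initialization, and names are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch (illustrative, not the authors' exact parser): biaffine arc scoring on BERT outputs.
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    def __init__(self, hidden_size=768, arc_dim=256):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(hidden_size, arc_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden_size, arc_dim), nn.ReLU())
        self.weight = nn.Parameter(torch.zeros(arc_dim + 1, arc_dim))  # +1 for a bias column

    def forward(self, bert_hidden):                    # (batch, seq_len, hidden_size)
        heads = self.head_mlp(bert_hidden)             # candidate-head representations
        deps = self.dep_mlp(bert_hidden)               # dependent representations
        deps = torch.cat([deps, torch.ones_like(deps[..., :1])], dim=-1)
        # scores[b, i, j]: score of token j being the syntactic head of token i
        return deps @ self.weight @ heads.transpose(1, 2)

arc_scores = BiaffineArcScorer()(torch.randn(2, 10, 768))  # -> shape (2, 10, 10)
```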
610
+ {
611
+ "type": "text",
612
+ "text": "6 Results",
613
+ "text_level": 1,
614
+ "bbox": [
615
+ 112,
616
+ 863,
617
+ 213,
618
+ 878
619
+ ],
620
+ "page_idx": 3
621
+ },
622
+ {
623
+ "type": "text",
624
+ "text": "In this section, we present the findings of our experiments on syntactic and semantic tasks. We also",
625
+ "bbox": [
626
+ 112,
627
+ 889,
628
+ 489,
629
+ 921
630
+ ],
631
+ "page_idx": 3
632
+ },
633
+ {
634
+ "type": "image",
635
+ "img_path": "images/d089e99956e7d0e2d3c596ba1bc0d597908d90f86642218a76d917c334d619c5.jpg",
636
+ "image_caption": [
637
+ "Figure 4: Effect of Positional Encoding on Dependency Parsing."
638
+ ],
639
+ "image_footnote": [],
640
+ "bbox": [
641
+ 512,
642
+ 82,
643
+ 884,
644
+ 237
645
+ ],
646
+ "page_idx": 3
647
+ },
648
+ {
649
+ "type": "image",
650
+ "img_path": "images/000fc06a6dae739b4f7c850bf69b88a759c9ec446e447520573643f29e84a46d.jpg",
651
+ "image_caption": [
652
+ "Figure 5: Effect of Positional Encoding on XNLI."
653
+ ],
654
+ "image_footnote": [],
655
+ "bbox": [
656
+ 512,
657
+ 290,
658
+ 884,
659
+ 447
660
+ ],
661
+ "page_idx": 3
662
+ },
663
+ {
664
+ "type": "text",
665
+ "text": "conducted preliminary experiments on the GLUE benchmark, as discussed in Appendix D.",
666
+ "bbox": [
667
+ 507,
668
+ 498,
669
+ 882,
670
+ 531
671
+ ],
672
+ "page_idx": 3
673
+ },
674
+ {
675
+ "type": "text",
676
+ "text": "6.1 Results on Syntactic Tasks",
677
+ "text_level": 1,
678
+ "bbox": [
679
+ 507,
680
+ 544,
681
+ 764,
682
+ 558
683
+ ],
684
+ "page_idx": 3
685
+ },
686
+ {
687
+ "type": "text",
688
+ "text": "Figures 2, 3 demonstrate the effect of removing positional encoding in NER, POS tagging tasks. For dependency parsing, figures 4 and 7 depict the effects on UAS and LAS scores, respectively.",
689
+ "bbox": [
690
+ 507,
691
+ 565,
692
+ 882,
693
+ 630
694
+ ],
695
+ "page_idx": 3
696
+ },
697
+ {
698
+ "type": "list",
699
+ "sub_type": "text",
700
+ "list_items": [
701
+ "a. Analytic languages like Chinese and Vietnamese, characterized by minimal or no morphology, exhibit the most significant decrease in performance when PE is removed. Moderately analytic languages like English and French follow.",
702
+ "b. In synthetic languages such as Hungarian, Finnish, and Turkish, known for their rich morphological systems, the function of morphology in encoding grammatical roles surpasses that of word order, resulting in a considerably smaller decrease in performance when PE is eliminated.",
703
+ "In the XTREME benchmark, different methods were employed for data annotation for POS tagging and NER datasets. While the former was human annotated, the latter was created through automatic annotation using weak supervision. Despite these disparities, the findings of the POS tagging and NER experiments are similar."
704
+ ],
705
+ "bbox": [
706
+ 507,
707
+ 631,
708
+ 882,
709
+ 920
710
+ ],
711
+ "page_idx": 3
712
+ },
713
+ {
714
+ "type": "page_number",
715
+ "text": "21038",
716
+ "bbox": [
717
+ 475,
718
+ 927,
719
+ 524,
720
+ 940
721
+ ],
722
+ "page_idx": 3
723
+ },
724
+ {
725
+ "type": "image",
726
+ "img_path": "images/232ff9f2beb3e622a8ab9e1b16617ceea13a198cf5b185568e60af7038c229a4.jpg",
727
+ "image_caption": [
728
+ "Figure 6: Effect of Positional Encoding on PAWS-X."
729
+ ],
730
+ "image_footnote": [],
731
+ "bbox": [
732
+ 117,
733
+ 80,
734
+ 489,
735
+ 239
736
+ ],
737
+ "page_idx": 4
738
+ },
739
+ {
740
+ "type": "text",
741
+ "text": "6.2 Results on Semantic Tasks",
742
+ "text_level": 1,
743
+ "bbox": [
744
+ 112,
745
+ 288,
746
+ 369,
747
+ 304
748
+ ],
749
+ "page_idx": 4
750
+ },
751
+ {
752
+ "type": "text",
753
+ "text": "Natural language inference and paraphrasing tasks primarily involve understanding the semantic relationships and meaning between sentences or phrases. Syntax, including word order and grammatical rules, can influence the overall coherence and clarity of the expressions, but it is not the primary focus of these tasks.",
754
+ "bbox": [
755
+ 112,
756
+ 309,
757
+ 489,
758
+ 420
759
+ ],
760
+ "page_idx": 4
761
+ },
762
+ {
763
+ "type": "text",
764
+ "text": "The results depicted in Figures 5 and 6 illustrate the impact of nullifying PE in tasks related to natural language inference and paraphrasing. We notice a consistent pattern emerge in the graphs where morphologically poor languages are notably affected by the absence of positional encoding, while the impact is comparatively less for morphologically rich languages. However, in contrast to syntactic tasks, the variability in impact across different languages is less pronounced for semantic tasks.",
765
+ "bbox": [
766
+ 112,
767
+ 420,
768
+ 489,
769
+ 583
770
+ ],
771
+ "page_idx": 4
772
+ },
773
+ {
774
+ "type": "table",
775
+ "img_path": "images/50b74d08d9372c2df387ada0fda447249772513c7d74b663cd6f541f5b627475.jpg",
776
+ "table_caption": [],
777
+ "table_footnote": [],
778
+ "table_body": "<table><tr><td>Task (Metric)</td><td>Correlation</td></tr><tr><td>NER (F1)</td><td>-0.742</td></tr><tr><td>POS (F1)</td><td>-0.693</td></tr><tr><td>Dependency Parsing (UAS)</td><td>-0.882</td></tr><tr><td>Dependency Parsing (LAS)</td><td>-0.873</td></tr><tr><td>XNLI (Accuracy)</td><td>-0.773</td></tr><tr><td>PAWS-X (Accuracy)</td><td>-0.486</td></tr></table>",
779
+ "bbox": [
780
+ 136,
781
+ 593,
782
+ 467,
783
+ 725
784
+ ],
785
+ "page_idx": 4
786
+ },
787
+ {
788
+ "type": "text",
789
+ "text": "Table 1: Spearman correlation coefficient between morphological complexity of a language and relative decrease in performance across different tasks",
790
+ "bbox": [
791
+ 112,
792
+ 733,
793
+ 489,
794
+ 778
795
+ ],
796
+ "page_idx": 4
797
+ },
798
+ {
799
+ "type": "text",
800
+ "text": "In Table 1, we report the statistical correlation between the morphological complexity of a language and the relative decrease in performance across tasks, as determined by the Spearman Correlation Coefficient. A strong negative correlation is observed, indicating that higher morphological complexity is associated with a lower relative decrease in performance.",
801
+ "bbox": [
802
+ 112,
803
+ 791,
804
+ 489,
805
+ 921
806
+ ],
807
+ "page_idx": 4
808
+ },
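The correlations in Table 1 can be reproduced with `scipy.stats.spearmanr`; the sketch below pairs a few TTR values from Table 3 with hypothetical relative-decrease numbers purely to show the computation, not the paper's actual per-task data.

```python
# Sketch: Spearman correlation between TTR-based complexity and relative decrease.
from scipy.stats import spearmanr

ttr = [0.170, 0.194, 0.345, 0.428]        # zh, en, hu, fi (TTR values from Table 3)
rel_decrease = [0.31, 0.24, 0.09, 0.06]   # hypothetical per-language drops on one task

rho, p_value = spearmanr(ttr, rel_decrease)
print(rho)  # a strongly negative rho mirrors the trend reported in Table 1
```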
809
+ {
810
+ "type": "text",
811
+ "text": "6.3 Analysis",
812
+ "text_level": 1,
813
+ "bbox": [
814
+ 509,
815
+ 84,
816
+ 623,
817
+ 99
818
+ ],
819
+ "page_idx": 4
820
+ },
821
+ {
822
+ "type": "text",
823
+ "text": "We investigate the impact of removing positional encoding across languages in POS tagging and dependency parsing tasks. Both tasks depend on the interaction between morphology and word order to accurately interpret sentence structure.",
824
+ "bbox": [
825
+ 507,
826
+ 105,
827
+ 884,
828
+ 186
829
+ ],
830
+ "page_idx": 4
831
+ },
832
+ {
833
+ "type": "text",
834
+ "text": "For morphologically poor languages with relatively fixed word order, such as English and French, we observe increased ambiguity in distinguishing part-of-speech tags, particularly between nouns and pronouns, nouns and verbs, and adjectives and adverbs. In morphologically rich languages like Turkish, Finnish, and Arabic, the absence of positional encoding has minimal impact, as inflectional affixes marking features like number, tense, and person aid in correctly identifying the part of speech of a word.",
835
+ "bbox": [
836
+ 507,
837
+ 187,
838
+ 884,
839
+ 362
840
+ ],
841
+ "page_idx": 4
842
+ },
843
+ {
844
+ "type": "text",
845
+ "text": "Removing positional encoding has a more significant impact on dependency parsing than on POS tagging. In morphologically poor, fixed-word-order languages such as Chinese, English, and French, we observe a sharper decline in performance, with an increased tendency to misidentify subjects and objects. Parsers in these languages rely on word order to correctly assign modifiers to their heads, and without positional encoding, capturing head-dependent relationships (e.g., between adjectives or adverbs and their heads) becomes more challenging. In contrast, morphologically rich languages experience a much smaller drop in UAS and LAS scores, as morphological case markers (e.g., nominative, accusative) help identify syntactic roles more effectively.",
846
+ "bbox": [
847
+ 507,
848
+ 365,
849
+ 884,
850
+ 623
851
+ ],
852
+ "page_idx": 4
853
+ },
854
+ {
855
+ "type": "text",
856
+ "text": "7 Conclusion",
857
+ "text_level": 1,
858
+ "bbox": [
859
+ 509,
860
+ 636,
861
+ 640,
862
+ 652
863
+ ],
864
+ "page_idx": 4
865
+ },
866
+ {
867
+ "type": "text",
868
+ "text": "In this work, we demonstrate the interplay between positional encoding and morphology for morphologically diverse languages. We present the first study regarding the varying impact of positional encoding across languages with varying morphological complexity. We cover 22 different languages across 9 language families and 5 diverse natural language processing tasks for our investigation. Our results reveal that the importance of positional encoding diminishes as the morphological complexity of a language increases. Our study also emphasizes the need for a deeper understanding of positional encoding, augmenting them to better reflect the different languages under consideration.",
869
+ "bbox": [
870
+ 507,
871
+ 664,
872
+ 882,
873
+ 904
874
+ ],
875
+ "page_idx": 4
876
+ },
877
+ {
878
+ "type": "page_number",
879
+ "text": "21039",
880
+ "bbox": [
881
+ 475,
882
+ 927,
883
+ 524,
884
+ 940
885
+ ],
886
+ "page_idx": 4
887
+ },
888
+ {
889
+ "type": "text",
890
+ "text": "Acknowledgements",
891
+ "text_level": 1,
892
+ "bbox": [
893
+ 114,
894
+ 84,
895
+ 285,
896
+ 99
897
+ ],
898
+ "page_idx": 5
899
+ },
900
+ {
901
+ "type": "text",
902
+ "text": "We would like to thank our anonymous reviewers as well as the ARR, EMNLP action editors. Their insightful comments helped us improve the current version of the paper. The first author is grateful for the scholarship under the Prime Minister's Research Fellowship category at the Indian Institute of Technology Bombay.",
903
+ "bbox": [
904
+ 112,
905
+ 109,
906
+ 489,
907
+ 221
908
+ ],
909
+ "page_idx": 5
910
+ },
911
+ {
912
+ "type": "text",
913
+ "text": "Ethics Statement",
914
+ "text_level": 1,
915
+ "bbox": [
916
+ 114,
917
+ 233,
918
+ 265,
919
+ 248
920
+ ],
921
+ "page_idx": 5
922
+ },
923
+ {
924
+ "type": "text",
925
+ "text": "All the experiments are conducted on openly available datasets and benchmarks with no ethical consideration.",
926
+ "bbox": [
927
+ 112,
928
+ 259,
929
+ 489,
930
+ 306
931
+ ],
932
+ "page_idx": 5
933
+ },
934
+ {
935
+ "type": "text",
936
+ "text": "Limitations",
937
+ "text_level": 1,
938
+ "bbox": [
939
+ 114,
940
+ 319,
941
+ 218,
942
+ 335
943
+ ],
944
+ "page_idx": 5
945
+ },
946
+ {
947
+ "type": "text",
948
+ "text": "Our goal was to incorporate multiple languages to ensure the generalizability of our findings. However, the process of training language models from scratch (without positional encoding) for a large number of languages requires significant computational power and financial resources. We agree that pre-training without positional encoding would be a more holistic approach. However, due to limited computational resources, our primary focus was directed towards fine-tuning. However, we firmly believe that removing positional encoding in the pre-training phase would yield more pronounced results.",
949
+ "bbox": [
950
+ 112,
951
+ 344,
952
+ 489,
953
+ 552
954
+ ],
955
+ "page_idx": 5
956
+ },
957
+ {
958
+ "type": "text",
959
+ "text": "References",
960
+ "text_level": 1,
961
+ "bbox": [
962
+ 114,
963
+ 581,
964
+ 213,
965
+ 596
966
+ ],
967
+ "page_idx": 5
968
+ },
969
+ {
970
+ "type": "list",
971
+ "sub_type": "ref_text",
972
+ "list_items": [
973
+ "Barry J Blake. 2001. Case. Cambridge University Press.",
974
+ "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.",
975
+ "Yung-Sung Chuang. 2019. Robust Chinese word segmentation with contextualized word representations. arXiv preprint arXiv:1901.05816.",
976
+ "Çağrı Çoltekin and Taraka Rama. 2023. What do complexity measures measure? correlating and validating corpus-based measures of morphological complexity. Linguistics Vanguard, 9(s1):27-43.",
977
+ "Bernard Comrie. 1989. Language universals and linguistic typology: Syntax and morphology. University of Chicago press.",
978
+ "Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk,"
979
+ ],
980
+ "bbox": [
981
+ 114,
982
+ 604,
983
+ 487,
984
+ 920
985
+ ],
986
+ "page_idx": 5
987
+ },
988
+ {
989
+ "type": "list",
990
+ "sub_type": "ref_text",
991
+ "list_items": [
992
+ "and Veselin Stoyanov. 2018. Xnli: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.",
993
+ "Marta R Costa-jussa, James Cross, Onur Celebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672.",
994
+ "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.",
995
+ "Philipp Dufter, Martin Schmitt, and Hinrich Schütze. 2022. Position information in transformers: An overview. Computational Linguistics, 48(3):733-763.",
996
+ "Daniel Edmiston. 2020. A systematic analysis of morphological content in bert models for multiple languages. arXiv preprint arXiv:2004.03032.",
997
+ "Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Hong Kong, China. Association for Computational Linguistics.",
998
+ "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In International conference on machine learning, pages 1243-1252. PMLR.",
999
+ "Daniela Gerz, Ivan Vulic, Edoardo Maria Ponti, Roi Reichart, and Anna Korhonen. 2018. On the relation between linguistic typology and (limitations of) multilingual language modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 316-327, Brussels, Belgium. Association for Computational Linguistics.",
1000
+ "Goran Glavaš and Ivan Vulić. 2021. Is supervised syntactic parsing beneficial for language understanding tasks? an empirical investigation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3090-3104, Online. Association for Computational Linguistics.",
1001
+ "Yoav Goldberg. 2019. Assessing bert's syntactic abilities. arXiv preprint arXiv:1901.05287.",
1002
+ "Martin Haspelmath. 1999. Optimality and diachronic adaptation. Zeitschrift für Sprachwissenschaft, 18(2):180-205.",
1003
+ "Martin Haspelmath and Andrea Sims. 2013. Understanding morphology. Routledge."
1004
+ ],
1005
+ "bbox": [
1006
+ 510,
1007
+ 85,
1008
+ 882,
1009
+ 920
1010
+ ],
1011
+ "page_idx": 5
1012
+ },
1013
+ {
1014
+ "type": "page_number",
1015
+ "text": "21040",
1016
+ "bbox": [
1017
+ 475,
1018
+ 927,
1019
+ 524,
1020
+ 940
1021
+ ],
1022
+ "page_idx": 5
1023
+ },
1024
+ {
1025
+ "type": "list",
1026
+ "sub_type": "ref_text",
1027
+ "list_items": [
1028
+ "Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, and Omer Levy. 2022. Transformer language models without positional encodings still learn positional information. In *Findings of the Association for Computational Linguistics: EMNLP* 2022, pages 1382-1390, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.",
1029
+ "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654.",
1030
+ "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan First, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. CoRR, abs/2003.11080.",
1031
+ "Zhiheng Huang, Davis Liang, Peng Xu, and Bing Xi-ang. 2020. Improve transformer models with better relative position embeddings. arXiv preprint arXiv:2009.13658.",
1032
+ "Ganesh Jawahar, Benoit Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.",
1033
+ "Sai Muralidhar Jayanthi and Adithya Pratapa. 2021. A study of morphological robustness of neural machine translation. In Proceedings of the 18th SIGMOR-PHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 49-59, Online. Association for Computational Linguistics.",
1034
+ "Kimmo Kettunen. 2014. Can type-token ratio be used to show morphological complexity of languages? Journal of Quantitative Linguistics, 21(3):223-245.",
1035
+ "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. ArXiv, abs/1909.11942.",
1036
+ "Tven Le Scao, Thomas Wang, Daniel Hesslow, Stas Bekman, M Saiful Bari, Stella Biderman, Hady Elsahar, Niklas Muennighoff, Jason Phang, Ofir Press, Colin Raffel, Victor Sanh, Sheng Shen, Lintang Sutawika, Jaesung Tae, Zheng Xin Yong, Julien Lauy, and Iz Beltagy. 2022. What language model to train if you have one million GPU hours? In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 765-782, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.",
1037
+ "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692."
1038
+ ],
1039
+ "bbox": [
1040
+ 115,
1041
+ 85,
1042
+ 489,
1043
+ 920
1044
+ ],
1045
+ "page_idx": 6
1046
+ },
1047
+ {
1048
+ "type": "list",
1049
+ "sub_type": "ref_text",
1050
+ "list_items": [
1051
+ "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.",
1052
+ "Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric de la Clergerie, Djamé Seddah, and Benoit Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203-7219, Online. Association for Computational Linguistics.",
1053
+ "Gereon Müller. 2002. Free word order, morphological case, and sympathy theory. *Resolving Conflicts in Grammars: Optimality Theory in Syntax*, Morphology, and Phonology. Buske, Hamburg, pages 9-48.",
1054
+ "Antoine Nzeyimana and Andre Niyongabo Rubungo. 2022. Kinyabert: a morphology-aware kinyarwanda language model. arXiv preprint arXiv:2203.08459.",
1055
+ "Yulia Otmakhova, Karin Verspoor, and Joy Han Lau. 2022. Cross-linguistic comparison of linguistic feature encoding in BERT models for typologically different languages. In Proceedings of the 4th Workshop on Research in Computational Linguistic Typology and Multilingual NLP, pages 27-35, Seattle, Washington. Association for Computational Linguistics.",
1056
+ "Hyunjhi Hayley Park, Katherine J Zhang, Coleman Haley, Kenneth Steimel, Han Liu, and Lane Schwartz. 2021. Morphology matters: a multilingual language modeling analysis. Transactions of the Association for Computational Linguistics, 9:261-276.",
1057
+ "Ofir Press, Noah A Smith, and Mike Lewis. 2021. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409.",
1058
+ "Giovanni Puccetti, Alessio Miaschi, and Felice Dell'Orletta. 2021. How do BERT embeddings organize linguistic knowledge? In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 48-57, Online. Association for Computational Linguistics.",
1059
+ "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.",
1060
+ "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020a. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551.",
1061
+ "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020b. Exploring the limits"
1062
+ ],
1063
+ "bbox": [
1064
+ 510,
1065
+ 85,
1066
+ 882,
1067
+ 920
1068
+ ],
1069
+ "page_idx": 6
1070
+ },
1071
+ {
1072
+ "type": "page_number",
1073
+ "text": "21041",
1074
+ "bbox": [
1075
+ 475,
1076
+ 928,
1077
+ 522,
1078
+ 940
1079
+ ],
1080
+ "page_idx": 6
1081
+ },
1082
+ {
1083
+ "type": "text",
1084
+ "text": "of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(1).",
1085
+ "bbox": [
1086
+ 132,
1087
+ 85,
1088
+ 489,
1089
+ 112
1090
+ ],
1091
+ "page_idx": 7
1092
+ },
1093
+ {
1094
+ "type": "text",
1095
+ "text": "Edward Sapir. 1921. Language, an Introduction to the Study of Speech, by Edward Sapir. New York:: Harcourt, Brace.",
1096
+ "bbox": [
1097
+ 114,
1098
+ 123,
1099
+ 487,
1100
+ 162
1101
+ ],
1102
+ "page_idx": 7
1103
+ },
1104
+ {
1105
+ "type": "text",
1106
+ "text": "Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155.",
1107
+ "bbox": [
1108
+ 114,
1109
+ 173,
1110
+ 489,
1111
+ 214
1112
+ ],
1113
+ "page_idx": 7
1114
+ },
1115
+ {
1116
+ "type": "text",
1117
+ "text": "Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021. Masked language modeling and the distributional hypothesis: Order word matters pre-training for little. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2888-2913, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
1118
+ "bbox": [
1119
+ 114,
1120
+ 225,
1121
+ 489,
1122
+ 331
1123
+ ],
1124
+ "page_idx": 7
1125
+ },
1126
+ {
1127
+ "type": "text",
1128
+ "text": "Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. 2021. Roformer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864.",
1129
+ "bbox": [
1130
+ 114,
1131
+ 341,
1132
+ 489,
1133
+ 394
1134
+ ],
1135
+ "page_idx": 7
1136
+ },
1137
+ {
1138
+ "type": "text",
1139
+ "text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Computational Linguistics.",
1140
+ "bbox": [
1141
+ 114,
1142
+ 404,
1143
+ 489,
1144
+ 483
1145
+ ],
1146
+ "page_idx": 7
1147
+ },
1148
+ {
1149
+ "type": "text",
1150
+ "text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30.",
1151
+ "bbox": [
1152
+ 114,
1153
+ 494,
1154
+ 487,
1155
+ 560
1156
+ ],
1157
+ "page_idx": 7
1158
+ },
1159
+ {
1160
+ "type": "text",
1161
+ "text": "Benyou Wang, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, and Jakob Grue Simonsen. 2020. On position embeddings in bert. In International Conference on Learning Representations.",
1162
+ "bbox": [
1163
+ 114,
1164
+ 571,
1165
+ 489,
1166
+ 625
1167
+ ],
1168
+ "page_idx": 7
1169
+ },
1170
+ {
1171
+ "type": "text",
1172
+ "text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.",
1173
+ "bbox": [
1174
+ 114,
1175
+ 634,
1176
+ 489,
1177
+ 713
1178
+ ],
1179
+ "page_idx": 7
1180
+ },
1181
+ {
1182
+ "type": "text",
1183
+ "text": "Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019a. PAWS-X: A cross-lingual adversarial dataset for paraphrase identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3687-3692, Hong Kong, China. Association for Computational Linguistics.",
1184
+ "bbox": [
1185
+ 114,
1186
+ 725,
1187
+ 489,
1188
+ 843
1189
+ ],
1190
+ "page_idx": 7
1191
+ },
1192
+ {
1193
+ "type": "text",
1194
+ "text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019b. Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32.",
1195
+ "bbox": [
1196
+ 114,
1197
+ 853,
1198
+ 489,
1199
+ 920
1200
+ ],
1201
+ "page_idx": 7
1202
+ },
1203
+ {
1204
+ "type": "text",
1205
+ "text": "Daniel Zeman et al. 2023. Universal dependencies 2.12. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (UFL), Faculty of Mathematics and Physics, Charles University.",
1206
+ "bbox": [
1207
+ 509,
1208
+ 84,
1209
+ 884,
1210
+ 141
1211
+ ],
1212
+ "page_idx": 7
1213
+ },
1214
+ {
1215
+ "type": "text",
1216
+ "text": "A Details of Languages",
1217
+ "text_level": 1,
1218
+ "bbox": [
1219
+ 509,
1220
+ 154,
1221
+ 727,
1222
+ 171
1223
+ ],
1224
+ "page_idx": 7
1225
+ },
1226
+ {
1227
+ "type": "text",
1228
+ "text": "We provide an overview of the languages included in our study in Table 2. Additionally, Table 4 presents the details of the treebanks used in the dependency parsing experiments.",
1229
+ "bbox": [
1230
+ 507,
1231
+ 180,
1232
+ 880,
1233
+ 244
1234
+ ],
1235
+ "page_idx": 7
1236
+ },
1237
+ {
1238
+ "type": "text",
1239
+ "text": "B TTR-based Morphological Complexity",
1240
+ "text_level": 1,
1241
+ "bbox": [
1242
+ 509,
1243
+ 256,
1244
+ 880,
1245
+ 274
1246
+ ],
1247
+ "page_idx": 7
1248
+ },
1249
+ {
1250
+ "type": "text",
1251
+ "text": "The TTR-based morphological complexity of different languages are listed in Table 3. For space-delimited languages, we use the tokenizer from NLTK library for word segmentation.",
1252
+ "bbox": [
1253
+ 507,
1254
+ 284,
1255
+ 884,
1256
+ 348
1257
+ ],
1258
+ "page_idx": 7
1259
+ },
1260
+ {
1261
+ "type": "text",
1262
+ "text": "C Additional Results",
1263
+ "text_level": 1,
1264
+ "bbox": [
1265
+ 509,
1266
+ 361,
1267
+ 709,
1268
+ 376
1269
+ ],
1270
+ "page_idx": 7
1271
+ },
1272
+ {
1273
+ "type": "text",
1274
+ "text": "The effect of removing positional encoding in dependency parsing is examined by analyzing the relative decrease in UAS (Figure 4) and LAS scores (Figure 7).",
1275
+ "bbox": [
1276
+ 507,
1277
+ 387,
1278
+ 884,
1279
+ 451
1280
+ ],
1281
+ "page_idx": 7
1282
+ },
1283
+ {
1284
+ "type": "image",
1285
+ "img_path": "images/874c485c5e21e80640971418b884ad7b4c264f7ed99b6667084907b56f94f24a.jpg",
1286
+ "image_caption": [
1287
+ "Figure 7: Effect of Positional Encoding on Dependency Parsing (LAS)."
1288
+ ],
1289
+ "image_footnote": [],
1290
+ "bbox": [
1291
+ 510,
1292
+ 462,
1293
+ 885,
1294
+ 615
1295
+ ],
1296
+ "page_idx": 7
1297
+ },
1298
+ {
1299
+ "type": "text",
1300
+ "text": "D Results on GLUE tasks",
1301
+ "text_level": 1,
1302
+ "bbox": [
1303
+ 509,
1304
+ 684,
1305
+ 749,
1306
+ 699
1307
+ ],
1308
+ "page_idx": 7
1309
+ },
1310
+ {
1311
+ "type": "text",
1312
+ "text": "In this section, we discuss the preliminary experiments conducted on the GLUE benchmark.",
1313
+ "bbox": [
1314
+ 507,
1315
+ 711,
1316
+ 882,
1317
+ 741
1318
+ ],
1319
+ "page_idx": 7
1320
+ },
1321
+ {
1322
+ "type": "text",
1323
+ "text": "D.1 Impact of positional encoding",
1324
+ "text_level": 1,
1325
+ "bbox": [
1326
+ 507,
1327
+ 755,
1328
+ 793,
1329
+ 771
1330
+ ],
1331
+ "page_idx": 7
1332
+ },
1333
+ {
1334
+ "type": "text",
1335
+ "text": "Removing positional encoding leads to a varied decrease in performance across different tasks, as evident in Table 5.",
1336
+ "bbox": [
1337
+ 507,
1338
+ 776,
1339
+ 880,
1340
+ 822
1341
+ ],
1342
+ "page_idx": 7
1343
+ },
1344
+ {
1345
+ "type": "text",
1346
+ "text": "Sentence/Grammatical acceptability tasks: Positional encoding helps the model understand the hierarchical structure and dependencies between words, which is essential for determining the grammaticality of a sentence. As a result, in case of CoLA task, when positional encoding is removed,",
1347
+ "bbox": [
1348
+ 507,
1349
+ 825,
1350
+ 882,
1351
+ 920
1352
+ ],
1353
+ "page_idx": 7
1354
+ },
1355
+ {
1356
+ "type": "page_number",
1357
+ "text": "21042",
1358
+ "bbox": [
1359
+ 475,
1360
+ 927,
1361
+ 524,
1362
+ 940
1363
+ ],
1364
+ "page_idx": 7
1365
+ },
1366
+ {
1367
+ "type": "table",
1368
+ "img_path": "images/8554e5a328327ac43e4aec2cb282ad94bcb23d0de11407b0a7b4c1ad376ee6a4.jpg",
1369
+ "table_caption": [],
1370
+ "table_footnote": [],
1371
+ "table_body": "<table><tr><td>Language (ISO code)</td><td>Language family</td><td>Hugging Face Model id</td></tr><tr><td>Arabic (ar)</td><td>Afro-Asiatic</td><td>aubmindlab/bert-base-arabertv02</td></tr><tr><td>Basque (eu)</td><td>Basque</td><td>orai-nlp/ElhBERTeu</td></tr><tr><td>Bengali (bn)</td><td>Indo-European: Indo-Aryan</td><td>sagorsarker/bangla-bert-base</td></tr><tr><td>Bulgarian (bg)</td><td>Indo-European: Slavic</td><td>usmiva/bert-web-bg</td></tr><tr><td>Chinese (zh)</td><td>Sino-Tibetan</td><td>bert-base-chinese</td></tr><tr><td>Dutch (nl)</td><td>Indo-European: Germanic</td><td>GroNLP/bert-base-dutch-cased</td></tr><tr><td>English (en)</td><td>Indo-European: Germanic</td><td>bert-base-cased</td></tr><tr><td>Finnish (fi)</td><td>Uralic</td><td>TurkuNLP/bert-base-finnish-cased-v1</td></tr><tr><td>French (fr)</td><td>Indo-European: Romance</td><td>dbmdz/bert-base-french-europeana-cased</td></tr><tr><td>German (de)</td><td>Indo-European: Germanic</td><td>dbmdz/bert-base-german-cased</td></tr><tr><td>Greek (el)</td><td>Indo-European: Greek</td><td>nlpaeb/bert-base-greek-uncased-v1</td></tr><tr><td>Hebrew (he)</td><td>Afro-Asiatic</td><td>onlplab/alephbert-base</td></tr><tr><td>Hungarian (hu)</td><td>Uralic</td><td>SZTAKI-HLT/hubert-base-cc</td></tr><tr><td>Indonesian (id)</td><td>Austronesian</td><td>indolem/indobert-base-uncased</td></tr><tr><td>Italian (it)</td><td>Indo-European: Romance</td><td>dbmdz/bert-base-italian-cased</td></tr><tr><td>Korean (ko)</td><td>Koreanic</td><td>kykim/bert-kor-base</td></tr><tr><td>Portuguese (pt)</td><td>Indo-European: Romance</td><td>neuralmind/bert-base-portuguese-cased</td></tr><tr><td>Russian (ru)</td><td>Indo-European: Slavic</td><td>DeepPavlov/rubert-base-cased</td></tr><tr><td>Spanish (es)</td><td>Indo-European: Romance</td><td>dccuchile/bert-base-spanish-wwm-cased</td></tr><tr><td>Swahili (sw)</td><td>Niger-Congo</td><td>flax-community/bert-base-uncased-swahili</td></tr><tr><td>Turkish (tr)</td><td>Turkic</td><td>dbmdz/bert-base-turkish-cased</td></tr><tr><td>Vietnamese (vi)</td><td>Austro-Asiatic</td><td>trituenhantaoio/bert-base-vietnamese-uncased</td></tr></table>",
1372
+ "bbox": [
1373
+ 115,
1374
+ 83,
1375
+ 868,
1376
+ 468
1377
+ ],
1378
+ "page_idx": 8
1379
+ },
1380
+ {
1381
+ "type": "table",
1382
+ "img_path": "images/83e59f8b165d157eee3979435265593cc2957d89782637ecb276ec0b0aa98b64.jpg",
1383
+ "table_caption": [
1384
+ "Table 2: Overview of different languages"
1385
+ ],
1386
+ "table_footnote": [],
1387
+ "table_body": "<table><tr><td>Language (ISO code)</td><td>FLORES-200 code</td><td>TTR</td></tr><tr><td>Arabic (ar)</td><td>arb_Arab</td><td>0.359</td></tr><tr><td>Basque (eu)</td><td>eus_Latn</td><td>0.324</td></tr><tr><td>Bengali (bn)</td><td>ben_Beng</td><td>0.292</td></tr><tr><td>Bulgarian (bg)</td><td>bul_Cyrl</td><td>0.268</td></tr><tr><td>Chinese (zh)</td><td>zho_Hans</td><td>0.17</td></tr><tr><td>Dutch (nl)</td><td>nld_Latn</td><td>0.207</td></tr><tr><td>English (en)</td><td>eng_Latn</td><td>0.194</td></tr><tr><td>Finnish (fi)</td><td>fin_Latn</td><td>0.428</td></tr><tr><td>French (fr)</td><td>fra_Latn</td><td>0.191</td></tr><tr><td>German (de)</td><td>deu_Latn</td><td>0.244</td></tr><tr><td>Greek (el)</td><td>ell_Grek</td><td>0.253</td></tr><tr><td>Hebrew (he)</td><td>heb_Hebr</td><td>0.364</td></tr><tr><td>Hungarian (hu)</td><td>hun_Latn</td><td>0.345</td></tr><tr><td>Indonesian (id)</td><td>ind_Latn</td><td>0.195</td></tr><tr><td>Italian (it)</td><td>ita_Latn</td><td>0.217</td></tr><tr><td>Korean (ko)</td><td>kor_Hang</td><td>0.465</td></tr><tr><td>Portuguese (pt)</td><td>por_Latn</td><td>0.205</td></tr><tr><td>Russian (ru)</td><td>rus_Cyrl</td><td>0.334</td></tr><tr><td>Spanish (es)</td><td>spa_Latn</td><td>0.192</td></tr><tr><td>Swahili (sw)</td><td>swh_Latn</td><td>0.212</td></tr><tr><td>Turkish (tr)</td><td>tur_Latn</td><td>0.376</td></tr><tr><td>Vietnamese (vi)</td><td>vie_Latn</td><td>0.077</td></tr></table>",
1388
+ "bbox": [
1389
+ 299,
1390
+ 502,
1391
+ 699,
1392
+ 890
1393
+ ],
1394
+ "page_idx": 8
1395
+ },
1396
+ {
1397
+ "type": "text",
1398
+ "text": "Table 3: TTR-based morphological complexity of different languages",
1399
+ "bbox": [
1400
+ 262,
1401
+ 900,
1402
+ 732,
1403
+ 916
1404
+ ],
1405
+ "page_idx": 8
1406
+ },
1407
+ {
1408
+ "type": "page_number",
1409
+ "text": "21043",
1410
+ "bbox": [
1411
+ 475,
1412
+ 927,
1413
+ 524,
1414
+ 940
1415
+ ],
1416
+ "page_idx": 8
1417
+ },
1418
+ {
1419
+ "type": "table",
1420
+ "img_path": "images/54ede952c662a9d15b63f50e4bf303f07699d204ce481cf55a7d09903bdf31de.jpg",
1421
+ "table_caption": [],
1422
+ "table_footnote": [],
1423
+ "table_body": "<table><tr><td>Language</td><td>Treebank</td></tr><tr><td>Chinese (zh)</td><td>UD_Chinese-GSD</td></tr><tr><td>Portuguese (pt)</td><td>UD_Portuguese-Bosque</td></tr><tr><td>Spanish (es)</td><td>UD_Spanish-GSD</td></tr><tr><td>English (en)</td><td>UD_English-GUM</td></tr><tr><td>French (fr)</td><td>UD_French-GSD</td></tr><tr><td>Italian (it)</td><td>UD_Italian-ISDT</td></tr><tr><td>Russian (ru)</td><td>UD_Russian-Taiga</td></tr><tr><td>German (de)</td><td>UD_German-GSD</td></tr><tr><td>Basque (eu)</td><td>UD_Basque-BDT</td></tr><tr><td>Finnish (fi)</td><td>UD_Finnish-FTB</td></tr><tr><td>Turkish (tr)</td><td>UD_Turkish-Penn</td></tr></table>",
1424
+ "bbox": [
1425
+ 339,
1426
+ 103,
1427
+ 657,
1428
+ 313
1429
+ ],
1430
+ "page_idx": 9
1431
+ },
1432
+ {
1433
+ "type": "table",
1434
+ "img_path": "images/aedf78a738df0d5286f6b9aabbeddafa3efe203cb36ade42a4d9f0625b77143c.jpg",
1435
+ "table_caption": [
1436
+ "Table 4: Details of treebanks of different languages"
1437
+ ],
1438
+ "table_footnote": [],
1439
+ "table_body": "<table><tr><td rowspan=\"2\">Task</td><td rowspan=\"2\">Dataset</td><td colspan=\"4\">With Positional Encoding</td><td rowspan=\"2\">Without Positional Encoding</td><td rowspan=\"2\">Relative decrease</td></tr><tr><td>Unigram Permutation</td><td>Bigram Permutation</td><td>Trigram Permutation</td><td>Without Permutation</td></tr><tr><td>Sentence Acceptability</td><td>CoLA</td><td>4.4</td><td>12.3</td><td>16.8</td><td>59.1</td><td>23.8</td><td>59.7</td></tr><tr><td>Sentiment Analysis</td><td>SST-2</td><td>81.6</td><td>86.0</td><td>85.1</td><td>91.8</td><td>86.5</td><td>5.8</td></tr><tr><td rowspan=\"3\">Paraphrasing / Sentence Similarity</td><td>MPRC</td><td>83.5</td><td>84.2</td><td>85.4</td><td>89.8</td><td>84.6</td><td>5.8</td></tr><tr><td>QQP</td><td>75.6</td><td>79.1</td><td>80.8</td><td>87.1</td><td>85.8</td><td>1.5</td></tr><tr><td>STS-B</td><td>85.2</td><td>87.1</td><td>86.6</td><td>89.0</td><td>86.6</td><td>2.7</td></tr><tr><td rowspan=\"4\">Natural Language Inference</td><td>MNLI</td><td>68.3</td><td>74.8</td><td>76.5</td><td>83.6</td><td>79.7</td><td>4.7</td></tr><tr><td>MNLI-MM</td><td>68.7</td><td>74.4</td><td>76.6</td><td>84.0</td><td>79.8</td><td>5.0</td></tr><tr><td>QNLI</td><td>81.3</td><td>85.0</td><td>86.5</td><td>91.0</td><td>87.2</td><td>4.2</td></tr><tr><td>RTE</td><td>58.1</td><td>61.5</td><td>61.8</td><td>64.5</td><td>62.8</td><td>2.6</td></tr></table>",
1440
+ "bbox": [
1441
+ 114,
1442
+ 386,
1443
+ 882,
1444
+ 615
1445
+ ],
1446
+ "page_idx": 9
1447
+ },
1448
+ {
1449
+ "type": "table",
1450
+ "img_path": "images/98a5def11baa30e5b15550cc0a933a9b535fc17800c3bd95c08113fe2ef9def1.jpg",
1451
+ "table_caption": [
1452
+ "Table 5: GLUE Results for English language: The evaluation metrics used for reporting the performance of QQP and MRPC tasks are F1 scores, while for the STS-B task, Spearman correlations are used, and accuracy scores are employed for the remaining tasks. The average and standard deviation are reported across 3 trails of the evaluation on the validation set. The relative decrease quantifies the decline in performance when positional encoding is excluded compared to when positional encoding was present. Additionally, we conducted experiments in which we removed positional encoding and perturbed the input to the model. Since the removal of positional encoding results in a bag of words model, we observed no noticeable change upon further distortion."
1453
+ ],
1454
+ "table_footnote": [],
1455
+ "table_body": "<table><tr><td>Task</td><td>learning rate</td><td>batch size</td><td>number of epochs</td></tr><tr><td>NER</td><td>2.00E-05</td><td>16</td><td>3</td></tr><tr><td>POS</td><td>3.00E-05</td><td>8</td><td>3</td></tr><tr><td>XNLI</td><td>3.00E-05</td><td>32</td><td>3</td></tr><tr><td>PAWS-X</td><td>3.00E-05</td><td>32</td><td>3</td></tr></table>",
1456
+ "bbox": [
1457
+ 273,
1458
+ 772,
1459
+ 724,
1460
+ 871
1461
+ ],
1462
+ "page_idx": 9
1463
+ },
1464
+ {
1465
+ "type": "text",
1466
+ "text": "Table 6: Hyper-parameter details",
1467
+ "bbox": [
1468
+ 383,
1469
+ 879,
1470
+ 611,
1471
+ 895
1472
+ ],
1473
+ "page_idx": 9
1474
+ },
1475
+ {
1476
+ "type": "page_number",
1477
+ "text": "21044",
1478
+ "bbox": [
1479
+ 475,
1480
+ 927,
1481
+ 524,
1482
+ 940
1483
+ ],
1484
+ "page_idx": 9
1485
+ },
1486
+ {
1487
+ "type": "text",
1488
+ "text": "the model struggles to identify grammatically acceptable sentences, leading to a notable decline of $59.7\\%$ in performance.",
1489
+ "bbox": [
1490
+ 112,
1491
+ 84,
1492
+ 489,
1493
+ 131
1494
+ ],
1495
+ "page_idx": 10
1496
+ },
1497
+ {
1498
+ "type": "text",
1499
+ "text": "Paraphrasing and sentence similarity tasks: Models can effectively capture the similarity or relatedness between sentences by focusing on common signals present across sentences. These tasks primarily require understanding the underlying semantic meaning and contextual similarities between sentences rather than the syntactic structure. As a result, when positional encoding is removed, the relative decrease in performance is considerably smaller (5.8% for MRPC, 2.7% for STS-B, and 1.5% for QQP). This indicates that while positional encoding does provide some benefit in capturing the positional information within sentences, it is not very crucial for these tasks.",
1500
+ "bbox": [
1501
+ 112,
1502
+ 133,
1503
+ 489,
1504
+ 356
1505
+ ],
1506
+ "page_idx": 10
1507
+ },
1508
+ {
1509
+ "type": "text",
1510
+ "text": "Natural language inference tasks: The removal of positional encoding leads to a relative decrease of $4.7\\%$ for MNLI, $5\\%$ for the mismatched version of MNLI, $4.2\\%$ for QNLI, and $2.6\\%$ for RTE. The decrease in performance is still relatively modest for these tasks. This highlights that positional encoding does not play a significant role in understanding the logical inference and entailment relationships between sentence pairs (Wang et al., 2020).",
1511
+ "bbox": [
1512
+ 112,
1513
+ 357,
1514
+ 489,
1515
+ 517
1516
+ ],
1517
+ "page_idx": 10
1518
+ },
1519
+ {
1520
+ "type": "text",
1521
+ "text": "Even in the absence of positional encoding, the bag of words model gives considerably good performance for paraphrasing and natural language inference tasks. The results on GLUE benchmark serve as a driving force behind our investigation, where we aim to further test our hypothesis across morphologically diverse languages.",
1522
+ "bbox": [
1523
+ 112,
1524
+ 519,
1525
+ 489,
1526
+ 632
1527
+ ],
1528
+ "page_idx": 10
1529
+ },
1530
+ {
1531
+ "type": "text",
1532
+ "text": "D.2 Impact of permutation:",
1533
+ "text_level": 1,
1534
+ "bbox": [
1535
+ 114,
1536
+ 646,
1537
+ 347,
1538
+ 663
1539
+ ],
1540
+ "page_idx": 10
1541
+ },
1542
+ {
1543
+ "type": "text",
1544
+ "text": "In addition to studying the effect of positional encoding, we also conducted experiments to examine the impact of permutation on various GLUE tasks.",
1545
+ "bbox": [
1546
+ 112,
1547
+ 671,
1548
+ 489,
1549
+ 719
1550
+ ],
1551
+ "page_idx": 10
1552
+ },
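The permutation probe can be sketched as shuffling a sentence at a chosen n-gram granularity; the paper does not give its exact procedure, so the helper below is one plausible reading with illustrative names.

```python
# Sketch (one plausible reading): shuffle a sentence at n-gram granularity.
# n=1 shuffles individual words; larger n shuffles chunks, preserving more local word order.
import random

def ngram_permute(sentence, n, seed=0):
    words = sentence.split()
    chunks = [words[i:i + n] for i in range(0, len(words), n)]
    random.Random(seed).shuffle(chunks)
    return " ".join(word for chunk in chunks for word in chunk)

print(ngram_permute("the quick brown fox jumps over the lazy dog", 1))
print(ngram_permute("the quick brown fox jumps over the lazy dog", 3))
```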
1553
+ {
1554
+ "type": "list",
1555
+ "sub_type": "text",
1556
+ "list_items": [
1557
+ "- Unigram permutation causes the most significant performance drop. However, as we increase the ngram order, which involves shuffling chunk of words instead of individual words, the decrease in performance is significantly less. This indicates that shuffling at higher ngram levels add less distortion and preserve the integrity of word order to a greater extent.",
1558
+ "- The results also imply that lower order ngrams capture vocabulary match and is completely ignorant of word order whereas higher order ngrams capture word order and other dependen"
1559
+ ],
1560
+ "bbox": [
1561
+ 114,
1562
+ 722,
1563
+ 487,
1564
+ 920
1565
+ ],
1566
+ "page_idx": 10
1567
+ },
1568
+ {
1569
+ "type": "text",
1570
+ "text": "cies present in a sentence.",
1571
+ "bbox": [
1572
+ 524,
1573
+ 85,
1574
+ 717,
1575
+ 99
1576
+ ],
1577
+ "page_idx": 10
1578
+ },
1579
+ {
1580
+ "type": "text",
1581
+ "text": "E Hyper-parameter details:",
1582
+ "text_level": 1,
1583
+ "bbox": [
1584
+ 509,
1585
+ 111,
1586
+ 764,
1587
+ 128
1588
+ ],
1589
+ "page_idx": 10
1590
+ },
1591
+ {
1592
+ "type": "text",
1593
+ "text": "The hyper-parameter details used at the time of fine-tuning are outlined in Table 6. For dependency parsing, we have followed the hyper-parameter settings mentioned in Glavaš and Vulić (2021).",
1594
+ "bbox": [
1595
+ 507,
1596
+ 137,
1597
+ 884,
1598
+ 200
1599
+ ],
1600
+ "page_idx": 10
1601
+ },
1602
+ {
1603
+ "type": "page_number",
1604
+ "text": "21045",
1605
+ "bbox": [
1606
+ 475,
1607
+ 927,
1608
+ 524,
1609
+ 940
1610
+ ],
1611
+ "page_idx": 10
1612
+ }
1613
+ ]