Add Batch 371ef547-f3da-4e9f-a9ad-6136075aed26 data
This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
- .gitattributes +64 -0
- 2023/A New Direction in Stance Detection_ Target-Stance Extraction in the Wild/05054386-a5e0-4465-a9e7-3cb95fe5c25f_content_list.json +2036 -0
- 2023/A New Direction in Stance Detection_ Target-Stance Extraction in the Wild/05054386-a5e0-4465-a9e7-3cb95fe5c25f_model.json +0 -0
- 2023/A New Direction in Stance Detection_ Target-Stance Extraction in the Wild/05054386-a5e0-4465-a9e7-3cb95fe5c25f_origin.pdf +3 -0
- 2023/A New Direction in Stance Detection_ Target-Stance Extraction in the Wild/full.md +384 -0
- 2023/A New Direction in Stance Detection_ Target-Stance Extraction in the Wild/images.zip +3 -0
- 2023/A New Direction in Stance Detection_ Target-Stance Extraction in the Wild/layout.json +0 -0
- 2023/A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction/a48b0d22-32b5-4318-8d85-fd66569fde26_content_list.json +0 -0
- 2023/A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction/a48b0d22-32b5-4318-8d85-fd66569fde26_model.json +0 -0
- 2023/A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction/a48b0d22-32b5-4318-8d85-fd66569fde26_origin.pdf +3 -0
- 2023/A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction/full.md +499 -0
- 2023/A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction/images.zip +3 -0
- 2023/A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction/layout.json +0 -0
- 2023/A Probabilistic Framework for Discovering New Intents/9ad2d74c-e7a6-461e-a960-992c009c7e1d_content_list.json +0 -0
- 2023/A Probabilistic Framework for Discovering New Intents/9ad2d74c-e7a6-461e-a960-992c009c7e1d_model.json +0 -0
- 2023/A Probabilistic Framework for Discovering New Intents/9ad2d74c-e7a6-461e-a960-992c009c7e1d_origin.pdf +3 -0
- 2023/A Probabilistic Framework for Discovering New Intents/full.md +429 -0
- 2023/A Probabilistic Framework for Discovering New Intents/images.zip +3 -0
- 2023/A Probabilistic Framework for Discovering New Intents/layout.json +0 -0
- 2023/A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires/803c053e-27a1-45b7-89d4-70d6ed2bf19b_content_list.json +0 -0
- 2023/A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires/803c053e-27a1-45b7-89d4-70d6ed2bf19b_model.json +0 -0
- 2023/A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires/803c053e-27a1-45b7-89d4-70d6ed2bf19b_origin.pdf +3 -0
- 2023/A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires/full.md +541 -0
- 2023/A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires/images.zip +3 -0
- 2023/A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires/layout.json +0 -0
- 2023/A Survey for Efficient Open Domain Question Answering/872e4524-758a-44e3-aed7-10ee6b0fce7e_content_list.json +0 -0
- 2023/A Survey for Efficient Open Domain Question Answering/872e4524-758a-44e3-aed7-10ee6b0fce7e_model.json +0 -0
- 2023/A Survey for Efficient Open Domain Question Answering/872e4524-758a-44e3-aed7-10ee6b0fce7e_origin.pdf +3 -0
- 2023/A Survey for Efficient Open Domain Question Answering/full.md +403 -0
- 2023/A Survey for Efficient Open Domain Question Answering/images.zip +3 -0
- 2023/A Survey for Efficient Open Domain Question Answering/layout.json +0 -0
- 2023/A Survey of Deep Learning for Mathematical Reasoning/2b7d63a4-978e-4e60-8b54-4fb49d34c681_content_list.json +0 -0
- 2023/A Survey of Deep Learning for Mathematical Reasoning/2b7d63a4-978e-4e60-8b54-4fb49d34c681_model.json +0 -0
- 2023/A Survey of Deep Learning for Mathematical Reasoning/2b7d63a4-978e-4e60-8b54-4fb49d34c681_origin.pdf +3 -0
- 2023/A Survey of Deep Learning for Mathematical Reasoning/full.md +0 -0
- 2023/A Survey of Deep Learning for Mathematical Reasoning/images.zip +3 -0
- 2023/A Survey of Deep Learning for Mathematical Reasoning/layout.json +0 -0
- 2023/A Survey on Asking Clarification Questions Datasets in Conversational Systems/8d770436-f7f9-4204-a41b-8e2e7dd040b2_content_list.json +0 -0
- 2023/A Survey on Asking Clarification Questions Datasets in Conversational Systems/8d770436-f7f9-4204-a41b-8e2e7dd040b2_model.json +0 -0
- 2023/A Survey on Asking Clarification Questions Datasets in Conversational Systems/8d770436-f7f9-4204-a41b-8e2e7dd040b2_origin.pdf +3 -0
- 2023/A Survey on Asking Clarification Questions Datasets in Conversational Systems/full.md +421 -0
- 2023/A Survey on Asking Clarification Questions Datasets in Conversational Systems/images.zip +3 -0
- 2023/A Survey on Asking Clarification Questions Datasets in Conversational Systems/layout.json +0 -0
- 2023/A Survey on Zero Pronoun Translation/3ae06d18-3d40-4838-ba49-dbe91c97e883_content_list.json +2162 -0
- 2023/A Survey on Zero Pronoun Translation/3ae06d18-3d40-4838-ba49-dbe91c97e883_model.json +0 -0
- 2023/A Survey on Zero Pronoun Translation/3ae06d18-3d40-4838-ba49-dbe91c97e883_origin.pdf +3 -0
- 2023/A Survey on Zero Pronoun Translation/full.md +396 -0
- 2023/A Survey on Zero Pronoun Translation/images.zip +3 -0
- 2023/A Survey on Zero Pronoun Translation/layout.json +0 -0
- 2023/A Synthetic Data Generation Framework for Grounded Dialogues/ff57bbb3-298c-4198-bfad-c2e95d1773a0_content_list.json +0 -0
.gitattributes
CHANGED
@@ -5987,3 +5987,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2023/vONTSS_[[:space:]]vMF[[:space:]]based[[:space:]]semi-supervised[[:space:]]neural[[:space:]]topic[[:space:]]modeling[[:space:]]with[[:space:]]optimal[[:space:]]transport/f21dd8d7-af9c-44fa-83d3-d7428e2956d3_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2023/“A[[:space:]]Little[[:space:]]is[[:space:]]Enough”_[[:space:]]Few-Shot[[:space:]]Quality[[:space:]]Estimation[[:space:]]based[[:space:]]Corpus[[:space:]]Filtering[[:space:]]improves[[:space:]]Machine[[:space:]]Translation/509a7d47-9017-4fe4-a7da-a16882d446b8_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2023/“Low-Resource”[[:space:]]Text[[:space:]]Classification_[[:space:]]A[[:space:]]Parameter-Free[[:space:]]Classification[[:space:]]Method[[:space:]]with[[:space:]]Compressors/547ef4fc-a3bf-4240-99b8-1f29887900a3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]New[[:space:]]Direction[[:space:]]in[[:space:]]Stance[[:space:]]Detection_[[:space:]]Target-Stance[[:space:]]Extraction[[:space:]]in[[:space:]]the[[:space:]]Wild/05054386-a5e0-4465-a9e7-3cb95fe5c25f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Novel[[:space:]]Table-to-Graph[[:space:]]Generation[[:space:]]Approach[[:space:]]for[[:space:]]Document-Level[[:space:]]Joint[[:space:]]Entity[[:space:]]and[[:space:]]Relation[[:space:]]Extraction/a48b0d22-32b5-4318-8d85-fd66569fde26_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Probabilistic[[:space:]]Framework[[:space:]]for[[:space:]]Discovering[[:space:]]New[[:space:]]Intents/9ad2d74c-e7a6-461e-a960-992c009c7e1d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Simple[[:space:]]and[[:space:]]Flexible[[:space:]]Modeling[[:space:]]for[[:space:]]Mental[[:space:]]Disorder[[:space:]]Detection[[:space:]]by[[:space:]]Learning[[:space:]]from[[:space:]]Clinical[[:space:]]Questionnaires/803c053e-27a1-45b7-89d4-70d6ed2bf19b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Survey[[:space:]]for[[:space:]]Efficient[[:space:]]Open[[:space:]]Domain[[:space:]]Question[[:space:]]Answering/872e4524-758a-44e3-aed7-10ee6b0fce7e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Survey[[:space:]]of[[:space:]]Deep[[:space:]]Learning[[:space:]]for[[:space:]]Mathematical[[:space:]]Reasoning/2b7d63a4-978e-4e60-8b54-4fb49d34c681_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Survey[[:space:]]on[[:space:]]Asking[[:space:]]Clarification[[:space:]]Questions[[:space:]]Datasets[[:space:]]in[[:space:]]Conversational[[:space:]]Systems/8d770436-f7f9-4204-a41b-8e2e7dd040b2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Survey[[:space:]]on[[:space:]]Zero[[:space:]]Pronoun[[:space:]]Translation/3ae06d18-3d40-4838-ba49-dbe91c97e883_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Synthetic[[:space:]]Data[[:space:]]Generation[[:space:]]Framework[[:space:]]for[[:space:]]Grounded[[:space:]]Dialogues/ff57bbb3-298c-4198-bfad-c2e95d1773a0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Systematic[[:space:]]Study[[:space:]]of[[:space:]]Knowledge[[:space:]]Distillation[[:space:]]for[[:space:]]Natural[[:space:]]Language[[:space:]]Generation[[:space:]]with[[:space:]]Pseudo-Target[[:space:]]Training/8afebe1a-34c8-429b-aed2-17f9c03204a6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Textual[[:space:]]Dataset[[:space:]]for[[:space:]]Situated[[:space:]]Proactive[[:space:]]Response[[:space:]]Selection/387814d9-8f78-4c94-93fc-a09ed5e51ccf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Theory[[:space:]]of[[:space:]]Unsupervised[[:space:]]Speech[[:space:]]Recognition/be427c69-594e-47d7-940f-1f4097d8fe71_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]Universal[[:space:]]Discriminator[[:space:]]for[[:space:]]Zero-Shot[[:space:]]Generalization/32c7a2f1-84bf-4791-b31f-7c19e714bfec_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]dynamic[[:space:]]programming[[:space:]]algorithm[[:space:]]for[[:space:]]span-based[[:space:]]nested[[:space:]]named-entity[[:space:]]recognition[[:space:]]in[[:space:]]O(n2)/0495ac5b-b465-4228-b5d4-f69d569070af_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/A[[:space:]]fine-grained[[:space:]]comparison[[:space:]]of[[:space:]]pragmatic[[:space:]]language[[:space:]]understanding[[:space:]]in[[:space:]]humans[[:space:]]and[[:space:]]language[[:space:]]models/a9a36605-947d-4882-9e7e-360a86458ba1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/ACCENT_[[:space:]]An[[:space:]]Automatic[[:space:]]Event[[:space:]]Commonsense[[:space:]]Evaluation[[:space:]]Metric[[:space:]]for[[:space:]]Open-Domain[[:space:]]Dialogue[[:space:]]Systems/92f3e19f-06e9-454d-a2b2-83f83888e1e7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/ACLM_[[:space:]]A[[:space:]]Selective-Denoising[[:space:]]based[[:space:]]Generative[[:space:]]Data[[:space:]]Augmentation[[:space:]]Approach[[:space:]]for[[:space:]]Low-Resource[[:space:]]Complex[[:space:]]NER/0b69e674-6367-45fa-a4f3-60843206c20e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/AD-KD_[[:space:]]Attribution-Driven[[:space:]]Knowledge[[:space:]]Distillation[[:space:]]for[[:space:]]Language[[:space:]]Model[[:space:]]Compression/7a308754-c09b-4a37-a605-d2c7cd65fe75_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/ALERT_[[:space:]]Adapt[[:space:]]Language[[:space:]]Models[[:space:]]to[[:space:]]Reasoning[[:space:]]Tasks/13abf6b1-d127-4734-a4bb-c3cc2769bc62_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/AMPERE_[[:space:]]AMR-Aware[[:space:]]Prefix[[:space:]]for[[:space:]]Generation-Based[[:space:]]Event[[:space:]]Argument[[:space:]]Extraction[[:space:]]Model/436c03c0-46a7-4668-b07c-655c1007d097_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/AMR-based[[:space:]]Network[[:space:]]for[[:space:]]Aspect-based[[:space:]]Sentiment[[:space:]]Analysis/65ca6666-2538-4460-801d-c13d5b6bd2c5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/APOLLO_[[:space:]]A[[:space:]]Simple[[:space:]]Approach[[:space:]]for[[:space:]]Adaptive[[:space:]]Pretraining[[:space:]]of[[:space:]]Language[[:space:]]Models[[:space:]]for[[:space:]]Logical[[:space:]]Reasoning/ab95127e-4c6f-414d-88c9-a1b71990a484_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/AV-TranSpeech_[[:space:]]Audio-Visual[[:space:]]Robust[[:space:]]Speech-to-Speech[[:space:]]Translation/d4275f60-7a3d-450d-bb27-0eed264e2215_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Abductive[[:space:]]Commonsense[[:space:]]Reasoning[[:space:]]Exploiting[[:space:]]Mutually[[:space:]]Exclusive[[:space:]]Explanations/15aa9f0b-ccfc-4486-96ee-4b189be9ac2b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Accelerating[[:space:]]Transformer[[:space:]]Inference[[:space:]]for[[:space:]]Translation[[:space:]]via[[:space:]]Parallel[[:space:]]Decoding/4090c341-5ebb-47f1-b801-0ff756c1c814_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Actively[[:space:]]Supervised[[:space:]]Clustering[[:space:]]for[[:space:]]Open[[:space:]]Relation[[:space:]]Extraction/b96a6d17-18bc-44ec-b68e-7c0258200d66_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Adaptive[[:space:]]and[[:space:]]Personalized[[:space:]]Exercise[[:space:]]Generation[[:space:]]for[[:space:]]Online[[:space:]]Language[[:space:]]Learning/1e8e83d4-ba76-454a-bbfe-afb2c136470e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Advancing[[:space:]]Multi-Criteria[[:space:]]Chinese[[:space:]]Word[[:space:]]Segmentation[[:space:]]Through[[:space:]]Criterion[[:space:]]Classification[[:space:]]and[[:space:]]Denoising/654811cb-0e2a-4224-92f1-ee372104182c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Aggregating[[:space:]]Multiple[[:space:]]Heuristic[[:space:]]Signals[[:space:]]as[[:space:]]Supervision[[:space:]]for[[:space:]]Unsupervised[[:space:]]Automated[[:space:]]Essay[[:space:]]Scoring/88b69f7a-cefe-4cce-aa74-87207f86b433_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/AlignScore_[[:space:]]Evaluating[[:space:]]Factual[[:space:]]Consistency[[:space:]]with[[:space:]]A[[:space:]]Unified[[:space:]]Alignment[[:space:]]Function/00d45b25-c717-45e6-a0b8-13fc17e3c513_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Alleviating[[:space:]]Over-smoothing[[:space:]]for[[:space:]]Unsupervised[[:space:]]Sentence[[:space:]]Representation/b5324667-b5a6-48d3-8ca8-4b9b6190470e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Ambiguous[[:space:]]Learning[[:space:]]from[[:space:]]Retrieval_[[:space:]]Towards[[:space:]]Zero-shot[[:space:]]Semantic[[:space:]]Parsing/a6800951-e028-4f63-8fc7-416204d3e3ea_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/An[[:space:]]AMR-based[[:space:]]Link[[:space:]]Prediction[[:space:]]Approach[[:space:]]for[[:space:]]Document-level[[:space:]]Event[[:space:]]Argument[[:space:]]Extraction/9c171248-a082-4a49-b2c7-df37e6d4c44b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/An[[:space:]]Empirical[[:space:]]Analysis[[:space:]]of[[:space:]]Parameter-Efficient[[:space:]]Methods[[:space:]]for[[:space:]]Debiasing[[:space:]]Pre-Trained[[:space:]]Language[[:space:]]Models/82e14613-cd21-4dc1-9fda-8fc129fe2691_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/An[[:space:]]Extensible[[:space:]]Plug-and-Play[[:space:]]Method[[:space:]]for[[:space:]]Multi-Aspect[[:space:]]Controllable[[:space:]]Text[[:space:]]Generation/c25d0f0d-ff1d-4a37-9ace-9821ed6ffd13_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/An[[:space:]]Inclusive[[:space:]]Notion[[:space:]]of[[:space:]]Text/0271e9ea-fa99-499c-bb3f-72052159ee30_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/An[[:space:]]Inner[[:space:]]Table[[:space:]]Retriever[[:space:]]for[[:space:]]Robust[[:space:]]Table[[:space:]]Question[[:space:]]Answering/47491908-96a9-4c78-aa43-5aebfd6ca8ab_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/An[[:space:]]Invariant[[:space:]]Learning[[:space:]]Characterization[[:space:]]of[[:space:]]Controlled[[:space:]]Text[[:space:]]Generation/de4a60e0-b0be-46f1-a6ad-5e89736a238e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/An[[:space:]]Ordinal[[:space:]]Latent[[:space:]]Variable[[:space:]]Model[[:space:]]of[[:space:]]Conflict[[:space:]]Intensity/a5a18e0f-37ba-46d7-a6af-2522f32abaf7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Analyzing[[:space:]]Transformers[[:space:]]in[[:space:]]Embedding[[:space:]]Space/4dc128bf-1845-47a4-b5ad-58a06e06c478_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Analyzing[[:space:]]and[[:space:]]Reducing[[:space:]]the[[:space:]]Performance[[:space:]]Gap[[:space:]]in[[:space:]]Cross-Lingual[[:space:]]Transfer[[:space:]]with[[:space:]]Fine-tuning[[:space:]]Slow[[:space:]]and[[:space:]]Fast/3d352785-df5c-4cad-9115-ea672ac097f6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Annotating[[:space:]]Mentions[[:space:]]Alone[[:space:]]Enables[[:space:]]Efficient[[:space:]]Domain[[:space:]]Adaptation[[:space:]]for[[:space:]]Coreference[[:space:]]Resolution/677d4fee-2fba-449f-806c-b94ce86fb1da_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Annotating[[:space:]]and[[:space:]]Detecting[[:space:]]Fine-grained[[:space:]]Factual[[:space:]]Errors[[:space:]]for[[:space:]]Dialogue[[:space:]]Summarization/82bf4bac-aac4-4d8c-a61f-ecf07c1d7a73_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Annotation-Inspired[[:space:]]Implicit[[:space:]]Discourse[[:space:]]Relation[[:space:]]Classification[[:space:]]with[[:space:]]Auxiliary[[:space:]]Discourse[[:space:]]Connective[[:space:]]Generation/2597789c-ed46-4a32-acb1-df6e8d48091e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Answering[[:space:]]Ambiguous[[:space:]]Questions[[:space:]]via[[:space:]]Iterative[[:space:]]Prompting/b20074db-6c55-4ee3-8099-035b2f86c367_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Are[[:space:]]Experts[[:space:]]Needed_[[:space:]]On[[:space:]]Human[[:space:]]Evaluation[[:space:]]of[[:space:]]Counselling[[:space:]]Reflection[[:space:]]Generation/569b27ec-e100-41d7-85bc-b43a8135d3b8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Are[[:space:]]Fairy[[:space:]]Tales[[:space:]]Fair_[[:space:]]Analyzing[[:space:]]Gender[[:space:]]Bias[[:space:]]in[[:space:]]Temporal[[:space:]]Narrative[[:space:]]Event[[:space:]]Chains[[:space:]]of[[:space:]]Children’s[[:space:]]Fairy[[:space:]]Tales/f6af969e-b24d-41ee-95a2-fd612372b14b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Are[[:space:]]Human[[:space:]]Explanations[[:space:]]Always[[:space:]]Helpful_[[:space:]]Towards[[:space:]]Objective[[:space:]]Evaluation[[:space:]]of[[:space:]]Human[[:space:]]Natural[[:space:]]Language[[:space:]]Explanations/dd086fa7-cf65-46d0-89f7-28d1f3daa7ad_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Are[[:space:]]Machine[[:space:]]Rationales[[:space:]](Not)[[:space:]]Useful[[:space:]]to[[:space:]]Humans_[[:space:]]Measuring[[:space:]]and[[:space:]]Improving[[:space:]]Human[[:space:]]Utility[[:space:]]of[[:space:]]Free-text[[:space:]]Rationales/c102269a-bf47-4d87-9c13-83dab8fde23d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Are[[:space:]]Message[[:space:]]Passing[[:space:]]Neural[[:space:]]Networks[[:space:]]Really[[:space:]]Helpful[[:space:]]for[[:space:]]Knowledge[[:space:]]Graph[[:space:]]Completion_/f15e01a2-78ac-43b1-b034-10707655d975_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Are[[:space:]]You[[:space:]]Copying[[:space:]]My[[:space:]]Model_[[:space:]]Protecting[[:space:]]the[[:space:]]Copyright[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]for[[:space:]]EaaS[[:space:]]via[[:space:]]Backdoor[[:space:]]Watermark/18bab685-7d48-4d4d-bc7e-e1718eeb9214_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/ArgAnalysis35K[[:space:]]_[[:space:]]A[[:space:]]large-scale[[:space:]]dataset[[:space:]]for[[:space:]]Argument[[:space:]]Quality[[:space:]]Analysis/0c46d690-13ee-4c75-950d-5bb62d5c666c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/ArgU_[[:space:]]A[[:space:]]Controllable[[:space:]]Factual[[:space:]]Argument[[:space:]]Generator/e4cedbdd-09db-4e9a-ae6f-1b4b6cb07c62_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/AtTGen_[[:space:]]Attribute[[:space:]]Tree[[:space:]]Generation[[:space:]]for[[:space:]]Real-World[[:space:]]Attribute[[:space:]]Joint[[:space:]]Extraction/4cea9f31-5fea-4372-a1e4-88b71c0b6066_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Attention[[:space:]]as[[:space:]]a[[:space:]]Guide[[:space:]]for[[:space:]]Simultaneous[[:space:]]Speech[[:space:]]Translation/5e458ea4-1208-4dac-a471-13b0c4234f56_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Attractive[[:space:]]Storyteller_[[:space:]]Stylized[[:space:]]Visual[[:space:]]Storytelling[[:space:]]with[[:space:]]Unpaired[[:space:]]Text/4b9fd1ff-7a20-415c-b1ad-522e88aa12fe_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Attributable[[:space:]]and[[:space:]]Scalable[[:space:]]Opinion[[:space:]]Summarization/d6ef21a4-4c7f-4f62-a4d7-2c5e40b69f91_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Augmentation-Adapted[[:space:]]Retriever[[:space:]]Improves[[:space:]]Generalization[[:space:]]of[[:space:]]Language[[:space:]]Models[[:space:]]as[[:space:]]Generic[[:space:]]Plug-In/f750f44b-7b98-4735-b38f-cb54b864782c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Automated[[:space:]]Metrics[[:space:]]for[[:space:]]Medical[[:space:]]Multi-Document[[:space:]]Summarization[[:space:]]Disagree[[:space:]]with[[:space:]]Human[[:space:]]Evaluations/437ee194-a1a1-4d8f-8d24-e63dcb87022b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Automatic[[:space:]]Annotation[[:space:]]of[[:space:]]Direct[[:space:]]Speech[[:space:]]in[[:space:]]Written[[:space:]]French[[:space:]]Narratives/d498ee1d-fbae-418f-98e5-96088ed05ff3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Automatic[[:space:]]Creation[[:space:]]of[[:space:]]Named[[:space:]]Entity[[:space:]]Recognition[[:space:]]Datasets[[:space:]]by[[:space:]]Querying[[:space:]]Phrase[[:space:]]Representations/341d663d-2d36-48e9-bc15-815bc30ce7f7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/BERM_[[:space:]]Training[[:space:]]the[[:space:]]Balanced[[:space:]]and[[:space:]]Extractable[[:space:]]Representation[[:space:]]for[[:space:]]Matching[[:space:]]to[[:space:]]Improve[[:space:]]Generalization[[:space:]]Ability[[:space:]]of[[:space:]]Dense[[:space:]]Retrieval/2a0bd5ab-3e8b-4db1-8f63-6a711223d6ad_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/BIC_[[:space:]]Twitter[[:space:]]Bot[[:space:]]Detection[[:space:]]with[[:space:]]Text-Graph[[:space:]]Interaction[[:space:]]and[[:space:]]Semantic[[:space:]]Consistency/c6990410-4b34-4120-9f86-f6131ee4f1d5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/BIG-C_[[:space:]]a[[:space:]]Multimodal[[:space:]]Multi-Purpose[[:space:]]Dataset[[:space:]]for[[:space:]]Bemba/dc0fe768-7bd6-4cb1-b433-66a67bb8fcb2_origin.pdf filter=lfs diff=lfs merge=lfs -text
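Each added .gitattributes line routes one PDF through Git LFS, with literal spaces in the path escaped as the POSIX character class `[[:space:]]`, which is how `git lfs track` writes patterns for paths containing spaces. A minimal sketch of that escaping (the helper name `lfs_track_pattern` is hypothetical, not part of any tool's API):

```python
def lfs_track_pattern(path: str) -> str:
    """Escape literal spaces the way `git lfs track` writes .gitattributes patterns."""
    return path.replace(" ", "[[:space:]]")

# Build a full attribute line like the ones added in this commit.
attr_line = lfs_track_pattern("2023/A New Direction.pdf") + " filter=lfs diff=lfs merge=lfs -text"
print(attr_line)
# → 2023/A[[:space:]]New[[:space:]]Direction.pdf filter=lfs diff=lfs merge=lfs -text
```

Whether a given file actually matches its pattern can be checked with `git check-attr filter -- <path>`.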
2023/A New Direction in Stance Detection_ Target-Stance Extraction in the Wild/05054386-a5e0-4465-a9e7-3cb95fe5c25f_content_list.json
ADDED
@@ -0,0 +1,2036 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
[
    {
        "type": "text",
        "text": "A New Direction in Stance Detection: Target-Stance Extraction in the Wild",
        "text_level": 1,
        "bbox": [114, 89, 882, 110],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Yingjie Li* Krishna Garg* Cornelia Caragea",
        "text_level": 1,
        "bbox": [280, 136, 710, 154],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "University of Illinois at Chicago {yli300,kgarg8,cornelia}@uic.edu",
        "bbox": [337, 170, 663, 204],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Abstract",
        "text_level": 1,
        "bbox": [260, 252, 339, 266],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Stance detection aims to detect the stance toward a corresponding target. Existing works have achieved promising progress on stance detection tasks in which the goal is to predict the stance given both a target and a text. However, they all work under the assumption that the target is known in advance, which is often not the case in the wild. Given a text from social media platforms, the target information is often unknown due to implicit mentions in the source text and it is infeasible to have manual target annotations at a large scale. Therefore, in this paper, we propose a new task Target-Stance Extraction (TSE) that aims to extract the (target, stance) pair from the text. We benchmark the task by proposing a two-stage framework that first identifies the relevant target in the text and then detects the stance given the predicted target and text. Specifically, we first propose two different settings: Target Classification and Target Generation, to identify the potential target from a given text. Then we propose a multi-task approach that takes target prediction as the auxiliary task to detect the stance toward the predicted target. We evaluate the proposed framework on both in-target stance detection in which the test target is always seen in the training stage and zero-shot stance detection that needs to detect the stance for the unseen target during the inference stage. The new TSE task can facilitate future research in the field of stance detection. We publicly release our code.<sup>1</sup>",
        "bbox": [141, 279, 460, 747],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "1 Introduction",
        "text_level": 1,
        "bbox": [114, 762, 258, 778],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Stance detection aims to automatically identify people's attitude/viewpoint (e.g., Favor or Against) expressed in texts toward a target that is generally a controversial topic or political figure (ALDayel and Magdy, 2021; Kucuk and Can, 2020; Hardalov et al., 2021). For example, the tweet in Figure 1",
        "bbox": [112, 788, 489, 885],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Both authors contributed equally to this research. \n<sup>1</sup>https://github.com/chuchun8/TSE",
        "bbox": [136, 891, 442, 917],
        "page_idx": 0
    },
    {
        "type": "image",
        "img_path": "images/9cb2dfb44576dc0c1c84bc30255abfb5566fd9ea2d994fc61ed7c12dfcd56b85.jpg",
        "image_caption": [
            "Figure 1: The comparison between the proposed Target-Stance Extraction (TSE) task and original stance detection task."
        ],
        "image_footnote": [],
        "bbox": [524, 250, 870, 373],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "expresses a stance of \"Against\" toward the target \"Atheism.\"",
        "bbox": [507, 463, 882, 493],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Social media platforms like Twitter, Facebook and other debate forums have become an integral way of opinion dissemination these days (Khan et al., 2021). The peculiar characteristics of these platforms are that the information is usually scattered across texts and the opinionated text could be expressed toward target entities in an implicit way. Existing methods have achieved promising performance on in-target stance detection in which same targets are seen in both train and test sets (Mohammad et al., 2016a; Sobhani et al., 2017; Li and Caragea, 2019, 2021a) and cross-target stance detection that aims to transfer the knowledge from a source target to a destination target (Augenstein et al., 2016; Xu et al., 2018; Zhang et al., 2020). However, almost all previous methods work under the assumption that the target is known or manually identified, which is often not the case in the wild. In practice, the target is unknown given a text and it is usually implicitly mentioned in the text, as can be seen from the example shown in Figure 1. Therefore, instead of detecting the stance given both the target and text, we propose a more challenging task Target-Stance Extraction (TSE) in the context of stance detection that aims to extract the (target, stance) pair from the text. The new TSE",
        "bbox": [507, 500, 882, 917],
        "page_idx": 0
    },
    {
        "type": "page_number",
        "text": "10071",
        "bbox": [475, 927, 522, 940],
        "page_idx": 0
    },
    {
        "type": "footer",
        "text": "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
        "bbox": [226, 945, 769, 957],
        "page_idx": 0
    },
    {
        "type": "footer",
        "text": "Volume 1: Long Papers, pages 10071-10085",
        "bbox": [361, 958, 636, 971],
        "page_idx": 0
    },
    {
        "type": "footer",
        "text": "July 9-14, 2023 ©2023 Association for Computational Linguistics",
        "bbox": [295, 972, 700, 985],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "task is more challenging because it includes both target identification and stance detection.",
        "bbox": [112, 84, 485, 115],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "To tackle this task, we propose a two-step framework that first identifies the relevant target in the text and then detects the stance given the predicted target and the text, as shown in Figure 1. In the first stage, we propose two different settings to identify the target discussed in a text: (1) Target Classification, where we train a text classifier (Schuster and Paliwal, 1997; Devlin et al., 2019; Nguyen et al., 2020) to predict the target as one of the pre-defined targets, and (2) Target Generation, where we leverage BART (Lewis et al., 2020) model that is pretrained on a keyphrase generation dataset (Xiong et al., 2019; Gallina et al., 2019; Garg et al., 2022) to generate keyphrases (e.g., \"Christianity\" in Figure 1), and then map them to one of the pre-defined targets (e.g., \"Atheism\"). In the second stage, we propose a multi-task framework that takes the target prediction as the auxiliary task for stance detection. We expect the stance detection model to better capture the target-related features and to develop a better understanding of the text itself with the auxiliary task.",
        "bbox": [112, 117, 489, 470],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Our proposed two-step framework can not only be applied to in-target stance detection, but also zero-shot stance detection in which targets of test examples are not seen in the train set. We evaluate the proposed framework on the combined set of four stance datasets (Mohammad et al., 2016a; Stab et al., 2018; Glandt et al., 2021; Li et al., 2021a) for in-target stance detection. Further, we extend our framework to zero-shot stance detection and test it on six targets of diverse domains (Somasundaran and Wiebe, 2010; Mohammad et al., 2016a; Conforti et al., 2020; Miao et al., 2020; Gautam et al., 2020). It is worth noting that our primary goal is not to present a new state-of-the-art model, but to deliver a new and more challenging task to stimulate research on stance detection.",
        "bbox": [112, 470, 489, 727],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "We summarize our contributions as follows:",
        "bbox": [132, 734, 460, 750],
        "page_idx": 1
    },
    {
        "type": "list",
        "sub_type": "text",
        "list_items": [
            "- We propose a new Target-Stance Extraction (TSE) task, aimed to extract the pair of target and stance from each sentence.",
            "- We benchmark the task by proposing a two-step framework that can be applied to both in-target and zero-shot stance detection.",
            "- We propose a multi-task framework that uses the target prediction as an auxiliary task to improve the performance of stance detection."
        ],
        "bbox": [136, 757, 489, 919],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "2 Related Work",
        "text_level": 1,
        "bbox": [509, 83, 665, 98],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Stance Detection The stance detection task aims to detect the stance toward a specific target (Mohammad et al., 2016a; Schiller et al., 2021; Hardalov et al., 2022). The target could be defined in a variety of forms: a controversial figure (Darwish et al., 2017; Grimminger and Klinger, 2021; Li et al., 2021a), a hot topic such as gun control (Hasan and Ng, 2014; Mohammad et al., 2016a; Stab et al., 2018; Vamvas and Sennrich, 2020; Conforti et al., 2020; Glandt et al., 2021) or a claim (Rao and Pomerleau, 2017; Derczynski et al., 2017; Gorrell et al., 2019). In previous works, the target is usually manually provided along with the input sentence to a stance classifier. However, given a post on social media, we may not have a direct clue about the target information due to their implicit mentions, and it is infeasible to do large-scale target annotations by humans. Motivated by this observation, we propose a new task named Target-Stance Extraction (TSE) that aims to extract both the target and the corresponding stance from a given text.",
        "bbox": [507, 112, 884, 451],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Besides the in-target stance detection (Mohammad et al., 2016a; Li and Caragea, 2021b) in which the test target is seen in the training stage, crosstarget stance detection (Augenstein et al., 2016; Xu et al., 2018; Zhang et al., 2020; Liang et al., 2021) and zero-shot stance detection (Allaway and McKeown, 2020; Liang et al., 2022; Li et al., 2023) have also drawn a lot of attention recently. In crosstarget stance detection, a classifier is adapted from a different but related target to a destination target in a one-to-one way, whereas in zero-shot stance detection we need to detect the stance for a variety of unseen targets at the inference stage. In this paper, we evaluate our proposed framework in both in-target and zero-shot settings.",
        "bbox": [507, 454, 884, 695],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Keyphrase Generation / Extraction Keyphrase generation or extraction is the task where given a source document (e.g., a scientific article, newspaper article, or webpage), we predict the keyphrases that best describe or summarize that document (Garg et al., 2022; Ray Chowdhury et al., 2022, 2019; Alzaidy et al., 2019; Patel and Caragea, 2019; Meng et al., 2017; Yuan et al., 2020; Ye et al., 2021; Florescu and Caragea, 2017; Sterckx et al., 2016; Caragea et al., 2014). In the context of stance detection, we can use keyphrase generation models to generate keyphrases that are target-related words give an input text. To our knowledge, target-related",
        "bbox": [507, 709, 885, 919],
        "page_idx": 1
    },
    {
        "type": "page_number",
        "text": "10072",
        "bbox": [477, 927, 524, 940],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "keyphrase generation task has not been explored before for stance detection.",
        "bbox": [112, 84, 485, 115],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "The most popular paradigm for the keyphrase generation task is the One2Seq encoder-decoder framework (Meng et al., 2017) where given a document, we generate a sequence of [SEP] separated keyphrases in an auto-regressive way. We use the pre-trained BART model (Lewis et al., 2020) finetuned separately on three keyphrase generation datasets, i.e., OpenKP (Xiong et al., 2019), KP-Times (Gallina et al., 2019), and FullTextKP (Garg et al., 2022) and generate keyphrases using the One2Seq model.",
        "bbox": [112, 117, 489, 294],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "3 Task and Datasets",
        "text_level": 1,
        "bbox": [112, 304, 305, 319],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "3.1 Task Definition",
        "text_level": 1,
        "bbox": [112, 329, 280, 343],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "Let $D_{tr} = \\{x_i, t_i, y_i\\}_{i=1}^n$ be a train set where $x_i$ is a sequence of words, $t_i$ is the target holding the stance and $y_i$ is the stance label. In the original stance detection task the aim was to only detect the stance $y_i$ given the target $t_i$ and the text $x_i$ .",
        "bbox": [112, 349, 487, 430],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "Target-Stance Extraction Objective In our proposed Target-Stance Extraction (TSE) task the goal is to extract the target-stance pair $(t_i, y_i)$ given $x_i$ .",
        "bbox": [112, 438, 489, 487],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "3.2 Datasets",
        "text_level": 1,
        "bbox": [112, 495, 228, 510],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "In-Target TSE For in-target TSE, we conduct experiments on the merged set of four stance detection datasets to evaluate the proposed framework. 1) SemEval-2016 (SE) (Mohammad et al., 2016b) contains 5 pre-defined targets, including Atheism, Climate Change is a Real Concern, Feminist Movement, Hillary Clinton and Legalization of Abortion. Each sample is annotated with Favor, Against or None. 2) AM (Stab et al., 2018) is an argument mining dataset containing 8 targets, including Abortion, Cloning, Death Penalty, Gun Control, Marijuana Legalization, Minimum Wage, Nuclear Energy and School Uniforms. Each sample is annotated with Support, Oppose or Neutral. 3) COVID-19 (C19) (Glandt et al., 2021) contains 4 targets related to COVID-19: Wearing a Face Mask, Anthony S. Fauci, School Closures and Stay at Home Orders. Each sample can be classified as Favor, Against or None. 4) P-Stance (PS) (Li et al., 2021a) contains 3 targets related to the 2020 U.S. presidential election: Donald Trump, Joe Biden and Bernie Sanders. Each instance is annotated with Favor or Against.",
        "bbox": [112, 516, 489, 885],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "Train, validation and test sets are provided for the AM, COVID-19, and P-Stance datasets. For",
        "bbox": [112, 887, 487, 917],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "SemEval-2016, train and test sets are provided and we split the train set into train and validation sets. We remove the target Climate Change of SemEval-2016 from training for the usage of zero-shot setting. Data statistics and examples of these datasets are shown in Tables 1 and 2.",
        "bbox": [507, 84, 884, 180],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "Zero-Shot TSE We also curate a new zero-shot dataset from existing datasets to test the model performance on unseen targets during the inference stage. We collect 500 samples for each of the following targets from its original dataset: 1) Creationism (Somasundaran and Wiebe, 2010), 2) Gay Rights (Somasundaran and Wiebe, 2010), 3) Climate Change is a Concern (Mohammad et al., 2016a), 4) MeToo Movement (Gautam et al., 2020), 5) Merger of Disney and Fox (Conforti et al., 2020), 6) Lockdown in New York State (Miao et al., 2020).",
        "bbox": [507, 190, 884, 366],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "To mimic the real-world scenario that a text may contain no targets of interest, we consider an additional target label Unrelated in both in-target and zero-shot settings. We provide the details about the curation of such samples in the Appendix A. We maintain a ratio of 5:1 for interested targets vs. the Unrelated category in the final datasets for both in-target and zero-shot TSE. The numbers of targets for in-target and zero-shot datasets are $18^{2}$ and 6, respectively, and we add the Unrelated category in each dataset.",
        "bbox": [507, 367, 882, 542],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "4 Approach",
        "text_level": 1,
        "bbox": [507, 557, 630, 573],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "As discussed in the previous section, TSE is a challenging task that involves both target identification and stance detection given a text. To tackle this task, we propose a two-stage framework, in which we first identify the target from a given text using either a target classification or target generation approach and then detect the stance toward the predicted target with a stance classifier in the second stage. The overall framework of our proposed approach is shown in Figure 2.",
        "bbox": [507, 582, 882, 743],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "4.1 Stage 1: Target Identification",
        "text_level": 1,
        "bbox": [507, 753, 786, 769],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "In this stage, we extract the target from the text based on either training classifiers, e.g., BiLSTM or BERT, to predict the target from a set of pre-defined targets or by using a BART-fine-tuned keyphrase generation module to generate keyphrases for the text and then map them to the pre-defined set of",
        "bbox": [507, 775, 880, 871],
        "page_idx": 2
    },
    {
        "type": "page_footnote",
        "text": "2We merge the semantically similar targets Abortion (AM) and Legalization of Abortion (SemEval-2016) for the merged training dataset.",
        "bbox": [507, 879, 880, 917],
        "page_idx": 2
    },
    {
        "type": "page_number",
        "text": "10073",
        "bbox": [477, 927, 524, 940],
        "page_idx": 2
    },
    {
        "type": "table",
        "img_path": "images/fce267fbf44d8db43a6821d7574ebfb13399a5341df8b84be5e4d3731713d05b.jpg",
        "table_caption": [],
        "table_footnote": [],
        "table_body": "<table><tr><td>Dataset</td><td>#Train</td><td>#Val</td><td>#Test</td><td>Targets</td></tr><tr><td>SemEval-2016</td><td>2,160</td><td>359</td><td>1,080</td><td>Atheism, Feminist Movement, Hillary Clinton, Legalization of Abortion</td></tr><tr><td>AM</td><td>18,341</td><td>2,042</td><td>5,109</td><td>Abortion, Cloning, Death Penalty, Gun Control, Marijuana Legalization, Minimum Wage, Nuclear Energy, School Uniforms</td></tr><tr><td>COVID-19</td><td>4,533</td><td>800</td><td>800</td><td>Face Masks, Fauci, Stay at Home Orders, School Closures</td></tr><tr><td>P-Stance</td><td>17,224</td><td>2,193</td><td>2,157</td><td>Joe Biden, Bernie Sanders, Donald Trump</td></tr><tr><td>Zero-Shot</td><td>-</td><td>-</td><td>3,000</td><td>Creationism, Gay Rights, Climate Change is a Concern, MeToo Move-ment, Merger of Disney and Fox, Lockdown in New York State</td></tr></table>",
        "bbox": [132, 80, 863, 203],
        "page_idx": 3
    },
    {
        "type": "table",
        "img_path": "images/c9f740ea3ee79a5dba803c6594b3418e69cbb04d359deddff01dca9d13b23a18.jpg",
        "table_caption": [
            "Table 1: Data split statistics for SemEval-2016, AM, COVID-19, P-Stance and Zero-Shot datasets."
        ],
        "table_footnote": [],
        "table_body": "<table><tr><td>Dataset</td><td>Target</td><td>Tweet</td><td>Stance</td></tr><tr><td>SemEval-2016</td><td>Atheism</td><td>Religious leaders are like political leaders - they say what they think people want to hear. #freethinker #SemST</td><td>Favor</td></tr><tr><td>AM</td><td>Gun Control</td><td>Restrictions on gun ownership will only encourage outlaws to have heavy ammunition and high calibre weapons.</td><td>Against</td></tr><tr><td>COVID-19</td><td>Face Masks</td><td>@MrMasonMills @YcmiYcmiu There is air in houses/buildings too. Are we expected to live in a mask constantly?</td><td>Against</td></tr><tr><td>P-Stance</td><td>Donald Trump</td><td>There was no collusion Collusion is not a crime Even if it's a crime, it's doesn't matter. It's ALL HILLARY AND OBAMA'S FAULT The evolution of the #Trump defense</td><td>Favor</td></tr><tr><td>Zero-Shot</td><td>Gay Rights</td><td>Yes! You rock gay people. They are people just like we are and if two men want to marry each other, than go for it</td><td>Favor</td></tr></table>",
        "bbox": [151, 240, 845, 411],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "Table 2: Examples from stance detection datasets.",
        "bbox": [324, 420, 668, 435],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "targets. Our intuition is that the keyphrases corresponding to a text capture its essence and they should correlate well with the target towards which the stance is expressed. For instance, in Figure 1, the generated target Christianity quite succinctly captures the essence from the tweet Jesus, you are my helper... and at the same time, the generated target Christianity correlates semantically well to the golden target Atheism.",
        "bbox": [112, 461, 487, 605],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "Target Classification In this approach, we train a classifier based on the merged dataset with texts as inputs and their corresponding targets as the ground truth labels. Note that the stance labels are not used in this target classification task. We discuss this approach in more details in §5.2.",
        "bbox": [112, 614, 487, 709],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "Target Generation In this approach, we first fine-tune a BART model on one of the keyphrase generation datasets separately, $^{3}$ i.e., OpenKP (Xiong et al., 2019), KPTimes (Gallina et al., 2019) and FullTextKP (Garg et al., 2022). The BART keyphrase generation model is used to generate keyphrases (e.g., \"Christianity\") given a text. Note that the generated keyphrases may not directly belong to any of the",
        "bbox": [110, 720, 489, 850],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "target classes we are interested in. Therefore, a similarity mapping is adopted to map the generated keyphrases into one of the pre-defined targets.",
        "bbox": [507, 461, 880, 508],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "For similarity mapping, we first train a FastText model (Bojanowski et al., 2017) on the train set of the merged dataset. Our choice for FastText is motivated by its efficiency while maintaining comparative performance with BERT-based models. Then we obtain word embeddings of the generated keyphrases by sending them as inputs to the FastText model. Finally, a cosine similarity score is calculated between the embeddings of generated keyphrase and each pre-defined target. We predict the target that has the highest similarity score with the generated keyphrase. Note that the generated keyphrase is classified as Unrelated if the highest similarity score is below a specific threshold.",
        "bbox": [507, 511, 882, 734],
        "page_idx": 3
    },
    {
        "type": "text",
        "text": "4.2 Stage 2: Stance Detection",
"text": "4.2 Stage 2: Stance Detection",
|
| 593 |
+
"text_level": 1,
|
| 594 |
+
"bbox": [
|
| 595 |
+
507,
|
| 596 |
+
750,
|
| 597 |
+
756,
|
| 598 |
+
765
|
| 599 |
+
],
|
| 600 |
+
"page_idx": 3
|
| 601 |
+
},
|
| 602 |
+
{
"type": "text",
"text": "Given a text in the wild, the target information is usually unknown, and thus we first predict the target from either target classification or target generation in the first stage. Then in the second stage, we use a stance classifier that is trained on the merged set to detect the stance of predicted targets.",
"bbox": [
507,
772,
884,
869
],
"page_idx": 3
},
{
"type": "text",
"text": "For stance detection, we train a stance classifier as follows. Given a text $x_{i}$ and a target $t_i$ , we first formulate the input as a sequence $s_i = [[CLS] t_i$",
"bbox": [
507,
871,
882,
919
],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "3We also fine-tuned the BART model on stance datasets to directly learn to generate the targets of interest. However, it shows much worse performance than the models trained on keyphrase generation datasets potentially due to the smaller size of the stance datasets.",
"bbox": [
112,
856,
487,
917
],
"page_idx": 3
},
{
"type": "page_number",
"text": "10074",
"bbox": [
477,
927,
524,
940
],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/3daf542481a6e80f1c158421c2f132705829d67955466ef2eebdb93ddf519ec8.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
122,
82,
480,
247
],
"page_idx": 4
},
{
"type": "image",
"img_path": "images/e9706fd46c7448ef123ad0d0c727a718c82f03b65523a3b71953eea6f0ce3bcb.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
122,
249,
480,
426
],
"page_idx": 4
},
{
"type": "image",
"img_path": "images/22c4364a4d8acedc95247e416fda8f538cfa916b5859f7001050e565b0bb3e33.jpg",
"image_caption": [
"Figure 2: Model architecture of our two-stage approach for Target-Stance Extraction task. Models in black dash boxes can be replaced with other baselines. Model architecture in the red dash box indicates the alternative solution in the first stage."
],
"image_footnote": [],
"bbox": [
119,
432,
480,
625
],
"page_idx": 4
},
{
"type": "text",
"text": "$[SEP] x_{i}]$ where $[CLS]$ is a token that encodes the sentence and $[SEP]$ is used to separate the sentence $x_{i}$ and the target $t_{i}$ . Then the representation of $[CLS]$ token is used to predict the stance toward target $t_{i}$ . Note that $t_{i}$ is the golden target in the training stage and is the predicted target from target identification at the inference stage.",
"bbox": [
112,
739,
487,
851
],
"page_idx": 4
},
{
"type": "text",
"text": "To facilitate a model's ability to capture target-related features that are of vital importance to stance detection, we propose a multi-task framework that uses target prediction as the auxiliary",
"bbox": [
112,
854,
489,
919
],
"page_idx": 4
},
{
"type": "text",
"text": "task that aims to predict the target given the input text for stance detection. More specifically, in the auxiliary task, we formulate the input as $[CLS] x_{i}$ [SEP]] and the golden label is target $t_{i}$ . The layers of encoders are shared across tasks and each task has its specific fully-connected layer on top, which is updated during the training. We expect the model to be able to put more attention on target-related words with the auxiliary task, and thus show better performance on stance detection task. The overall architecture is shown in Figure 2.",
"bbox": [
507,
84,
882,
261
],
"page_idx": 4
},
{
"type": "text",
"text": "Note that the auxiliary task is similar with target classification of Stage 1 and thus it cannot be used in zero-shot stance detection. In zero-shot setting, we first leverage the keyphrase generation model for target prediction and then detect the stance toward the predicted target with the multi-task stance model. In order to be consistent with the target generation setting that decouples target identification from stance detection, we train a separate target classification model (BERTweet or BiLSTM) in Stage 1 and a multi-task model (BERTweet or other stance detection baselines) in Stage 2 for stance detection. However, note that the target classification of the auxiliary task can be used for the in-target TSE setting.",
"bbox": [
507,
262,
884,
502
],
"page_idx": 4
},
{
"type": "text",
"text": "5 Experimental Settings",
"text_level": 1,
"bbox": [
507,
517,
737,
533
],
"page_idx": 4
},
{
"type": "text",
"text": "5.1 Evaluation Metrics",
"text_level": 1,
"bbox": [
507,
545,
705,
558
],
"page_idx": 4
},
{
"type": "text",
"text": "Target-Stance Extraction Target-Stance Extraction (TSE) task aims to extract the target-stance pair from a given text. We propose to solve this task by first identifying the target from the text and then detecting the stance toward the predicted target. We gather the (predicted target, predicted stance) pair for evaluation. For TSE task, we use the $F_{1}$ and accuracy as the evaluation metrics. The calculation of $F_{1}$ is shown as follows:",
"bbox": [
507,
567,
882,
712
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\nP r e c i s i o n = \\frac {\\# c o r r e c t}{\\# p r e d i c t}, \\tag {1}\n$$\n",
"text_format": "latex",
"bbox": [
600,
724,
882,
758
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\text {R e c a l l} = \\frac {\\# \\text {c o r r e c t}}{\\# \\text {g o l d}}, \\tag {2}\n$$\n",
"text_format": "latex",
"bbox": [
613,
772,
882,
806
],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\nF _ {1} = \\frac {2 \\times \\text {P r e c i s i o n} \\times \\text {R e c a l l}}{\\text {P r e c i s i o n} + \\text {R e c a l l}} \\tag {3}\n$$\n",
"text_format": "latex",
"bbox": [
576,
814,
882,
847
],
"page_idx": 4
},
{
"type": "text",
"text": "where #correct denotes the number of target-stance pairs correctly predicted by the model, #predict denotes the number of target-stance pairs whose target is predicted as one of our interested",
"bbox": [
507,
854,
882,
917
],
"page_idx": 4
},
{
"type": "page_number",
"text": "10075",
"bbox": [
477,
927,
524,
940
],
"page_idx": 4
},
{
"type": "text",
"text": "targets (not Unrelated) by the model, #gold denotes the number of target-stance pairs whose target is not Unrelated in the dataset.",
"bbox": [
112,
84,
489,
131
],
"page_idx": 5
},
{
"type": "text",
"text": "For accuracy, a prediction pair can be counted as a correct prediction if it satisfies one of the following two conditions: 1) the predicted target-stance pair is the same as the golden one if the golden target is not Unrelated, 2) the predicted target and the golden target are both Unrelated. Since we show no interest in Unrelated category, we do not detect the stance toward the Unrelated category.",
"bbox": [
112,
133,
489,
261
],
"page_idx": 5
},
{
"type": "text",
"text": "Target Identification We evaluate the target classification and target generation using microaveraged $F_{1}$ over the golden targets in each dataset.",
"bbox": [
112,
269,
489,
317
],
"page_idx": 5
},
{
"type": "text",
"text": "Stance Detection For the original formulation of the stance detection task, we use the $F_{avg}$ , macro-average of $F_{1}$ ( $F_{mac}$ ) and micro-average of $F_{1}$ ( $F_{mic}$ ) as the evaluation metrics following the previous work (Mohammad et al., 2016b). $F_{avg}$ is calculated as the average $F_{1}$ of Favor and Against toward each dataset. Further, $F_{mac}$ is calculated by averaging the $F_{avg}$ across all four datasets. We obtain $F_{mic}$ by averaging the $F_{1}$ of Favor and Against across the merged dataset.",
"bbox": [
112,
326,
489,
487
],
"page_idx": 5
},
{
"type": "text",
"text": "5.2 Baseline Models",
"text_level": 1,
"bbox": [
112,
497,
287,
511
],
"page_idx": 5
},
{
"type": "text",
"text": "Target Classification As discussed in §4.1, this task involves training a classifier which can predict the target mentioned in the given tweet. We use the following neural network based classifiers:",
"bbox": [
112,
518,
487,
581
],
"page_idx": 5
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- BiLSTM (Schuster and Paliwal, 1997): We use BiLSTM networks followed by two linear layers to predict the target given a text.",
"- BERT (Devlin et al., 2019): A pre-trained language model that predicts the target by appending a linear layer to the hidden representation of $[CLS]$ token. We fine-tune the BERT-base on the target classification task.",
"- BERTweet (Nguyen et al., 2020): This variant of BERT is pre-trained on 845M English Tweets following the training procedure of RoBERTa (Liu et al., 2019). We fine-tune the BERTweet on the target classification task."
],
"bbox": [
136,
590,
487,
820
],
"page_idx": 5
},
{
"type": "text",
"text": "Target Generation As discussed in §4.1, we train the BART model separately on the keyphrase generation datasets as described below:",
"bbox": [
112,
829,
487,
876
],
"page_idx": 5
},
{
"type": "text",
"text": "- BART-OpenKP: BART, pre-trained on the OpenKeyPhrase (OpenKP) dataset (Xiong",
"bbox": [
136,
887,
487,
919
],
"page_idx": 5
},
{
"type": "text",
"text": "et al., 2019), is used as a baseline for generating keyphrases for the input texts. OpenKP is a large-scale open domain keyphrase extraction dataset consisting of 148,124 annotated real-world webpages.",
"bbox": [
542,
84,
884,
165
],
"page_idx": 5
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- BART-KPTimes: BART, pre-trained on the KPTimes (Gallina et al., 2019) dataset, serves as another baseline for target generation. KP-Times is a large-scale keyphrase generation dataset consisting of $\\sim 280,000$ news articles with the editor-curated keyphrases.",
"- BART-FullTextKP: BART is pre-trained on the FullTextKP (Garg et al., 2022) dataset. FullTextKP is a collection of 142,844 scientific articles along with the annotated keyphrases. We use the version of FullTextKP which contains only the titles and abstracts of those articles."
],
"bbox": [
531,
175,
884,
394
],
"page_idx": 5
},
{
"type": "text",
"text": "Stance Detection We first train the model on the merged dataset and then apply the well-trained model to predict the stance toward the predicted target from the target identification stage. We conduct experiments with the following baselines:",
"bbox": [
507,
406,
882,
486
],
"page_idx": 5
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- BiLSTM (Schuster and Paliwal, 1997): A BiLSTM model is used to predict the stance without considering the target information.",
"- BiCond (Augenstein et al., 2016): A BiLSTM model that uses a conditional encoding method. The target is first encoded by a BiLSTM, whose hidden representations are then used to initialize another BiLSTM with sentences as inputs.",
"- TAN (Du et al., 2017): An attention-based BiLSTM model that learns the correlation between target and sentence representations.",
"- CrossNet (Xu et al., 2018): A variant of BiCond model, which adds an attention layer to capture the important words of inputs.",
"- TGA-Net (Allaway and McKeown, 2020): A BERT-based model that uses topic-grouped attention.",
"- BERTweet (Li et al., 2021a,b): A pre-trained language model that is fine-tuned by adding a linear layer to the hidden representation of the [CLS] token. The input is formulated as: [CLS] target [SEP] text."
],
"bbox": [
531,
497,
882,
917
],
"page_idx": 5
},
{
"type": "page_number",
"text": "10076",
"bbox": [
477,
927,
524,
940
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/1e379f68d16fa6f18148391c34728cc05e5a06897731c31f86ec42a5970a0bb9.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>SE</td><td>AM</td><td>C19</td><td>PS</td><td>Merged</td></tr><tr><td>BiLSTM</td><td>52.07</td><td>54.56</td><td>50.00</td><td>60.79</td><td>61.00</td></tr><tr><td>BERT</td><td>77.38</td><td>70.40</td><td>66.38</td><td>70.10</td><td>74.70</td></tr><tr><td>BERTweet</td><td>81.27</td><td>70.55</td><td>66.54</td><td>72.25</td><td>75.59</td></tr></table>",
"bbox": [
134,
80,
467,
148
],
"page_idx": 6
},
{
"type": "text",
"text": "6 Results and Analysis",
"text_level": 1,
"bbox": [
112,
212,
327,
229
],
"page_idx": 6
},
{
"type": "text",
"text": "In this section, we first present the results for target classification and target generation in §6.1. We then present the set of experiments performed on the intarget TSE task and show the results obtained by using the aforementioned baselines in §6.2. In the next section §6.3, we report the results for the zero-shot TSE task where targets of test set are not seen in the train set. Finally, we study the performance of multi-task models in §6.4. Each result is the average of three runs with different initializations.",
"bbox": [
112,
239,
487,
399
],
"page_idx": 6
},
{
"type": "text",
"text": "6.1 Target Classification and Target Generation",
"text_level": 1,
"bbox": [
112,
413,
411,
443
],
"page_idx": 6
},
{
"type": "text",
"text": "For target classification, BERT-based models consistently outperform the BiLSTM model by a wide margin and BERTweet further supersedes BERT across all datasets, as shown in Table 3. We can also observe that all models achieve relatively low performance on the COVID-19 dataset. One reason is that targets in this dataset are all closely related to COVID-19 and thus share a lot of topics / commonalities, which make the target classification task more challenging.",
"bbox": [
112,
450,
487,
611
],
"page_idx": 6
},
{
"type": "text",
"text": "For target generation, we report the performance of different pre-trained BART models in Table 4. We can observe that the overall performance of target generation is lower than the target classification task, which implies that the target generation task is more challenging. However, unlike the target classification models that can only be applied to in-target stance detection, target generation models can be directly extended to zero-shot stance detection that needs to detect the stance for targets unseen during training. In addition, the keyphrase generation models produce interesting generations as shown in Appendix B, that could be leveraged for other research purposes for stance detection such as data augmentation as part of future work.",
"bbox": [
112,
612,
487,
853
],
"page_idx": 6
},
{
"type": "text",
"text": "6.2 In-Target TSE",
"text_level": 1,
"bbox": [
112,
865,
275,
881
],
"page_idx": 6
},
{
"type": "text",
"text": "TSE with Target Classification In Table 5, we report the performance of our proposed two-stage",
"bbox": [
112,
887,
487,
917
],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/b89cb15b52ef020b6894ec6eb4a04c1001eff60499e37cd9a436c28a48ded304.jpg",
"table_caption": [
"Table 3: Performance comparisons of different models in micro-averaged $F_{1}$ on target classification."
],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>SE</td><td>AM</td><td>C19</td><td>PS</td><td>Merged</td></tr><tr><td>OpenKP</td><td>32.22</td><td>61.24</td><td>28.25</td><td>43.81</td><td>43.02</td></tr><tr><td>KPTimes</td><td>30.83</td><td>66.31</td><td>26.00</td><td>63.65</td><td>48.31</td></tr><tr><td>FullTextKP</td><td>28.06</td><td>64.67</td><td>29.38</td><td>44.83</td><td>43.81</td></tr></table>",
"bbox": [
526,
80,
867,
149
],
"page_idx": 6
},
{
"type": "text",
"text": "Table 4: Performance comparisons of different models in micro-averaged $F_{1}$ on target generation.",
"bbox": [
507,
158,
880,
187
],
"page_idx": 6
},
{
"type": "text",
"text": "framework with target classification. Stance detection baselines are trained in our proposed multitask setting on the merged dataset. Note that the BiLSTM, BERT and BERTweet in the first row of Table 5 are the target classification models. GT means that all ground-truth targets are used for stance detection (Stage 2). First, it can be seen that the overall performance of stance baselines is relatively low, which indicates that our proposed TSE task is very challenging. Second, we can observe that stance classifier BERTweet achieves the best performance across all target classification models, which is consistent with our observation in Table 8 that BERTweet performs best on in-target stance detection. Third, we can observe that each stance classifier achieves the best performance on target classifier BERTweet also due to its higher accuracy in target identification. Fourth, a significant performance drop can be seen between GT and each target classification model, which indicates that it is challenging to correctly identify the targets in our proposed framework.",
"bbox": [
507,
214,
884,
568
],
"page_idx": 6
},
{
"type": "text",
"text": "TSE with Target Generation Besides target classification, we report the performance of our proposed two-stage framework with target generation in Table 6. Stance detection baselines are trained in our proposed multi-task setting on the merged dataset. The OpenKP, KPTimes, and Full-TextKP of the first row indicate the train sets of the keyphrase generation models. First, we see that stance classifiers show lower performance in the target generation setting in overall than the target classification setting. One explanation is that keyphrases generated by the keyphrase generation models might be related to other topics contained in the sentence. However, in most datasets, one sentence is annotated with only one target and thus the generated keyphrases may be mapped to wrong targets.",
"bbox": [
507,
580,
882,
852
],
"page_idx": 6
},
{
|
| 1140 |
+
"type": "text",
|
| 1141 |
+
"text": "Second, we can observe that stance classifiers achieve higher performance in evaluation metric $F_{1}$ over accuracy in Table 6, which is different from the observation in Table 5. The reason is that target",
|
| 1142 |
+
"bbox": [
|
| 1143 |
+
507,
|
| 1144 |
+
854,
|
| 1145 |
+
882,
|
| 1146 |
+
917
|
| 1147 |
+
],
|
| 1148 |
+
"page_idx": 6
|
| 1149 |
+
},
|
| 1150 |
+
{
|
| 1151 |
+
"type": "page_number",
|
| 1152 |
+
"text": "10077",
|
| 1153 |
+
"bbox": [
|
| 1154 |
+
477,
|
| 1155 |
+
927,
|
| 1156 |
+
524,
|
| 1157 |
+
940
|
| 1158 |
+
],
|
| 1159 |
+
"page_idx": 6
|
| 1160 |
+
},
|
| 1161 |
+
{
|
| 1162 |
+
"type": "table",
|
| 1163 |
+
"img_path": "images/ddeac4d810739c03fa78e8de6679b20031a710d9023e489bc5903e68d4689e48.jpg",
|
| 1164 |
+
"table_caption": [],
|
| 1165 |
+
"table_footnote": [],
|
| 1166 |
+
"table_body": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"2\">BiLSTM</td><td colspan=\"2\">BERT</td><td colspan=\"2\">BERTweet</td><td colspan=\"2\">GT</td></tr><tr><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td></tr><tr><td>BiLSTM</td><td>35.38</td><td>44.64</td><td>44.81</td><td>53.15</td><td>45.46</td><td>53.61</td><td>65.23</td><td>71.16</td></tr><tr><td>BiCond</td><td>35.36</td><td>44.63</td><td>44.94</td><td>53.26</td><td>45.59</td><td>53.72</td><td>65.61</td><td>71.48</td></tr><tr><td>TAN</td><td>36.69</td><td>45.73</td><td>46.32</td><td>54.41</td><td>47.02</td><td>54.91</td><td>67.33</td><td>72.90</td></tr><tr><td>CrossNet</td><td>36.30</td><td>45.41</td><td>45.81</td><td>53.98</td><td>46.41</td><td>54.40</td><td>67.09</td><td>72.70</td></tr><tr><td>TGA-Net</td><td>39.23</td><td>47.83</td><td>49.46</td><td>57.02</td><td>50.31</td><td>57.65</td><td>71.73</td><td>76.55</td></tr><tr><td>BERTweet</td><td>41.35</td><td>49.59</td><td>52.24</td><td>59.33</td><td>53.30</td><td>60.13</td><td>75.28</td><td>79.49</td></tr></table>",
|
| 1167 |
+
"bbox": [
|
| 1168 |
+
243,
|
| 1169 |
+
80,
|
| 1170 |
+
752,
|
| 1171 |
+
211
|
| 1172 |
+
],
|
| 1173 |
+
"page_idx": 7
|
| 1174 |
+
},
|
| 1175 |
+
{
|
| 1176 |
+
"type": "table",
|
| 1177 |
+
"img_path": "images/6e924b2fcc9d7f34cd4ac5351e29f0caa3f12985a78ee784bb9bbe599d595d4f.jpg",
|
| 1178 |
+
"table_caption": [
|
| 1179 |
+
"Table 5: Performance comparisons of different models in $F_{1}$ and accuracy on the merged dataset and in-target TSE task with target classification setting. GT: ground-truth targets are used for stance detection (Stage 2), which is the upper bound of model performance in our proposed framework."
|
| 1180 |
+
],
|
| 1181 |
+
"table_footnote": [],
|
| 1182 |
+
"table_body": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"2\">OpenKP</td><td colspan=\"2\">KPTimes</td><td colspan=\"2\">FullTextKP</td><td colspan=\"2\">GT</td></tr><tr><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td></tr><tr><td>BiLSTM</td><td>28.40</td><td>26.50</td><td>32.69</td><td>30.06</td><td>29.24</td><td>26.86</td><td>65.23</td><td>71.16</td></tr><tr><td>BiCond</td><td>28.64</td><td>26.71</td><td>32.94</td><td>30.29</td><td>29.22</td><td>26.84</td><td>65.61</td><td>71.48</td></tr><tr><td>TAN</td><td>29.75</td><td>27.72</td><td>34.13</td><td>31.37</td><td>30.52</td><td>28.03</td><td>67.33</td><td>72.90</td></tr><tr><td>CrossNet</td><td>29.25</td><td>27.27</td><td>33.63</td><td>30.92</td><td>30.19</td><td>27.73</td><td>67.09</td><td>72.70</td></tr><tr><td>TGA-Net</td><td>31.89</td><td>29.65</td><td>36.76</td><td>33.77</td><td>32.86</td><td>30.16</td><td>71.73</td><td>76.55</td></tr><tr><td>BERTweet</td><td>34.02</td><td>31.57</td><td>38.92</td><td>35.74</td><td>35.16</td><td>32.26</td><td>75.28</td><td>79.49</td></tr></table>",
|
| 1183 |
+
"bbox": [
|
| 1184 |
+
243,
|
| 1185 |
+
269,
|
| 1186 |
+
752,
|
| 1187 |
+
398
|
| 1188 |
+
],
|
| 1189 |
+
"page_idx": 7
|
| 1190 |
+
},
|
| 1191 |
+
{
|
| 1192 |
+
"type": "text",
|
| 1193 |
+
"text": "Table 6: Performance comparisons of different models in $F_{1}$ and accuracy on the merged dataset and in-target TSE task with target generation setting. GT: ground-truth targets are used for stance detection (Stage 2), which is the upper bound of model performance in our proposed framework.",
"bbox": [112, 407, 882, 451],
"page_idx": 7
},
{
"type": "text",
"text": "classifiers show much better performance on the class Unrelated because samples of Unrelated are seen during training. However, in target generation, we predict the generated keyphrases as the Unrelated category with a threshold, which is inaccurate in some cases and introduces another source of error.",
"bbox": [112, 466, 487, 561],
"page_idx": 7
},
{
"type": "text",
"text": "Third, we can observe that BERTweet still achieves the best performance across all keyphrase generation models, indicating its effectiveness on in-target stance detection.",
"bbox": [112, 563, 487, 626],
"page_idx": 7
},
{
"type": "text",
"text": "Fourth, we can see that stance classifiers generally achieve better performance with the generation model trained on KPTimes, which is consistent with our observation in Table 4.",
"bbox": [112, 627, 487, 690],
"page_idx": 7
},
{
"type": "text",
"text": "Fifth, as before, we can observe a significant performance drop between GT and each target generation model (even larger than for target classification). This is not surprising, since target generation is even more challenging than target classification.",
"bbox": [112, 692, 487, 772],
"page_idx": 7
},
{
"type": "text",
"text": "6.3 Zero-Shot TSE",
"text_level": 1,
"bbox": [112, 785, 278, 799],
"page_idx": 7
},
{
"type": "text",
"text": "To investigate the ability of different baselines to address unseen targets, we further evaluate the performance of baselines on zero-shot stance detection, where targets of the test set are not seen in the train and validation sets. Table 7 shows performance comparisons of baseline models on the zero-shot TSE task in the target generation setting. Note that",
"bbox": [112, 806, 487, 917],
"page_idx": 7
},
{
"type": "text",
"text": "target classification cannot be directly applied to identify the target in zero-shot tasks because, given an input sentence, the predicted target of target classification must be one of the seen targets in the train set. We can observe that the zero-shot baseline TGA-Net achieves the best performance across all keyphrase generation models, indicating that TGA-Net shows a better ability to generalize to unseen targets with topic-grouped attention. In addition, stance classifiers show the best results with the generation model trained on KPTimes, which is consistent with the results in Table 4. It can be seen that even GT does not perform well on the zero-shot dataset, indicating the difficulty of our zero-shot task.",
"bbox": [507, 466, 884, 690],
"page_idx": 7
},
{
"type": "text",
"text": "6.4 Effectiveness of Multi-Task Learning on Stance Detection",
"text_level": 1,
"bbox": [507, 703, 870, 733],
"page_idx": 7
},
{
"type": "text",
"text": "As mentioned before, all results reported in §6.2 and §6.3 are based on multi-task models. To investigate the effectiveness of multi-task learning, we compare the performance of multi-task models with single-task models in Table 8. Each model is trained and validated on the merged set and tested on the individual datasets, where gold targets are used instead of generated targets for a better understanding of the experimental results. We can observe that all multi-task models consistently outperform single-task models on all datasets, demon",
"bbox": [507, 741, 884, 917],
"page_idx": 7
},
{
"type": "page_number",
"text": "10078",
"bbox": [477, 927, 524, 940],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/1f164c57013354a58845d83619981aaf14042c4c4389dcbdd86e101d688b25ca.jpg",
"table_caption": [
"Table 7: Performance comparisons of different models in $F_{1}$ and accuracy on the zero-shot dataset and zero-shot TSE task with target generation setting. GT: ground-truth targets are used for stance detection (Stage 2), which is the upper bound of model performance in our proposed framework."
],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"2\">OpenKP</td><td colspan=\"2\">KPTimes</td><td colspan=\"2\">FullTextKP</td><td colspan=\"2\">GT</td></tr><tr><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td></tr><tr><td>BiLSTM</td><td>12.77</td><td>11.81</td><td>13.15</td><td>12.10</td><td>12.95</td><td>11.91</td><td>27.42</td><td>39.52</td></tr><tr><td>BiCond</td><td>13.60</td><td>12.57</td><td>14.31</td><td>13.17</td><td>13.77</td><td>12.66</td><td>28.98</td><td>40.81</td></tr><tr><td>TAN</td><td>13.30</td><td>12.31</td><td>13.29</td><td>12.23</td><td>13.53</td><td>12.44</td><td>27.51</td><td>39.59</td></tr><tr><td>CrossNet</td><td>14.38</td><td>13.29</td><td>14.89</td><td>13.69</td><td>14.39</td><td>13.23</td><td>30.73</td><td>42.28</td></tr><tr><td>TGA-Net</td><td>21.47</td><td>19.76</td><td>22.83</td><td>20.95</td><td>21.36</td><td>19.61</td><td>40.94</td><td>50.79</td></tr><tr><td>BERTweet</td><td>19.11</td><td>17.60</td><td>20.45</td><td>18.78</td><td>20.11</td><td>18.46</td><td>38.51</td><td>48.76</td></tr></table>",
"bbox": [243, 80, 752, 210],
"page_idx": 8
},
{
"type": "table",
"img_path": "images/908339ced63189ee408d97e2f6371cc267d6ed3b9ea8ba57f3c5d6d790c3e8b6.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>SE</td><td>AM</td><td>C19</td><td>PS</td><td>Fmac</td><td>Fmic</td></tr><tr><td colspan=\"7\">Single-Task</td></tr><tr><td>BiLSTM</td><td>53.05</td><td>45.70</td><td>53.34</td><td>73.62</td><td>56.43</td><td>58.75</td></tr><tr><td>BiCond</td><td>52.63</td><td>46.96</td><td>58.73</td><td>74.56</td><td>58.22</td><td>60.14</td></tr><tr><td>TAN</td><td>55.26</td><td>50.85</td><td>56.83</td><td>74.67</td><td>59.40</td><td>61.60</td></tr><tr><td>CrossNet</td><td>61.06</td><td>50.79</td><td>65.89</td><td>75.08</td><td>63.21</td><td>63.03</td></tr><tr><td>TGA-Net</td><td>63.74</td><td>58.71</td><td>64.70</td><td>77.70</td><td>66.21</td><td>67.56</td></tr><tr><td>BERTweet</td><td>68.03</td><td>64.31</td><td>72.99</td><td>81.47</td><td>71.70</td><td>72.26</td></tr><tr><td colspan=\"7\">Multi-Task</td></tr><tr><td>BiLSTM</td><td>57.03</td><td>47.45</td><td>59.35</td><td>74.22</td><td>59.51</td><td>60.63</td></tr><tr><td>BiCond</td><td>56.22</td><td>47.11</td><td>61.69</td><td>75.29</td><td>60.08</td><td>60.98</td></tr><tr><td>TAN</td><td>58.54</td><td>52.13</td><td>60.31</td><td>76.29</td><td>61.82</td><td>63.32</td></tr><tr><td>CrossNet</td><td>61.41</td><td>51.30</td><td>67.65</td><td>76.45</td><td>64.20</td><td>63.89</td></tr><tr><td>TGA-Net</td><td>64.05</td><td>59.26</td><td>66.77</td><td>78.67</td><td>67.19</td><td>68.12</td></tr><tr><td>BERTweet</td><td>70.62</td><td>64.85</td><td>74.42</td><td>81.67</td><td>72.89</td><td>73.01</td></tr></table>",
"bbox": [117, 278, 485, 514],
"page_idx": 8
},
{
"type": "text",
"text": "Table 8: Performance comparisons of different models on in-target stance detection. We report $F_{avg}$, the macro-average of $F_{1}$ ($F_{mac}$), and the micro-average of $F_{1}$ ($F_{mic}$).",
"bbox": [112, 524, 489, 569],
"page_idx": 8
},
{
"type": "text",
"text": "strating the effectiveness of multi-task learning. Specifically, the average improvements of multi-task models over single-task models are $2.35\\%$, $0.80\\%$, $2.95\\%$ and $0.92\\%$ in $F_{avg}$ on the SemEval-2016, AM, COVID-19, and P-Stance datasets, respectively. In addition, we can see that multi-task models achieve larger improvements on the SemEval-2016 and COVID-19 datasets. One possible reason is that there are fewer train samples in the SemEval-2016 and COVID-19 datasets than in the rest, and thus the auxiliary task of identifying targets can help models better capture target-related features.",
"bbox": [112, 586, 489, 778],
"page_idx": 8
},
{
"type": "text",
"text": "7 Conclusion",
"text_level": 1,
"bbox": [112, 794, 247, 809],
"page_idx": 8
},
{
"type": "text",
"text": "In this paper, we introduce a new Target-Stance Extraction (TSE) task to identify both the target and the corresponding stance in the wild. Different from the original stance detection task, which aims only to detect the stance given the target and text, our proposed task includes both target identification and stance de",
"bbox": [112, 822, 489, 917],
"page_idx": 8
},
{
"type": "text",
"text": "tection, which makes it a more challenging task. We benchmark the task by proposing a two-stage framework that first identifies the target from a text and then detects the stance toward the predicted target. Our two-stage framework can be applied not only to in-target stance detection but also to zero-shot stance detection. In addition, we propose a multi-task approach that takes target prediction as an auxiliary task to improve the performance of stance detection.",
"bbox": [507, 281, 884, 441],
"page_idx": 8
},
{
"type": "text",
"text": "It is worth noting that the primary goal of this paper is the introduction of a new stance detection task. The proposed framework provides a good starting point and leaves much room for further improvement. Future work includes improving the target identification task, e.g., with a better mapping strategy.",
"bbox": [507, 445, 884, 557],
"page_idx": 8
},
{
"type": "text",
"text": "8 Limitations",
"text_level": 1,
"bbox": [507, 580, 643, 595],
"page_idx": 8
},
{
"type": "text",
"text": "We present a novel (Target, Stance) pair Extraction task (TSE) for understanding stances toward topics of interest in the wild. There are two potential limitations to our work. First, the mapping module requires a predefined list of targets. Without the predefined list of targets, it is very difficult to assess the correctness of stance labels for the predicted targets in the absence of gold labels. On the other hand, the predefined list of targets makes the entire system end-to-end and automatically evaluable. Second, the mapping process might become too slow if the number of targets of interest grows large. Future work includes addressing these limitations and extracting (target, stance) pairs in a unified setting. However, the primary contribution of this work is not to present a fully robust pipeline model but to present a novel, interesting, and challenging task to the community working on stance detection.",
"bbox": [507, 613, 884, 917],
"page_idx": 8
},
{
"type": "page_number",
"text": "10079",
"bbox": [477, 927, 524, 940],
"page_idx": 8
},
{
"type": "text",
"text": "9 Ethical Considerations",
"text_level": 1,
"bbox": [112, 83, 346, 98],
"page_idx": 9
},
{
"type": "text",
"text": "Beyond the proposed two-step framework that helps collect stances in the wild, it is very important to consider the ethical implications of stance detection systems. Since stance detection systems can automatically collect and aggregate the topical stance toward a specific target, these systems may have a significant impact on decision-making. Algorithms are not perfect, and thus a potential harm is that these systems may make incorrect predictions and further mislead decision-making. Researchers should be aware of potential harms from the misuse of stance detection systems, and should respect people's privacy during data collection.",
"bbox": [112, 108, 490, 317],
"page_idx": 9
},
{
"type": "text",
"text": "Acknowledgments",
"text_level": 1,
"bbox": [114, 328, 278, 344],
"page_idx": 9
},
{
"type": "text",
"text": "We thank the National Science Foundation for support from grants IIS-1912887, IIS-2107487, and ITE-2137846, which supported the research and the computation in this study. We also thank our reviewers for their insightful feedback and comments.",
"bbox": [112, 351, 489, 447],
"page_idx": 9
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [114, 474, 213, 489],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Abeer ALDayel and Walid Magdy. 2021. Stance detection on social media: State of the art and trends. International Journal on Information Processing and Management, 58(4).",
"Emily Allaway and Kathleen McKeown. 2020. Zero-shot stance detection: A dataset and model using generalized topic representations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8913-8931.",
"Rabah Alzaidy, Cornelia Caragea, and C. Lee Giles. 2019. Bi-LSTM-CRF sequence labeling for keyphrase extraction from scholarly documents. In The World Wide Web Conference, WWW '19, pages 2551-2557, New York, NY, USA. Association for Computing Machinery.",
"Isabelle Augenstein, Tim Rocktäschel, Andreas Vlachos, and Kalina Bontcheva. 2016. Stance detection with bidirectional conditional encoding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 876-885.",
"Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.",
"Cornelia Caragea, Florin Adrian Bulgarov, Andreea Godea, and Sujatha Das Gollapalli. 2014. Citation-enhanced keyphrase extraction from research papers: A supervised approach. In Proceedings of the"
],
"bbox": [115, 495, 489, 917],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1435-1446, Doha, Qatar. Association for Computational Linguistics.",
"Costanza Conforti, Jakob Berndt, Mohammad Taher Pilehvar, Chryssi Giannitsarou, Flavio Toxvaerd, and Nigel Collier. 2020. Will-they-won't-they: A very large dataset for stance detection on Twitter. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1715-1724.",
"Kareem Darwish, Walid Magdy, and Tahar Zanouda. 2017. Trump vs. Hillary: What went viral during the 2016 US presidential election. In Social Informatics, pages 143-161.",
"Leon Derczynski, Kalina Bontcheva, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Arkaitz Zubiaga. 2017. SemEval-2017 task 8: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 69-76.",
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"Jiachen Du, Ruifeng Xu, Yulan He, and Lin Gui. 2017. Stance classification with target-specific neural attention networks. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 3988-3994.",
"Corina Florescu and Cornelia Caragea. 2017. PositionRank: An unsupervised approach to keyphrase extraction from scholarly documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1115, Vancouver, Canada. Association for Computational Linguistics.",
"Ygor Gallina, Florian Boudin, and Beatrice Daille. 2019. KPTimes: A large-scale dataset for keyphrase generation on news documents. In Proceedings of the 12th International Conference on Natural Language Generation, pages 130-135.",
"Krishna Garg, Jishnu Ray Chowdhury, and Cornelia Caragea. 2022. Keyphrase generation beyond the boundaries of title and abstract. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5809-5821, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.",
"Akash Gautam, Puneet Mathur, Rakesh Gosangi, Debanjan Mahata, Ramit Sawhney, and Rajiv Ratn Shah. 2020. #MeTooMA: Multi-aspect annotations"
],
"bbox": [510, 85, 884, 917],
"page_idx": 9
},
{
"type": "page_number",
"text": "10080",
"bbox": [477, 927, 524, 940],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"of tweets related to the MeToo movement. Proceedings of the International AAAI Conference on Web and Social Media, 14(1):209-216.",
"Kyle Glandt, Sarthak Khanal, Yingjie Li, Doina Caragea, and Cornelia Caragea. 2021. Stance detection in COVID-19 tweets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1596-1611.",
"Genevieve Gorrell, Elena Kochkina, Maria Liakata, Ahmet Aker, Arkaitz Zubiaga, Kalina Bontcheva, and Leon Derczynski. 2019. SemEval-2019 task 7: RumourEval, determining rumour veracity and support for rumours. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 845-854.",
"Lara Grimminger and Roman Klinger. 2021. Hate towards the political opponent: A Twitter corpus study of the 2020 US elections on the basis of offensive speech and stance detection. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 171-180.",
"Momchil Hardalov, Arnav Arora, Preslav Nakov, and Isabelle Augenstein. 2021. Cross-domain label-adaptive stance detection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9011-9028.",
"Momchil Hardalov, Arnav Arora, Preslav Nakov, and Isabelle Augenstein. 2022. A survey on stance detection for mis- and disinformation identification. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1259-1277.",
"Kazi Saidul Hasan and Vincent Ng. 2014. Why are you taking this stance? Identifying and classifying reasons in ideological debates. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 751-762.",
"Muhammad Naeem Khan, Muhammad Azeem Ashraf, Donald Seinen, Kashif Ullah Khan, and Rizwan Ahmed Laar. 2021. Social media for knowledge acquisition and dissemination: The impact of the COVID-19 pandemic on collaborative learning driven social media adoption. Frontiers in Psychology, 12.",
"Dilek Kucuk and Fazli Can. 2020. Stance detection: A survey. ACM Comput. Surv., 53(1):1-37.",
"Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880."
],
"bbox": [115, 85, 489, 917],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Yingjie Li and Cornelia Caragea. 2019. Multi-task stance detection with sentiment and stance lexicons. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6298-6304.",
"Yingjie Li and Cornelia Caragea. 2021a. A multi-task learning framework for multi-target stance detection. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2320-2326.",
"Yingjie Li and Cornelia Caragea. 2021b. Target-aware data augmentation for stance detection. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1850-1860.",
"Yingjie Li, Tiberiu Sosea, Aditya Sawant, Ajith Jayaraman Nair, Diana Inkpen, and Cornelia Caragea. 2021a. P-Stance: A large dataset for stance detection in political domain. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2355-2365.",
"Yingjie Li, Chenye Zhao, and Cornelia Caragea. 2021b. Improving stance detection with multi-dataset learning and knowledge distillation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6332-6345.",
"Yingjie Li, Chenye Zhao, and Cornelia Caragea. 2023. TTS: A target-based teacher-student framework for zero-shot stance detection. In Proceedings of the ACM Web Conference 2023, pages 1500-1509.",
"Bin Liang, Yonghao Fu, Lin Gui, Min Yang, Jiachen Du, Yulan He, and Ruifeng Xu. 2021. Target-adaptive graph for cross-target stance detection. In Proceedings of the Web Conference 2021, pages 3453-3464.",
"Bin Liang, Qinglin Zhu, Xiang Li, Min Yang, Lin Gui, Yulan He, and Ruifeng Xu. 2022. JointCL: A joint contrastive learning framework for zero-shot stance detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 81-91.",
"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.",
"Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017. Deep keyphrase generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 582-592, Vancouver, Canada. Association for Computational Linguistics.",
"Lin Miao, Mark Last, and Marina Litvak. 2020. Twitter data augmentation for monitoring public opinion on COVID-19 intervention measures. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020."
],
"bbox": [510, 85, 882, 917],
"page_idx": 10
},
{
"type": "page_number",
"text": "10081",
"bbox": [477, 928, 522, 940],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016a. A dataset for detecting stance in tweets. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3945-3952.",
"Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016b. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31-41.",
"Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model for English tweets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 9-14.",
"Krutarth Patel and Cornelia Caragea. 2019. Exploring word embeddings in CRF-based keyphrase extraction from research papers. In Proceedings of the 10th International Conference on Knowledge Capture, K-CAP '19, pages 37-44, New York, NY, USA. Association for Computing Machinery.",
"Delip Rao and Dean Pomerleau. 2017. Fake news challenge.",
"Jishnu Ray Chowdhury, Cornelia Caragea, and Doina Caragea. 2019. Keyphrase extraction from disaster-related tweets. In The World Wide Web Conference, WWW '19, pages 1555-1566, New York, NY, USA. Association for Computing Machinery.",
"Jishnu Ray Chowdhury, Seo Yeon Park, Tuhin Kundu, and Cornelia Caragea. 2022. KPDROP: Improving absent keyphrase generation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4853-4870, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.",
"Benjamin Schiller, Johannes Daxenberger, and Iryna Gurevych. 2021. Stance detection benchmark: How robust is your stance detection? KI - Künstliche Intelligenz.",
"Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.",
"Parinaz Sobhani, Diana Inkpen, and Xiaodan Zhu. 2017. A dataset for multi-target stance detection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 551-557.",
"Swapna Somasundaran and Janyce Wiebe. 2010. Recognizing stances in ideological on-line debates. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 116-124."
],
"bbox": [115, 85, 489, 917],
"page_idx": 11
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Christian Stab, Tristan Miller, Benjamin Schiller, Pranav Rai, and Iryna Gurevych. 2018. Cross-topic argument mining from heterogeneous sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3664-3674.",
"Lucas Sterckx, Cornelia Caragea, Thomas Demeester, and Chris Develder. 2016. Supervised keyphrase extraction as positive unlabeled learning. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1924-1929, Austin, Texas. Association for Computational Linguistics.",
"Jannis Vamvas and Rico Sennrich. 2020. X-Stance: A multilingual multi-target dataset for stance detection. In Proceedings of the 5th Swiss Text Analytics Conference (SwissText) & 16th Conference on Natural Language Processing (KONVENS).",
"Wikipedia. Wikipedia: list of controversial issues. [Online; accessed 10-December-2012].",
"Lee Xiong, Chuan Hu, Chenyan Xiong, Daniel Campos, and Arnold Overwijk. 2019. Open domain web keyphrase extraction beyond language modeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5174-5183.",
"Chang Xu, Cécile Paris, Surya Nepal, and Ross Sparks. 2018. Cross-target stance classification with self-attention networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 778-783.",
"Jiacheng Ye, Tao Gui, Yichao Luo, Yige Xu, and Qi Zhang. 2021. One2Set: Generating diverse keyphrases as a set. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4598-4608.",
"Xingdi Yuan, Tong Wang, Rui Meng, Khushboo Thaker, Peter Brusilovsky, Daqing He, and Adam Trischler. 2020. One size does not fit all: Generating and evaluating variable number of keyphrases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7961-7975.",
"Bowen Zhang, Min Yang, Xutao Li, Yunming Ye, Xiaofei Xu, and Kuai Dai. 2020. Enhancing cross-target stance detection with transferable semantic emotion knowledge. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3188-3197."
],
"bbox": [510, 85, 882, 848],
"page_idx": 11
},
{
"type": "text",
"text": "A Curation of Unrelated target samples",
"text_level": 1,
"bbox": [510, 862, 865, 878],
"page_idx": 11
},
{
"type": "text",
"text": "We retrieved a collection of tweets using the Twitter API for some controversial topics such as Black",
"bbox": [510, 888, 882, 917],
"page_idx": 11
},
{
"type": "page_number",
"text": "10082",
"bbox": [477, 927, 524, 940],
"page_idx": 11
},
{
"type": "image",
"img_path": "images/2c32350a8efc0ce714c5484084d3fd7d73366f4ae14f4e7a2282179e74dcc2de.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [115, 80, 287, 202],
"page_idx": 12
},
{
"type": "image",
"img_path": "images/9094399ad6e77716b11a7637bc39b984a3cfd8c57b454db4d64a558ae0306ced.jpg",
"image_caption": [
"(a) Abortion",
"(c) Feminist Movement",
"Figure 3: Wordclouds of generated keyphrases for different targets of SemEval-2016 dataset."
],
"image_footnote": [],
"bbox": [115, 221, 287, 341],
"page_idx": 12
},
{
"type": "image",
"img_path": "images/4f54de548f97cd1796b7b3b321f09b08e9d0d32c4cf415beb47e46bb41c6cbe6.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [314, 80, 485, 202],
"page_idx": 12
},
{
"type": "image",
"img_path": "images/2cd8a5a06ec9c00c506b525a04bb3d53ab38c60173eb592153ff7440df989aba.jpg",
"image_caption": [
"(b) Atheism",
"(d) Hillary Clinton"
],
"image_footnote": [],
"bbox": [314, 219, 485, 341],
"page_idx": 12
},
|
| 1753 |
+
{
  "type": "text",
  "text": "Lives Matter, Communism, Conservatism, Morality, etc. The controversial topics were collected from Wikipedia. We manually removed the topics that are related to the targets of our merged and zero-shot datasets. Further, we performed the following preprocessing steps: (1) We removed the duplicates and retweets. (2) We removed the topics that appear in less than 100 tweets. (3) We removed the tweets that contain any explicit mentions of the targets of our merged and zero-shot datasets. (4) We created the train, validation and test sets following an 80/10/10 split for each topic. Thus, we curated a filtered collection for Unrelated samples. Note that Unrelated samples used in the merged and zero-shot datasets are not overlapped and examples of Unrelated category are shown in Table 9.",
  "bbox": [112, 426, 489, 697],
  "page_idx": 12
},
{
  "type": "table",
  "img_path": "images/f9b20a8c0584e9f9a96dbc422a7df41c1693b2e2ac68637cf32425eff3c4182d.jpg",
  "table_caption": [],
  "table_footnote": [],
  "table_body": "<table><tr><td>Topic</td><td>Tweet</td></tr><tr><td>Black Lives Matter</td><td>Black Lives Matter Proclaims Thanksgiving Is A Holiday Of Colonization On Stolen Land</td></tr><tr><td>Communism</td><td>We are told that communism causes famines. But it is actually capitalism, colonialism & imperialism that cause food insecurity and mass hunger.</td></tr><tr><td>Conservatism</td><td>Conservatism isn’t about freedoms it’s all about control.</td></tr><tr><td>Morality</td><td>To place morality above compassion or law before love is to nullify nature and scorn nurture. Love knows no wrong.</td></tr></table>",
  "bbox": [115, 711, 495, 888],
  "page_idx": 12
},
{
  "type": "text",
  "text": "Table 9: Examples from Unrelated category samples.",
  "bbox": [115, 898, 482, 912],
  "page_idx": 12
},
{
  "type": "text",
  "text": "B Generated Keyphrases in Target Generation Task",
  "text_level": 1,
  "bbox": [509, 83, 826, 115],
  "page_idx": 12
},
{
  "type": "text",
  "text": "As discussed in §6.1, target generation models produce worse performance than target classification models in target identification task. The reason could be that the generated keyphrases might be related to other topics contained in the sentence, which are not correctly mapped to the golden targets in target identification task.",
  "bbox": [507, 126, 884, 237],
  "page_idx": 12
},
{
  "type": "text",
  "text": "In Figure 3, we show the wordclouds for the generated keyphrases using our keyphrase generation models as described in §4.1 and §6.1. For instance, for the ground truth label Atheism, the generated keyphrases are spirituality, religion, faith, belief, philosophy, etc. We can observe that these generated keyphrases are semantically related to the ground truth target Atheism and these generated keyphrases could further be used for other research purposes such as data augmentation of stance detection and multi-target stance annotation.",
  "bbox": [507, 239, 884, 414],
  "page_idx": 12
},
{
  "type": "page_number",
  "text": "10083",
  "bbox": [477, 927, 524, 940],
  "page_idx": 12
},
{
  "type": "text",
  "text": "A For every submission:",
  "bbox": [115, 107, 322, 122],
  "page_idx": 13
},
{
  "type": "list",
  "sub_type": "text",
  "list_items": [
    "A1. Did you describe the limitations of your work? Left blank.",
    "A2. Did you discuss any potential risks of your work? 9",
    "A3. Do the abstract and introduction summarize the paper's main claims? Left blank.",
    "A4. Have you used AI writing assistants when working on this paper? Left blank."
  ],
  "bbox": [129, 126, 695, 287],
  "page_idx": 13
},
{
  "type": "text",
  "text": "B Did you use or create scientific artifacts?",
  "bbox": [114, 299, 487, 315],
  "page_idx": 13
},
{
  "type": "text",
  "text": "3.2",
  "bbox": [134, 321, 159, 334],
  "page_idx": 13
},
{
  "type": "list",
  "sub_type": "text",
  "list_items": [
    "B1. Did you cite the creators of artifacts you used? 3.2",
    "B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank.",
    "B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank.",
    "B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank.",
    "B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank.",
    "B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 3.2"
  ],
  "bbox": [127, 346, 880, 751],
  "page_idx": 13
},
{
  "type": "text",
  "text": "C Did you run computational experiments?",
  "bbox": [114, 764, 492, 781],
  "page_idx": 13
},
{
  "type": "text",
  "text": "8",
  "bbox": [134, 788, 146, 799],
  "page_idx": 13
},
{
  "type": "text",
  "text": "C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?",
  "bbox": [129, 812, 880, 844],
  "page_idx": 13
},
{
  "type": "text",
  "text": "8",
  "bbox": [152, 847, 164, 857],
  "page_idx": 13
},
{
  "type": "header",
  "text": "ACL 2023 Responsible NLP Checklist",
  "bbox": [132, 84, 433, 99],
  "page_idx": 13
},
{
  "type": "page_footnote",
  "text": "The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.",
  "bbox": [112, 866, 877, 889],
  "page_idx": 13
},
{
  "type": "page_number",
  "text": "10084",
  "bbox": [477, 927, 524, 940],
  "page_idx": 13
},
{
  "type": "list",
  "sub_type": "text",
  "list_items": [
    "C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? we use the default parameters without hyperparameter tuning",
    "C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 6",
    "C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix"
  ],
  "bbox": [129, 84, 880, 282],
  "page_idx": 14
},
{
  "type": "text",
  "text": "D Did you use human annotators (e.g., crowdworkers) or research with human participants? Left blank.",
  "bbox": [112, 293, 877, 330],
  "page_idx": 14
},
{
  "type": "list",
  "sub_type": "text",
  "list_items": [
    "D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank.",
    "D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank.",
    "D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank.",
    "D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank.",
    "D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank."
  ],
  "bbox": [127, 340, 880, 640],
  "page_idx": 14
},
{
  "type": "page_number",
  "text": "10085",
  "bbox": [477, 927, 524, 940],
  "page_idx": 14
}
]
2023/A New Direction in Stance Detection_ Target-Stance Extraction in the Wild/05054386-a5e0-4465-a9e7-3cb95fe5c25f_model.json
ADDED
The diff for this file is too large to render. See raw diff.

2023/A New Direction in Stance Detection_ Target-Stance Extraction in the Wild/05054386-a5e0-4465-a9e7-3cb95fe5c25f_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d58780ab49a57eeec7afce210c683ae50cf3fa7f610d888a8b9ef5e42ecd5a0c
size 1978308
2023/A New Direction in Stance Detection_ Target-Stance Extraction in the Wild/full.md
ADDED
@@ -0,0 +1,384 @@
# A New Direction in Stance Detection: Target-Stance Extraction in the Wild

# Yingjie Li* Krishna Garg* Cornelia Caragea

University of Illinois at Chicago {yli300,kgarg8,cornelia}@uic.edu

# Abstract

Stance detection aims to detect the stance toward a corresponding target. Existing works have achieved promising progress on stance detection tasks in which the goal is to predict the stance given both a target and a text. However, they all work under the assumption that the target is known in advance, which is often not the case in the wild. Given a text from social media platforms, the target information is often unknown due to implicit mentions in the source text and it is infeasible to have manual target annotations at a large scale. Therefore, in this paper, we propose a new task Target-Stance Extraction (TSE) that aims to extract the (target, stance) pair from the text. We benchmark the task by proposing a two-stage framework that first identifies the relevant target in the text and then detects the stance given the predicted target and text. Specifically, we first propose two different settings: Target Classification and Target Generation, to identify the potential target from a given text. Then we propose a multi-task approach that takes target prediction as the auxiliary task to detect the stance toward the predicted target. We evaluate the proposed framework on both in-target stance detection in which the test target is always seen in the training stage and zero-shot stance detection that needs to detect the stance for the unseen target during the inference stage. The new TSE task can facilitate future research in the field of stance detection. We publicly release our code.<sup>1</sup>

# 1 Introduction

Stance detection aims to automatically identify people's attitude/viewpoint (e.g., Favor or Against) expressed in texts toward a target that is generally a controversial topic or political figure (ALDayel and Magdy, 2021; Kucuk and Can, 2020; Hardalov et al., 2021). For example, the tweet in Figure 1 expresses a stance of "Against" toward the target "Atheism."

Both authors contributed equally to this research.
<sup>1</sup>https://github.com/chuchun8/TSE

![](images/6627c1d7a35671ae87d1a4ce0f250db4db0f2f55ba3ebec258a09b35e5a81d10.jpg)
Figure 1: The comparison between the proposed Target-Stance Extraction (TSE) task and original stance detection task.
Social media platforms like Twitter, Facebook and other debate forums have become an integral way of opinion dissemination these days (Khan et al., 2021). The peculiar characteristics of these platforms are that the information is usually scattered across texts and the opinionated text could be expressed toward target entities in an implicit way. Existing methods have achieved promising performance on in-target stance detection in which the same targets are seen in both train and test sets (Mohammad et al., 2016a; Sobhani et al., 2017; Li and Caragea, 2019, 2021a) and cross-target stance detection that aims to transfer the knowledge from a source target to a destination target (Augenstein et al., 2016; Xu et al., 2018; Zhang et al., 2020). However, almost all previous methods work under the assumption that the target is known or manually identified, which is often not the case in the wild. In practice, the target is unknown given a text and it is usually implicitly mentioned in the text, as can be seen from the example shown in Figure 1. Therefore, instead of detecting the stance given both the target and text, we propose a more challenging task Target-Stance Extraction (TSE) in the context of stance detection that aims to extract the (target, stance) pair from the text. The new TSE task is more challenging because it includes both target identification and stance detection.

To tackle this task, we propose a two-step framework that first identifies the relevant target in the text and then detects the stance given the predicted target and the text, as shown in Figure 1. In the first stage, we propose two different settings to identify the target discussed in a text: (1) Target Classification, where we train a text classifier (Schuster and Paliwal, 1997; Devlin et al., 2019; Nguyen et al., 2020) to predict the target as one of the pre-defined targets, and (2) Target Generation, where we leverage a BART (Lewis et al., 2020) model that is pretrained on a keyphrase generation dataset (Xiong et al., 2019; Gallina et al., 2019; Garg et al., 2022) to generate keyphrases (e.g., "Christianity" in Figure 1), and then map them to one of the pre-defined targets (e.g., "Atheism"). In the second stage, we propose a multi-task framework that takes the target prediction as the auxiliary task for stance detection. We expect the stance detection model to better capture the target-related features and to develop a better understanding of the text itself with the auxiliary task.
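A multi-task objective of this kind is typically a weighted sum of two cross-entropy losses, one per classification head over a shared encoder. The numpy sketch below is a minimal illustration, not the paper's implementation; the logits, labels, and the weight `lam` are all invented for the example:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # mean negative log-likelihood of the gold labels
    probs = softmax(logits)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def multitask_loss(stance_logits, stance_labels,
                   target_logits, target_labels, lam=0.5):
    # main task: stance detection; auxiliary task: target prediction,
    # combined as L = L_stance + lam * L_target
    return (cross_entropy(stance_logits, stance_labels)
            + lam * cross_entropy(target_logits, target_labels))

# toy batch: 2 examples, 3 stance classes, 4 target classes
stance_logits = np.array([[2.0, 0.1, -1.0], [0.0, 1.5, 0.2]])
target_logits = np.array([[1.0, 0.0, 0.0, -2.0], [0.1, 0.2, 3.0, 0.0]])
loss = multitask_loss(stance_logits, np.array([0, 1]),
                      target_logits, np.array([0, 2]))
```

Setting `lam=0` recovers plain single-task stance detection, which makes it easy to ablate the auxiliary head.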
Our proposed two-step framework can not only be applied to in-target stance detection, but also to zero-shot stance detection in which targets of test examples are not seen in the train set. We evaluate the proposed framework on the combined set of four stance datasets (Mohammad et al., 2016a; Stab et al., 2018; Glandt et al., 2021; Li et al., 2021a) for in-target stance detection. Further, we extend our framework to zero-shot stance detection and test it on six targets of diverse domains (Somasundaran and Wiebe, 2010; Mohammad et al., 2016a; Conforti et al., 2020; Miao et al., 2020; Gautam et al., 2020). It is worth noting that our primary goal is not to present a new state-of-the-art model, but to deliver a new and more challenging task to stimulate research on stance detection.

We summarize our contributions as follows:

- We propose a new Target-Stance Extraction (TSE) task, aimed to extract the pair of target and stance from each sentence.
- We benchmark the task by proposing a two-step framework that can be applied to both in-target and zero-shot stance detection.
- We propose a multi-task framework that uses the target prediction as an auxiliary task to improve the performance of stance detection.

# 2 Related Work
Stance Detection The stance detection task aims to detect the stance toward a specific target (Mohammad et al., 2016a; Schiller et al., 2021; Hardalov et al., 2022). The target could be defined in a variety of forms: a controversial figure (Darwish et al., 2017; Grimminger and Klinger, 2021; Li et al., 2021a), a hot topic such as gun control (Hasan and Ng, 2014; Mohammad et al., 2016a; Stab et al., 2018; Vamvas and Sennrich, 2020; Conforti et al., 2020; Glandt et al., 2021) or a claim (Rao and Pomerleau, 2017; Derczynski et al., 2017; Gorrell et al., 2019). In previous works, the target is usually manually provided along with the input sentence to a stance classifier. However, given a post on social media, we may not have a direct clue about the target information due to implicit mentions, and it is infeasible to do large-scale target annotations by humans. Motivated by this observation, we propose a new task named Target-Stance Extraction (TSE) that aims to extract both the target and the corresponding stance from a given text.

Besides in-target stance detection (Mohammad et al., 2016a; Li and Caragea, 2021b) in which the test target is seen in the training stage, cross-target stance detection (Augenstein et al., 2016; Xu et al., 2018; Zhang et al., 2020; Liang et al., 2021) and zero-shot stance detection (Allaway and McKeown, 2020; Liang et al., 2022; Li et al., 2023) have also drawn a lot of attention recently. In cross-target stance detection, a classifier is adapted from a different but related target to a destination target in a one-to-one way, whereas in zero-shot stance detection we need to detect the stance for a variety of unseen targets at the inference stage. In this paper, we evaluate our proposed framework in both in-target and zero-shot settings.

Keyphrase Generation / Extraction Keyphrase generation or extraction is the task where given a source document (e.g., a scientific article, newspaper article, or webpage), we predict the keyphrases that best describe or summarize that document (Garg et al., 2022; Ray Chowdhury et al., 2022, 2019; Alzaidy et al., 2019; Patel and Caragea, 2019; Meng et al., 2017; Yuan et al., 2020; Ye et al., 2021; Florescu and Caragea, 2017; Sterckx et al., 2016; Caragea et al., 2014). In the context of stance detection, we can use keyphrase generation models to generate keyphrases that are target-related words given an input text. To our knowledge, the target-related keyphrase generation task has not been explored before for stance detection.

The most popular paradigm for the keyphrase generation task is the One2Seq encoder-decoder framework (Meng et al., 2017) where given a document, we generate a sequence of [SEP]-separated keyphrases in an auto-regressive way. We use the pre-trained BART model (Lewis et al., 2020) fine-tuned separately on three keyphrase generation datasets, i.e., OpenKP (Xiong et al., 2019), KPTimes (Gallina et al., 2019), and FullTextKP (Garg et al., 2022) and generate keyphrases using the One2Seq model.
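Because a One2Seq decoder emits all keyphrases as one [SEP]-separated string, a small post-processing step recovers a clean keyphrase list. A minimal sketch; the normalization choices (whitespace collapsing, lowercasing, de-duplication) are assumptions for illustration, not taken from the paper:

```python
def parse_one2seq(output, sep="[SEP]"):
    # split the decoded sequence on the separator token, normalize
    # whitespace/case, and drop duplicates while preserving order
    seen, phrases = set(), []
    for chunk in output.split(sep):
        phrase = " ".join(chunk.split()).lower()
        if phrase and phrase not in seen:
            seen.add(phrase)
            phrases.append(phrase)
    return phrases

print(parse_one2seq("Christianity [SEP] religion [SEP]  Christianity"))
# → ['christianity', 'religion']
```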
# 3 Task and Datasets

# 3.1 Task Definition

Let $D_{tr} = \{x_i, t_i, y_i\}_{i=1}^n$ be a train set where $x_i$ is a sequence of words, $t_i$ is the target holding the stance and $y_i$ is the stance label. In the original stance detection task, the aim was only to detect the stance $y_i$ given the target $t_i$ and the text $x_i$.

Target-Stance Extraction Objective In our proposed Target-Stance Extraction (TSE) task, the goal is to extract the target-stance pair $(t_i, y_i)$ given $x_i$.
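Under this formulation, a prediction is correct only when both elements of the pair match the gold annotation. The sketch below is one plausible way to score the task, not necessarily the paper's exact evaluation protocol; it assumes one gold pair per sentence and exact matching on both target and stance:

```python
def tse_pair_f1(pred_pairs, gold_pairs):
    # pred_pairs / gold_pairs: one (target, stance) tuple per sentence;
    # a prediction counts only if BOTH target and stance match the gold
    correct = sum(p == g for p, g in zip(pred_pairs, gold_pairs))
    if correct == 0:
        return 0.0
    precision = recall = correct / len(gold_pairs)  # one pair per sentence
    return 2 * precision * recall / (precision + recall)

preds = [("Atheism", "Against"), ("Gun Control", "Favor"), ("Unrelated", "None")]
golds = [("Atheism", "Against"), ("Gun Control", "Against"), ("Unrelated", "None")]
score = tse_pair_f1(preds, golds)  # 2 of the 3 pairs match exactly
```

Note how a correct target with a wrong stance (the Gun Control example) earns no credit, which is what makes TSE strictly harder than stance detection with a given target.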
# 3.2 Datasets

In-Target TSE For in-target TSE, we conduct experiments on the merged set of four stance detection datasets to evaluate the proposed framework. 1) SemEval-2016 (SE) (Mohammad et al., 2016b) contains 5 pre-defined targets, including Atheism, Climate Change is a Real Concern, Feminist Movement, Hillary Clinton and Legalization of Abortion. Each sample is annotated with Favor, Against or None. 2) AM (Stab et al., 2018) is an argument mining dataset containing 8 targets, including Abortion, Cloning, Death Penalty, Gun Control, Marijuana Legalization, Minimum Wage, Nuclear Energy and School Uniforms. Each sample is annotated with Support, Oppose or Neutral. 3) COVID-19 (C19) (Glandt et al., 2021) contains 4 targets related to COVID-19: Wearing a Face Mask, Anthony S. Fauci, School Closures and Stay at Home Orders. Each sample can be classified as Favor, Against or None. 4) P-Stance (PS) (Li et al., 2021a) contains 3 targets related to the 2020 U.S. presidential election: Donald Trump, Joe Biden and Bernie Sanders. Each instance is annotated with Favor or Against.

Train, validation and test sets are provided for the AM, COVID-19, and P-Stance datasets. For SemEval-2016, train and test sets are provided and we split the train set into train and validation sets. We remove the target Climate Change of SemEval-2016 from training for use in the zero-shot setting. Data statistics and examples of these datasets are shown in Tables 1 and 2.

Zero-Shot TSE We also curate a new zero-shot dataset from existing datasets to test the model performance on unseen targets during the inference stage. We collect 500 samples for each of the following targets from its original dataset: 1) Creationism (Somasundaran and Wiebe, 2010), 2) Gay Rights (Somasundaran and Wiebe, 2010), 3) Climate Change is a Concern (Mohammad et al., 2016a), 4) MeToo Movement (Gautam et al., 2020), 5) Merger of Disney and Fox (Conforti et al., 2020), 6) Lockdown in New York State (Miao et al., 2020).

To mimic the real-world scenario that a text may contain no targets of interest, we consider an additional target label Unrelated in both in-target and zero-shot settings. We provide the details about the curation of such samples in Appendix A. We maintain a ratio of 5:1 for interested targets vs. the Unrelated category in the final datasets for both in-target and zero-shot TSE. The numbers of targets for in-target and zero-shot datasets are $18^{2}$ and 6, respectively, and we add the Unrelated category in each dataset.

# 4 Approach
|
| 70 |
+
|
| 71 |
+
As discussed in the previous section, TSE is a challenging task that involves both target identification and stance detection given a text. To tackle this task, we propose a two-stage framework, in which we first identify the target from a given text using either a target classification or target generation approach and then detect the stance toward the predicted target with a stance classifier in the second stage. The overall framework of our proposed approach is shown in Figure 2.
|
| 72 |
+
|
| 73 |
+
# 4.1 Stage 1: Target Identification
|
| 74 |
+
|
| 75 |
+
In this stage, we extract the target from the text based on either training classifiers, e.g., BiLSTM or BERT, to predict the target from a set of pre-defined targets or by using a BART-fine-tuned keyphrase generation module to generate keyphrases for the text and then map them to the pre-defined set of
|
| 76 |
+
|
| 77 |
+
<table><tr><td>Dataset</td><td>#Train</td><td>#Val</td><td>#Test</td><td>Targets</td></tr><tr><td>SemEval-2016</td><td>2,160</td><td>359</td><td>1,080</td><td>Atheism, Feminist Movement, Hillary Clinton, Legalization of Abortion</td></tr><tr><td>AM</td><td>18,341</td><td>2,042</td><td>5,109</td><td>Abortion, Cloning, Death Penalty, Gun Control, Marijuana Legalization, Minimum Wage, Nuclear Energy, School Uniforms</td></tr><tr><td>COVID-19</td><td>4,533</td><td>800</td><td>800</td><td>Face Masks, Fauci, Stay at Home Orders, School Closures</td></tr><tr><td>P-Stance</td><td>17,224</td><td>2,193</td><td>2,157</td><td>Joe Biden, Bernie Sanders, Donald Trump</td></tr><tr><td>Zero-Shot</td><td>-</td><td>-</td><td>3,000</td><td>Creationism, Gay Rights, Climate Change is a Concern, MeToo Move-ment, Merger of Disney and Fox, Lockdown in New York State</td></tr></table>
|
| 78 |
+
|
| 79 |
+
Table 1: Data split statistics for SemEval-2016, AM, COVID-19, P-Stance and Zero-Shot datasets.
|
| 80 |
+
|
| 81 |
+
<table><tr><td>Dataset</td><td>Target</td><td>Tweet</td><td>Stance</td></tr><tr><td>SemEval-2016</td><td>Atheism</td><td>Religious leaders are like political leaders - they say what they think people want to hear. #freethinker #SemST</td><td>Favor</td></tr><tr><td>AM</td><td>Gun Control</td><td>Restrictions on gun ownership will only encourage outlaws to have heavy ammunition and high calibre weapons.</td><td>Against</td></tr><tr><td>COVID-19</td><td>Face Masks</td><td>@MrMasonMills @YcmiYcmiu There is air in houses/buildings too. Are we expected to live in a mask constantly?</td><td>Against</td></tr><tr><td>P-Stance</td><td>Donald Trump</td><td>There was no collusion Collusion is not a crime Even if it's a crime, it's doesn't matter. It's ALL HILLARY AND OBAMA'S FAULT The evolution of the #Trump defense</td><td>Favor</td></tr><tr><td>Zero-Shot</td><td>Gay Rights</td><td>Yes! You rock gay people. They are people just like we are and if two men want to marry each other, than go for it</td><td>Favor</td></tr></table>
Table 2: Examples from stance detection datasets.
targets. Our intuition is that the keyphrases corresponding to a text capture its essence, and they should correlate well with the target toward which the stance is expressed. For instance, in Figure 1, the generated target Christianity quite succinctly captures the essence of the tweet Jesus, you are my helper..., and at the same time correlates semantically well with the golden target Atheism.
Target Classification In this approach, we train a classifier on the merged dataset with texts as inputs and their corresponding targets as the ground-truth labels. Note that the stance labels are not used in this target classification task. We discuss this approach in more detail in §5.2.
Target Generation In this approach, we first fine-tune a BART model separately on one of the keyphrase generation datasets, $^{3}$ i.e., OpenKP (Xiong et al., 2019), KPTimes (Gallina et al., 2019), or FullTextKP (Garg et al., 2022). The BART keyphrase generation model is used to generate keyphrases (e.g., "Christianity") given a text. Note that the generated keyphrases may not directly belong to any of the target classes we are interested in; therefore, a similarity mapping is adopted to map the generated keyphrases into one of the pre-defined targets.
For similarity mapping, we first train a FastText model (Bojanowski et al., 2017) on the train set of the merged dataset. Our choice of FastText is motivated by its efficiency while maintaining comparable performance to BERT-based models. We then obtain word embeddings of the generated keyphrases by feeding them to the FastText model. Finally, a cosine similarity score is calculated between the embedding of the generated keyphrase and that of each pre-defined target, and we predict the target with the highest similarity score. Note that the generated keyphrase is classified as Unrelated if the highest similarity score is below a specified threshold.
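A minimal sketch of this mapping step, with a toy embedding table standing in for the trained FastText model (the vectors, the target list, and the 0.5 threshold are all illustrative; a real FastText model would also handle out-of-vocabulary phrases via subword embeddings):

```python
import numpy as np

# Toy embedding lookup standing in for a FastText model trained on the
# merged train set; these vectors are purely illustrative.
EMBED = {
    "christianity": np.array([0.9, 0.1, 0.0]),
    "atheism":      np.array([0.8, 0.2, 0.1]),
    "gun control":  np.array([0.0, 0.9, 0.3]),
    "face masks":   np.array([0.1, 0.2, 0.9]),
}

TARGETS = ["atheism", "gun control", "face masks"]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def map_keyphrase(keyphrase, targets=TARGETS, threshold=0.5):
    """Map a generated keyphrase to the most similar pre-defined target,
    or to 'Unrelated' if no similarity score clears the threshold."""
    kp_vec = EMBED[keyphrase]
    scores = {t: cosine(kp_vec, EMBED[t]) for t in targets}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "Unrelated"

print(map_keyphrase("christianity"))  # prints "atheism" in this toy setup
```

Raising the threshold trades recall for precision: the same keyphrase falls back to `Unrelated` once its best score no longer clears the bar.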
# 4.2 Stage 2: Stance Detection
Given a text in the wild, the target information is usually unknown; thus, in the first stage, we predict the target via either target classification or target generation. In the second stage, we use a stance classifier trained on the merged set to detect the stance toward the predicted target.
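The two-stage pipeline can be sketched as follows; `identify_target` and `classify_stance` are hypothetical stand-ins for the trained Stage-1 and Stage-2 models, with toy rules in place of learned predictions:

```python
def identify_target(text):
    # Stage 1 (placeholder keyword rule): the paper instead uses a trained
    # target classifier, or a BART keyphrase generator plus similarity mapping.
    if "mask" in text.lower():
        return "Face Masks"
    return "Unrelated"

def classify_stance(text, target):
    # Stage 2 (placeholder rule): the paper fine-tunes a stance classifier
    # on inputs of the form "[CLS] target [SEP] text".
    return "Against" if "not" in text.lower() else "Favor"

def extract_target_stance(text):
    """Return the (target, stance) pair for a text in the wild."""
    target = identify_target(text)
    if target == "Unrelated":
        return (target, None)  # no stance is predicted for Unrelated
    return (target, classify_stance(text, target))

print(extract_target_stance("Masks work, wear one."))  # ('Face Masks', 'Favor')
```

The key design choice is that Stage 2 only runs when Stage 1 returns a target of interest, so errors in target identification propagate to the final pair.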
For stance detection, we train a stance classifier as follows. Given a text $x_{i}$ and a target $t_{i}$, we first formulate the input as a sequence $s_{i} = [[CLS]\ t_{i}\ [SEP]\ x_{i}]$, where $[CLS]$ is a token that encodes the whole sequence and $[SEP]$ separates the target $t_{i}$ from the text $x_{i}$. The representation of the $[CLS]$ token is then used to predict the stance toward target $t_{i}$. Note that $t_{i}$ is the golden target at the training stage and the predicted target from target identification at the inference stage.

Figure 2: Model architecture of our two-stage approach for the Target-Stance Extraction task. Models in black dashed boxes can be replaced with other baselines. The model architecture in the red dashed box indicates the alternative solution in the first stage.
To facilitate a model's ability to capture target-related features, which are of vital importance to stance detection, we propose a multi-task framework that uses target prediction, i.e., predicting the target given the input text, as the auxiliary task. More specifically, in the auxiliary task, we formulate the input as $[[CLS]\ x_{i}\ [SEP]]$ and the golden label is the target $t_{i}$. The encoder layers are shared across tasks, and each task has its own fully-connected layer on top, which is updated during training. We expect the model to put more attention on target-related words thanks to the auxiliary task, and thus perform better on the stance detection task. The overall architecture is shown in Figure 2.
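A minimal numpy sketch of this hard parameter sharing: one shared encoder layer feeds two task-specific heads, and the joint objective adds a weighted auxiliary loss. The dimensions and the 0.5 weight are illustrative, not those of the paper's BERTweet-based models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder weights plus one task-specific head per task.
D_IN, D_HID, N_STANCE, N_TARGET = 8, 4, 3, 5
W_shared = rng.normal(size=(D_IN, D_HID))      # shared encoder layer
W_stance = rng.normal(size=(D_HID, N_STANCE))  # stance head (main task)
W_target = rng.normal(size=(D_HID, N_TARGET))  # target head (auxiliary task)

def forward(x):
    h = np.tanh(x @ W_shared)        # shared representation
    return h @ W_stance, h @ W_target

def softmax_xent(logits, label):
    """Cross-entropy of a softmax over logits against an integer label."""
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

x = rng.normal(size=D_IN)
stance_logits, target_logits = forward(x)
# Joint objective: stance loss plus a weighted auxiliary target loss.
loss = softmax_xent(stance_logits, 1) + 0.5 * softmax_xent(target_logits, 3)
print(round(float(loss), 3))
```

Because the encoder is shared, gradients from the auxiliary target loss also update `W_shared`, which is the mechanism by which target prediction shapes the stance representation.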
Note that the auxiliary task is similar to the target classification of Stage 1 and thus cannot be used in zero-shot stance detection. In the zero-shot setting, we first leverage the keyphrase generation model for target prediction and then detect the stance toward the predicted target with the multi-task stance model. To be consistent with the target generation setting, which decouples target identification from stance detection, we train a separate target classification model (BERTweet or BiLSTM) in Stage 1 and a multi-task model (BERTweet or another stance detection baseline) in Stage 2 for stance detection. Note, however, that the target classification of the auxiliary task can be used for the in-target TSE setting.
# 5 Experimental Settings
# 5.1 Evaluation Metrics
Target-Stance Extraction The Target-Stance Extraction (TSE) task aims to extract the target-stance pair from a given text. We propose to solve this task by first identifying the target from the text and then detecting the stance toward the predicted target. We gather the (predicted target, predicted stance) pairs for evaluation. For the TSE task, we use $F_{1}$ and accuracy as the evaluation metrics. $F_{1}$ is calculated as follows:
$$
\text{Precision} = \frac{\#\text{correct}}{\#\text{predict}}, \tag{1}
$$

$$
\text{Recall} = \frac{\#\text{correct}}{\#\text{gold}}, \tag{2}
$$

$$
F_{1} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{3}
$$
where #correct denotes the number of target-stance pairs correctly predicted by the model, #predict denotes the number of predicted target-stance pairs whose target is one of the targets of interest (i.e., not Unrelated), and #gold denotes the number of target-stance pairs in the dataset whose target is not Unrelated.
For accuracy, a predicted pair is counted as correct if it satisfies one of the following two conditions: 1) the predicted target-stance pair is the same as the golden one when the golden target is not Unrelated; 2) the predicted target and the golden target are both Unrelated. Since we are not interested in the Unrelated category, we do not detect the stance toward it.
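The metric definitions above can be sketched as a small evaluation routine; the pair lists below are hypothetical examples, not data from the paper:

```python
def tse_scores(pred_pairs, gold_pairs):
    """Compute TSE F1 (Eqs. 1-3) and accuracy from aligned lists of
    (target, stance) pairs; Unrelated pairs carry no stance."""
    n_correct = sum(1 for p, g in zip(pred_pairs, gold_pairs)
                    if p == g and g[0] != "Unrelated")
    n_predict = sum(1 for p in pred_pairs if p[0] != "Unrelated")
    n_gold = sum(1 for g in gold_pairs if g[0] != "Unrelated")
    precision = n_correct / n_predict if n_predict else 0.0
    recall = n_correct / n_gold if n_gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    # For accuracy, a pair also counts as correct when both the predicted
    # and the golden target are Unrelated.
    acc = sum(1 for p, g in zip(pred_pairs, gold_pairs) if p == g)
    return f1, acc / len(gold_pairs)

gold = [("Atheism", "Favor"), ("Face Masks", "Against"), ("Unrelated", None)]
pred = [("Atheism", "Favor"), ("Atheism", "Against"), ("Unrelated", None)]
print(tse_scores(pred, gold))  # F1 = 0.5, accuracy = 2/3
```

Note that correctly predicting an Unrelated text helps accuracy but, by construction, never contributes to #correct, #predict, or #gold in the $F_{1}$ computation.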
Target Identification We evaluate target classification and target generation using micro-averaged $F_{1}$ over the golden targets in each dataset.
Stance Detection For the original formulation of the stance detection task, we use $F_{avg}$, the macro-average of $F_{1}$ ($F_{mac}$), and the micro-average of $F_{1}$ ($F_{mic}$) as the evaluation metrics, following previous work (Mohammad et al., 2016b). $F_{avg}$ is calculated as the average $F_{1}$ of the Favor and Against classes on each dataset. $F_{mac}$ is then calculated by averaging $F_{avg}$ across all four datasets, and $F_{mic}$ is obtained by averaging the $F_{1}$ of Favor and Against across the merged dataset.
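A small sketch of how these three aggregates differ, using hypothetical per-class $F_{1}$ values (the numbers are illustrative, not from the paper):

```python
# Hypothetical per-dataset F1 scores for the (Favor, Against) classes.
per_dataset_f1 = {
    "SemEval-2016": (60.0, 70.0),
    "AM":           (55.0, 65.0),
    "COVID-19":     (62.0, 58.0),
    "P-Stance":     (80.0, 76.0),
}

def f_avg(favor_f1, against_f1):
    return (favor_f1 + against_f1) / 2

# F_avg per dataset, then F_mac as their macro-average across datasets.
f_avgs = {d: f_avg(*scores) for d, scores in per_dataset_f1.items()}
f_mac = sum(f_avgs.values()) / len(f_avgs)

# F_mic instead averages the Favor/Against F1 computed once over the merged
# dataset (a separate hypothetical pair here, not derivable from the above).
merged_favor_f1, merged_against_f1 = 64.0, 67.0
f_mic = f_avg(merged_favor_f1, merged_against_f1)

print(f_avgs["SemEval-2016"], f_mac, f_mic)  # 65.0 65.75 65.5
```

The distinction matters because $F_{mac}$ weights each dataset equally, while $F_{mic}$ weights each example equally, so large datasets dominate $F_{mic}$.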
# 5.2 Baseline Models
Target Classification As discussed in §4.1, this task involves training a classifier that predicts the target mentioned in a given tweet. We use the following neural-network-based classifiers:
- BiLSTM (Schuster and Paliwal, 1997): We use BiLSTM networks followed by two linear layers to predict the target given a text.
- BERT (Devlin et al., 2019): A pre-trained language model that predicts the target by appending a linear layer to the hidden representation of $[CLS]$ token. We fine-tune the BERT-base on the target classification task.
- BERTweet (Nguyen et al., 2020): This variant of BERT is pre-trained on 845M English Tweets following the training procedure of RoBERTa (Liu et al., 2019). We fine-tune the BERTweet on the target classification task.
Target Generation As discussed in §4.1, we fine-tune the BART model separately on each of the keyphrase generation datasets described below:
- BART-OpenKP: BART fine-tuned on the OpenKeyPhrase (OpenKP) dataset (Xiong et al., 2019) is used as a baseline for generating keyphrases for the input texts. OpenKP is a large-scale open-domain keyphrase extraction dataset consisting of 148,124 annotated real-world webpages.
- BART-KPTimes: BART fine-tuned on the KPTimes dataset (Gallina et al., 2019) serves as another baseline for target generation. KPTimes is a large-scale keyphrase generation dataset consisting of $\sim 280,000$ news articles with editor-curated keyphrases.
- BART-FullTextKP: BART is fine-tuned on the FullTextKP dataset (Garg et al., 2022), a collection of 142,844 scientific articles along with annotated keyphrases. We use the version of FullTextKP that contains only the titles and abstracts of those articles.
Stance Detection We first train the model on the merged dataset and then apply the trained model to predict the stance toward the target predicted in the target identification stage. We conduct experiments with the following baselines:
- BiLSTM (Schuster and Paliwal, 1997): A BiLSTM model is used to predict the stance without considering the target information.
- BiCond (Augenstein et al., 2016): A BiLSTM model that uses a conditional encoding method. The target is first encoded by a BiLSTM, whose hidden representations are then used to initialize another BiLSTM with sentences as inputs.
- TAN (Du et al., 2017): An attention-based BiLSTM model that learns the correlation between target and sentence representations.
- CrossNet (Xu et al., 2018): A variant of BiCond model, which adds an attention layer to capture the important words of inputs.
- TGA-Net (Allaway and McKeown, 2020): A BERT-based model that uses topic-grouped attention.
- BERTweet (Li et al., 2021a,b): A pre-trained language model that is fine-tuned by adding a linear layer to the hidden representation of the [CLS] token. The input is formulated as: [CLS] target [SEP] text.
<table><tr><td>Model</td><td>SE</td><td>AM</td><td>C19</td><td>PS</td><td>Merged</td></tr><tr><td>BiLSTM</td><td>52.07</td><td>54.56</td><td>50.00</td><td>60.79</td><td>61.00</td></tr><tr><td>BERT</td><td>77.38</td><td>70.40</td><td>66.38</td><td>70.10</td><td>74.70</td></tr><tr><td>BERTweet</td><td>81.27</td><td>70.55</td><td>66.54</td><td>72.25</td><td>75.59</td></tr></table>
# 6 Results and Analysis
In this section, we first present the results for target classification and target generation in §6.1. We then present the set of experiments performed on the in-target TSE task and report the results obtained with the aforementioned baselines in §6.2. In §6.3, we report the results for the zero-shot TSE task, where targets of the test set are not seen in the train set. Finally, we study the performance of multi-task models in §6.4. Each result is the average of three runs with different initializations.
# 6.1 Target Classification and Target Generation
For target classification, BERT-based models consistently outperform the BiLSTM model by a wide margin, and BERTweet further supersedes BERT across all datasets, as shown in Table 3. We can also observe that all models achieve relatively low performance on the COVID-19 dataset. One reason is that the targets in this dataset are all closely related to COVID-19 and thus share many topics and commonalities, which makes the target classification task more challenging.
For target generation, we report the performance of the different fine-tuned BART models in Table 4. We can observe that the overall performance of target generation is lower than that of target classification, which implies that target generation is the more challenging task. However, unlike target classification models, which can only be applied to in-target stance detection, target generation models can be directly extended to zero-shot stance detection, where the stance must be detected for targets unseen during training. In addition, the keyphrase generation models produce interesting generations, as shown in Appendix B, which could be leveraged for other stance detection research purposes, such as data augmentation, as part of future work.
# 6.2 In-Target TSE
Table 3: Performance comparisons of different models in micro-averaged $F_{1}$ on target classification.

<table><tr><td>Model</td><td>SE</td><td>AM</td><td>C19</td><td>PS</td><td>Merged</td></tr><tr><td>OpenKP</td><td>32.22</td><td>61.24</td><td>28.25</td><td>43.81</td><td>43.02</td></tr><tr><td>KPTimes</td><td>30.83</td><td>66.31</td><td>26.00</td><td>63.65</td><td>48.31</td></tr><tr><td>FullTextKP</td><td>28.06</td><td>64.67</td><td>29.38</td><td>44.83</td><td>43.81</td></tr></table>

Table 4: Performance comparisons of different models in micro-averaged $F_{1}$ on target generation.

TSE with Target Classification In Table 5, we report the performance of our proposed two-stage framework with target classification. Stance detection baselines are trained in our proposed multi-task setting on the merged dataset. Note that BiLSTM, BERT, and BERTweet in the first row of Table 5 are the target classification models, and GT means that the ground-truth targets are used for stance detection (Stage 2). First, the overall performance of the stance baselines is relatively low, which indicates that our proposed TSE task is very challenging. Second, the stance classifier BERTweet achieves the best performance across all target classification models, which is consistent with our observation in Table 8 that BERTweet performs best on in-target stance detection. Third, each stance classifier achieves its best performance with the target classifier BERTweet, owing to BERTweet's higher accuracy in target identification. Fourth, a significant performance drop can be seen between GT and each target classification model, which indicates that correctly identifying the targets is a challenging part of our proposed framework.
TSE with Target Generation Besides target classification, we report the performance of our proposed two-stage framework with target generation in Table 6. Stance detection baselines are trained in our proposed multi-task setting on the merged dataset. OpenKP, KPTimes, and FullTextKP in the first row indicate the train sets of the keyphrase generation models. First, we see that stance classifiers show lower overall performance in the target generation setting than in the target classification setting. One explanation is that keyphrases generated by the keyphrase generation models might relate to other topics contained in the sentence; however, in most datasets, one sentence is annotated with only one target, and thus the generated keyphrases may be mapped to the wrong targets.
<table><tr><td rowspan="2">Model</td><td colspan="2">BiLSTM</td><td colspan="2">BERT</td><td colspan="2">BERTweet</td><td colspan="2">GT</td></tr><tr><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td></tr><tr><td>BiLSTM</td><td>35.38</td><td>44.64</td><td>44.81</td><td>53.15</td><td>45.46</td><td>53.61</td><td>65.23</td><td>71.16</td></tr><tr><td>BiCond</td><td>35.36</td><td>44.63</td><td>44.94</td><td>53.26</td><td>45.59</td><td>53.72</td><td>65.61</td><td>71.48</td></tr><tr><td>TAN</td><td>36.69</td><td>45.73</td><td>46.32</td><td>54.41</td><td>47.02</td><td>54.91</td><td>67.33</td><td>72.90</td></tr><tr><td>CrossNet</td><td>36.30</td><td>45.41</td><td>45.81</td><td>53.98</td><td>46.41</td><td>54.40</td><td>67.09</td><td>72.70</td></tr><tr><td>TGA-Net</td><td>39.23</td><td>47.83</td><td>49.46</td><td>57.02</td><td>50.31</td><td>57.65</td><td>71.73</td><td>76.55</td></tr><tr><td>BERTweet</td><td>41.35</td><td>49.59</td><td>52.24</td><td>59.33</td><td>53.30</td><td>60.13</td><td>75.28</td><td>79.49</td></tr></table>

Table 5: Performance comparisons of different models in $F_{1}$ and accuracy on the merged dataset and the in-target TSE task with the target classification setting. GT: ground-truth targets are used for stance detection (Stage 2), which is the upper bound of model performance in our proposed framework.

<table><tr><td rowspan="2">Model</td><td colspan="2">OpenKP</td><td colspan="2">KPTimes</td><td colspan="2">FullTextKP</td><td colspan="2">GT</td></tr><tr><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td></tr><tr><td>BiLSTM</td><td>28.40</td><td>26.50</td><td>32.69</td><td>30.06</td><td>29.24</td><td>26.86</td><td>65.23</td><td>71.16</td></tr><tr><td>BiCond</td><td>28.64</td><td>26.71</td><td>32.94</td><td>30.29</td><td>29.22</td><td>26.84</td><td>65.61</td><td>71.48</td></tr><tr><td>TAN</td><td>29.75</td><td>27.72</td><td>34.13</td><td>31.37</td><td>30.52</td><td>28.03</td><td>67.33</td><td>72.90</td></tr><tr><td>CrossNet</td><td>29.25</td><td>27.27</td><td>33.63</td><td>30.92</td><td>30.19</td><td>27.73</td><td>67.09</td><td>72.70</td></tr><tr><td>TGA-Net</td><td>31.89</td><td>29.65</td><td>36.76</td><td>33.77</td><td>32.86</td><td>30.16</td><td>71.73</td><td>76.55</td></tr><tr><td>BERTweet</td><td>34.02</td><td>31.57</td><td>38.92</td><td>35.74</td><td>35.16</td><td>32.26</td><td>75.28</td><td>79.49</td></tr></table>

Table 6: Performance comparisons of different models in $F_{1}$ and accuracy on the merged dataset and the in-target TSE task with the target generation setting. GT: ground-truth targets are used for stance detection (Stage 2), which is the upper bound of model performance in our proposed framework.

Second, we can observe in Table 6 that stance classifiers achieve higher performance in $F_{1}$ than in accuracy, which differs from the observation in Table 5. The reason is that target classifiers show much better performance on the Unrelated class because Unrelated samples are seen during training, whereas in target generation the generated keyphrases are assigned to the Unrelated category via a threshold, which is inaccurate in some cases and introduces another source of error.
Third, we can observe that BERTweet still achieves the best performance across all keyphrase generation models, indicating its effectiveness on in-target stance detection.
Fourth, we can see that stance classifiers generally achieve better performance with the generation model trained on KPTimes, which is consistent with our observation in Table 4.
Fifth, as before, we can observe a significant performance drop between GT and each target generation model (even larger than with target classification). This is not surprising, since target generation is even more challenging than target classification.
# 6.3 Zero-Shot TSE
To investigate the ability of different baselines to handle unseen targets, we further evaluate their performance on zero-shot stance detection, where targets of the test set are not seen in the train and validation sets. Table 7 shows performance comparisons of baseline models on the zero-shot TSE task in the target generation setting. Note that target classification cannot be directly applied to identify the target in zero-shot tasks because, given an input sentence, the predicted target of target classification must be one of the targets seen in the train set. We can observe that the zero-shot baseline TGA-Net achieves the best performance across all keyphrase generation models, indicating that TGA-Net generalizes better to unseen targets with its topic-grouped attention. In addition, stance classifiers show the best results with the generation model trained on KPTimes, which is consistent with the results in Table 4. It can be seen that even GT does not perform well on the zero-shot dataset, indicating the difficulty of our zero-shot task.

<table><tr><td rowspan="2">Model</td><td colspan="2">OpenKP</td><td colspan="2">KPTimes</td><td colspan="2">FullTextKP</td><td colspan="2">GT</td></tr><tr><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td></tr><tr><td>BiLSTM</td><td>12.77</td><td>11.81</td><td>13.15</td><td>12.10</td><td>12.95</td><td>11.91</td><td>27.42</td><td>39.52</td></tr><tr><td>BiCond</td><td>13.60</td><td>12.57</td><td>14.31</td><td>13.17</td><td>13.77</td><td>12.66</td><td>28.98</td><td>40.81</td></tr><tr><td>TAN</td><td>13.30</td><td>12.31</td><td>13.29</td><td>12.23</td><td>13.53</td><td>12.44</td><td>27.51</td><td>39.59</td></tr><tr><td>CrossNet</td><td>14.38</td><td>13.29</td><td>14.89</td><td>13.69</td><td>14.39</td><td>13.23</td><td>30.73</td><td>42.28</td></tr><tr><td>TGA-Net</td><td>21.47</td><td>19.76</td><td>22.83</td><td>20.95</td><td>21.36</td><td>19.61</td><td>40.94</td><td>50.79</td></tr><tr><td>BERTweet</td><td>19.11</td><td>17.60</td><td>20.45</td><td>18.78</td><td>20.11</td><td>18.46</td><td>38.51</td><td>48.76</td></tr></table>

Table 7: Performance comparisons of different models in $F_{1}$ and accuracy on the zero-shot dataset and the zero-shot TSE task with the target generation setting. GT: ground-truth targets are used for stance detection (Stage 2), which is the upper bound of model performance in our proposed framework.

# 6.4 Effectiveness of Multi-Task Learning on Stance Detection

As mentioned before, all results reported in §6.2 and §6.3 are based on multi-task models. To investigate the effectiveness of multi-task learning, we compare the performance of multi-task models with single-task models in Table 8. Each model is trained and validated on the merged set and tested on the individual datasets, where targets are golden targets instead of generated targets for a better understanding of the experimental results. We can observe that all multi-task models consistently outperform single-task models on all datasets, demonstrating the effectiveness of multi-task learning. Specifically, the average improvements of multi-task models over single-task models are $2.35\%$, $0.80\%$, $2.95\%$ and $0.92\%$ in $F_{avg}$ on the SemEval-2016, AM, COVID-19, and P-Stance datasets, respectively. In addition, we can see that multi-task models achieve larger improvements on the SemEval-2016 and COVID-19 datasets. One possible reason is that there are fewer train samples in the SemEval-2016 and COVID-19 datasets than in the rest, and thus the auxiliary task of identifying targets can help models better capture target-related features.

<table><tr><td>Model</td><td>SE</td><td>AM</td><td>C19</td><td>PS</td><td>Fmac</td><td>Fmic</td></tr><tr><td colspan="7">Single-Task</td></tr><tr><td>BiLSTM</td><td>53.05</td><td>45.70</td><td>53.34</td><td>73.62</td><td>56.43</td><td>58.75</td></tr><tr><td>BiCond</td><td>52.63</td><td>46.96</td><td>58.73</td><td>74.56</td><td>58.22</td><td>60.14</td></tr><tr><td>TAN</td><td>55.26</td><td>50.85</td><td>56.83</td><td>74.67</td><td>59.40</td><td>61.60</td></tr><tr><td>CrossNet</td><td>61.06</td><td>50.79</td><td>65.89</td><td>75.08</td><td>63.21</td><td>63.03</td></tr><tr><td>TGA-Net</td><td>63.74</td><td>58.71</td><td>64.70</td><td>77.70</td><td>66.21</td><td>67.56</td></tr><tr><td>BERTweet</td><td>68.03</td><td>64.31</td><td>72.99</td><td>81.47</td><td>71.70</td><td>72.26</td></tr><tr><td colspan="7">Multi-Task</td></tr><tr><td>BiLSTM</td><td>57.03</td><td>47.45</td><td>59.35</td><td>74.22</td><td>59.51</td><td>60.63</td></tr><tr><td>BiCond</td><td>56.22</td><td>47.11</td><td>61.69</td><td>75.29</td><td>60.08</td><td>60.98</td></tr><tr><td>TAN</td><td>58.54</td><td>52.13</td><td>60.31</td><td>76.29</td><td>61.82</td><td>63.32</td></tr><tr><td>CrossNet</td><td>61.41</td><td>51.30</td><td>67.65</td><td>76.45</td><td>64.20</td><td>63.89</td></tr><tr><td>TGA-Net</td><td>64.05</td><td>59.26</td><td>66.77</td><td>78.67</td><td>67.19</td><td>68.12</td></tr><tr><td>BERTweet</td><td>70.62</td><td>64.85</td><td>74.42</td><td>81.67</td><td>72.89</td><td>73.01</td></tr></table>

Table 8: Performance comparisons of different models on in-target stance detection. We report $F_{avg}$, macro-average of $F_{1}$ ($F_{mac}$) and micro-average of $F_{1}$ ($F_{mic}$).
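As a sanity check, the reported average improvements can be recomputed from the $F_{avg}$ columns of Table 8 (six models per dataset, multi-task minus single-task):

```python
# Single-task and multi-task F_avg scores from Table 8, in model order:
# BiLSTM, BiCond, TAN, CrossNet, TGA-Net, BERTweet.
single = {
    "SemEval-2016": [53.05, 52.63, 55.26, 61.06, 63.74, 68.03],
    "AM":           [45.70, 46.96, 50.85, 50.79, 58.71, 64.31],
    "COVID-19":     [53.34, 58.73, 56.83, 65.89, 64.70, 72.99],
    "P-Stance":     [73.62, 74.56, 74.67, 75.08, 77.70, 81.47],
}
multi = {
    "SemEval-2016": [57.03, 56.22, 58.54, 61.41, 64.05, 70.62],
    "AM":           [47.45, 47.11, 52.13, 51.30, 59.26, 64.85],
    "COVID-19":     [59.35, 61.69, 60.31, 67.65, 66.77, 74.42],
    "P-Stance":     [74.22, 75.29, 76.29, 76.45, 78.67, 81.67],
}

# Average per-dataset gain of multi-task over single-task models.
avg_gain = {d: sum(m - s for s, m in zip(single[d], multi[d])) / len(single[d])
            for d in single}
for d, g in avg_gain.items():
    print(d, round(g, 2))  # matches the reported 2.35, 0.80, 2.95, 0.92
```

This confirms the quoted improvements and makes the per-dataset contrast (SemEval-2016 and COVID-19 benefiting most) easy to inspect.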
# 7 Conclusion
In this paper, we introduce a new Target-Stance Extraction (TSE) task to identify both the target and the corresponding stance in the wild. Different from the original stance detection task, which aims only to detect the stance given the target and the text, our proposed task includes both target identification and stance detection, which makes it more challenging. We benchmark the task by proposing a two-stage framework that first identifies the target from a text and then detects the stance toward the predicted target. Our two-stage framework can be applied not only to in-target stance detection but also to zero-shot stance detection. In addition, we propose a multi-task approach that takes target prediction as an auxiliary task to improve the performance of stance detection.
It is worth noting that the primary goal of this paper is the introduction of a new stance detection task. The proposed framework provides a good starting point and leaves much room for further improvement. Future work includes improving the target identification task, e.g., with a better mapping strategy.
# 8 Limitations
We present a novel (target, stance) pair extraction task (TSE) for understanding the stance toward topics of interest in the wild. There are two potential limitations to our work. First, the mapping module requires a predefined list of targets: without such a list, it is very difficult to assess the correctness of stance labels for the predicted targets in the absence of gold labels. On the other hand, the predefined list of targets makes the entire system end-to-end and automatically evaluable. Second, the mapping process might become too slow if the number of targets of interest grows large. Future work includes addressing these limitations and extracting (target, stance) pairs in a unified setting. However, the primary contribution of this work is not to present a fully robust pipeline model but to present a novel, interesting, and challenging task to the community working on stance detection.
# 9 Ethical Considerations
Beyond the proposed two-stage framework that helps collect stances in the wild, it is very important to consider the ethical implications of stance detection systems. Since stance detection systems can automatically collect and aggregate the topical stance toward a specific target, they may have a significant impact on decision-making. Algorithms are not perfect, and thus a potential harm is that these systems may make incorrect predictions and further mislead decision-making. Researchers should be aware of potential harms from the misuse of stance detection systems and should respect people's privacy during data collection.
# Acknowledgments
We thank the National Science Foundation for support from grants IIS-1912887, IIS-2107487, and ITE-2137846 which supported the research and the computation in this study. We also thank our reviewers for their insightful feedback and comments.
# References
Abeer ALDayel and Walid Magdy. 2021. Stance detection on social media: State of the art and trends. International Journal on Information Processing and Management, 58(4).
Emily Allaway and Kathleen McKeown. 2020. Zero-shot stance detection: A dataset and model using generalized topic representations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8913-8931.
Rabah Alzaidy, Cornelia Caragea, and C. Lee Giles. 2019. Bi-LSTM-CRF sequence labeling for keyphrase extraction from scholarly documents. In The World Wide Web Conference, WWW '19, pages 2551-2557, New York, NY, USA. Association for Computing Machinery.
Isabelle Augenstein, Tim Rocktäschel, Andreas Vlachos, and Kalina Bontcheva. 2016. Stance detection with bidirectional conditional encoding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 876-885.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
Cornelia Caragea, Florin Adrian Bulgarov, Andreea Godea, and Sujatha Das Gollapalli. 2014. Citation-enhanced keyphrase extraction from research papers: A supervised approach. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1435-1446, Doha, Qatar. Association for Computational Linguistics.
Costanza Conforti, Jakob Berndt, Mohammad Taher Pilehvar, Chryssi Giannitsarou, Flavio Toxvaerd, and Nigel Collier. 2020. Will-they-won't-they: A very large dataset for stance detection on Twitter. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1715-1724.
Kareem Darwish, Walid Magdy, and Tahar Zanouda. 2017. Trump vs. Hillary: What went viral during the 2016 US presidential election. In Social Informatics, pages 143-161.
Leon Derczynski, Kalina Bontcheva, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Arkaitz Zubiaga. 2017. SemEval-2017 task 8: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 69-76.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Jiachen Du, Ruifeng Xu, Yulan He, and Lin Gui. 2017. Stance classification with target-specific neural attention networks. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 3988-3994.
Corina Florescu and Cornelia Caragea. 2017. PositionRank: An unsupervised approach to keyphrase extraction from scholarly documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1115, Vancouver, Canada. Association for Computational Linguistics.
|
| 270 |
+
Ygor Gallina, Florian Boudin, and Beatrice Daille. 2019. KPTimes: A large-scale dataset for keyphrase generation on news documents. In Proceedings of the 12th International Conference on Natural Language Generation, pages 130-135.
|
| 271 |
+
Krishna Garg, Jishnu Ray Chowdhury, and Cornelia Caragea. 2022. Keyphrase generation beyond the boundaries of title and abstract. In *Findings of the Association for Computational Linguistics: EMNLP* 2022, pages 5809-5821, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
|
| 272 |
+
Akash Gautam, Puneet Mathur, Rakesh Gosangi, Debanjan Mahata, Ramit Sawhney, and Rajiv Ratn Shah. 2020. #MeTooMA: Multi-aspect annotations
|
| 273 |
+
|
| 274 |
+
of tweets related to the MeToo movement. Proceedings of the International AAAI Conference on Web and Social Media, 14(1):209-216.
|
| 275 |
+
Kyle Glandt, Sarthak Khanal, Yingjie Li, Doina Caragea, and Cornelia Caragea. 2021. Stance detection in COVID-19 tweets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1596-1611.
|
| 276 |
+
Genevieve Gorrell, Elena Kochkina, Maria Liakata, Ahmet Aker, Arkaitz Zubiaga, Kalina Bontcheva, and Leon Derczynski. 2019. SemEval-2019 task 7: RumourEval, determining rumour veracity and support for rumours. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 845-854.
|
| 277 |
+
Lara Grimminger and Roman Klinger. 2021. Hate towards the political opponent: A Twitter corpus study of the 2020 US elections on the basis of offensive speech and stance detection. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 171-180.
|
| 278 |
+
Momchil Hardalov, Arnav Arora, Preslav Nakov, and Isabelle Augenstein. 2021. Cross-domain label-adaptive stance detection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9011-9028.
|
| 279 |
+
Momchil Hardalov, Arnav Arora, Preslav Nakov, and Isabelle Augenstein. 2022. A survey on stance detection for mis- and disinformation identification. In *Findings of the Association for Computational Linguistics: NAACL* 2022, pages 1259–1277.
|
| 280 |
+
Kazi Saidul Hasan and Vincent Ng. 2014. Why are you taking this stance? Identifying and classifying reasons in ideological debates. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 751-762.
|
| 281 |
+
Muhammad Naeem Khan, Muhammad Azeem Ashraf, Donald Seinen, Kashif Ullah Khan, and Rizwan Ahmed Laar. 2021. Social media for knowledge acquisition and dissemination: The impact of the COVID-19 pandemic on collaborative learning driven social media adoption. Frontiers in Psychology, 12.
|
| 282 |
+
Dilek Kucuk and Fazli Can. 2020. Stance detection: A survey. ACM Comput. Surv., 53(1):1-37.
|
| 283 |
+
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880.
|
| 284 |
+
|
| 285 |
+
Yingjie Li and Cornelia Caragea. 2019. Multi-task stance detection with sentiment and stance lexicons. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6298-6304.
|
| 286 |
+
Yingjie Li and Cornelia Caragea. 2021a. A multi-task learning framework for multi-target stance detection. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2320-2326.
|
| 287 |
+
Yingjie Li and Cornelia Caragea. 2021b. Target-aware data augmentation for stance detection. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1850-1860.
|
| 288 |
+
Yingjie Li, Tiberiu Sosea, Aditya Sawant, Ajith Jayaraman Nair, Diana Inkpen, and Cornelia Caragea. 2021a. P-Stance: A large dataset for stance detection in political domain. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2355-2365.
|
| 289 |
+
Yingjie Li, Chenye Zhao, and Cornelia Caragea. 2021b. Improving stance detection with multi-dataset learning and knowledge distillation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6332-6345.
|
| 290 |
+
Yingjie Li, Chenye Zhao, and Cornelia Caragea. 2023. Tts: A target-based teacher-student framework for zero-shot stance detection. In Proceedings of the ACM Web Conference 2023, page 1500-1509.
|
| 291 |
+
Bin Liang, Yonghao Fu, Lin Gui, Min Yang, Jiachen Du, Yulan He, and Ruifeng Xu. 2021. Target-adaptive graph for cross-target stance detection. In Proceedings of the Web Conference 2021, page 3453-3464.
|
| 292 |
+
Bin Liang, Qinglin Zhu, Xiang Li, Min Yang, Lin Gui, Yulan He, and Ruifeng Xu. 2022. JointCL: A joint contrastive learning framework for zero-shot stance detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 81-91.
|
| 293 |
+
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
|
| 294 |
+
Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017. Deep keyphrase generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 582-592, Vancouver, Canada. Association for Computational Linguistics.
|
| 295 |
+
Lin Miao, Mark Last, and Marina Litvak. 2020. Twitter data augmentation for monitoring public opinion on COVID-19 intervention measures. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020.
|
| 296 |
+
|
| 297 |
+
Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016a. A dataset for detecting stance in tweets. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3945-3952.
|
| 298 |
+
Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016b. SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31-41.
|
| 299 |
+
Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model for English tweets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 9-14.
|
| 300 |
+
Krutarth Patel and Cornelia Caragea. 2019. Exploring word embeddings in crf-based keyphrase extraction from research papers. In Proceedings of the 10th International Conference on Knowledge Capture, K-CAP '19, page 37-44, New York, NY, USA. Association for Computing Machinery.
|
| 301 |
+
Delip Rao and Dean Pomerleau. 2017. Fake news challenge.
|
| 302 |
+
Jishnu Ray Chowdhury, Cornelia Caragea, and Doina Caragea. 2019. Keyphrase extraction from disaster-related tweets. In *The World Wide Web Conference*, WWW '19, page 1555–1566, New York, NY, USA. Association for Computing Machinery.
|
| 303 |
+
Jishnu Ray Chowdhury, Seo Yeon Park, Tuhin Kundu, and Cornelia Caragea. 2022. KPDROP: Improving absent keyphrase generation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4853-4870, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
|
| 304 |
+
Benjamin Schiller, Johannes Daxenberger, and Iryna Gurevych. 2021. Stance detection benchmark: How robust is your stance detection? KI - Künstliche Intelligenz.
|
| 305 |
+
Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.
|
| 306 |
+
Parinaz Sobhani, Diana Inkpen, and Xiaodan Zhu. 2017. A dataset for multi-target stance detection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 551-557.
|
| 307 |
+
Swapna Somasundaran and Janyce Wiebe. 2010. Recognizing stances in ideological on-line debates. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 116-124.
|
| 308 |
+
|
| 309 |
+
Christian Stab, Tristan Miller, Benjamin Schiller, Pranav Rai, and Iryna Gurevych. 2018. Cross-topic argument mining from heterogeneous sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3664-3674.
|
| 310 |
+
Lucas Sterckx, Cornelia Caragea, Thomas Demeester, and Chris Develder. 2016. Supervised keyphrase extraction as positive unlabeled learning. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1924-1929, Austin, Texas. Association for Computational Linguistics.
|
| 311 |
+
Jannis Vamvas and Rico Sennrich. 2020. X-Stance: A multilingual multi-target dataset for stance detection. In Proceedings of the 5th Swiss Text Analytics Conference (SwissText) & 16th Conference on Natural Language Processing (KONVENS).
|
| 312 |
+
Wikipedia. Wikipedia: list of controversial issues. [Online; accessed 10-December-2012].
|
| 313 |
+
Lee Xiong, Chuan Hu, Chenyan Xiong, Daniel Campos, and Arnold Overwijk. 2019. Open domain web keyphrase extraction beyond language modeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5174-5183.
|
| 314 |
+
Chang Xu, Cécile Paris, Surya Nepal, and Ross Sparks. 2018. Cross-target stance classification with self-attention networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 778-783.
|
| 315 |
+
Jiacheng Ye, Tao Gui, Yichao Luo, Yige Xu, and Qi Zhang. 2021. One2Set: Generating diverse keyphrases as a set. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4598-4608.
|
| 316 |
+
Xingdi Yuan, Tong Wang, Rui Meng, Khushboo Thaker, Peter Brusilovsky, Daqing He, and Adam Trischler. 2020. One size does not fit all: Generating and evaluating variable number of keyphrases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7961-7975.
|
| 317 |
+
Bowen Zhang, Min Yang, Xutao Li, Yunming Ye, Xiaofei Xu, and Kuai Dai. 2020. Enhancing cross-target stance detection with transferable semantic emotion knowledge. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3188-3197.
|
| 318 |
+
|
| 319 |
+
# A Curation of Unrelated Target Samples

We retrieved a collection of tweets using the Twitter API for controversial topics such as Black Lives Matter, Communism, Conservatism, and Morality. The controversial topics were collected from Wikipedia, and we manually removed the topics related to the targets of our merged and zero-shot datasets. Further, we performed the following preprocessing steps: (1) we removed duplicates and retweets; (2) we removed topics that appear in fewer than 100 tweets; (3) we removed tweets that contain explicit mentions of the targets of our merged and zero-shot datasets; (4) we created train, validation, and test sets following an 80/10/10 split for each topic. This yields a filtered collection of Unrelated samples. Note that the Unrelated samples used in the merged and zero-shot datasets do not overlap; examples of the Unrelated category are shown in Table 9.

 (a) Abortion
 (b) Atheism
 (c) Feminist Movement
 (d) Hillary Clinton

Figure 3: Wordclouds of generated keyphrases for different targets of the SemEval-2016 dataset.

<table><tr><td>Topic</td><td>Tweet</td></tr><tr><td>Black Lives Matter</td><td>Black Lives Matter Proclaims Thanksgiving Is A Holiday Of Colonization On Stolen Land</td></tr><tr><td>Communism</td><td>We are told that communism causes famines. But it is actually capitalism, colonialism & imperialism that cause food insecurity and mass hunger.</td></tr><tr><td>Conservatism</td><td>Conservatism isn’t about freedoms it’s all about control.</td></tr><tr><td>Morality</td><td>To place morality above compassion or law before love is to nullify nature and scorn nurture. Love knows no wrong.</td></tr></table>

Table 9: Examples of Unrelated category samples.
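The curation steps above can be sketched as a small filtering pipeline. This is an illustrative reconstruction, not the authors' released code: the function name `curate_unrelated`, the `RT @` retweet heuristic, and the case-insensitive substring check for target mentions are our assumptions.

```python
import random
from collections import defaultdict

def curate_unrelated(tweets_by_topic, excluded_targets, min_tweets=100, seed=0):
    """Filter topic-tagged tweets into train/val/test Unrelated splits.

    tweets_by_topic: dict mapping topic -> list of tweet strings.
    excluded_targets: target names whose explicit mentions must be removed.
    """
    rng = random.Random(seed)
    splits = {"train": defaultdict(list), "val": defaultdict(list), "test": defaultdict(list)}
    for topic, tweets in tweets_by_topic.items():
        seen, kept = set(), []
        for t in tweets:
            # (1) remove duplicates and retweets
            if t in seen or t.startswith("RT @"):
                continue
            seen.add(t)
            # (3) drop tweets that explicitly mention an excluded target
            if any(tgt.lower() in t.lower() for tgt in excluded_targets):
                continue
            kept.append(t)
        # (2) drop topics that appear in fewer than min_tweets tweets
        if len(kept) < min_tweets:
            continue
        # (4) 80/10/10 split per topic
        rng.shuffle(kept)
        n = len(kept)
        splits["train"][topic] = kept[: int(0.8 * n)]
        splits["val"][topic] = kept[int(0.8 * n): int(0.9 * n)]
        splits["test"][topic] = kept[int(0.9 * n):]
    return splits
```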
# B Generated Keyphrases in the Target Generation Task

As discussed in §6.1, target generation models perform worse than target classification models on the target identification task. The reason could be that the generated keyphrases are sometimes related to other topics contained in the sentence, and are therefore not correctly mapped to the golden targets in the target identification task.

In Figure 3, we show wordclouds of the keyphrases generated by our keyphrase generation models, as described in §4.1 and §6.1. For instance, for the ground-truth label Atheism, the generated keyphrases include spirituality, religion, faith, belief, and philosophy. These generated keyphrases are semantically related to the ground-truth target Atheism, and they could further be used for other research purposes such as data augmentation for stance detection and multi-target stance annotation.
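As a toy illustration of how a generated keyphrase could be mapped back to a known target (e.g., *religion* to Atheism), one could compare embedding vectors by cosine similarity. The embedding function, the similarity threshold, and the `None` fallback (an Unrelated outcome) are our assumptions for illustration; the mapping procedure used in the paper may differ.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors (0.0 if either is zero)."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def map_keyphrase(keyphrase, target_vectors, embed, threshold=0.5):
    """Map a generated keyphrase to the most similar known target,
    or None when no target is similar enough."""
    kp_vec = embed(keyphrase)
    best_target, best_score = None, threshold
    for target, vec in target_vectors.items():
        score = cosine(kp_vec, vec)
        if score > best_score:
            best_target, best_score = target, score
    return best_target
```

In practice `embed` would come from a pre-trained encoder (e.g., a tweet-oriented language model) rather than the toy lookup used below.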
A For every submission:

A1. Did you describe the limitations of your work? Left blank.

A2. Did you discuss any potential risks of your work? 9

A3. Do the abstract and introduction summarize the paper's main claims? Left blank.

A4. Have you used AI writing assistants when working on this paper? Left blank.

B Did you use or create scientific artifacts? 3.2

B1. Did you cite the creators of artifacts you used? 3.2

B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank.

B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank.

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank.

B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank.

B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 3.2

C Did you run computational experiments? 8

C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 8

C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? We use the default parameters without hyperparameter tuning.

C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? 6

C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Appendix

D Did you use human annotators (e.g., crowdworkers) or research with human participants? Left blank.

D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Not applicable. Left blank.

D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Not applicable. Left blank.

D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Not applicable. Left blank.

D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank.

D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
2023/A New Direction in Stance Detection_ Target-Stance Extraction in the Wild/images.zip ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:d55153ab7e972dbbbde5bdf3a13c585d6212208c832bb19f275a113a31415bd6
size 661192

2023/A New Direction in Stance Detection_ Target-Stance Extraction in the Wild/layout.json ADDED

The diff for this file is too large to render. See raw diff.

2023/A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction/a48b0d22-32b5-4318-8d85-fd66569fde26_content_list.json ADDED

The diff for this file is too large to render. See raw diff.

2023/A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction/a48b0d22-32b5-4318-8d85-fd66569fde26_model.json ADDED

The diff for this file is too large to render. See raw diff.

2023/A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction/a48b0d22-32b5-4318-8d85-fd66569fde26_origin.pdf ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:a106f7dfd4fd535683f09464f31725947746d3a3cd8d128a2a55b1158977623b
size 523864

2023/A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction/full.md ADDED
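The `version`/`oid`/`size` stubs in this diff are Git LFS pointer files (spec v1) that stand in for the large binaries, which are stored separately under their SHA-256 digest. A minimal parser sketch (the function name is ours):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file (spec v1) into algorithm, digest, and size."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "key value", separated by a single space.
        key, _, value = line.partition(" ")
        fields[key] = value
    if not fields.get("version", "").startswith("https://git-lfs.github.com/spec/v1"):
        raise ValueError("not a spec-v1 LFS pointer")
    algorithm, _, digest = fields["oid"].partition(":")
    return {"algorithm": algorithm, "oid": digest, "size": int(fields["size"])}
```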
# A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction

Ruoyu Zhang$^{1}$, Yanzeng Li$^{1}$, Lei Zou$^{1,2*}$

$^{1}$Wangxuan Institute of Computer Technology, Peking University, Beijing, China

$^{2}$TopGraph.AI

{ry_zhang, zoulei}@pku.edu.cn

liyanzeng@stu.pku.edu.cn

# Abstract

Document-level relation extraction (DocRE) aims to extract relations among entities within a document, which is crucial for applications such as knowledge graph construction. Existing methods usually assume that entities and their mentions are identified beforehand, which falls short of real-world applications. To overcome this limitation, we propose TAG, a novel table-to-graph generation model for joint extraction of entities and relations at the document level. To enhance the learning of task dependencies, TAG induces a latent graph among mentions, with different types of edges carrying different task information, which is further broadcast with a relational graph convolutional network. To alleviate the error propagation problem, we adapt the hierarchical agglomerative clustering algorithm to back-propagate task information at the decoding stage. Experiments on the benchmark dataset DocRED demonstrate that TAG surpasses previous methods by a large margin and achieves state-of-the-art results<sup>1</sup>.

# 1 Introduction

Relation extraction (RE) is the task of extracting relational facts from natural language text, which plays a crucial role in various downstream tasks, e.g. knowledge graph construction and question answering (Yih et al., 2015; Trisedya et al., 2019; Li and Zou, 2022). Early studies mostly focus on sentence-level RE, i.e. predicting relations among entities in a single sentence. However, in real-world scenarios such as Wikipedia articles or scientific papers, large amounts of relational facts are expressed across multiple sentences, which necessitates inter-sentence reasoning skills. Hence, recent efforts have been moving towards the more realistic document-level RE (DocRE) (Yao et al., 2019; Nan et al., 2020; Zhou et al., 2021).

<table><tr><td colspan="2">Juan Balboa Boneke (9 June 1938 – 10 March 2014) was an Equatorial Guinean politician and writer. ... After his exile, he settled down in Valencia with his second wife and her family. Balboa Boneke died from renal problems, coupled with a three-year depression caused by the death of his wife, on 10 March 2014 in Valencia, Spain.</td></tr><tr><td>Subject: Balboa Boneke</td><td>Object: Equatorial Guinean</td></tr><tr><td>Relation: country of citizenship</td><td></td></tr><tr><td>Subject: Balboa Boneke</td><td>Object: Valencia</td></tr><tr><td>Relation: place of death</td><td></td></tr><tr><td>Subject: Valencia</td><td>Object: Spain</td></tr><tr><td>Relation: country</td><td></td></tr></table>

Figure 1: An example adapted from the DocRED dataset. Mentions referring to the same entity are shown in the same color. We omit some relations and underline some entities for clarity.

Despite the rapid progress, most previous DocRE methods solely focus on the task of relation extraction, assuming that entities and their corresponding mentions are given beforehand. As shown in Figure 1, a natural idea for extracting both entities and relations at the document level is a pipeline approach: it first divides the whole task into the subtasks of mention extraction (ME), coreference resolution (COREF) and relation extraction (RE), then uses separate models to conduct each task step by step (Zaporojets et al., 2021). However, the pipeline framework ignores the underlying dependencies among subtasks, which may lead to suboptimal performance. Some progress on jointly considering the subtasks has been made (Eberts and Ulges, 2021; Xu and Choi, 2022); yet previous attempts still model the tasks of COREF and RE separately, inducing possible bias at both the encoding and decoding stages. On the one hand, these methods still suffer from a lack of information sharing: they either completely rely on the shared language model (e.g. BERT) at the representation level (Eberts and Ulges, 2021), or only consider one-way information flow from RE to COREF and neglect other cross-task dependencies (Xu and Choi, 2022). On the other hand, prior approaches mostly employ pipeline-style decoding, which first recognizes mention spans and forms entity clusters, then performs relation classification for each entity pair. Such a routine is not only time consuming, but also faces the error propagation problem (Li and Ji, 2014): the results of entity extraction may affect the performance of relation extraction and lead to cascading errors. Xu and Choi (2022) attempt to use a regularization term in the COREF scorer to mitigate this issue, but the problem is still not fully resolved.

In this work, we propose TAG, a novel table-to-graph generation model, to address the aforementioned challenges. We first unify the tasks of COREF and RE under the classic table filling framework (Miwa and Sasaki, 2014; Gupta et al., 2016). We then devise a table filler that encodes the original text and makes predictions for both tasks at a coarse level. Regarding mentions as nodes, we dynamically build two corresponding coreference and relation graphs, where the edges are weighted by the confidence scores of the table filler. Besides, to alleviate the long-term dependency problem as well as explicitly model syntactic information, we construct a syntactic graph over mentions. Given these three subgraphs, TAG regards them as three different types of edges and uses a relational graph convolutional network (RGCN, Schlichtkrull et al., 2018) to model implicit task dependencies at a fine level. Unlike previous multi-task systems that solely share span representations directly from the language model, our coarse-to-fine framework leverages rich node representations by propagating information through semantic and syntactic links.

Intuitively, mentions within the same entity cluster should establish similar relation links with other entities (Xu and Choi, 2022). To avoid the error propagation problem, we exploit this postulation and adapt the hierarchical agglomerative clustering (HAC) algorithm to cluster mentions. The core of HAC is the computation of a coreference distance between each cluster pair. To back-propagate relational information, we compute the relation vectors of nodes and use the average Hamming distance among different clusters as an additional penalty.
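A greedy sketch of this HAC variant, assuming precomputed pairwise coreference distances and per-mention binary relation vectors. The weighting factor `alpha`, the average linkage, and the stopping threshold are illustrative choices, not the paper's exact settings.

```python
def hamming(u, v):
    """Number of positions where two binary relation vectors disagree."""
    return sum(a != b for a, b in zip(u, v))

def cluster_distance(c1, c2, coref_dist, rel_vecs, alpha=1.0):
    """Average-linkage coreference distance between two mention clusters,
    plus a penalty from mismatched relation vectors."""
    pairs = [(i, j) for i in c1 for j in c2]
    coref = sum(coref_dist[i][j] for i, j in pairs) / len(pairs)
    penalty = sum(hamming(rel_vecs[i], rel_vecs[j]) for i, j in pairs) / len(pairs)
    return coref + alpha * penalty

def hac(n, coref_dist, rel_vecs, threshold=0.5):
    """Greedy agglomerative clustering over n mentions: repeatedly merge
    the closest cluster pair until no pair falls below the threshold."""
    clusters = [[i] for i in range(n)]
    while len(clusters) > 1:
        best = min(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda p: cluster_distance(clusters[p[0]], clusters[p[1]], coref_dist, rel_vecs),
        )
        d = cluster_distance(clusters[best[0]], clusters[best[1]], coref_dist, rel_vecs)
        if d > threshold:
            break
        merged = clusters[best[0]] + clusters[best[1]]
        clusters = [c for k, c in enumerate(clusters) if k not in best] + [merged]
    return clusters
```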
|
| 32 |
+
|
| 33 |
+
We evaluate TAG on DocRED (Yao et al., 2019), a widely-adopted DocRE benchmark. Experiments show that: (1) The coarse-grained table filler baseline establishes competitive results, as compared with previous methods. (2) The fine-grained information propagation module and enhanced HAC decoding algorithm can effectively promote cross-task interactions and better alleviate the error propagation problem. (3) Our proposed TAG achieves new state-of-the-art and outperforms prior approaches by a large margin. We also report the first result of joint entity and relation extraction on Re-DocRED (Tan et al., 2022), a revised version of DocRED, for future research.
|
| 34 |
+
|
| 35 |
+
Our contributions can be summarized as follow:
|
| 36 |
+
|
| 37 |
+
- We unify the tasks of COREF and RE in document-level joint entity and relation extraction with a table filling framework, and propose a novel table-to-graph generation method TAG to facilitate information sharing. During the decoding stage, we adapt the HAC algorithm to enhance COREF with RE predictions, thereby mitigating the issue of error propagation.
|
| 38 |
+
- We demonstrate that TAG surpasses previous methods and achieves new state-of-the-art results on the standard DocRE benchmark.
|
| 39 |
+
|
| 40 |
+
# 2 Problem Formulation
|
| 41 |
+
|
| 42 |
+
Given a document $D$ comprised of $L$ tokens, our goal is to jointly extract all entities and relations in an end-to-end manner. As an entity may occur multiple times in the document with different mentions, the joint extraction process can be naturally divided into three subtasks:
|
| 43 |
+
|
| 44 |
+
- Mention extraction (ME), which extracts all possible spans $\mathcal{M} = \{m_i\}_{i=1}^M$ for entities from original document, where a span is defined as a continuous sequence of words;
|
| 45 |
+
- Coreference resolution (COREF), which groups the local mentions into entity clusters $\mathcal{E} = \{e_i\}_{i=1}^E$ , where $e_i = \{m_j^i\}_{j=1}^{N_{e_i}}$ ;
|
| 46 |
+
- Relation extraction (RE), which predicts a subset of the pre-defined relation set $\mathcal{R} \cup \{\bot\}$ ( $\bot$ denotes no relation) for each entity pair $(e_h, e_t)_{h,t=1,\dots,E;\, h \neq t}$ .

Figure 2: Overall architecture of TAG. Given a document, it first conducts mention extraction separately, and then uses a table filler to predict coreference scores (purple matrix) and relational scores (blue matrix) at a coarse level. A mention graph with coreference, relational, and syntactic edges is then built, on which we leverage an R-GCN to propagate information. We predict the final results from the fine-level mention representations.

Unlike prior works, we formulate the tasks of COREF and RE with the table filling framework, i.e., multi-class classification over each mention pair $(m_i, m_j)$ . We maintain a table $T^{|M| \times |M|}$ to represent mention pairs and employ a shared representation for both tasks.

We assign a COREF label $y_{c}^{(i,j)} \in \{0,1\}$ and an RE label $y_{r}^{(i,j)} \subseteq \mathcal{R} \cup \{\bot\}$ to each cell in the table, respectively. For COREF, we use 1/0 to denote whether a mention pair belongs to the same entity. For RE, we transfer the entity-level labels to the mention level: mention pair $(m_i, m_j)$ is tagged with the same relations as its corresponding entities $(e_h, e_t)$ , where $m_i \in e_h$ , $m_j \in e_t$ .

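As an illustration of this label construction, the entity-level relations can be broadcast to every mention pair of the corresponding entities. This is a minimal sketch with toy identifiers of our own, not the authors' code:

```python
from itertools import product

def mention_level_labels(entities, entity_relations):
    """Broadcast entity-level relation labels to mention-level pairs.

    entities: list of mention-index lists, e.g. [[0, 2], [1]]
    entity_relations: dict mapping (head_entity, tail_entity) -> set of relations
    Returns a dict mapping (mention_i, mention_j) -> set of relations.
    """
    labels = {}
    for (h, t), rels in entity_relations.items():
        # every mention of the head entity pairs with every mention of the tail
        for mi, mj in product(entities[h], entities[t]):
            labels[(mi, mj)] = set(rels)
    return labels
```

For instance, with entities `[[0, 2], [1]]` and one entity-level relation between entity 0 and entity 1, both mention pairs `(0, 1)` and `(2, 1)` receive the same relation label.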
# 3 Methodology
Figure 2 shows the overall architecture of TAG. TAG first conducts ME to predict mention spans (§ 3.1), after which it jointly learns the tasks of COREF and RE with a table-to-graph generation model (§ 3.2). We detail the multi-task training process in § 3.3 and the enhanced decoding algorithm in § 3.4.

# 3.1 Mention Extractor
We cast the problem of entity mention extraction as a sequence tagging task with BIO labels. Though span-based methods are more prevalent due to their stronger expressive power, they usually demand $\mathcal{O}(L^2)$ time complexity, while sequence-based methods take only linear time. Since DocRE contains few overlapping mentions<sup>2</sup>, we adopt the sequential method for efficiency.

Following Devlin et al. (2019), we leverage a pretrained language model (PLM) to convert the tokens of the document into vectorized features, and use a classifier to predict the BIO label for each token. We denote the extracted mentions by $\{m_i\}_{i=1}^M$ .

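To make the tagging scheme concrete, the predicted BIO sequence can be decoded into mention spans with a single linear pass, matching the linear-time cost noted above. A minimal sketch (the `decode_bio` helper is ours, not part of the paper):

```python
def decode_bio(tags):
    """Decode a BIO tag sequence into (start, end) spans, inclusive."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":
            if start is not None:          # close the previous mention
                spans.append((start, i - 1))
            start = i
        elif tag == "I":
            if start is None:              # tolerate an I without a preceding B
                start = i
        else:  # "O"
            if start is not None:
                spans.append((start, i - 1))
                start = None
    if start is not None:                  # mention runs to the end of the text
        spans.append((start, len(tags) - 1))
    return spans
```

For example, `["B", "I", "O", "B", "B", "O"]` decodes to spans `(0, 1)`, `(3, 3)`, and `(4, 4)`.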
# 3.2 Table-to-Graph Generation
# 3.2.1 Biaffine Table Filler
Given a document $D = [w_i]_{i=1}^L$ and mentions $\{m_i\}_{i=1}^M$ , we build the table representation of each mention pair. We adopt the entity marker strategy (Baldini Soares et al., 2019), which inserts a special token “*” at the start and end of each mention. We then use a separate PLM<sup>3</sup> to obtain the contextual representations $\mathbf{H} = [\mathbf{h}_1, \dots, \mathbf{h}_L]^\top$ , $\mathbf{h}_i \in \mathbb{R}^d$ , and the multi-head attention $\mathbf{A} \in \mathbb{R}^{H \times L \times L}$ :

$$
\mathbf{H}, \mathbf{A} = \mathrm{PLM}([w_1, \ldots, w_L]),
$$
where $\mathbf{A}$ is the multi-head attention matrix in the last transformer layer. We take the embedding of the start token “*” as the mention embedding. To capture related context for mention pair $(m_i, m_j)$ , we apply the localized context pooling technique (Zhou et al., 2021) to compute the context embedding $\mathbf{c}^{(i,j)}$ :

$$
\mathbf{q}^{(i,j)} = \sum_{k=1}^{H} \mathbf{A}_k^i \circ \mathbf{A}_k^j,
$$

$$
\mathbf{c}^{(i,j)} = \mathbf{H}^{\top} \frac{\mathbf{q}^{(i,j)}}{\mathbf{1}^{\top} \mathbf{q}^{(i,j)}},
$$
where $\circ$ refers to the Hadamard product and $\mathbf{A}_k^i, \mathbf{A}_k^j \in \mathbb{R}^L$ are the attention weights of $m_i, m_j$ in the $k^{th}$ attention head, respectively. $\mathbf{c}^{(i,j)}$ is aggregated from tokens with high attention towards both $m_i$ and $m_j$ , and hence is likely to be important to both of them.
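The two pooling equations above can be sketched in a few lines of NumPy; `localized_context` is an illustrative name of ours, and the attention tensor layout is an assumption:

```python
import numpy as np

def localized_context(H, A, i_tok, j_tok):
    """Localized context pooling (after Zhou et al., 2021).

    H: (L, d) token features; A: (num_heads, L, L) last-layer attention.
    i_tok, j_tok: token positions of the two mention start markers.
    Returns the (d,) context embedding c^(i,j).
    """
    # q^(i,j) = sum_k A_k^i ∘ A_k^j : overlap of the two mentions' attention
    q = (A[:, i_tok, :] * A[:, j_tok, :]).sum(axis=0)   # (L,)
    q = q / q.sum()                                     # normalize: q / (1^T q)
    return H.T @ q                                      # (d,)
```

With uniform attention, the result reduces to the mean of the token features, which is a quick sanity check for the pooling weights.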
Let $\mathbf{h}_i, \mathbf{h}_j$ be the hidden features of $m_i, m_j$ from the PLM. We first project $\mathbf{h}_i, \mathbf{h}_j$ and $\mathbf{c}^{(i,j)}$ into head and tail features:

$$
\mathbf{z}_i^{(i,j)} = \tanh(\mathbf{W}_h \mathbf{h}_i + \mathbf{W}_{ch} \mathbf{c}^{(i,j)}),
$$

$$
\mathbf{z}_j^{(i,j)} = \tanh(\mathbf{W}_t \mathbf{h}_j + \mathbf{W}_{ct} \mathbf{c}^{(i,j)}),
$$
where $\mathbf{W}_h, \mathbf{W}_{ch}, \mathbf{W}_t, \mathbf{W}_{ct} \in \mathbb{R}^{d \times d}$ are trainable parameters. We then employ a biaffine attention model (Dozat and Manning, 2017; Wang et al., 2021) to convert mention features into a table $\mathbf{S} \in \mathbb{R}^{M \times M}$ of scalar scores denoting either coreference or relational links:
$$
s^{(i,j)} = \mathbf{z}_i^{(i,j)\top} \mathbf{W}_1 \mathbf{z}_j^{(i,j)} + \mathbf{w}_2^{\top} \big(\mathbf{z}_i^{(i,j)} \oplus \mathbf{z}_j^{(i,j)}\big) + b,
$$
where $\mathbf{W}_1 \in \mathbb{R}^{d\times d}$ , $\mathbf{w}_2 \in \mathbb{R}^{2d}$ , and $b \in \mathbb{R}$ are trainable parameters, and $\oplus$ denotes concatenation. We predict coreference and relational scores $\mathbf{S}_{tc}, \mathbf{S}_{tr}$ respectively with the shared representations $\mathbf{z}$ . Specifically, $s_{tr}^{(i,j)}$ is labeled with 1 if the RE label $y_{r}^{(i,j)} \neq \{\bot\}$ , and 0 otherwise.

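The scalar biaffine scorer above amounts to a bilinear term plus a linear term over the concatenated features. A hedged NumPy sketch, with parameter shapes following the text:

```python
import numpy as np

def biaffine_score(z_i, z_j, W1, w2, b):
    """Scalar biaffine score: z_i^T W1 z_j + w2^T (z_i ⊕ z_j) + b.

    z_i, z_j: (d,) head/tail features; W1: (d, d); w2: (2d,); b: scalar.
    """
    bilinear = z_i @ W1 @ z_j                 # z_i^T W1 z_j
    linear = w2 @ np.concatenate([z_i, z_j])  # w2^T (z_i ⊕ z_j)
    return float(bilinear + linear + b)
```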
# 3.2.2 Latent Graph Construction
Coreference and Relational Graphs. After obtaining the coreference and relational scores $\mathbf{S}_{tc}, \mathbf{S}_{tr}$ , we normalize each table column-wise:

$$
\mathbf{G}_c = \operatorname{Softmax}(\mathbf{S}_{tc}),
$$

$$
\mathbf{G}_r = \operatorname{Softmax}(\mathbf{S}_{tr}).
$$
We take $\mathbf{G}_c$ and $\mathbf{G}_r$ as the dynamic weighted graphs of coreference and relational links predicted by the previous modules. Each cell $g^{(i,j)}$ represents the weight of the directed edge $m_i \rightarrow m_j$ .

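A small sketch of this normalization; whether rows or columns should sum to one depends on the edge-direction convention, and here we follow the text and normalize each column:

```python
import numpy as np

def column_softmax(S):
    """Softmax over each column of a score table S of shape (M, M)."""
    e = np.exp(S - S.max(axis=0, keepdims=True))  # stabilize per column
    return e / e.sum(axis=0, keepdims=True)
```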
Syntactic Graph. To enhance the learning of structured knowledge underlying natural language, we seek to explicitly introduce syntactic information into the mention graph. Ideally, syntactic links can effectively encode local contexts, which can be further broadcast via coreference or relational links. This enables the model to learn long-term dependencies at a fine level.

There are several possible ways to build the desired syntactic graph. For instance, an intuitive solution is to transfer the dependency tree over words to a graph, with mentions as the nodes. Since the dependency tree only reveals intra-sentence clues, previous works (Christopoulou et al., 2019; Zeng et al., 2020) usually leverage co-occurrence information instead. Following this practice, our syntactic graph $\mathbf{G}_s$ connects all mentions within the same sentence using bidirectional edges.

# 3.2.3 Propagating Information with R-GCN
To capture the interactions between the tasks of COREF and RE, and to incorporate explicit syntactic information, we propose an information propagation module to refine the mention representations.

Specifically, we regard the latent graphs $\mathbf{G}_c$ , $\mathbf{G}_r$ , and $\mathbf{G}_s$ as three different types of edges over the mention graph. We then apply a relational graph convolutional network (R-GCN) to the mention graph to aggregate neighbor features along the different types of edges. Given node $\mathbf{x}_i$ at the $l^{th}$ layer, the update is calculated by

$$
\mathbf{x}_i^{(l+1)} = \tanh \Big( \sum_{t \in \{c, r, s\}} \sum_{j=1}^{M} g_t^{(i,j)} \mathbf{W}_t^{l} \mathbf{x}_j^{l} + \mathbf{b}_t^{l} \Big),
$$
where $t$ is the type of edge, $g_{t}^{(i,j)}$ represents the weight of the directed edge $m_i \rightarrow m_j$ , and $\mathbf{W}_t^l, \mathbf{b}_t^l$ are trainable parameters. We initialize the node embedding $\mathbf{x}_i^0$ as the hidden feature $\mathbf{h}_i$ of mention $m_{i}$ .

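One propagation step of the update rule above can be sketched with dense adjacency matrices; this is a simplification of R-GCN (no basis decomposition or self-loop weights), and the names are ours:

```python
import numpy as np

def rgcn_layer(X, graphs, weights, biases):
    """One R-GCN propagation step over typed, weighted edges.

    X: (M, d) node features; graphs: dict edge_type -> (M, M) weights g^(i,j);
    weights/biases: dict edge_type -> (d, d) matrix / (d,) vector.
    Implements x_i' = tanh( sum_t sum_j g_t^(i,j) W_t x_j + b_t ).
    """
    M, d = X.shape
    out = np.zeros((M, d))
    for t, G in graphs.items():
        # row i of G @ (X @ W_t.T) aggregates g_t^(i,j) * (W_t x_j) over j
        out += G @ (X @ weights[t].T) + biases[t]
    return np.tanh(out)
```

Stacking $N$ such calls gives the $N$-layer propagation used below; in Section 4.4, 2 or 3 layers work best.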
In contrast to previous joint IE approaches<sup>4</sup>, which either propagate task information in a pipeline manner (DYGIE, Luan et al., 2019) or only consider a one-way information flow (Xu and Choi, 2022), our module integrates cross-task information in parallel and extracts relevant mention features for both tasks.

# 3.2.4 Classifier
After $N$ rounds of propagation, we use the refined mention embeddings $\mathbf{x}_i^N, \mathbf{x}_j^N$ and the context embedding $\mathbf{c}^{(i,j)}$ to predict the COREF score $s_{gc}^{(i,j)}$ and the RE score $s_{gr}^{(i,j)}$ :

$$
\mathbf{v}_i^{(i,j)} = \tanh\big(\mathbf{U}_h \mathbf{x}_i^N + \mathbf{U}_{ch} \mathbf{c}^{(i,j)}\big),
$$

$$
\mathbf{v}_j^{(i,j)} = \tanh\big(\mathbf{U}_t \mathbf{x}_j^N + \mathbf{U}_{ct} \mathbf{c}^{(i,j)}\big),
$$

$$
s_{gc}^{(i,j)} = \operatorname{CorefBiaff}\big(\mathbf{v}_i^{(i,j)}, \mathbf{v}_j^{(i,j)}\big),
$$

$$
s_{gr}^{(i,j)} = \operatorname{ReBiaff}\big(\mathbf{v}_i^{(i,j)}, \mathbf{v}_j^{(i,j)}\big),
$$
where $\mathbf{U}_h, \mathbf{U}_{ch}, \mathbf{U}_t, \mathbf{U}_{ct} \in \mathbb{R}^{d \times d}$ are trainable parameters, and the $n$ -dimensional biaffine function is defined as
$$
\operatorname{Biaff}(\mathbf{x}, \mathbf{y}) := \mathbf{x}^{\top} \mathbf{U}_1 \mathbf{y} + \mathbf{U}_2 (\mathbf{x} \oplus \mathbf{y}) + \mathbf{b},
$$
where $\mathbf{U}_1 \in \mathbb{R}^{n\times d\times d}$ , $\mathbf{U}_2 \in \mathbb{R}^{n\times 2d}$ , and $\mathbf{b} \in \mathbb{R}^n$ are trainable parameters. Note that $n = 1$ for the task of COREF and $n = |\mathcal{R}| + 1$ for RE, where we use a dummy class TH to learn a dynamic threshold for multi-label classification (Zhou et al., 2021). At test time, relation types with scores higher than the TH class are predicted as the output $\hat{y}_r^{(i,j)}$ . If no such class exists, the classifier returns $\{\bot \}$ .

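Decoding with the learned threshold class can be sketched as follows; `"no_relation"` stands in for $\bot$ , and the dictionary layout is an assumption of ours:

```python
def decode_relations(scores, th_key="TH"):
    """Adaptive-threshold decoding: keep relation types scoring above the
    learned TH class; if none does, predict the no-relation label."""
    threshold = scores[th_key]
    preds = {r for r, s in scores.items() if r != th_key and s > threshold}
    return preds if preds else {"no_relation"}
```

For example, with scores `{"TH": 0.0, "located_in": 1.2, "part_of": -0.5}` only `located_in` survives the threshold.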
# 3.3 Training
We perform multi-task training and optimize the joint loss for all components. We detail the training objectives and label construction for each module as follows.
Table Encoder. Given mention pair $(m_i, m_j)$ , the table encoder predicts coreference and relational links in the form of scalar scores $s_{tc}^{(i,j)}$ , $s_{tr}^{(i,j)}$ . For coreference links, we directly use the COREF label $y_c^{(i,j)}$ as the gold label. For relational links, we define $y_{rbinary}^{(i,j)} := \mathbb{1}(y_r^{(i,j)} \neq \{\bot\})$<sup>5</sup>, denoting whether any relation $(e_h, r, e_t)$ exists with $m_i \in e_h, m_j \in e_t$ . We convert $\mathbf{S}_{tc}, \mathbf{S}_{tr}$ to probabilities with the sigmoid function $\sigma$ and optimize with binary cross-entropy losses $\mathcal{L}_{tc}, \mathcal{L}_{tr}$ .

Coreference Resolution. The training objective and labels for fine-level coreference resolution are identical to those for coreference link prediction in the table encoder. The sole difference is that it takes the refined mention representations as input. We denote this loss as $\mathcal{L}_{gc}$ .

Relation Extraction. For $(m_i, m_j)$ , we divide the relation set $\mathcal{R}$ into two splits: the positive set $\mathcal{P}$ of relations $x$ that exist between $(m_i, m_j)$ , and the negative set $\mathcal{N} = \mathcal{R} - \mathcal{P}$ . We apply the adaptive thresholding loss (Zhou et al., 2021) to learn the RE classifier:

$$
\begin{aligned} l^{(i,j)} = &- \sum_{x \in \mathcal{P}} \log \left( \frac{\exp \big(s_x^{(i,j)}\big)}{\sum_{x' \in \mathcal{P} \cup \{\mathrm{TH}\}} \exp \big(s_{x'}^{(i,j)}\big)} \right) \\ &- \log \left( \frac{\exp \big(s_{\mathrm{TH}}^{(i,j)}\big)}{\sum_{x' \in \mathcal{N} \cup \{\mathrm{TH}\}} \exp \big(s_{x'}^{(i,j)}\big)} \right), \end{aligned}
$$

and we sum over all mention pairs to calculate the fine-level relation extraction loss $\mathcal{L}_{gr}$ .

# Algorithm 1: HAC Decoding Algorithm

Input: Mention set $\mathcal{M}$ , threshold $t$

Output: A set of entity clusters $\mathcal{C}$

// Initialization

1 for $m_i\in \mathcal{M}$ do
2 $C_i\gets \{m_i\}$

// Recursively merge clusters

3 repeat
4 for $C_x, C_y \in \mathcal{C}, C_x \neq C_y$ do
5 $D^{(x,y)}\gets D_c^{(x,y)} + \rho \cdot D_r^{(x,y)}$
6 $(C_x,C_y)\gets \arg \min_{(C_x,C_y)}D^{(x,y)}$
7 $D_{\min}\gets D^{(x,y)}$
8 if $D_{\min} \leqslant t$ then
9 Merge $C_x$ and $C_y$
10 until $D_{\min} > t$

Finally, we jointly optimize TAG with
$$
\mathcal{L} = \mathcal{L}_{tc} + \mathcal{L}_{tr} + \alpha \cdot (\mathcal{L}_{gc} + \mathcal{L}_{gr}),
$$
where $\alpha$ is a hyperparameter balancing the coarse-level and fine-level losses.

# 3.4 Decoding
To avoid the error propagation problem inherent in pipeline decoding, we design a decoding algorithm such that the upstream task (COREF) can efficiently utilize information from the downstream task (RE).

Entity Cluster Decoding. We decode entity clusters with the hierarchical agglomerative clustering (HAC) algorithm, as described in Algorithm 1. The core of HAC is measuring the distance $D$ between two clusters $C_x$ and $C_y$ . We break $D$ down into two parts: a coreference distance $D_c$ and a relational distance $D_r$ . We compute $D_c$ with average linkage:

$$
D_c = \frac{1}{|C_x| \cdot |C_y|} \sum_{m_i \in C_x} \sum_{m_j \in C_y} \big(1 - \sigma(s_{gc}^{(i,j)})\big).
$$
<table><tr><td rowspan="2">Method</td><td rowspan="2">Encoder</td><td rowspan="2">ME</td><td rowspan="2">COREF</td><td colspan="2">RE</td></tr><tr><td>F1</td><td>Ign F1</td></tr><tr><td>KB-IE (Verlinden et al., 2021)</td><td>LSTM</td><td>-</td><td>83.6</td><td>25.7</td><td>-</td></tr><tr><td>JEREX (Eberts and Ulges, 2021)</td><td>BERT-base</td><td>92.99*</td><td>82.79*</td><td>40.38*</td><td>-</td></tr><tr><td>seq2rel (Giorgi et al., 2022)</td><td>BERT-base</td><td>-</td><td>-</td><td>38.2*</td><td>-</td></tr><tr><td>Pipeline (Xu and Choi, 2022)</td><td>SpanBERT-base</td><td>92.56</td><td>84.09</td><td>38.29</td><td>35.88</td></tr><tr><td>Joint (Xu and Choi, 2022)</td><td>SpanBERT-base</td><td>93.34</td><td>84.79</td><td>38.94</td><td>36.64</td></tr><tr><td>JointM+GPGC (Xu and Choi, 2022)</td><td>SpanBERT-base</td><td>93.35</td><td>84.96</td><td>40.62</td><td>38.28</td></tr><tr><td rowspan="2">TABLEFILLER</td><td>BERT-base</td><td>93.56 / 92.89</td><td>84.77 / 84.34</td><td>40.92 / 39.10</td><td>39.09 / 37.30</td></tr><tr><td>RoBERTa-base</td><td>93.63 / 92.95</td><td>85.87 / 85.49</td><td>42.00 / 40.92</td><td>40.09 / 38.97</td></tr><tr><td rowspan="2">TAG</td><td>BERT-base</td><td>93.56 / 92.89</td><td>85.07 / 84.75</td><td>41.87 / 40.65</td><td>39.82 / 38.27</td></tr><tr><td>RoBERTa-base</td><td>93.63 / 92.95</td><td>86.03 / 85.67</td><td>43.16 / 42.28</td><td>41.13 / 40.28</td></tr><tr><td>TAG</td><td>RoBERTa-large</td><td>93.84 / 93.32</td><td>86.37 / 85.87</td><td>44.97 / 43.21</td><td>42.88 / 41.22</td></tr></table>

Table 1: Overall performance on DocRED. Previous methods only report results on the test set, while we report results on both the test and dev sets, respectively. In particular, JEREX and seq2rel use a custom split of DocRED, so their results are not directly comparable and serve only for reference.

At the training stage, the ground-truth relations $y_{r}^{(i,k)}$ and $y_{r}^{(j,k)}$ are identical if $m_{i}$ and $m_{j}$ belong to the same entity, for all $m_{k} \in \mathcal{M}$ . Therefore, for a well-trained model, mentions within the same entity cluster should establish similar relation links with other entities. We exploit this clue as the connection between COREF and RE. Let the predicted RE label $\hat{y}_r^{(i,j)}$ be a $|\mathcal{R}|$ -dimensional 0-1 vector, where each digit indicates the presence of one relation type. We define the relation vector $\mathbf{r}_i\in \mathbb{R}^{2M\times |\mathcal{R}|}$ as
$$
\mathbf{r}_i = [\hat{y}_r^{(i,1)}, \dots, \hat{y}_r^{(i,M)}, \hat{y}_r^{(1,i)}, \dots, \hat{y}_r^{(M,i)}]^{\top}.
$$
We use the averaged Hamming distance between mention pairs across the clusters $C_x, C_y$ as $D_r$ :

$$
D_r = \frac{1}{|C_x| \cdot |C_y|} \sum_{m_i \in C_x} \sum_{m_j \in C_y} \sigma\big(\operatorname{Hamming}(\mathbf{r}_i, \mathbf{r}_j)\big).
$$
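Putting Algorithm 1 and the two distances together, a greedy agglomerative sketch follows. This is a naive $\mathcal{O}(M^3)$ illustration with names of our own, not the authors' implementation:

```python
import numpy as np

def hac_decode(coref_prob, rel_vecs, rho=0.1, t=0.5):
    """HAC over mentions with combined distance D_c + rho * D_r.

    coref_prob: (M, M) sigmoid coreference scores sigma(s_gc);
    rel_vecs: (M, K) 0-1 relation vectors r_i; rho, t as in Algorithm 1.
    Returns a list of mention-index clusters.
    """
    clusters = [[i] for i in range(coref_prob.shape[0])]

    def dist(cx, cy):
        pairs = [(i, j) for i in cx for j in cy]
        d_c = np.mean([1 - coref_prob[i, j] for i, j in pairs])
        # sigmoid of the Hamming distance between relation vectors
        d_r = np.mean([1 / (1 + np.exp(-np.sum(rel_vecs[i] != rel_vecs[j])))
                       for i, j in pairs])
        return d_c + rho * d_r

    while len(clusters) > 1:
        # find the closest pair of clusters
        best = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda ab: dist(clusters[ab[0]], clusters[ab[1]]))
        if dist(clusters[best[0]], clusters[best[1]]) > t:
            break
        clusters[best[0]] += clusters.pop(best[1])  # merge C_x and C_y
    return clusters
```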
Relation Triple Decoding. Given two entities $e_1$ and $e_2$ , we predict their relation labels with a majority voting mechanism. For relation $x$ , the final prediction is determined by

$$
\hat{y}_x^{(e_1, e_2)} = \mathbb{1} \Big( \Big( \sum_{m_i \in e_1} \sum_{m_j \in e_2} \hat{y}_x^{(i,j)} \Big) > \frac{|e_1| \cdot |e_2|}{2} \Big).
$$
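The voting rule reads: a relation holds between two entities iff it is predicted for more than half of their mention pairs. A minimal sketch with hypothetical names:

```python
def vote_relation(pred, mentions_e1, mentions_e2):
    """Entity-level decision for one relation type by majority vote.

    pred: dict (m_i, m_j) -> bool, whether the relation was predicted
    for that mention pair; missing pairs count as negative votes.
    """
    votes = sum(bool(pred.get((i, j)))
                for i in mentions_e1 for j in mentions_e2)
    return votes > len(mentions_e1) * len(mentions_e2) / 2
```

Note that with the strict inequality, an exact tie (half the pairs) yields a negative prediction.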
# 4 Experiments
# 4.1 Setup
Dataset. We evaluate TAG on DocRED (Yao et al., 2019) and Re-DocRED (Tan et al., 2022). DocRED is a large-scale human-annotated dataset for DocRE constructed from Wikipedia and Wikidata. It covers a wide range of documents from the general domain, with 3,053 documents for training, 1,000 for development, and 1,000 for test. DocRED contains 96 relation types, 132,375 entities, and 63,427 relation instances. Since the original dataset is incompletely annotated, i.e. it contains a considerable number of false negative samples, Tan et al. (2022) provide a revised version, Re-DocRED, of the training and validation sets, with 120,664 relation instances. Notably, we report the first joint extraction result on Re-DocRED for future reference.

Metrics. Following prior works (Eberts and Ulges, 2021; Xu and Choi, 2022), we report the performance of all three subtasks for detailed analysis. Specifically, our results include (1) mention extraction (ME) in mention-level F1 score, (2) coreference resolution (COREF) in the averaged F1 score of MUC, $\mathbf{B}^3$ , and $\mathrm{CEAF}_{\phi_4}$ , and (3) relation extraction (RE) in hard entity-level F1 and Ign F1 scores, where Ign F1 excludes the relational facts shared by the training and validation/test sets.

# 4.2 Overall Performance
Baselines. We compare TAG with various baselines for joint extraction. Early approaches take an LSTM as the context encoder. Built on top of it, Verlinden et al. (2021) introduce KB-IE, which integrates background information from knowledge bases (Wikipedia and Wikidata) into a joint IE model. Recent methods usually finetune a PLM to learn richer features. Xu and Choi (2022) implement the standard pipeline method, as well as a joint method with a shared encoder and joint loss. They also propose JointM+GPGC to enable a one-way information flow from RE to COREF. Eberts and Ulges (2021) present JEREX, which incorporates multi-instance learning to enhance RE performance. Giorgi et al. (2022) develop a sequence-to-sequence model with a copy mechanism, seq2rel, with inferior performance but higher efficiency. Besides, we also devise a strong baseline, TABLEFILLER, which ablates the graph module and adopts a simple heuristic decoding algorithm, i.e. it only comprises a mention extractor, a biaffine encoder, and a classifier.

<table><tr><td rowspan="2">Method</td><td rowspan="2">ME</td><td rowspan="2">COREF</td><td colspan="2">RE</td></tr><tr><td>F1</td><td>Ign F1</td></tr><tr><td>TABLEFILLER</td><td>93.42</td><td>86.27</td><td>48.35</td><td>47.30</td></tr><tr><td>TAG</td><td>93.42</td><td>86.49</td><td>49.34</td><td>48.21</td></tr><tr><td>TABLEFILLER</td><td>92.91</td><td>85.25</td><td>48.94</td><td>48.02</td></tr><tr><td>TAG</td><td>92.91</td><td>85.61</td><td>49.38</td><td>48.47</td></tr></table>

Table 2: Performance on Re-DocRED, which takes RoBERTa<sub>base</sub> as encoder. The former/latter two lines denote results on the dev/test set, respectively.

Figure 3: Recall of relations over different numbers of evidence sentences. We compare TAG and TABLEFILLER with RoBERTa<sub>base</sub> on the DocRED dev set.

Table 1 depicts the overall performance of TAG on DocRED, in comparison to the other baselines. We observe that TABLEFILLER-BERT<sub>base</sub> marginally outperforms previous methods and establishes a competitive basis, which demonstrates the efficacy of the table filling framework. TAG-BERT<sub>base</sub> further advances it with consistent improvements on all three subtasks. Following Xu and Choi (2022), we replace BERT<sub>base</sub> with a stronger variant of the same size, RoBERTa<sub>base</sub>. TAG-RoBERTa<sub>base</sub> attains substantial improvements of 1.07 in COREF F1 and 2.54/2.85 in RE F1/Ign F1 over the previous SOTA on the test set. This suggests that TAG is better at capturing important information within the document-level context and across different subtasks. We also present TAG-RoBERTa<sub>large</sub> to explore the boundaries of joint extraction performance, which reaches 93.84 in ME F1, 86.37 in COREF F1, and 44.97/42.88 in RE F1/Ign F1 on the test set.

Figure 4: Relation extraction F1 of TAG variants with different numbers of graph layers on the DocRED dev set.

Table 2 shows the performance of TABLEFILLER and TAG on Re-DocRED. In comparison to DocRED, the same methods yield similar performance in coreference resolution, but improve by a large margin in relation extraction, which aligns with previous findings (Tan et al., 2022). Regarding the difference in architectures, TAG consistently outperforms TABLEFILLER on all subtasks on both the dev and test sets, highlighting the effectiveness of TAG for document-level joint extraction.

# 4.3 Analysis on Reasoning Skills
A major challenge for document-level RE is the requirement of rich reasoning skills, e.g. commonsense reasoning and logical reasoning (Yao et al., 2019). One indicator of the reasoning type is the number of evidence sentences. To understand the merits of TAG, we visualize the recall of relations over different numbers of evidence sentences in Figure 3.

Relation instances with zero evidence sentences can only be inferred from commonsense knowledge, either from the PLM or the training corpus. TAG outperforms TABLEFILLER on such instances by $1.8\%$ with the same encoder, which demonstrates a stronger ability of commonsense reasoning. TAG also consistently surpasses TABLEFILLER on the large number of relations with 2-4 evidence sentences, which require either (1) distinguishing coreferential mentions across multiple sentences, or (2) performing logical reasoning over bridge entities. This reveals that the graph module and decoding algorithm are beneficial for both coreference reasoning and multi-hop logical reasoning. Finally, TAG substantially improves the recall of relations that require much evidence (6.0% for 5 sentences and 8.3% for more than 6 sentences), indicating that TAG is superior at complex logical reasoning.

<table><tr><td>ρ</td><td>0</td><td>0.05</td><td>0.1</td><td>0.2</td><td>0.3</td></tr><tr><td>Averaged F1</td><td>85.36</td><td>85.46</td><td>85.67</td><td>85.51</td><td>85.44</td></tr><tr><td>Hard F1</td><td>82.75</td><td>82.81</td><td>83.06</td><td>82.92</td><td>82.73</td></tr></table>

Table 3: F1 scores of TAG-RoBERTa<sub>base</sub> on the DocRED dev set with different values of the hyperparameter ρ.

<table><tr><td></td><td>silver</td><td>score<sub>c</sub></td><td>score<sub>r</sub></td></tr><tr><td>silver</td><td>1.00</td><td>0.91</td><td>-0.72</td></tr><tr><td>score<sub>c</sub></td><td>-</td><td>1.00</td><td>-0.74</td></tr><tr><td>score<sub>r</sub></td><td>-</td><td>-</td><td>1.00</td></tr></table>

Table 4: Pearson's $r$ for the silver COREF label, coreference score, and relational penalty on the DocRED dev set. See Appendix B for details.

# 4.4 The Impact of Graph Propagation
Figure 4 shows the effect of graph propagation on the relation extraction F1 score, where -Coref, -Rel, and -Syntax denote the removal of the corresponding type of edge. The F1 scores of all models usually peak at 2 or 3 graph layers and then decrease drastically. We hypothesize that greater depth facilitates the dissemination of information to a broader range, whereas the vanishing gradient problem counteracts this advantage (Li et al., 2019). Besides, all ablation models perform worse than TAG with full channels, indicating that all types of edges contribute to better reasoning.

While the depth of layers and the types of edges influence RE F1 dramatically, these settings have little impact on coreference resolution. We dive deeper into this question in the following subsection.

# 4.5 Effectiveness of Decoding
To verify the effectiveness of our entity cluster decoding algorithm, we compare the performance of coreference resolution with different values of the balancing hyperparameter $\rho$ in Table 3. Apart from the averaged F1 score of MUC, $\mathbf{B}^3$ , and $\mathrm{CEAF}_{\phi_4}$ , we also report the hard entity-level F1 score to transparently demonstrate entity extraction performance. We find that $\rho = 0.1$ yields the best performance, with a $0.3\%$ F1 gain on both metrics.

Although the performance of the HAC decoding algorithm is boosted by the relational distance $D_{r}$ , the observed improvement is not as substantial as anticipated. Besides, adjusting $\rho$ does not have much influence either. These findings indicate that coreference resolution is fairly robust across these settings. To understand this phenomenon, we conduct a correlation analysis among the silver COREF label and the predicted scores, as shown in Table 4. While there exists a significant correlation of -0.72 between the relational penalty and the silver label, it is still well below the correlation between the coreference score and the silver label. This strong association partially accounts for the aforementioned results. It further shows that $D_{r}$ can only serve as a modest refining signal for coreference resolution, and increasing $\rho$ above the threshold may hurt COREF performance.

# 5 Related Works
Document-level extraction and joint extraction are two important topics in the field of IE. Our work lies at the intersection of these two lines: it jointly extracts entities and relations, two core elements of IE, at the document level.

Document-level RE. Current methods in DocRE can be mainly divided into two categories: (1) graph-based methods, which first construct a document graph of heterogeneous nodes (e.g. mention, entity, sentence) with heuristic rules, and then use a GNN to perform inference on the graph (Christopoulou et al., 2019; Nan et al., 2020; Zeng et al., 2020); and (2) transformer-based methods, which exploit pretrained language models to learn cross-sentence relations either implicitly or explicitly. Various techniques have been proposed, e.g. adaptive thresholding (Zhou et al., 2021) and evidence retrieval (Huang et al., 2021; Xie et al., 2022). Recently, pioneers have attempted to develop end-to-end models that extract entities and relations jointly at the document level, which is more practical and brings more challenges (Eberts and Ulges, 2021; Xu and Choi, 2022; Giorgi et al., 2022).

Joint information extraction. Early studies usually model joint IE in a pipeline manner (Chan and Roth, 2011; Luan et al., 2019), which ignores the underlying correlations among tasks and suffers from cascading errors and exposure bias. To address these problems, one line of recent research seeks to integrate multiple subtasks by sharing information and building implicit cross-task interactions (Zhang et al., 2020; Yan et al., 2021). In another direction, the table filling strategy has been developed, which casts the subtasks (usually NER and RE) as a unified table to fill, explicitly leveraging the interactions among subtasks (Miwa and Sasaki, 2014; Gupta et al., 2016; Wang et al., 2021).

# 6 Conclusion
In this paper, we propose TAG, a novel table-to-graph generation model, to jointly extract entities and relations from a document. Different from prior approaches, we unify the tasks of coreference resolution and relation extraction with a table filling framework, and leverage a coarse-to-fine strategy to facilitate information sharing between these subtasks. To avoid the error propagation problem, we adapt the HAC algorithm to enhance COREF with RE predictions at the decoding stage. Experimental results on the widely adopted benchmark DocRED demonstrate that TAG significantly outperforms previous methods. Further analysis also confirms the effectiveness of the modules in our model.

# Limitations
One major limitation of our work is that our experiments are conducted only on DocRED and Re-DocRED, which consist of documents from the general domain. Yet, information extraction has many broader applications in specific domains, e.g. biomedical data. We plan to adapt TAG to biomedical datasets, such as CDR (Li et al., 2016) and GDA (Wu et al., 2019), in the future.

Besides, since TAG consists of a number of modules and use PLM as encoder, the training process takes relatively more time and computational resources than dedicated DocRE model that only extract relations. We concern that it may affect the scalability with larger amount of either data or parameters.

# Ethics Statement

We use DocRED and Re-DocRED in our experiments, and we adhere to their user agreements and licenses. These datasets are constructed from Wikipedia, which we expect to contain little offensive content or leaked private information.

We note that our system may generate false results due to the nature of neural networks, and may be biased under domain shift or on out-of-distribution inputs. Appropriate quality control is therefore needed in downstream applications, such as knowledge base construction.

# Acknowledgements

We thank the reviewers for their valuable comments, which helped us improve this manuscript. This work was supported by NSFC under grants 61932001 and U20A20174. Lei Zou is the corresponding author of this paper.

# References

Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2895-2905, Florence, Italy. Association for Computational Linguistics.

Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 551-560, Portland, Oregon, USA. Association for Computational Linguistics.

Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2019. Connecting the dots: Document-level neural relation extraction with edge-oriented graphs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4925-4936, Hong Kong, China. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In International Conference on Learning Representations.

Markus Eberts and Adrian Ulges. 2021. An end-to-end model for entity-level relation extraction using multi-instance learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3650-3660, Online. Association for Computational Linguistics.

John Giorgi, Gary Bader, and Bo Wang. 2022. A sequence-to-sequence approach for document-level relation extraction. In Proceedings of the 21st Workshop on Biomedical Language Processing, pages 10-25, Dublin, Ireland. Association for Computational Linguistics.

Pankaj Gupta, Hinrich Schütze, and Bernt Andrassy. 2016. Table filling multi-task recurrent neural network for joint entity and relation extraction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2537-2547, Osaka, Japan. The COLING 2016 Organizing Committee.

Kevin Huang, Peng Qi, Guangtao Wang, Tengyu Ma, and Jing Huang. 2021. Entity and evidence guided document-level relation extraction. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 307-315, Online. Association for Computational Linguistics.

Guohao Li, Matthias Muller, Ali Thabet, and Bernard Ghanem. 2019. DeepGCNs: Can GCNs go as deep as CNNs? In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).

Jiao Li, Yueping Sun, Robin J. Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J. Mattingly, Thomas C. Wiegers, and Zhiyong Lu. 2016. BioCreative V CDR task corpus: a resource for chemical disease relation extraction. Database, 2016. Baw068.

Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 402-412, Baltimore, Maryland. Association for Computational Linguistics.

Yanzeng Li and Lei Zou. 2022. gBuilder: A scalable knowledge graph construction system for unstructured corpus.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach.

Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.

Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036-3046, Minneapolis, Minnesota. Association for Computational Linguistics.

Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1858-1869, Doha, Qatar. Association for Computational Linguistics.

Guoshun Nan, Zhijiang Guo, Ivan Sekulic, and Wei Lu. 2020. Reasoning with latent structure refinement for document-level relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1546-1557, Online. Association for Computational Linguistics.

Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In *The Semantic Web*, pages 593-607, Cham. Springer International Publishing.

Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng, and Sharifah Mahani Aljunied. 2022. Revisiting DocRED - addressing the false negative problem in relation extraction.

Bayu Distiawan Trisedya, Gerhard Weikum, Jianzhong Qi, and Rui Zhang. 2019. Neural relation extraction for knowledge base enrichment. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 229-240, Florence, Italy. Association for Computational Linguistics.

Severine Verlinden, Klim Zaporojets, Johannes Deleu, Thomas Demeester, and Chris Develder. 2021. Injecting knowledge base information into end-to-end joint entity and relation extraction and coreference resolution. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1952-1957, Online. Association for Computational Linguistics.

Yijun Wang, Changzhi Sun, Yuanbin Wu, Hao Zhou, Lei Li, and Junchi Yan. 2021. UniRE: A unified label space for entity relation extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 220-231, Online. Association for Computational Linguistics.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. HuggingFace's Transformers: State-of-the-art natural language processing.

Ye Wu, Ruibang Luo, Henry C. M. Leung, Hing-Fung Ting, and Tak-Wah Lam. 2019. RENET: A deep learning approach for extracting gene-disease associations from literature. In Research in Computational Molecular Biology, pages 272-284, Cham. Springer International Publishing.

Yiqing Xie, Jiaming Shen, Sha Li, Yuning Mao, and Jiawei Han. 2022. Eider: Empowering document-level relation extraction with efficient evidence extraction and inference-stage fusion. In *Findings of the Association for Computational Linguistics: ACL 2022*, pages 257-268, Dublin, Ireland. Association for Computational Linguistics.

Liyan Xu and Jinho Choi. 2022. Modeling task interactions in document-level joint entity and relation extraction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5409-5416, Seattle, United States. Association for Computational Linguistics.

Zhiheng Yan, Chong Zhang, Jinlan Fu, Qi Zhang, and Zhongyu Wei. 2021. A partition filter network for joint entity and relation extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 185-197, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. DocRED: A large-scale document-level relation extraction dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 764-777, Florence, Italy. Association for Computational Linguistics.

Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1321-1331, Beijing, China. Association for Computational Linguistics.

Klim Zaporojets, Johannes Deleu, Chris Develder, and Thomas Demeester. 2021. DWIE: An entity-centric dataset for multi-task document-level information extraction. Information Processing & Management, 58(4):102563.

Shuang Zeng, Runxin Xu, Baobao Chang, and Lei Li. 2020. Double graph based reasoning for document-level relation extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1630-1640, Online. Association for Computational Linguistics.

Ranran Haoran Zhang, Qianying Liu, Aysa Xuemo Fan, Heng Ji, Daojian Zeng, Fei Cheng, Daisuke Kawahara, and Sadao Kurohashi. 2020. Minimize exposure bias of Seq2Seq models in joint entity and relation extraction. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 236-246, Online. Association for Computational Linguistics.

Wenxuan Zhou, Kevin Huang, Tengyu Ma, and Jing Huang. 2021. Document-level relation extraction with adaptive thresholding and localized context pooling. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16):14612-14620.

# A Implementation

Our model is implemented based on PyTorch and HuggingFace's Transformers (Wolf et al., 2019). We use BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) of different sizes as the PLM encoder, and stack 2/3 layers of R-GCN for graph propagation, depending on the setting and dataset. The hyperparameters $\alpha$ and $\rho$ for training and decoding are set to 1 and 0.1, respectively. We optimize our model using AdamW (Loshchilov and Hutter, 2019) with a learning rate of 3e-5 for the PLM and 1e-4 for the other parameters, under a linear warmup for the first $4\%$ of steps. We train our model with a batch size of 4 for 50 epochs, which takes $\sim 5$ hours on a single A40 GPU. We use an early stopping strategy for efficiency.
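The optimizer setup above (separate learning rates for the PLM encoder and the remaining modules, plus a linear warmup over the first 4% of steps) can be sketched as follows. This is a minimal illustration, not the authors' code; the `encoder`/`head` modules are stand-ins for the real model components.

```python
# Hedged sketch: AdamW with two parameter groups and a linear warmup/decay
# schedule, mirroring the hyperparameters described in Appendix A.
import torch
from torch.optim import AdamW

encoder = torch.nn.Linear(8, 8)   # stand-in for the PLM encoder
head = torch.nn.Linear(8, 2)      # stand-in for the task-specific modules

optimizer = AdamW([
    {"params": encoder.parameters(), "lr": 3e-5},   # PLM parameters
    {"params": head.parameters(), "lr": 1e-4},      # other parameters
])

total_steps = 1000
warmup_steps = int(0.04 * total_steps)  # linear warmup for the first 4% of steps

def lr_lambda(step: int) -> float:
    """Multiplicative LR factor: ramp up over warmup, then decay linearly."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```

In a training loop, `scheduler.step()` would be called once per optimizer step so each group's base learning rate is scaled by the same warmup factor.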

All experiments are conducted with 3 random seeds, and we report: (1) the result of the model with the best dev score for the DocRED test set, since the evaluation is organized as a Codalab competition $^6$ , and (2) the average result of all three runs for the DocRED dev set and Re-DocRED.

# B Details for Correlation Analysis

We conduct the correlation analysis on the dev set of DocRED with TAG-RoBERTa<sub>base</sub>. The variables are constructed as follows:

- Silver. Given predicted mention spans, we assign silver label 1 for mentions that occur within the same gold entity, and 0 otherwise.
- $\mathbf{Score}_c$ . The probability of the coreference link, $\sigma(s_{gc})$ .
- $\mathbf{Score}_r$ . $|\mathbf{s}_r^i - \mathbf{s}_r^j|_1$ , which serves as a pairwise estimation of the Hamming distance. In particular, $\mathbf{s}_r^i$ is defined as

$$
\left[ s_{gr}^{(i, 1)}, \ldots, s_{gr}^{(i, M)}, s_{gr}^{(1, i)}, \ldots, s_{gr}^{(M, i)} \right]^{\top}.
$$

We then compute the Pearson correlation coefficients of these variables, and the results are shown in Table 4.
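The analysis above can be illustrated with a small sketch using toy numbers (not the paper's data): Silver is 1 when two predicted mentions belong to the same gold entity, Score_r is the L1 distance between the mentions' relation-score vectors, and we correlate these variables with a plain Pearson coefficient.

```python
# Hedged sketch of the Appendix B correlation analysis with made-up scores.
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

silver = np.array([1, 1, 0, 0, 1, 0])                  # same-entity silver labels
score_c = np.array([0.9, 0.8, 0.2, 0.1, 0.7, 0.3])     # sigma(s_gc) per mention pair

rng_i, rng_j = np.random.default_rng(0), np.random.default_rng(1)
s_r_i = rng_i.random((6, 8))                           # relation-score vector of mention i
s_r_j = rng_j.random((6, 8))                           # relation-score vector of mention j
score_r = np.abs(s_r_i - s_r_j).sum(axis=1)            # L1 distance, one value per pair

r_c = pearson(silver, score_c)  # coreferent pairs should get high coreference scores
```

With these toy values `r_c` is strongly positive, matching the intuition that coreference probability should track the silver labels; the actual coefficients are reported in Table 4.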

A For every submission:

A1. Did you describe the limitations of your work?
The limitations section after conclusion

A2. Did you discuss any potential risks of your work?
The ethics statement section after conclusion

A3. Do the abstract and introduction summarize the paper's main claims?
The abstract and section 1

A4. Have you used AI writing assistants when working on this paper?
Left blank.

B Did you use or create scientific artifacts?
Section 4

B1. Did you cite the creators of artifacts you used?
Section 4

B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
The ethics statement section after conclusion

B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 3 and section 4

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
The ethics statement section after conclusion

B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 3

B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Section 4.1

C Did you run computational experiments?
Section 4

C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
Appendix A

The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.

C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4.1 and Appendix A

C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Appendix A

C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
Section 4.1 and Appendix A

D Did you use human annotators (e.g., crowdworkers) or research with human participants?
Left blank.

D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.

D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?
No response.

D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.

D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.

D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response.
2023/A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f033ee647b6168c1de6ead9b10cec0225eca6af640a8c52c4aed2be20034b107
+size 416487
2023/A Novel Table-to-Graph Generation Approach for Document-Level Joint Entity and Relation Extraction/layout.json
ADDED
The diff for this file is too large to render. See raw diff.

2023/A Probabilistic Framework for Discovering New Intents/9ad2d74c-e7a6-461e-a960-992c009c7e1d_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.

2023/A Probabilistic Framework for Discovering New Intents/9ad2d74c-e7a6-461e-a960-992c009c7e1d_model.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/A Probabilistic Framework for Discovering New Intents/9ad2d74c-e7a6-461e-a960-992c009c7e1d_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38650f1bd2e8f0522029026e7a363a9be5dbfa7167190c67035896155e3c5991
+size 417022
2023/A Probabilistic Framework for Discovering New Intents/full.md
ADDED
@@ -0,0 +1,429 @@
# A Probabilistic Framework for Discovering New Intents

Yunhua Zhou*, Guofeng Quan*, Xipeng Qiu†
School of Computer Science, Fudan University
{zhouyh20, xpqiu}@fudan.edu.cn
gfquan21@m.fudan.edu.cn

# Abstract

Discovering new intents is of great significance for establishing a Task-Oriented Dialogue System. Most prevailing approaches either cannot transfer the prior knowledge inherent in known intents or gradually forget that prior knowledge during training. Furthermore, such approaches fail to thoroughly explore the intrinsic structure of unlabeled data, and thus fail to capture the fundamental characteristics that define an intent in a general sense. In this paper, starting from the intuition that discovering intents should be beneficial for identifying known intents, we propose a probabilistic framework for discovering intents where intent assignments are treated as latent variables. We adopt the Expectation Maximization framework for optimization. Specifically, in the E-step, we conduct intent discovery and explore the intrinsic structure of unlabeled data via the posterior of intent assignments. In the M-step, we alleviate the forgetting of prior knowledge transferred from known intents by optimizing the discrimination of labeled data. Extensive experiments conducted on three challenging real-world datasets demonstrate the generality and effectiveness of the proposed framework and implementation. Code is publicly available.<sup>1</sup>

# 1 Introduction

Unknown intent detection (Zhou et al., 2022) in the Task-Oriented Dialogue System (TODS) has gradually attracted more and more attention from researchers. However, detecting unknown intents is only the first step: for the TODS, intent discovery is crucial but also more challenging. Because the pre-defined intent set of a TODS cannot cover all intents, the system should automatically discover potential new intents during interactions with users. In practice, a large amount of valuable unlabeled data is generated during these interactions. Considering the limited labeled corpus and the time-consuming, expertise-requiring nature of annotation, the TODS should adaptively discover intents from the unlabeled data with the aid of the limited labeled data.

Given its crucial role in establishing the TODS, discovering new intents has attracted substantial research interest, much like unknown intent detection. Unsupervised cluster learning is a popular paradigm for this problem. Specifically, previous works (Hakkani-Tur et al., 2013, 2015; Shi et al., 2018; Padmasundari, 2018) formulate intent discovery as an unsupervised clustering process. However, these methods mainly focus on constructing pseudo-supervised signals to guide the clustering process while neglecting the prior knowledge embedded in the available labeled data. In real user-facing scenarios, we often possess a small amount of labeled data in advance, which contains prior knowledge that can guide intent discovery, together with a substantial volume of unlabeled data generated during interaction with the dialogue system, which contains both known intents and unknown intents to be discovered.

How do we discover intents in the unlabeled corpus using the labeled data? Recently, semi-supervised methods (Lin et al., 2020; Zhang et al., 2021) have become popular. DeepAligned (Zhang et al., 2021) is the most typical and has also inspired a series of effective recent works (Shen et al., 2021; Zhang et al., 2022). DeepAligned first generalizes the prior knowledge into the semantic features of unlabeled data by pre-training. Then, to learn cluster-friendly representations, DeepAligned assigns a pseudo label to each unlabeled utterance and re-trains the model under the supervision of those pseudo labels.

![](images/73102230e3d9e19b8f07e8db77cfca4e8e22c5d43c6be636530b86dcc3af3ae3.jpg)

Figure 1: The catastrophic forgetting of DeepAligned (blue). During clustering in DeepAligned, we test the performance of the model on the validation set used in the prior-knowledge transfer stage, and show that as clustering proceeds, the model increasingly forgets the knowledge learned from labeled data. The brown line represents the baseline obtained by the model after transferring prior knowledge. In contrast, our method (red) alleviates forgetting well. See Section 5.6 for more discussion.

Nevertheless, DeepAligned suffers from several problems. First, when the model is re-trained with the pseudo supervision signal, it forgets the knowledge transferred in the transfer stage, as demonstrated in Figure 1. Second, the model can be misled by inaccurate pseudo labels, particularly in a large intent space (Wang et al., 2021). More importantly, a softmax loss formed from pseudo labels cannot explore the intrinsic structure of unlabeled data, so it cannot provide accurate clustering supervision signals.

Different from previous methods, we start from the intuition that intent discovery should not damage the identification of known intents. Ideally, the two processes should achieve a win-win situation: the knowledge contained in the labeled corpus (the known intents) can be used to guide the discovery, and the information learned from the unlabeled corpus during discovery can improve the identification of the known intents.

Therefore, by optimizing the identification of labeled data given the whole data corpus, we propose a principled probabilistic framework for intent discovery, where intent assignments are treated as latent variables. We adopt Expectation Maximization as the principal template for optimizing this typical latent variable model. Specifically, in the E-step, we use the current model to discover intents and calculate a specified posterior probability of intent assignments to explore the intrinsic structure of the data. In the M-step, the probability of identification of labeled data, including data newly discovered from the unlabeled corpus, and the posterior probability of intent assignments, which helps learn discovery-friendly features, are maximized simultaneously to optimize and update the model parameters. Extensive experiments conducted on three benchmark datasets demonstrate that our method achieves substantial improvements over strong baselines. Our contributions are as follows:

(Theory) We introduce a principled probabilistic framework for discovering new intents and provide a learning algorithm based on Expectation Maximization. To the best of our knowledge, this is the first complete theoretical framework in this field and we hope it can inspire follow-up research.

(Methodology) We provide an efficient implementation based on the proposed probabilistic framework. After transferring prior knowledge, we use a simple yet effective method to alleviate forgetting. Furthermore, we propose a new contrastive paradigm to explore the intrinsic structure of unlabeled data, which avoids shifting the model toward inaccurate pseudo labels and helps learn discovery-friendly features.

(Experiments and Analysis) We conduct extensive experiments and detailed analyses on a suite of real-world datasets to demonstrate the generality and effectiveness of our proposed framework and implementation.

# 2 Related Work

Our work is mainly related to two lines of research: unsupervised and semi-supervised clustering.

Unsupervised Clustering Extracting meaningful information from unlabeled data has been studied for a long time. Traditional approaches like K-means (MacQueen et al., 1967) and Agglomerative Clustering (AC) (Gowda and Krishna, 1978) are seminal but hardly perform well in high-dimensional space. Recent efforts are devoted to using deep neural networks to obtain good clustering representations. Xie et al. (2016) propose Deep Embedded Clustering (DEC), which learns and refines features iteratively by optimizing a clustering objective based on an auxiliary distribution. Unlike DEC, Yang et al. (2017) propose the Deep Clustering Network (DCN), which performs nonlinear dimensionality reduction and k-means clustering jointly to learn friendly representations. Chang et al. (2017) apply unsupervised clustering to image clustering and propose a binary-classification framework (DAC) that uses adaptive learning for optimization. Then, DeepCluster (Caron et al., 2018) proposes an end-to-end training method that performs cluster assignments and representation learning alternately. However, the key drawback of unsupervised methods is their inability to take advantage of prior knowledge to guide the clustering.

Semi-supervised Clustering By virtue of a few labeled data, semi-supervised clustering usually produces better results than its unsupervised counterparts. PCK-Means (Basu et al., 2004) proposes that clustering can be supervised by pairwise constraints between samples in the dataset. KCL (Hsu et al., 2017) first transfers knowledge in the form of pairwise similarity predictions and then learns a clustering network for transfer learning. Along this line, MCL (Hsu et al., 2019) further formulates multi-class classification as meta classification that predicts pairwise similarity, and generalizes the paradigm to various settings. DTC (Han et al., 2019) extends the DEC algorithm and proposes a mechanism to estimate the number of new image categories using labeled data. In text clustering, CDAC+ (Lin et al., 2020) combines pairwise constraints and a target distribution to discover new intents, while DeepAligned (Zhang et al., 2021) introduces an alignment strategy to improve clustering consistency. Recently, SCL (Shen et al., 2021) incorporates a strong MPNet backbone in a Siamese Network structure with a pairwise contrastive loss to learn sentence representations. Similarly, MTP (Zhang et al., 2022) enhances sentence representations through a multi-task pre-training strategy and extra data. Although these methods take known intents into account, they may suffer from knowledge forgetting during training. More importantly, they probe the intrinsic structure of unlabeled data insufficiently, making it hard to distinguish the characteristics that form an intent.
# 3 Approach
# 3.1 Problem Definition
Given as input a labeled dataset $D^{l} = \{x_{i}^{l}, i = 1, \dots, N\}$ whose intents $Y^{l} = \{y_{i}^{l}, i = 1, \dots, N\}$ are known, and an unlabeled dataset $D^{u} = \{x_{i}^{u}, i = 1, \dots, M\}$ , our goal is to produce intent assignments as output by clustering (or partitioning) the whole dataset $D = D^{l} \cup D^{u}$ . Directly optimizing this goal is intractable due to the lack of knowledge about new intents and about the intrinsic structure of the unlabeled data. As analyzed
in Section 1, discovering intents should not damage but rather benefit the identification of known intents, which can be formulated as optimizing $p(Y^l |D^l,D;\theta)$ . Since $D^{l}\subset D$ , the optimization objective can be written as $p(Y^l |D;\theta)$ .
Denote our latent variable (representing intent assignments obtained by clustering on $D$ ) by $Z$ and let $\mathcal{Z}_D$ be a possible value of $Z$ . Marginalizing over $Z$ , $p(Y^l | D; \theta)$ can be calculated as:
$$
p\left(Y^{l} \mid D; \theta\right) = \sum_{\mathcal{Z}_{D}} p\left(Y^{l} \mid \mathcal{Z}_{D}, D; \theta\right) p\left(\mathcal{Z}_{D} \mid D; \theta\right). \tag{1}
$$
Exactly optimizing Eq. (1) is intractable due to its combinatorial nature. For a specific value $\mathcal{Z}_D$ , the log-likelihood can be simplified as:
$$
\mathcal{L}_{obj} = \log p\left(Y^{l} \mid \mathcal{Z}_{D}, D; \theta\right) + \log p\left(\mathcal{Z}_{D} \mid D; \theta\right). \tag{2}
$$
Our goal is to obtain a better $\mathcal{Z}_D$ (i.e., intent discovery) by optimizing $\mathcal{L}_{obj}$ , and a better $\mathcal{Z}_D$ in turn helps optimize $\mathcal{L}_{obj}$ .
# 3.2 Intent Representation and Transferring Knowledge
Before optimizing $\mathcal{L}_{obj}$ , we transfer knowledge from the labeled corpus to initialize the model. Knowledge transfer has been widely studied, and various types of transferred knowledge have been proposed for different circumstances. Considering the excellent generalization of pre-trained models, we fine-tune BERT (Devlin et al., 2018) on the labeled corpus under the supervision of cross entropy, as suggested in Zhang et al. (2021). Given the $i$-th labeled utterance $x_{i}$ , we first obtain its contextual embeddings with BERT and then apply mean-pooling to get the sentence representation $z_{i}$ . The fine-tuning objective $\mathcal{L}_{ce}$ is:
$$
\mathcal{L}_{ce} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{\exp\left(\phi\left(z_{i}\right)^{y_{i}}\right)}{\sum_{j=1}^{K^{l}} \exp\left(\phi\left(z_{i}\right)^{j}\right)}, \tag{3}
$$
where $\phi (\cdot)$ is a linear classifier, $\phi (z_i)^j$ denotes the logit of the $j$-th class, and $K^l$ denotes the total number of known intents.
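As a concrete illustration, Eq. (3) can be sketched in NumPy as below; the function name and the stand-alone linear classifier are our own simplification of the fine-tuning step (the paper fine-tunes BERT itself end to end):

```python
import numpy as np

def finetune_ce_loss(token_embs, labels, W, b):
    """Eq. (3): mean-pool each utterance's token embeddings into z_i,
    apply a linear classifier phi, and average the cross-entropy."""
    z = np.stack([e.mean(axis=0) for e in token_embs])   # (N, hidden)
    logits = z @ W + b                                   # (N, K^l)
    logits = logits - logits.max(axis=1, keepdims=True)  # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```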
# 3.3 EM Framework for Optimization
Intent Assignments $\mathcal{Z}$ (In the following, we omit the subscript $D$ of $\mathcal{Z}_D$ for clarity.) A specific intent assignment $\mathcal{Z}$ involves two components: determining $K$ , the number of intents in dataset $D$ , and assigning each utterance in the dataset to its corresponding intent. Many methods (Han et al., 2019; Shen et al., 2021) have been proposed to estimate $K$ . Considering the trade-off between efficiency and effectiveness, we follow Zhang et al. (2021) (see Appendix D for discussion of more accurately estimating $K$ under our framework): we first set a rough value $\mathcal{K}$ (e.g., a multiple of the ground-truth number) for $K$ and then refine it by dropping clusters (formed by grouping the dataset $D$ into $\mathcal{K}$ semantic clusters using k-means) whose size is below a certain threshold. After estimating how many intents the dataset contains, we perform k-means and treat the cluster assignment as the (pseudo) intent of each utterance. Next, we discuss in detail how to further optimize Eq. (2) within the Expectation-Maximization (EM) framework.
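This refinement can be sketched as follows, assuming a plain NumPy k-means; the size threshold is left as a free parameter here, whereas DeepAligned derives it from the expected cluster size:

```python
import numpy as np

def kmeans_labels(X, k, iters=20, seed=0):
    """Plain k-means returning a cluster id per sample."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def refine_num_intents(labels, threshold):
    """Refine the rough K: keep only clusters of size >= threshold."""
    sizes = np.bincount(labels)
    return int((sizes >= threshold).sum())
```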
E-Step We have assigned a specific intent assignment $\mathcal{Z}$ to the latent variable $Z$ based on prior knowledge. We expect the intent assignments $\mathcal{Z}$ to reflect what characteristics make a good intent in general, rather than specific intents. The standard cross-entropy loss over specific pseudo labels, as adopted by Caron et al. (2018); Zhang et al. (2021), cannot achieve this purpose; worse, the model may be confused by false pseudo labels (Wang et al., 2021). To better reflect the intrinsic structure of dataset $D$ and learn features friendly to intent assignment, we want $\mathcal{Z}$ to pull utterances with the same intent close together and push utterances with different intents far apart in the semantic feature space. Inspired by the contrastive learning paradigm, we estimate the posterior $p(\mathcal{Z}|D;\theta)$ :
$$
\begin{aligned} p(\mathcal{Z} \mid D; \theta) &= \prod_{C_{k} \in \mathcal{Z}} p\left(C_{k} \mid D; \theta\right) \quad (4) \\ &= \prod_{C_{k} \in \mathcal{Z}} \prod_{x \in C_{k}} p\left(x \in C_{k} \mid D; \theta\right) \quad (5) \\ &\propto \prod_{C_{k} \in \mathcal{Z}} \prod_{x \in C_{k}} \frac{\sum_{x^{+} \in C_{k}} \exp\left(x \cdot x^{+}\right)}{\sum_{x^{p} \in D \backslash \{x\}} \exp\left(x \cdot x^{p}\right)}, \quad (6) \end{aligned}
$$
where $C_k$ is a cluster produced by $\mathcal{Z}$ , and $x \cdot x^+$ is the cosine similarity between features. To optimize Eq. (2), we also need to compute $p(Y^l | \mathcal{Z}, D; \theta)$ . Computing it exactly is difficult, as the label space of $\mathcal{Z}$ does not match that of $Y^l$ . Considering the catastrophic forgetting in DeepAligned mentioned above, we approximate $p(Y^l|\mathcal{Z},D;\theta)$ as:
$$
\begin{aligned} p\left(Y^{l} \mid \mathcal{Z}, D; \theta\right) &= p\left(Y^{l} \mid \mathcal{Z}, D^{l}, D^{u}; \theta\right) \quad (7) \\ &\propto p\left(Y^{l} \mid D^{l}, \hat{D}^{l}\left(D^{u}, \mathcal{Z}\right); \theta\right) \quad (8) \\ &\propto \prod_{x \in D^{l} \cup \hat{D}^{l}} \frac{\exp\left(\phi(x)^{y}\right)}{\sum_{j=1}^{K^{l}} \exp\left(\phi(x)^{j}\right)}, \quad (9) \end{aligned}
$$
where $\phi (\cdot)$ denotes the same linear classifier as in Eq. (3), $y$ denotes the label of $x$ , $K^l$ denotes the total number of known intents, and $D^{l}$ denotes the labeled data in $D$ . $\hat{D}^l (D^u,\mathcal{Z})$ is the set of samples in $D^{u}$ that can be treated as known intents under the assignment $\mathcal{Z}$ :
$$
\hat{D}^{l} = \left\{\left(x, y^{l}\right) \mid x \in \mathcal{N}_{\mathcal{Z}}\left(x^{l}\right), \left(x^{l}, y^{l}\right) \in D^{l}\right\}, \tag{10}
$$
where $x^l$ is a sample from $D^l$ and $y^l$ is its label. $\mathcal{N}_{\mathcal{Z}}(x^l)$ is the set of unlabeled nearest-neighbor samples that belong to the same cluster (under $\mathcal{Z}$ ) as $x^l$ . See Appendix E for the specific benefits of $\hat{D}^l$ . The labeled data is thus tailored to model training: on the one hand, the model does not lose the knowledge transferred from the labeled data; on the other hand, it can constantly explore the intrinsic structure of the dataset by utilizing it.
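Constructing $\hat{D}^l$ can be sketched as follows, assuming feature vectors and cluster ids are already computed; the neighborhood size `k` (how many same-cluster neighbors each labeled sample recruits) is our own illustrative parameter:

```python
import numpy as np

def build_pseudo_labeled_set(feats, cluster_ids, labeled_idx, labels, k=3):
    """Eq. (10): the k nearest (cosine) unlabeled neighbors that share a
    cluster with a labeled sample inherit its known-intent label."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    labeled = set(labeled_idx)
    d_hat = {}
    for i, y in zip(labeled_idx, labels):
        same = [j for j in np.where(cluster_ids == cluster_ids[i])[0]
                if j not in labeled]
        sims = f[same] @ f[i]
        for j in np.array(same)[np.argsort(-sims)[:k]]:
            d_hat[int(j)] = y
    return d_hat
```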
M-Step In the M-step, we update $\theta$ in Eq. (2). Besides substituting Eq. (4) and Eq. (7) into Eq. (2), we introduce two hyper-parameters to help optimize the objective. The overall loss $\mathcal{L}$ can be formulated as follows:
$$
\begin{aligned} \mathcal{L} ={} & \lambda \cdot \sum_{C_{k} \in \mathcal{Z}} \sum_{x \in C_{k}} \log \frac{\sum_{x^{+} \in C_{k}} \exp\left(\frac{x \cdot x^{+}}{\tau}\right)}{\sum_{x^{p} \in D \backslash \{x\}} \exp\left(\frac{x \cdot x^{p}}{\tau}\right)} \quad (11) \\ & + (1 - \lambda) \cdot \sum_{x \in D^{l} \cup \hat{D}^{l}} \log \frac{\exp\left(\phi(x)^{y}\right)}{\sum_{j=1}^{K^{l}} \exp\left(\phi(x)^{j}\right)}, \quad (12) \end{aligned}
$$
where $\lambda$ balances the two log-likelihoods (discussed in Section 5.3) during training, and $\tau$ is the temperature-scaling hyper-parameter common in contrastive learning.
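In practice one minimizes the negative of Eq. (11). A NumPy sketch of that loss is below, with the classifier logits for $D^l \cup \hat{D}^l$ precomputed; excluding the anchor itself from the positive sum is our implementation choice, which the equation leaves open:

```python
import numpy as np

def overall_loss(feats, cluster_ids, logits, y, lam=0.5, tau=0.1):
    """Negative Eq. (11): lam * cluster-contrastive term over all of D
    plus (1 - lam) * cross-entropy on the (pseudo-)labeled subset."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = np.exp(f @ f.T / tau)              # exp(cosine / tau)
    contrast = 0.0
    for i in range(len(f)):
        pos = cluster_ids == cluster_ids[i]
        pos[i] = False                       # exclude x itself
        if pos.any():
            contrast += np.log(sim[i, pos].sum() / (sim[i].sum() - sim[i, i]))
    lp = logits - logits.max(axis=1, keepdims=True)
    lp = lp - np.log(np.exp(lp).sum(axis=1, keepdims=True))
    ce = -lp[np.arange(len(y)), y].mean()
    return -lam * contrast + (1 - lam) * ce
```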
We summarize the whole training process of the EM framework in Algorithm 1; the model architecture of our approach is shown in Figure 2.
It is worth noting that our method is in fact a framework in which the probability estimates can be instantiated flexibly for a variety of circumstances.

Figure 2: The model architecture of our implementation based on the proposed probabilistic framework. (a) First, we transfer knowledge by fine-tuning BERT with labeled data. (b) Then, we perform intent assignment on the full data (labeled and unlabeled) and reflect the intrinsic structure of the data in the E-step. (c) Finally, we alleviate the forgetting of prior knowledge and update the model parameters in the M-step. The snowflake mark indicates that this step requires only a forward pass, without gradient computation.
# Algorithm 1 EM algorithm for optimization
Input: $D^{l} = \{x_{i}^{l}, i = 1, \dots, N\}$ , $Y^{l} = \{y_{i}^{l}, i = 1, \dots, N\}$ , $D^{u} = \{x_{i}^{u}, i = 1, \dots, M\}$ .
Parameter: Model parameters $\theta$
1: Initialize $\theta$ by transferring knowledge.
2: while not converged do
3: Perform intent assignment $(\mathcal{Z})$ using k-means; // E-Step
4: Compute $P(Y^l | \mathcal{Z}, D; \theta)$ and $P(\mathcal{Z} | D; \theta)$ using current parameters $\theta$ ; // E-Step
5: Update model parameters $\theta$ to maximize the log-likelihood $\mathcal{L}$ in Eq. (11). // M-Step
6: end while
7: return $\theta$
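Algorithm 1 can be mimicked end to end with a toy stand-in: the E-step below really is a k-means intent assignment, but the M-step, which in the paper is gradient training of the encoder on $\mathcal{L}$ , is replaced here by simply pulling features toward their cluster centroid:

```python
import numpy as np

def toy_em(feats, k, rounds=5, step=0.5, seed=0):
    """Toy Algorithm 1: alternate k-means assignment (E-step) with a
    crude representation update (M-step proxy) for `rounds` iterations."""
    rng = np.random.default_rng(seed)
    X = feats.astype(float).copy()
    for _ in range(rounds):
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(10):                       # E-step: k-means
            z = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if (z == j).any():
                    centers[j] = X[z == j].mean(0)
        X = X + step * (centers[z] - X)           # M-step proxy
    return z, X
```

Pulling samples toward their centroid tightens clusters round after round, which is the qualitative effect the real M-step has on the learned representations.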
# 4 Experiments
# 4.1 Datasets
We conduct experiments on three challenging datasets to verify the effectiveness of our proposed method. The detailed statistics are shown in Appendix A.
CLINC (Larson et al., 2019) is a popular intent dataset designed for out-of-scope intent detection, which contains 150 intents from 10 domains and 22500 utterances.
BANKING (Casanueva et al., 2020) is a banking dataset covering 77 intents and containing 13083 utterances.
StackOverflow is a dataset released on Kaggle.com, encompassing 20 intents and 20000 utterances. We adopt the dataset processed by Xu et al. (2015).
# 4.2 Baseline and Evaluation Metrics
We follow Lin et al. (2020); Zhang et al. (2021) and divide the baselines into two categories: Unsupervised (Unsup.) and Semi-supervised (Semi-sup.). All methods are introduced in Related Work (Section 2). For fairness, we uniformly use BERT as the backbone network when comparing with the above methods. We also note that SCL (Shen et al., 2021) uses a stronger backbone network to obtain semantically meaningful sentence representations, so we use the same backbone when comparing with it. Similarly, when comparing with MTP-CLNN (Zhang et al., 2022), we use the same additional data and multi-task pre-training to enhance sentence representations.
To evaluate clustering results, we follow existing methods (Lin et al., 2020; Zhang et al., 2021) and adopt three widely recognized metrics: Normalized Mutual Information (NMI), Adjusted Rand Index (ARI), and clustering accuracy (ACC). Note that when calculating ACC, the Hungarian algorithm is used to find the optimal alignment between the pseudo labels and the ground-truth labels, following Zhang et al. (2021).
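The ACC computation can be sketched with SciPy's Hungarian solver; the helper name is ours:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """ACC: count co-occurrences of (cluster id, true label), find the
    best one-to-one mapping with the Hungarian algorithm, and score it."""
    k = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    rows, cols = linear_sum_assignment(-cost)  # maximize matched pairs
    return cost[rows, cols].sum() / len(y_true)
```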
<table><tr><td rowspan="2"></td><td rowspan="2">Methods</td><td colspan="3">CLINC</td><td colspan="3">BANKING</td><td colspan="3">StackOverflow</td></tr><tr><td>NMI</td><td>ARI</td><td>ACC</td><td>NMI</td><td>ARI</td><td>ACC</td><td>NMI</td><td>ARI</td><td>ACC</td></tr><tr><td rowspan="7">Unsup.</td><td>K-means</td><td>70.89</td><td>26.86</td><td>45.06</td><td>54.57</td><td>12.18</td><td>29.55</td><td>8.24</td><td>1.46</td><td>13.55</td></tr><tr><td>AC</td><td>73.07</td><td>27.70</td><td>44.03</td><td>57.07</td><td>13.31</td><td>31.58</td><td>10.62</td><td>2.12</td><td>14.66</td></tr><tr><td>SAE-KM</td><td>73.13</td><td>29.95</td><td>46.75</td><td>63.79</td><td>22.85</td><td>38.92</td><td>32.62</td><td>17.07</td><td>34.44</td></tr><tr><td>DEC</td><td>74.83</td><td>27.46</td><td>46.89</td><td>67.78</td><td>27.21</td><td>41.29</td><td>10.88</td><td>3.76</td><td>13.09</td></tr><tr><td>DCN</td><td>75.66</td><td>31.15</td><td>49.29</td><td>67.54</td><td>26.81</td><td>41.99</td><td>31.09</td><td>15.45</td><td>34.26</td></tr><tr><td>DAC</td><td>78.40</td><td>40.49</td><td>55.94</td><td>47.35</td><td>14.24</td><td>27.41</td><td>14.71</td><td>2.76</td><td>16.30</td></tr><tr><td>DeepCluster</td><td>65.58</td><td>19.11</td><td>35.70</td><td>41.77</td><td>8.95</td><td>20.69</td><td>-</td><td>-</td><td>-</td></tr><tr><td rowspan="6">Semi-sup.</td><td>PCKMeans</td><td>68.70</td><td>35.40</td><td>54.61</td><td>48.22</td><td>16.24</td><td>32.66</td><td>17.26</td><td>5.35</td><td>24.16</td></tr><tr><td>KCL(BERT)</td><td>86.82</td><td>58.79</td><td>68.86</td><td>75.21</td><td>46.72</td><td>60.15</td><td>8.84</td><td>7.81</td><td>13.94</td></tr><tr><td>MCL(BERT)</td><td>87.72</td><td>59.92</td><td>69.66</td><td>75.68</td><td>47.43</td><td>61.14</td><td>-</td><td>-</td><td>-</td></tr><tr><td>CDAC+</td><td>86.65</td><td>54.33</td><td>69.89</td><td>72.25</td><td>40.97</td><td>53.83</td><td>69.84</td><td>52.59</td><td>73.48</td></tr><tr><td>DTC(BERT)</td><td>90.54</td><td>65.02</td><td>74.15</td><td>76.55</td><td>44.70</td><td>56.51</td><td>-</td><td>-</td><td>-</td></tr><tr><td>DeepAligned</td><td>93.95</td><td>80.33</td><td>87.29</td><td>79.91</td><td>54.34</td><td>66.59</td><td>76.47</td><td>62.52</td><td>80.26</td></tr><tr><td></td><td>Ours</td><td>\( 95.01_{0.49} \)</td><td>\( 83.00_{1.54} \)</td><td>\( 88.99_{1.05} \)</td><td>\( 84.02_{0.82} \)</td><td>\( 62.92_{2.00} \)</td><td>\( 74.03_{1.37} \)</td><td>\( 77.32_{1.02} \)</td><td>\( 65.70_{2.07} \)</td><td>\( 80.50_{1.14} \)</td></tr></table>
# 4.3 Experimental Settings
For each dataset, $75\%$ of all intents are randomly selected as known, with the remainder designated as unknown. Furthermore, $10\%$ of the known-intent data is randomly chosen as labeled data. We set the number of intents to the ground truth, in line with previous methods (Lin et al., 2020; Zhang et al., 2021, 2022). Our other experimental settings mostly follow Lin et al. (2020); Zhang et al. (2021, 2022) for a fair comparison. We run at least three rounds with different random seeds on the test set and report the averaged results.
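That split can be sketched as below; the helper and its defaults are illustrative, not the authors' code:

```python
import numpy as np

def split_dataset(y, known_ratio=0.75, labeled_ratio=0.1, seed=0):
    """Pick 75% of intents as known, label 10% of their utterances,
    and leave everything else (including all unknown intents) unlabeled."""
    rng = np.random.default_rng(seed)
    intents = rng.permutation(np.unique(y))
    known = set(int(c) for c in intents[: int(len(intents) * known_ratio)])
    labeled_idx = []
    for c in known:
        idx = np.where(y == c)[0]
        n = max(1, int(len(idx) * labeled_ratio))
        labeled_idx += [int(i) for i in rng.choice(idx, size=n, replace=False)]
    unlabeled_idx = sorted(set(range(len(y))) - set(labeled_idx))
    return sorted(labeled_idx), unlabeled_idx, known
```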
Our main experiments use pre-trained BERT, as implemented in Huggingface Transformers², as the backbone network. We also replace the backbones of the compared baselines with the same BERT as ours. Only when comparing with SCL (Shen et al., 2021), which explicitly states that it uses pre-trained MPNet (Reimers and Gurevych, 2019) as the backbone network, do we adopt the same backbone for a fair comparison. Similarly, we use the same additional data and the same pre-training strategy only when comparing with MTP (Zhang et al., 2022).
Moreover, considering the efficiency of the training process and the capacity of the GPU, we only fine-tune the parameters of the last transformer layer while transferring knowledge, and freeze all but the last 6 transformer layers while running the EM algorithm. See Appendix B for training details and parameters.
Table 1: The main results on three datasets. We re-run DeepAligned with its released code. The other baselines on CLINC and BANKING are retrieved from Zhang et al. (2021). The baselines on StackOverflow are retrieved from Lin et al. (2020). All reported results are averaged over different seeds, and the subscripts represent the corresponding standard deviations. See text for details.
<table><tr><td rowspan="2">Methods</td><td colspan="3">CLINC</td><td colspan="3">BANKING</td></tr><tr><td>NMI</td><td>ARI</td><td>ACC</td><td>NMI</td><td>ARI</td><td>ACC</td></tr><tr><td>SMPNET</td><td>93.39</td><td>74.28</td><td>83.24</td><td>82.22</td><td>58.82</td><td>71.82</td></tr><tr><td>SCL</td><td>94.75</td><td>81.64</td><td>86.91</td><td>85.04</td><td>65.43</td><td>76.55</td></tr><tr><td>SCL(EP)</td><td>95.25</td><td>83.44</td><td>88.68</td><td>84.77</td><td>64.44</td><td>75.18</td></tr><tr><td>SCL(IP)</td><td>94.95</td><td>82.32</td><td>88.28</td><td>84.82</td><td>64.51</td><td>74.81</td></tr><tr><td>SCL(AA)</td><td>95.11</td><td>83.09</td><td>88.49</td><td>85.02</td><td>64.91</td><td>75.66</td></tr><tr><td>SCL(AC)</td><td>94.04</td><td>78.99</td><td>84.58</td><td>83.52</td><td>62.18</td><td>73.09</td></tr><tr><td>Ours</td><td>\( 95.94_{0.24} \)</td><td>\( 85.69_{0.90} \)</td><td>\( 90.44_{0.77} \)</td><td>\( 86.85_{0.40} \)</td><td>\( 69.28_{0.32} \)</td><td>\( 79.32_{0.91} \)</td></tr></table>
Table 2: The results compared with SCL and variants. IP, EP, AA, and AC represent four pseudo label training strategies: inclusive pairing, exclusive pairing, Alignment-A, and Alignment-C respectively. The baselines are retrieved from Shen et al. (2021).
<table><tr><td rowspan="2">Methods</td><td colspan="3">BANKING</td><td colspan="3">Stackoverflow</td></tr><tr><td>NMI</td><td>ARI</td><td>ACC</td><td>NMI</td><td>ARI</td><td>ACC</td></tr><tr><td>MTP</td><td>85.17</td><td>64.37</td><td>74.20</td><td>80.70</td><td>71.68</td><td>83.74</td></tr><tr><td>MTP(DAC)</td><td>85.78</td><td>65.28</td><td>75.43</td><td>80.89</td><td>71.17</td><td>84.20</td></tr><tr><td>MTP(CLNN)</td><td>87.68</td><td>70.43</td><td>79.61</td><td>81.30</td><td>73.29</td><td>86.56</td></tr><tr><td>Ours</td><td>\( 88.61_{0.96} \)</td><td>\( 73.61_{2.61} \)</td><td>\( 83.15_{2.93} \)</td><td>\( 81.93_{0.24} \)</td><td>\( 74.76_{0.55} \)</td><td>\( 87.03_{0.21} \)</td></tr></table>
Table 3: The results compared with MTP and variants. DAC and CLNN are different strategies for intent discovery (see (Zhang et al., 2022) for details). We re-run the result of MTP(CLNN) by its released code. The other baselines are retrieved from Zhang et al. (2022).
# 5 Results and Discussion
# 5.1 Main results
We present the main results in Table 1, where the best results are highlighted in bold. Our method achieves substantial improvements on all metrics and all datasets, especially on BANKING, where the number of samples per class is imbalanced. These results illustrate the effectiveness and generality of our method. At the same time, we note that most semi-supervised methods outperform unsupervised ones as a whole, which further verifies the importance of labeled data. From this perspective, we can explain why our method outperforms DeepAligned: DeepAligned constantly forgets the knowledge in the labeled data, as shown in Section 1, whereas our method tailors the labeled data into model training to guide clustering and thus achieves better results.
To make a fair comparison with SCL (Shen et al., 2021), we also replace the backbone network in our method with the same MPNet as SCL, keeping the other parts of our method unchanged. We present the comparison with SCL and its variants (see Shen et al. (2021) for the specific strategies) on CLINC and BANKING in Table 2, where the best results are also highlighted in bold. Table 3 compares our method with MTP, where DAC and CLNN are different intent discovery strategies applied after pre-training. For a fair comparison, we only adopt the same additional data and pre-training strategies (based on its released code) as MTP in the first step (the fine-tuning stage in Figure 2), and the rest of our method remains unchanged.
<table><tr><td rowspan="2">Methods</td><td colspan="3">Known</td><td colspan="3">Unknown</td></tr><tr><td>NMI</td><td>ARI</td><td>ACC</td><td>NMI</td><td>ARI</td><td>ACC</td></tr><tr><td>DeepAligned</td><td>95.45</td><td>85.69</td><td>91.05</td><td>91.69</td><td>78.91</td><td>86.31</td></tr><tr><td>Ours(Clinc)</td><td>97.16</td><td>91.61</td><td>95.20</td><td>92.50</td><td>81.10</td><td>87.37</td></tr><tr><td>DeepAligned</td><td>82.13</td><td>60.62</td><td>72.00</td><td>78.11</td><td>61.23</td><td>74.74</td></tr><tr><td>Ours(Banking)</td><td>88.06</td><td>74.53</td><td>85.23</td><td>78.46</td><td>61.89</td><td>74.78</td></tr><tr><td>DeepAligned</td><td>78.77</td><td>61.83</td><td>81.86</td><td>59.36</td><td>52.83</td><td>75.20</td></tr><tr><td>Ours(Stackover.)</td><td>80.34</td><td>74.85</td><td>87.55</td><td>60.72</td><td>57.96</td><td>81.07</td></tr></table>
Table 4: Comparison results on Known and Unknown intents respectively. From top to bottom, there are CLINC, BANKING and Stackoverflow datasets (the name of the dataset is filled in parentheses).
# 5.2 A Closer Look at Effectiveness
To better verify the effectiveness of our proposed method, we analyze the comparison with DeepAligned in a more fine-grained way. We separate the known intents and the unknown intents in the test set and compare our method with DeepAligned on these two sub-datasets respectively (the experimental settings remain unchanged). The results in Table 4 demonstrate that our method not only applies effectively to known intents but also discovers new intents more effectively, and the improvement is substantial. This fully conforms to our expectation that intent discovery and the recognition of known intents can be a "win-win".



Figure 3: The effects of $\lambda$ on CLINC and BANKING. Detailed performance figures are available in Appendix C.

# 5.3 Effect of Exploration and Utilization
In the objective function Eq. (11), we use $\lambda$ to reconcile the two log-likelihoods. Intuitively, the first term explores the intrinsic structure of the unlabeled data, while the second term strengthens the knowledge transferred from the labeled data. We vary the value of $\lambda$ and conduct experiments on CLINC and BANKING to study its effect, which also reflects the interplay of exploration and utilization. As shown in Figure 3, only utilizing labeled data ( $\lambda = 0.0$ ) or only exploring the intrinsic structure ( $\lambda = 1.0$ ) does not achieve good results (below average). Interestingly, on all metrics and datasets, the effect of $\lambda$ shows a similar trend (increasing first and then decreasing), which indicates that we can tune $\lambda$ to give full play to both, so that the model can make better use of known knowledge to discover intents accurately. This result shows that both exploration and utilization are indispensable for good results.

Figure 4: The effect of the initial number of intents on the datasets (Left: CLINC, Right: BANKING). The compared results are retrieved from Zhang et al. (2021).
# 5.4 Effect of the Initial Number of Intents
Because the actual number of intents is unknown, we usually need to assign an initial number of intents (i.e., $\mathcal{K}$ ) in advance, as we do above. This requires investigating the model's sensitivity to the initial $\mathcal{K}$ . We evaluate our method on the datasets under varying initial values (leaving everything else unchanged). As shown in Figure 4, our method adapts to different initial values better than the others.

Figure 5: The effect of known class ratio on datasets (Left: BANKING, Right: CLINC). The compared results are retrieved from Zhang et al. (2021).
# 5.5 Effect of the Known Intent Ratio
We also investigate the effect of the known intent ratio on performance by adopting different known class ratios (25%, 50% and 75%). As shown in Figure 5, our method again performs better than the other baselines. Interestingly, the advantage of our method on BANKING is significant. We speculate that this is related to the unbalanced number of samples in BANKING.
Although there are more known intents, this does not mean that enough balanced, labeled samples are provided. As a result, previous methods (e.g., DeepAligned) not only fail to transfer more prior knowledge but also forget it faster in the subsequent process. This leaves room for future research.

Figure 6: The knowledge curves of our method (blue). During intent assignment, the transferred knowledge is not only retained but also constantly strengthened compared with the pre-transfer stage (red, approximated by the initial performance in the clustering stage).

# 5.6 More Than Remembering Knowledge
We showed knowledge forgetting in DeepAligned in Section 1. After fine-tuning with labeled data, the prior knowledge is stored in the model in the form of model parameters. With the subsequent clustering steps, the parameters gradually drift (the forgetting proceeds step by step, as the forgetting curves in previous work show).
However, as shown in Figure 6, our method does not exhibit the catastrophic forgetting that occurs in DeepAligned. On the contrary, as the EM iterations proceed, our performance exceeds that of the pre-transfer stage. We surmise that this improvement comes from the sample set $\hat{D}^l$ discovered in the unlabeled corpus, which helps the identification of the known intents (and also improves intent discovery; see Appendix E).
# 6 Conclusion
In this paper, we provide a probabilistic framework for intent discovery, the first complete theoretical framework for this task, together with an efficient implementation based on it. Compared with existing methods, our method effectively alleviates the forgetting of prior knowledge transferred from known intents and provides intensive clustering supervision signals for discovering intents. Extensive experiments on three challenging datasets demonstrate that our method achieves substantial improvements. The subsequent analysis also shows that our method can better estimate the number of intents and adapt to various conditions. In the future, we will try different ways to perform intent assignment and explore more methods to approximate $p(Y^l|\mathcal{Z},D;\theta)$ and $p(\mathcal{Z}|D;\theta)$ .
# Limitations
To better inspire follow-up work, we summarize the limitations of our method as follows: 1) As the experimental results in Appendix D show, the estimation of the number of intents in our proposed framework can be further improved. 2) We did not try more means of preventing knowledge forgetting; the intrinsic structure of unlabeled data could be probed in a more fine-grained way by improving the posterior estimation. 3) Section 5.3 verifies that both exploration and utilization are indispensable, but we choose their proportion only empirically, without a theoretical analysis of the most appropriate proportion for each dataset. We look forward to progress on these limitations in follow-up research.
# Acknowledgements
We thank Dr. Wang Yuxin and Dr. Liu Peiju for their patience and meaningful suggestions early in this work. This work was supported by the National Key Research and Development Program of China (No. 2020AAA0108700), the National Natural Science Foundation of China (No. 62022027) and the CAAI-Huawei MindSpore Open Fund.
# References
Sugato Basu, Arindam Banerjee, and Raymond J Mooney. 2004. Active semi-supervision for pairwise constrained clustering. In Proceedings of the 2004 SIAM international conference on data mining, pages 333-344. SIAM.
Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. 2018. Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), pages 132-149.
Inigo Casanueva, Tadas Temčinas, Daniela Gerz, Matthew Henderson, and Ivan Vulić. 2020. Efficient intent detection with dual sentence encoders. arXiv preprint arXiv:2003.04807.
Jianlong Chang, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, and Chunhong Pan. 2017. Deep adaptive image clustering. In Proceedings of the IEEE international conference on computer vision, pages 5879-5887.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
K Chidananda Gowda and G Krishna. 1978. Agglomerative clustering using the concept of mutual nearest neighbourhood. Pattern recognition, 10(2):105-112.
Dilek Hakkani-Tür, Asli Celikyilmaz, Larry Heck, and Gokhan Tur. 2013. A weakly-supervised approach for discovering new user intents from search query logs.
Dilek Hakkani-Tür, Yun-Cheng Ju, Geoffrey Zweig, and Gokhan Tur. 2015. Clustering novel intents in a conversational interaction system with semantic parsing. In Sixteenth Annual Conference of the International Speech Communication Association.
Kai Han, Andrea Vedaldi, and Andrew Zisserman. 2019. Learning to discover novel visual categories via deep transfer clustering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8401-8409.
Yen-Chang Hsu, Zhaoyang Lv, and Zsolt Kira. 2017. Learning to cluster in order to transfer across domains and tasks. arXiv preprint arXiv:1711.10125.
Yen-Chang Hsu, Zhaoyang Lv, Joel Schlosser, Phillip Odom, and Zsolt Kira. 2019. Multi-class classification without multi-class labels. arXiv preprint arXiv:1901.00544.
Stefan Larson, Anish Mahendran, Joseph J Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K Kummerfeld, Kevin Leach, Michael A Laurenzano, Lingjia Tang, et al. 2019. An evaluation dataset for intent classification and out-of-scope prediction. arXiv preprint arXiv:1909.02027.
|
| 267 |
+
Ting-En Lin, Hua Xu, and Hanlei Zhang. 2020. Discovering new intents via constrained deep adaptive clustering with cluster refinement. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8360-8367.
|
| 268 |
+
James MacQueen et al. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pages 281-297. Oakland, CA, USA.
|
| 269 |
+
Srinivas Bangalore Padmasundari. 2018. Intent discovery through unsupervised semantic text clustering. Proc. Interspeech 2018, pages 606-610.
|
| 270 |
+
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084.
|
| 271 |
+
|
| 272 |
+
Xiang Shen, Yinge Sun, Yao Zhang, and Mani Najmabadi. 2021. Semi-supervised intent discovery with contrastive learning. In Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI, pages 120-129.
|
| 273 |
+
|
| 274 |
+
Chen Shi, Qi Chen, Lei Sha, Sujian Li, Xu Sun, Houfeng Wang, and Lintao Zhang. 2018. Autodialabel: Labeling dialogue data with unsupervised learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 684-689, Brussels, Belgium. Association for Computational Linguistics.
|
| 275 |
+
|
| 276 |
+
Ximei Wang, Jinghan Gao, Mingsheng Long, and Jian-min Wang. 2021. Self-tuning for data-efficient deep learning. In International Conference on Machine Learning, pages 10738-10748. PMLR.
|
| 277 |
+
|
| 278 |
+
Junyuan Xie, Ross Girshick, and Ali Farhadi. 2016. Unsupervised deep embedding for clustering analysis. In International conference on machine learning, pages 478-487. PMLR.
|
| 279 |
+
|
| 280 |
+
Jiaming Xu, Peng Wang, Guanhua Tian, Bo Xu, Jun Zhao, Fangyuan Wang, and Hongwei Hao. 2015. Short text clustering via convolutional neural networks. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 62-69.
|
| 281 |
+
|
| 282 |
+
Bo Yang, Xiao Fu, Nicholas D Sidiropoulos, and Mingyi Hong. 2017. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In international conference on machine learning, pages 3861-3870. PMLR.
|
| 283 |
+
|
| 284 |
+
Hanlei Zhang, Hua Xu, Ting-En Lin, and Rui Lyu. 2021. Discovering new intents with deep aligned clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14365–14373.
|
| 285 |
+
|
| 286 |
+
Yuwei Zhang, Haode Zhang, Li-Ming Zhan, Xiao-Ming Wu, and Albert Lam. 2022. New intent discovery with pre-training and contrastive learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 256-269, Dublin, Ireland. Association for Computational Linguistics.
|
| 287 |
+
|
| 288 |
+
Yunhua Zhou, Peiju Liu, and Xipeng Qiu. 2022. KNN-contrastive learning for out-of-domain intent classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5129-5141, Dublin, Ireland. Association for Computational Linguistics.
|
| 289 |
+
|
| 290 |
+
# A Statistics of Datasets
We present detailed statistics of datasets in our experiments in Table 6.
CLINC (Larson et al., 2019) is a dataset designed for out-of-domain intent detection, which contains 150 intents from 10 domains and 22,500 utterances. BANKING (Casanueva et al., 2020) is a dataset covering 77 intents and containing 13,083 utterances. StackOverflow (Xu et al., 2015) is a dataset released on Kaggle.com, encompassing 20 intents and 20,000 utterances. We adopt the version processed by Xu et al. (2015).

<table><tr><td rowspan="2">λ</td><td colspan="3">BANKING</td><td colspan="3">CLINC</td></tr><tr><td>NMI</td><td>ARI</td><td>ACC</td><td>NMI</td><td>ARI</td><td>ACC</td></tr><tr><td>0.0</td><td>80.65</td><td>54.11</td><td>63.83</td><td>93.27</td><td>76.12</td><td>81.69</td></tr><tr><td>0.1</td><td>82.83</td><td>58.99</td><td>70.06</td><td>94.36</td><td>80.23</td><td>86.13</td></tr><tr><td>0.3</td><td>84.43</td><td>62.62</td><td>72.53</td><td>94.73</td><td>81.79</td><td>88.09</td></tr><tr><td>0.5</td><td>84.81</td><td>63.91</td><td>74.38</td><td>95.32</td><td>83.41</td><td>89.07</td></tr><tr><td>0.7</td><td>84.73</td><td>63.95</td><td>74.58</td><td>95.36</td><td>83.26</td><td>88.40</td></tr><tr><td>0.9</td><td>85.16</td><td>65.34</td><td>75.94</td><td>95.69</td><td>84.97</td><td>90.31</td></tr><tr><td>1.0</td><td>82.94</td><td>61.98</td><td>73.21</td><td>93.35</td><td>78.46</td><td>85.51</td></tr></table>

Table 5: Detailed results on the effect of exploration and utilization.
# B Experiment Details
Our main experiments use pre-trained BERT (bert-base-uncased, a 12-layer Transformer), as implemented in Huggingface Transformers<sup>3</sup>. We search the learning rate over $\{1e{-}5, 5e{-}5\}$ and $\lambda$ over $\{0.5, 0.6\}$. The training batch size is 512, and the temperature scale $\tau$ is 0.1. All experiments were conducted on an Nvidia GeForce RTX 3090 graphics card with 24 GB of memory.
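For reference, the role of the temperature scale $\tau$ can be illustrated with a small sketch (our own illustration, not the authors' code; function and variable names are ours): it divides pairwise cosine similarities before a row-wise softmax, so a smaller $\tau$ sharpens the distribution over neighboring examples.

```python
import numpy as np

def scaled_softmax_sims(embeddings, tau=0.1):
    """Cosine-similarity matrix divided by temperature tau,
    then row-wise softmax with self-similarities masked out."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = z @ z.T / tau                       # pairwise scaled similarities
    np.fill_diagonal(sims, -np.inf)            # exclude self-pairs
    e = np.exp(sims - sims.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

probs = scaled_softmax_sims(np.random.rand(4, 8), tau=0.1)
assert np.allclose(probs.sum(axis=1), 1.0)     # each row is a distribution
```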
# C More Results on Effect of Exploration and Utilization
In this section, we detail the results of varying $\lambda$ in Table 5. These results supplement Section 5.3 and further show that both exploration and utilization are indispensable for the model to achieve better results.
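The three metrics reported throughout (NMI, ARI, ACC) can be reproduced with standard tools: NMI and ARI are available in scikit-learn, while clustering accuracy (ACC) requires a one-to-one Hungarian alignment between predicted cluster indices and ground-truth labels. A minimal sketch of ACC (our own illustration):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Best accuracy over all one-to-one mappings of predicted
    cluster ids to ground-truth label ids (Hungarian alignment)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = max(y_true.max(), y_pred.max()) + 1
    w = np.zeros((n, n), dtype=int)
    for t, p in zip(y_true, y_pred):
        w[p, t] += 1                           # confusion counts
    rows, cols = linear_sum_assignment(-w)     # maximize matched counts
    return w[rows, cols].sum() / len(y_true)

# A relabeled clustering is still a perfect clustering:
assert clustering_accuracy([0, 0, 1, 1], [1, 1, 0, 0]) == 1.0
```

NMI and ARI correspond to `normalized_mutual_info_score` and `adjusted_rand_score` in `sklearn.metrics`.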
# D Estimate the Number of Intents $(K)$
A key point of intent discovery is whether the model can accurately predict the number of intents. DeepAligned proposes a simple yet effective estimation method. However, due to the alignment operation in the iterative process of clustering (see Zhang et al. (2021) for details), DeepAligned
<table><tr><td>Dataset</td><td>Classes</td><td>|Training|</td><td>|Validation|</td><td>|Test|</td><td>Vocabulary Size</td><td>Length (Avg)</td></tr><tr><td>CLINC</td><td>150</td><td>18000</td><td>2250</td><td>2250</td><td>7283</td><td>8.32</td></tr><tr><td>BANKING</td><td>77</td><td>9003</td><td>1000</td><td>3080</td><td>5028</td><td>11.91</td></tr><tr><td>StackOverflow</td><td>20</td><td>18000</td><td>1000</td><td>1000</td><td>17182</td><td>9.18</td></tr></table>
Table 6: Statistics of datasets. $|\cdot|$ denotes the total number of utterances. The StackOverflow dataset is drawn from Lin et al. (2020).
<table><tr><td rowspan="2">Methods</td><td colspan="5">CLINC (k=150)</td><td colspan="5">BANKING (k=77)</td></tr><tr><td>K(Pred)</td><td>Error(↓)</td><td>NMI</td><td>ARI</td><td>ACC</td><td>K(Pred)</td><td>Error(↓)</td><td>NMI</td><td>ARI</td><td>ACC</td></tr><tr><td>MCL(BERT)</td><td>112</td><td>25.33</td><td>87.15</td><td>59.22</td><td>69.20</td><td>58</td><td>24.68</td><td>75.33</td><td>47.35</td><td>60.80</td></tr><tr><td>DTC(BERT)</td><td>195</td><td>30.00</td><td>89.15</td><td>63.18</td><td>66.65</td><td>110</td><td>42.86</td><td>77.61</td><td>47.50</td><td>54.94</td></tr><tr><td>DeepAligned</td><td>129</td><td>14.00</td><td>92.50</td><td>72.26</td><td>77.18</td><td>67</td><td>12.99</td><td>78.88</td><td>51.71</td><td>62.49</td></tr><tr><td>Ours</td><td>130</td><td>13.30</td><td>93.58</td><td>75.30</td><td>80.80</td><td>73</td><td>5.48</td><td>83.56</td><td>60.92</td><td>69.68</td></tr></table>
Table 7: The results of predicting $K$. $k$ denotes the ground-truth number of intents. The compared results are retrieved from Zhang et al. (2021).
Figure 7: The results of predicting $K$ on CLINC and BANKING. The red line denotes the ground truth, the brown line denotes the result of DeepAligned, and the blue line denotes the refinement of $K$ by our method.
needs to determine $K$ in advance, uses only the limited labeled data, and ignores the large amount of unlabeled data. Unlike DeepAligned, our method does not directly rely on pseudo-labels, so we can continue to refine $K$ during subsequent clustering. We use the same settings as Zhang et al. (2021) and first assign the number of intents (i.e., $\mathcal{K}$ in intent assignments) to be twice the ground-truth number to investigate the ability to estimate $K$. While executing the EM algorithm, we refine $K$ every 10 epochs using the method suggested in Section 3.3. To isolate the impact of our proposed framework on the estimation of $K$, we did not use the dataset $\hat{D}^l$ in this experiment. The final results, shown in Table 7 (Figure 7 shows the intermediate values of $K$ per epoch), demonstrate that our method predicts the number of intents more accurately while achieving better results at the same time. During the experiments, we observed that the model's performance fluctuated depending on the hyperparameter settings. A more comprehensive and in-depth investigation of the estimation of $K$ is left for future research.
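A common recipe for refining $K$, as in Zhang et al. (2021), is to over-cluster and then count only clusters whose size exceeds a threshold, treating tiny clusters as spurious. The sketch below illustrates the counting step only (the threshold value here is our own illustrative assumption, not taken from the paper):

```python
import numpy as np

def refine_k(assignments, threshold):
    """Estimate K as the number of clusters whose size is at least
    `threshold`; smaller clusters are treated as spurious."""
    _, sizes = np.unique(assignments, return_counts=True)
    return int((sizes >= threshold).sum())

# Three sizeable clusters plus one tiny spurious one:
labels = np.array([0] * 40 + [1] * 35 + [2] * 30 + [3] * 2)
assert refine_k(labels, threshold=10) == 3
```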
# E Effect of $\hat{D}^l$ Discovered in Unlabeled Data
In addition to the labeled data in hand, in Section 3.3 we also use the sample set $\hat{D}^l$ of utterances in the unlabeled data that are predicted as known intents during discovery (see Section 3.3 for the specific construction of $\hat{D}^l$; the nearest-neighbor measure is based on the cosine similarity of sample representations in the semantic space). In this section, we further analyze the benefits brought by this discovered sample set. We compare the effects of adding and not adding $\hat{D}^l$, with the results shown in Table 8. From Table 8, we can conclude that the added sample set $\hat{D}^l$ improves effectiveness. This also proves the importance of exploring the intrinsic structure of unlabeled data, which not only helps prevent knowledge forgetting (Section 5.6) and thereby improves the identification of IND, but also improves intent discovery itself, which is completely in line with our expectations.
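One way to realize such a nearest-neighbor selection is to threshold the cosine similarity between each unlabeled embedding and its closest known-intent centroid. The sketch below is our own illustration under that assumption; the threshold value and the centroid construction are not specified in the text.

```python
import numpy as np

def select_known(unlabeled, centroids, min_sim=0.8):
    """Indices of unlabeled embeddings whose cosine similarity to the
    nearest known-intent centroid is at least `min_sim`."""
    u = unlabeled / np.linalg.norm(unlabeled, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    best = (u @ c.T).max(axis=1)       # similarity to the closest centroid
    return np.nonzero(best >= min_sim)[0]

centroids = np.array([[1.0, 0.0], [0.0, 1.0]])   # toy known-intent centroids
unlabeled = np.array([[0.9, 0.1], [0.1, 0.9], [0.7, -0.7]])
idx = select_known(unlabeled, centroids, min_sim=0.8)
```

Here only the first two utterances would enter $\hat{D}^l$; the third is too dissimilar to any known intent.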
<table><tr><td rowspan="2">Methods</td><td colspan="3">CLINC</td><td colspan="3">BANKING</td><td colspan="3">Stackoverflow</td></tr><tr><td>NMI</td><td>ARI</td><td>ACC</td><td>NMI</td><td>ARI</td><td>ACC</td><td>NMI</td><td>ARI</td><td>ACC</td></tr><tr><td>Our (Dl)</td><td>94.78</td><td>82.32</td><td>88.29</td><td>83.40</td><td>61.19</td><td>72.59</td><td>77.29</td><td>63.93</td><td>80.90</td></tr><tr><td>+Dl</td><td>95.01</td><td>83.00</td><td>88.99</td><td>84.02</td><td>62.92</td><td>74.03</td><td>77.32</td><td>65.70</td><td>80.50</td></tr></table>
Table 8: Ablation study on the effect of $\hat{D}^l$. $\hat{D}^l$ is the set of samples in $D^u$ that can be considered as known intents after the operation of $\mathcal{Z}$.

A For every submission:

A1. Did you describe the limitations of your work?
Section "Limitations" (7th Section)

A2. Did you discuss any potential risks of your work?
Section "Limitations" (7th Section)

A3. Do the abstract and introduction summarize the paper's main claims?
"Abstract" and Section 1

A4. Have you used AI writing assistants when working on this paper?
Left blank.

B Did you use or create scientific artifacts?
Section 4.1 and Appendix A

B1. Did you cite the creators of artifacts you used?
Section 4.1

B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
These datasets are available for all researchers in the NLP community.

B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
These datasets are only for scientific research and are available for all members of the NLP research community. We have adhered to the typical method of utilizing these resources.

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
These datasets are only for scientific research and are available for all members of the NLP research community. We have adhered to the typical method of utilizing these resources.

B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Section 4.1

B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Appendix A

C Did you run computational experiments?
Section 4 and Section 5

C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
Appendix B

C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Section 4.3 and Appendix B

C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Section 4.3 and Section 5.1

C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
Appendix B

D Did you use human annotators (e.g., crowdworkers) or research with human participants?
Left blank.

D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.

D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?
No response.

D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.

D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.

D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response.
2023/A Probabilistic Framework for Discovering New Intents/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6f0b91691a31a0749fe8ff19d3391b4a2458c2e387090efbd8d63c47cd772aee
size 660714
2023/A Probabilistic Framework for Discovering New Intents/layout.json
ADDED
The diff for this file is too large to render. See raw diff

2023/A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires/803c053e-27a1-45b7-89d4-70d6ed2bf19b_content_list.json
ADDED
The diff for this file is too large to render. See raw diff

2023/A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires/803c053e-27a1-45b7-89d4-70d6ed2bf19b_model.json
ADDED
The diff for this file is too large to render. See raw diff

2023/A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires/803c053e-27a1-45b7-89d4-70d6ed2bf19b_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e4cd9079290c82c95a7f585e32bae7f5742f64b717d699861d329f3fbce079d0
size 527507

2023/A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires/full.md
ADDED
@@ -0,0 +1,541 @@
# A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires

Hoyun Song, Jisu Shin, Huije Lee, Jong C. Park*

School of Computing

Korea Advanced Institute of Science and Technology

{hysong1991, jisu.shin, angiquer, jongpark}@kaist.ac.kr

# Abstract

Social media is one of the most highly sought resources for analyzing the language of its users. In particular, many researchers have utilized various linguistic features related to mental health problems from social media. However, existing approaches to detecting mental disorders face critical challenges, such as the scarcity of high-quality data or the trade-off between addressing the complexity of models and presenting interpretable results grounded in expert domain knowledge. To address these challenges, we design a simple but flexible model that preserves domain-based interpretability. We propose a novel approach that captures semantic meanings directly from the text and compares them to symptom-related descriptions. Experimental results demonstrate that our model outperforms relevant baselines on various mental disorder detection tasks. Our detailed analysis shows that the proposed model is effective at leveraging domain knowledge, is transferable to other mental disorders, and provides interpretable detection results.

# 1 Introduction

Mental health problems, a significant challenge in public healthcare, are usually accompanied by distinct symptoms, such as loss of interest or appetite, depressed moods, or excessive anxiety. As these symptoms can often be expressed over social media, detecting mental health conditions from social media text has been studied extensively (Yates et al., 2017; Coppersmith et al., 2018; Matero et al., 2019; Murarka et al., 2021; Harrigian et al., 2021; Jiang et al., 2021; Nguyen et al., 2022). Such approaches could give rise to a monitoring system that provides clinical experts with information about possible mental crises.

To automatically identify mental health problems, traditional approaches focus on finding linguistic patterns and styles in the language of psychiatric patients. Utilizing these features, statistical models can explain the correlation between linguistic factors and mental illnesses. However, these approaches suffer from the increased complexity of models, necessitating pipelines of steps from engineering features to producing results. By contrast, more recent works have employed strong pre-trained models, which allow a direct use of raw data and simplify model development (Matero et al., 2019; Jiang et al., 2020). While such end-to-end approaches may be effective at achieving higher performance, they often lack domain-based interpretation, which is essential for decision-support systems (Mullenbach et al., 2018). Hence, there is a trade-off between providing interpretable predictions based on domain knowledge and the simplicity of the models.

The lack of a sufficient sample size for high-quality data is another challenge in the clinical domain (De Choudhury et al., 2017; Harrigian et al., 2020). Despite the availability of diverse datasets and methods for detecting mental disorders, most of them aim primarily at identifying only clinical depression. To tackle this problem, recent studies have focused on developing transferable linguistic features that can be used for the detection of various mental disorders (Aich and Parde, 2022; Uban et al., 2022). However, linguistic features that are trained on a particular dataset may not be fully transferable to a different task (Ernala et al., 2019; Harrigian et al., 2020).

Others utilized symptom-related features, which are more common properties of psychiatric patients, resulting in better generalizability of depression detection (Nguyen et al., 2022). Despite this improvement, however, their approach still faces challenges because it relies on pipelined methods using manually defined symptom patterns. Such symptom patterns for depression detection lack flexibility, as they cannot be easily adapted to other mental disorders. In addition, the pipeline approach with symptom extraction is quite complex to implement: it involves multiple steps, namely designing symptom patterns, training a symptom identification model, and detecting depression using the identified symptom patterns.

To address these challenges, we propose a simple and more flexible approach that also preserves interpretability. We are motivated by the process that humans use to quickly learn related features, often by reading just a single explanation. For example, when people read depression questionnaires, they readily understand the questions and learn about symptoms that are related to depression, allowing them to self-diagnose their levels of depression.

To this end, we employ the siamese network (Koch et al., 2015), which captures the semantic meaning of the text inputs and compares them directly to symptom-related descriptions. This process is simple since it finds symptom-related clues directly from the input, rather than relying on hand-engineered features or intermediate models. Our proposed model, the Multi-Head Siamese network (MHS), can be easily adapted to other mental illness domains by simply replacing the symptom-related descriptions. In addition, our model is designed to capture the distinct features of each symptom using multiple heads. By examining the learned weights of each symptom head, our model gives rise to human-understandable interpretations.

We evaluate the performance of our model by detecting texts containing mental health problems across four mental disorders. Furthermore, a detailed analysis of the proposed model shows its efficiency in utilizing symptom-related knowledge, its applicability to different mental disorders, and its interpretable reasoning for the detected results.
# 2 Related Work

Social media are commonly used for mental health research because of the ease of access to various aspects of human behavior. As in other NLP domains, pre-trained language models, such as BERT (Devlin et al., 2019), are widely used for identifying mental health problems (Matero et al., 2019; Jiang et al., 2020; Murarka et al., 2021; Dinu and Moldovan, 2021).

Others have presented interpretable detection methods for the mental health domain based on linguistic features (Song et al., 2018; Uban et al., 2021). Various efforts have also been made to study such linguistic features accompanying mental illness, such as differences in word usage (Tadesse et al., 2019; Jiang et al., 2020; Dinu and Moldovan, 2021) or in syntactic features (Kayi et al., 2017; Ireland and Iserman, 2018; Yang et al., 2020). Some studies address differences in sentiments or emotional aspects (Preoticiuc-Pietro et al., 2015; Kirinde Gamaarachchige and Inkpen, 2019; Allen et al., 2019; Wang et al., 2021), or differences in topics (Tadesse et al., 2019; Kulkarni et al., 2021).

Linguistic features are also used in transferable methods across mental disorders (Aich and Parde, 2022; Uban et al., 2022), motivated by the fact that a large number of studies have been done primarily on depression (De Choudhury et al., 2013; Yates et al., 2017; Eichstaedt et al., 2018; Song et al., 2018; Tadesse et al., 2019; Yang et al., 2020; Nguyen et al., 2022), compared to other disorders such as anxiety disorder (Ireland and Iserman, 2018), anorexia (Uban et al., 2021), or schizophrenia (Kayi et al., 2017). However, such linguistic features do not generalize well to new user groups. For example, De Choudhury et al. (2017), Loveys et al. (2018), and Pendse et al. (2019) found that linguistic styles may vary with users' backgrounds. In addition, Harrigian et al. (2020) found that a model trained on a particular dataset does not always generalize to others. To handle this generalization problem, Nguyen et al. (2022) and Zhang et al. (2022) focused on the shared and general properties (i.e., symptoms) of a mental health problem. However, unlike ours, which captures symptom features directly from raw data, these methods require additional steps for learning symptom-related features.

In this paper, we use the siamese network (Koch et al., 2015), which is based on one-shot learning and has recently been exploited in simple networks (Chen and He, 2021; Zhu et al., 2021). We utilize symptom descriptions sourced from the DSM-5 (American Psychiatric Association, 2013) to make our model learn symptom-related knowledge.
# 3 Methodology

In this section, we introduce our simple but flexible modeling for leveraging clinical questionnaires. Our model aims to detect texts with mental illness episodes based on the presence of symptom-related features with just a single component.

An overview of our network is shown in Figure 1.
Figure 1: The model architecture of the Multi-Head Siamese network (MHS). $S_{i}$ indicates a head of a symptom that contains the symptom-related descriptions $s_{(i,j)}$, and $d_{i}$ indicates a distance value computed by cosine similarity between the target text and the descriptions. MHS compares the contextualized embeddings of the target text and symptoms and predicts the probability of mental illness based on context similarity.

We designed our model based on the siamese network (Koch et al., 2015). As with the original siamese neural network, our model contains a single feature extractor with shared parameters. The extractor directly obtains features from contextualized embeddings generated by sentence encoders. Then, employing a similarity function, we compare the similarities to detect the presence of symptom-related features in the target text. In addition, we apply multi-headed learning to the original siamese network, repeating the comparison process for each distinct symptom. We describe the detailed structure in the following subsections.
# 3.1 Model Structure
Our model, the Multi-Head Siamese network (MHS), is an end-to-end model that takes raw input texts and produces the final result without the need for manual feature engineering. MHS is designed to take two types of inputs: the target text to be classified and descriptions of symptoms. The descriptions are grouped by symptom, and each symptom group is the input to the corresponding symptom head. For example, assuming that we have $n$ symptoms for detecting a mental disorder, we build a set $H$ of $n$ heads, from $S_{1}$ to $S_{n}$, for the detection model as follows:

$$
H = \left\{ S_{1}, S_{2}, \dots, S_{n} \right\} \tag{1}
$$

Each head $S_{i}$ represents a discrete symptom, containing a number of descriptions and questions regarding that symptom. For example, if $S_{i}$ has $m$ sentences describing the symptom, we have the set:

$$
S_{i} = \left\{ s_{(i,1)}, s_{(i,2)}, \dots, s_{(i,m)} \right\} \tag{2}
$$

Given the target text as input, our model obtains embedding vectors $(E_{\text{target}})$ by employing pre-trained sentence encoders, such as BERT or RoBERTa. We also obtain symptom embeddings by encoding all sentences from all heads $(H)$.
Our siamese network employs a multi-channel convolutional neural network (CNN) for feature learning. We apply three channels for convolution layers, whose kernel sizes are 2, 3, and 5. Thus, our model is designed to capture informative clues with the window sizes of 2, 3, and 5 from texts. Each channel contains two convolutional layers and two max-pooling layers. The final convolutional layer is flattened into a single embedding vector. As a result, we obtain three feature embedding vectors $(F_{\text{target},k})$ with $k = 2, 3, 5$ from the target text:

$$
F_{\text{target},k} = \operatorname{Conv1d}_{k}\left(E_{\text{target}}\right) \tag{3}
$$

Through the same process, we also obtain feature embedding vectors from symptom texts from the $i^{th}$ head and $j^{th}$ sentence as follows:

$$
F_{(i,j),k} = \operatorname{Conv1d}_{k}\left(E_{(i,j)}\right) \tag{4}
$$

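The multi-channel extraction in Eqs. (3)-(4) can be sketched as below. This is a minimal NumPy illustration rather than the authors' PyTorch implementation: it uses one convolution plus global max-pooling per channel (the paper stacks two conv and max-pool layers), and the filter count and random weights are placeholder assumptions.

```python
import numpy as np

def conv1d_features(E, kernel_sizes=(2, 3, 5), n_filters=4, seed=0):
    # Sketch of the multi-channel extractor: one 1-D convolution per
    # kernel size k, followed by global max-pooling to a flat vector.
    # n_filters and the random weights are illustrative placeholders.
    rng = np.random.default_rng(seed)
    length, dim = E.shape
    feats = {}
    for k in kernel_sizes:
        W = rng.standard_normal((n_filters, k, dim)) * 0.1  # conv kernels
        out = np.stack([
            np.tensordot(E[t:t + k], W, axes=([0, 1], [1, 2]))
            for t in range(length - k + 1)
        ])                                  # (length - k + 1, n_filters)
        feats[k] = out.max(axis=0)          # global max-pool -> (n_filters,)
    return feats

# Example: 16 token embeddings of size 8 -> one feature vector per k
E_target = np.random.default_rng(1).standard_normal((16, 8))
F_target = conv1d_features(E_target)
```

The same extractor, with shared parameters, is applied to the target text and to every symptom sentence, which is what makes the network siamese.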
We compute the distances $(d)$ between the target feature vector $(F_{\text{target},k})$ and a symptom-sentence vector $(F_{(i,j),k})$ using cosine similarity, which ranges over $[-1, 1]$. We then obtain a single distance value by averaging the $K$ per-channel distances, where $K$ is the number of channels:

$$
\operatorname{sim}(\mathbf{x}, \mathbf{y}) = \frac{\mathbf{x} \cdot \mathbf{y}}{\|\mathbf{x}\| \|\mathbf{y}\|} \tag{5}
$$

$$
d_{(i,j)} = \frac{1}{K} \sum_{k} \operatorname{sim}\left(F_{\text{target},k}, F_{(i,j),k}\right) \tag{6}
$$

Finally, when there are distance values for all sentences, they are averaged to yield the distance value of the $i^{th}$ head $(d_i)$ :

$$
d_{i} = \frac{1}{m} \sum_{j=1}^{m} d_{(i,j)} \tag{7}
$$

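The two-level averaging in Eqs. (5)-(7) can be sketched as follows; the per-channel feature dictionaries mirror the extractor output above, and the example inputs are synthetic.

```python
import numpy as np

def cosine(x, y):
    # Eq. 5: cosine similarity, ranging over [-1, 1]
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def head_distance(F_target, F_sentences):
    # Eq. 6: average over the K channels (keys of each dict);
    # Eq. 7: average over the m sentence descriptions of the head.
    per_sentence = [
        np.mean([cosine(F_target[k], F_jk[k]) for k in F_target])
        for F_jk in F_sentences
    ]
    return float(np.mean(per_sentence))

# A head whose descriptions match the target exactly yields d_i = 1.0
F_t = {k: np.ones(4) for k in (2, 3, 5)}
d_i = head_distance(F_t, [F_t, F_t])
```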
To regularize the results, we choose to use averaging as an aggregation function for the distance values.
We iterate this process over the number of heads $(n)$. After the siamese network step, all distance values $(d_i)$ are stacked into a $1 \times n$ vector $(D)$. By applying a fully connected layer, the distance vector is reduced to a two-dimensional vector $o$, which gives the output probabilities for the mental illness classification:

$$
f: \mathbb{R}^{n} \rightarrow \mathbb{R}^{2} \tag{8}
$$

$$
o = f(D) = W^{T} \cdot D + b \tag{9}
$$

By analyzing the weights $(W)$ and distance values $(D)$ of the fully connected layer, we can examine which symptoms are activated as important information when classifying the related mental disorder. Further details are discussed in Section 5.4. The implementation code and symptom-sentences are made publicly available<sup>1</sup>.
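The classification step of Eqs. (8)-(9), and the kind of weight inspection described above, can be sketched like this; $W$, $b$, and the distance vector are random placeholders standing in for trained values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 9                                   # e.g. nine MDD symptom heads

# Stack the per-head distances into D and apply Eqs. 8-9.
D = rng.uniform(-1.0, 1.0, size=n)      # d_1 .. d_n
W = rng.standard_normal((n, 2))         # placeholder trained weights
b = np.zeros(2)
o = W.T @ D + b                         # Eq. 9: two output logits

# Interpretation sketch: per-head contribution to the "illness" logit,
# ranked so the most activated symptoms come first.
contrib = W[:, 1] * D
ranked = np.argsort(-contrib)
```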
# 3.2 Symptom Descriptions
In the present study, we focus on four mental disorders: major depressive disorder (MDD), bipolar disorder, generalized anxiety disorder (GAD), and borderline personality disorder (BPD). As summarized in Table 1, we compiled the diagnostic criteria for each mental disorder, sourced from DSM-5. We constructed heads based on the list of symptoms. For example, in the case of MDD, there are a total of 9 symptoms (D0-D8), so when constructing a

<table><tr><td>Mental Disorders</td><td>Diagnostic Criteria from DSM-5</td></tr><tr><td>Major Depressive Disorder (D0-D8)</td><td>D0. Depressed mood most of the day<br>D1. Diminished interest or pleasure<br>D2. Sleep disorders (insomnia or hypersomnia)<br>D3. Changes in weight or appetite when not dieting<br>D4. Fatigue or loss of energy<br>D5. Feelings of worthlessness or guilt<br>D6. Diminished ability to think or concentrate<br>D7. A slowing down of thought and a reduction of physical movement<br>D8. Recurrent thoughts of death and suicidal ideation</td></tr><tr><td>Bipolar Disorder (D0-D8, M0-M7)</td><td>Major Depressive Episode<br>D0-D8: Same as major depressive disorder<br>Manic Episode<br>M0. A distinct period of persistently elevated or expansive mood<br>M1. Increase in goal-directed activity<br>M2. Inflated self-esteem or grandiosity<br>M3. Decreased need for sleep<br>M4. More talkative than usual<br>M5. Flight of ideas<br>M6. Distractibility<br>M7. Activities that have a high potential for painful consequences</td></tr><tr><td>Generalized Anxiety Disorder (A0-A6)</td><td>A0. Excessive anxiety and worry for more than 6 months<br>A1. Difficulty controlling the worry<br>The anxiety and worry are associated with the following:<br>A2. Irritability<br>A3. Being easily fatigued<br>A4. Sleep disturbance<br>A5. Difficulty concentrating or mind going blank<br>A6. Muscle tension</td></tr><tr><td>Borderline Personality Disorder (B0-B8)</td><td>B0. Interpersonal relationships alternating between idealization and devaluation<br>B1. Recurrent suicidal or self-mutilating behavior<br>B2. Identity disturbance<br>B3. Affective instability<br>B4. Inappropriate anger or difficulty controlling anger<br>B5. Transient, stress-related paranoid ideation or severe dissociative symptoms<br>B6. Impulsive behaviors that are potentially self-damaging<br>B7. Frantic efforts to avoid abandonment<br>B8. Chronic feelings of emptiness</td></tr></table>

Table 1: A summary of diagnostic criteria for each mental disorder, sourced from DSM-5.
model detecting depressive symptoms, there will be a total of 9 heads $(n(H_{MDD}) = 9)$ . As for bipolar disorder, symptoms can be divided into depressive episodes (D0-D8) and manic episodes (M0-M7), with a total of 17 heads. The depressive episodes of bipolar disorder are the same as those of MDD.
Each head includes a description of the diagnostic criterion and the self-test questions corresponding to its symptom, so each head contains at least two sentences $(n(S) \geq 2)$. When a symptom has multiple related questions, the corresponding head contains more than two sentences.
We collected the questions from the publicly available self-tests<sup>2</sup>. The process was conducted under the guidance of a psychology researcher. The complete list of collected sentences for each head is shown in Appendix C. Our model can easily
<table><tr><td>Subreddit</td><td>#samples</td><td>sent.</td><td>tok.</td><td>vocab.</td></tr><tr><td>r/depression</td><td>11,416</td><td>9.5</td><td>143.1</td><td>43,766</td></tr><tr><td>r/bipolar</td><td>10,941</td><td>10.5</td><td>157.1</td><td>54,426</td></tr><tr><td>r/anxiety</td><td>11,471</td><td>9.7</td><td>159.8</td><td>51,936</td></tr><tr><td>r/bpd</td><td>10,979</td><td>11.8</td><td>187.5</td><td>53,741</td></tr><tr><td>Random</td><td>40,570</td><td>8.8</td><td>123.0</td><td>198,988</td></tr><tr><td>Total</td><td>85,377</td><td>9.6</td><td>133.6</td><td>229,309</td></tr></table>
Table 2: The number of samples, average numbers of sentences and tokens, and the vocabulary size.
transfer to other mental disorders by just replacing symptom descriptions, as evidenced by the findings in Section 5.3.
# 4 Experiments
# 4.1 Dataset and Evaluation
In order to evaluate our model, we constructed four datasets for detecting possible mental disorder episodes. We sampled posts from Reddit<sup>3</sup>, one of the largest online communities. Each sample is a concatenation of a post's title and body. Each dataset contains two groups of Reddit posts: one collected from mental disorder-related subreddits as texts containing mental illness contents, and the other from random subreddits as clean texts. The detailed statistics of each group are shown in Table 2. For preprocessing, we discarded posts containing URLs or individually identifiable information, as well as posts shorter than ten words (i.e., tokens). We retained only posts written in English and discarded the rest.
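The filtering rules above can be sketched as follows. This is a minimal illustration under the stated rules only: the PII check and English-language check from the paper are omitted, since they require heavier tooling (e.g., a language detector).

```python
import re

URL = re.compile(r"https?://\S+")

def keep_post(title: str, body: str, min_tokens: int = 10):
    # Drop posts containing URLs and posts shorter than ten tokens;
    # return the concatenated title + body text otherwise.
    text = f"{title} {body}".strip()
    if URL.search(text):
        return None
    if len(text.split()) < min_tokens:
        return None
    return text

kept = keep_post("Feeling down", "I have not slept well for two weeks and nothing interests me")
dropped = keep_post("link", "see https://example.com for details")
```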
Using these collected datasets, we conducted four tasks, each discriminating texts sourced from a mental disorder-related subreddit from non-mental illness texts. The tasks are as follows: MDD detection (r/depression+random), bipolar disorder detection (r/bipolar+random), GAD detection (r/anxiety+random), and BPD detection (r/bpd+random).
To compare our model with the baseline models with respect to classification performance, we report results using standard metrics: Accuracy (Acc.), the F1 score (F1) for the mental illness group, and the Area Under the Curve (AUC). Performance is measured by five-fold cross-validation, with each fold trained on six different seeds; we average over the resulting 30 runs $(5 \times 6)$ to obtain the final result.
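The averaging protocol can be sketched as below; `train_and_score` is a placeholder callable standing in for one train-and-evaluate run.

```python
import itertools
import statistics

FOLDS, SEEDS = range(5), range(6)

def averaged_metric(train_and_score):
    # Five folds x six seeds = 30 runs, averaged to give the reported
    # score. train_and_score(fold, seed) returns one metric value.
    runs = [train_and_score(f, s) for f, s in itertools.product(FOLDS, SEEDS)]
    assert len(runs) == 30
    return statistics.mean(runs)

# Example with a dummy scorer
score = averaged_metric(lambda fold, seed: 0.9 + 0.001 * seed)
```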
# 4.2 Baselines and Experimental Setup
In this subsection, we describe the models and implementation details for our experiments. More experimental details are shown in Appendix A.
1) Traditional Models: We implemented two feature-based classifiers, a support vector machine (SVM) and a random forest (RF), in two versions: BoW, employing lexical features only (Tadesse et al., 2019; Jiang et al., 2020), and Feature, which adds sentiment and syntactic features (Allen et al., 2019; Yang et al., 2020; Wang et al., 2021).

2) BERT (Devlin et al., 2019) is one of the most widely used baseline models built on contextualized embeddings (Jiang et al., 2020; Matero et al., 2019).

3) XLNet (Yang et al., 2019) is another strong pre-trained language model baseline (Dinu and Moldovan, 2021).

4) RoBERTa (Liu et al., 2019) is a robustly optimized BERT and one of the strongest baselines in natural language classification (Dinu and Moldovan, 2021; Murarka et al., 2021).

5) GPT-2 (Radford et al., 2019) is a strong few-shot learner based on a large Transformer language model.

6) PHQ9 (Nguyen et al., 2022) is a depression detection model constrained by the presence of PHQ9 symptoms.
We implemented our models using PyTorch and fine-tuned them on one 24GB Nvidia RTX-3090 GPU, taking about 13 minutes per epoch. For all models, the batch size is 8, the embedding size is 512, and fine-tuning runs for five epochs. We truncated each post at 512 tokens for all models. For each model, we manually tuned the learning rate, choosing the value from $\{1\mathrm{e}{-5}, 2\mathrm{e}{-5}, 1\mathrm{e}{-6}, 2\mathrm{e}{-6}\}$ that yields the best F1 score. We report the average results over 30 runs (five-fold cross-validation trained on six different seeds) for the same pre-trained checkpoint.
# 4.3 Experimental Results
Table 3 shows the overall performance of our proposed model (MHS) and strong baselines on four tasks. Each task is about detecting texts with corresponding mental illness episodes on social media. We see that our model outperforms all competing approaches, including linguistic feature-based models, end-to-end pre-trained models, and a method that uses symptom-related knowledge.
Linguistic feature-based models exhibit significant performance variations depending on the level of detail in their feature design. By contrast, MHS finds the features directly in the contextualized representation, yielding better performance improvements. Pre-trained models with contextualized embeddings have the benefit that they can be
<table><tr><td rowspan="2">Model</td><td colspan="3">MDD</td><td colspan="3">Bipolar</td><td colspan="3">GAD</td><td colspan="3">BPD</td></tr><tr><td>Acc.</td><td>F1 (±)</td><td>AUC</td><td>Acc.</td><td>F1 (±)</td><td>AUC</td><td>Acc.</td><td>F1 (±)</td><td>AUC</td><td>Acc.</td><td>F1 (±)</td><td>AUC</td></tr><tr><td>RF-BoW</td><td>89.9</td><td>73.7 (0.34)</td><td>80.4</td><td>90.9</td><td>75.8 (0.37)</td><td>81.1</td><td>91.7</td><td>76.3 (0.41)</td><td>81.7</td><td>90.3</td><td>73.2 (0.35)</td><td>79.8</td></tr><tr><td>SVM-Bow</td><td>91.2</td><td>78.0 (0.89)</td><td>83.6</td><td>90.2</td><td>78.2 (0.84)</td><td>81.4</td><td>92.9</td><td>83.3 (0.84)</td><td>88.5</td><td>93.4</td><td>83.6 (0.67)</td><td>88.9</td></tr><tr><td>RF-Feature</td><td>89.6</td><td>72.9 (0.54)</td><td>79.8</td><td>91.1</td><td>76.2 (0.54)</td><td>81.4</td><td>91.8</td><td>79.2 (0.77)</td><td>83.7</td><td>90.4</td><td>73.5 (0.45)</td><td>80.0</td></tr><tr><td>SVM-Feature</td><td>92.2</td><td>81.5 (0.59)</td><td>86.6</td><td>93.3</td><td>83.6 (0.77)</td><td>87.5</td><td>94.3</td><td>86.7 (0.81)</td><td>90.0</td><td>93.6</td><td>83.6 (0.41)</td><td>88.6</td></tr><tr><td>GPT-2</td><td>94.6</td><td>88.0 (0.51)</td><td>92.6</td><td>95.3</td><td>88.9 (0.63)</td><td>92.4</td><td>95.7</td><td>90.2 (0.35)</td><td>93.5</td><td>95.6</td><td>89.7 (0.49)</td><td>93.4</td></tr><tr><td>XLNet</td><td>94.4</td><td>87.9 (0.40)</td><td>92.1</td><td>95.2</td><td>88.8 (0.43)</td><td>92.4</td><td>95.7</td><td>89.8 (0.26)</td><td>93.2</td><td>95.6</td><td>89.4 (0.43)</td><td>92.9</td></tr><tr><td>BERT</td><td>94.2</td><td>87.3 (0.41)</td><td>92.4</td><td>95.0</td><td>88.1 (0.56)</td><td>91.3</td><td>95.3</td><td>88.5 (0.61)</td><td>91.9</td><td>95.0</td><td>88.9 (0.55)</td><td>93.2</td></tr><tr><td>BERT-PHQ9</td><td>94.4</td><td>87.2 (0.47)</td><td>91.8</td><td>95.2</td><td>88.4 (0.48)</td><td>91.8</td><td>95.2</td><td>88.2 (0.48)</td><td>91.4</td><td>95.1</td><td>88.9 
(0.46)</td><td>92.5</td></tr><tr><td>BERT-MHS</td><td>94.9</td><td>88.6 (0.29)</td><td>93.0</td><td>95.4</td><td>89.2 (0.42)</td><td>92.3</td><td>95.7</td><td>90.3 (0.38)</td><td>93.7</td><td>95.7</td><td>90.0 (0.28)</td><td>93.7</td></tr><tr><td>RoBERTa</td><td>94.8</td><td>88.6 (0.34)</td><td>93.1</td><td>95.4</td><td>89.4 (0.56)</td><td>92.9</td><td>95.8</td><td>90.4 (0.35)</td><td>93.7</td><td>95.7</td><td>90.3 (0.35)</td><td>93.7</td></tr><tr><td>RoBERTa-PHQ9</td><td>94.9</td><td>88.6 (0.50)</td><td>92.6</td><td>95.4</td><td>89.4 (0.59)</td><td>92.6</td><td>95.5</td><td>89.4 (0.33)</td><td>92.4</td><td>95.6</td><td>89.9 (0.47)</td><td>93.3</td></tr><tr><td>RoBERTa-MHS</td><td>95.5</td><td>89.6 (0.31)*</td><td>93.8</td><td>95.8</td><td>90.4 (0.31)*</td><td>93.4</td><td>96.2</td><td>91.5 (0.28)*</td><td>94.3</td><td>95.9</td><td>90.8 (0.26)*</td><td>94.0</td></tr></table>
Table 3: Results on four mental disorder detection tasks. Each result is averaged after 30 runs. The best results for each task are shown in bold, and the second-best results are underlined. * denotes that the performance gain is statistically significant with $p < 0.05$ under all pairwise $t$ -tests.
<table><tr><td>Model</td><td>#parameters</td><td>Relative Size</td></tr><tr><td>BERT</td><td>108,311,810</td><td>1.00</td></tr><tr><td>MHS w/bert</td><td>108,967,319</td><td>1.01</td></tr><tr><td>RoBERTa</td><td>124,647,170</td><td>1.15</td></tr><tr><td>MHS w/roberta</td><td>125,302,679</td><td>1.16</td></tr></table>
easily fine-tuned for a wide range of tasks. However, they lack a specific focus on domain-based features, whereas MHS is tailored to identify such features, leading to better performance.
We implemented our model and PHQ9 model with two different encoders, BERT and RoBERTa, and the tendency for performance improvement is the same on both encoders. Both PHQ9 and MHS leverage symptom-related information but differ in their architecture, specifically whether it is a multi-step pipeline or an end-to-end model. The end-to-end design of MHS allows for direct learning of complex relationships, reducing the potential for error propagation, and resulting in enhanced performance compared to the pipeline model. Moreover, for this pipeline model to apply to other mental disorders, a symptom pattern must be created for each mental disorder, which is challenging to achieve without expert-level knowledge. On the other hand, our proposed model overcomes these challenges by simply replacing symptom descriptions. A detailed analysis of the performance improvement is shown in Section 5.
# 4.4 Model Parameters
Table 4 shows the number of parameters for each model. Compared to the baseline models, the additional number of parameters for our siamese net
Table 4: The numbers of parameters for BERT, RoBERTa, and our models.
<table><tr><td>Model</td><td>Acc.</td><td>Pre.</td><td>Rec.</td><td>F1</td><td>AUC</td></tr><tr><td>CNNs w/bert emb.</td><td>94.0</td><td>89.8</td><td>82.9</td><td>86.2</td><td>90.1</td></tr><tr><td>+single-head</td><td>94.5</td><td>88.6</td><td>86.8</td><td>87.6</td><td>91.7</td></tr><tr><td>+multi-head +one description</td><td>94.9</td><td>87.3</td><td>90.2</td><td>88.7</td><td>93.2</td></tr><tr><td>+multi-head +multi-description</td><td>95.4</td><td>89.1</td><td>90.5</td><td>89.7</td><td>93.9</td></tr></table>
Table 5: An ablation study of different levels of knowledge and features affecting our model. The result is the average of the four tasks.
work is about 655K. This is far smaller than the roughly 16M additional parameters RoBERTa has over BERT, yet MHS (w/bert) performs slightly better than or comparably to RoBERTa. This suggests that our proposed model, which learns domain knowledge, achieves efficient performance improvement by adding only a small number of parameters.
# 5 Model Analysis and Discussions
# 5.1 Ablation Study
We conducted an ablation study to investigate the effectiveness of each part in our proposed model. We removed the siamese network from our proposed methods, resulting in just convolutional neural networks (CNNs). We implemented a single-head siamese network in which all sentences from all heads are put together into just one head. We also implemented two versions of a multi-head siamese network employing just one description or multiple descriptions, respectively.
The experimental results are shown in Table 5. The result shows that our proposed model gives the best performance when all of the modules are combined. Compared to CNN models, the performances are improved when the siamese network is added. Note that the siamese network contributes to accurate detection, since it captures the
<table><tr><td rowspan="2">Model</td><td colspan="8">Detection Task</td></tr><tr><td colspan="2">MDD</td><td colspan="2">Bipolar</td><td colspan="2">GAD</td><td colspan="2">BPD</td></tr><tr><td>MHS</td><td>F1</td><td>AUC</td><td>F1</td><td>AUC</td><td>F1</td><td>AUC</td><td>F1</td><td>AUC</td></tr><tr><td>w/depression</td><td>89.6</td><td>93.8</td><td>89.4</td><td>92.7</td><td>89.5</td><td>93.4</td><td>89.8</td><td>93.5</td></tr><tr><td>w/bipolar</td><td>88.2</td><td>92.4</td><td>90.4</td><td>93.4</td><td>90.4</td><td>93.2</td><td>88.8</td><td>91.8</td></tr><tr><td>w/anxiety</td><td>88.5</td><td>92.7</td><td>89.2</td><td>93.2</td><td>91.5</td><td>94.3</td><td>88.9</td><td>92.9</td></tr><tr><td>w/bpd</td><td>88.3</td><td>92.4</td><td>89.3</td><td>92.5</td><td>88.8</td><td>92.8</td><td>90.8</td><td>94.0</td></tr></table>
Table 6: The results of four mental illness detection tasks. The notation w/(mental illness) indicates that the model takes the symptom descriptions of that mental illness as input.
symptom-related features by comparing target texts with symptom descriptions. In addition, the performances are also improved when employing a multi-head rather than a single-head. It implies that individually training each symptom yields better results than training all symptoms together, as each symptom has unique features. Compared to learning from only one description per head, the performance of learning from multiple descriptions is improved. It may be due to each head learning further about the symptom through various sentences, covering distinct aspects of each symptom.
# 5.2 Contribution of Symptom Descriptions
To assess the effectiveness of symptom descriptions in detecting the presence of symptoms, we measure their performance by replacing the descriptions of symptoms with those of other mental disorders. The results are shown in Table 6. We carried out four mental disorder detection tasks using four models, each utilizing symptom descriptions of four distinct mental disorders as inputs.
The models exhibit optimal performance when the input symptom description corresponds to the target mental disorder. It suggests that, by providing the model with accurate and appropriate symptom descriptions, MHS can learn effectively to identify the specific features associated with a particular mental disorder. This also implies that MHS can identify and utilize the nuanced distinctions in the characteristics of each symptom, leading to enhanced performance in detection.
# 5.3 Cross-domain Test
In order to investigate the flexibility of MHS, we evaluated its performance across datasets and other mental disorders.
Dataset Transferability Given that the ability to generalize to new and unseen data platforms is a crucial aspect of mental illness detection models (Harrigian et al., 2020), we evaluate their
<table><tr><td rowspan="2" colspan="2">Model</td><td colspan="2">RSDD</td><td colspan="2">eRisk</td></tr><tr><td>F1</td><td>AUC</td><td>F1</td><td>AUC</td></tr><tr><td rowspan="6">Train: r/depression</td><td>BERT</td><td>35.7</td><td>50.8</td><td>52.3</td><td>78.1</td></tr><tr><td>XLNet</td><td>34.9</td><td>50.5</td><td>52.8</td><td>78.5</td></tr><tr><td>RoBERTa</td><td>37.4</td><td>51.6</td><td>52.5</td><td>78.3</td></tr><tr><td>GPT-2</td><td>37.8</td><td>51.7</td><td>53.2</td><td>78.4</td></tr><tr><td>PHQ9</td><td>37.2</td><td>51.5</td><td>53.3</td><td>78.8</td></tr><tr><td>MHS</td><td>38.6</td><td>52.0</td><td>54.9</td><td>79.5</td></tr></table>
Table 7: The results of evaluation across the other dataset. Due to the uneven distribution of data, we report the weighted F1 scores for each test.
<table><tr><td rowspan="3" colspan="2">Model</td><td colspan="6">Test: Target</td></tr><tr><td colspan="2">Bipolar</td><td colspan="2">GAD</td><td colspan="2">BPD</td></tr><tr><td>F1</td><td>AUC</td><td>F1</td><td>AUC</td><td>F1</td><td>AUC</td></tr><tr><td rowspan="7">Train:MDD</td><td>Feature</td><td>54.0</td><td>69.1</td><td>49.5</td><td>66.6</td><td>55.2</td><td>69.8</td></tr><tr><td>BERT</td><td>62.0</td><td>73.7</td><td>51.7</td><td>67.8</td><td>60.9</td><td>72.8</td></tr><tr><td>XLNet</td><td>65.2</td><td>75.4</td><td>51.3</td><td>67.6</td><td>60.5</td><td>72.6</td></tr><tr><td>RoBERTa</td><td>65.1</td><td>75.6</td><td>58.6</td><td>71.6</td><td>64.9</td><td>75.4</td></tr><tr><td>GPT-2</td><td>65.2</td><td>75.7</td><td>59.6</td><td>72.1</td><td>62.6</td><td>73.5</td></tr><tr><td>MHS w/depression</td><td>66.7</td><td>76.6</td><td>55.5</td><td>69.8</td><td>60.2</td><td>72.6</td></tr><tr><td>MHS w/(=Target)</td><td>76.6</td><td>85.4</td><td>59.6</td><td>72.2</td><td>67.5</td><td>77.3</td></tr></table>
Table 8: The results of evaluation across the other mental disorders.
performance across different datasets. We selected two datasets, RSDD (Yates et al., 2017) and eRisk2018 (Losada et al., 2019), to evaluate cross-dataset transfer. Unlike our Reddit dataset (Subsection 4.1), which is sourced from communities specific to certain mental illnesses, RSDD and eRisk2018 are based on user self-reports, yielding data that differs from, and is unseen in, our Reddit dataset. We trained each model on the Reddit training set and evaluated it on the test sets of RSDD and eRisk2018, respectively.
As shown in Table 7, MHS outperforms all strong baselines over all datasets. The improved performance of MHS compared to GPT-2, a strong few-shot learner, is likely due to its ability to leverage domain-specific knowledge. The higher generalizability of MHS compared to PHQ9 is likely attributed to its end-to-end architecture, which allows for direct learning of symptom features from data, as opposed to PHQ9's reliance on pre-defined symptom patterns.
Domain Transferability As suggested by some researchers (Aich and Parde, 2022; Uban et al., 2022), we evaluated the transferability of MHS across other mental disorders by training the models on a depression dataset and testing on other mental disorder datasets (see Table 8). The results of the experiments indicate that MHS significantly outperforms all relevant baselines, particularly when it utilizes symptoms that match the target mental disorder. This suggests that the transferability of

Figure 2: Examples of weights learned during the training process for each task. Each row represents a distance computed by each head, indicating the particular knowledge of the related symptoms.
the model can be significantly enhanced by simply replacing symptom descriptions. This also implies that it may be feasible to develop a model that can classify texts related to various other mental disorders if the symptoms of those disorders are provided appropriately.
# 5.4 Interpretation
Using our model, we can interpret the detected results by analyzing their representations of learned weights and distance values. In order to see if our model properly learned symptom-related knowledge from a few descriptions and identified similar stories from the target texts, we looked into the learned weights produced by the last step of our model, the fully connected layer. To show the effectiveness of MHS, we visualize the examples of learned weights from training steps in Figure 2. The color scale represents the strength of the learned weights (i.e., the distance values of each head). Each row represents heads, indicating each symptom referring to Table 1, and each column represents the labels. We observe a clearly contrasting pattern in the distance weights for each task.
We can also identify which symptoms are mainly activated by investigating the learned weights during the training process. For example, in detecting MDD-related texts, most of the symptom heads receive high weights for the depression label, suggesting that most of the symptoms play a major role in the detection process.
An important criterion when experts diagnose a mental illness is the number of expressed symptoms: the count must exceed a certain threshold for the corresponding diagnosis. To see whether this human-level diagnostic process also emerges in our model, we examined the number of salient symptoms in true-positive samples. For each symptom, we computed percentiles of the similarity scores over the true-positive samples from the test sets and set the threshold at the 70th percentile. Then, when

Figure 3: The number of salient symptoms and probability of the final output from true-positive samples in MDD detection.
exceeding the threshold set by the criterion, the symptom was selected as a prominent feature in the text. We present the distribution of the numbers of salient symptoms and their averaged probabilities of the final output from test sets of detecting MDD-related texts in Figure 3.
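The thresholding above can be sketched as follows; the similarity matrix is synthetic, and per-symptom percentile thresholds are an assumption about how the criterion is applied.

```python
import numpy as np

def salient_counts(sims, pct=70):
    # sims: (n_samples, n_symptoms) similarity scores of true positives.
    # A symptom counts as salient in a sample when its score exceeds
    # the per-symptom 70th-percentile threshold.
    thresholds = np.percentile(sims, pct, axis=0)   # one per symptom
    return (sims > thresholds).sum(axis=1)          # salient count per sample

rng = np.random.default_rng(0)
sims = rng.uniform(-1, 1, size=(100, 9))            # e.g. 9 MDD symptoms
counts = salient_counts(sims)
```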
In our model, the average probability is relatively low when fewer than three symptoms are salient, but for three symptoms or more, the model decides with similarly high confidence. This suggests that MHS, like human diagnosticians, detects mental disorder-related texts with high confidence once the number of symptoms exceeds a specific threshold. The smaller criterion number in MHS may be due to the shorter length of social media texts, which may not fully convey the user's background and lifestyle.
# 5.5 Case Study
For the case study, we made an example based on the samples corresponding to each mental disorder in the psychology major textbook. We present example sentences for MDD and GAD (Table 9), and the model's predictions were correct in both cases. We set the same threshold as shown in Figure 3. The dominant symptoms predicted by the model are D0 (depressed mood), D1 (diminished interest), and D8 (suicidal ideation), for MDD, and A1 (difficult to control the worry), A2 (irritability), and A3 (easily fatigued), for GAD. In the case of D0
<table><tr><td>No.</td><td>Example</td><td>Expected Symptoms</td></tr><tr><td>1. (MDD)</td><td>Whenever I wake up in the morning, I hate myself, and I want to commit suicide. I didn't have any friends to hang out with because I did not need to make friends actively when I went to school. The only reason I am not committing suicide is I don't want my parents to cry.</td><td>D0 (81%) D1 (80%) D2 (25%) D3 (56%) D4 (10%) D5 (40%) D6 (47%) D7 (61%) D8 (71%)</td></tr><tr><td>2. (GAD)</td><td>I often feel anxious that something terrible is about to happen. For example, my husband will likely lose his job, or a family member will become ill or have an accident. I know these worries are unnecessary and excessive, but I can't stop worrying. I'm always nervous, so I feel exhausted even if I do nothing.</td><td>A0 (41%) A1 (72%) A2 (75%) A3 (79%) A4 (48%) A5 (34%) A6 (37%)</td></tr></table>
Table 9: Examples of texts related to MDD and GAD, respectively, and the corresponding symptoms that the model provides for interpretation. The notation for each symptom follows Table 1.
and D1 in MDD, our model captures the feature related to the symptom, despite the absence of the term 'depress' or 'interest'. These cases support the assumption that our model can detect and interpret when symptoms of a particular mental illness are prominent in text.
# 6 Conclusion
In this paper, we proposed a simple but flexible model for detecting texts that describe mental health problems. Our model outperformed state-of-the-art models and produced human-interpretable results over the symptoms of mental disorders. The proposed model demonstrates an exceptional ability to utilize domain knowledge, as it is designed to capture relevant features directly from texts. Experimental results also indicate that MHS can quickly adapt to other mental disorder domains by simply replacing the symptom descriptions. The scope of this paper was limited to four mental disorder detection tasks; nevertheless, the approach can be extended to other mental health conditions as long as symptom-relevant questionnaires are available.
# Limitations
It should be noted that, since our model and the baseline models in this study were trained and evaluated on social media text, the results may not accurately reflect performance in a clinical setting. A proper diagnosis by clinical experts requires a comprehensive analysis of various factors, including the number of manifested symptoms, the onset and history of symptoms, developmental background, lifestyle, and recent life changes, in order to understand the patient's condition. However, it remains challenging to capture such detailed, private information from online text, as these texts often consist of fragments of daily life, episodic experiences, and emotive expressions rather than
providing a comprehensive view of an individual's life. Despite the limitations imposed by such fragmentary text, we hope that our model can still serve as a valuable aid for clinical experts in their decision-making process. Furthermore, future research should aim to move beyond predicting psychological symptoms and disorders solely from linguistic styles and expressions, and instead seek to uncover the underlying features that give rise to these expressions, as our model does.
# Ethics Statement
Since privacy concerns and risks to individuals must always be considered, especially when using social media data, we have employed safeguards to avoid harmful and negative consequences of releasing our model. To this end, we removed individually identifiable information such as user names, user IDs, and e-mail addresses. We also removed all URLs from our data so that our model is not trained on such personal information. As for the open datasets used in this work, we used them in accordance with their established usage policies. In particular, we ensure that no attempt can be made to contact or deanonymize specific individuals in the datasets.
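The de-identification step described above can be approximated with pattern-based filtering. This is an illustrative sketch only; the exact rules used to prepare the released data are not specified here, and the patterns below are our own examples:

```python
import re

# Illustrative patterns for common identifiers in social media text;
# the actual preprocessing may use additional or different rules.
PATTERNS = [
    (re.compile(r"https?://\S+"), "[URL]"),               # URLs
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),  # e-mail addresses
    (re.compile(r"/?u/[A-Za-z0-9_-]+"), "[USER]"),        # Reddit-style user IDs
    (re.compile(r"@\w+"), "[USER]"),                      # @-mentions
]

def deidentify(text: str) -> str:
    """Replace user names, e-mail addresses, and URLs with placeholders."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(deidentify("ask u/someone or mail me at a.b@example.com, see https://example.com/x"))
# ask [USER] or mail me at [EMAIL], see [URL]
```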
Our paper may contain direct references to specific disorders or diseases (such as psychiatric patients, 'Siamese', or the names of mental disorders) and expressions that could be considered offensive to particular individuals. We want to emphasize that these expressions are used solely for academic discourse and are not intended to be disrespectful or to offend anyone.
In addition, our proposed model is not intended to label or stigmatize individuals online, but rather to serve as a warning system for potential threats to personal well-being and public health. It is important to note that even if this model identifies potential mental illnesses and symptoms, it should not be considered a definitive diagnosis. Still, the
model provides an indication of the likelihood of a disorder; it should be used only as a reference for self-assessment and in consultation with a mental health expert. An official diagnosis requires consultation with medical and psychological experts, and this system aims to serve as an aid in the diagnostic process. We make our implementation code publicly available for research purposes, and we hope it will be used to improve the lives of individuals suffering from mental illnesses.
# Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) (No. RS-2023-00208054, A multi-modal abusive language detection system and automatic feedback with correction) grant funded by the Korean government.
# References
Ankit Aich and Natalie Parde. 2022. Are you really okay? A transfer learning-based approach for identification of underlying mental illnesses. In Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology, pages 89-104, Seattle, USA. Association for Computational Linguistics.
Kristen Allen, Shrey Bagroy, Alex Davis, and Tamar Krishnamurti. 2019. ConvSent at CLPsych 2019 task a: Using post-level sentiment features for suicide risk prediction on Reddit. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 182-187.
American Psychiatric Association. 2013. Diagnostic and statistical manual of mental disorders (5th ed.). American Psychiatric Association, Arlington, VA.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
Xinlei Chen and Kaiming He. 2021. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15750-15758.
Glen Coppersmith, Ryan Leary, Patrick Crutchley, and Alex Fine. 2018. Natural language processing of social media as screening for suicide risk. Biomedical Informatics Insights, 10.
Munmun De Choudhury, Michael Gamon, Scott Counts, and Eric Horvitz. 2013. Predicting depression via social media. In Seventh International AAAI Conference on Weblogs and Social Media.
Munmun De Choudhury, Sanket S Sharma, Tomaz Logar, Wouter Eekhout, and René Clausen Nielsen. 2017. Gender and cross-cultural differences in social media disclosures of mental illness. In Proceedings of the 2017 ACM conference on computer supported cooperative work and social computing, pages 353-369.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Anca Dinu and Andreea-Codrina Moldovan. 2021. Automatic Detection and Classification of Mental Illnesses from General Social Media Texts. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 358-366.
Johannes C Eichstaedt, Robert J Smith, Raina M Merchant, Lyle H Ungar, Patrick Crutchley, Daniel Preotiuc-Pietro, David A Asch, and H Andrew Schwartz. 2018. Facebook language predicts depression in medical records. Proceedings of the National Academy of Sciences, 115(44):11203-11208.
Sindhu Kiranmai Ernala, Michael L Birnbaum, Kristin A Candan, Asra F Rizvi, William A Sterling, John M Kane, and Munmun De Choudhury. 2019. Methodological gaps in predicting mental health states from social media: triangulating diagnostic signals. In Proceedings of the 2019 CHI conference on Human Factors in Computing Systems, pages 1-16, New York, NY, USA. Association for Computing Machinery.
Keith Harrigian, Carlos Aguirre, and Mark Dredze. 2020. Do models of mental health based on social media data generalize? In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3774-3788, Online. Association for Computational Linguistics.
Keith Harrigian, Carlos Aguirre, and Mark Dredze. 2021. On the state of social media data for mental health research. In Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access, pages 15-24, Online. Association for Computational Linguistics.
Molly Ireland and Micah Iserman. 2018. Within and between-person differences in language used across anxiety support and neutral reddit communities. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, pages 182-193.
Zhengping Jiang, Sarah Ita Levitan, Jonathan Zomick, and Julia Hirschberg. 2020. Detection of mental health from Reddit via deep contextualized representations. In Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis, pages 147-156.
Zhengping Jiang, Jonathan Zomick, Sarah Ita Levitan, Mark Serper, and Julia Hirschberg. 2021. Automatic detection and prediction of psychiatric hospitalizations from social media posts. In Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access, pages 116-121, Online. Association for Computational Linguistics.
Efsun Sarioglu Kayi, Mona Diab, Luca Pauselli, Michael Compton, and Glen Coppersmith. 2017. Predictive linguistic features of schizophrenia. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (\*SEM 2017), pages 241-250, Vancouver, Canada. Association for Computational Linguistics.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Prasadith Kirinde Gamaarachchige and Diana Inkpen. 2019. Multi-task, multi-channel, multi-input learning for mental illness detection using social media text. In Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019), Hong Kong. Association for Computational Linguistics.
Gregory Koch, Richard Zemel, Ruslan Salakhutdinov, et al. 2015. Siamese neural networks for one-shot image recognition. In ICML deep learning workshop, volume 2. Lille.
Atharva Kulkarni, Amey Hangle, Pradnya Kulkarni, and Manisha Marathe. 2021. Cluster analysis of online mental health discourse using topic-infused deep contextualized representations. In Proceedings of the 12th International Workshop on Health Text Mining and Information Analysis, pages 83-93.
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. GPT understands, too. arXiv preprint arXiv:2103.10385.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
David E Losada, Fabio Crestani, and Javier Parapar. 2019. Overview of eRisk 2019: Early risk prediction on the internet. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 340-357. Springer.
Kate Loveys, Jonathan Torrez, Alex Fine, Glen Moriarty, and Glen Coppersmith. 2018. Cross-cultural differences in language markers of depression online. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, pages 78-87, New Orleans, LA. Association for Computational Linguistics.
Matthew Matero, Akash Idnani, Youngseo Son, Salvatore Giorgi, Huy Vu, Mohammad Zamani, Parth Limbachiya, Sharath Chandra Guntuku, and H. Andrew Schwartz. 2019. Suicide risk assessment with multi-level dual-context language and BERT. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 39-44, Minneapolis, Minnesota. Association for Computational Linguistics.
James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable prediction of medical codes from clinical text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1101-1111, New Orleans, Louisiana. Association for Computational Linguistics.
Ankit Murarka, Balaji Radhakrishnan, and Sushma Ravichandran. 2021. Classification of mental illnesses on social media using RoBERTa. In Proceedings of the 12th International Workshop on Health Text Mining and Information Analysis, pages 59-68, online. Association for Computational Linguistics.
Thong Nguyen, Andrew Yates, Ayah Zirikly, Bart Desmet, and Arman Cohan. 2022. Improving the generalizability of depression detection by leveraging clinical questionnaires. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8446-8459, Dublin, Ireland. Association for Computational Linguistics.
Sachin R Pendse, Kate Niederhoffer, and Amit Sharma. 2019. Cross-Cultural Differences in the Use of Online Mental Health Support Forums. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW):1-29.
Daniel Preotiuc-Pietro, Maarten Sap, H Andrew Schwartz, and Lyle Ungar. 2015. Mental illness detection at the World Well-Being Project for the CLPsych 2015 shared task. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology (CLPsych), pages 40-45.
Guanghui Qin and Jason Eisner. 2021. Learning how to ask: Querying LMs with mixtures of soft prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5203-5212, Online. Association for Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Hoyun Song, Jinseon You, Jin-Woo Chung, and Jong C. Park. 2018. Feature attention network: Interpretable depression detection from social media. In Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation, Hong Kong. Association for Computational Linguistics.
Michael M Tadesse, Hongfei Lin, Bo Xu, and Liang Yang. 2019. Detection of depression-related posts in Reddit social media forum. IEEE Access, 7:44883-44893.
Ana Sabina Uban, Berta Chulvi, and Paolo Rosso. 2021. Understanding Patterns of Anorexia Manifestations in Social Media Data with Deep Learning. In Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access, pages 224-236.
Ana Sabina Uban, Berta Chulvi, and Paolo Rosso. 2022. Multi-aspect transfer learning for detecting low resource mental disorders on social media. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 3202-3219, Marseille, France. European Language Resources Association.
Ning Wang, Fan Luo, Yuvraj Shivtare, Varsha D Badal, KP Subbalakshmi, Rajarathnam Chandramouli, and Ellen Lee. 2021. Learning Models for Suicide Prediction from Social Media Posts. arXiv preprint arXiv:2105.03315.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
Xingwei Yang, Rhonda McEwen, Liza Robee Ong, and Morteza Zihayat. 2020. A big data analytics framework for detecting user-level depression from social networks. International Journal of Information Management, 54:102141.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32.
Andrew Yates, Arman Cohan, and Nazli Goharian. 2017. Depression and self-harm risk assessment in online forums. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2968-2978, Copenhagen, Denmark. Association for Computational Linguistics.
Zhiling Zhang, Siyuan Chen, Mengyue Wu, and Kenny Zhu. 2022. Symptom identification for interpretable detection of multiple mental disorders on social media. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9970-9985, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Jinting Zhu, Julian Jang-Jaccard, Amardeep Singh, Paul A Watters, and Seyit Camtepe. 2021. Task-aware meta learning-based siamese neural network for classifying obfuscated malware. arXiv preprint arXiv:2110.13409.
# A Experimental Setups
We implemented two feature-based models: a support vector machine (SVM) and a random forest (RF). We used an SVM with a Gaussian (RBF) kernel, setting $C$ to 100, and set the maximum depth of the RF to 100. We employed BERT's vocabulary to train the BoW models. For the Feature models, we used a pre-trained sentiment classification model and a part-of-speech tagging model from the Huggingface library (Wolf et al., 2019). We fine-tuned the transformer baseline models with the default settings from the Huggingface library: BERT (bert-base-cased), XLNet (xlnet-base-cased), RoBERTa (roberta-base), and GPT-2 (gpt2). For all experiments, we set the batch size to 8 and fine-tuned all models on a single 24GB GeForce RTX 3090 GPU. For the implementation of the PHQ9 model, we followed the structure of the questionnaire-depression pair models, using the publicly available code from PHQ9 $^{4}$ (Nguyen et al., 2022), and utilized the symptom patterns provided by Nguyen et al. (2022). We trained each model with six randomly selected seeds, for 3 epochs each. We optimized the parameters of all models with the Adam optimizer (Kingma and Ba, 2014). The learning rates for the BERT, XLNet, and RoBERTa models were manually tuned, choosing the value from {1e-05, 2e-05, 1e-06, 2e-06} that showed the best F1 score. The learning rate for GPT-2 was selected from {1e-05, 2e-05}, and for PHQ9 it was set to 1e-03, which was provided as an optimized hyperparameter.
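The learning-rate selection described above can be sketched as a small search loop; the `train_and_eval` callable and the F1 values below are illustrative stand-ins for the actual fine-tuning runs:

```python
def select_learning_rate(train_and_eval, candidates=(1e-05, 2e-05, 1e-06, 2e-06)):
    """Pick the learning rate with the best validation F1.

    `train_and_eval` maps a learning rate to an F1 score; in the actual
    experiments it would fine-tune the transformer for 3 epochs and evaluate.
    """
    scores = {lr: train_and_eval(lr) for lr in candidates}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Stand-in evaluation function with made-up F1 scores for illustration.
fake_f1 = {1e-05: 0.81, 2e-05: 0.84, 1e-06: 0.73, 2e-06: 0.76}
best_lr, best_f1 = select_learning_rate(fake_f1.get)
print(best_lr, best_f1)  # 2e-05 0.84
```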
# B Comparison with Large Language Model
Recent large language models (LLMs), such as GPT-3 (Brown et al., 2020), have demonstrated strong zero-shot performance across various NLP tasks. Owing to their large number of pre-trained parameters, LLMs can achieve high performance without fine-tuning for downstream tasks, even with zero or only a few examples.
We experimented with GPT-3, a widely recognized LLM, on the examples referred to in Table 9. To this end, we used instructional prompts that list the symptom descriptions for a specific mental illness. The examples
of prompt input and the results are shown in Table 10. The experimental results show that, given an instructional prompt for a specific mental illness, the model successfully outputs the classification result in a sentence. However, its symptom selection appears to identify many plausible symptoms rather than pinpointing specific symptoms with precision.
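The instructional prompts in Table 10 follow a fixed template: a list of symptom descriptions, an instruction, and the user's text. A minimal sketch of the prompt assembly (the `build_prompt` helper is our own illustration, with symptom wording abbreviated from Table 10; the GPT-3 API call itself is omitted):

```python
def build_prompt(disorder: str, symptoms: dict, user_text: str) -> str:
    """Assemble an instructional zero-shot prompt in the style of Table 10."""
    lines = [f"These are symptom descriptions of {disorder}:"]
    lines += [f"{code}. {desc}" for code, desc in symptoms.items()]
    lines.append(f"Please tell me if the user below has {disorder}")
    lines.append("and which symptoms does the user have? (choose from the above)")
    lines.append("")
    lines.append(user_text)
    return "\n".join(lines)

gad_symptoms = {  # abbreviated from Table 10
    "A0": "Excessive anxiety and worry more than 6 months",
    "A1": "Difficult to control the worry",
    "A2": "Irritability",
}
prompt = build_prompt("generalized anxiety disorder", gad_symptoms,
                      "I often feel anxious that something terrible is about to happen.")
print(prompt)
```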
These examples are presented for demonstration purposes only, and the results may vary with different prompt optimizations (Liu et al., 2021; Qin and Eisner, 2021). This aspect is beyond the scope of our current study and is left for future work.
# C Details of Symptom Descriptions

Figure 4: An example mapping of questions into corresponding diagnostic criteria.
In this section, we present the symptom descriptions used in our study; Table 11 shows the complete list. We used the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (American Psychiatric Association, 2013) as the reference for the symptom descriptions, as it provides comprehensive guidelines for identifying symptoms of various mental disorders. We also incorporated publicly available clinical questionnaires from online sources. Subsequently, under the guidance of a psychology researcher, we mapped the self-test questions to the corresponding diagnostic criteria, as depicted in Figure 4.
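Concretely, each symptom description can be represented as a record pairing a DSM-5 criterion with its mapped questionnaire items. The `SymptomDescription` structure below is our own illustration, with wording taken from Table 11:

```python
from dataclasses import dataclass, field

@dataclass
class SymptomDescription:
    code: str              # e.g. "A2"
    criterion: str         # DSM-5 diagnostic criterion (shown in bold in Table 11)
    questions: list = field(default_factory=list)  # mapped self-test questions

GAD_A2 = SymptomDescription(
    code="A2",
    criterion="The anxiety and worry are associated with irritability.",
    questions=["Do you get irritable and/or easily annoyed when anxious?"],
)

def head_text(s: SymptomDescription) -> str:
    """Concatenate the criterion and its questions into one description string."""
    return " ".join([s.criterion] + s.questions)

print(head_text(GAD_A2))
```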
<table><tr><td>No.</td><td>Example</td></tr><tr><td rowspan="3">1. (MDD)</td><td>Prompt: These are symptom descriptions of major depressive disorder:<br>D0. Depressed mood most of the day<br>D1. Diminished interest or pleasure<br>D2. Sleep disorders (insomnia or hypersomnia)<br>D3. Changes in weight or appetite when not dieting<br>D4. Fatigue or loss of energy<br>D5. Feeling worthlessness or guilty<br>D6. Diminished ability to think or concentrate<br>D7. A slowing down of thought and a reduction of physical movement<br>D8. Recurrent thoughts of death and suicidal ideation<br>Please tell me if the user below has major depressive disorder and which symptoms does the user have? (choose from the above)</td></tr><tr><td>Input: Whenever I wake up in the morning, I hate myself, and I want to commit suicide. I didn't have any friends to hang out with because I did not need to make friends actively when I went to school. The only reason I am not committing suicide is I don't want my parents to cry.</td></tr><tr><td>Output: Yes, this user appears to have major depressive disorder and they have symptoms D0 (depressed mood most of the day), D1 (diminished interest or pleasure), D5 (feeling worthlessness or guilty), D6 (diminished ability to think or concentrate), and D8 (recurrent thoughts of death and suicidal ideation).</td></tr><tr><td rowspan="3">2. (GAD)</td><td>Prompt: These are symptom descriptions of generalized anxiety disorder:<br>A0. Excessive anxiety and worry more than 6 months<br>A1. Difficult to control the worry<br>The anxiety and worry are associated with followings:<br>A2. Irritability<br>A3. Being easily fatigued<br>A4. Sleep disturbance<br>A5. Difficulty concentrating or mind going blank<br>A6. Muscle tension<br>Please tell me if the user below has generalized anxiety disorder and which symptoms does the user have? (choose from the above)</td></tr><tr><td>Input: I often feel anxious that something terrible is about to happen. For example, my husband will likely lose his job, or a family member will become ill or have an accident. I know these worries are unnecessary and excessive, but I can't stop worrying. I'm always nervous, so I feel exhausted even if I do nothing.</td></tr><tr><td>Output: Yes, the user has generalized anxiety disorder and they have the following symptoms: A0. Excessive anxiety and worry more than 6 months, A1. Difficult to control the worry, A2. Irritability, A3. Being easily fatigued, A4. Sleep disturbance, A5. Difficulty concentrating or mind going blank, A6. Muscle tension.</td></tr></table>
Table 10: Results for input example texts related to MDD and GAD, using GPT-3, respectively.
# Major Depressive Disorder
D0: Depressed mood most of the day, nearly every day. Feeling down, depressed, or hopeless.
D1: Markedly diminished interest or pleasure in all, or almost all, activities most of the day, nearly every day. Little interest or pleasure in doing things.
D2: Insomnia or hypersomnia nearly every day. Trouble falling or staying asleep, or sleeping too much.
D3: Significant weight loss when not dieting or weight gain, or decrease or increase in appetite nearly every day. Poor appetite or overeating.
D4: Fatigue or loss of energy nearly every day. Feeling tired or having little energy.
D5: Feelings of worthlessness or excessive or inappropriate guilt nearly every day. Feeling bad about yourself - or that you are a failure or have let yourself or your family down.
D6: Diminished ability to think or concentrate, or indecisiveness, nearly every day. Trouble concentrating on things, such as reading the newspaper or watching television.
D7: A slowing down of thought and a reduction of physical movement. Moving or speaking so slowly that other people could have noticed.
D8: Recurrent thoughts of death, recurrent suicidal ideation without a specific plan, or a suicide attempt or a specific plan for committing suicide. Thoughts that you would be better off dead, or of hurting yourself.
# Bipolar Disorder
Major Depressive Episode: D0-D8: Same as major depressive disorder.

Manic Episode:
M0: A distinct period of abnormally and persistently elevated, expansive, or irritable mood and abnormally and persistently increased goal-directed activity or energy, lasting at least 1 week and present most of the day, nearly every day. Do you ever experience a persistent elevated or irritable mood for more than a week?
M1: Increase in goal-directed activity or psychomotor agitation (i.e., purposeless non-goal-directed activity). Do you ever experience persistently increased goal-directed activity for more than a week?
M2: Inflated self-esteem or grandiosity. Do you ever experience inflated self-esteem or grandiose thoughts about yourself?
M3: Decreased need for sleep (e.g., feels rested after only 3 hours of sleep). Do you ever feel little need for sleep, feeling rested after only a few hours?
M4: More talkative than usual or pressure to keep talking. Do you ever find yourself more talkative than usual?
M5: Flight of ideas or subjective experience that thoughts are racing. Do you experience racing thoughts or a flight of ideas?
M6: Distractibility (i.e., attention too easily drawn to unimportant or irrelevant external stimuli), as reported or observed. Do you notice (or others comment) that you are easily distracted?
M7: Excessive involvement in activities that have a high potential for painful consequences. Do you engage excessively in risky behaviors, sexually or financially?
# Anxiety Disorder
A0: Excessive anxiety and worry, occurring more days than not for at least 6 months, about a number of events or activities. Do you worry about lots of different things? Do you worry about things working out in the future? Do you worry about things that have already happened in the past? Do you worry about how well you do things?
A1: The individual finds it difficult to control the worry. Do you have trouble controlling your worries? Do you feel jumpy?
A2: The anxiety and worry are associated with irritability. Do you get irritable and/or easily annoyed when anxious?
A3: The anxiety and worry are associated with being easily fatigued. Does worry or anxiety make you feel fatigued or worn out?
A4: The anxiety and worry are associated with sleep disturbance (difficulty falling or staying asleep, or restless, unsatisfying sleep). Does worry or anxiety interfere with falling or staying asleep?
A5: The anxiety and worry are associated with difficulty concentrating or mind going blank. Does worry or anxiety make it hard to concentrate?
A6: The anxiety and worry are associated with muscle tension. Do your muscles get tense when you are worried or anxious?
# Borderline Personality Disorder
B0: A pattern of unstable and intense interpersonal relationships characterized by alternating between extremes of idealization and devaluation. My relationships are very intense, unstable, and alternate between the extremes of over idealizing and undervaluing people who are important to me.
B1: Recurrent suicidal behavior, gestures, or threats, or self-mutilating behavior. Now, or in the past, when upset, I have engaged in recurrent suicidal behaviors, gestures, threats, or self-injurious behavior such as cutting, burning, or hitting myself.
B2: Identity disturbance: markedly and persistently unstable self-image or sense of self. I have a significant and persistently unstable image or sense of myself, or of who I am.
B3: Affective instability due to a marked reactivity of mood. My emotions change very quickly, and I experience intense episodes of sadness, irritability, and anxiety or panic attacks.
B4: Inappropriate, intense anger or difficulty controlling anger. My level of anger is often inappropriate, intense, and difficult to control.
B5: Transient, stress-related paranoid ideation or severe dissociative symptoms. I have very suspicious ideas, and am even paranoid, or I experience episodes under stress when I feel that I, other people, or the situation is somewhat unreal.
B6: Impulsivity in at least two areas that are potentially self-damaging (e.g., spending, sex, substance abuse, reckless driving, binge eating). I engage in two or more self-damaging acts such as excessive spending, unsafe and inappropriate sexual conduct, substance abuse, reckless driving, and binge eating.
B7: Frantic efforts to avoid real or imagined abandonment. I engage in frantic efforts to avoid real or imagined abandonment by people who are close to me.
B8: Chronic feelings of emptiness. I suffer from feelings of emptiness and boredom.
Table 11: The complete list of collected sentences for each head. The diagnostic criteria, sourced from DSM-5, are shown in bold, and questions from clinical questionnaires are underlined.
A For every submission:
A1. Did you describe the limitations of your work?
Yes, in the "Limitations" section
A2. Did you discuss any potential risks of your work?
Yes, in the "Ethics statement" section
A3. Do the abstract and introduction summarize the paper's main claims?
Yes, the paper's main claims are provided in the 1. Introduction section.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
B Did you use or create scientific artifacts?
Yes, in 3. Methodology.
B1. Did you cite the creators of artifacts you used?
Yes, in 2. Related work
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No, the code will be made publicly available after the reviewing process.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
|
| 482 |
+
|
| 483 |
+
Yes, we discuss about the possible problems in the "Ethics statement" section.
|
| 484 |
+
|
| 485 |
+
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
|
| 486 |
+
|
| 487 |
+
Yes, it is also discussed in "Ethics statement"
|
| 488 |
+
|
| 489 |
+
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
|
| 490 |
+
|
| 491 |
+
Yes, in section 4.1 datasets
|
| 492 |
+
|
| 493 |
+
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
|
| 494 |
+
|
| 495 |
+
Yes, in section 4.1 datasets
|
| 496 |
+
|
| 497 |
+
C Did you run computational experiments?
|
| 498 |
+
|
| 499 |
+
Yes, in Section 4
|
| 500 |
+
|
| 501 |
+
C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
|
| 502 |
+
|
| 503 |
+
Yes, in section 4, and Appendix
|
| 504 |
+
|
| 505 |
+
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
|
| 506 |
+
|
| 507 |
+
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
|
| 508 |
+
|
| 509 |
+
Yes, in section 4, and Appendix
|
| 510 |
+
|
| 511 |
+
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
|
| 512 |
+
|
| 513 |
+
Yes, in section 4 and 5
|
| 514 |
+
|
| 515 |
+
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
|
| 516 |
+
|
| 517 |
+
Yes, in section 4
|
| 518 |
+
|
| 519 |
+
D Did you use human annotators (e.g., crowdworkers) or research with human participants?
|
| 520 |
+
|
| 521 |
+
Left blank.
|
| 522 |
+
|
| 523 |
+
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
|
| 524 |
+
|
| 525 |
+
No response.
|
| 526 |
+
|
| 527 |
+
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?
|
| 528 |
+
|
| 529 |
+
No response.
|
| 530 |
+
|
| 531 |
+
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
|
| 532 |
+
|
| 533 |
+
No response.
|
| 534 |
+
|
| 535 |
+
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
|
| 536 |
+
|
| 537 |
+
No response.
|
| 538 |
+
|
| 539 |
+
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
|
| 540 |
+
|
| 541 |
+
No response.
2023/A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:641a0eb259e6b6ed966f467932a3c5c69708834ce2e980b84b25df723d5bd002
size 745235
2023/A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires/layout.json
ADDED
The diff for this file is too large to render.
See raw diff
2023/A Survey for Efficient Open Domain Question Answering/872e4524-758a-44e3-aed7-10ee6b0fce7e_content_list.json
ADDED
The diff for this file is too large to render.
See raw diff
2023/A Survey for Efficient Open Domain Question Answering/872e4524-758a-44e3-aed7-10ee6b0fce7e_model.json
ADDED
The diff for this file is too large to render.
See raw diff
2023/A Survey for Efficient Open Domain Question Answering/872e4524-758a-44e3-aed7-10ee6b0fce7e_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d0ea44e577b5fc0e6498f6efb40629b2d73298e522f42ad65ae3db4934b2f5a9
size 1322346
2023/A Survey for Efficient Open Domain Question Answering/full.md
ADDED
@@ -0,0 +1,403 @@
# A Survey for Efficient Open Domain Question Answering

Qin Zhang$^{1}$, Shangsi Chen$^{1}$, Dongkuan Xu$^{2}$, Qingqing Cao$^{3}$, Xiaojun Chen$^{1}$, Trevor Cohn$^{4}$, Meng Fang$^{5*}$

<sup>1</sup>Shenzhen University; <sup>2</sup>North Carolina State University; <sup>3</sup>University of Washington; <sup>4</sup>The University of Melbourne; <sup>5</sup>University of Liverpool

{qinzhang@, chenshangsi2021@email., xjchen@}szu.edu.cn; dxu27@ncsu.edu; qicao@cs.washington.edu; tcohn@unimelb.edu.au; Meng.Fang@liverpool.ac.uk
# Abstract

Open domain question answering (ODQA) is a longstanding task in natural language processing (NLP) aimed at answering factual questions from a large knowledge corpus without any explicit evidence. Recent works have predominantly focused on improving answering accuracy and have achieved promising progress. However, higher accuracy often requires more memory and higher inference latency, which might not be efficient enough for direct deployment in the real world. Thus, a trade-off between accuracy, memory consumption, and processing speed is pursued. In this paper, we survey recent advancements in the efficiency of ODQA models and summarize the core techniques for achieving efficiency. We also provide a quantitative analysis of memory cost, query speed, and accuracy, together with an overall performance comparison. Our goal is to keep scholars informed of the latest advancements and open challenges in ODQA efficiency research and to contribute to its further development.

# 1 Introduction

Open domain question answering (Voorhees and Tice, 2000) is a longstanding task in natural language processing that answers factoid questions from a large knowledge corpus such as Wikipedia (Wikipedia, 2004) or BookCorpus (Zhu et al., 2015). Traditional QA models rely on explicit evidence texts to locate the answer (Cao et al., 2019; Khashabi et al., 2020; Huang et al., 2021), while ODQA models must process large amounts of knowledge quickly to answer input questions. Compared to search engines, ODQA models aim to enhance user-friendliness by presenting the final answer to a question directly, rather than returning a list of relevant snippets or hyperlinks (Zhu et al., 2021).

Figure 1: The general pipeline of ODQA models is shown in (I), along with three different ODQA frameworks: Retriever-Reader (a), Retriever-Only (b), and Generator-Only (c).

Recently, ODQA systems have attracted considerable research attention. A classic framework of the ODQA system encompasses an information retriever (IR) and a reader, i.e., Retriever-Reader (Chen et al., 2017). The task of the IR is to retrieve evidence pieces from a large knowledge corpus; popular retrievers include TF-IDF (Chen et al., 2017), BM25 (Mao et al., 2021), and DPR (Karpukhin et al., 2020). The goal of the reader is to understand and reason over the retrieved evidence to yield the answer. It is often implemented with transformer-based language models, such as BERT (Devlin et al., 2019) and ALBERT (Lan et al., 2019), or generators such as T5 (Raffel et al., 2020), BART (Lewis et al., 2020a), and GPT (Brown et al., 2020). This two-module system enjoys a broad range of applications (Zhu et al., 2021).

However, most general-purpose ODQA models are computationally intensive, slow at inference, and expensive to train. One reason is the huge index/document size. For example, Karpukhin et al. (2020) processed an English Wikipedia corpus of 26 million articles and built a dense index with a size of 65GB. Besides, the majority of general-purpose ODQA models are built on large pre-trained language models, which often contain millions of parameters. For instance, the state-of-the-art ODQA models on the Natural Questions dataset, R2-D2 (Fajcik et al., 2021) and UnitedQA (Cheng et al., 2021), have 1.29 billion and 2.09 billion model parameters, respectively. Storing the corpus index and pre-trained language models is memory-intensive (Xia et al., 2022), while evidence retrieval and reading are memory- and time-consuming. This makes general-purpose ODQA models a big challenge for real-time use (Seo et al., 2019), such as on a mobile phone.

To meet real-world application needs, various trade-offs arise in building ODQA models, such as those among accuracy, memory consumption, and inference speed (Izacard et al., 2020; Wu et al., 2020; Mao et al., 2021). NeurIPS 2020 organized the EfficientQA competition (Min et al., 2021), aiming to build ODQA systems that predict correct answers while satisfying strict on-disk memory budgets. To this end, a line of work has focused on building more efficient protocols. Besides Retriever-Reader, Retriever-Only (Lee et al., 2021b) and Generator-Only (Roberts et al., 2020) are newly proposed protocols; see Fig. 1 for details. Various efficiency techniques have also been developed, such as index downsizing (Yamada et al., 2021; Lewis et al., 2022), fast searching (Lewis et al., 2021; Malkov and Yashunin, 2020), omitting evidence retrieval or reading (Roberts et al., 2020; Seonwoo et al., 2022; Lee et al., 2021b), and model size reduction (Yang and Seo, 2021; Singh et al., 2021).

In this survey, we provide a comprehensive introduction to the broad range of methods that aim to improve efficiency, with a focus on the ODQA task. In Section 2, we overview general-purpose ODQA models and discuss their strategies and limitations in terms of efficiency. In Section 3, we first walk through the key ODQA models that concentrate on efficiency, then summarize the core techniques used. Section 4 gives a quantitative analysis with an overall comparison of different frameworks and three specific aspects, i.e., memory cost, processing speed, and accuracy. Finally, in Section 5, we discuss the remaining challenges, followed by the conclusion in Section 6.

# 2 Overview of ODQA models

In this section, we summarize ODQA models into three typical frameworks (see Fig. 1): Retriever-Reader, Retriever-Only, and Generator-Only. As described in Section 1, Retriever-Reader models include two modules: a retriever and a reader. For retrievers, traditional non-neural methods, such as TF-IDF (Chen et al., 2017) and BM25 (Mao et al., 2021), use sparse representations to measure term matching between questions and passages. However, these approaches capture only lexical information, limiting their ability to match questions and passages (Qu et al., 2021). In contrast, recent neural dual-encoder retrievers (Karpukhin et al., 2020) encode questions and documents into a latent dense vector space where text semantics beyond surface terms can be adequately learned and measured. For readers, considering the way answers are obtained, there are two categories: extractive readers and generative readers. Extractive readers answer the question with a span from the context, and the goal is to classify the start and end positions of the answer in the retrieved evidence (Karpukhin et al., 2020; Qu et al., 2021). Generative readers are not restricted to the input context and freely generate answers by autoregressively predicting tokens (Raffel et al., 2020; Izacard and Grave, 2021). Distinctively, Retriever-Only models use a single retriever to extract answers directly from a phrase or QA-pair knowledge base, and Generator-Only models directly generate answers from the question, without evidence retrieval or reading (Lee et al., 2021c; Lewis et al., 2021).
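
The dual-encoder retrieval interface described above can be sketched in a few lines. The `encode` function below is a hypothetical stand-in (a deterministic hashed bag-of-words, not a trained BERT tower); only the offline-index / online-inner-product pattern mirrors real systems like DPR.

```python
import numpy as np

# Toy stand-in for a trained question/passage encoder: a real dual-encoder
# (e.g. DPR) uses two BERT towers; here we hash tokens into a fixed-size
# bag-of-words vector just to illustrate the interface.
def encode(text, dim=64):
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[sum(tok.encode()) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

passages = [
    "Paris is the capital of France.",
    "The Nile is a river in Africa.",
    "Mount Everest is the highest mountain.",
]
# Offline: encode every passage once and stack into an index matrix.
index = np.stack([encode(p) for p in passages])

# Online: encode the question and score all passages with a single
# matrix-vector product (inner product over unit vectors, i.e. cosine).
q = encode("What is the capital of France?")
scores = index @ q
top = int(np.argmax(scores))
print(passages[top])
```

The key design point is that passages and questions are encoded independently, so the passage side can be precomputed into an index; this is exactly what makes the index so large and motivates the compression techniques surveyed below.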

Retriever-Reader ODQA methods generally obtain good performance. However, due to dense encoding of corpus passages and longer evidence for answer reasoning, they normally suffer from a larger index size and a slower processing speed. In addition, dual-encoder retrievers like DPR, which encode questions and documents independently, ignore the interaction between them, limiting retrieval performance (Khattab et al., 2021; Lu et al., 2022). In Retriever-Only ODQA models, omitting the reading/generating step greatly improves answering speed. But these models have a few limitations: (1) lower performance on average compared to Retriever-Reader ODQA models, since less information is considered during answer inference; (2) high storage requirements for indexing fine-grained retrieval units such as phrases or QA pairs. For Generator-Only ODQA models, skipping evidence retrieval and reading yields lower memory cost and shorter processing time than two-stage systems. However, the performance of Generator-Only ODQA methods leaves much room for improvement. Additionally, real-world knowledge is updated routinely, and the huge training cost of generative language models makes it laborious and impractical to keep them always up-to-date or to retrain them frequently. Billions of parameters also make them storage-unfriendly and hard to deploy on resource-constrained devices (Roberts et al., 2020). A diagram with the typology of ODQA methods, along with their main concerns, is provided in Fig. 4 in the Appendix.

# 3 Efficient ODQA Models and Techniques

In this section, we first walk through the key ODQA models that concentrate on efficiency, discussing their strengths, weaknesses, and unique characteristics in Section 3.1. We then summarize the core techniques used in these models for improving ODQA efficiency, from the data and model perspectives respectively, in Section 3.2.

Before we start, we take DPR on the Natural Questions (NQ) test set as an example to show the inference time each module needs and the detailed memory costs in Fig. 2. The total processing time of DPR is 0.91 seconds (s), where the inference speed is dominated by evidence searching (74.79%) and reading (23.95%). The total memory cost of DPR is 79.32GB, which is huge: the index takes up 81.95% of the memory, the raw corpus takes 16.39%, and the remaining 1.66% is for the models, where the retriever model is around twice the size of the reader model.

Based on these observations, improving the efficiency of ODQA models focuses on reducing processing time and memory cost. To reduce processing time, we can accelerate evidence searching and reading. To reduce memory cost, we can shrink the index and the model. Besides, some emerging directions have also been proposed, such as skipping retrieval and generating answers directly from the question, or retrieving answers directly to omit evidence reading. We introduce the details below.

# 3.1 Walk through Efficient ODQA Models

Figure 2: Query processing time and memory cost for DPR on the NQ test set. We test on an Nvidia GeForce RTX 2080 Ti GPU and report the average over 1000 examples.

In this subsection, we delve into the details of efficient ODQA models. We categorize them into three classes according to the different means of achieving efficiency, i.e., reducing processing time, reducing memory cost, and blazing new directions.

# 3.1.1 Reducing Processing Time

Given a question, the processing time of ODQA involves three stages: question embedding, evidence searching, and evidence reading. Since evidence searching and evidence reading occupy most of the processing time, researchers mainly focus on reducing the time cost of these two stages.

By Accelerating Evidence Searching. Other than the traditional brute-force search method (Zhan et al., 2021), hierarchical navigable small world graphs (HNSW) (Malkov and Yashunin, 2020) and approximate nearest neighbor (ANN) search (Johnson et al., 2021) have become increasingly popular due to their fast search.

DPR (Yamada et al., 2021) and RePAQ (Lewis et al., 2021) adopt HNSW to achieve much faster search without a significant decline in retrieval accuracy. However, the side effect of HNSW is a larger index; for example, DPR with HNSW increases the index from 65GB to 151GB (Yamada et al., 2021). Besides, Locality Sensitive Hashing (LSH) (Neyshabur and Srebro, 2015) and Inverted File (IVF) (Sivic and Zisserman, 2003) are both efficient ANN methods to speed up search (Yamada et al., 2021; Lewis et al., 2022), but they often lead to a significant drop in retrieval accuracy (Yamada et al., 2021; Lewis et al., 2021, 2022). Concretely, LSH generates the same hashkey for similar embeddings through suitable hash functions, and evidence retrieval is then based on hashkeys (Wang et al., 2022). IVF constructs two-level indices using k-means clustering (Lewis et al., 2022). Unlike LSH, which can reduce the index size, IVF does not achieve this goal. Compared to LSH and IVF, the Learned Index for large-scale DENSE passage Retrieval (LIDER) (Wang et al., 2022) makes a trade-off between search speed and retrieval accuracy by dynamically learning a corpus index during training. It achieves faster search with a smaller drop in retrieval accuracy than IVF by predicting the location of a key from the learned key-location distribution of the dataset. Specifically, LIDER builds two-level indices with a method similar to IVF's. It further maps the documents in the indices to hashkeys using LSH and sorts them by hashkey. Meanwhile, the hashkeys are used to train a multi-layer linear regression model to predict the location of a hashkey in the sorted indexes. During inference, with a query embedded by DPR (Karpukhin et al., 2020), LIDER first computes its hashkey and finds its $c$ nearest centroids. With these centroids, LIDER searches the top-p nearest evidence in each subset in parallel. Finally, it merges all the retrieved evidence and selects the top-k as output. In summary, LIDER is a powerful, efficient, and practical method for ODQA evidence searching.
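
The hashkey idea behind LSH can be illustrated with the classic random-hyperplane variant: each hyperplane contributes one bit, so embeddings with a small angle between them tend to land in the same bucket. The vectors, bit count, and bucketing below are all illustrative, not any specific system's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random-hyperplane LSH: the sign of the projection onto each of `bits`
# random hyperplanes yields one bit of the hashkey.
def lsh_key(vec, planes):
    return tuple((planes @ vec > 0).astype(int))

dim, bits = 32, 8
planes = rng.normal(size=(bits, dim))

base = rng.normal(size=dim)
near = base + 1e-6 * rng.normal(size=dim)   # near-duplicate embedding
far = rng.normal(size=dim)                  # unrelated embedding

# Bucket a tiny "corpus" by hashkey; retrieval then scans only the
# bucket matching the query's key instead of the whole corpus.
buckets = {}
for i, v in enumerate([base, near, far]):
    buckets.setdefault(lsh_key(v, planes), []).append(i)

print(lsh_key(base, planes) == lsh_key(near, planes))
```

Because nearby vectors collide with high probability while distant ones usually do not, search cost drops to the bucket size; the accuracy loss mentioned above comes from true neighbors that happen to fall in a different bucket.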

By Accelerating Evidence Reading. Accelerating evidence reading is another effective way to speed up question processing in ODQA models. In fact, a high percentage of the retrieved evidence is not pertinent to the answer (Min et al., 2018), yet the reader module still allocates the same computation to this content, which involves many unnecessary computations and prolongs inference latency (Wu et al., 2020). Thus, the jumping reading strategy has been proposed, and studies have found it can bring considerable inference speedup (Wu et al., 2020; Guan et al., 2022). Concretely, the jumping reading strategy dynamically identifies less relevant text blocks at each layer of computation by calculating an importance score for each text block; blocks with low scores are excluded from subsequent processing.

Adaptive computation (AC) (Bengio et al., 2015; Graves, 2016) and Block-Skim (Guan et al., 2022) are efficient methods that improve reading efficiency following the jumping reading strategy, which manipulates how computation is allocated over the model input (Wu et al., 2020, 2021). SkyLineBuilder (Wu et al., 2020) applies AC to an extractive reader and dynamically decides which passage to allocate computation to at each layer during reading. Further, the Adaptive Passage Encoder (APE) (Wu et al., 2021) applies the AC strategy to the Fusion-in-Decoder (FiD) system, using it to early-stop the generator's encoder on evidence that is less likely to include answers. Meanwhile, inspired by the idea of passage filtering before retrieval (Yang and Seo, 2021), Block-Skim (Guan et al., 2022) skips question-irrelevant text blocks to optimize reading speed. It first slices an input sequence into text blocks of fixed length; a CNN module computes an importance score for each block in each transformer layer, and unimportant blocks are skipped. Block-Skim achieves an average 2.56x inference speedup over BERT-based models with little loss of accuracy on multiple extractive QA datasets. This suggests that all BERT-based Retriever-Reader ODQA models can be optimized by Block-Skim to speed up inference.
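
A minimal sketch of the block-skipping idea follows. The importance scorer here is a purely lexical stand-in (question-token overlap); Block-Skim instead learns the score with a CNN per transformer layer, and the block size and keep budget below are hypothetical.

```python
# Jumping-reading sketch: split the context into fixed-length blocks,
# score each block, and let only the top-scoring blocks reach the
# expensive reader.
def split_blocks(tokens, size):
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def importance(block, question_tokens):
    # Stand-in score: lexical overlap with the question. Block-Skim
    # learns this from attention maps instead.
    return len(set(block) & set(question_tokens))

def skim(text, question, block_size=4, keep=2):
    blocks = split_blocks(text.lower().split(), block_size)
    q = question.lower().split()
    ranked = sorted(blocks, key=lambda b: importance(b, q), reverse=True)
    kept = ranked[:keep]          # only these blocks reach the reader
    return [" ".join(b) for b in kept]

context = ("the eiffel tower is in paris and was finished in 1889 "
           "unrelated filler text about cooking pasta at home tonight")
kept_blocks = skim(context, "when was the eiffel tower finished")
print(kept_blocks)
```

The speedup comes from the reader processing `keep` blocks instead of all of them; the accuracy risk is that an answer-bearing block gets a low score and is skipped.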

# 3.1.2 Reducing Memory Cost

For ODQA models, there are three kinds of memory cost: the index, the model, and the raw corpus. Normally, reducing the index and model sizes are the two ways to achieve storage efficiency, while reducing the raw corpus size loses part of the knowledge source and causes a significant drop in performance (Yang and Seo, 2021).

By Reducing Index Size. The index of a corpus takes a major proportion of the memory cost when running an ODQA system. The evidence-searching module, which is strongly related to the index size, is also the module that takes the most time during inference. Thus, downsizing the index is key to improving the efficiency of ODQA models, and a line of research has pursued this goal.

BPR (Yamada et al., 2021) and DrBoost (Lewis et al., 2022) are representative works in this direction. BPR reduces the index size by sacrificing data precision, while DrBoost does so by compacting the embedding dimension (Lewis et al., 2022). Specifically, BPR (Yamada et al., 2021) leverages a learning-to-hash technique (Cao et al., 2017; Wang et al., 2018) to hash continuous passage vectors into compact binary codes, unlike DPR (Karpukhin et al., 2020), which uses dense continuous embeddings of corpus passages. BPR optimizes the search efficiency of the retriever while maintaining accuracy through multi-target joint learning of evidence retrieval and reranking. During retrieval, the top-$c$ passages are retrieved using the Hamming distance between binary codes. The retrieved evidence is then reranked with maximum inner product search (MIPS) (Shrivastava and Li, 2014; Guo et al., 2016) between the dense query vector and the passage binary codes. Finally, the top-$k$ pieces of evidence are output, where $k$ is much smaller than $c$. Differently, DrBoost (Lewis et al., 2022), a dense retrieval ensemble method inspired by boosting (Freund and Schapire, 1997), incrementally compacts the dimension of representations during training. Concretely, it sequentially builds multiple weak learners and integrates them into one stronger learner. Each weak learner consists of a BERT-based dual-encoder that encodes passages and questions into low-dimensional embeddings, normally 32-dim. The weak learners are trained iteratively using hard negative samples, and the final embeddings for passages and questions are a linear combination of the embeddings from all weak learners. The final embedding dimension can thus be controlled by the number of training rounds, which makes the total embedding dimension flexible and the index size adjustable. One limitation of DrBoost is that it must keep multiple encoders simultaneously to compute the final question representation at inference time. To remedy this, DrBoost distills all R question encoders (32-dim each) into a single encoder (32*R-dim), which outputs the final question embedding directly and achieves the goal of low resource usage.
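
The binary-code retrieval step of BPR can be illustrated in miniature. Everything here is toy-scale: random vectors stand in for learned passage embeddings, and a plain sign threshold stands in for the learned hash layer — but the Hamming-shortlist-then-rerank flow matches the description above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Binarize embeddings: store 0/1 codes instead of float32 vectors.
def binarize(vecs):
    return (vecs > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

dim, n = 64, 100
passages = rng.normal(size=(n, dim))        # stand-in passage embeddings
codes = binarize(passages)                  # 1 bit/dim if packed, vs 32 bits/dim

query = passages[42] + 0.05 * rng.normal(size=dim)   # query near passage 42
q_code = binarize(query.reshape(1, -1))[0]

# Stage 1 (cheap): shortlist top-c candidates by Hamming distance.
dists = [hamming(q_code, c) for c in codes]
shortlist = np.argsort(dists)[:5]
# Stage 2 in BPR would rerank the shortlist against the dense query
# vector; the true neighbor already tops the shortlist here.
print(shortlist)
```

The memory win is direct: a 768-dim float32 embedding is 3072 bytes, while its binary code packs into 96 bytes, at the cost of the precision lost in the sign threshold.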

By Reducing Model Size. Besides downsizing the index, compressing the model is another way to cut the memory cost of ODQA systems. One way to accomplish this is to build a single comprehensive model that performs retrieval and reading simultaneously, instead of the multiple models in traditional ODQA systems.

YONO (You Only Need One model) (Lee et al., 2021a) is a representative model of this kind, which integrates the retriever, reranker, and generator into a single T5-large-based transformer pipeline. In this way, YONO achieves a model size of less than 2GB, comparable to EMDR2 (Singh et al., 2021), with higher QA performance, making it the best-performing model under 2GB. Moreover, YONO can flexibly adjust its model size by adding or removing layers. Specifically, YONO first discards 18 decoder layers of the T5-large model and splits the remaining model into four parts: the first 12 layers for evidence retrieval, the middle 4 layers for evidence reranking, the following 8 layers for encoding, and the last 6 layers for decoding. The hidden representations are progressively improved along the pipeline, and fully end-to-end training over all stages makes full use of the capabilities of all modules. However, YONO still needs to perform evidence indexing and searching, which is time-consuming; improving its processing speed remains an urgent open problem.

# 3.1.3 One-stage Frameworks

Besides methods that accelerate evidence searching and reading, and methods that reduce the index and model sizes, some one-stage frameworks have been proposed as well, such as generating the answer directly from the input question or retrieving answers directly from a finer-grained knowledge base (i.e., phrases or question-answer pairs).

Directly Generate Answers. Some researchers blazed a brand new path that omits the whole evidence retrieval process, including corpus indexing and evidence searching, by leveraging generative language models (such as T5, BART, and GPT) to tackle ODQA tasks (Roberts et al., 2020; Brown et al., 2020; Lewis et al., 2020a). Generative models have learned and stored the knowledge of a large corpus; given a question, they can generate the answer directly. Skipping the evidence retrieval process saves much processing time, making them inference-efficient. The main advantage of Generator-Only methods is that they can answer open-domain questions without any access to external knowledge (Roberts et al., 2020), and they output the literal text of the answer in a more free-form fashion. However, there is generally a significant gap between generative models and Retriever-Reader ODQA models in QA performance, as well as in the adequacy of explanations. Thus, single generator-based ODQA models are often combined with existing evidence retriever models (Lewis et al., 2020b; Izacard and Grave, 2021; Singh et al., 2021) to obtain better QA performance.

Directly Retrieve Answers. As discussed at the beginning of Section 3, evidence reading takes non-negligible processing time, so an innovative way to improve ODQA efficiency is to omit it. Without evidence reading, the document corpus is first preprocessed into a knowledge base offline. When a new question arrives, the model searches the knowledge base for the final answer directly (Seo et al., 2019; Lee et al., 2021b; Lewis et al., 2021).

RePAQ (Lewis et al., 2021) is representative of this framework. It first converts a large corpus into a knowledge base of question-answer (QA) pairs using a question generation model, then uses a lightweight QA-pair retriever to answer questions. At inference time, it calculates the similarity between the input question and each question in the knowledge base using maximum inner product search (MIPS) (Shrivastava and Li, 2014; Guo et al., 2016) to retrieve the most similar QA pairs; the answer to the most similar question is returned directly as the output. However, the 220GB index for the 65 million QA pairs is a major drawback of RePAQ. Similarly, phrase-based ODQA models, such as DenSPI (Seo et al., 2019) and DensePhrases (Lee et al., 2021b), split the corpus documents into fine-grained phrases and build an index over these phrases, which can be retrieved directly as predicted answers. As with RePAQ, omitting evidence reading makes phrase-based ODQA models faster than Retriever-Reader ODQA models at question-processing time, as analyzed in Section 4.3.
# 3.2 Core Techniques
This section summarizes the core techniques commonly used in existing ODQA models to improve efficiency. They can be broadly divided into two categories: data-based and model-based techniques. Data-based techniques mainly focus on reducing the index, which can be downsized at different levels, such as the number of corpus passages, the feature dimension, and the storage per dimension. Model-based techniques try to reduce the model size while avoiding a significant drop in performance; model pruning and knowledge distillation are commonly used.
# 3.2.1 Data-based techniques
Passage Filtering. The huge corpora that ODQA models rely on contain many passages with little useful information that are unlikely to serve as evidence for answers. Filtering out such passages reduces the memory cost of the corpus without a large negative impact. For example, some researchers have designed a linear classifier to discriminate and discard unnecessary passages before evidence retrieval (Izacard et al., 2020; Yang and Seo, 2021).
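The filtering step amounts to scoring each passage and dropping low scorers before indexing. The sketch below uses a hypothetical keyword-density scorer as a stand-in for the learned linear classifier described above; names, corpus, and threshold are all illustrative:

```python
def keyword_density(passage, keywords):
    # Stand-in for a learned linear classifier: score a passage by the
    # fraction of its tokens that are informative keywords.
    tokens = passage.lower().split()
    return sum(t in keywords for t in tokens) / max(len(tokens), 1)

def filter_passages(passages, keywords, threshold=0.2):
    # Passages scoring below the threshold are discarded before indexing,
    # shrinking both the corpus and the retrieval index.
    return [p for p in passages if keyword_density(p, keywords) >= threshold]

corpus = ["The Eiffel Tower is in Paris", "Click here to subscribe now"]
print(filter_passages(corpus, {"eiffel", "tower", "paris"}))
```

In a real system the scorer would be trained to predict whether a passage is ever retrieved as evidence, but the keep/drop mechanics are the same.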
Dimension Reduction. Another way to reduce the memory cost is to reduce the dimension of dense passage representations. To this end, Izacard et al. (2020) learn an additional feed-forward layer to project high-dimensional embeddings into a lower-dimensional space. Principal component analysis (PCA) is another efficient technique commonly used to reduce the dimension of passage representations without losing important information (Ma et al., 2021; Zouhar et al., 2022). In Ma et al. (2021), PCA builds a projection matrix that maps the raw data onto the principal components using an orthonormal basis.
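The PCA step can be sketched in a few lines via an SVD of the centered embeddings (an illustrative sketch with random stand-in embeddings, not the published pipeline; real systems fit the projection on a corpus sample and store only the reduced vectors plus the mean and projection matrix for query-time use):

```python
import numpy as np

def pca_project(embeddings, out_dim):
    # Center the embeddings; the leading right-singular vectors form an
    # orthonormal basis for the principal components.
    mean = embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(embeddings - mean, full_matrices=False)
    proj = vt[:out_dim].T                      # (d, out_dim) projection matrix
    return (embeddings - mean) @ proj, mean, proj

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 768))                # toy 768-dim passage embeddings
reduced, mean, proj = pca_project(X, 128)
print(reduced.shape)                           # -> (200, 128)
```

Queries are reduced with the same `(mean, proj)` pair at search time, so inner products remain comparable.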
Product Quantization. Product quantization (PQ) (Jégou et al., 2011) further reduces the index size by cutting the storage cost of each dimension of the embeddings. It divides a $d$ -dimensional vector into $n$ sub-vectors of dimension $d/n$ and quantizes these sub-vectors independently using k-means (Izacard et al., 2020; Ma et al., 2021; Yang and Seo, 2021). However, while PQ reduces the index size, it can also cause a significant drop in accuracy.
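The split-and-quantize step can be sketched as follows (a toy illustration, not any system's implementation: tiny dimensions, a plain k-means, and hypothetical random embeddings):

```python
import numpy as np

def kmeans(X, k, iters=10, seed=0):
    # Plain k-means used to learn one small codebook per sub-space.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (assign == c).any():
                centroids[c] = X[assign == c].mean(axis=0)
    return centroids, assign

def product_quantize(X, n_sub, k=4):
    # Split d-dim vectors into n_sub sub-vectors of dimension d/n_sub and
    # quantize each block independently: every vector is then stored as
    # n_sub small integer codes instead of d floats.
    sub = X.shape[1] // n_sub
    codebooks, codes = [], []
    for i in range(n_sub):
        cents, assign = kmeans(X[:, i * sub:(i + 1) * sub], k)
        codebooks.append(cents)
        codes.append(assign)
    return codebooks, np.stack(codes, axis=1)

X = np.random.default_rng(1).normal(size=(64, 8))  # toy 8-dim embeddings
books, codes = product_quantize(X, n_sub=4, k=4)
print(codes.shape)  # each vector: 4 small codes in place of 8 floats
```

The accuracy drop mentioned above comes from the reconstruction error: each sub-vector is replaced by its nearest codebook centroid at search time.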
The three techniques introduced above are adopted jointly in Fusion-in-Decoder with Knowledge Distillation (FiD-KD) (Izacard et al., 2020) to reduce the memory cost of an ODQA system, which obtains competitive performance compared to the original system while compressing memory from more than 70GB to less than 6GB.
# 3.2.2 Model-based techniques
Model Pruning. Most recent works on open-domain question answering (Chen et al., 2017; Guu et al., 2020) adopt large pre-trained language models (Devlin et al., 2019; Raffel et al., 2020) as the passage retriever, reader, or generator because of their powerful deep semantic understanding. These models have millions or even billions of parameters, requiring large storage and long training time and leading to slow inference. For this reason, some researchers have turned to more lightweight language models (Yang and Seo, 2021). For example, a smaller pre-trained language model, MobileBERT (Sun et al., 2020), has been used to reduce the size of an ODQA system to 972MB (Yang and Seo, 2021). Parameter sharing is another way to constrain the model size: Skylinebuilder (Wu et al., 2020) and RePAQ downsize their models by using parameter-sharing encoders, i.e., ALBERT (Lan et al., 2019). More lightweight pre-trained language models have been proposed and verified in other natural language tasks, such as machine reading comprehension (Fan et al., 2019; Sajjad et al., 2020; Lagunas et al., 2021; Xia et al., 2022). They obtain smaller model sizes while achieving high accuracy on downstream tasks, including ODQA.
Knowledge Distillation. Compared to structural pruning, knowledge distillation focuses more on improving question processing speed. Knowledge distillation, which transfers knowledge from a large model into a small one, has been widely used in several NLP tasks, including ODQA and MRC (Sanh et al., 2019; Sun et al., 2020; Izacard and Grave, 2020; Lewis et al., 2022; Yang and Seo, 2021). For example, the Minimal R&R system (Yang and Seo, 2021) and DrBoost (Lewis et al., 2022) both integrate multiple modules into a single one via knowledge distillation.
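The core of the transfer is a soft-label objective: the student is trained toward the teacher's temperature-softened output distribution. A minimal sketch (the generic KL-based distillation loss, not any cited system's exact objective; logits and temperature are illustrative):

```python
import math

def softmax(logits, temp=1.0):
    # Temperature-softened softmax; higher temp spreads probability mass.
    exps = [math.exp(l / temp) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temp=2.0):
    # KL(teacher || student) on softened distributions: the student is
    # penalized wherever its distribution diverges from the teacher's.
    p = softmax(teacher_logits, temp)
    q = softmax(student_logits, temp)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches its teacher exactly incurs zero loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # -> 0.0
```

In practice this term is combined with the usual hard-label loss, and the smaller student is what gets deployed, which is where the speed gain comes from.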
# 4 Quantitative Analysis
This section gives a quantitative analysis of the aforementioned ODQA models. We first give an overall comparison of the different frameworks and then discuss the methods quantitatively from three specific aspects: memory cost, processing speed, and accuracy<sup>3</sup>. A final subsection summarizes the findings of the analysis.
# 4.1 Overall Comparison
In Table 1 in Appendix B, we present a comprehensive comparison of efficiency-related ODQA models from three aspects: memory cost, processing speed, and answer quality. Specifically, total memory, detailed model size, and index size show the memory cost; the number of questions answered per second (Q/s) shows the processing speed; and exact match (EM) scores on the Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017) datasets indicate answer quality.
Concerning the comparison between frameworks, we can see that two-stage methods (Retriever-Reader) generally obtain better ODQA performance than one-stage methods (i.e., Retriever-Only and Generator-Only). The best end-to-end EM scores on the NQ (55.9%) and TriviaQA (74.8%) datasets are obtained by R2-D2+reranker and GARextractive respectively, both under the Retriever-Reader framework. The second-best ODQA performances on NQ (54.7%) and TriviaQA (72.1%) are obtained by UnitedQA and FiD-large+KD_DPR, which are also two-stage methods.
In terms of total memory cost, i.e., the sum of the model size and the index size, Generator-Only systems generally have low memory overhead. Except for GPT-3, all Generator-Only systems take less than 50GB of memory, and five of the eight take less than 5GB. On the contrary, most Retriever-Only ODQA models require huge memory, normally greater than 200GB; DenSPI needs an enormous 2002.69GB. Retriever-Reader ODQA models span a wide range of memory costs, from 0.31GB to 363.26GB. Overall, Minimal R&R achieves the smallest memory overhead (0.31GB) while DenSPI has the largest (2002.69GB).
In terms of processing speed, which determines how fast an ODQA system can answer a given question, one-stage methods, especially Retriever-Only systems, are generally faster than two-stage methods. Among the eight Retriever-Only methods, five can process more than 20 questions per second (Q/s), and RePAQ_XL and RePAQ_base can answer an impressive 800 and 1400 questions per second respectively. At the other extreme, FiD-large and RAG-seq from the Retriever-Reader framework are the two slowest systems, processing less than 1 question per second.
To conclude, Fig. 3 gives a visual comparison of efficiency-related ODQA models grouped by framework. Using the NQ evaluation dataset as an example, it illustrates the detailed model size, index size, EM score, and processing speed. From Fig. 3, we can see that each framework has its strengths and weaknesses. Retriever-Only systems achieve significantly higher processing speeds but cost enormous memory storage. Generator-Only systems require the least memory, but their main concern is answer quality: the majority of them score below $30\%$ EM on the NQ dataset. Two-stage Retriever-Reader systems are relatively balanced, achieving high EM scores with moderate memory cost and processing speed.



Figure 3: Comprehensive comparison of ODQA models in terms of memory cost, processing speed, and EM accuracy on NQ evaluation dataset. The extractive-reader and generative-reader ODQA systems both belong to Retriever-Reader ODQA systems.

# 4.2 Details in Memory Cost
The total memory cost depends on the model size and the index size.
Index Size. For the index size, the two one-stage frameworks are two extremes: Generator-Only methods do not require an index file, while Retriever-Only methods generally need huge storage for their index. Most two-stage methods have a moderate index of 65GB or less.
For Retriever-Reader ODQA systems, the 65GB set of dense passage embeddings developed by DPR (Karpukhin et al., 2020) is the most commonly adopted index; it is used by 17 of the methods listed in Table 1 in Appendix B. In contrast, DrQA and GARextractive represent passages as sparse vectors, yielding a much smaller index (26GB) (Chen et al., 2017; Mao et al., 2021). Through product quantization (PQ), DPR+PQ compresses the index from 65GB to 2GB, and RePAQ's index shrinks from 220GB to 48GB. BPR (Yamada et al., 2021) creates a small index of less than 2.1GB; it also improves answering performance from $41.6\%$ to $49\%$ on the NQ dataset by replacing the BERT-based reader with an ELECTRA-large reader. Meanwhile, Minimal R&R (Yang and Seo, 2021) builds the smallest index, less than 0.15GB, at the price of a significant drop in ODQA performance.
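BPR's savings come from binary passage codes searched by Hamming distance. BPR itself learns the codes end-to-end through a hashing layer; the sketch below shows only the storage-and-search idea with naive sign binarization and hypothetical values:

```python
import numpy as np

def binarize(embs):
    # Hash real-valued embeddings to sign bits: each dimension then
    # costs 1 bit instead of a 32-bit float.
    return (embs > 0).astype(np.uint8)

def hamming_search(query_bits, index_bits, k=2):
    # Rank passages by Hamming distance to the binarized query.
    dists = (query_bits != index_bits).sum(axis=1)
    return np.argsort(dists)[:k]

embs = np.array([[0.3, -0.2, 0.8], [-0.5, 0.1, -0.4], [0.2, 0.9, 0.1]])
index = binarize(embs)                       # 3 bits per toy passage
q = binarize(np.array([0.4, -0.1, 0.6]))     # binarized query
print(hamming_search(q, index))              # nearest passages first
```

The 32x per-dimension compression is what brings the index down to the ~2GB scale reported above.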
For Retriever-Only ODQA systems, DenSPI+Sparc (Lee et al., 2020) and DensePhrases (Lee et al., 2021b) shrink the phrase index through pointer sharing, phrase filtering, and PQ, but the phrase index remains larger than 1000GB. DensePhrases further cuts the index down to 320GB, while retaining relatively high performance, by omitting sparse representations and using a SpanBERT-based encoder, which represents phrases in a lower-dimensional space (Joshi et al., 2020) than DenSPI+Sparc. DrBoost (Lewis et al., 2022) builds an index under 1GB, representing each passage with a 190-dimensional vector through boosting and PQ.
Model Size. The model size covers all modules in an ODQA system, including the retriever and the reader. It spans a great range, from 0.04GB to 700GB. Among all mentioned ODQA models, a quarter have model sizes below 1GB; $40\%$ are between $1 \sim 2\mathrm{GB}$ ; $12.5\%$ are between $2 \sim 3\mathrm{GB}$ ; $7.5\%$ are between $3 \sim 4\mathrm{GB}$ ; and the remaining $15\%$ weigh more than 4GB. Specifically, GPT-3 (Brown et al., 2020) has an extremely large model size of 700GB. Three other systems also have relatively large models: T5-1.1-XL_SSM (45.27GB) (Roberts et al., 2020), UnitedQA (8.36GB) (Cheng et al., 2021), and R2-D2+reranker (5.16GB) (Fajcik et al., 2021), while the smallest model (0.04GB) belongs to RePAQ-base (Lewis et al., 2021). GPT-3 keeps the largest model (700GB) yet achieves relatively high performance compared to other Generator-Only models, i.e., $71.2\%$ EM on TriviaQA (top 1) and $29.9\%$ EM on NQ (top 3). Minimal R&R (Yang and Seo, 2021) cuts the total model size down to 0.17GB. DrQA (Chen et al., 2017) has a small total model size of 0.27GB because its retriever is non-parametric BM25 and its reader relies on an LSTM with fewer parameters. GARextractive (Mao et al., 2021) maintains a small total model size while achieving the best performance on TriviaQA $(74.8\%)$ and performance similar to DPR on NQ $(41.8\%)$ . RePAQ (Lewis et al., 2021) achieves the smallest model of 0.04GB while remaining competitive with DPR.
Most ODQA models are implemented with PLMs of less than 2GB. A few keep a total model size of more than 3GB to achieve higher performance, such as FiD-large+KD_DPR (Izacard and Grave, 2020), RePAQ+FiD_large (Lewis et al., 2021), UnitedQA (Cheng et al., 2021), and R2-D2+reranker (Fajcik et al., 2021), as they employ either larger or more pre-trained language models to prioritize performance.
# 4.3 Details on Latency
In terms of latency, i.e., processing speed, most ODQA models answer fewer than 10 questions per second. Retriever-Only ODQA models are faster than those of the other frameworks. Compared to phrase-based systems, the QA-pair-based system RePAQ (Lewis et al., 2021) and its variants achieve the fastest inference speed among the listed ODQA models, up to $1400\mathrm{Q / s}$ . Generator-Only ODQA models also achieve higher Q/s than Retriever-Reader ODQA models, as they skip the time-consuming step of retrieving evidence from a large corpus.
# 5 Discussion
In this section, we summarize insights and future directions. We first summarize the key points for improving the efficiency of ODQA systems, from the two aspects of the index and the model respectively. In terms of index size, generative models and techniques for compacting embeddings are worth deeper exploration. In terms of model size, knowledge distillation is a promising direction, as is the application of lightweight models. In addition, one-stage ODQA models are also worthy of further research.
Additionally, we offer some advice on model selection under different requirements. If we pursue real-time feedback, Retriever-Only systems are good choices; if we are limited by computing resources, Generator-Only systems are suitable candidates; and if we need to trade off performance, memory cost, and processing time, Retriever-Reader systems are more appropriate. In general, for researchers interested in improving state-of-the-art efficient methods for ODQA tasks, this survey can serve as an entry point for finding new research directions.
However, some salient challenges remain on the path of ODQA efficiency research. One worry is that most ODQA approaches are computation-heavy and energy-expensive: how to deploy ODQA systems on low-power and mobile devices with limited computing resources is still very challenging. Another is that evaluating the efficiency of ODQA models only on accuracy, memory, and processing time seems inadequate, because many other factors should be considered and traded off, e.g., the resources required for model training (money, time, and data), power consumption, and carbon emissions.
# 6 Conclusion
In this survey, we reviewed the representative literature according to three different frameworks of open-domain question answering (ODQA) systems. We then provided a broad overview of existing methods for increasing the efficiency of ODQA models and discussed their limitations. In addition, we performed a quantitative analysis of efficiency and offered suggestions on method selection for open-domain question answering. Finally, we discussed open challenges and potential future directions for efficient ODQA models.
# 7 Limitations
It is difficult to evaluate the efficiency of ODQA models fairly and impartially, because multiple factors must be considered and traded off. On the one hand, accuracy, memory, and processing time alone are not enough to evaluate efficiency; it is also important to establish which resources, e.g., money, power consumption, or carbon emissions, one attempts to constrain (Treviso et al., 2022). On the other hand, how models are deployed and which tools their implementations rely on also contribute to inequity (Blodgett et al., 2020). It is extremely challenging to unify the deployment of all models and the tools they rely on, and to achieve a truly fair and unbiased efficiency comparison.
# 8 Ethics Statement
Our work focuses on summarizing and discussing the accuracy, inference speed, and memory cost of open domain question answering systems. We believe that our work is helpful for researchers who are interested in improving the state-of-the-art efficiency methods on ODQA tasks. We do not anticipate any ethical concerns arising from the research presented in this paper.
# Acknowledgments
This research was supported by National Natural Science Foundation of China (62206179, 92270122), Guangdong Provincial Natural Science Foundation (2022A1515010129, 2023A1515012584), University stability support program of Shenzhen (20220811121315001), Shenzhen Research Foundation for Basic Research, China (JCYJ20210324093000002).
# References
Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. 2015. Conditional computation in neural networks for faster models. CoRR, abs/1511.06297.
|
| 207 |
+
Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of "bias" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5454-5476, Online. Association for Computational Linguistics.
|
| 208 |
+
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
|
| 209 |
+
|
| 210 |
+
Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
|
| 211 |
+
Yu Cao, Meng Fang, and Dacheng Tao. 2019. BAG: Bi-directional attention entity graph convolutional network for multi-hop reasoning question answering. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 357-362, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 212 |
+
Zhangjie Cao, Mingsheng Long, Jianmin Wang, and Philip S. Yu. 2017. Hashnet: Deep learning to hash by continuation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
|
| 213 |
+
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879, Vancouver, Canada. Association for Computational Linguistics.
|
| 214 |
+
Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2021. UnitedQA: A hybrid approach for open domain question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3080-3090, Online. Association for Computational Linguistics.
|
| 215 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 216 |
+
Romina Etezadi and Mehrnoush Shamsfard. 2022. The state of the art in open domain complex question answering: a survey. Applied Intelligence, pages 1-21.
|
| 217 |
+
Martin Fajcik, Martin Docekal, Karel Ondrej, and Pavel Smrz. 2021. R2-D2: A modular baseline for open-domain question answering. In Findings of the Association for Computational Linguistics: EMNLP
|
| 218 |
+
|
| 219 |
+
2021, pages 854-870, Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 220 |
+
Angela Fan, Edouard Grave, and Armand Joulin. 2019. Reducing transformer depth on demand with structured dropout. CoRR, abs/1909.11556.
|
| 221 |
+
Yoav Freund and Robert E Schapire. 1997. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139.
|
| 222 |
+
Alex Graves. 2016. Adaptive computation time for recurrent neural networks. ArXiv, abs/1603.08983.
|
| 223 |
+
Yue Guan, Zhengyi Li, Zhouhan Lin, Yuhao Zhu, Jingwen Leng, and Minyi Guo. 2022. Block-skim: Efficient question answering for transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10):10710-10719.
|
| 224 |
+
Jiafeng Guo, Yinqiong Cai, Yixing Fan, Fei Sun, Ruqing Zhang, and Xueqi Cheng. 2022. Semantic models for the first-stage retrieval: A comprehensive review. ACM Trans. Inf. Syst., 40(4).
|
| 225 |
+
Ruiqi Guo, Sanjiv Kumar, Krzysztof Choromanski, and David Simcha. 2016. Quantization based fast inner product search. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, pages 482-490, Cadiz, Spain. PMLR.
|
| 226 |
+
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In International Conference on Machine Learning, pages 3929-3938. PMLR.
|
| 227 |
+
Yinya Huang, Meng Fang, Yu Cao, Liwei Wang, and Xiaodan Liang. 2021. DAGN: Discourse-aware graph network for logical reasoning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5848-5855, Online. Association for Computational Linguistics.
|
| 228 |
+
Zhen Huang, Shiyi Xu, Minghao Hu, Xinyi Wang, Jinyan Qiu, Yongquan Fu, Yuncai Zhao, Yuxing Peng, and Changjian Wang. 2020. Recent trends in deep learning based open-domain textual question answering systems. IEEE Access, 8:94341-94356.
|
| 229 |
+
Gautier Izacard and Edouard Grave. 2020. Distilling knowledge from reader to retriever for question answering. CoRR, abs/2012.04584.
|
| 230 |
+
Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874-880, Online. Association for Computational Linguistics.
|
| 231 |
+
|
| 232 |
+
Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Sebastian Riedel, and Edouard Grave. 2020. A memory efficient baseline for open domain question answering. CoRR, abs/2012.15156.
|
| 233 |
+
Hervé Jégou, Matthijs Douze, and Cordelia Schmid. 2011. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33:117-128.
|
| 234 |
+
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2021. Billion-scale similarity search with gpus. IEEE Transactions on Big Data, 7(3):535-547.
|
| 235 |
+
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. Span-BERT: Improving Pre-training by Representing and Predicting Spans. Transactions of the Association for Computational Linguistics, 8:64-77.
|
| 236 |
+
Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Annual Meeting of the Association for Computational Linguistics.
|
| 237 |
+
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.
|
| 238 |
+
Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1896-1907, Online. Association for Computational Linguistics.
|
| 239 |
+
Omar Khattab, Christopher Potts, and Matei Zaharia. 2021. Relevance-guided Supervision for OpenQA with ColBERT. Transactions of the Association for Computational Linguistics, 9:929-944.
|
| 240 |
+
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.
|
| 241 |
+
François Lagunas, Ella Charlaix, Victor Sanh, and Alexander Rush. 2021. Block pruning for faster transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10619-10629, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 242 |
+
|
| 243 |
+
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. CoRR, abs/1909.11942.
|
| 244 |
+
Haejun Lee, Akhil Kedia, Jongwon Lee, Ashwin Paranjape, Christopher D. Manning, and Young-Gu Woo. 2021a. You only need one model for open-domain question answering. CoRR, abs/2112.07381.
|
| 245 |
+
Jinhyuk Lee, Minjoon Seo, Hannaneh Hajishirzi, and Jaewoo Kang. 2020. Contextualized sparse representations for real-time open-domain question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 912-919, Online. Association for Computational Linguistics.
|
| 246 |
+
Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, and Danqi Chen. 2021b. Learning dense representations of phrases at scale. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6634-6647, Online. Association for Computational Linguistics.
|
| 247 |
+
Jinhyuk Lee, Alexander Wettig, and Danqi Chen. 2021c. Phrase retrieval learns passage retrieval, too. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3661-3672, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 248 |
+
Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086–6096, Florence, Italy. Association for Computational Linguistics.
|
| 249 |
+
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
|
| 250 |
+
Patrick Lewis, Barlas Oguz, Wenhan Xiong, Fabio Petroni, Scott Yih, and Sebastian Riedel. 2022. Boosted dense retriever. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3102-3117, Seattle, United States. Association for Computational Linguistics.
|
| 251 |
+
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Rocktuschel, Sebastian Riedel, and Douwe Kiela. 2020b.
|
| 252 |
+
|
| 253 |
+
Retrieval-augmented generation for knowledge-intensive nlp tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459-9474. Curran Associates, Inc.
|
| 254 |
+
Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Kuttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021. PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them. Transactions of the Association for Computational Linguistics, 9:1098-1115.
|
| 255 |
+
Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, and Haifeng Wang. 2022. Ernie-search: Bridging cross-encoder with dual-encoder via self on-the-fly distillation for dense passage retrieval.
|
| 256 |
+
Xueguang Ma, Minghan Li, Kai Sun, Ji Xin, and Jimmy Lin. 2021. Simple and effective unsupervised redundancy elimination to compress dense vectors for passage retrieval. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2854-2859, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 257 |
+
Yu A. Malkov and D. A. Yashunin. 2020. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4):824-836.
|
| 258 |
+
Yuning Mao, Pengcheng He, Xiaodong Liu, Ye- long Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. 2021. Generation-augmented retrieval for open-domain question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4089-4100, Online. Association for Computational Linguistics.
|
| 259 |
+
Sewon Min, Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick Lewis, Yuxiang Wu, Heinrich Kuttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Wen-tau Yih. 2021. Neurips 2020 efficientqa competition: Systems, analyses and lessons learned. In Proceedings of the NeurIPS 2020 Competition and Demonstration
Track, volume 133 of Proceedings of Machine Learning Research, pages 86-111. PMLR.
Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1725-1735, Melbourne, Australia. Association for Computational Linguistics.
Behnam Neyshabur and Nathan Srebro. 2015. On symmetric and asymmetric LSHs for inner product search. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, pages 1926-1934. JMLR.org.
Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835-5847, Online. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67.
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426, Online. Association for Computational Linguistics.
Hassan Sajjad, Fahim Dalvi, Nadir Durrani, and Preslav Nakov. 2020. Poor man's BERT: Smaller and faster transformer models. ArXiv, abs/2004.03844.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108.
Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4430-4441, Florence, Italy. Association for Computational Linguistics.
Yeon Seonwoo, Juhee Son, Jiho Jin, Sang-Woo Lee, Ji-Hoon Kim, Jung-Woo Ha, and Alice Oh. 2022. Two-step question retrieval for open-domain QA. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1487-1492, Dublin, Ireland. Association for Computational Linguistics.
Xiaoyu Shen, Svitlana Vakulenko, Marco Del Tredici, Gianni Barlacchi, Bill Byrne, and A. Gispert. 2022. Low-resource dense retrieval for open-domain question answering: A comprehensive survey. ArXiv, abs/2208.03197.
Anshumali Shrivastava and Ping Li. 2014. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.
Devendra Singh, Siva Reddy, Will Hamilton, Chris Dyer, and Dani Yogatama. 2021. End-to-end training of multi-document reader and retriever for open-domain question answering. In Advances in Neural Information Processing Systems, volume 34, pages 25968-25981. Curran Associates, Inc.
Josef Sivic and Andrew Zisserman. 2003. Video Google: a text retrieval approach to object matching in videos. In Proceedings Ninth IEEE International Conference on Computer Vision, pages 1470-1477 vol.2.
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. MobileBERT: a compact task-agnostic BERT for resource-limited devices. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2158–2170, Online. Association for Computational Linguistics.
Marcos Vinicius Treviso, Tianchu Ji, Ji-Ung Lee, Betty van Aken, Qingqing Cao, Manuel R. Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Pedro Henrique Martins, Andre F. T. Martins, Peter Milder, Colin Raffel, Edwin Simpson, Noam Slonim, Niranjan Balasubramanian, Leon Derczynski, and Roy Schwartz. 2022. Efficient methods for natural language processing: A survey. ArXiv, abs/2209.00099.
Ellen M. Voorhees and Dawn M. Tice. 2000. The TREC-8 question answering track. In Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00), Athens, Greece. European Language Resources Association (ELRA).
Jingdong Wang, Ting Zhang, jingkuan song, Nicu Sebe, and Heng Tao Shen. 2018. A survey on learning to hash. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):769-790.
Yifan Wang, Haodi Ma, and Daisy Zhe Wang. 2022. Lider: An efficient high-dimensional learned index for large-scale dense passage retrieval. ArXiv, abs/2205.00970.
Wikipedia. 2004. Wikipedia. MediaPress.
Yuxiang Wu, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2021. Training adaptive computation for open-domain question answering with computational constraints. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the
11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 447-453, Online. Association for Computational Linguistics.
Yuxiang Wu, Sebastian Riedel, Pasquale Minervini, and Pontus Stenetorp. 2020. Don’t read too much into it: Adaptive computation for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3029–3039, Online. Association for Computational Linguistics.
Mengzhou Xia, Zexuan Zhong, and Danqi Chen. 2022. Structured pruning learns compact and accurate models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1513-1528, Dublin, Ireland. Association for Computational Linguistics.
Ikuya Yamada, Akari Asai, and Hannaneh Hajishirzi. 2021. Efficient passage retrieval with hashing for open-domain question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 979-986, Online. Association for Computational Linguistics.
Sohee Yang and Minjoon Seo. 2021. Designing a minimal retrieve-and-read system for open-domain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5856-5865, Online. Association for Computational Linguistics.
Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, and Shaoping Ma. 2021. Jointly optimizing query encoder and product quantization to improve retrieval performance. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, CIKM '21, page 2487-2496, New York, NY, USA. Association for Computing Machinery.
Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming Zheng, Soujanya Poria, and Tat-Seng Chua. 2021. Retrieving and reading: A comprehensive survey on open-domain question answering. CoRR, abs/2101.00774.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 19-27.
Vilém Zouhar, Marius Mosbach, Miaoran Zhang, and Dietrich Klakow. 2022. Knowledge base index compression via dimensionality and precision reduction. In Proceedings of the 1st Workshop on
Semiparametric Methods in NLP: Decoupling Logic from Knowledge, pages 41-53, Dublin, Ireland and Online. Association for Computational Linguistics.
Figure 4: Typology of ODQA systems and their main concerns in terms of accuracy, memory size, and processing speed. A $\checkmark$ indicates that the corresponding model addresses the respective concern (accuracy, index size, or speed).
# A Connection to Existing Related Surveys
ODQA has been discussed and summarized in several survey papers that offer broad overviews of NLP techniques. However, these surveys focus primarily on deep neural models for improving ODQA performance. Specifically, the survey by Huang et al. (2020) introduces early deep learning-based ODQA models, which are mainly built on LSTMs or CNNs; modern Transformer-based ODQA models are not included. Zhu et al. (2021) provide a comprehensive literature review of ODQA models, with particular attention to techniques incorporating neural machine reading comprehension models. Guo et al. (2022) focus on the semantic models used in first-stage retrieval. Shen et al. (2022) pay more attention to how to train dense retrievers effectively with less annotated training data. Treviso et al. (2022) review efficient methods in natural language processing (NLP), mainly covering upstream generic pre-trained language models and training methods. Etezadi and Shamsfard (2022) concentrate on comparing ODQA methods for complex question answering. To the best of our knowledge, no existing survey summarizes ODQA methods from the efficiency perspective, which motivates our overview of efficient ODQA models in this paper.
# B Corpus and Metrics Normally Used
Corpus. The most commonly used corpus for open-domain question answering systems is the 2018-12-20 dump of English Wikipedia, which contains 21 million 100-word passages after removing semi-structured data (tables, infoboxes, lists, and disambiguation pages) (Karpukhin et al., 2020). Most ODQA models, such as RocketQA (Qu et al., 2021), FiD (Izacard and Grave, 2021), and R2-D2 (Fajcik et al., 2021), directly
Table 1: Comprehensive analysis of memory cost (model size, index size, and the total size), processing speed, and EM accuracy on NQ. Results marked with symbol * were obtained on an Nvidia GeForce Rtx 2080 Ti GPU over 100 examples.
<table><tr><td rowspan="2">Frameworks</td><td rowspan="2">Systems</td><td rowspan="2">Grouped by Memory Cost(GB)</td><td colspan="3">Memory Cost(GB)</td><td>Processing Speed</td><td colspan="2">EM score (%)</td></tr><tr><td>Models</td><td>Index</td><td>Total</td><td>Q/s</td><td>NQ</td><td>TriviaQA</td></tr><tr><td rowspan="24">Extractive-Reader</td><td>Minimal R&R</td><td rowspan="9">(0, 10]</td><td>0.17</td><td>0.15</td><td>0.31</td><td>-</td><td>32.60</td><td>48.75</td></tr><tr><td>SkylineBuilder</td><td>0.07</td><td>2.40</td><td>2.47</td><td>-</td><td>34.20</td><td>-</td></tr><tr><td>BM25+BERT-base</td><td>0.44</td><td>2.40</td><td>2.84</td><td>4.68*</td><td>-</td><td>-</td></tr><tr><td>GAR Extractive</td><td>0.44</td><td>2.40</td><td>2.84</td><td>2.61*</td><td>41.80</td><td>74.80</td></tr><tr><td>DrBoost+PQ(8-dim)</td><td>2.64</td><td>0.42</td><td>3.06</td><td>-</td><td>-</td><td>-</td></tr><tr><td>DPR+PQ</td><td>1.32</td><td>2.00</td><td>3.32</td><td>4.67*</td><td>38.40</td><td>52.00</td></tr><tr><td>BPR_BERT</td><td>1.32</td><td>2.10</td><td>3.42</td><td>4.81*</td><td>41.60</td><td>56.80</td></tr><tr><td>DrBoost+PQ(4-dim)</td><td>2.64</td><td>0.84</td><td>3.48</td><td>-</td><td>-</td><td>-</td></tr><tr><td>BPR ELECTRA-large</td><td>2.22</td><td>2.10</td><td>4.32</td><td>-</td><td>49.00</td><td>65.60</td></tr><tr><td>DrBoost</td><td rowspan="4">(10, 50]</td><td>2.64</td><td>13.00</td><td>15.64</td><td>-</td><td>-</td><td>-</td></tr><tr><td>ORQA</td><td>1.32</td><td>18.00</td><td>19.32</td><td>8.60</td><td>33.30</td><td>45.00</td></tr><tr><td>REALM</td><td>1.32</td><td>18.00</td><td>19.32</td><td>8.40</td><td>39.20</td><td>-</td></tr><tr><td>DrQA</td><td>0.27</td><td>26.00</td><td>26.27</td><td>1.80</td><td>35.70</td><td>-</td></tr><tr><td>ColBERT-QA-base</td><td rowspan="9">(50, 
100]</td><td>0.88</td><td>65.00</td><td>65.88</td><td>-</td><td>42.30</td><td>64.60</td></tr><tr><td>ANCE</td><td>1.32</td><td>65.00</td><td>66.32</td><td>5.51*</td><td>46.00</td><td>57.50</td></tr><tr><td>DPR</td><td>1.32</td><td>65.00</td><td>66.32</td><td>1.60*</td><td>41.50</td><td>56.80</td></tr><tr><td>GAR+DPRextractive</td><td>1.32</td><td>65.00</td><td>66.32</td><td>1.25*</td><td>43.80</td><td>-</td></tr><tr><td>RocketQA</td><td>1.50</td><td>65.00</td><td>66.50</td><td>-</td><td>42.80</td><td>-</td></tr><tr><td>ColBERT-QA-large</td><td>1.76</td><td>65.00</td><td>66.76</td><td>-</td><td>47.80</td><td>70.10</td></tr><tr><td>ERNIE-Search_base</td><td>1.76</td><td>65.00</td><td>66.76</td><td>-</td><td>-</td><td>-</td></tr><tr><td>R2-D2_reranker</td><td>5.16</td><td>65.00</td><td>70.16</td><td>-</td><td>55.90</td><td>69.90</td></tr><tr><td>UnitedQA</td><td>8.36</td><td>65.00</td><td>73.36</td><td>-</td><td>54.70</td><td>70.50</td></tr><tr><td>DPR+HNSW</td><td rowspan="2">(100, 500])</td><td>1.32</td><td>151.00</td><td>152.32</td><td>5.82*</td><td>41.20</td><td>56.60</td></tr><tr><td>ERNIE-Search_2.4B</td><td>19.20</td><td>344.06</td><td>363.26</td><td>-</td><td>-</td><td>-</td></tr><tr><td rowspan="10">Generative-Reader</td><td>EMDR2</td><td>(10, 50]</td><td>1.76</td><td>32.00</td><td>33.76</td><td>-</td><td>52.50</td><td>71.40</td></tr><tr><td>YONO_retriever</td><td rowspan="9">(50, 
100]</td><td>1.54</td><td>65.00</td><td>66.54</td><td>-</td><td>53.20</td><td>71.30</td></tr><tr><td>FiD-base</td><td>1.76</td><td>65.00</td><td>66.76</td><td>2.00</td><td>48.20</td><td>65.00</td></tr><tr><td>FiD-base+KD_DPR</td><td>1.76</td><td>65.00</td><td>66.76</td><td>-</td><td>49.60</td><td>68.80</td></tr><tr><td>YONO_reranker</td><td>1.76</td><td>65.00</td><td>66.76</td><td>-</td><td>53.20</td><td>71.90</td></tr><tr><td>GAR+DPRgenerated</td><td>2.50</td><td>65.00</td><td>67.50</td><td>1.25*</td><td>45.30</td><td>-</td></tr><tr><td>RAG-seq</td><td>2.50</td><td>65.00</td><td>67.50</td><td>0.80</td><td>44.50</td><td>56.80</td></tr><tr><td>FiD-large</td><td>3.96</td><td>65.00</td><td>68.96</td><td>0.50</td><td>51.40</td><td>67.60</td></tr><tr><td>FiD-large+KD_DPR</td><td>3.96</td><td>65.00</td><td>68.96</td><td>-</td><td>53.70</td><td>72.10</td></tr><tr><td>RePAQ+FiD_large</td><td>3.32</td><td>220.00</td><td>223.32</td><td>2.30</td><td>52.30</td><td>67.30</td></tr><tr><td rowspan="8">Retriever-Only</td><td>RePAQ_base+PQ</td><td>(10, 50]</td><td>0.04</td><td>48.00</td><td>48.04</td><td>100.00</td><td>41.20</td><td>-</td></tr><tr><td>RePAQ_base</td><td rowspan="7">(100, 500]</td><td>0.04</td><td>220.00</td><td>220.04</td><td>1400.00</td><td>40.90</td><td>-</td></tr><tr><td>RePAQ_base+reranker_base</td><td>0.09</td><td>220.00</td><td>220.09</td><td>55.00</td><td>45.70</td><td>-</td></tr><tr><td>RePAQ_XL</td><td>0.24</td><td>220.00</td><td>220.24</td><td>800.00</td><td>41.50</td><td>-</td></tr><tr><td>RePAQ_XL+reranker_XXL</td><td>1.18</td><td>220.00</td><td>221.18</td><td>6.00</td><td>47.60</td><td>52.10</td></tr><tr><td>DensePhrases</td><td>0.88</td><td>320.00</td><td>320.88</td><td>20.60</td><td>40.90</td><td>50.70</td></tr><tr><td>DenSPI+Sparc</td><td>2.69</td><td>1547.00</td><td>1549.69</td><td>2.10</td><td>14.50</td><td>34.40</td></tr><tr><td>DenSPI</td><td>2.69</td><td>2000.00</td><td>2002.69</td><td>2.90</td><td>8.10</td><td>30.70</td></tr><tr><td 
rowspan="8">Generator-Only</td><td>T5-1.1-small+SSM</td><td rowspan="5">(0, 10]</td><td>0.24</td><td>0.00</td><td>0.24</td><td>7.20*</td><td>25.50</td><td>-</td></tr><tr><td>T5-base</td><td>0.88</td><td>0.00</td><td>0.88</td><td>7.53*</td><td>25.90</td><td>29.10</td></tr><tr><td>BART-large</td><td>1.62</td><td>0.00</td><td>1.62</td><td>5.88*</td><td>26.50</td><td>26.70</td></tr><tr><td>GAR_generate</td><td>1.62</td><td>0.00</td><td>1.62</td><td>2.94*</td><td>38.10</td><td>62.20</td></tr><tr><td>T5-large</td><td>3.08</td><td>0.00</td><td>3.08</td><td>3.85*</td><td>28.50</td><td>35.90</td></tr><tr><td>T5-1.1-XL+SSM</td><td rowspan="2">(10, 50]</td><td>12.00</td><td>0.00</td><td>12.00</td><td>-</td><td>29.50</td><td>45.10</td></tr><tr><td>T5-1.1-XXL+SSM</td><td>45.27</td><td>0.00</td><td>45.27</td><td>-</td><td>35.20</td><td>61.60</td></tr><tr><td>GPT-3</td><td>(500, 1000]</td><td>700.00</td><td>0.00</td><td>700.00</td><td>-</td><td>29.90</td><td>71.20</td></tr></table>
Table 2: The statistical information of Wikipedia corpora used in ODQA models.
<table><tr><td>Wikipedia Corpus</td><td>Split Method</td><td>Retrieval Unit</td><td>Length of a Unit (tokens)</td><td>Number of Units (million)</td><td>Encoding Methods</td><td>Index Size (GB)</td><td>Relativised ODQA models</td></tr><tr><td rowspan="2">2016-12-21 dump of English Wikipedia</td><td rowspan="2">-</td><td rowspan="2">article</td><td rowspan="2">-</td><td rowspan="2">5.1</td><td>TF-IDF</td><td>26</td><td>DrQA</td></tr><tr><td>BM25</td><td>2.4</td><td>Skylinebuilder, GARextractive</td></tr><tr><td rowspan="5">2018-12-20 snapshot of English Wikipedia</td><td>BERT's tokenizer</td><td>block/passage</td><td>288</td><td>13</td><td>dense encoding</td><td>18</td><td>ORQA, REALM</td></tr><tr><td>-</td><td>block/passage</td><td>100</td><td>21</td><td>dense encoding</td><td>65</td><td>DPR, RocketQA, R2-D2, etc.</td></tr><tr><td rowspan="2">-</td><td rowspan="2">phrase</td><td rowspan="2"><=20</td><td rowspan="2">60000</td><td>TF-IDF+ dense encoding</td><td>2000</td><td>DenSPI</td></tr><tr><td>dense encoding</td><td>320</td><td>DensePhrases</td></tr><tr><td>generator</td><td>QA-pair</td><td>-</td><td>65</td><td>dense encoding</td><td>220</td><td>RePAQ</td></tr></table>
build the index for passages on this Wikipedia corpus; the resulting index file is 65GB. Based on this corpus, RePAQ further generates 65 million QA pairs and indexes them into a 220GB file. Some other methods, e.g. DrQA (Chen et al., 2017) and Skylinebuilder (Wu et al., 2020), encode and build indexes for documents from the 2016-12-21 dump of English Wikipedia, which includes 5.1 million articles; the size of this index file is 26GB.
Besides the different choices of the original corpus, there are also different partition and segmentation strategies. For example, ORQA (Lee et al., 2019) and REALM (Guu et al., 2020) segment the corpus documents into 13 million blocks of 288 tokens each. DenSPI (Seo et al., 2019), DenSPI+Sparc (Lee et al., 2020), and DensePhrases (Lee et al., 2021b) divide corpus documents into 60 billion phrases, each containing at most 20 tokens. The remaining ODQA models segment corpus documents into 21 million passages of 100 tokens each, leading to a 65GB index (Karpukhin et al., 2020; Lewis et al., 2021; Izacard and Grave, 2021; Qu et al., 2021).
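The fixed-length splitting described above can be sketched as follows. This is an illustrative sketch only: whitespace tokenization stands in for the actual tokenizers (e.g., BERT's WordPiece) used by these models, and `split_into_passages` is a hypothetical helper, not code from any of the cited systems.

```python
# Sketch: segmenting a corpus document into fixed-length retrieval units,
# e.g. 100-token passages as used by DPR-style indexes.
def split_into_passages(document: str, passage_len: int = 100) -> list[str]:
    tokens = document.split()  # assumption: whitespace tokens approximate words
    return [
        " ".join(tokens[i:i + passage_len])
        for i in range(0, len(tokens), passage_len)
    ]

doc = " ".join(f"tok{i}" for i in range(250))
passages = split_into_passages(doc)
print(len(passages))             # 3 passages: 100 + 100 + 50 tokens
print(len(passages[0].split()))  # 100
```

Real pipelines additionally use sliding windows or sentence boundaries; the choice of unit length directly trades off index size against retrieval granularity, as Table 2 shows.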
A comprehensive summary is provided in Table 2. In general, the index size of the corpus is quite large, and storing the index is one of the main challenges for ODQA efficiency.
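The 65GB figure is consistent with a back-of-the-envelope check, assuming DPR-style 768-dimensional float32 passage embeddings (the embedding dimension is an assumption here, not stated in this appendix):

```python
# Rough index-size estimate for 21M dense passage vectors
# (assumption: 768-dim float32 embeddings, no index overhead).
num_passages = 21_000_000
dim = 768
bytes_per_float = 4

index_bytes = num_passages * dim * bytes_per_float
print(index_bytes / 1e9)  # ~64.5 GB, matching the reported 65GB index
```

The same arithmetic explains the other index sizes in Table 2: more (or longer) retrieval units, or higher-dimensional encodings, scale the index linearly.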
Metrics. There are various metrics to depict efficiency in different dimensions.
In terms of latency, training time (Mao et al., 2021), indexing time (Mao et al., 2021), query time (Yamada et al., 2021) and reasoning time are normally considered. The metrics $Q/s$ (questions per second) (Seo et al., 2019) and $FLOPs$ (floating
point operations) (Guan et al., 2022) are popular in measuring the total processing latency, where $Q / s$ is the number of questions one ODQA system can answer per second and $FLOPs$ is the number of floating point operations of the model.
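As an illustration, $Q/s$ can be measured by timing an end-to-end pipeline over a batch of questions. This is a minimal sketch; `answer` is a hypothetical stand-in for a real ODQA system, not an API from any cited work.

```python
# Sketch: measuring Q/s (questions answered per second) for an ODQA system.
import time

def measure_qps(answer, questions):
    start = time.perf_counter()
    for q in questions:
        answer(q)  # end-to-end: retrieval + reading/generation
    elapsed = time.perf_counter() - start
    return len(questions) / elapsed

# Toy pipeline stand-in; a real system would retrieve passages and read them.
qps = measure_qps(lambda q: q.lower(), ["Who wrote Hamlet?"] * 1000)
print(f"{qps:.1f} Q/s")
```

FLOPs, by contrast, are usually obtained analytically from the model architecture or via profiling tools rather than wall-clock timing, which makes them hardware-independent.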
In terms of memory, model parameter size, passage corpus size, index size, and training data size are important factors influencing memory cost (Yamada et al., 2021). We measure the memory consumption of ODQA models as the number of bytes occupied by the corresponding data (corpus, index, and model) after loading into memory.
In terms of answering quality, EM (Exact Match accuracy) (Chen et al., 2017), F1-score, MRR@k (Mean Reciprocal Rank) (Qu et al., 2021), Precision@k, Recall@k, and retrieval accuracy@k (Karpukhin et al., 2020) are normally used to measure the quality of ODQA models. Specifically, EM is the percentage of questions for which the predicted answer exactly matches any one of the reference answers after string normalization (Qu et al., 2021). MRR@k is the mean reciprocal of the rank at which the first relevant passage was retrieved (Qu et al., 2021).
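A minimal sketch of the EM computation follows, assuming the lower-casing, punctuation-stripping, and article-removal normalization commonly used in ODQA evaluation (the exact normalization procedure is not specified in this appendix):

```python
# Sketch of Exact Match (EM) with common answer-string normalization.
import re
import string

def normalize(s: str) -> str:
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)  # drop English articles
    return " ".join(s.split())             # collapse whitespace

def exact_match(prediction: str, references: list[str]) -> bool:
    return any(normalize(prediction) == normalize(r) for r in references)

def em_score(predictions, reference_lists) -> float:
    hits = sum(exact_match(p, refs)
               for p, refs in zip(predictions, reference_lists))
    return 100.0 * hits / len(predictions)

print(em_score(["The Eiffel Tower"], [["eiffel tower"]]))  # 100.0
```

Because EM credits a prediction if it matches any one of the reference answers, datasets with multiple aliases per answer (e.g., TriviaQA) yield higher EM than a single-reference comparison would.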
In this paper, we adopt metrics on latency, memory, and accuracy to evaluate ODQA models comprehensively. Specifically, we use $Q/s$ to measure processing speed, total memory overhead to evaluate memory cost, and the EM score to assess end-to-end answer prediction quality, as shown in Table 1.
# A For every submission:
A1. Did you describe the limitations of your work?
Left blank.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
Left blank.
□ A4. Have you used AI writing assistants when working on this paper?
Left blank.
# B Did you use or create scientific artifacts?
Left blank.
B1. Did you cite the creators of artifacts you used?
Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Left blank.
# C Did you run computational experiments?
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
Left blank.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank.
# D Did you use human annotators (e.g., crowdworkers) or research with human participants?
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
2023/A Survey for Efficient Open Domain Question Answering/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9f05912f08a08934cccee7ee3a70e446be5e86eccfb8c316bbb0444bf4432566
size 581039
2023/A Survey for Efficient Open Domain Question Answering/layout.json
ADDED
The diff for this file is too large to render. See raw diff
2023/A Survey of Deep Learning for Mathematical Reasoning/2b7d63a4-978e-4e60-8b54-4fb49d34c681_content_list.json
ADDED
The diff for this file is too large to render. See raw diff
2023/A Survey of Deep Learning for Mathematical Reasoning/2b7d63a4-978e-4e60-8b54-4fb49d34c681_model.json
ADDED
The diff for this file is too large to render. See raw diff
2023/A Survey of Deep Learning for Mathematical Reasoning/2b7d63a4-978e-4e60-8b54-4fb49d34c681_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cb353a8ac31a664ffae5c660e2cbf68dc82c94d886fca609b70f8408df5d6153
size 715275
2023/A Survey of Deep Learning for Mathematical Reasoning/full.md
ADDED
The diff for this file is too large to render. See raw diff
2023/A Survey of Deep Learning for Mathematical Reasoning/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:216f9d6a535433056b72402cc23ef1ac73a31919c972dd1bf2b0beb2734f3083
size 1253473
2023/A Survey of Deep Learning for Mathematical Reasoning/layout.json
ADDED
The diff for this file is too large to render. See raw diff
2023/A Survey on Asking Clarification Questions Datasets in Conversational Systems/8d770436-f7f9-4204-a41b-8e2e7dd040b2_content_list.json
ADDED
The diff for this file is too large to render. See raw diff
2023/A Survey on Asking Clarification Questions Datasets in Conversational Systems/8d770436-f7f9-4204-a41b-8e2e7dd040b2_model.json
ADDED
The diff for this file is too large to render. See raw diff
2023/A Survey on Asking Clarification Questions Datasets in Conversational Systems/8d770436-f7f9-4204-a41b-8e2e7dd040b2_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:12b34c53dcb2b2302f5b99058d643ecc7c4840c6382a2ebc2d51d5d7f4432337
size 953426
2023/A Survey on Asking Clarification Questions Datasets in Conversational Systems/full.md
ADDED
@@ -0,0 +1,421 @@
# A Survey on Asking Clarification Questions Datasets in Conversational Systems
|
|
$^{\ddagger}$ Zhejiang University, Hangzhou, China
{hossein.rahmani.22,xi-wang,yue.feng.20,emine.yilmaz,aldo.lipani}@ucl.ac.uk qiang.zhang.cs@zju.edu.cn
# Abstract
The ability to understand a user's underlying needs is critical for conversational systems, especially with limited input from users in a conversation. Thus, in such a domain, Asking Clarification Questions (ACQs) to reveal users' true intent from their queries or utterances arises as an essential task. However, a key limitation of existing ACQ studies is their incomparability, stemming from inconsistent use of data, distinct experimental setups, and different evaluation strategies. Therefore, in this paper, to assist the development of ACQ techniques, we comprehensively analyse the current ACQ research status, offer a detailed comparison of publicly available datasets, and discuss the applied evaluation metrics, together with benchmarks for multiple ACQ-related tasks. In particular, based on a thorough analysis of the ACQ task, we discuss a number of corresponding research directions for the investigation of ACQs as well as the development of conversational systems.
# 1 Introduction
Humans often resort to conversations and clarification questions to avoid misunderstandings when collaborating with others. Asking Clarification Questions (ACQs) is, therefore, a commonly used mechanism to boost efficiency in human-human as well as human-machine collaborative tasks (Shi et al., 2022; Zou et al., 2023; Shi et al., 2023; Feng et al., 2023). As an example of human-machine collaboration, conversational systems are developed not only to hold a natural conversation with people but also to answer questions on topics from various domains (e.g., news, movies, and music) in an accurate and efficient manner (Gao et al., 2018). To answer such questions effectively and efficiently, it is essential for many existing conversational systems to capture people's intents. Only then can conversational systems accurately reply to a series of questions from users (Anand et al., 2020; Zamani et al., 2022).
Nevertheless, one essential issue is that limited research exists on ACQs, and most systems were trained with inconsistent and limited data resources. Indeed, in the literature, many studies introduced ACQs to assist conversational systems applied to individual or mixed domains (e.g., movie (Li et al., 2017) or open domain (Aliannejadi et al., 2019)). There is also a lack of commonly agreed benchmark datasets for the development of ACQ systems with comparable result analysis. On the other hand, a growing number of studies in the literature (Aliannejadi et al., 2019; Zamani et al., 2020; Kumar and Black, 2020; Feng et al., 2023) released publicly available datasets while showing a common interest in the ACQ research direction. This contradiction calls for a comprehensive overview of the existing datasets as well as the current status of the ACQ research direction. By addressing this concern, future ACQ systems can be better designed, trained and tested with suitable features from properly selected datasets, following comprehensive guidance.
Therefore, in this paper, we offer an overview of the current status of ACQ research progress. In particular, we aggregate and compare the datasets that have been considered for evaluating recent ACQ techniques from various aspects, such as their dimension, resource, recency and semantic closeness. Afterwards, alongside the overall discussion of publicly available datasets, we shed light on model performance by running experiments with corresponding representative techniques on such datasets. Note that we also release our implementation code for these experiments<sup>1</sup>. Finally, we summarise the concluding remarks as well as follow-up suggestions for developing ACQ techniques.
Table 1: A statistical summary of ACQ datasets for both Conv. Search and Conv. QA. The highlighted colours indicate the distinct corpus size of datasets (best viewed in colour).
<table><tr><td>Dataset</td><td># Domains</td><td>Scale</td><td># Clar. Q</td><td>Link</td></tr><tr><td colspan="5">Conversational Search</td></tr><tr><td>ClariT (Feng et al., 2023)</td><td>-</td><td>108K</td><td>260K</td><td>github.com/sweetalyssum/clarit</td></tr><tr><td>Qulac (Aliannejadi et al., 2019)</td><td>198</td><td>10K</td><td>3K</td><td>github.com/aliannejadi/qulac</td></tr><tr><td>ClariQ (Aliannejadi et al., 2021)</td><td>300</td><td>2M</td><td>4K</td><td>github.com/aliannejadi/ClariQ</td></tr><tr><td>TavakoliCQ (Tavakoli et al., 2021)</td><td>3</td><td>170K</td><td>7K</td><td>github.com/Leila-Ta/Clarification_CQA</td></tr><tr><td>MIMICS (Zamani et al., 2020)</td><td>-</td><td>462K</td><td>586K</td><td>github.com/microsoft/MIMICS</td></tr><tr><td>MANtIS (Penha et al., 2019)</td><td>14</td><td>80K</td><td>435</td><td>guzpenha.github.io/MANtIS/</td></tr><tr><td>ClariQ-FKw (Sekulić et al., 2021)</td><td>230</td><td>2K</td><td>2K</td><td>github.com/isekulic/CQ-generation</td></tr><tr><td>MSDialog (Qu et al., 2018)</td><td>12</td><td>35K</td><td>877</td><td>ciir.cs.umass.edu/downloads/msdialog</td></tr><tr><td>MIMICS-Duo (Tavakoli et al., 2022)</td><td>-</td><td>1K</td><td>1K</td><td>github.com/Leila-Ta/MIMICS-Duo</td></tr><tr><td colspan="5">Conversational Question Answering</td></tr><tr><td>ClarQ (Kumar and Black, 2020)</td><td>173</td><td>2M</td><td>2M</td><td>github.com/vaibhav4595/ClarQ</td></tr><tr><td>RaoCQ (Rao and Daumé III, 2018)</td><td>3</td><td>77K</td><td>770K</td><td>github.com/raosudha89/ranking_clarificationquestions</td></tr><tr><td>AmazonCQ (Rao and Daumé III, 2019)</td><td>2</td><td>24K</td><td>179K</td><td>github.com/raosudha89/clarification_question_generationPytorch</td></tr><tr><td>CLAQUA (Xu et al., 2019)</td><td>110</td><td>40K</td><td>40K</td><td>github.com/msra-nlc/MSParS_V2.0</td></tr></table>
Our Contributions. The main contributions of this work can be summarized as follows:
- We systematically search through 77 relevant papers in the ACQ domain from top-tier venues, selected for their recency, reliability and frequency of use.
- We compare the ACQ datasets in terms of their contributions to the development of ACQ techniques and experimentally show the performance of representative techniques.
- We introduce a visualised semantic encoding strategy to explain dataset suitability for their corresponding experiments.
- We analytically outline promising open research directions in the construction of future datasets for ACQs, which sheds light on the development of future research.
# 2 Conversational Systems
A conversational system functions to assist users in addressing various tasks or to act as a partner in casual conversations (Gao et al., 2018). Conversational systems can be classified into four main categories: (1) Conversational Search (Conv. Search); (2) Conversational Question Answering (Conv. QA); (3) Task-oriented Dialogue Systems (TDSs); and (4) Social Chatbots (Gao et al., 2019; Anand et al., 2020). The first two types, Conv. Search and Conv. QA, extend classic search and QA systems to a conversational setting (Anand et al., 2020; Zaib et al., 2021). TDSs and social chatbots are more recent research topics, introduced to build systems that assist users with a specific task or offer emotional connection and companionship via conversations (Gao et al., 2019). However, due to the limited resources that investigate the challenge of asking clarification questions when developing these two systems, this study focuses on Conv. Search and Conv. QA systems.
Moreover, ACQ research in conversational systems focuses on three main tasks, namely, Clarification Need Prediction $(T_{1})$, Asking Clarification Questions $(T_{2})$, and User Satisfaction with CQs $(T_{3})$ (Zamani et al., 2020; Tavakoli et al., 2022; Aliannejadi et al., 2019). First, $T_{1}$ evaluates the necessity of asking clarification questions when users provide their initial queries or requests. Next, given a positive decision, we turn to the action of providing suitable clarification questions (i.e., $T_{2}$) by following one of two main routines: generation, or selection from a pool of candidate clarification questions. Afterwards, the third task, $T_{3}$, is to evaluate the effectiveness of the corresponding clarification questions while considering user satisfaction levels from multiple aspects (e.g., the usefulness or relevance of clarification questions). An effective ACQ-enabled conversational system requires a joint effort to address the three tasks satisfactorily to enhance users' conversational experience. Therefore, in this survey, we explore the relevant ACQ datasets and discuss their suitability for addressing the above three tasks.
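The three tasks can be viewed as a pipeline. The sketch below illustrates that flow only; every function and heuristic here is an illustrative placeholder (real systems would use trained classifiers, learned rankers, and user-feedback models), not a method from any surveyed paper.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Turn:
    query: str
    clarification: Optional[str] = None

def needs_clarification(query: str) -> bool:
    # T1: placeholder heuristic; a real system would use a trained
    # classifier (e.g., fine-tuned BERT) over the query or conversation.
    return len(query.split()) < 3

def select_clarification(query: str, pool: List[str]) -> str:
    # T2 (selection routine): pick the candidate with the largest word
    # overlap with the query; real systems use learned rankers instead.
    q = set(query.lower().split())
    return max(pool, key=lambda c: len(q & set(c.lower().split())))

def satisfaction(feedback: str) -> float:
    # T3: placeholder satisfaction score; in practice estimated from
    # user feedback labels (e.g., usefulness/relevance, as in MIMICS-Duo).
    return 1.0 if feedback else 0.0

pool = ["Do you mean the movie or the book?",
        "Which year are you interested in?"]
turn = Turn(query="dune movie")
if needs_clarification(turn.query):                              # T1
    turn.clarification = select_clarification(turn.query, pool)  # T2
score = satisfaction("the movie, thanks")                        # T3
```

The point of the sketch is the division of labour: $T_{1}$ gates whether a clarification turn happens at all, $T_{2}$ produces the question, and $T_{3}$ closes the loop with user feedback.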
# 3 ACQ Datasets
In this section, we describe the main characteristics of the existing and relevant ACQ datasets. Note that we include some additional information, such as the corresponding institution, in Appendix A. A careful dataset selection and aggregation strategy<sup>2</sup> has been applied in this survey to ensure their recency and accessibility.

Table 2: A summary of collection details of ACQ datasets. $-$ means that the information is not available. 'SE' is StackExchange, 'MC' refers to Microsoft Community, and 'KB' is Knowledge Base. The detailed information of each dataset, such as the exact source domains, can be accessed in Appendix A.

<table><tr><td>Dataset</td><td>Published</td><td>Built</td><td>Resource</td><td>Clar. Source</td></tr><tr><td colspan="5">Conversational Search</td></tr><tr><td>ClariT (Feng et al., 2023)</td><td>2023</td><td>Aug. 2018</td><td>General queries from task-oriented dialogues</td><td>Crowdsourcing</td></tr><tr><td>Qulac (Aliannejadi et al., 2019)</td><td>2019</td><td>2009-2012</td><td>198 topics from TREC WEB Data</td><td>Crowdsourcing</td></tr><tr><td>ClariQ (Aliannejadi et al., 2021)</td><td>2021</td><td>2009-2014</td><td>300 topics from TREC WEB Data</td><td>Crowdsourcing</td></tr><tr><td>TavakoliCQ (Tavakoli et al., 2021)</td><td>2021</td><td>Jul. 2009 to Sep. 2019</td><td>3 domains of SE</td><td>Post and Comment</td></tr><tr><td>MIMICS (Zamani et al., 2020)</td><td>2020</td><td>Sep. 2019</td><td>General queries from Bing users</td><td>Machine Generated</td></tr><tr><td>MANtIS (Penha et al., 2019)</td><td>2019</td><td>Mar. 2019</td><td>14 domains of SE</td><td>Post and Comment</td></tr><tr><td>ClariQ-FKw (Sekulić et al., 2021)</td><td>2021</td><td>2009-2014</td><td>TREC WEB Data</td><td>Crowdsourcing</td></tr><tr><td>MSDialog (Qu et al., 2018)</td><td>2018</td><td>Nov. 2005 to Oct. 2017</td><td>4 domains of MC</td><td>Crowdsourcing</td></tr><tr><td>MIMICS-Duo (Tavakoli et al., 2022)</td><td>2022</td><td>Jan. 2022 to Feb. 2022</td><td>General queries from Bing users</td><td>HIT on MTurk, Qualtrics</td></tr><tr><td colspan="5">Conversational Question Answering</td></tr><tr><td>ClarQ (Kumar and Black, 2020)</td><td>2020</td><td>-</td><td>173 domains of SE</td><td>Post and Comment</td></tr><tr><td>RaoCQ (Rao and Daumé III, 2018)</td><td>2018</td><td>-</td><td>3 domains of SE</td><td>Post and Comment</td></tr><tr><td>AmazonCQ (Rao and Daumé III, 2019)</td><td>2019</td><td>-</td><td>A category of Amazon dataset</td><td>Review and Comment</td></tr><tr><td>CLAQUA (Xu et al., 2019)</td><td>2019</td><td>-</td><td>From an open-domain KB</td><td>Crowdsourcing</td></tr></table>
To offer an overview of dataset dimensions, in Table 1, we describe the ACQ datasets in statistics, together with links to access the datasets. The statistical information includes the number of considered domains from the corresponding resource, the size of the whole dataset, and the number of clarification questions in each dataset. These datasets can be grouped into three sets (large, medium and small, highlighted in pink, cyan and yellow) according to their scale: 1) large datasets with more than 10K clarification questions (i.e., ClariT, MIMICS, ClarQ, RaoCQ, AmazonCQ, CLAQUA). Note that all the Conv. QA datasets are classified as large datasets, since it is more convenient to prepare clarification questions within a QA pair than in a dialogue; 2) medium datasets with between 1K and 10K clarification questions (i.e., Qulac, ClariQ, TavakoliCQ, ClariQ-FKw, MIMICS-Duo); 3) small datasets with fewer than 1K clarification questions, which only include MANtIS and MSDialog. In what follows, we compare datasets for developing conversational search and QA systems according to their key characteristics.
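The three-way grouping can be reproduced directly from the clarification-question counts in Table 1; a minimal sketch (the counts below are read off the table and rounded):

```python
# Approximate clarification-question counts per dataset, from Table 1.
cq_counts = {
    "ClariT": 260_000, "MIMICS": 586_000, "ClarQ": 2_000_000,
    "RaoCQ": 770_000, "AmazonCQ": 179_000, "CLAQUA": 40_000,
    "Qulac": 3_000, "ClariQ": 4_000, "TavakoliCQ": 7_000,
    "ClariQ-FKw": 2_000, "MIMICS-Duo": 1_000,
    "MANtIS": 435, "MSDialog": 877,
}

def scale(n_cq: int) -> str:
    """Bucket a dataset by its number of clarification questions."""
    if n_cq > 10_000:
        return "large"
    if n_cq >= 1_000:
        return "medium"
    return "small"

groups = {name: scale(n) for name, n in cq_counts.items()}
```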
# 3.1 Conversational Search
Conversational Search (Conv. Search) refers to information retrieval systems that permit a mixed-initiative interaction with one or more users using a conversational interface (Anand et al., 2020). To develop effective Conv. Search systems, many previous studies released a number of datasets and made them publicly available. Here, we briefly describe such datasets:
- ClariT (Feng et al., 2023): The first clarification question dataset for task-oriented information seeking, which asks questions to clarify user requests and user profiles based on task knowledge.
- Qulac (Aliannejadi et al., 2019): The first clarification question dataset in an open-domain information-seeking conversational search setting, released with a joint offline evaluation framework.
- ClariQ (Aliannejadi et al., 2020, 2021): An extension of Qulac with additional crowdsourced topics, questions and answers in the training corpus, as well as synthetic multi-turn conversations.
- TavakoliCQ (Tavakoli et al., 2021; Tavakoli, 2020): Clarification questions collected from the StackExchange QA community, drawn from the three resource categories with the largest number of posts.
- MIMICS (Zamani et al., 2020): This dataset comprises three sub-datasets, all sourced from the clarification pane of Microsoft Bing. They differ in whether each query is associated with a single clarification pane (MIMICS-Click) or multiple panes (MIMICS-ClickExplore), or in focusing on real search queries and their manually annotated query-clarification pairs (MIMICS-Manual).
- MANtIS (Penha et al., 2019): A multi-domain (14 domains) conversational information-seeking dataset, sourced from StackExchange like TavakoliCQ, with user intent annotations on the included utterances.
- ClariQ-FKw (Sekulić et al., 2021): This dataset introduces facets (the keywords that disambiguate a query) to ClariQ, resulting in an updated version with a set of query-facet-clarification question triples.
- MSDialog (Qu et al., 2018): This dataset was constructed from dialogues on Microsoft Community<sup>3</sup> – a forum that provides technical support for Microsoft products – and also details user intent types at the utterance level.
- MIMICS-Duo (Tavakoli et al., 2022): A dataset built upon the queries from MIMICS-ClickExplore that enables both online and offline evaluation of clarification selection and generation approaches.
# 3.2 Conversational Question Answering
The idea behind Conversational Question Answering (Conv. QA) is to let users ask the system questions about a provided passage through a conversational interface (Zaib et al., 2021). Conv. QA has recently received growing attention in the research community, with multiple large-scale datasets made publicly available. A brief discussion of such datasets is as follows:
- ClarQ (Kumar and Black, 2020): This dataset is sourced from the post-question pairs in StackExchange and developed with self-supervised approaches within a bootstrapping framework.
- RaoCQ (Rao and Daumé III, 2018): Another StackExchange-based dataset with a large volume of post-question-answer triples from three selected domains.
- AmazonCQ (Rao and Daumé III, 2019): An Amazon platform-based clarification QA dataset with questions targeting the missing information of products and answers provided by sellers or other users. In addition, a context is offered that contains both the product title and description.
- CLAQUA (Xu et al., 2019): A clarification-focused dataset that supports the supervised evaluation of text understanding and generation modules, along with a knowledge-based QA system (KBQA).

Table 3: Summary of tasks and evaluation methods on ACQ datasets. The tasks can be generation and ranking, indicated by 'G' and 'R', respectively.

<table><tr><td rowspan="2">Dataset</td><td colspan="3">Task</td><td rowspan="2">Eval. Method</td></tr><tr><td>$T_1$</td><td>$T_2$</td><td>$T_3$</td></tr><tr><td colspan="5">Conv. Search</td></tr><tr><td>ClariT (2023)</td><td>✓</td><td>G</td><td>-</td><td>Offline</td></tr><tr><td>Qulac (2019)</td><td>-</td><td>R</td><td>-</td><td>Offline</td></tr><tr><td>ClariQ (2021)</td><td>✓</td><td>R</td><td>-</td><td>Offline</td></tr><tr><td>TavakoliCQ (2021)</td><td>-</td><td>G</td><td>-</td><td>Offline</td></tr><tr><td>MIMICS (2020)</td><td>✓</td><td>R, G</td><td>✓</td><td>Offline/Online</td></tr><tr><td>MANtIS (2019)</td><td>-</td><td>R, G</td><td>-</td><td>Offline</td></tr><tr><td>ClariQ-FKw (2021)</td><td>-</td><td>G</td><td>-</td><td>Offline</td></tr><tr><td>MSDialog (2018)</td><td>-</td><td>R, G</td><td>-</td><td>Offline</td></tr><tr><td>MIMICS-Duo (2022)</td><td>✓</td><td>R, G</td><td>✓</td><td>Offline/Online</td></tr><tr><td colspan="5">Conv. QA</td></tr><tr><td>ClarQ (2020)</td><td>-</td><td>R</td><td>-</td><td>Offline</td></tr><tr><td>RaoCQ (2018)</td><td>-</td><td>R</td><td>-</td><td>Offline</td></tr><tr><td>AmazonCQ (2019)</td><td>-</td><td>G</td><td>-</td><td>Offline</td></tr><tr><td>CLAQUA (2019)</td><td>✓</td><td>G</td><td>-</td><td>Offline</td></tr></table>
# 3.3 Datasets Analysis
As discussed in Section 1, a major concern in developing techniques for asking clarification questions is using suitable datasets to train, validate and test the corresponding approach. In particular, it is essential to be aware of when, how and where a dataset was collected. Such information offers a comprehensive description of datasets across various characteristics, such as their recency and reliability. Therefore, in Table 2, we describe the collection details of each ACQ dataset. In particular, we include the time when the datasets were built as well as the year the corresponding papers were published, to indicate the recency of the datasets. In addition, we summarise the source of the data collection, which tells where the datasets came from. Next, we aggregate the main strategies for preparing the clarification questions. First, due to our data selection strategy, most of the datasets are based on relatively recent information. However, we still observe that some datasets rely on data collected years ago. For example, the Qulac, ClariQ and ClariQ-FKw datasets consistently use the TREC WEB data, which was collected between 2009 and 2014. The most recent dataset is MIMICS-Duo, which was built in 2022, and ClariT is the most recently published dataset, in 2023. In particular, all the Conv. QA datasets are limited, with no time information on when their data was collected, which makes them incomparable on this measure. On the other hand, regarding how and where the datasets were collected, the TREC WEB data, StackExchange and Bing are the commonly considered resources for preparing clarification questions in a dataset. The search- and question-answering-oriented nature of these platforms is the leading cause of this finding. Afterwards, the crowdsourcing strategy is commonly applied to generate qualified clarification questions. Note that the posts and comments of StackExchange are also widely used to provide clarification questions. From the provided information, we conclude that the datasets have been collected with varied strategies, over different periods, and using inconsistent resources. However, it is difficult to tell how exactly a dataset differs from others and how to properly select a set of datasets to show the performance of a newly introduced model. Therefore, in this survey, we introduce a visualisation-based approach to assist the selection of datasets for an improved experimental setup.

Figure 1: tSNE on ACQ Datasets. (a) tSNE on Conv. Search Datasets; (b) tSNE on Conv. QA Datasets.
In Figures 1a and 1b, we use t-distributed Stochastic Neighbor Embedding (t-SNE) to visualise the semantic representations (semantic embeddings) of clarification questions in the Conv. Search and Conv. QA datasets. As can be seen from Figure 1a, the Qulac and ClariQ datasets, as well as the MIMICS and MIMICS-Duo datasets, highly overlap with each other. This is expected, as ClariQ and MIMICS-Duo are built on top of Qulac and MIMICS, respectively. It indicates that achieving high-quality performance of a proposed clarification-asking model on both Qulac and ClariQ (or MIMICS and MIMICS-Duo) is not satisfactory, as they include clarification questions with close semantic meanings. Figure 1a also shows that the Conv. Search datasets form five distinct clusters that can be used to evaluate clarification-asking models. For example, a model's generalisability can be evaluated on the ClariT, Qulac, TavakoliCQ, MIMICS and MSDialog datasets, which have few overlapping instances between them. More importantly, comparing Figures 1a and 1b reveals that clarification questions in Conv. Search datasets are very focused, while the clarification questions in Conv. QA datasets are more widely distributed. This indicates the high similarities among the Conv. Search-based data and the resulting necessity of properly selecting among the publicly available datasets.
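A visualisation of this kind can be sketched in a few lines. In this sketch, TF-IDF vectors stand in for the semantic embeddings used in the survey (a pretrained sentence encoder could be substituted), and the questions and dataset labels "A"/"B" are purely illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE

# Toy clarification questions labelled by a hypothetical source dataset.
questions = [
    ("A", "Do you mean the movie or the book?"),
    ("A", "Do you mean the song or the album?"),
    ("A", "Which movie do you mean?"),
    ("B", "What operating system are you using?"),
    ("B", "Which version of the product is installed?"),
    ("B", "What error message do you see?"),
]
labels, texts = zip(*questions)

# Embed the questions, then project to 2D for a scatter plot.
vectors = TfidfVectorizer().fit_transform(texts).toarray()
coords = TSNE(n_components=2, perplexity=2.0, init="random",
              random_state=0).fit_transform(vectors)

for (x, y), label in zip(coords, labels):
    print(f"{label}: ({x:.2f}, {y:.2f})")
```

Plotting the 2D coordinates coloured by dataset label reproduces the kind of overlap analysis shown in Figure 1; note that t-SNE's `perplexity` must be smaller than the number of samples.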
# 4 Evaluation Metrics
In this section, we describe the evaluation metrics applicable to the included datasets when evaluating ACQ approaches. In particular, we group such metrics according to whether they are automatic or human-based.
# 4.1 Automatic Evaluation
With a ready dataset, ACQ-based conversational systems can be evaluated using a variety of automatic evaluation metrics. The widely used metrics can be categorised into two groups based on the strategy of giving clarification questions, i.e., ranking or generation. For the ranking route, the commonly used evaluation metrics include (1) MAP (Jarvelin, 2000), (2) Precision (Jarvelin and Kekäläinen, 2017), (3) Recall (Jarvelin, 2000), (4) F1-score (Beitzel, 2006), (5) Normalized Discounted Cumulative Gain (nDCG) (Wang et al., 2013), (6) Mean Reciprocal Rank (MRR) (Voorhees et al., 1999; Radev et al., 2002), and (7) Mean Square Error (MSE) (Beitzel, 2006). The main idea behind using these metrics is to evaluate the relevance of the clarification questions ranked highest by the system for revealing the corresponding user intent. On the other hand, common metrics for the generation route include (8) BLEU (Papineni et al., 2002), (9) METEOR (Banerjee and Lavie, 2005), and (10) ROUGE (Lin, 2004). BLEU and ROUGE were originally developed to evaluate machine translation and text summarisation results, respectively. Recently, they have also been applied as evaluation metrics for the ACQ task (Sekulić et al., 2021; Zhang and Zhu, 2021; Shao et al., 2022). Both scores are based on the n-gram overlap between generated and reference questions. The difference between BLEU and ROUGE corresponds to that between precision and recall: BLEU calculates the ratio of predicted terms that appear in the reference question, while ROUGE indicates the ratio of terms from the reference that are included in the predicted text. ROUGE-L, a variant of ROUGE that focuses on the longest common subsequence, has also recently been used in evaluating ACQ models. However, these metrics are limited in that they ignore human judgements; METEOR was therefore introduced to address this concern by considering stems, WordNet synonyms, and paraphrases of n-grams.
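The precision/recall duality between BLEU and ROUGE can be made concrete with unigram versions of the two scores, plus an LCS length for ROUGE-L. This is a simplified sketch (no BLEU brevity penalty, no clipped counts, no ROUGE F-measure), with illustrative example questions:

```python
def ngram_precision(predicted: list, reference: list) -> float:
    # BLEU-style: fraction of predicted tokens that occur in the reference.
    ref = set(reference)
    return sum(tok in ref for tok in predicted) / len(predicted)

def ngram_recall(predicted: list, reference: list) -> float:
    # ROUGE-style: fraction of reference tokens that occur in the prediction.
    pred = set(predicted)
    return sum(tok in pred for tok in reference) / len(reference)

def lcs_len(a: list, b: list) -> int:
    # Dynamic-programming longest common subsequence (basis of ROUGE-L).
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

pred = "what movie do you mean".split()
ref = "which movie do you mean".split()
p = ngram_precision(pred, ref)  # 4 of 5 predicted tokens are in the reference
r = ngram_recall(pred, ref)     # 4 of 5 reference tokens are in the prediction
l = lcs_len(pred, ref)          # "movie do you mean" -> 4
```

Here precision and recall coincide because both questions have the same length; they diverge once the generated question is shorter or longer than the reference.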
The main advantage of automatic evaluation metrics is that they are inexpensive and easy to apply. However, they are not always aligned with human judgements. Therefore, recent studies also include human evaluation alongside automatic evaluation to show how the generated or selected CQs impact the performance of their conversational systems.
# 4.2 Human Evaluation
Table 4: Clarification need prediction performance of the best representative methods from traditional ML and language models (RandomForest and BERT) on applicable datasets. ↑ or ↓ is added to BERT to indicate a consistent performance change on all evaluation metrics. (The results of all methods are provided in Table 7 in Appendix B.1.)

<table><tr><td>Model</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td></td><td colspan="3">ClariQ</td></tr><tr><td>RandomForest</td><td>0.3540</td><td>0.3806</td><td>0.3717</td></tr><tr><td>BERT</td><td>0.3804</td><td>0.3249</td><td>0.3344</td></tr><tr><td></td><td colspan="3">CLAQUA</td></tr><tr><td>RandomForest</td><td>0.2860</td><td>0.5000</td><td>0.3638</td></tr><tr><td>BERT ↑</td><td>0.6349</td><td>0.625</td><td>0.6255</td></tr><tr><td>Model</td><td>MAE</td><td>MSE</td><td>R²</td></tr><tr><td></td><td colspan="3">MIMICS</td></tr><tr><td>RandomForest</td><td>2.4404</td><td>7.969</td><td>-0.0012</td></tr><tr><td>BERT ↓</td><td>2.4562</td><td>8.1277</td><td>-0.0211</td></tr><tr><td></td><td colspan="3">MIMICS-Duo</td></tr><tr><td>RandomForest</td><td>2.8502</td><td>11.206</td><td>-0.0079</td></tr><tr><td>BERT ↓</td><td>2.8801</td><td>11.2268</td><td>-0.0098</td></tr></table>

In addition to automatic evaluation metrics, human evaluation provides a more accurate and qualitative assessment of generated or ranked CQs. An essential reason is that automatic evaluation metrics mainly consider n-gram overlaps or the ranking of CQs instead of their semantic meaning or other quality-wise aspects. Thus, human annotations are increasingly used to evaluate clarifying questions. The human annotation process consists of scoring generated or selected CQs based on several quality dimensions. Compared to automatic evaluation, human evaluation is naturally more expensive due to the manual annotation effort, but it provides a more accurate picture of the quality of the output. The main aspects evaluated using human annotations include (1) relevance (Aliannejadi et al., 2020), which shows whether a CQ is relevant to the user's information need; (2) usefulness (Rosset et al., 2020), which relates to the adequacy and informativeness of a question; (3) naturalness (Li et al., 2019), which evaluates whether a question is natural, fluent, and likely generated by a human; and (4) clarification (Aliannejadi et al., 2021), which shows how the user's feedback influences the model's next CQ. There are also humanness (See et al., 2019), engagingness (Li et al., 2019), interestingness (Li et al., 2019) and knowledgeability (Li et al., 2019), which evaluate a CQ by considering the whole conversation instead of an individual query-question pair. However, the ACQ domain lacks a consistent or agreed terminology for the human evaluation metrics used. In addition, some of them can have overlapping focus when evaluating clarification questions. For example, usefulness can also be evaluated based on the knowledgeability of the corresponding clarification question.
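Aggregating such human annotations typically means averaging per-aspect scores and checking inter-annotator agreement. A minimal sketch with two hypothetical annotators rating one aspect (e.g., naturalness) of five clarification questions on a 1-5 scale; the ratings and the from-scratch Cohen's kappa are illustrative, not from any surveyed dataset:

```python
from collections import Counter

def cohen_kappa(r1: list, r2: list) -> float:
    """Agreement between two annotators, corrected for chance."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2          # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 1-5 ratings of five CQs on one quality dimension.
ann1 = [5, 4, 4, 2, 1]
ann2 = [5, 4, 3, 2, 1]
kappa = cohen_kappa(ann1, ann2)
mean_score = sum(ann1 + ann2) / (2 * len(ann1))
```

A kappa near 1 indicates the aspect is being interpreted consistently; low kappa is one symptom of the inconsistent terminology noted above.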
# 5 Model Performance on ACQ
In this section, to offer a complete view of the current progress of the ACQ task, we discuss the main observations of recent ACQ techniques when running on various ACQ datasets. Moreover, for each of the ACQ-related tasks, i.e., $T_{1}$, $T_{2}$ and $T_{3}$, we show the performance of several commonly used baselines on the applicable datasets to offer additional concluding remarks.
First, according to our exploration of experimental results of recent ACQ techniques, we observe three main limitations of their inconsistent experimental setups, used baselines and model generalisability. Indeed, many research studies have inconsistent uses of datasets as well as incomparable results with distinct experimental setups. For example, Krasakis et al. (2020) and Bi et al. (2021) both used the Qulac dataset. In (Krasakis et al., 2020), they randomly kept 40 topics for testing their performance of a heuristic ranker. However, instead of following (Krasakis et al., 2020), Bi et al. (2021) used a few-turn-based setup while leveraging the Qulac dataset for asking clarification questions. Next, another common issue is the use of different baselines to show the leading performance of newly introduced techniques. For example, the study in (Aliannejadi et al., 2019) primarily employed ranking-based models, such as RM3, LambdaMART, and RankNet, to evaluate the performance of their question retrieval model. In contrast, the study in (Aliannejadi et al., 2021) utilized language models like RoBERTa and ELECTRA to evaluate the performance of their question relevance model. More importantly, many techniques were introduced while tested on a single dataset to show their top performance (e.g., (Krasakis et al., 2020; Sekulic et al., 2022; Zhao et al., 2022)), which lead to a significant generalisability concern. This also indicates the necessity of developing a benchmark while evaluating the ACQ techniques and identifying the exact state-of-the-art. Next, to acquire an overview of model performance while running experiments on the included datasets, we present the experimental results with representative approaches on the three ACQs subtasks, i.e., $T_{1}$ , $T_{2}$ and $T_{3}$ that are discussed in Section 2. The details of our experiments can be found in Appendix B. 
Table 4 shows the results of two top-performing models (i.e., BERT and RandomForest) for the clarification need prediction task $(T_{1})$ from traditional ML and language models. A key observation is that the prediction of clarification need should be selectively made in a classification or regression setup. In particular, BERT, a language
|
| 133 |
+
|
| 134 |
+
Table 5: Question relevance ranking performance evaluation on representative approaches. ‘P’ and ‘R’ refers to Precision and Recall. ↑ or ↓ is added to Doc2Query + BM25 to indicate a consistent performance change to BM25 on all evaluation metrics.
|
| 135 |
+
|
| 136 |
+
<table><tr><td>Model</td><td>MAP</td><td>P@10</td><td>R@10</td><td>NDCG</td></tr><tr><td></td><td colspan="4">Qulac</td></tr><tr><td>BM25</td><td>0.6306</td><td>0.9196</td><td>0.1864</td><td>0.9043</td></tr><tr><td>Doc2Query + BM25</td><td>0.6289</td><td>0.9196</td><td>0.1860</td><td>0.9069</td></tr><tr><td></td><td colspan="4">ClariQ</td></tr><tr><td>BM25</td><td>0.6360</td><td>0.7500</td><td>0.5742</td><td>0.7211</td></tr><tr><td>Doc2Query + BM25 ↑</td><td>0.6705</td><td>0.7899</td><td>0.6006</td><td>0.7501</td></tr><tr><td></td><td colspan="4">TavakoliCQ</td></tr><tr><td>BM25</td><td>0.3340</td><td>0.0463</td><td>0.4636</td><td>0.3743</td></tr><tr><td>Doc2Query + BM25 ↑</td><td>0.3781</td><td>0.0540</td><td>0.5405</td><td>0.4260</td></tr><tr><td></td><td colspan="4">MANtIS</td></tr><tr><td>BM25</td><td>0.6502</td><td>0.0679</td><td>0.6795</td><td>0.6582</td></tr><tr><td>Doc2Query + BM25 ↑</td><td>0.7634</td><td>0.0830</td><td>0.8301</td><td>0.7802</td></tr><tr><td></td><td colspan="4">ClariQ-FKw</td></tr><tr><td>BM25</td><td>0.7127</td><td>0.5880</td><td>0.7181</td><td>0.7910</td></tr><tr><td>Doc2Query + BM25</td><td>0.7073</td><td>0.5940</td><td>0.7244</td><td>0.7874</td></tr><tr><td></td><td colspan="4">MSDialog</td></tr><tr><td>BM25</td><td>0.8595</td><td>0.0929</td><td>0.9293</td><td>0.8781</td></tr><tr><td>Doc2Query + BM25 ↓</td><td>0.8430</td><td>0.0908</td><td>0.9087</td><td>0.8624</td></tr><tr><td></td><td colspan="4">ClarQ</td></tr><tr><td>BM25</td><td>0.2011</td><td>0.0259</td><td>0.2596</td><td>0.2200</td></tr><tr><td>Doc2Query + BM25 ↓</td><td>0.1977</td><td>0.0263</td><td>0.2630</td><td>0.2168</td></tr><tr><td></td><td colspan="4">RaoCQ</td></tr><tr><td>BM25</td><td>0.1511</td><td>0.0236</td><td>0.2362</td><td>0.1797</td></tr><tr><td>Doc2Query + BM25</td><td>0.1509</td><td>0.0241</td><td>0.2415</td><td>0.1811</td></tr><tr><td></td><td 
colspan="4">CLAQUA</td></tr><tr><td>BM25</td><td>0.9600</td><td>0.0992</td><td>0.9920</td><td>0.9683</td></tr><tr><td>Doc2Query + BM25 ↓</td><td>0.9395</td><td>0.0990</td><td>0.9901</td><td>0.9523</td></tr></table>
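The metrics reported in Table 5 are standard binary-relevance ranking measures. As a sketch of what they compute (the function names, example ranking, and relevance judgements below are ours, not the paper's evaluation code):

```python
import math

def precision_at_k(ranked, relevant, k=10):
    """Fraction of the top-k ranked items that are relevant."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def recall_at_k(ranked, relevant, k=10):
    """Fraction of all relevant items retrieved in the top k."""
    return sum(1 for d in ranked[:k] if d in relevant) / len(relevant)

def ndcg_at_k(ranked, relevant, k=10):
    """Binary-relevance NDCG: DCG of this ranking over the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, d in enumerate(ranked[:k]) if d in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0

# invented example: five ranked clarification questions, three relevant
ranked = ["q3", "q1", "q7", "q2", "q9"]
relevant = {"q1", "q2", "q4"}
print(precision_at_k(ranked, relevant, k=5))  # 0.4
print(recall_at_k(ranked, relevant, k=5))
print(ndcg_at_k(ranked, relevant, k=5))
```

A perfect ranking of all relevant items yields NDCG of 1.0, which is why the near-ceiling CLAQUA scores in Table 5 leave little room for Doc2Query to help.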
model that effectively classifies the clarification need on the ClariQ and CLAQUA datasets does not consistently outperform a classic approach, RandomForest, on the regression-style task (as per the results on MIMICS and MIMICS-Duo). The second sub-task, asking clarification questions, can be addressed via generation or ranking. However, clarification question generation requires a detailed context description and associated information, and existing approaches (e.g., Seq2Seq models) are either naive, taking only the query as input for CQ generation, or hard to generalise across datasets because they rely on dataset-specific information. Therefore, in this study, we compare the ranking performance of two commonly used baselines (i.e., BM25, and BM25 with queries expanded via the Doc2Query technique (Nogueira et al., 2019)) on every dataset. Table 5 presents the experimental results of these two approaches. Note that we exclude the experimental results on ClariT, MIMICS, MIMICS-Duo and AmazonCQ since they
are different from the other datasets in having queries with multiple relevant clarification questions. We observe that query expansion via Doc2Query is effective for most of the conversational search datasets, owing to their shorter queries. However, when applied to a Conv. QA dataset, query expansion does not promise improved performance. Another observation is that Qulac, ClariQ and ClariQ-FKw contain similar clarification questions (as per Figure 1a), and Doc2Query-based query expansion yields only limited improvement over BM25 on these datasets. In contrast, for two other corpora with distinct clarification questions, TavakoliCQ and MANtIS, a larger improvement margin can be observed. This also indicates the usefulness of our introduced visualisation-based strategy for dataset selection.
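The BM25 baseline above can be sketched in stdlib Python. The corpus, query, and the single hand-picked expansion term below are invented for illustration; in the actual Doc2Query setup the expansion terms come from a trained sequence-to-sequence model, not a hand-written list:

```python
import math
from collections import Counter

def bm25_rank(query_terms, docs, k1=1.2, b=0.75):
    """Rank tokenised documents against a query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))  # document frequency
    scores = []
    for doc in docs:
        tf = Counter(doc)
        s = 0.0
        for t in query_terms:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return sorted(range(N), key=lambda i: -scores[i])  # best first

# toy candidate clarification questions (invented)
questions = [
    "what aspect of dinosaurs are you interested in".split(),
    "do you want to know about python the language".split(),
    "are you asking about dinosaur fossils".split(),
]
# a short query plus one expansion term standing in for a predicted term
query = "dinosaurs".split() + ["fossils"]
print(bm25_rank(query, questions))
```

With the bare query only the first question matches; the expansion term promotes the third question, which mirrors how expansion helps short, underspecified queries.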
Next, for the third task, it is crucial to determine user satisfaction with clarification questions (CQs), as it provides insight into how well the CQs serve their intended purpose. However, obtaining the data needed to evaluate user satisfaction can be challenging: in the literature, only two datasets (i.e., MIMICS and MIMICS-Duo) include information for this task. Table 6 presents the corresponding results. Similar to the clarification need prediction task, we observe that a language model can help an ACQ technique effectively evaluate user satisfaction. However, due to the limited number of applicable datasets, this observation might not hold in a different context. This also reflects the current status of the ACQ research task when evaluating newly proposed ACQ techniques.
Overall, the presented experimental results show that model performance is inconsistent across datasets. In particular, we also highlight the limited number of datasets that are usable for evaluating ACQ techniques (e.g., for user satisfaction prediction).
# 6 Discussion and Future Challenges
Based on our exploration of the datasets and the experimental results on them, in this section we highlight concluding remarks on the current status of the ACQ research task, mainly from the dataset point of view. In addition, we discuss promising directions based on the main findings listed below.
Table 6: User satisfaction prediction performance of the best representative traditional ML method and language model (MultinomialNB and distilBERT) on the applicable datasets. $\uparrow$ is added to distilBERT to indicate a consistent performance improvement on all evaluation metrics. (The results of all methods are reported in Table 8 in Appendix B.3.)
<table><tr><td>Model</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td></td><td colspan="3">MIMICS</td></tr><tr><td>MultinomialNB</td><td>0.8255</td><td>0.7842</td><td>0.7758</td></tr><tr><td>distilBERT↑</td><td>0.9453</td><td>0.9397</td><td>0.9390</td></tr><tr><td></td><td colspan="3">MIMICS-Duo</td></tr><tr><td>MultinomialNB</td><td>0.4407</td><td>0.2787</td><td>0.2336</td></tr><tr><td>distilBERT</td><td>0.2766</td><td>0.2803</td><td>0.2777</td></tr></table>
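The MultinomialNB baseline in Table 6 refers to scikit-learn's implementation. As a sketch of what that model computes, here is a stdlib-only stand-in (the class name, example texts, and "good"/"bad" engagement labels are invented for illustration):

```python
import math
from collections import Counter

class TinyMultinomialNB:
    """Stdlib-only multinomial Naive Bayes with Laplace smoothing."""
    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: math.log(labels.count(c) / len(labels))
                      for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for text, y in zip(texts, labels):
            self.counts[y].update(text.split())
        self.vocab = {w for c in self.classes for w in self.counts[c]}
        return self

    def predict(self, text):
        best, best_lp = None, -math.inf
        for c in self.classes:
            total = sum(self.counts[c].values())
            lp = self.prior[c]
            for w in text.split():
                # add-one smoothing over the shared vocabulary
                lp += math.log((self.counts[c][w] + 1)
                               / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# invented engagement judgements for query-clarification pairs
texts = ["great clarifying question very helpful",
         "question not related to my query",
         "helpful options clear question",
         "irrelevant question bad options"]
labels = ["good", "bad", "good", "bad"]
clf = TinyMultinomialNB().fit(texts, labels)
print(clf.predict("helpful question"))  # prints "good"
```

Such a bag-of-words baseline ignores word order entirely, which is one reason a fine-tuned transformer like distilBERT can overtake it when enough labelled data is available, as on MIMICS.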
Findings. (1) Missing Standard Benchmark. Existing datasets are underdeveloped and do not constitute a standard benchmark for introducing novel ACQ techniques. As a consequence, it is challenging to compare proposed techniques effectively and accurately, and to capture the true state of the art. (2) Few Recorded User-System Interactions for Evaluation. In the literature, only the MIMICS dataset was collected using a clarification pane that simulates such interactions. This makes it challenging to evaluate models in a near-realistic scenario and to estimate how well they would perform in a real-world setting. (3) Inconsistent Dataset Collection and Formatting. Many of the datasets covered in this paper are presented in distinct structures and can only be used with a tailored setup, which is a problem when developing techniques and evaluating them on multiple datasets. (4) Inconsistent Model Evaluation. Many newly introduced models apply customised evaluation strategies even when using an identical dataset for a specific clarification task. This leads to difficulties in comparing model performance.
Future Research Directions. (1) Benchmark Development. When developing an ACQ technique, models should be compared against a commonly accepted benchmark so that sound conclusions can be drawn. According to the above findings, no such benchmark currently exists, making benchmark development the first key future direction. (2) ACQ Evaluation Framework. Aside from benchmark development, a proper evaluation protocol for newly introduced techniques is also essential. In particular, due to the human-machine interaction nature of ACQ techniques, evaluation metrics should take user satisfaction into account. The introduction of a corresponding evaluation framework would support the systematic evaluation of ACQ techniques. (3) Large-Scale Human-to-Machine Dataset. Existing datasets have many limitations that increase the difficulty of developing large-scale models for generating or ranking clarification questions, and collecting large amounts of such data remains challenging. In the near future, researchers should optimise the ACQ process on top of current retrieval technologies (see Trippas et al. (2018) for a description of collecting such datasets). (4) Multi-Modal ACQs Dataset. Recently, multi-modal conversational information seeking has received attention in conversational systems (Deldjoo et al., 2021). Amazon Alexa<sup>4</sup> organised the first conversational system challenge to incorporate a multi-modal (voice and vision) customer experience. However, existing datasets lack multi-modal information for ACQs.
# Limitations
In this section, we outline the key limitations of our research. The ACQ models we report are not as advanced as the current state of the art, but they serve as baselines for others to compare against when using similar datasets. Additionally, conducting more extensive experiments on larger datasets and more advanced models would require additional computational resources; in particular, generating clarification questions is a demanding task, as it requires powerful language models.
# Acknowledgments
This research is supported by the Engineering and Physical Sciences Research Council [EP/S021566/1] and the EPSRC Fellowship titled "Task Based Information Retrieval" [EP/P024289/1].
# References
Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. TensorFlow: a system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265-283.
Mohammad Aliannejadi, Julia Kiseleva, Aleksandr Chuklin, Jeff Dalton, and Mikhail Burtsev. 2020. ConvAI3: Generating clarifying questions for open-domain dialogue systems (ClariQ). arXiv preprint arXiv:2009.11352.
Mohammad Aliannejadi, Julia Kiseleva, Aleksandr Chuklin, Jeff Dalton, and Mikhail Burtsev. 2021. Building and evaluating open-domain dialogue corpora with clarifying questions. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4473-4484.
Mohammad Aliannejadi, Hamed Zamani, Fabio Crestani, and W. Bruce Croft. 2019. Asking clarifying questions in open-domain information-seeking conversations. In International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), SIGIR '19.
Giambattista Amati, Giuseppe Amodeo, Marco Bianchi, Carlo Gaibisso, and Giorgio Gambosi. 2008. FUB, IASI-CNR and University of Tor Vergata at TREC 2008 blog track. Technical report, Fondazione Ugo Bordoni, Rome, Italy.
Gianni Amati and Cornelis Joost Van Rijsbergen. 2002. Probabilistic models of information retrieval based on measuring the divergence from randomness. ACM Transactions on Information Systems (TOIS), 20(4):357-389.
Avishek Anand, Lawrence Cavedon, Hideo Joho, Mark Sanderson, and Benno Stein. 2020. Conversational search (dagstuhl seminar 19461). In Dagstuhl Reports, volume 9. Schloss Dagstuhl-Leibniz-Zentrum für Informatik.
Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65-72.
Steven M Beitzel. 2006. On understanding and classifying web queries. Illinois Institute of Technology.
Keping Bi, Qingyao Ai, and W Bruce Croft. 2021. Asking clarifying questions based on negative feedback in conversational search. In Proc. of ICTIR.
Leo Breiman. 2001. Random forests. Machine learning, 45(1):5-32.
Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine learning, 20(3):273-297.
Yashar Deldjoo, Johanne R Trippas, and Hamed Zamani. 2021. Towards multi-modal conversational information seeking. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1577-1587.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Yue Feng, Hossein A Rahmani, Aldo Lipani, and Emine Yilmaz. 2023. Towards asking clarification questions for information seeking on task-oriented dialogues. arXiv preprint arXiv:2305.13690.
Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational ai. In The 41st international ACM SIGIR conference on research & development in information retrieval, pages 1371-1374.
Jianfeng Gao, Michel Galley, and Lihong Li. 2019. Neural approaches to conversational AI: Question answering, task-oriented dialogues and social chatbots. Now Foundations and Trends.
Kalervo Järvelin and Jaana Kekäläinen. 2000. IR evaluation methods for retrieving highly relevant documents. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
Kalervo Järvelin and Jaana Kekäläinen. 2017. IR evaluation methods for retrieving highly relevant documents. In ACM SIGIR Forum, volume 51, pages 243-250. ACM New York, NY, USA.
Omar Khattab and Matei Zaharia. 2020. Colbert: Efficient and effective passage search via contextualized late interaction over bert. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pages 39-48.
Antonios Minas Krasakis, Mohammad Aliannejadi, Nikos Voskarides, and Evangelos Kanoulas. 2020. Analysing the effect of clarifying questions on document ranking in conversational search. In Proc. of ICTIR.
Vaibhav Kumar and Alan W Black. 2020. Clarq: A large-scale and diverse dataset for clarification question generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7296-7301.
Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Jiwei Li, Alexander H Miller, Sumit Chopra, Marc'Aurelio Ranzato, and Jason Weston. 2017. Dialogue learning with human-in-the-loop. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017.
Margaret Li, Jason Weston, and Stephen Roller. 2019. Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81.
Wei-Yin Loh. 2011. Classification and regression trees. Wiley interdisciplinary reviews: data mining and knowledge discovery, 1(1):14-23.
Craig Macdonald and Nicola Tonellotto. 2020. Declarative experimentation in information retrieval using pyterrier. In Proceedings of ICTIR 2020.
Christopher D Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to information retrieval. Cambridge University Press.
Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pages 43-52.
Julian McAuley and Alex Yang. 2016. Addressing complex and subjective product-related queries with customer reviews. In Proceedings of the 25th International Conference on World Wide Web, pages 625-635.
Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. 2019. Document expansion by query prediction. arXiv preprint arXiv:1904.08375.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.
Gustavo Penha, Alexandru Balan, and Claudia Hauff. 2019. Introducing mantis: a novel multi-domain information seeking dialogues dataset. arXiv preprint arXiv:1912.04639.
Chen Qu, Liu Yang, W Bruce Croft, Johanne R Trippas, Yongfeng Zhang, and Minghui Qiu. 2018. Analyzing and characterizing user intent in information-seeking conversations. In The 41st international acm SIGIR conference on research & development in information retrieval, pages 989-992.
Dragomir R Radev, Hong Qi, Harris Wu, and Weiguo Fan. 2002. Evaluating web-based question answering systems. In LREC. CiteSeer.
Sudha Rao and Hal Daumé III. 2018. Learning to ask good questions: Ranking clarification questions using neural expected value of perfect information. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2737-2746.
Sudha Rao and Hal Daumé III. 2019. Answer-based adversarial training for generating clarification questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 143-155.
Corbin Rosset, Chenyan Xiong, Xia Song, Daniel Campos, Nick Craswell, Saurabh Tiwary, and Paul Bennett. 2020. Leading conversational search by suggesting useful questions. In Proceedings of The Web Conference 2020, pages 1160-1170.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. arXiv preprint arXiv:1902.08654.
Ivan Sekulic, Mohammad Aliannejadi, and Fabio Crestani. 2021. Towards facet-driven generation of clarifying questions for conversational search. In Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval, pages 167-175.
Ivan Sekulic, Mohammad Aliannejadi, and Fabio Crestani. 2022. Exploiting document-based features for clarification in conversational search. In European Conference on Information Retrieval.
Taihua Shao, Fei Cai, Wanyu Chen, and Honghui Chen. 2022. Self-supervised clarification question generation for ambiguous multi-turn conversation. Information Sciences, 587:626-641.
Zhengxiang Shi, Yue Feng, and Aldo Lipani. 2022. Learning to execute or ask clarification questions. arXiv preprint arXiv:2204.08373.
Zhengxiang Shi, Jerome Ramos, To Eun Kim, Xi Wang, Hossein A Rahmani, and Aldo Lipani. 2023. When and what to ask through world states and text instructions: Iglu nlp challenge solution. arXiv preprint arXiv:2305.05754.
Leila Tavakoli. 2020. Generating clarifying questions in conversational search systems. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 3253-3256.
Leila Tavakoli, Johanne R Trippas, Hamed Zamani, Falk Scholer, and Mark Sanderson. 2022. Mimics-duo: Offline & online evaluation of search clarification. arXiv preprint arXiv:2206.04417.
Leila Tavakoli, Hamed Zamani, Falk Scholer, William Bruce Croft, and Mark Sanderson. 2021. Analyzing clarification in asynchronous information-seeking conversations. Journal of the Association for Information Science and Technology.
Johanne R Trippas, Damiano Spina, Lawrence Cavedon, Hideo Joho, and Mark Sanderson. 2018. Informing the design of spoken conversational search: Perspective paper. In Proceedings of the 2018 conference on human information interaction & retrieval, pages 32-41.
Ellen M Voorhees et al. 1999. The trec-8 question answering track report. In Trec, volume 99, pages 77-82.
Yining Wang, Liwei Wang, Yuanzhi Li, Di He, and Tie-Yan Liu. 2013. A theoretical analysis of ndcg type ranking measures. In Conference on learning theory, pages 25-54. PMLR.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
Jingjing Xu, Yuechen Wang, Duyu Tang, Nan Duan, Pengcheng Yang, Qi Zeng, Ming Zhou, and Xu Sun. 2019. Asking clarification questions in knowledge-based question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1618-1629.
Xin Yan and Xiaogang Su. 2009. Linear regression analysis: theory and computing. World Scientific.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32.
Munazza Zaib, Wei Emma Zhang, Quan Z Sheng, Adnan Mahmood, and Yang Zhang. 2021. Conversational question answering: A survey. arXiv preprint arXiv:2106.00874.
Hamed Zamani, Gord Lueck, Everest Chen, Rodolfo Quispe, Flint Luu, and Nick Craswell. 2020. Mimics: A large-scale data collection for search clarification. In Proceedings of the 29th acm international conference on information & knowledge management, pages 3189-3196.
Hamed Zamani, Johanne R Trippas, Jeff Dalton, and Filip Radlinski. 2022. Conversational information seeking. arXiv preprint arXiv:2201.08808.
Zhiling Zhang and Kenny Zhu. 2021. Diverse and specific clarification question generation with keywords. In Proceedings of the Web Conference 2021, pages 3501-3511.
Ziliang Zhao, Zhicheng Dou, Jiaxin Mao, and Ji-Rong Wen. 2022. Generating clarifying questions with web search results. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval.
Jie Zou, Aixin Sun, Cheng Long, Mohammad Aliannejadi, and Evangelos Kanoulas. 2023. Asking clarifying questions: To benefit or to disturb users in web search? Information Processing & Management, 60(2):103176.
# A Datasets Details
# A.0.1 ClariT
The ClariT dataset (Feng et al., 2023) was released in 2023 by researchers from University College London. ClariT is the first dataset for asking clarification questions in task-oriented conversational information seeking. It was built on top of the existing ShARC<sup>5</sup> dataset, which clarifies users' information needs in task-oriented dialogues. The authors extended the ShARC dialogues with user profiles so that clarification questions can be asked with personalised information in mind, and, to keep clarification efficient, removed unnecessary clarification questions from the original dialogues. The resulting dataset consists of over 108k multi-turn conversations including clarification questions, user profiles, and corresponding task knowledge in general domains.
# A.0.2 Qulac
The Qulac (Questions for lack of clarity) dataset (Aliannejadi et al., 2019) is a joint effort by researchers from the Università della Svizzera Italiana and the University of Massachusetts Amherst. Qulac is the first dataset, as well as an offline evaluation framework, for studying clarification questions in open-domain information-seeking conversational search systems. To acquire the clarification questions, the authors followed a four-step strategy: (1) they defined the topics and their facets, borrowed from the TREC Web Track; (2) they collected several candidate clarification questions for each query through crowdsourcing, asking human annotators to generate questions for a given query according to the results shown by a commercial search engine; (3) they assessed the relevance of the questions to each facet and collected new questions for facets that required more specific questions; (4) finally, they collected the answers for every query-facet-question triplet. The collected dataset consists of 10,277 single-turn conversations including clarification questions and their answers on multi-faceted and ambiguous queries, covering 198 topics with 762 facets.
# A.0.3 ClariQ
The ClariQ dataset (Aliannejadi et al., 2020, 2021) was released in 2020 by researchers from the University of Amsterdam, Microsoft, Google, the University of Glasgow, and MIPT. The dataset was collected as part of the ConvAI3 challenge, which was co-organised with the SCAI workshop. ClariQ is an extended version of Qulac: new topics, questions, and answers were added to the training set using crowdsourcing. Like Qulac, ClariQ consists of single-turn conversations (an initial_request followed by clarification questions and answers). Moreover, it comes with synthetic multi-turn conversations (up to three turns). ClariQ features approximately 18K single-turn conversations and 1.8 million multi-turn conversations.
# A.0.4 TavakoliCQ
Recently, Tavakoli et al. (Tavakoli et al., 2021; Tavakoli, 2020), from RMIT University and the University of Massachusetts Amherst, explored clarification questions to provide insight into how they are used to disambiguate users' ambiguous requests and information needs. To this end, they extracted a set of clarification questions from posts on the StackExchange question answering community (Tavakoli, 2020). They investigated the three sites with the highest number of posts across three different categories, covering the period from July 2009 to September 2019. The resulting dataset therefore spans three domains: business with 13,187 posts, culture with 107,266 posts, and life/arts with 55,959 posts. To identify potential clarification questions, they collected the comments of each post that contain at least one sentence with a question mark, excluding questions submitted by the author of the post and questions that appear in quotation marks. Their findings indicate that the most useful clarification questions follow similar patterns, regardless of the domain.
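The extraction heuristic described above can be sketched as a simple filter. The function name, comment structure, and example comments below are ours; the original pipeline also performs sentence splitting and further cleaning not shown here:

```python
import re

def candidate_clarification_questions(post_author, comments):
    """Keep comments that contain a question mark outside quotation
    marks, dropping the post author's own comments, roughly following
    the TavakoliCQ selection heuristic."""
    kept = []
    for author, text in comments:
        if author == post_author:
            continue  # exclude questions from the post's author
        unquoted = re.sub(r'"[^"]*"', "", text)  # drop quoted spans
        if "?" in unquoted:
            kept.append(text)
    return kept

# invented (author, comment) pairs on a hypothetical post
comments = [
    ("asker", "Did I explain this well enough?"),              # author's own
    ("helper1", 'You wrote "does it work?" but gave no error.'),  # quoted only
    ("helper2", "Which operating system are you using?"),      # kept
]
print(candidate_clarification_questions("asker", comments))
```

Only the last comment survives: the first is from the post author, and the second's question mark sits entirely inside a quotation.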
# A.0.5 MIMICS
MIMICS (the Microsoft Mixed-Initiative Conversation Search Data) (Zamani et al., 2020) is a large-scale dataset for search clarification, introduced in 2020 by researchers from Microsoft. Microsoft Bing added a clarification pane to its results page to clarify faceted and ambiguous queries; each pane includes a clarification question and up to five candidate answers. Internal algorithms and machine learning models, based on users' history with the search engine and on content analysis, were used to generate the clarification questions and candidate answers. The final MIMICS collection contains three datasets: (1) MIMICS-Click includes 414,362 unique queries, each linked to exactly one clarification pane, together with the corresponding aggregated user clicks; (2) MIMICS-ClickExplore contains aggregated user interaction signals for over 64,007 unique queries, each with multiple clarification panes, i.e., 168,921 query-clarification pairs; (3) MIMICS-Manual includes over 2k unique real search queries and 2.8k query-clarification pairs. Each query-clarification pair in MIMICS-Manual was manually labelled by at least three trained annotators, with majority voting used to aggregate the annotations. It also contains graded quality labels for each clarification question, the candidate answer set, and the landing result page for each candidate answer.
# A.0.6 MANtIS
The MANtIS (short for Multi-domain Information Seeking dialogues) dataset (Penha et al., 2019) is a large-scale dataset of multi-domain, grounded information-seeking dialogues introduced by researchers from TU Delft. MANtIS was built by extracting conversations from the StackExchange question answering community, covering 14 StackExchange domains. Each question-answering thread of a StackExchange site is a conversation between an information seeker and an information provider. A conversation is included if (1) it takes place between exactly two users; (2) it contains at least two utterances per user; (3) it has not been marked as spam, offensive, edited, or deprecated; (4) the provider's utterances contain at least one reference (a hyperlink); and (5) the final utterance belongs to the seeker and contains positive feedback. The final MANtIS dataset includes 80k conversations over 14 domains. To indicate the types of user intent, the authors then sampled 1,365 conversations from MANtIS and annotated their utterances with user intent labels, such as original question, follow-up question, potential answer, positive feedback, and negative feedback. The final sample contains 6,701 user intent labels.
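The five inclusion criteria amount to a per-thread filter. A minimal sketch, assuming a hypothetical record layout for each conversation (the field names and example data are ours, not MANtIS's actual schema):

```python
def keep_conversation(conv):
    """Apply the five MANtIS-style inclusion criteria to one thread.

    `conv` is an assumed record: {'seeker', 'provider', 'flags',
    'utterances': [{'user', 'links', 'positive_feedback'}]}.
    """
    utts = conv["utterances"]
    users = {u["user"] for u in utts}
    if users != {conv["seeker"], conv["provider"]}:        # (1) exactly two users
        return False
    if any(sum(u["user"] == who for u in utts) < 2 for who in users):  # (2)
        return False
    if conv["flags"] & {"spam", "offensive", "edited", "deprecated"}:  # (3)
        return False
    if not any(u["user"] == conv["provider"] and u["links"] for u in utts):  # (4)
        return False
    last = utts[-1]                                        # (5) seeker ends with
    return last["user"] == conv["seeker"] and last["positive_feedback"]

# a toy thread that satisfies all five criteria
conv = {
    "seeker": "alice", "provider": "bob", "flags": set(),
    "utterances": [
        {"user": "alice", "links": [], "positive_feedback": False},
        {"user": "bob", "links": ["https://docs.example"], "positive_feedback": False},
        {"user": "alice", "links": [], "positive_feedback": False},
        {"user": "bob", "links": [], "positive_feedback": False},
        {"user": "alice", "links": [], "positive_feedback": True},
    ],
}
print(keep_conversation(conv))  # True
```

Flipping any one field (e.g., adding a "spam" flag) makes the thread fail, which is how such structural criteria prune a raw StackExchange dump down to clean two-party dialogues.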
# A.0.7 ClariQ-FKw
The ClariQ-FKw (FKw stands for Facet Keywords) (Sekulić et al., 2021) was proposed by researchers
from the University of Amsterdam and the Università della Svizzera Italiana in 2021. Their main objective was to use large-scale generative language models to produce clarification questions for ambiguous queries and their facets, where facets are keywords that disambiguate the query. The dataset includes query-facet-clarification question triplets constructed on top of the ClariQ (Aliannejadi et al., 2020) dataset. To this end, the authors applied a simple data filtering step to convert ClariQ samples into the appropriate triplets and derived the facets from topic descriptions. The final ClariQ-FKw contains 2,181 triplets.
# A.0.8 MSDialog
The MSDialog dataset (Qu et al., 2018), proposed by researchers from the University of Massachusetts Amherst, RMIT University, Rutgers University, and Alibaba Group, is used to analyse information-seeking conversations in terms of user intent distribution, co-occurrence, and flow patterns in conversational search systems. MSDialog is constructed from question-answering interactions between information seekers and providers on an online forum for Microsoft products. To create the dataset, the authors first crawled over 35k multi-turn QA threads (i.e., dialogues) containing 300k utterances from the Microsoft Community$^{10}$ – a forum that provides technical support for Microsoft products – and then annotated user intent types at the utterance level via crowdsourcing on Amazon Mechanical Turk (MTurk)$^{11}$. To provide a high-quality and consistent dataset, they selected about 2.4k dialogues based on four criteria: conversations (1) with 3 to 10 turns; (2) with 2 to 4 participants; (3) with at least one correct answer selected by the community; and (4) that fall into one of the following categories: Windows, Office, Bing, and Skype, the major categories of Microsoft products. The final annotated dataset contains 2,199 multi-turn dialogues with 10,020 utterances.
# A.0.9 MIMICS-Duo
The MIMICS-Duo (Tavakoli et al., 2022) dataset is proposed by researchers at RMIT University, the University of Melbourne, and the University of Massachusetts Amherst. It provides the online and offline evaluation of clarification selection and
generation approaches. It is constructed based on the queries in MIMICS-ClickExplore (Zamani et al., 2020), a sub-dataset of MIMICS (Zamani et al., 2020) that consists of online signals, such as user engagement based on click-through rate. The MIMICS-Duo contains over 300 search queries and 1,034 query-clarification pairs.
# A.0.10 ClarQ
The ClarQ dataset (Kumar and Black, 2020) was created in 2020 by researchers at Carnegie Mellon University. ClarQ is designed for large-scale clarification question generation models. To this end, the dataset is built with a bootstrapping framework based on self-supervision approaches on top of post-comment tuples extracted from the StackExchange $^{12}$ question-answering community. To construct ClarQ, they first extracted posts and their comments from 173 domains. They then filtered out unanswered posts and only considered comments on posts with at least one final answer as potential candidates for clarification questions. The ClarQ dataset consists of about 2 million post-question tuples across 173 domains.
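The candidate filter described above can be sketched roughly as follows: keep a post-comment pair only when the post has at least one answer and the comment looks like a question. The data structures are illustrative assumptions, and a question-mark heuristic stands in for whatever detection ClarQ actually uses.

```python
# Rough sketch of a ClarQ-style candidate filter over post-comment tuples.
# Post structure and the question-mark heuristic are assumptions.
def candidate_pairs(posts):
    pairs = []
    for post in posts:
        if post["n_answers"] < 1:
            continue  # drop unanswered posts
        for comment in post["comments"]:
            if comment.strip().endswith("?"):  # crude clarification-question test
                pairs.append((post["body"], comment))
    return pairs

posts = [
    {"body": "My screen flickers after the update.", "n_answers": 2,
     "comments": ["Which driver version are you on?", "Same issue here."]},
    {"body": "App crashes on start.", "n_answers": 0,
     "comments": ["Did you try reinstalling?"]},
]
pairs = candidate_pairs(posts)
print(pairs)
```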
# A.0.11 RaoCQ
Rao and Daumé III (2018) from the University of Maryland study the problem of ranking clarification questions and propose an ACQs dataset on top of StackExchange. To create this dataset, they use a dump of StackExchange and create a number of post-question-answer triplets, where the post is the initial unedited request, the question is the first comment containing a question (i.e., indicated by a question mark), and the answer is either the edit made to the post after the question (i.e., the edit closest in time following the question) or the post author's reply to the question in the comment section. The final dataset includes a total of 77,097 triplets across three domains: askubuntu, unix, and superuser.
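The triplet logic above (first question-bearing comment, then the closest later edit, falling back to the author's reply) can be sketched as follows; all structures and field names are illustrative, not the paper's actual pipeline.

```python
# Sketch of RaoCQ-style triplet construction. Comments and edits carry
# hypothetical "text"/"time" fields for illustration.
def build_triplet(post, comments, edits, author_reply=None):
    # the question is the first comment containing a question mark
    question = next((c for c in comments if "?" in c["text"]), None)
    if question is None:
        return None
    # the answer is the edit closest in time after the question...
    later_edits = [e for e in edits if e["time"] > question["time"]]
    if later_edits:
        answer = min(later_edits, key=lambda e: e["time"])["text"]
    elif author_reply:
        answer = author_reply  # ...or the author's reply in the comments
    else:
        return None
    return (post, question["text"], answer)

post = "How do I mount a USB drive?"
comments = [{"text": "thanks for posting", "time": 1},
            {"text": "Which filesystem is it?", "time": 2}]
edits = [{"text": "Edit: it is FAT32.", "time": 5}]
triplet = build_triplet(post, comments, edits)
print(triplet)
```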
# A.0.12 AmazonCQ
Rao and Daumé III (2019), from Microsoft and the University of Maryland, released a dataset for generating clarification questions. The dataset contains a context, which is the combination of a product title and description from the Amazon website; a question, which is a clarification question asking about some missing information in the context; and the answer, which is the seller's (or other users')
reply to the question. To construct this dataset, they combined the Amazon Question Answering dataset created by McAuley and Yang (2016) and the Amazon Review dataset proposed by McAuley et al. (2015). The final dataset consists of 15,859 contexts (i.e., product descriptions) with 3 to 10 clarification questions (7 on average) per context.
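The combination step can be pictured as a join on a shared product identifier, keeping only products with 3 to 10 questions; the dictionary shapes and the `asin` key are assumptions for illustration.

```python
# Hypothetical join of product descriptions and QA items on a product id,
# yielding AmazonCQ-style (context, questions) entries. Shapes are assumed.
def combine(descriptions, qa_items, min_q=3, max_q=10):
    by_product = {}
    for qa in qa_items:
        by_product.setdefault(qa["asin"], []).append(qa["question"])
    dataset = []
    for asin, context in descriptions.items():
        questions = by_product.get(asin, [])
        if min_q <= len(questions) <= max_q:  # keep 3-10 questions per context
            dataset.append({"context": context, "questions": questions})
    return dataset

descriptions = {"B001": "Stainless steel kettle, 1.7 L."}
qa_items = [{"asin": "B001", "question": q} for q in
            ["Is it cordless?", "Does the lid lock?", "What is the wattage?"]]
amazon_cq = combine(descriptions, qa_items)
print(amazon_cq)
```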
# A.0.13 CLAQUA
The CLAQUA dataset (Xu et al., 2019) was created by researchers from Peking University, the University of Science and Technology of China, and Microsoft Research Asia in 2019. They propose CLAQUA to provide supervised resources for training, evaluating, and creating powerful models for clarification-related text understanding and generation in knowledge-based question answering (KBQA) systems. The dataset is constructed in three steps: (1) sub-graph extraction, (2) ambiguous question annotation, and (3) clarification question annotation. In the first step, they extract ambiguous sub-graphs from an open-domain knowledge base, such as FreeBase. They focus on shared-name ambiguity, where two entities have the same name and there is a lack of necessary distinguishing information. In the second step, they provide a table listing the shared entity names, their types, and their descriptions; based on this table, annotators write ambiguous questions. Finally, in the third step, based on the entities and the annotated ambiguous question, annotators summarize distinguishing information and write a multi-choice clarification question, including a special character that separates entity and pattern information. These steps are carried out for both single- and multi-turn conversations. The final CLAQUA dataset contains 17,163 single-turn and 22,213 multi-turn conversations.
# B Experiments on Model Performance
# B.1 Clarification Need Prediction
Clarification need prediction is a major task in search clarification: deciding whether to ask clarification questions at all. Among the discussed ACQs datasets, only ClariQ (Aliannejadi et al., 2020, 2021), MIMICS (Zamani et al., 2020), MIMICS-Duo (Tavakoli et al., 2022), and CLAQUA (Xu et al., 2019) provide the necessary information for the clarification need prediction task. The ClariQ and CLAQUA datasets model the clarification need
prediction task as a classification problem. They both pair the initial user request with a classification label that indicates the level of clarification required. In contrast to the ClariQ and CLAQUA datasets, the task in the MIMICS and MIMICS-Duo datasets is modelled as a regression task for predicting user engagement. Specifically, these datasets aim to predict the degree to which users find the clarification process useful and enjoy interacting with it. Based on this prediction, the system can decide whether or not to request clarification. We subsequently evaluated the clarification need prediction task using a variety of traditional machine learning models and language models. The traditional machine learning models employed as baselines include Random Forest (Breiman, 2001), Decision Tree (Loh, 2011), Multinomial Naive Bayes (MultinomialNB) (Manning, 2008), Support Vector Machines (SVM) (Cortes and Vapnik, 1995), and Linear Regression (Yan and Su, 2009). The language model baselines include BART (Lewis et al., 2019), XLNet (Yang et al., 2019), XLM (Lample and Conneau, 2019), Albert (Lan et al., 2019), distilBERT (Sanh et al., 2019), and BERT (Devlin et al., 2018). These models were applied to both the classification and regression tasks. The input to the traditional ML models is a matrix of TF-IDF features extracted from the raw input text. We use Scikit-learn $^{13}$ (Pedregosa et al., 2011), HuggingFace $^{14}$ (Wolf et al., 2019), and TensorFlow (Abadi et al., 2016) for the implementation of the aforementioned models.
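A minimal version of the traditional-ML setup above is a Scikit-learn pipeline that feeds TF-IDF features into a classifier. The toy queries and labels are made up for illustration, and Logistic Regression here is just one of the baseline family, not a claim about the exact configuration used in the experiments.

```python
# TF-IDF features + a traditional classifier, as in the baselines above.
# The tiny training set and labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

queries = ["jaguar", "tell me about the jaguar car",
           "apple", "apple iphone 13 price"]
# 1 = ambiguous query (clarification needed), 0 = specific query
needs_clarification = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(queries, needs_clarification)
preds = clf.predict(["jaguar speed", "windows"])
print(list(preds))
```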
# B.2 Question Relevance Ranking Baselines
To address the second task, namely asking clarification questions, many studies have explored either generation or ranking strategies. However, as we argued in Section 5, generation techniques require rich information for satisfactory performance and are difficult to apply to datasets that lack such information. Therefore, we consider the ranking task when summarising model performance on the asking clarification questions task and present the results of BM25 and Doc2Query + BM25. Note that the BM25-based techniques are included because of their competitive performance on the clarification question ranking task (Aliannejadi et al., 2021). We also compare some additional ranking
techniques, such as PL2 (Amati and Van Rijsbergen, 2002), DPH (Amati et al., 2008), and a recent dense retriever (i.e., ColBERT (Khattab and Zaharia, 2020)). However, the inclusion of such approaches adds little when comparing the use of different datasets. Therefore, we only present the results of the above two approaches in Table 5. As for the implementation, we leverage PyTerrier $^{15}$ (Macdonald and Tonellotto, 2020), a recently developed Python framework for conducting information retrieval experiments.
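To make the ranking task concrete, a compact Okapi BM25 scorer (k1=1.5, b=0.75) can rank candidate clarification questions against a query. This is a self-contained sketch of the scoring formula, not the PyTerrier implementation used in the experiments, and the toy corpus is invented.

```python
# Minimal Okapi BM25 ranker over a toy set of clarification questions.
import math
import re
from collections import Counter

def bm25_rank(query, docs, k1=1.5, b=0.75):
    """Return document indices sorted by BM25 score against the query."""
    tokenized = [re.findall(r"\w+", d.lower()) for d in docs]
    n_docs = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / n_docs
    df = Counter(t for d in tokenized for t in set(d))  # document frequencies
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        score = 0.0
        for term in re.findall(r"\w+", query.lower()):
            if df[term] == 0:
                continue
            idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return sorted(range(n_docs), key=lambda i: scores[i], reverse=True)

questions = [
    "do you mean the jaguar car?",
    "which version of windows do you use?",
    "are you asking about the animal jaguar?",
]
ranking = bm25_rank("jaguar car", questions)
print(ranking)
```

The question mentioning both "jaguar" and "car" ranks first; the unrelated question last.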
# B.3 User Satisfaction with CQs
In this experiment, we explored the task of determining user satisfaction with CQs by utilizing a variety of traditional machine learning and language models on the ACQs datasets. We employed the same models that we previously used for the clarification need prediction task. Using the same models for both tasks allows us to examine how well they predict user satisfaction with CQs, compare this with their performance on clarification need prediction, understand their strengths and limitations, and make informed decisions on which models to use in future applications. Only two of the 12 datasets (i.e., MIMICS (Zamani et al., 2020) and MIMICS-Duo (Tavakoli et al., 2022)) provide user satisfaction information. In both MIMICS and MIMICS-Duo, each clarification question is given a label indicating how satisfied a user is with it. For MIMICS, the labels are Good, Fair, or Bad: a good clarifying question is accurate, fluent, and grammatically correct; a fair clarifying question may not meet all of these criteria but is still acceptable; otherwise, it is considered bad. In MIMICS-Duo, users' satisfaction with clarification questions is assessed on a 5-level scale: Very Bad, Bad, Fair, Good, and Very Good. Thus, we formulate the user satisfaction with CQs task as supervised classification in our experiments.
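The macro-averaged precision, recall, and F1 reported in Tables 7 and 8 can be computed per label and then averaged, as the sketch below shows for the three MIMICS satisfaction labels. The toy predictions are made up for illustration.

```python
# Macro-averaged precision/recall/F1 over the MIMICS satisfaction labels.
def macro_prf(y_true, y_pred, labels=("Good", "Fair", "Bad")):
    ps, rs, fs = [], [], []
    for lab in labels:
        tp = sum(t == p == lab for t, p in zip(y_true, y_pred))
        fp = sum(p == lab and t != lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        ps.append(p); rs.append(r); fs.append(f)
    n = len(labels)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n

y_true = ["Good", "Good", "Fair", "Bad"]
y_pred = ["Good", "Fair", "Fair", "Bad"]
precision, recall, f1 = macro_prf(y_true, y_pred)
print(precision, recall, f1)
```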
Table 7: The performance of all methods on clarification need prediction on MIMICS and MIMICS-Duo. The best models are in bold.
<table><tr><td rowspan="2">Model</td><td colspan="3">MIMICS</td><td colspan="3">MIMICS-Duo</td></tr><tr><td>Precision</td><td>Recall</td><td>F1</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>RandomForest</td><td>0.3540</td><td>0.3806</td><td>0.3717</td><td>0.2860</td><td>0.5000</td><td>0.3638</td></tr><tr><td>DecisionTree</td><td>0.2125</td><td>0.2520</td><td>0.2028</td><td>0.5329</td><td>0.5095</td><td>0.4305</td></tr><tr><td>SVM</td><td>0.2858</td><td>0.3024</td><td>0.2772</td><td>0.5281</td><td>0.5088</td><td>0.4333</td></tr><tr><td>MultinomialNB</td><td>0.2924</td><td>0.3186</td><td>0.2876</td><td>0.5185</td><td>0.5178</td><td>0.5166</td></tr><tr><td>LogisticRegression</td><td>0.2749</td><td>0.2878</td><td>0.2816</td><td>0.7862</td><td>0.5010</td><td>0.3660</td></tr><tr><td>BART</td><td>0.5083</td><td>0.3344</td><td>0.3657</td><td>0.5869</td><td>0.5503</td><td>0.5194</td></tr><tr><td>XLNet</td><td>0.1385</td><td>0.2500</td><td>0.1782</td><td>0.286</td><td>0.5</td><td>0.3638</td></tr><tr><td>XLM</td><td>0.0119</td><td>0.2500</td><td>0.0227</td><td>0.286</td><td>0.5</td><td>0.3638</td></tr><tr><td>Albert</td><td>0.2920</td><td>0.2877</td><td>0.2855</td><td>0.286</td><td>0.5</td><td>0.3638</td></tr><tr><td>distilBERT</td><td>0.3391</td><td>0.3305</td><td>0.3322</td><td>0.5941</td><td>0.594</td><td>0.5941</td></tr><tr><td>BERT</td><td>0.3804</td><td>0.3249</td><td>0.3344</td><td>0.6349</td><td>0.625</td><td>0.6255</td></tr><tr><td></td><td colspan="3">MIMICS</td><td 
colspan="3">MIMICS-Duo</td></tr><tr><td></td><td>MAE</td><td>MSE</td><td>R²</td><td>MAE</td><td>MSE</td><td>R²</td></tr><tr><td>RandomForest</td><td>2.4404</td><td>7.969</td><td>-0.0012</td><td>2.8502</td><td>11.206</td><td>-0.0079</td></tr><tr><td>DecisionTree</td><td>2.6374</td><td>10.0143</td><td>-0.2581</td><td>3.052</td><td>14.2306</td><td>-0.2799</td></tr><tr><td>SVR</td><td>2.4447</td><td>8.1852</td><td>-0.0283</td><td>2.7801</td><td>14.6398</td><td>-0.3167</td></tr><tr><td>MultinomialNB</td><td>3.3364</td><td>16.7424</td><td>-1.1034</td><td>2.7971</td><td>18.942</td><td>-0.7037</td></tr><tr><td>LogisticRegression</td><td>3.4084</td><td>17.9488</td><td>-1.2549</td><td>2.7971</td><td>18.942</td><td>-0.7037</td></tr><tr><td>BART</td><td>2.3903</td><td>8.5296</td><td>-0.0716</td><td>2.7233</td><td>10.3239</td><td>0.0714</td></tr><tr><td>XLNet</td><td>2.4582</td><td>8.1836</td><td>-0.0281</td><td>2.7971</td><td>18.942</td><td>-0.7037</td></tr><tr><td>XLM</td><td>2.6214</td><td>9.9151</td><td>-0.2456</td><td>2.7971</td><td>18.942</td><td>-0.7037</td></tr><tr><td>Albert</td><td>2.4339</td><td>8.0300</td><td>-0.0088</td><td>2.7971</td><td>18.942</td><td>-0.7037</td></tr><tr><td>distilBERT</td><td>2.3325</td><td>7.8685</td><td>0.0115</td><td>2.7744</td><td>11.0613</td><td>0.0051</td></tr><tr><td>BERT</td><td>2.4562</td><td>8.1277</td><td>-0.0211</td><td>2.8801</td><td>11.2268</td><td>-0.0098</td></tr></table>
Table 8: The performance of all methods on user satisfaction prediction with CQs on MIMICS and MIMICS-Duo. The best models are in bold.
<table><tr><td rowspan="2">Model</td><td colspan="3">MIMICS</td><td colspan="3">MIMICS-Duo</td></tr><tr><td>Precision</td><td>Recall</td><td>F1</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>RandomForest</td><td>0.7522</td><td>0.5172</td><td>0.3686</td><td>0.1256</td><td>0.25</td><td>0.1672</td></tr><tr><td>DecisionTree</td><td>0.5648</td><td>0.5168</td><td>0.4050</td><td>0.2218</td><td>0.2311</td><td>0.2163</td></tr><tr><td>SVM</td><td>0.736</td><td>0.5947</td><td>0.5212</td><td>0.2379</td><td>0.2498</td><td>0.2157</td></tr><tr><td>MultinomialNB</td><td>0.8255</td><td>0.7842</td><td>0.7758</td><td>0.4407</td><td>0.2787</td><td>0.2336</td></tr><tr><td>LogisticRegression</td><td>0.7522</td><td>0.5172</td><td>0.3686</td><td>0.3762</td><td>0.2542</td><td>0.1761</td></tr><tr><td>BART</td><td>0.9385</td><td>0.931</td><td>0.9302</td><td>0.1256</td><td>0.25</td><td>0.1672</td></tr><tr><td>XLNet</td><td>0.9219</td><td>0.9217</td><td>0.9217</td><td>0.1256</td><td>0.25</td><td>0.1672</td></tr><tr><td>XLM</td><td>0.9348</td><td>0.9309</td><td>0.9303</td><td>0.1256</td><td>0.25</td><td>0.1672</td></tr><tr><td>Albert</td><td>0.9385</td><td>0.931</td><td>0.9302</td><td>0.1256</td><td>0.25</td><td>0.1672</td></tr><tr><td>distilBERT</td><td>0.9453</td><td>0.9397</td><td>0.939</td><td>0.2766</td><td>0.2803</td><td>0.2777</td></tr><tr><td>BERT</td><td>0.9385</td><td>0.931</td><td>0.9302</td><td>0.2851</td><td>0.264</td><td>0.2056</td></tr></table>
A For every submission:
A1. Did you describe the limitations of your work?
After Section 6
A2. Did you discuss any potential risks of your work?
Not applicable. Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
1
A4. Have you used AI writing assistants when working on this paper?
Left blank.
B Did you use or create scientific artifacts?
Left blank.
B1. Did you cite the creators of artifacts you used?
Not applicable. Left blank.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Not applicable. Left blank.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Not applicable. Left blank.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Not applicable. Left blank.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Not applicable. Left blank.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Not applicable. Left blank.
C Did you run computational experiments?
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
Not applicable. Left blank.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. Left blank.
C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
Not applicable. Left blank.
D Did you use human annotators (e.g., crowdworkers) or research with human participants?
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
Not applicable. Left blank.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?
Not applicable. Left blank.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
Not applicable. Left blank.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
Not applicable. Left blank.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
Not applicable. Left blank.
2023/A Survey on Asking Clarification Questions Datasets in Conversational Systems/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e30a7833cc31ebc755b16eedd679c853c22d2f5f8f81044449421db9ed035832
size 697514

2023/A Survey on Asking Clarification Questions Datasets in Conversational Systems/layout.json
ADDED
The diff for this file is too large to render. See raw diff
2023/A Survey on Zero Pronoun Translation/3ae06d18-3d40-4838-ba49-dbe91c97e883_content_list.json
ADDED
@@ -0,0 +1,2162 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
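The file added below follows a MinerU-style `content_list.json` layout: a JSON array of blocks, each carrying a `type` (`text`, `list`, `image`, `table`, plus page furniture such as `page_footnote`, `page_number`, and `footer`), a pixel `bbox`, and a `page_idx`. A minimal sketch of how such a file could be consumed, assuming only that schema — the helper names and the inline sample below are illustrative, not part of the dataset:

```python
import json

# Block types that are page furniture rather than document content in this
# content_list.json schema (types observed in the file being added).
BOILERPLATE_TYPES = {"page_footnote", "page_number", "footer", "header"}

def blocks_by_page(content_list):
    """Group content blocks by page_idx, dropping page furniture."""
    pages = {}
    for block in content_list:
        if block.get("type") in BOILERPLATE_TYPES:
            continue
        pages.setdefault(block["page_idx"], []).append(block)
    return pages

def headings(content_list):
    """Heading texts: text blocks flagged with a "text_level" field."""
    return [b["text"] for b in content_list
            if b.get("type") == "text" and "text_level" in b]

# Minimal inline sample mirroring the schema; a real file would be read with
# json.load(open(".../xxx_content_list.json", encoding="utf-8")).
sample = [
    {"type": "text", "text": "Abstract", "text_level": 1,
     "bbox": [260, 252, 339, 268], "page_idx": 0},
    {"type": "text", "text": "Zero pronouns (ZPs) are frequently omitted ...",
     "bbox": [144, 281, 460, 692], "page_idx": 0},
    {"type": "page_number", "text": "3325",
     "bbox": [480, 927, 519, 940], "page_idx": 0},
]

print(json.dumps(headings(sample)))    # ["Abstract"]
print(len(blocks_by_page(sample)[0]))  # 2
```

Swapping the inline sample for `json.load` over one of the `*_content_list.json` files in this batch yields the same per-page grouping for the real paper.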
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "A Survey on Zero Pronoun Translation",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
294,
|
| 8 |
+
90,
|
| 9 |
+
702,
|
| 10 |
+
111
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Longyue Wang*, Siyou Liu*, Mingzhou Xu, Linfeng Song, Shuming Shi, Zhaopeng Tu",
|
| 17 |
+
"bbox": [
|
| 18 |
+
129,
|
| 19 |
+
135,
|
| 20 |
+
872,
|
| 21 |
+
154
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Tencent AI Lab",
|
| 28 |
+
"bbox": [
|
| 29 |
+
436,
|
| 30 |
+
155,
|
| 31 |
+
564,
|
| 32 |
+
168
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "{vinnylwang, lifengjin, shumingshi, zptu}@tencent.com",
|
| 39 |
+
"bbox": [
|
| 40 |
+
242,
|
| 41 |
+
170,
|
| 42 |
+
759,
|
| 43 |
+
186
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "guofeng-ai@googlegroups.com",
|
| 50 |
+
"bbox": [
|
| 51 |
+
363,
|
| 52 |
+
187,
|
| 53 |
+
638,
|
| 54 |
+
203
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "Abstract",
|
| 61 |
+
"text_level": 1,
|
| 62 |
+
"bbox": [
|
| 63 |
+
260,
|
| 64 |
+
252,
|
| 65 |
+
339,
|
| 66 |
+
268
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "Zero pronouns (ZPs) are frequently omitted in pro-drop languages (e.g. Chinese, Hungarian, and Hindi), but should be recalled in nonpro-drop languages (e.g. English). This phenomenon has been studied extensively in machine translation (MT), as it poses a significant challenge for MT systems due to the difficulty in determining the correct antecedent for the pronoun. This survey paper highlights the major works that have been undertaken in zero pronoun translation (ZPT) after the neural revolution so that researchers can recognize the current state and future directions of this field. We provide an organization of the literature based on evolution, dataset, method, and evaluation. In addition, we compare and analyze competing models and evaluation metrics on different benchmarks. We uncover a number of insightful findings such as: 1) ZPT is in line with the development trend of large language model; 2) data limitation causes learning bias in languages and domains; 3) performance improvements are often reported on single benchmarks, but advanced methods are still far from real-world use; 4) general-purpose metrics are not reliable on nuances and complexities of ZPT, emphasizing the necessity of targeted metrics; 5) apart from commonly-cited errors, ZPs will cause risks of gender bias.",
|
| 73 |
+
"bbox": [
|
| 74 |
+
144,
|
| 75 |
+
281,
|
| 76 |
+
460,
|
| 77 |
+
692
|
| 78 |
+
],
|
| 79 |
+
"page_idx": 0
|
| 80 |
+
},
|
| 81 |
+
{
|
| 82 |
+
"type": "text",
|
| 83 |
+
"text": "1 Introduction",
|
| 84 |
+
"text_level": 1,
|
| 85 |
+
"bbox": [
|
| 86 |
+
114,
|
| 87 |
+
705,
|
| 88 |
+
258,
|
| 89 |
+
721
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "Pronouns play an important role in natural language, as they enable speakers to refer to people, objects, or events without repeating the nouns that represent them. Zero pronoun $(\\mathrm{ZP})^{1}$ is a complex phenomenon that appears frequently in pronoun-dropping (pro-drop) languages such as Chinese, Hungarian, and Hindi. Specifically, pronouns are often omitted when they can be pragmatically",
|
| 96 |
+
"bbox": [
|
| 97 |
+
112,
|
| 98 |
+
732,
|
| 99 |
+
489,
|
| 100 |
+
860
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "or grammatically inferable from intra- and intersentential contexts (Li and Thomson, 1979). Since recovery of such ZPs generally fails, this poses difficulties for several generation tasks, including dialogue modelling (Su et al., 2019), question answering (Tan et al., 2021), and machine translation (Wang, 2019).",
|
| 107 |
+
"bbox": [
|
| 108 |
+
507,
|
| 109 |
+
253,
|
| 110 |
+
884,
|
| 111 |
+
363
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "text",
|
| 117 |
+
"text": "When translating texts from pro-drop to non-prodrop languages (e.g. Chinese $\\Rightarrow$ English), this phenomenon leads to serious problems for translation models in terms of: 1) completeness, since translation of such invisible pronouns cannot be normally reproduced; 2) correctness, because understanding the semantics of a source sentence needs to identifying and resolving the pronominal reference.",
|
| 118 |
+
"bbox": [
|
| 119 |
+
507,
|
| 120 |
+
366,
|
| 121 |
+
884,
|
| 122 |
+
493
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "text",
|
| 128 |
+
"text": "Figure 1 shows ZP examples in three typological patterns determined by language family (detailed in Appendix §A.1). Taking a full-drop language for instance, the first-person subject and third-person object pronouns are omitted in Hindi input while these pronouns are all compulsory in English translation. This is not a problem for human beings since we can easily recall these missing pronoun from the context. However, even a real-life MT system still fails to accurately translate ZPs.",
|
| 129 |
+
"bbox": [
|
| 130 |
+
507,
|
| 131 |
+
494,
|
| 132 |
+
884,
|
| 133 |
+
653
|
| 134 |
+
],
|
| 135 |
+
"page_idx": 0
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "text",
|
| 139 |
+
"text": "In response to this problem, zero pronoun translation (ZPT) has been studied extensively in the MT community on three significant challenges:",
|
| 140 |
+
"bbox": [
|
| 141 |
+
507,
|
| 142 |
+
655,
|
| 143 |
+
882,
|
| 144 |
+
702
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 0
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "list",
|
| 150 |
+
"sub_type": "text",
|
| 151 |
+
"list_items": [
|
| 152 |
+
"- Dataset: there is limited availability of ZP-annotated parallel data, making it difficult to develop systems that can handle ZP complexities.",
|
| 153 |
+
"- Approach: due to the ability to capture semantic information with distributed representations, ideally, the representations of NMT should embed ZP information by learning the alignments between bilingual pronouns from the training corpus. In practice, however, NMT models only manage to successfully translate some simple ZPs, but still fail when translating complex ones (e.g. subject vs. object ZPs).",
|
| 154 |
+
"- Evaluation: general evaluation metrics for MT"
|
| 155 |
+
],
|
| 156 |
+
"bbox": [
|
| 157 |
+
507,
|
| 158 |
+
705,
|
| 159 |
+
884,
|
| 160 |
+
917
|
| 161 |
+
],
|
| 162 |
+
"page_idx": 0
|
| 163 |
+
},
|
| 164 |
+
{
|
| 165 |
+
"type": "page_footnote",
|
| 166 |
+
"text": "*Longyue Wang and Siyou Liu contributed equally to this work.",
|
| 167 |
+
"bbox": [
|
| 168 |
+
112,
|
| 169 |
+
868,
|
| 170 |
+
487,
|
| 171 |
+
891
|
| 172 |
+
],
|
| 173 |
+
"page_idx": 0
|
| 174 |
+
},
|
| 175 |
+
{
|
| 176 |
+
"type": "page_footnote",
|
| 177 |
+
"text": "${}^{1}\\mathrm{{ZP}}$ is also called dropped pronoun. The linguistic concept is detailed in Appendix §A.3.",
|
| 178 |
+
"bbox": [
|
| 179 |
+
112,
|
| 180 |
+
892,
|
| 181 |
+
485,
|
| 182 |
+
917
|
| 183 |
+
],
|
| 184 |
+
"page_idx": 0
|
| 185 |
+
},
|
| 186 |
+
{
|
| 187 |
+
"type": "page_number",
|
| 188 |
+
"text": "3325",
|
| 189 |
+
"bbox": [
|
| 190 |
+
480,
|
| 191 |
+
927,
|
| 192 |
+
519,
|
| 193 |
+
940
|
| 194 |
+
],
|
| 195 |
+
"page_idx": 0
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"type": "footer",
|
| 199 |
+
"text": "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
|
| 200 |
+
"bbox": [
|
| 201 |
+
226,
|
| 202 |
+
945,
|
| 203 |
+
769,
|
| 204 |
+
957
|
| 205 |
+
],
|
| 206 |
+
"page_idx": 0
|
| 207 |
+
},
|
| 208 |
+
{
|
| 209 |
+
"type": "footer",
|
| 210 |
+
"text": "Volume 1: Long Papers, pages 3325-3339",
|
| 211 |
+
"bbox": [
|
| 212 |
+
368,
|
| 213 |
+
958,
|
| 214 |
+
628,
|
| 215 |
+
971
|
| 216 |
+
],
|
| 217 |
+
"page_idx": 0
|
| 218 |
+
},
|
| 219 |
+
{
|
| 220 |
+
"type": "footer",
|
| 221 |
+
"text": "July 9-14, 2023 ©2023 Association for Computational Linguistics",
|
| 222 |
+
"bbox": [
|
| 223 |
+
295,
|
| 224 |
+
972,
|
| 225 |
+
700,
|
| 226 |
+
985
|
| 227 |
+
],
|
| 228 |
+
"page_idx": 0
|
| 229 |
+
},
|
| 230 |
+
{
|
| 231 |
+
"type": "image",
|
| 232 |
+
"img_path": "images/cf3b0196b3e4229b7fd27ee1731af770958a695039aec86302674bc679bf8d48.jpg",
|
| 233 |
+
"image_caption": [
|
| 234 |
+
"Figure 1: An overview of pro-drop languages by considering their typological patterns and language families. Example of ZP phenomenon in other languages (i.e. Korean, Hungarian and Hindi). Words in brackets are pronouns that are invisible in source language (implicit and explicit). The underlined words are corresponding antecedents. \"EN\" represents the human translation in English, which is a non-pro-drop language. \"OT\" is output translated by SOTA NMT systems with inappropriate translations."
|
| 235 |
+
],
|
| 236 |
+
"image_footnote": [],
|
| 237 |
+
"bbox": [
|
| 238 |
+
144,
|
| 239 |
+
97,
|
| 240 |
+
853,
|
| 241 |
+
218
|
| 242 |
+
],
|
| 243 |
+
"page_idx": 1
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"type": "table",
|
| 247 |
+
"img_path": "images/7e8526ee59b944ac35e65f01e1951b9ae71ab2a501e653f6de93426f783b1e3f.jpg",
|
| 248 |
+
"table_caption": [],
|
| 249 |
+
"table_footnote": [],
|
| 250 |
+
"table_body": "<table><tr><td>KO</td><td>A: े \nB: े</td></tr><tr><td>EN</td><td>A: Do you need this? B: (I) need (it).</td></tr><tr><td>OT</td><td>A: Do you need this? B: I need.</td></tr></table>",
|
| 251 |
+
"bbox": [
|
| 252 |
+
119,
|
| 253 |
+
229,
|
| 254 |
+
371,
|
| 255 |
+
319
|
| 256 |
+
],
|
| 257 |
+
"page_idx": 1
|
| 258 |
+
},
|
| 259 |
+
{
|
| 260 |
+
"type": "table",
|
| 261 |
+
"img_path": "images/44f30a67e2c0ad7304ac56ee88abceed478e31fa3a0840c930578af7bbde4b5e.jpg",
|
| 262 |
+
"table_caption": [],
|
| 263 |
+
"table_footnote": [],
|
| 264 |
+
"table_body": "<table><tr><td>HU</td><td>A: látátok a macskát? B: látjuk.</td></tr><tr><td>EN</td><td>A: Do (you) see the cat? B: (We) see (it).</td></tr><tr><td>OT</td><td>A: Do you see the cat? B: We see.</td></tr></table>",
|
| 265 |
+
"bbox": [
|
| 266 |
+
374,
|
| 267 |
+
229,
|
| 268 |
+
625,
|
| 269 |
+
319
|
| 270 |
+
],
|
| 271 |
+
"page_idx": 1
|
| 272 |
+
},
|
| 273 |
+
{
|
| 274 |
+
"type": "table",
|
| 275 |
+
"img_path": "images/347d635e268c8762359a51ca08d3860fd2491e7e66858081b308311457c2b983.jpg",
|
| 276 |
+
"table_caption": [],
|
| 277 |
+
"table_footnote": [],
|
| 278 |
+
"table_body": "<table><tr><td>HI</td><td>A: निकान्दी नागया को सिताना ? \nB: को सिताना .</td></tr><tr><td>EN</td><td>A: Did you give the food to Nadya? \nB: Yes, (Ⅰ) gave (her) (food).</td></tr><tr><td>OT</td><td>A: Did you eat Nadya? \nB: Yes given.</td></tr></table>",
|
| 279 |
+
"bbox": [
|
| 280 |
+
625,
|
| 281 |
+
229,
|
| 282 |
+
877,
|
| 283 |
+
319
|
| 284 |
+
],
|
| 285 |
+
"page_idx": 1
|
| 286 |
+
},
|
| 287 |
+
{
|
| 288 |
+
"type": "text",
|
| 289 |
+
"text": "are not sensitive enough to capture translation errors caused by ZPs.",
|
| 290 |
+
"bbox": [
|
| 291 |
+
127,
|
| 292 |
+
429,
|
| 293 |
+
485,
|
| 294 |
+
460
|
| 295 |
+
],
|
| 296 |
+
"page_idx": 1
|
| 297 |
+
},
|
| 298 |
+
{
|
| 299 |
+
"type": "text",
|
| 300 |
+
"text": "We believe that it is the right time to take stock of what has been achieved in ZPT, so that researchers can get a bigger picture of where this line of research stands. In this paper, we present a survey of the major works on datasets, approaches and evaluation metrics that have been undertaken in ZPT. We first introduce the background of linguistic phenomenon and literature selection in Section 2. Section 3 discusses the evolution of ZP-related tasks. Section 4 summarizes the annotated datasets, which are significant to pushing the studies move forward. Furthermore, we investigated advanced approaches for improving ZPT models in Section 5. In addition to this, Section 6 covers the evaluation methods that have been introduced to account for improvements in this field. We conclude by presenting avenues for future research in Section 7.",
|
| 301 |
+
"bbox": [
|
| 302 |
+
112,
|
| 303 |
+
463,
|
| 304 |
+
489,
|
| 305 |
+
751
|
| 306 |
+
],
|
| 307 |
+
"page_idx": 1
|
| 308 |
+
},
|
| 309 |
+
{
|
| 310 |
+
"type": "text",
|
| 311 |
+
"text": "2 Background",
|
| 312 |
+
"text_level": 1,
|
| 313 |
+
"bbox": [
|
| 314 |
+
112,
|
| 315 |
+
770,
|
| 316 |
+
257,
|
| 317 |
+
785
|
| 318 |
+
],
|
| 319 |
+
"page_idx": 1
|
| 320 |
+
},
|
| 321 |
+
{
|
| 322 |
+
"type": "text",
|
| 323 |
+
"text": "2.1 Linguistic Phenomenon",
|
| 324 |
+
"text_level": 1,
|
| 325 |
+
"bbox": [
|
| 326 |
+
112,
|
| 327 |
+
799,
|
| 328 |
+
346,
|
| 329 |
+
814
|
| 330 |
+
],
|
| 331 |
+
"page_idx": 1
|
| 332 |
+
},
|
| 333 |
+
{
|
| 334 |
+
"type": "text",
|
| 335 |
+
"text": "Definition of Zero Pronoun Cohesion is a significant property of discourse, and it occurs whenever \"the interpretation of some element in the discourse is dependent on that of another\" (Halliday and Hasan, 1976). As one of cohesive devices, anaphora is the use of an expression whose inter",
|
| 336 |
+
"bbox": [
|
| 337 |
+
112,
|
| 338 |
+
822,
|
| 339 |
+
490,
|
| 340 |
+
917
|
| 341 |
+
],
|
| 342 |
+
"page_idx": 1
|
| 343 |
+
},
|
| 344 |
+
{
|
| 345 |
+
"type": "text",
|
| 346 |
+
"text": "pretation depends specifically upon antecedent expression while zero anaphora is a more complex scenario in pro-drop languages. A ZP is a gap in a sentence, which refers to an entity that supplies the necessary information for interpreting the gap (Zhao and Ng, 2007). ZPs can be categorized into anaphoric and non-anaphoric ZP according to whether it refers to an antecedent or not. In pro-drop languages such as Chinese and Japanese, ZPs occur much more frequently compared to nonpro-drop languages such as English. The ZP phenomenon can be considered one of the most difficult problems in natural language processing (Peral and Ferrández, 2003).",
|
| 347 |
+
"bbox": [
|
| 348 |
+
507,
|
| 349 |
+
429,
|
| 350 |
+
884,
|
| 351 |
+
652
|
| 352 |
+
],
|
| 353 |
+
"page_idx": 1
|
| 354 |
+
},
|
| 355 |
+
{
|
| 356 |
+
"type": "text",
|
| 357 |
+
"text": "Extent of Zero Pronoun To investigate the extent of pronoun-dropping, we quantitatively analyzed ZPs in two corpora and details are shown in Appendix §A.2. We found that the frequencies and types of ZPs vary in different genres: (1) $26\\%$ of Chinese pronouns were dropped in the dialogue domain, while $7\\%$ were dropped in the newswire domain; (2) the most frequent ZP in newswire text is the third person singular它(“it”)(Baran et al., 2012), while that in SMS dialogues is the first person我(“I”)and我们(“we”)(Rao et al., 2015). This may lead to differences in model behavior and quality across domains. This high proportion within informal genres such as dialogues and conversation shows the importance of addressing the challenge of translation of ZPs.",
|
| 358 |
+
"bbox": [
|
| 359 |
+
507,
|
| 360 |
+
661,
|
| 361 |
+
884,
|
| 362 |
+
917
|
| 363 |
+
],
|
| 364 |
+
"page_idx": 1
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"type": "page_number",
|
| 368 |
+
"text": "3326",
|
| 369 |
+
"bbox": [
|
| 370 |
+
480,
|
| 371 |
+
927,
|
| 372 |
+
521,
|
| 373 |
+
940
|
| 374 |
+
],
|
| 375 |
+
"page_idx": 1
|
| 376 |
+
},
|
| 377 |
+
{
|
| 378 |
+
"type": "text",
|
| 379 |
+
"text": "2.2 Literature Selection",
|
| 380 |
+
"text_level": 1,
|
| 381 |
+
"bbox": [
|
| 382 |
+
112,
|
| 383 |
+
84,
|
| 384 |
+
317,
|
| 385 |
+
99
|
| 386 |
+
],
|
| 387 |
+
"page_idx": 2
|
| 388 |
+
},
|
| 389 |
+
{
|
| 390 |
+
"type": "text",
|
| 391 |
+
"text": "We used the following methodology to provide a comprehensive and unbiased overview of the current state of the art, while minimizing the risk of omitting key references:",
|
| 392 |
+
"bbox": [
|
| 393 |
+
112,
|
| 394 |
+
105,
|
| 395 |
+
489,
|
| 396 |
+
168
|
| 397 |
+
],
|
| 398 |
+
"page_idx": 2
|
| 399 |
+
},
|
| 400 |
+
{
|
| 401 |
+
"type": "list",
|
| 402 |
+
"sub_type": "text",
|
| 403 |
+
"list_items": [
|
| 404 |
+
"- Search Strategy: We conducted a systematic search in major databases (e.g. Google Scholar) to identify the relevant articles and resources. Our search terms included combinations of keywords, such as \"zero pronouns,\" \"zero pronoun translation,\" and \"coreference resolution.\"",
|
| 405 |
+
"- Selection Criteria: To maintain the focus and quality of our review, we established the following criteria. (1) Inclusion, where articles are published in journals, conferences and workshop proceedings. (2) Exclusion, where articles that are not available in English or do not provide sufficient details to assess the validity of their results.",
|
| 406 |
+
"- Screening and Selection: First, we screened the titles and abstracts based on our Selection Criteria. Then, we assessed the full texts of the remaining articles for eligibility. We also checked the reference lists of relevant articles to identify any additional sources that may have been missed during the initial search.",
|
| 407 |
+
"- Data Extraction and Synthesis: We extracted key information from the selected articles, such as dataset characteristics, and main findings. This data was synthesized and organized to provide a comprehensive analysis of the current state of the art in ZPT."
|
| 408 |
+
],
|
| 409 |
+
"bbox": [
|
| 410 |
+
112,
|
| 411 |
+
171,
|
| 412 |
+
489,
|
| 413 |
+
596
|
| 414 |
+
],
|
| 415 |
+
"page_idx": 2
|
| 416 |
+
},
|
| 417 |
+
{
|
| 418 |
+
"type": "text",
|
| 419 |
+
"text": "3 Evolution of Zero Pronoun Modelling",
|
| 420 |
+
"text_level": 1,
|
| 421 |
+
"bbox": [
|
| 422 |
+
112,
|
| 423 |
+
609,
|
| 424 |
+
473,
|
| 425 |
+
626
|
| 426 |
+
],
|
| 427 |
+
"page_idx": 2
|
| 428 |
+
},
|
| 429 |
+
{
|
| 430 |
+
"type": "text",
|
| 431 |
+
"text": "Considering the evolution of ZP modelling, we cannot avoid discussing other related tasks. Thus, we first review three typical ZP tasks and conclude their essential relations and future trends.",
|
| 432 |
+
"bbox": [
|
| 433 |
+
112,
|
| 434 |
+
634,
|
| 435 |
+
489,
|
| 436 |
+
699
|
| 437 |
+
],
|
| 438 |
+
"page_idx": 2
|
| 439 |
+
},
|
| 440 |
+
{
|
| 441 |
+
"type": "text",
|
| 442 |
+
"text": "3.1 Overview",
|
| 443 |
+
"text_level": 1,
|
| 444 |
+
"bbox": [
|
| 445 |
+
112,
|
| 446 |
+
711,
|
| 447 |
+
236,
|
| 448 |
+
725
|
| 449 |
+
],
|
| 450 |
+
"page_idx": 2
|
| 451 |
+
},
|
| 452 |
+
{
|
| 453 |
+
"type": "text",
|
| 454 |
+
"text": "ZP resolution is the earliest task to handle the understanding problem of ZP (Zhao and Ng, 2007). ZP recovery and translation aim to directly generate ZPs in monolingual and crosslingual scenarios, respectively (Yang and Xue, 2010; Chung and Gildea, 2010). This is illustrated in Figure 2.",
|
| 455 |
+
"bbox": [
|
| 456 |
+
112,
|
| 457 |
+
732,
|
| 458 |
+
489,
|
| 459 |
+
829
|
| 460 |
+
],
|
| 461 |
+
"page_idx": 2
|
| 462 |
+
},
|
| 463 |
+
{
|
| 464 |
+
"type": "text",
|
| 465 |
+
"text": "Zero Pronoun Resolution The task contains three steps: ZP detection, anaphoricity determination and reference linking. Earlier works investigated rich features using traditional ML models (Zhao and Ng, 2007; Kong and Zhou, 2010; Chen",
|
| 466 |
+
"bbox": [
|
| 467 |
+
112,
|
| 468 |
+
839,
|
| 469 |
+
489,
|
| 470 |
+
917
|
| 471 |
+
],
|
| 472 |
+
"page_idx": 2
|
| 473 |
+
},
|
| 474 |
+
{
|
| 475 |
+
"type": "text",
|
| 476 |
+
"text": "and Ng, 2013, 2015). Recent studies exploited neural models to achieve the better performance (Chen and Ng, 2016; Yin et al., 2018; Song et al., 2020). The CoNLL2011 and CoNLL2012 are commonly used benchmarks on modeling unrestricted coreference. The corpus contains 144K coreference instances, but dropped subjects only occupy $15\\%$ .",
|
| 477 |
+
"bbox": [
|
| 478 |
+
507,
|
| 479 |
+
84,
|
| 480 |
+
884,
|
| 481 |
+
197
|
| 482 |
+
],
|
| 483 |
+
"page_idx": 2
|
| 484 |
+
},
|
| 485 |
+
{
|
| 486 |
+
"type": "text",
|
| 487 |
+
"text": "Zero Pronoun Recovery Given a source sentence, this aims to insert omitted pronouns in proper positions without changing the original meaning (Yang and Xue, 2010; Yang et al., 2015, 2019a). It is different from ZP resolution, which identifies the antecedent of a referential pronoun (Mitkov, 2014). Previous studies regarded ZP recovery as a classification or sequence labelling problem, which only achieve $40\\sim 60\\%$ F1 scores on closed datasets (Zhang et al., 2019; Song et al., 2020), indicating the difficulty of generating ZPs. It is worth noting that ZP recovery models can work for ZPT task in a pipeline manner: input sentences are labeled with ZPs using an external recovery system and then fed into a standard MT model (Chung and Gildea, 2010; Wang et al., 2016a).",
|
| 488 |
+
"bbox": [
|
| 489 |
+
507,
|
| 490 |
+
206,
|
| 491 |
+
884,
|
| 492 |
+
464
|
| 493 |
+
],
|
| 494 |
+
"page_idx": 2
|
| 495 |
+
},
|
| 496 |
+
{
|
| 497 |
+
"type": "text",
|
| 498 |
+
"text": "Zero Pronoun Translation When pronouns are omitted in a source sentence, ZPT aims to generate ZPs in its target translation. Early studies have investigate a number of works for SMT models (Chung and Gildea, 2010; Le Nagard and Koehn, 2010; Taira et al., 2012; Xiang et al., 2013; Wang et al., 2016a). Recent years have seen a surge of interest in NMT (Yu et al., 2020; Wang et al., 2018a), since the problem still exists in advanced NMT systems. ZPT is also related to pronoun translation, which aims to correctly translate explicit pronoun in terms of feminine and masculine. The DiscoMT<sup>3</sup> is a commonly-cited benchmark on pronoun translation, however, there was no standard ZPT benchmarks up until now.",
|
| 499 |
+
"bbox": [
|
| 500 |
+
507,
|
| 501 |
+
475,
|
| 502 |
+
882,
|
| 503 |
+
715
|
| 504 |
+
],
|
| 505 |
+
"page_idx": 2
|
| 506 |
+
},
|
| 507 |
+
{
|
| 508 |
+
"type": "text",
|
| 509 |
+
"text": "3.2 Discussions and Findings",
|
| 510 |
+
"text_level": 1,
|
| 511 |
+
"bbox": [
|
| 512 |
+
507,
|
| 513 |
+
728,
|
| 514 |
+
754,
|
| 515 |
+
743
|
| 516 |
+
],
|
| 517 |
+
"page_idx": 2
|
| 518 |
+
},
|
| 519 |
+
{
|
| 520 |
+
"type": "text",
|
| 521 |
+
"text": "By comparing different ZP-aware tasks, we found three future trends:",
|
| 522 |
+
"bbox": [
|
| 523 |
+
507,
|
| 524 |
+
750,
|
| 525 |
+
880,
|
| 526 |
+
780
|
| 527 |
+
],
|
| 528 |
+
"page_idx": 2
|
| 529 |
+
},
|
| 530 |
+
{
|
| 531 |
+
"type": "text",
|
| 532 |
+
"text": "1. From Intermediate to End. In real-life systems, ZP resolution and recovery are intermediate tasks while ZPT can be directly reflected in system output. ZP resolution and recovery will be replaced by ZPT although they currently work with some MT systems in a pipeline way.",
|
| 533 |
+
"bbox": [
|
| 534 |
+
505,
|
| 535 |
+
785,
|
| 536 |
+
882,
|
| 537 |
+
882
|
| 538 |
+
],
|
| 539 |
+
"page_idx": 2
|
| 540 |
+
},
|
| 541 |
+
{
|
| 542 |
+
"type": "page_footnote",
|
| 543 |
+
"text": "$^{2}$ https://cemantix.org.",
|
| 544 |
+
"bbox": [
|
| 545 |
+
529,
|
| 546 |
+
890,
|
| 547 |
+
695,
|
| 548 |
+
904
|
| 549 |
+
],
|
| 550 |
+
"page_idx": 2
|
| 551 |
+
},
|
| 552 |
+
{
|
| 553 |
+
"type": "page_footnote",
|
| 554 |
+
"text": "<sup>3</sup>https://aclanthology.org/W15-2500.",
|
| 555 |
+
"bbox": [
|
| 556 |
+
529,
|
| 557 |
+
904,
|
| 558 |
+
794,
|
| 559 |
+
917
|
| 560 |
+
],
|
| 561 |
+
"page_idx": 2
|
| 562 |
+
},
|
| 563 |
+
{
|
| 564 |
+
"type": "page_number",
|
| 565 |
+
"text": "3327",
|
| 566 |
+
"bbox": [
|
| 567 |
+
480,
|
| 568 |
+
927,
|
| 569 |
+
519,
|
| 570 |
+
940
|
| 571 |
+
],
|
| 572 |
+
"page_idx": 2
|
| 573 |
+
},
|
| 574 |
+
{
|
| 575 |
+
"type": "image",
|
| 576 |
+
"img_path": "images/ac673922d89cf3627b299ace7d3b021facf23139aed6fea8d497cd4aa62d0eae.jpg",
|
| 577 |
+
"image_caption": [
|
| 578 |
+
"Figure 2: An overview of three ZP-aware tasks (taking Chinese-English for instance): ZP resolution, ZP recovery and ZP translation. As seen, the input is the same while the output varies according to different tasks."
|
| 579 |
+
],
|
| 580 |
+
"image_footnote": [],
|
| 581 |
+
"bbox": [
|
| 582 |
+
376,
|
| 583 |
+
82,
|
| 584 |
+
836,
|
| 585 |
+
262
|
| 586 |
+
],
|
| 587 |
+
"page_idx": 3
|
| 588 |
+
},
|
| 589 |
+
{
|
| 590 |
+
"type": "text",
|
| 591 |
+
"text": "2. From Separate To Unified. With the development of large language models (LLMs), it is unnecessary to keep a specific model for each task. For example, Song et al. (2020) leveraged a unified BERT-based architecture to model ZP resolution and recovery. Furthermore, we observed that $\\mathrm{ChatGPT}^4$ already possesses the capability for ZP resolution and recovery.",
|
| 592 |
+
"bbox": [
|
| 593 |
+
105,
|
| 594 |
+
326,
|
| 595 |
+
490,
|
| 596 |
+
456
|
| 597 |
+
],
|
| 598 |
+
"page_idx": 3
|
| 599 |
+
},
|
| 600 |
+
{
|
| 601 |
+
"type": "text",
|
| 602 |
+
"text": "4 Datasets",
|
| 603 |
+
"text_level": 1,
|
| 604 |
+
"bbox": [
|
| 605 |
+
112,
|
| 606 |
+
474,
|
| 607 |
+
223,
|
| 608 |
+
488
|
| 609 |
+
],
|
| 610 |
+
"page_idx": 3
|
| 611 |
+
},
|
| 612 |
+
{
|
| 613 |
+
"type": "text",
|
| 614 |
+
"text": "4.1 Overview",
|
| 615 |
+
"text_level": 1,
|
| 616 |
+
"bbox": [
|
| 617 |
+
112,
|
| 618 |
+
501,
|
| 619 |
+
235,
|
| 620 |
+
514
|
| 621 |
+
],
|
| 622 |
+
"page_idx": 3
|
| 623 |
+
},
|
| 624 |
+
{
|
| 625 |
+
"type": "text",
|
| 626 |
+
"text": "Modeling ZPs has so far not been extensively explored in prior research, largely due to the lack of publicly available data sets. Existing works mostly focused on human-annotated, small-scale and single-domain corpora such as OntoNotes (Pradhan et al., 2012; Aloraini and Poesio, 2020) and Treebanks (Yang and Xue, 2010; Chung and Gildea, 2010). We summarize representative corpora as:",
|
| 627 |
+
"bbox": [
|
| 628 |
+
112,
|
| 629 |
+
524,
|
| 630 |
+
489,
|
| 631 |
+
652
|
| 632 |
+
],
|
| 633 |
+
"page_idx": 3
|
| 634 |
+
},
|
| 635 |
+
{
|
| 636 |
+
"type": "list",
|
| 637 |
+
"sub_type": "text",
|
| 638 |
+
"list_items": [
|
| 639 |
+
"- OntoNotes. This is annotated with structural information (e.g. syntax and predicate argument structure) and shallow semantics (e.g. word sense linked to an ontology and coreference). It comprises various genres of text (news, conversational telephone speech, weblogs, usenet newsgroups, broadcast, talk shows) in English, Chinese, and Arabic languages. ZP sentences are extracted for ZP resolution task (Chen and Ng, 2013, 2016).",
|
| 640 |
+
"- TVSub. This extracts Chinese-English subtitles from television episodes. Its source-side sentences are automatically annotated with ZPs by a"
|
| 641 |
+
],
|
| 642 |
+
"bbox": [
|
| 643 |
+
112,
|
| 644 |
+
655,
|
| 645 |
+
489,
|
| 646 |
+
866
|
| 647 |
+
],
|
| 648 |
+
"page_idx": 3
|
| 649 |
+
},
|
| 650 |
+
{
|
| 651 |
+
"type": "text",
|
| 652 |
+
"text": "heuristic algorithm (Wang et al., 2016a), which was generally used to study dialogue translation and zero anaphora phenomenon (Wang et al., 2018a; Tan et al., 2021).",
|
| 653 |
+
"bbox": [
|
| 654 |
+
521,
|
| 655 |
+
326,
|
| 656 |
+
884,
|
| 657 |
+
391
|
| 658 |
+
],
|
| 659 |
+
"page_idx": 3
|
| 660 |
+
},
|
| 661 |
+
{
|
| 662 |
+
"type": "list",
|
| 663 |
+
"sub_type": "text",
|
| 664 |
+
"list_items": [
|
| 665 |
+
"- CTB. $^{7}$ This is a part-of-speech tagged and fully bracketed Chinese language corpus. The text are extracted from various domains including newswire, government documents, magazine articles, various broadcast news and broadcast conversation programs, web newsgroups and weblogs. Instances with empty category are extracted for ZP recovery task (Yang and Xue, 2010; Chung and Gildea, 2010).",
|
| 666 |
+
"- BaiduKnows. The source-side sentences are collected from the Baidu Knows website, which were annotated with ZP labels with boundary tags. It is widely-used the task of ZP recovery (Zhang et al., 2019; Song et al., 2020)."
|
| 667 |
+
],
|
| 668 |
+
"bbox": [
|
| 669 |
+
509,
|
| 670 |
+
394,
|
| 671 |
+
882,
|
| 672 |
+
621
|
| 673 |
+
],
|
| 674 |
+
"page_idx": 3
|
| 675 |
+
},
|
| 676 |
+
{
|
| 677 |
+
"type": "text",
|
| 678 |
+
"text": "4.2 Discussions and Findings",
|
| 679 |
+
"text_level": 1,
|
| 680 |
+
"bbox": [
|
| 681 |
+
507,
|
| 682 |
+
634,
|
| 683 |
+
754,
|
| 684 |
+
649
|
| 685 |
+
],
|
| 686 |
+
"page_idx": 3
|
| 687 |
+
},
|
| 688 |
+
{
|
| 689 |
+
"type": "text",
|
| 690 |
+
"text": "Table 1 lists statistics of existing ZP datasets and we found the limitations and trends:",
|
| 691 |
+
"bbox": [
|
| 692 |
+
505,
|
| 693 |
+
655,
|
| 694 |
+
880,
|
| 695 |
+
686
|
| 696 |
+
],
|
| 697 |
+
"page_idx": 3
|
| 698 |
+
},
|
| 699 |
+
{
|
| 700 |
+
"type": "text",
|
| 701 |
+
"text": "1. Language Bias. Most works used Chinese and Japanese datasets as testbed for training ZP models (Song et al., 2020; Ri et al., 2021). However, there were limited data available for other prodrop languages (e.g. Portuguese and Spanish), resulting that linguists mainly used them for corpus analysis (Pereira, 2009; Russo et al., 2012). However, ZP phenomenon may vary across languages in terms of word form, occurrence frequency and category distribution, leading to learning bias on linguistic knowledge. Thus, it is necessary to establish ZP datasets for various languages (Prasad,",
|
| 702 |
+
"bbox": [
|
| 703 |
+
504,
|
| 704 |
+
689,
|
| 705 |
+
882,
|
| 706 |
+
882
|
| 707 |
+
],
|
| 708 |
+
"page_idx": 3
|
| 709 |
+
},
|
| 710 |
+
{
|
| 711 |
+
"type": "page_footnote",
|
| 712 |
+
"text": "<sup>7</sup>https://catalog.ldc.upenn.edu/LDC2013T21.",
|
| 713 |
+
"bbox": [
|
| 714 |
+
529,
|
| 715 |
+
890,
|
| 716 |
+
845,
|
| 717 |
+
904
|
| 718 |
+
],
|
| 719 |
+
"page_idx": 3
|
| 720 |
+
},
|
| 721 |
+
{
|
| 722 |
+
"type": "page_footnote",
|
| 723 |
+
"text": "4https://openai.com/blog/chatgpt.",
|
| 724 |
+
"bbox": [
|
| 725 |
+
134,
|
| 726 |
+
878,
|
| 727 |
+
383,
|
| 728 |
+
891
|
| 729 |
+
],
|
| 730 |
+
"page_idx": 3
|
| 731 |
+
},
|
| 732 |
+
{
|
| 733 |
+
"type": "page_footnote",
|
| 734 |
+
"text": "$^{5}$ https://catalog.ldc.upenn.edu/LDC2013T19.",
|
| 735 |
+
"bbox": [
|
| 736 |
+
136,
|
| 737 |
+
891,
|
| 738 |
+
450,
|
| 739 |
+
904
|
| 740 |
+
],
|
| 741 |
+
"page_idx": 3
|
| 742 |
+
},
|
| 743 |
+
{
|
| 744 |
+
"type": "page_footnote",
|
| 745 |
+
"text": "$^{6}$ https://github.com/longyuewangdcu/tvsub.",
|
| 746 |
+
"bbox": [
|
| 747 |
+
136,
|
| 748 |
+
904,
|
| 749 |
+
443,
|
| 750 |
+
917
|
| 751 |
+
],
|
| 752 |
+
"page_idx": 3
|
| 753 |
+
},
|
| 754 |
+
{
|
| 755 |
+
"type": "page_number",
|
| 756 |
+
"text": "3328",
|
| 757 |
+
"bbox": [
|
| 758 |
+
480,
|
| 759 |
+
927,
|
| 760 |
+
519,
|
| 761 |
+
940
|
| 762 |
+
],
|
| 763 |
+
"page_idx": 3
|
| 764 |
+
},
|
| 765 |
+
{
|
| 766 |
+
"type": "table",
|
| 767 |
+
"img_path": "images/dee1e6f378913a766fe88b1abbede2e71272e31805939455c06eda6fa56d822b.jpg",
|
| 768 |
+
"table_caption": [],
|
| 769 |
+
"table_footnote": [],
|
| 770 |
+
"table_body": "<table><tr><td rowspan=\"2\">Dataset</td><td rowspan=\"2\">Lang.</td><td rowspan=\"2\">Anno.</td><td rowspan=\"2\">Domain</td><td rowspan=\"2\">Size</td><td colspan=\"3\">Task</td></tr><tr><td>Reso.</td><td>Reco.</td><td>Trans.</td></tr><tr><td>OntoNotes (Pradhan et al., 2012)</td><td>ZH</td><td>Human</td><td>Mixed Sources</td><td>42.6K</td><td>✓</td><td>✗</td><td>✗</td></tr><tr><td>OntoNotes (Aloraini and Poesio, 2020)</td><td>AR</td><td>Human</td><td>News</td><td>9.4K</td><td>✓</td><td>✗</td><td>✗</td></tr><tr><td>CTB (Yang and Xue, 2010)</td><td>ZH</td><td>Human</td><td>News</td><td>10.6K</td><td>✗</td><td>✓</td><td>✗</td></tr><tr><td>KTB (Chung and Gildea, 2010)</td><td>KO</td><td>Human</td><td>News</td><td>5.0K</td><td>✗</td><td>✓</td><td>✗</td></tr><tr><td>BaiduKnows (Zhang et al., 2019)</td><td>ZH</td><td>Human</td><td>Baidu Knows</td><td>5.0K</td><td>✗</td><td>✓</td><td>✗</td></tr><tr><td>TVsub (Wang et al., 2018a)</td><td>ZH, EN</td><td>Auto</td><td>Movie Subtitles</td><td>2.2M</td><td>✗</td><td>✗</td><td>✓</td></tr><tr><td>ZAC (Pereira, 2009)</td><td>PT</td><td>Human</td><td>Mixed Sources</td><td>0.6K</td><td>✓</td><td>✗</td><td>✗</td></tr><tr><td>Nagoya (Zhan and Nakaiwa, 2015)</td><td>JA</td><td>Auto</td><td>Scientific Paper</td><td>1.2K</td><td>✓</td><td>✗</td><td>✗</td></tr><tr><td>SKku (Park et al., 2015)</td><td>KO</td><td>Human</td><td>Dialogue</td><td>1.1K</td><td>✓</td><td>✗</td><td>✗</td></tr><tr><td>UPENN (Prasad, 2000)</td><td>HI</td><td>Human</td><td>News</td><td>2.2K</td><td>✓</td><td>✗</td><td>✗</td></tr><tr><td>LATL (Russo et al., 2012)</td><td>IT, ES</td><td>Human</td><td>Europarl</td><td>2.0K</td><td>✓</td><td>✗</td><td>✓</td></tr><tr><td>UCFV (Bacolini, 2017)</td><td>HE</td><td>Human</td><td>Dialogue</td><td>0.1K</td><td>✓</td><td>✗</td><td>✗</td></tr></table>",
"bbox": [
119,
80,
884,
315
],
"page_idx": 4
},
{
"type": "text",
"text": "Table 1: A summary of existing datasets regarding ZP. We classify them according to language (Lang.), annotation type (Anno.) and text domain. We also report the number of sentences (Size). \"Reso.\", \"Reco.\" and \"Trans.\" indicate whether a dataset can be used for specific ZP tasks. The symbol $\\checkmark$ or $X$ means \"Yes\" or \"No\".",
"bbox": [
112,
323,
882,
367
],
"page_idx": 4
},
{
"type": "text",
"text": "2000; Bacolini, 2017).",
"bbox": [
127,
392,
299,
407
],
"page_idx": 4
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"2. Domain Bias. Most corpora were established in one single domain (e.g. news), which may not contain rich ZP phenomena. Because the frequencies and types of ZPs vary in different genres (Yang et al., 2015). Future works need more multi-domain datasets to better model behavior and quality for real-life use.",
"3. Become An Independent Research Problem. Early works extracted ZP information from closed annotations (e.g. OntoNotes and Treebanks) (Yang and Xue, 2010; Chung and Gildea, 2010), which were considered as a sub-problem of coreference or syntactic parsing. With further investigation on the problem, MT community paid more attention to it by manually or automatically constructing ZP recovery and translation datasets (e.g. BaiduKnows and TVsub) (Wang et al., 2018a; Zhang et al., 2019).",
"4. Coping with Data Scarcity. The scarcity of ZPT data remains a core issue (currently only $2.2\\mathrm{M} \\sim 0.1\\mathrm{K}$ sentences) due to two challenges: (1) it requires experts for both source ZP annotation and target translation (Wang et al., 2016c, 2018a); (2) annotating the training data manually spends much time and money. Nonetheless, it is still necessary to establish testing datasets for validating/analyzing the model performance. Besides, pre-trained modes are already equipped with some capabilities on discourse (Chen et al., 2019; Koto et al., 2021). This highlights the importance of formulating the downstream task in"
],
"bbox": [
105,
410,
490,
915
],
"page_idx": 4
},
{
"type": "text",
"text": "a manner that can effectively leverage the capabilities of the pre-trained models.",
"bbox": [
522,
392,
882,
423
],
"page_idx": 4
},
{
"type": "text",
"text": "5 Approaches",
"text_level": 1,
"bbox": [
507,
437,
648,
454
],
"page_idx": 4
},
{
"type": "text",
"text": "5.1 Overview",
"text_level": 1,
"bbox": [
507,
464,
631,
478
],
"page_idx": 4
},
{
"type": "text",
"text": "Early researchers have investigated several approaches for conventional statistical machine translation (SMT) (Le Nagard and Koehn, 2010; Xiang et al., 2013; Wang et al., 2016a). Modeling ZPs for advanced NMT models, however, has received more attention, resulting in better performance in this field (Wang et al., 2018a; Tan et al., 2021; Hwang et al., 2021). Generally prior works fall into three categories: (1) Pipeline, where input sentences are labeled with ZPs using an external ZP recovery system and then fed into a standard MT model (Chung and Gildea, 2010; Wang et al., 2016a); (2) Implicit, where ZP phenomenon is implicitly resolved by modelling document-level contexts (Yu et al., 2020; Ri et al., 2021); (3) End-to-End, where ZP prediction and translation are jointly learned in an end-to-end manner (Wang et al., 2019; Tan et al., 2021).",
"bbox": [
505,
485,
884,
775
],
"page_idx": 4
},
{
"type": "text",
"text": "Pipeline The pipeline method of ZPT borrows from that in pronoun translation (Le Nagard and Koehn, 2010; Pradhan et al., 2012) due to the strong relevance between the two tasks. Chung and Gildea (2010) systematically examine the effects of empty category $(\\mathrm{EC})^9$ on SMT with pattern-,",
"bbox": [
507,
785,
882,
883
],
"page_idx": 4
},
{
"type": "page_footnote",
"text": "In linguistics, it is an element in syntax that does not have any phonological content and is therefore unpronounced.",
"bbox": [
507,
892,
882,
917
],
"page_idx": 4
},
{
"type": "page_number",
"text": "3329",
"bbox": [
480,
927,
519,
940
],
"page_idx": 4
},
{
"type": "text",
"text": "CRF- and parsing-based methods. The results show that this can really improve the translation quality, even though the automatic prediction of EC is not highly accurate. Besides, Wang et al. (2016a,b, 2017b) proposed to integrate neural-based ZP recovery with SMT systems, showing better performance on both ZP recovery and overall translation. When entering the era of NMT, ZP recovery is also employed as an external system. Assuming that no-pro-drop languages can benefit pro-drop ones, Ohtani et al. (2019) tagged the coreference information in the source language, and then encoded it using a graph-based encoder integrated with NMT model. Tan et al. (2019) recovered ZP in the source sentence via a BiLSTM-CRF model (Lample et al., 2016). Different from the conventional ZP recovery methods, the label is the corresponding translation of ZP around with special tokens. They then trained a NMT model on this modified data, letting the model learn the copy behaviors. Tan et al. (2021) used ZP detector to predict the ZP position and inserted a special token. Second, they used a attention-based ZP recovery model to recover the ZP word on the corresponding ZP position.",
"bbox": [
115,
84,
489,
469
],
"page_idx": 5
},
{
"type": "text",
"text": "End-to-End Due the lack of training data on ZPT, a couple of studies pay attention to data augmentation. Sugiyama and Yoshinaga (2019) employed the back-translation on a context-aware NMT model to augment the training data. With the help of context, the pronoun in no-pronoun-drop language can be translated correctly into pronoun-drop language. They also build a contrastive dataset to filter the pseudo data. Besides, Kimura et al. (2019) investigated the selective standards in detail to filter the pseudo data. Ri et al. (2021) deleted the personal pronoun in the sentence to augment the training data. And they trained a classifier to keep the sentences that pronouns can be recovered without any context.",
"bbox": [
115,
483,
485,
722
],
"page_idx": 5
},
{
"type": "text",
"text": "About model architecture, Wang et al. (2018a) first proposed a reconstruction-based approach to reconstruct the ZP-annotated source sentence from the hidden states of either encoder or decoder, or both. The central idea behind is to guide the corresponding hidden states to embed the recalled source-side ZP information and subsequently to help the NMT model generate the missing pronouns with these enhanced hidden representations. Although this model achieved significant improvements, there nonetheless exist two drawbacks: 1) there is no interaction between the two separate",
"bbox": [
115,
726,
485,
917
],
"page_idx": 5
},
{
"type": "text",
"text": "reconstructors, which misses the opportunity to exploit useful relations between encoder and decoder representations; and 2) testing phase needs an external ZP prediction model and it only has an accuracy of $66\\%$ in F1-score, which propagates numerous errors to the translation model. Thus, Wang et al. (2018b) further proposed to improve the reconstruction-based model by using shared reconstructor and joint learning. Furthermore, relying on external ZP models in decoding makes these approaches unwieldy in practice, due to introducing more computation cost and complexity.",
"bbox": [
512,
84,
882,
275
],
"page_idx": 5
},
{
"type": "text",
"text": "About learning objective, contrastive learning is often used to let the output more close to golden data while far away from negative samples. Yang et al. (2019b) proposed a contrastive learning to reduce the word omitted error. To construct the negative samples, they randomly dropped the word by considering its frequency or part-of-speech tag. Hwang et al. (2021) further considered the coreference information to construct the negative sample. According to the coreference information, they took place the antecedent in context with empty, mask or random token to get the negative samples. Besides, Jwalapuram et al. (2020) served the pronoun mistranslated output as the negative samples while golden sentences as positive sample. To get the negative samples, they aligned the word between model outputs and golden references to get the sentences with mistranslated pronoun.",
"bbox": [
512,
279,
882,
567
],
"page_idx": 5
},
{
"type": "text",
"text": "Implicit Some works consider not just the ZPT issue but rather focus on the overall discourse problem. The document-level NMT models (Wang et al., 2017a; Werlen et al., 2018; Ma et al., 2020; Lopes et al., 2020) are expected to have strong capabilities in discourse modelling such as translation consistency and ZPT. Another method is the round-trip translation, which is commonly-used in automatic post-editing (APE) (Freitag et al., 2019), quality estimation (QE) (Moon et al., 2020) to correct of detect the translation errors. Voita et al. (2019) served this idea on context-aware NMT to correct the discourse error in the output. They employed the round-trip translation on monolingual data to get the parallel corpus in the target language. They then used the corpus to train a model to repair discourse phenomenon in MT output. Wang et al. (2019) proposed a fully unified ZPT model, which absolutely released the reliance on external ZP models at decoding time. Besides, they exploited to jointly learn inter-sentential con",
"bbox": [
512,
581,
882,
917
],
"page_idx": 5
},
{
"type": "page_number",
"text": "3330",
"bbox": [
482,
928,
519,
940
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/0de04b95541aad5c64dc76fbbf526de8548af1d0ce69cd8f5dee1af7a39dd6bd.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"2\">TVsub</td><td colspan=\"2\">BaiduKnows</td><td colspan=\"2\">Webnovel</td></tr><tr><td>BLEU</td><td>APT</td><td>BLEU</td><td>APT</td><td>BLEU</td><td>APT</td></tr><tr><td>Baseline (Vaswani et al., 2017)</td><td>29.4</td><td>47.4</td><td>12.7</td><td>25.4</td><td>11.7</td><td>30.9</td></tr><tr><td>Pipeline (Song et al., 2020)</td><td>29.8</td><td>49.5</td><td>13.2</td><td>56.4</td><td>11.6</td><td>32.0</td></tr><tr><td>Implicit (Ma et al., 2020)</td><td>29.8</td><td>53.5</td><td>13.9</td><td>26.3</td><td>12.2</td><td>35.3</td></tr><tr><td>End-to-End (Wang et al., 2018a)</td><td>30.0</td><td>52.3</td><td>12.3</td><td>30.4</td><td>12.0</td><td>33.4</td></tr><tr><td>ORACLE</td><td>32.8</td><td>86.9</td><td>14.7</td><td>88.8</td><td>12.8</td><td>85.1</td></tr></table>",
"bbox": [
201,
80,
803,
225
],
"page_idx": 6
},
{
"type": "text",
"text": "Table 2: A comparison of representative ZPT methods with different benchmarks. The ZPT methods are detailed in Section 5.1. The Baseline is a standard Transformer-big model while ORACLE is manually recovering ZPs in input sentences and then feeding them into the Baseline (Wu et al., 2020). As detailed in Section 4.1, TVSub (both translation and ZP training data) and BaiduKnows (ZP training data) are widely-used benchmarks in movie subtitle and Q&A forum domains, respectively. The Webnovel is our in-house testing data (no training data) in web fiction domain. As detailed in Section 6.1, BLEU is a general-purpose evaluation metric while APT is a ZP-targeted one.",
"bbox": [
112,
234,
882,
322
],
"page_idx": 6
},
{
"type": "text",
"text": "text (Sordoni et al., 2015) to further improve ZP prediction and translation.",
"bbox": [
112,
344,
487,
378
],
"page_idx": 6
},
{
"type": "text",
"text": "5.2 Discussions and Findings",
"text_level": 1,
"bbox": [
112,
395,
357,
412
],
"page_idx": 6
},
{
"type": "text",
"text": "Table 1 shows that only the TVsub is suitable for both training and testing in ZPT task, while others like LATL is too small and only suitable for testing. To facilitate fair and comprehensive comparisons of different models across different benchmarkss, we expanded the BaiduKnows by adding human translations and included in-house dataset<sup>10</sup>. As shown in Table 2, we re-implemented three representative ZPT methods and conducted experiments on three benchmarks, which are diverse in terms of domain, size, annotation type, and task. As the training data in three benchmarks decrease, the difficulty of modelling ZPT gradually increases.",
"bbox": [
112,
420,
489,
632
],
"page_idx": 6
},
{
"type": "text",
"text": "1. Existing Methods Can Help ZPT But Not Enough. Three ZPT models can improve ZP translation in most cases, although there are still considerable differences among different domain of benchmarks (BLEU and APT $\\uparrow$ ). Introducing ZPT methods has little impact on BLEU score $(-0.4\\sim +0.6$ point on average), however, they can improve APT over baseline by $+1.1\\sim +30.1$ . When integrating golden ZP labels into baseline models (ORACLE), their BLEU and APT scores largely increased by $+3.4$ and $+63.4$ points, respectively. The performance gap between Oracle and others shows that there is still a large space for further improvement for ZPT.",
"bbox": [
107,
637,
489,
864
],
"page_idx": 6
},
{
"type": "text",
"text": "2. Pipeline Methods Are Easier to Integrate with NMT. This is currently a simple way to enhance ZPT ability in real-life systems. As shown in Table 3, we analyzed the outputs of pipeline method and identify challenges from three perspectives: (1) out-of-domain, where it lacks in-domain data for training robust ZP recovery models. The distribution of ZP types is quite different between ZP recovery training data (out-of-domain) and ZPT testset (in-domain). This leads to that the ZP recovery model often predicts wrong ZP forms (possessive adjective vs. subject). (2) error propagation, where the external ZP recovery model may provide incorrect ZP words to the followed NMT model. As seen, $\\mathrm{ZPR} +$ performs worse than a plain NMT model NMT due to wrong pronouns predicted by the ZPR model (你们 vs.我). (3) multiple ZPs, where there is a $10\\%$ percentage of sentences that contain more than two ZPs, resulting in more challenges to accurately and simultaneously predict them. As seen, two ZPs are incorrectly predicted into \"我\" instead of \"他\".",
"bbox": [
502,
344,
885,
697
],
"page_idx": 6
},
{
"type": "text",
"text": "3. Data-Level Methods Do Not Change Model Architecture. This is more friendly to NMT. Some researchers targeted making better usage of the limited training data (Tan et al., 2019; Ohtani et al., 2019; Tan et al., 2021). They trained an external model on the ZP data to recover the ZP information in the input sequence of the MT model (Tan et al., 2019; Ohtani et al., 2019; Tan et al., 2021) or correct the errors in the translation outputs (Voita et al., 2019). Others aimed to up-sample the training data for the ZPT task (Sugiyama and Yoshinaga, 2019; Kimura et al., 2019; Ri et al., 2021). They preferred to",
"bbox": [
502,
702,
884,
910
],
"page_idx": 6
},
{
"type": "page_footnote",
"text": "10 The Webnovel testing dataset contains 1,658 Chinese-English sentence pairs in 24 documents, with the target side translated by professional human translators.",
"bbox": [
112,
879,
489,
917
],
"page_idx": 6
},
{
"type": "page_number",
"text": "3331",
"bbox": [
480,
927,
517,
940
],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/19dee627b811da50d8967f806fae776b552806d50532141344674bf27f1a421e.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"4\">1. Out-of-Domain</td><td>INP.</td><td>[他的]p主要研究领域为...</td></tr><tr><td>NMT</td><td>The main research areas are ...</td></tr><tr><td>ZPR</td><td>我主要研究领域为...</td></tr><tr><td>ZPR+</td><td>My main research areas are ...</td></tr><tr><td rowspan=\"4\">2. Error Propagation</td><td>INP.</td><td>如果[你们]s见到她...</td></tr><tr><td>NMT</td><td>If you see her ...</td></tr><tr><td>ZPR</td><td>如果我见到她...</td></tr><tr><td>ZPR+</td><td>If I see her ...</td></tr><tr><td rowspan=\"4\">3. Multiple ZPs</td><td>INP.</td><td>[他]s好久没... [他]s怪想念的。</td></tr><tr><td>NMT</td><td>for a long time did not ... strange miss.</td></tr><tr><td>ZPR</td><td>我好久没...我怪想念的。</td></tr><tr><td>ZPR+</td><td>I haven't ... for a long time, I miss.</td></tr></table>",
"bbox": [
119,
80,
489,
294
],
"page_idx": 7
},
{
"type": "text",
"text": "improve the ZPT performance via a data augmentation without modifying the MT architecture (Wang et al., 2016a; Sugiyama and Yoshinaga, 2019). Kimura et al. (2019); Ri et al. (2021) verified that the performance can be further improved by denoising the pseudo data.",
"bbox": [
126,
400,
489,
495
],
"page_idx": 7
},
{
"type": "text",
"text": "4. Multitask and Multi-Lingual Learning. ZPT is a hard task to be done alone, researchers are investigating how to leverage other related NLP tasks to improve ZPT by training models to perform multiple tasks simultaneously (Wang et al., 2018a). Since ZPT is a cross-lingual problem, researchers are exploring techniques for training models that can work across multiple languages, rather than being limited to a single language (Aloraini and Poesio, 2020).",
"bbox": [
107,
499,
489,
659
],
"page_idx": 7
},
{
"type": "text",
"text": "6 Evaluation Methods",
"text_level": 1,
"bbox": [
112,
674,
324,
690
],
"page_idx": 7
},
{
"type": "text",
"text": "6.1 Overview",
"text_level": 1,
"bbox": [
112,
699,
236,
714
],
"page_idx": 7
},
{
"type": "text",
"text": "There are three kinds of automatic metrics to evaluate performances of related models:",
"bbox": [
112,
720,
487,
751
],
"page_idx": 7
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- Accuracy of ZP Recovery: this aims to measure model performance on detecting and predicting ZPs of sentences in one pro-drop language. For instance, the micro F1-score is used to evaluating Chinese ZPR systems Song et al. (2020).<sup>11</sup>",
"- General Translation Quality: there are a number of automatic evaluation metrics for measuring general performance of MT systems (Snover"
],
"bbox": [
112,
753,
487,
885
],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/2905a63b1b4bbac37d508a71f4d7e4a715b19aba0bd6fc0c854ee54dd4942964.jpg",
"table_caption": [
"Table 3: Errors in pipeline-based ZPT and NMT models. INP. represents the Chinese input and NMT indicates a sentence-level NMT model. ZPR denotes the ZP-annotated output predicted by ZP recovery models. Red words are ZPs that are invisible in decoding."
],
"table_footnote": [],
"table_body": "<table><tr><td>Metric</td><td>T.S.</td><td>B.K.</td><td>I.H.</td><td>Ave.</td></tr><tr><td>BLEU</td><td>0.09</td><td>0.76</td><td>0.57</td><td>0.47</td></tr><tr><td>TER</td><td>0.41</td><td>0.01</td><td>0.26</td><td>0.23</td></tr><tr><td>METEOR</td><td>0.23</td><td>0.74</td><td>0.28</td><td>0.42</td></tr><tr><td>COMET</td><td>0.59</td><td>0.15</td><td>0.37</td><td>0.37</td></tr><tr><td>APT</td><td>0.68</td><td>0.76</td><td>0.58</td><td>0.67</td></tr></table>",
"bbox": [
549,
80,
847,
196
],
"page_idx": 7
},
{
"type": "text",
"text": "Table 4: Correlation between the manual evaluation and other automatic metrics, applied on the same ZPT benchmarks as in Table 2.",
"bbox": [
507,
206,
882,
248
],
"page_idx": 7
},
{
"type": "text",
"text": "et al., 2006). BLEU (Papineni et al., 2002) is the most widely-used one, which measures the precision of n-grams of the MT output compared to the reference, weighted by a brevity penalty to punish overly short translations. METEOR (Banerjee and Lavie, 2005) incorporates semantic information by calculating either exact match, stem match, or synonymy match. Furthermore, COMET (Rei et al., 2020) is a neural framework for training multilingual MT evaluation models which obtains new SOTA levels of correlation with human judgements.",
"bbox": [
521,
278,
884,
470
],
"page_idx": 7
},
{
"type": "text",
"text": "- Pronoun-Aware Translation Quality: Previous works usually evaluate ZPT using the BLEU metric (Wang et al., 2016a, 2018a; Yu et al., 2020; Ri et al., 2021), however, general-purpose metrics cannot characterize the performance of ZP translation. As shown in Table 3, the missed or incorrect pronouns may not affect BLEU scores but severely harm true performances. To fix this gap, some works proposed pronoun-targeted evaluation metrics (Werlen and Popescu-Belis, 2017; Läubli et al., 2018).",
"bbox": [
507,
474,
882,
650
],
"page_idx": 7
},
{
"type": "text",
"text": "6.2 Discussions and Findings",
"text_level": 1,
"bbox": [
507,
669,
754,
686
],
"page_idx": 7
},
{
"type": "text",
"text": "As shown in Table 4, we compare different evaluation metrics on ZPT systems. About general-purpose metrics, we employed BLEU, TER, METEOR and COMET. About ZP-targeted metrics, we implemented and adapted APT (Werlen and Popescu-Belis, 2017) to evaluate ZPs, and experimented on three Chinese-English benchmarks (same as Section 5.2). For human evaluation, we randomly select a hundred groups of samples from each dataset, each group contains an oracle source sentence and the hypotheses from six examined MT systems. We asked expert raters to score all of these samples in 1 to 5 scores to reflect the cohesion quality of translations (detailed in Appendix",
"bbox": [
505,
694,
884,
919
],
"page_idx": 7
},
{
"type": "page_footnote",
"text": "11https://github.com/freesunshine0316/lab-zp-joint.",
"bbox": [
112,
891,
416,
917
],
"page_idx": 7
},
{
"type": "page_number",
"text": "3332",
"bbox": [
480,
927,
519,
940
],
"page_idx": 7
},
{
"type": "text",
"text": "$\\S \\mathrm{A.4})$ . The professional annotators are bilingual professionals with expertise in both Chinese and English. They have a deep understanding of the ZP problem and have been specifically trained to identify and annotate ZPs accurately. Our main findings are:",
"bbox": [
112,
84,
487,
180
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"1. General-Purpose Evaluation Are Not Applicable to ZPT. As seen, APT reaches around 0.67 Pearson scores with human judges, while general-purpose metrics reach $0.47 \\sim 23$ . The APT shows a high correlation with human judges on three benchmarks, indicating that (1) general-purpose metrics are not specifically designed to measure performance on ZPT; (2) researchers need to develop more targeted evaluation metrics that are better suited to this task.",
"2. Human Evaluations Are Required as A Complement. Even when we use targeted evaluation, some nuances and complexities remain unrecognized by automatic methods. Thus, we call upon the research community to employ human evaluation according to WMT (Kocmi et al., 2022), especially in the chat and literary shared tasks (Farinha et al., 2022; Wang et al., 2023c).",
"3. The Risk of Gender Bias. The gender bias refers to the tendency of MT systems to produce output that reflects societal stereotypes or biases related to gender (Vanmassenhove et al., 2019). We found gender errors in ZPT outputs, when models make errors in identifying the antecedent of a ZP. This can be caused by the biases present in the training data, as well as the limitations in the models and the evaluation metrics. Therefore, researchers need to pay more attention to mitigate these biases, such as using diverse data sets and debiasing techniques, to improve the accuracy and fairness of ZPT methods."
],
"bbox": [
105,
183,
489,
687
],
"page_idx": 8
},
{
"type": "text",
"text": "7 Conclusion and Future Work",
"text_level": 1,
"bbox": [
112,
701,
401,
715
],
"page_idx": 8
},
{
"type": "text",
"text": "ZPT is a challenging and interesting task, which needs abilities of models on discourse-aware understanding and generation. Figure 3 best illustrates the increase in scientific publications related to ZP over the past few years. This paper is a literature review of existing research on zero pronoun translation, providing insights into the challenges and opportunities of this area and proposing potential directions for future research.",
"bbox": [
112,
726,
487,
869
],
"page_idx": 8
},
{
"type": "text",
"text": "As we look to the future, we intend to delve deeper into the challenges of ZPT. Our plan is to leverage large language models, which have shown",
"bbox": [
112,
871,
487,
917
],
"page_idx": 8
},
{
"type": "image",
"img_path": "images/c7083d4e35ba64335cd290879ab2276b5803c918305f8c0205d602828d34188e.jpg",
"image_caption": [
"Figure 3: Number of papers mentioning \"zero pronoun\" per year according to Google Scholar."
],
"image_footnote": [],
"bbox": [
521,
82,
865,
249
],
"page_idx": 8
},
{
"type": "text",
"text": "great potential in dealing with complex tasks, to tackle this particular challenge (Lu et al., 2023; Wang et al., 2023b; Lyu et al., 2023). Moreover, we plan to evaluate our approach on more discourse-aware tasks. Specifically, we aim to utilize the GuoFeng Benchmark (Wang et al., 2022, 2023a), which presents a comprehensive testing ground for evaluating the performance of models on a variety of discourse-level translation tasks. By doing so, we hope to gain more insights into the strengths and weaknesses of our approach, and continually refine it to achieve better performance.",
|
| 1346 |
+
"bbox": [
|
| 1347 |
+
507,
|
| 1348 |
+
316,
|
| 1349 |
+
884,
|
| 1350 |
+
508
|
| 1351 |
+
],
|
| 1352 |
+
"page_idx": 8
|
| 1353 |
+
},
|
| 1354 |
+
{
|
| 1355 |
+
"type": "text",
|
| 1356 |
+
"text": "Acknowledgement",
|
| 1357 |
+
"text_level": 1,
|
| 1358 |
+
"bbox": [
|
| 1359 |
+
509,
|
| 1360 |
+
524,
|
| 1361 |
+
672,
|
| 1362 |
+
539
|
| 1363 |
+
],
|
| 1364 |
+
"page_idx": 8
|
| 1365 |
+
},
|
| 1366 |
+
{
|
| 1367 |
+
"type": "text",
|
| 1368 |
+
"text": "The authors express their sincere gratitude to all reviewers whose keen interest and insightful feedback have significantly improved the quality of this paper. Their affirmation and encouragement have further solidified our commitment to the path of computational linguistics. This work is part of the GuoFeng AI (guofeng-ai@googlegroups.com) and TranSmart (Huang et al., 2021) projects.",
|
| 1369 |
+
"bbox": [
|
| 1370 |
+
507,
|
| 1371 |
+
552,
|
| 1372 |
+
882,
|
| 1373 |
+
681
|
| 1374 |
+
],
|
| 1375 |
+
"page_idx": 8
|
| 1376 |
+
},
|
| 1377 |
+
{
|
| 1378 |
+
"type": "text",
|
| 1379 |
+
"text": "Limitations",
|
| 1380 |
+
"text_level": 1,
|
| 1381 |
+
"bbox": [
|
| 1382 |
+
509,
|
| 1383 |
+
696,
|
| 1384 |
+
613,
|
| 1385 |
+
711
|
| 1386 |
+
],
|
| 1387 |
+
"page_idx": 8
|
| 1388 |
+
},
|
| 1389 |
+
{
|
| 1390 |
+
"type": "text",
|
| 1391 |
+
"text": "We list the main limitations of this work as follows:",
|
| 1392 |
+
"bbox": [
|
| 1393 |
+
505,
|
| 1394 |
+
722,
|
| 1395 |
+
882,
|
| 1396 |
+
738
|
| 1397 |
+
],
|
| 1398 |
+
"page_idx": 8
|
| 1399 |
+
},
|
| 1400 |
+
{
|
| 1401 |
+
"type": "text",
|
| 1402 |
+
"text": "1. Zero Pronoun in Different Languages: The zero pronoun phenomenon may vary across languages in terms of word form, occurrence frequency and category distribution etc. Due to page limitation, some examples are mainly discussed in Chinese and/or English. However, most results and findings can be applied to other pro-drop languages, which is further supported by other works (Ri et al., 2021; Aloraini and Poesio, 2020; Vincent et al., 2022). In Appendix §A.1, we add details on the phenomenon in various pro-drop",
|
| 1403 |
+
"bbox": [
|
| 1404 |
+
505,
|
| 1405 |
+
741,
|
| 1406 |
+
882,
|
| 1407 |
+
917
|
| 1408 |
+
],
|
| 1409 |
+
"page_idx": 8
|
| 1410 |
+
},
|
| 1411 |
+
{
"type": "page_number",
"text": "3333",
"bbox": [
480,
927,
519,
940
],
"page_idx": 8
},
{
"type": "text",
"text": "languages such as Arabic, Swahili, Portuguese, Hindi, and Japanese.",
"bbox": [
127,
84,
487,
116
],
"page_idx": 9
},
{
"type": "text",
"text": "2. More Details on Datasets and Methods: We have no space to give more details on datasets and models. We will use a GitHub repository to release all mentioned datasets, code, and models, which can improve the reproducibility of this research direction.",
"bbox": [
107,
118,
489,
214
],
"page_idx": 9
},
{
"type": "text",
"text": "Ethics Statement",
"text_level": 1,
"bbox": [
114,
227,
265,
242
],
"page_idx": 9
},
{
"type": "text",
"text": "We take ethical considerations very seriously, and strictly adhere to the ACL Ethics Policy. In this paper, we present a survey of the major works on datasets, approaches and evaluation metrics that have been undertaken in ZPT. Resources and methods used in this paper are publicly available and have been widely adopted by researchers in machine translation. We ensure that the findings and conclusions of this paper are reported accurately and objectively.",
"bbox": [
112,
253,
489,
414
],
"page_idx": 9
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [
115,
441,
213,
456
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Abdulrahman Aloraini and Massimo Poesio. 2020. Cross-lingual zero pronoun resolution. In LREC.",
"Ilaria Bacolini. 2017. Exploring the partial pro-drop property in modern Hebrew. Università Ca'Foscari Venezia.",
"Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In ACL.",
"Elizabeth Baran, Yaqin Yang, and Nianwen Xue. 2012. Annotating dropped pronouns in chinese newswire text. In LREC.",
"Chen Chen and Vincent Ng. 2013. Chinese zero pronoun resolution: Some recent advances. In EMNLP.",
"Chen Chen and Vincent Ng. 2015. Chinese zero pronoun resolution: A joint unsupervised discourse-aware model rivaling state-of-the-art solvers. In ACL-IJCNLP.",
"Chen Chen and Vincent Ng. 2016. Chinese zero pronoun resolution with deep neural networks. In ACL.",
"Mingda Chen, Zewei Chu, and Kevin Gimpel. 2019. Evaluation benchmarks and learning criteria for discourse-aware sentence representations. In EMNLP-IJCNLP.",
"Tagyoung Chung and Daniel Gildea. 2010. Effects of empty categories on machine translation. In EMNLP.",
"Ana C Farinha, M Amin Farajian, Marianna Buchicchio, Patrick Fernandes, Jose GC De Souza, Helena Moniz,"
],
"bbox": [
115,
464,
487,
917
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"and André FT Martins. 2022. Findings of the wmt 2022 shared task on chat translation. In Proceedings of the 7th Conference on Machine Translation.",
"Markus Freitag, Isaac Caswell, and Scott Roy. 2019. Ape at scale and its implications on mt evaluation biases. In Proceedings of the 4th Conference on Machine Translation.",
"Michael Alexander Kirkwood Halliday and Ruqaiya Hasan. 1976. Cohesion in english. Longman.",
"Guoping Huang, Lemao Liu, Xing Wang, Longyue Wang, Huayang Li, Zhaopeng Tu, Chengyan Huang, and Shuming Shi. 2021. Transmart: A practical interactive machine translation system. arXiv preprint arXiv:2105.13072.",
"Yongkeun Hwang, Hyeongu Yun, and Kyomin Jung. 2021. Contrastive learning for context-aware neural machine translation using coreference information. In Proceedings of the 6th Conference on Machine Translation.",
"Prathyusha Jwalapuram, Shafiq Joty, and Youlin Shen. 2020. Pronoun-targeted fine-tuning for nmt with hybrid losses. In EMNLP.",
"Ryuichiro Kimura, Shohei Iida, Hongyi Cui, Po-Hsuan Hung, Takehito Utsuro, and Masaaki Nagata. 2019. Selecting informative context sentence by forced back-translation. In Proceedings of Machine Translation Summit XVII.",
"Tom Kocmi, Rachel Bawden, Ondrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, et al. 2022. Findings of the 2022 conference on machine translation (wmt22). In Proceedings of the 7th Conference on Machine Translation.",
"Fang Kong and Guodong Zhou. 2010. A tree kernel-based unified framework for chinese zero anaphora resolution. In EMNLP.",
"Fajri Koto, Joy Han Lau, and Timothy Baldwin. 2021. Discourse probing of pretrained language models. In NAACL.",
"Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL.",
"Samuel Läubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? a case for document-level evaluation. In EMNLP.",
"Ronan Le Nagard and Philipp Koehn. 2010. Aiding pronoun translation with co-reference resolution. In Proceedings of the Joint 5th Workshop on Statistical Machine Translation and MetricsMATR."
],
"bbox": [
510,
85,
880,
917
],
"page_idx": 9
},
{
"type": "page_number",
"text": "3334",
"bbox": [
480,
928,
519,
940
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Charles Li and Sandra Thompson. 1979. Third-person pronouns and zero-anaphora in chinese discourse. In Discourse and Syntax, Syntax and Semantics, 12:311-335.",
"António V Lopes, M Amin Farajian, Rachel Bawden, Michael Zhang, and André FT Martins. 2020. Document-level neural mt: A systematic comparison. In EAMT.",
"Qingyu Lu, Baopu Qiu, Liang Ding, Liping Xie, and Dacheng Tao. 2023. Error analysis prompting enables human-like translation evaluation in large language models: A case study on chatgpt. arXiv preprint arXiv:2303.13809.",
"Chenyang Lyu, Jitao Xu, and Longyue Wang. 2023. New trends in machine translation using large language models: Case examples with chatgpt. arXiv preprint arXiv:2305.01181.",
"Shuming Ma, Dongdong Zhang, and Ming Zhou. 2020. A simple and effective unified encoder for document-level machine translation. In ACL.",
"Ruslan Mitkov. 2014. Anaphora resolution. Routledge.",
"Jihyung Moon, Hyunchang Cho, and Eunjeong L Park. 2020. Revisiting round-trip translation for quality estimation. In EACL.",
"Takumi Ohtani, Hidetaka Kamigaito, Masaaki Nagata, and Manabu Okumura. 2019. Context-aware neural machine translation with coreference information. In DiscoMT.",
"Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In ACL.",
"Arum Park, Seunghee Lim, and Munpyo Hong. 2015. Zero object resolution in korean. In Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation.",
"Jesús Peral and Antonio Ferrández. 2003. Translation of pronominal anaphora between english and spanish: Discrepancies and evaluation. In JAIR.",
"Simone Pereira. 2009. Zac. pb: An annotated corpus for zero anaphora resolution in portuguese. In Proceedings of the Student Research Workshop.",
"Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In CoNLL-WS.",
"Rashmi Prasad. 2000. A corpus study of zero pronouns in Hindi: An account based on centering transition preferences. In DAARC.",
"Sudha Rao, Allyson Ettinger, Hal Daumé III, and Philip Resnik. 2015. Dialogue focus tracking for zero pronoun resolution. In NAACL."
],
"bbox": [
115,
85,
487,
916
],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for mt evaluation. In EMNLP.",
"Ryokan Ri, Toshiaki Nakazawa, and Yoshimasa Tsuruoka. 2021. Zero-pronoun data augmentation for japanese-to-english translation. In WAT.",
"Lorenza Russo, Sharid Loáiciga, and Asheesh Gulati. 2012. Italian and spanish null subjects: a case study evaluation in an mt perspective. In LREC.",
"Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In AMTA.",
"Linfeng Song, Kun Xu, Yue Zhang, Jianshu Chen, and Dong Yu. 2020. Zpr2: Joint zero pronoun recovery and resolution using multi-task learning and bert. In ACL.",
"Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. 2015. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In CIKM.",
"Hui Su, Xiaoyu Shen, Rongzhi Zhang, Fei Sun, Pengwei Hu, Cheng Niu, and Jie Zhou. 2019. Improving multi-turn dialogue modelling with utterance rewrite. In ACL.",
"Amane Sugiyama and Naoki Yoshinaga. 2019. Data augmentation using back-translation for context-aware neural machine translation. In DiscoMT.",
"Hirotoshi Taira, Katsuhito Sudoh, and Masaaki Nagata. 2012. Zero pronoun resolution can improve the quality of J-E translation. In Proceedings of the 6th Workshop on Syntax, Semantics and Structure in Statistical Translation.",
"Xin Tan, Shaohui Kuang, and Deyi Xiong. 2019. Detecting and translating dropped pronouns in neural machine translation. In NLPCC.",
"Xin Tan, Longyin Zhang, and Guodong Zhou. 2021. Coupling context modeling with zero pronoun recovering for document-level natural language generation. In EMNLP.",
"Eva Vanmassenhove, Dimitar Shterionov, and Andy Way. 2019. Lost in translation: Loss and decay of linguistic richness in machine translation. In Proceedings of Machine Translation Summit XVII.",
"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.",
"Sebastian T Vincent, Loic Barrault, and Carolina Scarton. 2022. Controlling extra-textual attributes about dialogue participants: A case study of english-to-polish neural machine translation. In EAMT."
],
"bbox": [
510,
85,
882,
917
],
"page_idx": 10
},
{
"type": "page_number",
"text": "3335",
"bbox": [
480,
928,
519,
940
],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Elena Voita, Rico Sennrich, and Ivan Titov. 2019. Context-aware monolingual repair for neural machine translation. In EMNLP.",
"Longyue Wang. 2019. Discourse-aware neural machine translation. Ph.D. thesis, Dublin City University, Dublin, Ireland.",
"Longyue Wang, Zefeng Du, DongHuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Shuming Shi, and Zhaopeng Tu. 2023a. GuoFeng: A discourse-aware evaluation benchmark for language understanding, translation and generation.",
"Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, and Zhaopeng Tu. 2023b. Document-level machine translation with large language models. arXiv preprint arXiv:2304.02210.",
"Longyue Wang, Zhaopeng Tu, Chenyang Lyu, Zefeng Du, Dian Yu, Liting Zhou, Siyou Liu, Yan Gu, et al. 2023c. Findings of the wmt 2023 shared task on discourse-level literary translation. In Proceedings of the 8th Conference on Machine Translation.",
"Longyue Wang, Zhaopeng Tu, Shuming Shi, Tong Zhang, Yvette Graham, and Qun Liu. 2018a. Translating pro-drop languages with reconstruction models. In AAAI.",
"Longyue Wang, Zhaopeng Tu, Xing Wang, and Shuming Shi. 2019. One model to learn both: Zero pronoun prediction and translation. In EMNLP-IJCNLP.",
"Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017a. Exploiting cross-sentence context for neural machine translation. In EMNLP.",
"Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2018b. Learning to jointly translate and predict dropped pronouns with a shared reconstruction mechanism. In EMNLP.",
"Longyue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, and Qun Liu. 2016a. A novel approach for dropped pronoun translation. In NAACL.",
"Longyue Wang, Zhaopeng Tu, Xiaojun Zhang, Siyou Liu, Hang Li, Andy Way, and Qun Liu. 2017b. A novel and robust approach for pro-drop language translation. Machine Translation, 31(1-2):65-87.",
"Longyue Wang, Mingzhou Xu, Derek F. Wong, Hongye Liu, Linfeng Song, Lidia S. Chao, Shuming Shi, and Zhaopeng Tu. 2022. GuoFeng: A benchmark for zero pronoun recovery and translation. In EMNLP.",
"Longyue Wang, Xiaojun Zhang, Zhaopeng Tu, Hang Li, and Qun Liu. 2016b. Dropped pronoun generation for dialogue machine translation. In ICASSP.",
"Longyue Wang, Xiaojun Zhang, Zhaopeng Tu, Qun Liu, and Andy Way. 2016c. Automatic construction of discourse corpora for dialogue translation. In LREC."
],
"bbox": [
115,
85,
489,
917
],
"page_idx": 11
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Lesly Miculicich Werlen and Andrei Popescu-Belis. 2017. Validation of an automatic metric for the accuracy of pronoun translation (apt). In DiscoMT.",
"Lesly Miculicich Werlen, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In EMNLP.",
"Shuangzhi Wu, Xing Wang, Longyue Wang, Fangxu Liu, Jun Xie, Zhaopeng Tu, Shuming Shi, and Mu Li. 2020. Tencent neural machine translation systems for the wmt20 news translation task. In Proceedings of the 5th Conference on Machine Translation.",
"Bing Xiang, Xiaoqiang Luo, and Bowen Zhou. 2013. Enlisting the ghost: Modeling empty categories for machine translation. In ACL.",
"Jingxuan Yang, Jianzhuo Tong, Si Li, Sheng Gao, Jun Guo, and Nianwen Xue. 2019a. Recovering dropped pronouns in chinese conversations via modeling their referents. In NAACL.",
"Yaqin Yang, Yalin Liu, and Nianwen Xue. 2015. Recovering dropped pronouns from chinese text messages. In ACL-IJCNLP.",
"Yaqin Yang and Nianwen Xue. 2010. Chasing the ghost: recovering empty categories in the chinese treebank. In COLING.",
"Zonghan Yang, Yong Cheng, Yang Liu, and Maosong Sun. 2019b. Reducing word omission errors in neural machine translation: A contrastive learning approach. In ACL.",
"Qingyu Yin, Yu Zhang, Weinan Zhang, Ting Liu, and William Yang Wang. 2018. Zero pronoun resolution with attention-based neural network. In COLING.",
"Lei Yu, Laurent Sartran, Wojciech Stokowiec, Wang Ling, Lingpeng Kong, Phil Blunsom, and Chris Dyer. 2020. Better document-level machine translation with bayes' rule. In TACL.",
"Dong Zhan and Hiromi Nakaiwa. 2015. Automatic detection of antecedents of japanese zero pronouns using a japanese-english bilingual corpus. In Proceedings of Machine Translation Summit XV.",
"Weinan Zhang, Ting Liu, Qingyu Yin, and Yu Zhang. 2019. Neural recovery machine for Chinese dropped pronoun. In Frontiers of Computer Science.",
"Shanheng Zhao and Hwee Tou Ng. 2007. Identification and resolution of chinese zero pronouns: A machine learning approach. In EMNLP-CoNLL."
],
"bbox": [
510,
85,
882,
813
],
"page_idx": 11
},
{
"type": "page_number",
"text": "3336",
"bbox": [
480,
928,
519,
940
],
"page_idx": 11
},
{
"type": "text",
"text": "A Appendix",
"text_level": 1,
"bbox": [
114,
84,
238,
99
],
"page_idx": 12
},
{
"type": "text",
"text": "A.1 Zero Pronoun in Different Languages",
"text_level": 1,
"bbox": [
112,
112,
460,
128
],
"page_idx": 12
},
{
"type": "text",
"text": "The pronoun-dropping conditions vary from language to language, and can be quite intricate. Previous works define these typological patterns as pro-drop, which can be subcategorized into three categories (as shown in Figure 1):",
"bbox": [
112,
134,
489,
214
],
"page_idx": 12
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- Topic Pro-drop Language allows referential pronouns to be omitted, or be phonologically null. Such dropped pronouns can be inferred from previous discourse, from the context of the conversation, or from generally shared knowledge.",
"- Partial Pro-drop Language allows for the deletion of the subject pronoun. Such a missing pronoun is not inferred strictly from pragmatics, but is partially indicated by the morphology of the verb.",
"- Full Pro-drop Language has rich subject agreement morphology where subjects are freely dropped under the appropriate discourse conditions."
],
"bbox": [
112,
216,
489,
429
],
"page_idx": 12
},
{
"type": "text",
"text": "A.2 Analysis of Zero Pronoun",
"text_level": 1,
"bbox": [
114,
444,
366,
458
],
"page_idx": 12
},
{
"type": "text",
"text": "As shown in Table 5, $26\\%$ of Chinese pronouns were dropped in the dialogue domain, while $7\\%$ were dropped in the newswire domain. ZPs in formal text genres (e.g. newswire) are not as common as those in informal genres (e.g. dialogue), and the most frequently dropped pronoun in Chinese newswire is the third-person singular \"it\" (Baran et al., 2012), which may not be crucial to translation performance.",
"bbox": [
112,
466,
489,
612
],
"page_idx": 12
},
{
"type": "table",
"img_path": "images/06c41a3c0ee98edf46fc78ae470bd2e117984553600aa81018fb3e00d46048fd.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Genres</td><td>Sent.</td><td>ZH Pro.</td><td>EN Pro.</td><td>ZPs</td></tr><tr><td>Dialogue</td><td>2.15M</td><td>1.66M</td><td>2.26M</td><td>26.55%</td></tr><tr><td>News</td><td>3.29M</td><td>2.27M</td><td>2.45M</td><td>7.35%</td></tr></table>",
"bbox": [
119,
624,
487,
684
],
"page_idx": 12
},
{
"type": "text",
"text": "Table 5: Extent of pronoun-dropping in different genres. The Dialogue corpus consists of subtitles in Opensubtitle2018 and the News corpus is CWMT2013 news data.",
"bbox": [
112,
694,
489,
737
],
"page_idx": 12
},
{
"type": "text",
"text": "pronouns can be omitted to make the sentence compact yet comprehensible when the identity of the pronouns can be inferred from the context. These omissions may not be problems for humans, since we can easily recall the missing pronouns from the context.",
"bbox": [
507,
84,
884,
180
],
"page_idx": 12
},
{
"type": "text",
"text": "A.4 Human Evaluation Guideline",
"text_level": 1,
"bbox": [
507,
191,
789,
206
],
"page_idx": 12
},
{
"type": "text",
"text": "We carefully design an evaluation protocol according to the error types made by various NMT systems, which can be grouped into five categories: 1) the translation cannot preserve the original semantics due to misunderstanding the anaphora of ZPs, and furthermore the structure of the translation is inappropriately generated or grammatically incorrect due to incorrect or missing ZPs; 2) the sentence structure is correct, but the translation cannot preserve the original semantics due to misunderstanding the anaphora of ZPs; 3) the translation preserves the original semantics, but the structure of the translation is inappropriately generated or grammatically incorrect due to missing ZPs; 4) a source ZP is incorrectly translated or not translated, but the translation reflects the meaning of the source; 5) the translation preserves the meaning of the source and all ZPs are translated. Finally, we average the scores of all target sentences that contain ZPs to obtain the final score of our human evaluation. For human evaluation, we randomly select a hundred groups of samples from each domain; each group contains an oracle source sentence and the hypotheses from six examined MT systems. Following this protocol, we asked expert raters to score all of these samples from 1 to 5 to reflect the quality of ZP translations. For inter-annotator agreement, we simply define a score larger than 3 as a good translation and a score less than 3 as a bad translation. The annotators reached agreement on $91\\%$ (2750 out of 3000) of the samples. In total, the process of manual labeling took five professional annotators one month, which cost US $5,000.",
"bbox": [
507,
211,
884,
743
],
"page_idx": 12
},
{
"type": "text",
"text": "A.3 The Linguistic Concept",
"text_level": 1,
"bbox": [
112,
768,
349,
783
],
"page_idx": 12
},
{
"type": "text",
"text": "Anaphora is the use of an expression whose interpretation depends specifically upon an antecedent expression. The anaphoric (referring) term is called an anaphor. Sometimes an anaphor may rely on the postcedent expression, and this phenomenon is called cataphora. Zero Anaphora (pronoun-dropping) is a more complex case of anaphora. In pro-drop languages such as Chinese and Japanese,",
"bbox": [
112,
790,
489,
919
],
"page_idx": 12
},
{
"type": "page_number",
"text": "3337",
"bbox": [
480,
927,
519,
940
],
"page_idx": 12
},
{
"type": "text",
"text": "A For every submission:",
"bbox": [
115,
107,
322,
122
],
"page_idx": 13
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"A1. Did you describe the limitations of your work? Section Limitations.",
"A2. Did you discuss any potential risks of your work? Section Ethics Statement.",
"A3. Do the abstract and introduction summarize the paper's main claims? Section Abstract and 1.",
"A4. Have you used AI writing assistants when working on this paper? Left blank."
],
"bbox": [
129,
126,
695,
287
],
"page_idx": 13
},
{
"type": "text",
"text": "B Did you use or create scientific artifacts?",
"bbox": [
114,
300,
489,
316
],
"page_idx": 13
},
{
"type": "text",
"text": "Left blank.",
"bbox": [
132,
321,
215,
336
],
"page_idx": 13
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"B1. Did you cite the creators of artifacts you used? Not applicable. Left blank.",
"B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank.",
"B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank.",
"B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank.",
"B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank.",
"B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank."
],
"bbox": [
127,
347,
880,
753
],
"page_idx": 13
},
{
"type": "text",
"text": "C Did you run computational experiments?",
"bbox": [
114,
764,
492,
781
],
"page_idx": 13
},
{
"type": "text",
"text": "Section 5.2 and Section 6.2.",
"bbox": [
132,
787,
339,
801
],
"page_idx": 13
},
{
"type": "text",
"text": "C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? This is a survey and all details are the same as in the related citations.",
"bbox": [
129,
813,
880,
860
],
"page_idx": 13
},
{
"type": "header",
"text": "ACL 2023 Responsible NLP Checklist",
"bbox": [
132,
84,
433,
99
],
"page_idx": 13
},
{
"type": "page_footnote",
"text": "The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.",
"bbox": [
112,
868,
877,
892
],
"page_idx": 13
},
{
"type": "page_number",
"text": "3338",
"bbox": [
480,
928,
519,
940
],
"page_idx": 13
},
{
"type": "text",
"text": "C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?",
"bbox": [
129,
84,
878,
115
],
"page_idx": 14
},
{
"type": "text",
"text": "This is a survey and all details are the same as in the related citations.",
"bbox": [
149,
117,
596,
131
],
"page_idx": 14
},
{
"type": "text",
"text": "C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?",
"bbox": [
129,
142,
878,
|
| 2000 |
+
190
|
| 2001 |
+
],
|
| 2002 |
+
"page_idx": 14
|
| 2003 |
+
},
|
| 2004 |
+
{
|
| 2005 |
+
"type": "text",
|
| 2006 |
+
"text": "Section 5.2",
|
| 2007 |
+
"bbox": [
|
| 2008 |
+
149,
|
| 2009 |
+
192,
|
| 2010 |
+
236,
|
| 2011 |
+
205
|
| 2012 |
+
],
|
| 2013 |
+
"page_idx": 14
|
| 2014 |
+
},
|
| 2015 |
+
{
|
| 2016 |
+
"type": "text",
|
| 2017 |
+
"text": "C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?",
|
| 2018 |
+
"bbox": [
|
| 2019 |
+
127,
|
| 2020 |
+
218,
|
| 2021 |
+
878,
|
| 2022 |
+
265
|
| 2023 |
+
],
|
| 2024 |
+
"page_idx": 14
|
| 2025 |
+
},
|
| 2026 |
+
{
|
| 2027 |
+
"type": "text",
|
| 2028 |
+
"text": "Not applicable. Left blank.",
|
| 2029 |
+
"bbox": [
|
| 2030 |
+
149,
|
| 2031 |
+
267,
|
| 2032 |
+
349,
|
| 2033 |
+
282
|
| 2034 |
+
],
|
| 2035 |
+
"page_idx": 14
|
| 2036 |
+
},
|
| 2037 |
+
{
|
| 2038 |
+
"type": "text",
|
| 2039 |
+
"text": "D Did you use human annotators (e.g., crowdworkers) or research with human participants?",
|
| 2040 |
+
"bbox": [
|
| 2041 |
+
114,
|
| 2042 |
+
292,
|
| 2043 |
+
877,
|
| 2044 |
+
310
|
| 2045 |
+
],
|
| 2046 |
+
"page_idx": 14
|
| 2047 |
+
},
|
| 2048 |
+
{
|
| 2049 |
+
"type": "text",
|
| 2050 |
+
"text": "Section 6.2.",
|
| 2051 |
+
"bbox": [
|
| 2052 |
+
132,
|
| 2053 |
+
313,
|
| 2054 |
+
221,
|
| 2055 |
+
328
|
| 2056 |
+
],
|
| 2057 |
+
"page_idx": 14
|
| 2058 |
+
},
|
| 2059 |
+
{
|
| 2060 |
+
"type": "text",
|
| 2061 |
+
"text": "D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?",
|
| 2062 |
+
"bbox": [
|
| 2063 |
+
129,
|
| 2064 |
+
338,
|
| 2065 |
+
878,
|
| 2066 |
+
372
|
| 2067 |
+
],
|
| 2068 |
+
"page_idx": 14
|
| 2069 |
+
},
|
| 2070 |
+
{
|
| 2071 |
+
"type": "text",
|
| 2072 |
+
"text": "Appendix A.4.",
|
| 2073 |
+
"bbox": [
|
| 2074 |
+
149,
|
| 2075 |
+
374,
|
| 2076 |
+
255,
|
| 2077 |
+
388
|
| 2078 |
+
],
|
| 2079 |
+
"page_idx": 14
|
| 2080 |
+
},
|
| 2081 |
+
{
|
| 2082 |
+
"type": "text",
|
| 2083 |
+
"text": "D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?",
|
| 2084 |
+
"bbox": [
|
| 2085 |
+
129,
|
| 2086 |
+
398,
|
| 2087 |
+
878,
|
| 2088 |
+
447
|
| 2089 |
+
],
|
| 2090 |
+
"page_idx": 14
|
| 2091 |
+
},
|
| 2092 |
+
{
|
| 2093 |
+
"type": "text",
|
| 2094 |
+
"text": "Appendix A.4.",
|
| 2095 |
+
"bbox": [
|
| 2096 |
+
149,
|
| 2097 |
+
449,
|
| 2098 |
+
255,
|
| 2099 |
+
464
|
| 2100 |
+
],
|
| 2101 |
+
"page_idx": 14
|
| 2102 |
+
},
|
| 2103 |
+
{
|
| 2104 |
+
"type": "text",
|
| 2105 |
+
"text": "D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?",
|
| 2106 |
+
"bbox": [
|
| 2107 |
+
129,
|
| 2108 |
+
473,
|
| 2109 |
+
878,
|
| 2110 |
+
521
|
| 2111 |
+
],
|
| 2112 |
+
"page_idx": 14
|
| 2113 |
+
},
|
| 2114 |
+
{
|
| 2115 |
+
"type": "text",
|
| 2116 |
+
"text": "Appendix A.4.",
|
| 2117 |
+
"bbox": [
|
| 2118 |
+
149,
|
| 2119 |
+
524,
|
| 2120 |
+
255,
|
| 2121 |
+
539
|
| 2122 |
+
],
|
| 2123 |
+
"page_idx": 14
|
| 2124 |
+
},
|
| 2125 |
+
{
|
| 2126 |
+
"type": "list",
|
| 2127 |
+
"sub_type": "text",
|
| 2128 |
+
"list_items": [
|
| 2129 |
+
"D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank.",
|
| 2130 |
+
"D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?"
|
| 2131 |
+
],
|
| 2132 |
+
"bbox": [
|
| 2133 |
+
127,
|
| 2134 |
+
549,
|
| 2135 |
+
878,
|
| 2136 |
+
623
|
| 2137 |
+
],
|
| 2138 |
+
"page_idx": 14
|
| 2139 |
+
},
|
| 2140 |
+
{
|
| 2141 |
+
"type": "text",
|
| 2142 |
+
"text": "Not applicable. Left blank.",
|
| 2143 |
+
"bbox": [
|
| 2144 |
+
149,
|
| 2145 |
+
626,
|
| 2146 |
+
349,
|
| 2147 |
+
640
|
| 2148 |
+
],
|
| 2149 |
+
"page_idx": 14
|
| 2150 |
+
},
|
| 2151 |
+
{
|
| 2152 |
+
"type": "page_number",
|
| 2153 |
+
"text": "3339",
|
| 2154 |
+
"bbox": [
|
| 2155 |
+
480,
|
| 2156 |
+
927,
|
| 2157 |
+
519,
|
| 2158 |
+
940
|
| 2159 |
+
],
|
| 2160 |
+
"page_idx": 14
|
| 2161 |
+
}
|
| 2162 |
+
]
|
2023/A Survey on Zero Pronoun Translation/3ae06d18-3d40-4838-ba49-dbe91c97e883_model.json
ADDED
The diff for this file is too large to render. See raw diff
2023/A Survey on Zero Pronoun Translation/3ae06d18-3d40-4838-ba49-dbe91c97e883_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e0ed2b5032f490aca944bf85f78d98dd3f822c4d166b5ce74795b571c415f2ef
size 657734
2023/A Survey on Zero Pronoun Translation/full.md
ADDED
@@ -0,0 +1,396 @@
# A Survey on Zero Pronoun Translation

Longyue Wang*, Siyou Liu*, Mingzhou Xu, Linfeng Song, Shuming Shi, Zhaopeng Tu

Tencent AI Lab

{vinnylwang, lifengjin, shumingshi, zptu}@tencent.com

guofeng-ai@googlegroups.com
# Abstract

Zero pronouns (ZPs) are frequently omitted in pro-drop languages (e.g. Chinese, Hungarian, and Hindi), but should be recalled in non-pro-drop languages (e.g. English). This phenomenon has been studied extensively in machine translation (MT), as it poses a significant challenge for MT systems due to the difficulty of determining the correct antecedent for the pronoun. This survey highlights the major works on zero pronoun translation (ZPT) undertaken after the neural revolution, so that researchers can recognize the current state and future directions of this field. We organize the literature along four axes: evolution, dataset, method, and evaluation. In addition, we compare and analyze competing models and evaluation metrics on different benchmarks. We uncover a number of insightful findings, such as: 1) ZPT is in line with the development trend of large language models; 2) data limitations cause learning biases across languages and domains; 3) performance improvements are often reported on single benchmarks, but advanced methods are still far from real-world use; 4) general-purpose metrics are not reliable on the nuances and complexities of ZPT, emphasizing the necessity of targeted metrics; 5) apart from commonly-cited errors, ZPs also carry risks of gender bias.
# 1 Introduction

Pronouns play an important role in natural language, as they enable speakers to refer to people, objects, or events without repeating the nouns that represent them. Zero pronoun $(\mathrm{ZP})^{1}$ is a complex phenomenon that appears frequently in pronoun-dropping (pro-drop) languages such as Chinese, Hungarian, and Hindi. Specifically, pronouns are often omitted when they are pragmatically or grammatically inferable from intra- and inter-sentential contexts (Li and Thomson, 1979). Since recovery of such ZPs generally fails, this poses difficulties for several generation tasks, including dialogue modelling (Su et al., 2019), question answering (Tan et al., 2021), and machine translation (Wang, 2019).

When translating texts from pro-drop to non-pro-drop languages (e.g. Chinese $\Rightarrow$ English), this phenomenon leads to serious problems for translation models in terms of: 1) completeness, since translations of such invisible pronouns cannot normally be reproduced; 2) correctness, because understanding the semantics of a source sentence requires identifying and resolving the pronominal reference.

Figure 1 shows ZP examples in three typological patterns determined by language family (detailed in Appendix §A.1). Taking a full-drop language for instance, the first-person subject and third-person object pronouns are omitted in the Hindi input, while these pronouns are all compulsory in the English translation. This is not a problem for human beings, since we can easily recall these missing pronouns from the context. However, even a real-life MT system still fails to accurately translate ZPs.

In response to this problem, zero pronoun translation (ZPT) has been studied extensively in the MT community around three significant challenges:
- Dataset: there is limited availability of ZP-annotated parallel data, making it difficult to develop systems that can handle ZP complexities.
- Approach: given their ability to capture semantic information with distributed representations, NMT models should ideally embed ZP information by learning the alignments between bilingual pronouns from the training corpus. In practice, however, NMT models only manage to successfully translate some simple ZPs, but still fail when translating complex ones (e.g. subject vs. object ZPs).
- Evaluation: general evaluation metrics for MT are not sensitive enough to capture translation errors caused by ZPs.

Figure 1: An overview of pro-drop languages by considering their typological patterns and language families. Examples of the ZP phenomenon in three languages (i.e. Korean, Hungarian and Hindi). Words in brackets are pronouns that are invisible in the source language (implicit and explicit). The underlined words are the corresponding antecedents. "EN" represents the human translation into English, which is a non-pro-drop language. "OT" is the output of SOTA NMT systems, with inappropriate translations.

<table><tr><td>KO</td><td>A: े B: े</td></tr><tr><td>EN</td><td>A: Do you need this? B: (I) need (it).</td></tr><tr><td>OT</td><td>A: Do you need this? B: I need.</td></tr></table>

<table><tr><td>HU</td><td>A: látátok a macskát? B: látjuk.</td></tr><tr><td>EN</td><td>A: Do (you) see the cat? B: (We) see (it).</td></tr><tr><td>OT</td><td>A: Do you see the cat? B: We see.</td></tr></table>

<table><tr><td>HI</td><td>A: निकान्दी नागया को सिताना ? B: को सिताना .</td></tr><tr><td>EN</td><td>A: Did you give the food to Nadya? B: Yes, (I) gave (her) (food).</td></tr><tr><td>OT</td><td>A: Did you eat Nadya? B: Yes given.</td></tr></table>
We believe that it is the right time to take stock of what has been achieved in ZPT, so that researchers can get a bigger picture of where this line of research stands. In this paper, we present a survey of the major works on datasets, approaches and evaluation metrics that have been undertaken in ZPT. We first introduce the background of the linguistic phenomenon and our literature selection in Section 2. Section 3 discusses the evolution of ZP-related tasks. Section 4 summarizes the annotated datasets, which are significant for pushing these studies forward. Section 5 investigates advanced approaches for improving ZPT models. Section 6 covers the evaluation methods that have been introduced to account for improvements in this field. We conclude by presenting avenues for future research in Section 7.
# 2 Background

# 2.1 Linguistic Phenomenon

Definition of Zero Pronoun Cohesion is a significant property of discourse, and it occurs whenever "the interpretation of some element in the discourse is dependent on that of another" (Halliday and Hasan, 1976). As one of these cohesive devices, anaphora is the use of an expression whose interpretation depends specifically upon an antecedent expression, while zero anaphora is a more complex scenario in pro-drop languages. A ZP is a gap in a sentence that refers to an entity supplying the necessary information for interpreting the gap (Zhao and Ng, 2007). ZPs can be categorized into anaphoric and non-anaphoric ZPs according to whether they refer to an antecedent or not. In pro-drop languages such as Chinese and Japanese, ZPs occur much more frequently than in non-pro-drop languages such as English. The ZP phenomenon can be considered one of the most difficult problems in natural language processing (Peral and Ferrández, 2003).

Extent of Zero Pronoun To investigate the extent of pronoun-dropping, we quantitatively analyzed ZPs in two corpora; details are shown in Appendix §A.2. We found that the frequencies and types of ZPs vary across genres: (1) $26\%$ of Chinese pronouns were dropped in the dialogue domain, while $7\%$ were dropped in the newswire domain; (2) the most frequent ZP in newswire text is the third person singular 它 ("it") (Baran et al., 2012), while those in SMS dialogues are the first person 我 ("I") and 我们 ("we") (Rao et al., 2015). This may lead to differences in model behavior and quality across domains. The high proportion within informal genres such as dialogues and conversations shows the importance of addressing the challenge of translating ZPs.
# 2.2 Literature Selection

We used the following methodology to provide a comprehensive and unbiased overview of the current state of the art, while minimizing the risk of omitting key references:

- Search Strategy: We conducted a systematic search in major databases (e.g. Google Scholar) to identify the relevant articles and resources. Our search terms included combinations of keywords such as "zero pronouns," "zero pronoun translation," and "coreference resolution."
- Selection Criteria: To maintain the focus and quality of our review, we established the following criteria. (1) Inclusion: articles published in journals, conferences and workshop proceedings. (2) Exclusion: articles that are not available in English or do not provide sufficient detail to assess the validity of their results.
- Screening and Selection: First, we screened the titles and abstracts based on our selection criteria. Then, we assessed the full texts of the remaining articles for eligibility. We also checked the reference lists of relevant articles to identify any additional sources that may have been missed during the initial search.
- Data Extraction and Synthesis: We extracted key information from the selected articles, such as dataset characteristics and main findings. This data was synthesized and organized to provide a comprehensive analysis of the current state of the art in ZPT.
# 3 Evolution of Zero Pronoun Modelling

Considering the evolution of ZP modelling, we cannot avoid discussing other related tasks. Thus, we first review three typical ZP tasks and summarize their essential relations and future trends.

# 3.1 Overview

ZP resolution is the earliest task to handle the understanding problem posed by ZPs (Zhao and Ng, 2007). ZP recovery and ZP translation aim to directly generate ZPs in monolingual and crosslingual scenarios, respectively (Yang and Xue, 2010; Chung and Gildea, 2010). This is illustrated in Figure 2.

Zero Pronoun Resolution The task contains three steps: ZP detection, anaphoricity determination and reference linking. Earlier works investigated rich features using traditional ML models (Zhao and Ng, 2007; Kong and Zhou, 2010; Chen and Ng, 2013, 2015). Recent studies exploited neural models to achieve better performance (Chen and Ng, 2016; Yin et al., 2018; Song et al., 2020). CoNLL-2011 and CoNLL-2012 are commonly used benchmarks for modeling unrestricted coreference. The corpus contains 144K coreference instances, but dropped subjects occupy only $15\%$.

Zero Pronoun Recovery Given a source sentence, this task aims to insert omitted pronouns in their proper positions without changing the original meaning (Yang and Xue, 2010; Yang et al., 2015, 2019a). It differs from ZP resolution, which identifies the antecedent of a referential pronoun (Mitkov, 2014). Previous studies regarded ZP recovery as a classification or sequence labelling problem, achieving only $40\sim 60\%$ F1 scores on closed datasets (Zhang et al., 2019; Song et al., 2020), which indicates the difficulty of generating ZPs. It is worth noting that ZP recovery models can serve the ZPT task in a pipeline manner: input sentences are labeled with ZPs using an external recovery system and then fed into a standard MT model (Chung and Gildea, 2010; Wang et al., 2016a).
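The sequence-labelling formulation of ZP recovery can be made concrete with a toy converter; the gap notation (`*pro*:<pronoun>`) and the `INS-`/`O` label scheme here are our own illustration, not taken from any of the cited systems.

```python
# Toy illustration of ZP recovery as sequence labelling (our own scheme):
# a token like "*pro*:我" marks a gap whose recovered pronoun is 我; we
# convert the annotated sequence into (token, label) pairs a tagger
# could be trained to predict.
def to_labels(annotated_tokens):
    """Map ZP-annotated tokens to per-token insertion labels.

    The label of a real token records which pronoun (if any) should be
    inserted immediately before it.
    """
    tokens, labels, pending = [], [], "O"
    for tok in annotated_tokens:
        if tok.startswith("*pro*:"):          # a gap: remember the pronoun
            pending = "INS-" + tok.split(":", 1)[1]
        else:                                 # a real token: emit it + label
            tokens.append(tok)
            labels.append(pending)
            pending = "O"
    return tokens, labels

# "(I) like this book": the subject pronoun is dropped in the pro-drop source.
toks, labs = to_labels(["*pro*:我", "喜欢", "这本", "书"])
print(toks)  # ['喜欢', '这本', '书']
print(labs)  # ['INS-我', 'O', 'O']
```

A recovery model then only has to predict the label sequence; inserting the labelled pronouns reconstructs the pronoun-explicit sentence.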
Zero Pronoun Translation When pronouns are omitted in a source sentence, ZPT aims to generate the corresponding pronouns in its target translation. Early studies investigated a number of approaches for SMT models (Chung and Gildea, 2010; Le Nagard and Koehn, 2010; Taira et al., 2012; Xiang et al., 2013; Wang et al., 2016a). Recent years have seen a surge of interest in NMT (Yu et al., 2020; Wang et al., 2018a), since the problem still exists in advanced NMT systems. ZPT is also related to pronoun translation, which aims to correctly translate explicit pronouns in terms of feminine and masculine forms. DiscoMT<sup>3</sup> is a commonly-cited benchmark on pronoun translation; however, there has been no standard ZPT benchmark up until now.
# 3.2 Discussions and Findings

By comparing different ZP-aware tasks, we found the following future trends:

1. From Intermediate to End. In real-life systems, ZP resolution and recovery are intermediate tasks, while ZPT is directly reflected in system output. ZP resolution and recovery will be replaced by ZPT, although they currently work with some MT systems in a pipeline manner.

2. From Separate to Unified. With the development of large language models (LLMs), it is unnecessary to keep a specific model for each task. For example, Song et al. (2020) leveraged a unified BERT-based architecture to model ZP resolution and recovery. Furthermore, we observed that $\mathrm{ChatGPT}^4$ already possesses the capability for ZP resolution and recovery.

Figure 2: An overview of three ZP-aware tasks (taking Chinese-English for instance): ZP resolution, ZP recovery and ZP translation. As seen, the input is the same while the output varies according to the task.
# 4 Datasets

# 4.1 Overview

Modeling ZPs has so far not been extensively explored in prior research, largely due to the lack of publicly available datasets. Existing works mostly focused on human-annotated, small-scale and single-domain corpora such as OntoNotes (Pradhan et al., 2012; Aloraini and Poesio, 2020) and Treebanks (Yang and Xue, 2010; Chung and Gildea, 2010). We summarize representative corpora as follows:

- OntoNotes. This corpus is annotated with structural information (e.g. syntax and predicate argument structure) and shallow semantics (e.g. word sense linked to an ontology, and coreference). It comprises various genres of text (news, conversational telephone speech, weblogs, usenet newsgroups, broadcast, talk shows) in English, Chinese, and Arabic. ZP sentences are extracted from it for the ZP resolution task (Chen and Ng, 2013, 2016).
- TVSub. This corpus contains Chinese-English subtitles extracted from television episodes. Its source-side sentences are automatically annotated with ZPs by a heuristic algorithm (Wang et al., 2016a); it has generally been used to study dialogue translation and the zero anaphora phenomenon (Wang et al., 2018a; Tan et al., 2021).
- CTB.$^{7}$ This is a part-of-speech tagged and fully bracketed Chinese language corpus. The texts are drawn from various domains, including newswire, government documents, magazine articles, broadcast news and broadcast conversation programs, web newsgroups and weblogs. Instances with empty categories are extracted for the ZP recovery task (Yang and Xue, 2010; Chung and Gildea, 2010).
- BaiduKnows. The source-side sentences are collected from the Baidu Knows website and annotated with ZP labels with boundary tags. It is widely used for the ZP recovery task (Zhang et al., 2019; Song et al., 2020).
# 4.2 Discussions and Findings

Table 1 lists statistics of existing ZP datasets, from which we identify the following limitations and trends:

1. Language Bias. Most works used Chinese and Japanese datasets as testbeds for training ZP models (Song et al., 2020; Ri et al., 2021). However, limited data are available for other pro-drop languages (e.g. Portuguese and Spanish), so linguists have mainly used them for corpus analysis (Pereira, 2009; Russo et al., 2012). The ZP phenomenon may vary across languages in terms of word form, occurrence frequency and category distribution, leading to learning biases in linguistic knowledge. Thus, it is necessary to establish ZP datasets for various languages (Prasad, 2000; Bacolini, 2017).

<table><tr><td rowspan="2">Dataset</td><td rowspan="2">Lang.</td><td rowspan="2">Anno.</td><td rowspan="2">Domain</td><td rowspan="2">Size</td><td colspan="3">Task</td></tr><tr><td>Reso.</td><td>Reco.</td><td>Trans.</td></tr><tr><td>OntoNotes (Pradhan et al., 2012)</td><td>ZH</td><td>Human</td><td>Mixed Sources</td><td>42.6K</td><td>✓</td><td>✗</td><td>✗</td></tr><tr><td>OntoNotes (Aloraini and Poesio, 2020)</td><td>AR</td><td>Human</td><td>News</td><td>9.4K</td><td>✓</td><td>✗</td><td>✗</td></tr><tr><td>CTB (Yang and Xue, 2010)</td><td>ZH</td><td>Human</td><td>News</td><td>10.6K</td><td>✗</td><td>✓</td><td>✗</td></tr><tr><td>KTB (Chung and Gildea, 2010)</td><td>KO</td><td>Human</td><td>News</td><td>5.0K</td><td>✗</td><td>✓</td><td>✗</td></tr><tr><td>BaiduKnows (Zhang et al., 2019)</td><td>ZH</td><td>Human</td><td>Baidu Knows</td><td>5.0K</td><td>✗</td><td>✓</td><td>✗</td></tr><tr><td>TVsub (Wang et al., 2018a)</td><td>ZH, EN</td><td>Auto</td><td>Movie Subtitles</td><td>2.2M</td><td>✗</td><td>✗</td><td>✓</td></tr><tr><td>ZAC (Pereira, 2009)</td><td>PT</td><td>Human</td><td>Mixed Sources</td><td>0.6K</td><td>✓</td><td>✗</td><td>✗</td></tr><tr><td>Nagoya (Zhan and Nakaiwa, 2015)</td><td>JA</td><td>Auto</td><td>Scientific Paper</td><td>1.2K</td><td>✓</td><td>✗</td><td>✗</td></tr><tr><td>SKku (Park et al., 2015)</td><td>KO</td><td>Human</td><td>Dialogue</td><td>1.1K</td><td>✓</td><td>✗</td><td>✗</td></tr><tr><td>UPENN (Prasad, 2000)</td><td>HI</td><td>Human</td><td>News</td><td>2.2K</td><td>✓</td><td>✗</td><td>✗</td></tr><tr><td>LATL (Russo et al., 2012)</td><td>IT, ES</td><td>Human</td><td>Europarl</td><td>2.0K</td><td>✓</td><td>✗</td><td>✓</td></tr><tr><td>UCFV (Bacolini, 2017)</td><td>HE</td><td>Human</td><td>Dialogue</td><td>0.1K</td><td>✓</td><td>✗</td><td>✗</td></tr></table>

Table 1: A summary of existing datasets regarding ZPs. We classify them according to language (Lang.), annotation type (Anno.) and text domain. We also report the number of sentences (Size). "Reso.", "Reco." and "Trans." indicate whether a dataset can be used for the corresponding ZP task. The symbol ✓ or ✗ means "Yes" or "No".

2. Domain Bias. Most corpora were established in a single domain (e.g. news), which may not contain rich ZP phenomena, because the frequencies and types of ZPs vary across genres (Yang et al., 2015). Future works need more multi-domain datasets to better model behavior and quality for real-life use.

3. Becoming an Independent Research Problem. Early works extracted ZP information from closed annotations (e.g. OntoNotes and Treebanks) (Yang and Xue, 2010; Chung and Gildea, 2010), where it was considered a sub-problem of coreference or syntactic parsing. With further investigation, the MT community paid more attention to it by manually or automatically constructing ZP recovery and translation datasets (e.g. BaiduKnows and TVsub) (Wang et al., 2018a; Zhang et al., 2019).

4. Coping with Data Scarcity. The scarcity of ZPT data remains a core issue (existing datasets range from $0.1\mathrm{K}$ to $2.2\mathrm{M}$ sentences) due to two challenges: (1) it requires experts for both source ZP annotation and target translation (Wang et al., 2016c, 2018a); (2) annotating training data manually costs considerable time and money. Nonetheless, it is still necessary to establish test datasets for validating and analyzing model performance. Besides, pre-trained models are already equipped with some discourse capabilities (Chen et al., 2019; Koto et al., 2021). This highlights the importance of formulating the downstream task in a manner that can effectively leverage the capabilities of the pre-trained models.
# 5 Approaches
|
| 127 |
+
|
| 128 |
+
# 5.1 Overview
|
| 129 |
+
|
| 130 |
+
Early researchers have investigated several approaches for conventional statistical machine translation (SMT) (Le Nagard and Koehn, 2010; Xiang et al., 2013; Wang et al., 2016a). Modeling ZPs for advanced NMT models, however, has received more attention, resulting in better performance in this field (Wang et al., 2018a; Tan et al., 2021; Hwang et al., 2021). Generally prior works fall into three categories: (1) Pipeline, where input sentences are labeled with ZPs using an external ZP recovery system and then fed into a standard MT model (Chung and Gildea, 2010; Wang et al., 2016a); (2) Implicit, where ZP phenomenon is implicitly resolved by modelling document-level contexts (Yu et al., 2020; Ri et al., 2021); (3) End-to-End, where ZP prediction and translation are jointly learned in an end-to-end manner (Wang et al., 2019; Tan et al., 2021).
|
| 131 |
+
|
| 132 |
+
Pipeline The pipeline method of ZPT borrows from that in pronoun translation (Le Nagard and Koehn, 2010; Pradhan et al., 2012) due to the strong relevance between the two tasks. Chung and Gildea (2010) systematically examine the effects of empty category $(\mathrm{EC})^9$ on SMT with pattern-,
|
| 133 |
+
|
| 134 |
+
CRF- and parsing-based methods. The results show that this can really improve the translation quality, even though the automatic prediction of EC is not highly accurate. Besides, Wang et al. (2016a,b, 2017b) proposed to integrate neural-based ZP recovery with SMT systems, showing better performance on both ZP recovery and overall translation. When entering the era of NMT, ZP recovery is also employed as an external system. Assuming that no-pro-drop languages can benefit pro-drop ones, Ohtani et al. (2019) tagged the coreference information in the source language, and then encoded it using a graph-based encoder integrated with NMT model. Tan et al. (2019) recovered ZP in the source sentence via a BiLSTM-CRF model (Lample et al., 2016). Different from the conventional ZP recovery methods, the label is the corresponding translation of ZP around with special tokens. They then trained a NMT model on this modified data, letting the model learn the copy behaviors. Tan et al. (2021) used ZP detector to predict the ZP position and inserted a special token. Second, they used a attention-based ZP recovery model to recover the ZP word on the corresponding ZP position.
End-to-End Due to the lack of training data for ZPT, several studies focus on data augmentation. Sugiyama and Yoshinaga (2019) employed back-translation with a context-aware NMT model to augment the training data: with the help of context, pronouns in the non-pro-drop language can be translated correctly into the pro-drop language. They also built a contrastive dataset to filter the pseudo data. Besides, Kimura et al. (2019) investigated selection criteria in detail for filtering the pseudo data. Ri et al. (2021) deleted personal pronouns from sentences to augment the training data, and trained a classifier to keep only sentences whose pronouns can be recovered without any context.
Regarding model architecture, Wang et al. (2018a) first proposed a reconstruction-based approach that reconstructs the ZP-annotated source sentence from the hidden states of the encoder, the decoder, or both. The central idea is to guide the corresponding hidden states to embed the recalled source-side ZP information and thereby help the NMT model generate the missing pronouns from these enhanced representations. Although this model achieved significant improvements, it has two drawbacks: 1) there is no interaction between the two separate reconstructors, which misses the opportunity to exploit useful relations between encoder and decoder representations; and 2) the testing phase needs an external ZP prediction model, which achieves only $66\%$ F1-score and thus propagates numerous errors to the translation model. Wang et al. (2018b) therefore improved the reconstruction-based model with a shared reconstructor and joint learning. Even so, relying on external ZP models at decoding time makes these approaches unwieldy in practice, since it introduces extra computation cost and complexity.
Regarding the learning objective, contrastive learning is often used to pull the output closer to the gold data while pushing it away from negative samples. Yang et al. (2019b) proposed a contrastive learning approach to reduce word omission errors; to construct negative samples, they randomly dropped words according to their frequency or part-of-speech tag. Hwang et al. (2021) further exploited coreference information to construct negative samples: guided by coreference chains, they replaced the antecedent in the context with an empty, mask, or random token. Besides, Jwalapuram et al. (2020) treated outputs with mistranslated pronouns as negative samples and gold sentences as positive samples; to obtain the negatives, they aligned words between model outputs and gold references to find sentences with mistranslated pronouns.
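
As a concrete sketch, negative samples of the kind described above can be built by corrupting pronouns in the gold target sentence—dropping some and masking others. The pronoun list, the dropping policy, and the mask token below are our own illustrative choices, not those of the cited papers:

```python
import random

# Illustrative English pronoun inventory (not from the cited papers).
PRONOUNS = {"i", "you", "he", "she", "it", "we", "they",
            "me", "him", "her", "us", "them"}

def make_negative(tokens, pos_tags, rng, drop_prob=0.5, mask_token="<mask>"):
    """Corrupt every pronoun in the gold sentence: drop it with
    probability `drop_prob`, otherwise replace it with a mask token.
    The result serves as a negative sample for a contrastive loss."""
    negative = []
    for tok, pos in zip(tokens, pos_tags):
        if pos == "PRON" or tok.lower() in PRONOUNS:
            if rng.random() < drop_prob:
                continue                    # omit the pronoun entirely
            negative.append(mask_token)     # or corrupt it in place
        else:
            negative.append(tok)
    return negative

rng = random.Random(0)
tokens = ["If", "you", "see", "her", ",", "tell", "me", "."]
pos = ["SCONJ", "PRON", "VERB", "PRON", "PUNCT", "VERB", "PRON", "PUNCT"]
neg = make_negative(tokens, pos, rng)
```

A contrastive loss then scores the gold sentence above such corrupted variants, encouraging the model not to omit or confuse pronouns.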
Implicit Some works consider not only the ZPT issue but the overall discourse problem. Document-level NMT models (Wang et al., 2017a; Werlen et al., 2018; Ma et al., 2020; Lopes et al., 2020) are expected to have strong capabilities in discourse modelling, such as translation consistency and ZPT. Another method is round-trip translation, commonly used in automatic post-editing (APE) (Freitag et al., 2019) and quality estimation (QE) (Moon et al., 2020) to correct or detect translation errors. Voita et al. (2019) applied this idea to context-aware NMT to correct discourse errors in the output: they employed round-trip translation on monolingual data to obtain a parallel corpus in the target language, and then used this corpus to train a model that repairs discourse phenomena in MT output. Wang et al. (2019) proposed a fully unified ZPT model, which completely removes the reliance on external ZP models at decoding time. Besides, they jointly learned inter-sentential context (Sordoni et al., 2015) to further improve ZP prediction and translation.

<table><tr><td rowspan="2">Model</td><td colspan="2">TVsub</td><td colspan="2">BaiduKnows</td><td colspan="2">Webnovel</td></tr><tr><td>BLEU</td><td>APT</td><td>BLEU</td><td>APT</td><td>BLEU</td><td>APT</td></tr><tr><td>Baseline (Vaswani et al., 2017)</td><td>29.4</td><td>47.4</td><td>12.7</td><td>25.4</td><td>11.7</td><td>30.9</td></tr><tr><td>Pipeline (Song et al., 2020)</td><td>29.8</td><td>49.5</td><td>13.2</td><td>56.4</td><td>11.6</td><td>32.0</td></tr><tr><td>Implicit (Ma et al., 2020)</td><td>29.8</td><td>53.5</td><td>13.9</td><td>26.3</td><td>12.2</td><td>35.3</td></tr><tr><td>End-to-End (Wang et al., 2018a)</td><td>30.0</td><td>52.3</td><td>12.3</td><td>30.4</td><td>12.0</td><td>33.4</td></tr><tr><td>ORACLE</td><td>32.8</td><td>86.9</td><td>14.7</td><td>88.8</td><td>12.8</td><td>85.1</td></tr></table>

Table 2: A comparison of representative ZPT methods on different benchmarks. The ZPT methods are detailed in Section 5.1. The Baseline is a standard Transformer-big model, while ORACLE manually recovers ZPs in input sentences before feeding them into the Baseline (Wu et al., 2020). As detailed in Section 4.1, TVsub (both translation and ZP training data) and BaiduKnows (ZP training data) are widely-used benchmarks in the movie subtitle and Q&A forum domains, respectively. Webnovel is our in-house testing data (no training data) in the web fiction domain. As detailed in Section 6.1, BLEU is a general-purpose evaluation metric while APT is a ZP-targeted one.
# 5.2 Discussions and Findings
Table 1 shows that only TVsub is suitable for both training and testing in the ZPT task, while others such as LATL are too small and only suitable for testing. To facilitate fair and comprehensive comparisons of different models across benchmarks, we expanded BaiduKnows by adding human translations and included an in-house dataset<sup>10</sup>. As shown in Table 2, we re-implemented three representative ZPT methods and conducted experiments on three benchmarks, which are diverse in terms of domain, size, annotation type, and task. As the training data across the three benchmarks decrease, the difficulty of modelling ZPT gradually increases.
1. Existing Methods Can Help ZPT But Not Enough. All three ZPT models can improve ZP translation in most cases, although there are still considerable differences across benchmark domains (BLEU and APT $\uparrow$). Introducing ZPT methods has little impact on BLEU ($-0.4\sim +0.6$ point on average); however, they improve APT over the baseline by $+1.1\sim +30.1$ points. When integrating gold ZP labels into the baseline model (ORACLE), BLEU and APT increase by up to $+3.4$ and $+63.4$ points, respectively. The performance gap between ORACLE and the other methods shows that there is still substantial room for improvement in ZPT.
2. Pipeline Methods Are Easier to Integrate with NMT. This is currently a simple way to enhance ZPT ability in real-life systems. As shown in Table 3, we analyzed the outputs of the pipeline method and identified challenges from three perspectives: (1) out-of-domain, where in-domain data for training robust ZP recovery models is lacking. The distribution of ZP types differs considerably between the ZP recovery training data (out-of-domain) and the ZPT testset (in-domain), so the ZP recovery model often predicts the wrong ZP form (possessive adjective vs. subject). (2) error propagation, where the external ZP recovery model may provide incorrect ZP words to the downstream NMT model. As seen, ZPR+ performs worse than the plain NMT model due to the wrong pronoun predicted by the ZPR model (你们 vs. 我). (3) multiple ZPs, where about $10\%$ of sentences contain more than two ZPs, making it more challenging to predict all of them accurately and simultaneously. As seen, two ZPs are incorrectly predicted as "我" instead of "他".
3. Data-Level Methods Do Not Change the Model Architecture. This makes them more friendly to NMT. Some researchers targeted better use of the limited training data: they trained an external model on ZP data to recover the ZP information in the input sequence of the MT model (Tan et al., 2019; Ohtani et al., 2019; Tan et al., 2021) or to correct errors in the translation outputs (Voita et al., 2019). Others aimed to up-sample the training data for the ZPT task (Sugiyama and Yoshinaga, 2019; Kimura et al., 2019; Ri et al., 2021), preferring to improve ZPT performance via data augmentation without modifying the MT architecture (Wang et al., 2016a; Sugiyama and Yoshinaga, 2019). Kimura et al. (2019) and Ri et al. (2021) verified that performance can be further improved by denoising the pseudo data.

<table><tr><td rowspan="4">1. Out-of-Domain</td><td>INP.</td><td>[他的]p主要研究领域为...</td></tr><tr><td>NMT</td><td>The main research areas are ...</td></tr><tr><td>ZPR</td><td>我主要研究领域为...</td></tr><tr><td>ZPR+</td><td>My main research areas are ...</td></tr><tr><td rowspan="4">2. Error Propagation</td><td>INP.</td><td>如果[你们]s见到她...</td></tr><tr><td>NMT</td><td>If you see her ...</td></tr><tr><td>ZPR</td><td>如果我见到她...</td></tr><tr><td>ZPR+</td><td>If I see her ...</td></tr><tr><td rowspan="4">3. Multiple ZPs</td><td>INP.</td><td>[他]s好久没... [他]s怪想念的。</td></tr><tr><td>NMT</td><td>for a long time did not ... strange miss.</td></tr><tr><td>ZPR</td><td>我好久没...我怪想念的。</td></tr><tr><td>ZPR+</td><td>I haven't ... for a long time, I miss.</td></tr></table>
4. Multitask and Multilingual Learning. ZPT is hard to solve in isolation, so researchers are investigating how to leverage related NLP tasks to improve ZPT by training models to perform multiple tasks simultaneously (Wang et al., 2018a). Since ZPT is a cross-lingual problem, researchers are also exploring techniques for training models that work across multiple languages, rather than being limited to a single language (Aloraini and Poesio, 2020).
# 6 Evaluation Methods
# 6.1 Overview
There are three kinds of automatic metrics for evaluating the performance of related models:
- Accuracy of ZP Recovery: this measures model performance on detecting and predicting ZPs in sentences of one pro-drop language. For instance, the micro F1-score is used to evaluate Chinese ZPR systems (Song et al., 2020).<sup>11</sup>
- General Translation Quality: there are a number of automatic evaluation metrics for measuring the general performance of MT systems (Snover et al., 2006). BLEU (Papineni et al., 2002) is the most widely used; it measures the n-gram precision of the MT output against the reference, weighted by a brevity penalty that punishes overly short translations. METEOR (Banerjee and Lavie, 2005) incorporates semantic information by allowing exact, stem, or synonym matches. Furthermore, COMET (Rei et al., 2020) is a neural framework for training multilingual MT evaluation models that obtains new SOTA levels of correlation with human judgements.

Table 3: Errors of pipeline-based ZPT and NMT models. INP. represents the Chinese input and NMT indicates a sentence-level NMT model. ZPR denotes the ZP-annotated output predicted by ZP recovery models. Red words are ZPs that are invisible in decoding.

<table><tr><td>Metric</td><td>T.S.</td><td>B.K.</td><td>I.H.</td><td>Ave.</td></tr><tr><td>BLEU</td><td>0.09</td><td>0.76</td><td>0.57</td><td>0.47</td></tr><tr><td>TER</td><td>0.41</td><td>0.01</td><td>0.26</td><td>0.23</td></tr><tr><td>METEOR</td><td>0.23</td><td>0.74</td><td>0.28</td><td>0.42</td></tr><tr><td>COMET</td><td>0.59</td><td>0.15</td><td>0.37</td><td>0.37</td></tr><tr><td>APT</td><td>0.68</td><td>0.76</td><td>0.58</td><td>0.67</td></tr></table>

Table 4: Correlation between manual evaluation and automatic metrics, applied to the same ZPT benchmarks as in Table 2.
- Pronoun-Aware Translation Quality: previous works usually evaluate ZPT with the BLEU metric (Wang et al., 2016a, 2018a; Yu et al., 2020; Ri et al., 2021); however, general-purpose metrics cannot characterize the performance of ZP translation. As shown in Table 3, missed or incorrect pronouns may barely affect BLEU scores but severely harm actual quality. To close this gap, some works proposed pronoun-targeted evaluation metrics (Werlen and Popescu-Belis, 2017; Läubli et al., 2018).
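
To make the contrast concrete, here is a toy pronoun-targeted score in the spirit of APT. The real metric relies on automatic word alignment between hypothesis and reference; this sketch instead assumes tokens at the same position are aligned, and its pronoun list is our own simplification:

```python
# Illustrative pronoun inventory; APT itself uses alignment-based
# matching rather than the positional matching assumed here.
PRONOUNS = {"i", "you", "he", "she", "it", "we", "they",
            "me", "him", "her", "us", "them"}

def pronoun_accuracy(hypotheses, references):
    """Fraction of reference pronouns reproduced at the aligned
    (here: same-index) position of the hypothesis, case-insensitively."""
    correct = total = 0
    for hyp, ref in zip(hypotheses, references):
        hyp_toks = [t.lower() for t in hyp.split()]
        for i, tok in enumerate(t.lower() for t in ref.split()):
            if tok in PRONOUNS:
                total += 1
                if i < len(hyp_toks) and hyp_toks[i] == tok:
                    correct += 1
    return correct / total if total else 0.0

# "I" mistranslates "you" (cf. the error-propagation example in Table 3):
# a single-word change BLEU barely notices, but a pronoun-targeted
# score penalizes it directly.
score = pronoun_accuracy(["If I see her today"], ["If you see her today"])
```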
# 6.2 Discussions and Findings
As shown in Table 4, we compare different evaluation metrics on ZPT systems. For general-purpose metrics, we employed BLEU, TER, METEOR and COMET. For ZP-targeted metrics, we implemented and adapted APT (Werlen and Popescu-Belis, 2017) to evaluate ZPs, and experimented on three Chinese-English benchmarks (same as Section 5.2). For human evaluation, we randomly selected one hundred groups of samples from each dataset; each group contains an oracle source sentence and the hypotheses of the six examined MT systems. We asked expert raters to score all of these samples on a 1-5 scale reflecting the cohesion quality of the translations (detailed in Appendix §A.4). The annotators are bilingual professionals with expertise in both Chinese and English. They have a deep understanding of the ZP problem and have been specifically trained to identify and annotate ZPs accurately. Our main findings are:
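
The correlations in Table 4 are Pearson coefficients between metric scores and human judgements; the computation itself is straightforward. The score lists below are made-up illustrative numbers, not the paper's actual results:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical system-level scores: human cohesion ratings (1-5)
# against an automatic metric for five MT systems.
human = [3.1, 3.8, 2.5, 4.2, 3.0]
metric = [28.0, 31.5, 25.1, 33.9, 27.2]
r = pearson(human, metric)  # near 1.0: the two rankings agree closely
```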
1. General-Purpose Evaluation Metrics Are Not Applicable to ZPT. As seen, APT reaches a Pearson correlation of about 0.67 with human judges, while general-purpose metrics reach only $0.23\sim 0.47$. The high correlation of APT with human judges across the three benchmarks indicates that (1) general-purpose metrics are not designed to measure performance on ZPT; and (2) researchers need to develop more targeted evaluation metrics better suited to this task.
2. Human Evaluation Is Required as a Complement. Even with targeted evaluation, some nuances and complexities remain unrecognized by automatic methods. Thus, we call upon the research community to employ human evaluation following WMT practice (Kocmi et al., 2022), especially in the chat and literary shared tasks (Farinha et al., 2022; Wang et al., 2023c).
3. The Risk of Gender Bias. Gender bias refers to the tendency of MT systems to produce output reflecting societal stereotypes related to gender (Vanmassenhove et al., 2019). We found gender errors in ZPT outputs when models fail to identify the antecedent of a ZP. This can be caused by biases in the training data as well as limitations of the models and evaluation metrics. Researchers therefore need to pay more attention to mitigating these biases, e.g. with diverse datasets and debiasing techniques, to improve the accuracy and fairness of ZPT methods.
# 7 Conclusion and Future Work
ZPT is a challenging and interesting task that requires models to have strong abilities in discourse-aware understanding and generation. Figure 3 illustrates the increase in scientific publications related to ZP over the past few years. This paper is a literature review of existing research on zero pronoun translation, providing insights into the challenges and opportunities of this area and proposing potential directions for future research.
As we look to the future, we intend to delve deeper into the challenges of ZPT. Our plan is to leverage large language models, which have shown great potential in dealing with complex tasks, to tackle this particular challenge (Lu et al., 2023; Wang et al., 2023b; Lyu et al., 2023). Moreover, we plan to evaluate our approach on more discourse-aware tasks. Specifically, we aim to utilize the GuoFeng Benchmark (Wang et al., 2022, 2023a), which presents a comprehensive testing ground for evaluating the performance of models on a variety of discourse-level translation tasks. By doing so, we hope to gain more insights into the strengths and weaknesses of our approach, and continually refine it to achieve better performance.

Figure 3: Number of papers mentioning "zero pronoun" per year, according to Google Scholar.
# Acknowledgement
The authors express their sincere gratitude to all reviewers whose keen interest and insightful feedback have significantly improved the quality of this paper. Their affirmation and encouragement have further solidified our commitment to the path of computational linguistics. This work is part of the GuoFeng AI (guofeng-ai@googlegroups.com) and TranSmart (Huang et al., 2021) projects.
# Limitations
We list the main limitations of this work as follows:
1. Zero Pronoun in Different Languages: The zero pronoun phenomenon may vary across languages in terms of word form, occurrence frequency, category distribution, etc. Due to space limitations, examples are mainly discussed in Chinese and/or English. However, most results and findings can be applied to other pro-drop languages, as further supported by other works (Ri et al., 2021; Aloraini and Poesio, 2020; Vincent et al., 2022). In Appendix §A.1, we add details on the phenomenon in various pro-drop languages such as Arabic, Swahili, Portuguese, Hindi, and Japanese.
2. More Details on Datasets and Methods: Space does not permit giving more details on datasets and models. We will release all mentioned datasets, code, and models in a GitHub repository, which will improve the reproducibility of this research direction.
# Ethics Statement
We take ethical considerations very seriously and strictly adhere to the ACL Ethics Policy. In this paper, we present a survey of the major works on datasets, approaches, and evaluation metrics in ZPT. Resources and methods used in this paper are publicly available and have been widely adopted by researchers in machine translation. We ensure that the findings and conclusions of this paper are reported accurately and objectively.
# References
Abdulrahman Aloraini and Massimo Poesio. 2020. Cross-lingual zero pronoun resolution. In LREC.
Ilaria Bacolini. 2017. Exploring the partial pro-drop property in modern Hebrew. Università Ca'Foscari Venezia.
Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In ACL.
Elizabeth Baran, Yaqin Yang, and Nianwen Xue. 2012. Annotating dropped pronouns in chinese newswire text. In LREC.
Chen Chen and Vincent Ng. 2013. Chinese zero pronoun resolution: Some recent advances. In EMNLP.
Chen Chen and Vincent Ng. 2015. Chinese zero pronoun resolution: A joint unsupervised discourse-aware model rivaling state-of-the-art solvers. In ACL-IJCNLP.
Chen Chen and Vincent Ng. 2016. Chinese zero pronoun resolution with deep neural networks. In ACL.
Mingda Chen, Zewei Chu, and Kevin Gimpel. 2019. Evaluation benchmarks and learning criteria for discourse-aware sentence representations. In EMNLP-IJCNLP.
Tagyoung Chung and Daniel Gildea. 2010. Effects of empty categories on machine translation. In EMNLP.
Ana C Farinha, M Amin Farajian, Marianna Buchicchio, Patrick Fernandes, Jose GC De Souza, Helena Moniz, and André FT Martins. 2022. Findings of the wmt 2022 shared task on chat translation. In Proceedings of the 7th Conference on Machine Translation.
Markus Freitag, Isaac Caswell, and Scott Roy. 2019. Ape at scale and its implications on mt evaluation biases. In Proceedings of the 4th Conference on Machine Translation.
Michael Alexander Kirkwood Halliday and Ruqaiya Hasan. 1976. Cohesion in english. Longman.
Guoping Huang, Lemao Liu, Xing Wang, Longyue Wang, Huayang Li, Zhaopeng Tu, Chengyan Huang, and Shuming Shi. 2021. Transmart: A practical interactive machine translation system. arXiv preprint arXiv:2105.13072.
Yongkeun Hwang, Hyeongu Yun, and Kyomin Jung. 2021. Contrastive learning for context-aware neural machine translation using coreference information. In Proceedings of the 6th Conference on Machine Translation.
Prathyusha Jwalapuram, Shafiq Joty, and Youlin Shen. 2020. Pronoun-targeted fine-tuning for nmt with hybrid losses. In EMNLP.
Ryuichiro Kimura, Shohei Iida, Hongyi Cui, Po-Hsuan Hung, Takehito Utsuro, and Masaaki Nagata. 2019. Selecting informative context sentence by forced back-translation. In Proceedings of Machine Translation Summit XVII.
Tom Kocmi, Rachel Bawden, Ondrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, et al. 2022. Findings of the 2022 conference on machine translation (wmt22). In Proceedings of the 7th Conference on Machine Translation.
Fang Kong and Guodong Zhou. 2010. A tree kernel-based unified framework for chinese zero anaphora resolution. In EMNLP.
Fajri Koto, Joy Han Lau, and Timothy Baldwin. 2021. Discourse probing of pretrained language models. In NAACL.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL.
Samuel Läubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? a case for document-level evaluation. In EMNLP.
Ronan Le Nagard and Philipp Koehn. 2010. Aiding pronoun translation with co-reference resolution. In Proceedings of the Joint 5th Workshop on Statistical Machine Translation and MetricsMATR.
Charles Li and Sandra Thompson. 1979. Third-person pronouns and zero-anaphora in chinese discourse. Syntax and Semantics, 12:311-335.
António V Lopes, M Amin Farajian, Rachel Bawden, Michael Zhang, and André FT Martins. 2020. Document-level neural mt: A systematic comparison. In EAMT.
Qingyu Lu, Baopu Qiu, Liang Ding, Liping Xie, and Dacheng Tao. 2023. Error analysis prompting enables human-like translation evaluation in large language models: A case study on chatgpt. arXiv preprint arXiv:2303.13809.
Chenyang Lyu, Jitao Xu, and Longyue Wang. 2023. New trends in machine translation using large language models: Case examples with chatgpt. arXiv preprint arXiv:2305.01181.
Shuming Ma, Dongdong Zhang, and Ming Zhou. 2020. A simple and effective unified encoder for document-level machine translation. In ACL.
Ruslan Mitkov. 2014. Anaphora resolution. Routledge.
Jihyung Moon, Hyunchang Cho, and Eunjeong L Park. 2020. Revisiting round-trip translation for quality estimation. In EACL.
Takumi Ohtani, Hidetaka Kamigaito, Masaaki Nagata, and Manabu Okumura. 2019. Context-aware neural machine translation with coreference information. In DiscoMT.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In ACL.
Arum Park, Seunghee Lim, and Munpyo Hong. 2015. Zero object resolution in korean. In Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation.
Jesús Peral and Antonio Ferrández. 2003. Translation of pronominal anaphora between english and spanish: Discrepancies and evaluation. In JAIR.
Simone Pereira. 2009. ZAC.PB: An annotated corpus for zero anaphora resolution in portuguese. In Proceedings of the Student Research Workshop.
Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In CoNLL-WS.
Rashmi Prasad. 2000. A corpus study of zero pronouns in Hindi: An account based on centering transition preferences. In DAARC.
Sudha Rao, Allyson Ettinger, Hal Daumé III, and Philip Resnik. 2015. Dialogue focus tracking for zero pronoun resolution. In NAACL.
Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for mt evaluation. In EMNLP.
Ryokan Ri, Toshiaki Nakazawa, and Yoshimasa Tsuruoka. 2021. Zero-pronoun data augmentation for japanese-to-english translation. In WAT.
Lorenza Russo, Sharid Loáiciga, and Asheesh Gulati. 2012. Italian and spanish null subjects: a case study evaluation in an mt perspective. In LREC.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In AMTA.
Linfeng Song, Kun Xu, Yue Zhang, Jianshu Chen, and Dong Yu. 2020. Zpr2: Joint zero pronoun recovery and resolution using multi-task learning and bert. In ACL.
Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. 2015. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In CIKM.
Hui Su, Xiaoyu Shen, Rongzhi Zhang, Fei Sun, Pengwei Hu, Cheng Niu, and Jie Zhou. 2019. Improving multi-turn dialogue modelling with utterance rewrite. In ACL.
Amane Sugiyama and Naoki Yoshinaga. 2019. Data augmentation using back-translation for context-aware neural machine translation. In DiscoMT.
Hirotoshi Taira, Katsuhito Sudoh, and Masaaki Nagata. 2012. Zero pronoun resolution can improve the quality of J-E translation. In Proceedings of the 6th Workshop on Syntax, Semantics and Structure in Statistical Translation.
Xin Tan, Shaohui Kuang, and Deyi Xiong. 2019. Detecting and translating dropped pronouns in neural machine translation. In NLPCC.
Xin Tan, Longyin Zhang, and Guodong Zhou. 2021. Coupling context modeling with zero pronoun recovering for document-level natural language generation. In EMNLP.
Eva Vanmassenhove, Dimitar Shterionov, and Andy Way. 2019. Lost in translation: Loss and decay of linguistic richness in machine translation. In Proceedings of Machine Translation Summit XVII.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.
Sebastian T Vincent, Loic Barrault, and Carolina Scarton. 2022. Controlling extra-textual attributes about dialogue participants: A case study of English-to-Polish neural machine translation. In EAMT.
Elena Voita, Rico Sennrich, and Ivan Titov. 2019. Context-aware monolingual repair for neural machine translation. In EMNLP.
Longyue Wang. 2019. *Discourse-aware neural machine translation*. Ph.D. thesis, Dublin City University, Dublin, Ireland.
Longyue Wang, Zefeng Du, DongHuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Shuming Shi, and Zhaopeng Tu. 2023a. GuoFeng: A discourse-aware evaluation benchmark for language understanding, translation and generation.
Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, and Zhaopeng Tu. 2023b. Document-level machine translation with large language models. arXiv preprint arXiv:2304.02210.
Longyue Wang, Zhaopeng Tu, Chenyang Lyu, Zefeng Du, Dian Yu, Liting Zhou, Siyou Liu, Yan Gu, et al. 2023c. Findings of the wmt 2023 shared task on discourse-level literary translation. In Proceedings of the 8th Conference on Machine Translation.
Longyue Wang, Zhaopeng Tu, Shuming Shi, Tong Zhang, Yvette Graham, and Qun Liu. 2018a. Translating pro-drop languages with reconstruction models. In AAAI.
Longyue Wang, Zhaopeng Tu, Xing Wang, and Shuming Shi. 2019. One model to learn both: Zero pronoun prediction and translation. In EMNLP-IJCNLP.
Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017a. Exploiting cross-sentence context for neural machine translation. In EMNLP.
Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2018b. Learning to jointly translate and predict dropped pronouns with a shared reconstruction mechanism. In EMNLP.
Longyue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, and Qun Liu. 2016a. A novel approach for dropped pronoun translation. In NAACL.
Longyue Wang, Zhaopeng Tu, Xiaojun Zhang, Siyou Liu, Hang Li, Andy Way, and Qun Liu. 2017b. A novel and robust approach for pro-drop language translation. Machine Translation, 31(1-2):65-87.
Longyue Wang, Mingzhou Xu, Derek F. Wong, Hongye Liu, Linfeng Song, Lidia S. Chao, Shuming Shi, and Zhaopeng Tu. 2022. GuoFeng: A benchmark for zero pronoun recovery and translation. In EMNLP.
Longyue Wang, Xiaojun Zhang, Zhaopeng Tu, Hang Li, and Qun Liu. 2016b. Dropped pronoun generation for dialogue machine translation. In ICASSP.
Longyue Wang, Xiaojun Zhang, Zhaopeng Tu, Qun Liu, and Andy Way. 2016c. Automatic construction of discourse corpora for dialogue translation. In LREC.
Lesly Miculicich Werlen and Andrei Popescu-Belis. 2017. Validation of an automatic metric for the accuracy of pronoun translation (apt). In DiscoMT.
Lesly Miculicich Werlen, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In EMNLP.
Shuangzhi Wu, Xing Wang, Longyue Wang, Fangxu Liu, Jun Xie, Zhaopeng Tu, Shuming Shi, and Mu Li. 2020. Tencent neural machine translation systems for the wmt20 news translation task. In Proceedings of the 5th Conference on Machine Translation.
Bing Xiang, Xiaoqiang Luo, and Bowen Zhou. 2013. Enlisting the ghost: Modeling empty categories for machine translation. In ACL.
Jingxuan Yang, Jianzhuo Tong, Si Li, Sheng Gao, Jun Guo, and Nianwen Xue. 2019a. Recovering dropped pronouns in chinese conversations via modeling their referents. In NAACL.
Yaqin Yang, Yalin Liu, and Nianwen Xue. 2015. Recovering dropped pronouns from chinese text messages. In ACL-IJCNLP.
Yaqin Yang and Nianwen Xue. 2010. Chasing the ghost: recovering empty categories in the chinese treebank. In COLING.
Zonghan Yang, Yong Cheng, Yang Liu, and Maosong Sun. 2019b. Reducing word omission errors in neural machine translation: A contrastive learning approach. In ACL.
Qingyu Yin, Yu Zhang, Weinan Zhang, Ting Liu, and William Yang Wang. 2018. Zero pronoun resolution with attention-based neural network. In COLING.
Lei Yu, Laurent Sartran, Wojciech Stokowiec, Wang Ling, Lingpeng Kong, Phil Blunsom, and Chris Dyer. 2020. Better document-level machine translation with bayes' rule. In TACL.
Dong Zhan and Hiromi Nakaiwa. 2015. Automatic detection of antecedents of japanese zero pronouns using a japanese-english bilingual corpus. In Proceedings of Machine Translation Summit XV.
Weinan Zhang, Ting Liu, Qingyu Yin, and Yu Zhang. 2019. Neural recovery machine for Chinese dropped pronoun. In Frontiers of Computer Science.
|
| 311 |
+
Shanheng Zhao and Hwee Tou Ng. 2007. Identification and resolution of chinese zero pronouns: A machine learning approach. In EMNLP-CoNLL.
|
| 312 |
+
|
| 313 |
+
# A Appendix
# A.1 Zero Pronoun in Different Languages
The conditions under which a pronoun may be dropped vary from language to language and can be quite intricate. Previous work defines these typological patterns as pro-drop, which can be subcategorized into three categories (as shown in Figure 1):
- Topic pro-drop languages allow referential pronouns to be omitted, or to be phonologically null. Such dropped pronouns can be inferred from previous discourse, from the context of the conversation, or from generally shared knowledge.
- Partial pro-drop languages allow the deletion of the subject pronoun. The missing pronoun is not inferred strictly from pragmatics but is partially indicated by the morphology of the verb.
- Full pro-drop languages have rich subject-agreement morphology, and subjects are freely dropped under the appropriate discourse conditions.
# A.2 Analysis of Zero Pronoun
As shown in Table 5, $26\%$ of Chinese pronouns are dropped in the dialogue domain, while only $7\%$ are dropped in the newswire domain. ZPs are not as common in formal text genres (e.g., newswire) as in informal genres (e.g., dialogue), and the most frequently dropped pronoun in Chinese newswire is the third-person singular 它 ("it") (Baran et al., 2012), which may not be crucial to translation performance.
| Genres | Sent. | ZH Pro. | EN Pro. | ZPs |
| --- | --- | --- | --- | --- |
| Dialogue | 2.15M | 1.66M | 2.26M | 26.55% |
| News | 3.29M | 2.27M | 2.45M | 7.35% |
Table 5: Extent of pronoun dropping in different genres. The Dialogue corpus consists of subtitles from OpenSubtitles2018, and the News corpus is the CWMT2013 news data.
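The ZP percentages in Table 5 are consistent with treating a zero pronoun as an English pronoun with no overt Chinese counterpart. A minimal sketch, assuming that reading (the function name is ours, not the paper's):

```python
# Hypothetical reconstruction of Table 5's ZP rate: the share of English
# pronouns whose Chinese counterpart is dropped, i.e.
# (EN pronouns - ZH pronouns) / EN pronouns.
def zp_rate(zh_pronouns: float, en_pronouns: float) -> float:
    """Fraction of English pronouns that are zero (dropped) on the Chinese side."""
    return (en_pronouns - zh_pronouns) / en_pronouns

# Pronoun counts in millions, taken from Table 5.
print(f"Dialogue: {zp_rate(1.66, 2.26):.2%}")  # 26.55%, matching the table
print(f"News:     {zp_rate(2.27, 2.45):.2%}")  # 7.35%, matching the table
```

That both rows reproduce the reported percentages suggests this is how the statistic was computed, though the paper does not state the formula explicitly.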
# A.3 The Linguistic Concept

Anaphora is the use of an expression whose interpretation depends upon an antecedent expression; the referring term is called an anaphor. Sometimes an anaphor relies instead on a postcedent expression, a phenomenon called cataphora. Zero anaphora (pronoun dropping) is a more complex case of anaphora. In pro-drop languages such as Chinese and Japanese, pronouns can be omitted to make a sentence compact yet comprehensible when their identity can be inferred from the context. Such omissions rarely pose a problem for human readers, who can easily recover the missing pronouns from the context.

# A.4 Human Evaluation Guideline

We carefully design an evaluation protocol based on the error types made by the various NMT systems, which can be grouped into five categories: 1) the translation fails to preserve the original semantics because the anaphora of the ZPs is misunderstood, and the structure of the translation is also inappropriate or grammatically incorrect due to incorrect or missing ZPs; 2) the sentence structure is correct, but the translation fails to preserve the original semantics because the anaphora of the ZPs is misunderstood; 3) the translation preserves the original semantics, but its structure is inappropriate or grammatically incorrect due to missing ZPs; 4) a source ZP is incorrectly translated or left untranslated, but the translation still reflects the meaning of the source; 5) the translation preserves the meaning of the source and all ZPs are translated. Finally, we average the scores of each target sentence containing ZPs to obtain the final human-evaluation score. For the human evaluation, we randomly select one hundred groups of samples from each domain; each group contains an oracle source sentence and the hypotheses from the six examined MT systems. Following this protocol, we asked expert raters to score each sample from 1 to 5 to reflect the quality of its ZP translation. For inter-annotator agreement, we simply define a score greater than 3 as a good translation and a score less than 3 as a bad one. The annotators agreed on $91\%$ (2,750 out of 3,000) of the samples. In total, the manual labeling took five professional annotators one month and cost US $5,000.
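The agreement rule described in Appendix A.4 can be sketched as follows. This is an illustrative reading, not the paper's code; the function names are ours, and the paper leaves the treatment of a score of exactly 3 unspecified:

```python
# Binarize a 1-5 ZP-translation score per the rule in A.4: > 3 is "good",
# < 3 is "bad"; exactly 3 is unspecified in the paper, so we bin it separately.
def binarize(score: int) -> str:
    if score > 3:
        return "good"
    if score < 3:
        return "bad"
    return "neutral"  # assumption: the paper does not define this case

def agreement_rate(samples: list[list[int]]) -> float:
    """Fraction of samples on which all annotators' binarized labels coincide."""
    agreed = sum(1 for scores in samples
                 if len({binarize(s) for s in scores}) == 1)
    return agreed / len(samples)

# Toy example: three samples, two annotators each; the first two agree.
print(agreement_rate([[4, 5], [1, 2], [4, 2]]))
```

Under this reading, the reported $91\%$ corresponds to 2,750 of the 3,000 scored samples falling in the same bin for all annotators.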
A For every submission:

A1. Did you describe the limitations of your work? Section Limitations.

A2. Did you discuss any potential risks of your work? Section Ethics Statement.

A3. Do the abstract and introduction summarize the paper's main claims? Abstract and Section 1.

A4. Have you used AI writing assistants when working on this paper? Left blank.
B Did you use or create scientific artifacts? Left blank.

B1. Did you cite the creators of artifacts you used? Not applicable. Left blank.

B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Not applicable. Left blank.

B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Not applicable. Left blank.

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? Not applicable. Left blank.

B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? Not applicable. Left blank.

B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Not applicable. Left blank.
C Did you run computational experiments? Section 5.2 and Section 6.2.

C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? This is a survey, and all details are the same as in the cited works.

C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? This is a survey, and all details are the same as in the cited works.

C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 5.2.

C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Not applicable. Left blank.
D Did you use human annotators (e.g., crowdworkers) or research with human participants? Section 6.2.

D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Appendix A.4.

D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Appendix A.4.

D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Appendix A.4.

D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank.

D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Not applicable. Left blank.
2023/A Survey on Zero Pronoun Translation/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1e43a3bf31bb9caeebc162e0391b09d47b849b8ef90cfcecb56914b10ed38d6a
size 356713

2023/A Survey on Zero Pronoun Translation/layout.json
ADDED
The diff for this file is too large to render. See raw diff

2023/A Synthetic Data Generation Framework for Grounded Dialogues/ff57bbb3-298c-4198-bfad-c2e95d1773a0_content_list.json
ADDED
The diff for this file is too large to render. See raw diff